
WO2025050578A1 - 6G Sensing Framework - Google Patents

6G Sensing Framework

Info

Publication number
WO2025050578A1
WO2025050578A1 (PCT/CN2023/143256)
Authority
WO
WIPO (PCT)
Prior art keywords
sensing
data
result
output
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2023/143256
Other languages
French (fr)
Inventor
Hao Tang
Jianglei Ma
Peiying Zhu
Wen Tong
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00: Supervisory, monitoring or testing arrangements
    • H04W 24/10: Scheduling measurement reports; arrangements for measurement reports
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00: Services specially adapted for wireless communication networks; facilities therefor
    • H04W 4/30: Services specially adapted for particular environments, situations or purposes
    • H04W 4/38: Services specially adapted for particular environments, situations or purposes for collecting sensor information

Definitions

  • the present application relates generally to communications, and in particular to sensing.
  • Sensing is a process of obtaining surrounding information, and it can be broadly classified as:
  • RF sensing: an RF signal is sent, and the surrounding information is obtained by receiving and processing either this RF signal or its echoed (reflected) version.
  • Non-RF sensing: surrounding information is obtained via non-RF means, such as a video camera or other sensors.
  • In this disclosure, sensing refers to RF sensing unless otherwise specified.
  • RF refers to radio frequency.
  • Sensing can be used to detect the information of an object, such as location, speed, distance, orientation, shape, texture, and so on.
  • Sensing can be classified as:
  • Active sensing (also called device-based sensing): a sensor sends an RF signal to the sensed object, which is capable of detecting the RF signal and either obtaining sensed information from the RF signal or measuring some intermediate information, which is fed back to the sensor to assist the sensor in obtaining the sensed information.
  • Passive sensing (also called device-free sensing): a sensor sends an RF signal to the sensed object and detects the reflected echo of the RF signal, and obtains the sensed information from the echo.
  • the object may or may not contain certain ID information (an RF tag), for example an Ambient IoT device.
  • ID refers to identifier.
  • IoT refers to Internet of things.
  • Bi-static sensing: the transmitter and receiver are different devices, for example a BS sends the sensing signals and a UE receives the sensing signals.
  • Multi-static sensing: can be decomposed into a set of N bi-static Tx-Rx pairs, where N > 1; for example, a BS sends the sensing signals and two UEs (UE1, UE2) receive them, giving pair-1: BS and UE1, and pair-2: BS and UE2.
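As an illustration of the decomposition just described, the following sketch (the helper function and node names are invented for this example, not taken from the disclosure) expands a multi-static configuration into its constituent bi-static Tx-Rx pairs:

```python
def decompose_multistatic(transmitter, receivers):
    """Decompose a multi-static sensing setup into N bi-static Tx-Rx pairs.

    A multi-static configuration with one transmitter and N receivers is
    equivalent to N bi-static pairs that share the same transmitter.
    """
    return [(transmitter, rx) for rx in receivers]

# The example from the text: a BS sends the sensing signals, two UEs receive.
pairs = decompose_multistatic("BS", ["UE1", "UE2"])
# pair-1: (BS, UE1), pair-2: (BS, UE2)
```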
  • a radar system sends an RF signal to localize, detect and track a target. It is a passive sensing system.
  • a radar system is a standalone system and designed for a specific application.
  • ISAC refers to Integrated Sensing And Communication.
  • Regarding the sensing functional framework: there is no sensing functional framework in the 5G network.
  • For 6G ISAC, the sensing functional framework and corresponding air interface procedure remain to be designed.
  • the present disclosure includes embodiments that define a sensing functional framework and its procedure.
  • a sensing system includes a data collector to provide: first input data related to sensing in a communication system; and second input data related to sensing management; and a sensing result generator, coupled to the data collector, to receive the first input data from the data collector and to generate or obtain, based on the first input data, a sensing result.
  • Another aspect of the present disclosure relates to a sensing method that involves providing, by a data collection function in a communication system: first input data related to sensing in the communication system; and second input data related to sensing management; and generating, by a sensing result generation function in the communication system, a sensing result based on the first input data.
  • a sensing system includes one or more processors configured to perform a method as disclosed herein.
  • the one or more processors may be or include processors in different devices.
  • Another example system may include one or more processors coupled with one or more non-transitory computer readable storage media that store programming for execution by the one or more processors.
  • the programming includes instructions to perform a method as disclosed herein.
  • a storage medium need not necessarily or only be implemented in or in conjunction with a system or an apparatus.
  • a computer program product may be or include a non-transitory computer readable medium storing programming for execution by a processor.
  • Programming stored by a computer readable storage medium may include instructions to, or to cause a processor to, perform, implement, support, or enable any of the methods disclosed herein.
  • a non-transitory computer readable medium may store programming for execution by a processor (or by more than one processor) , and the programming includes instructions to: provide, by a data collection function in a communication system: first input data related to sensing in the communication system; and second input data related to sensing management; and generate, by a sensing result generation function in the communication system, a sensing result based on the first input data.
  • one or more integrated circuits, which may also be referred to as a chip or a chipset, may implement features disclosed herein. These may also be considered examples of system elements or apparatus as disclosed herein.
  • Fig. 1 illustrates an example of a sensing functional framework and procedures.
  • Fig. 2 illustrates another example of a sensing functional framework and procedures.
  • Fig. 3 illustrates a further example of a sensing functional framework and procedures.
  • Fig. 4 illustrates yet another example of a sensing functional framework and procedures.
  • Fig. 5 is a simplified schematic illustration of a communication system.
  • Fig. 6 is a block diagram illustration of the example communication system in Fig. 5.
  • Fig. 7 illustrates an example electronic device and examples of base stations.
  • Fig. 8 illustrates units or modules in a device.
  • Fig. 9 is a block diagram illustration of another example communication system.
  • Fig. 10 is a block diagram illustration of an example sensing management function (SMF) .
  • Fig. 11 illustrates another example of a sensing functional framework and procedures.
  • Fig. 12 is a block diagram illustrating a sensing system according to an embodiment.
  • Fig. 13 is a flow diagram illustrating a method according to an embodiment.
  • RF sensing or wireless sensing may refer to obtaining information about characteristics of an environment and/or objects within the environment (such as any one or more of the following: shape, size, orientation, speed, location, distances or relative motion between objects, and so on) using RF signals;
  • 3GPP (3rd generation partnership project) sensing data may refer to data derived from 3GPP radio signals impacted (reflected, refracted, diffracted, for example) by an object or environment of interest for sensing purposes, and optionally processed within a 5G system;
  • Non-3GPP sensing data may refer to data provided by non-3GPP sensors (video, LiDAR, sonar, for example) about an object or environment of interest for sensing purposes;
  • a sensing result may refer to processed 3GPP sensing data requested by a service consumer;
  • a sensing transmitter may refer to an entity that sends out a sensing signal that may be used by a sensing service in its operation.
  • a sensing transmitter may be an NR radio access network (RAN) node or a UE.
  • RAN radio access network
  • a sensing transmitter can be located in the same entity as, or in a different entity from, the sensing receiver.
  • ISAC is used herein for ease of reference in respect of reuse of a communication signal for sensing.
  • Other names or terms may be used for this feature, and/or others, disclosed herein.
  • Fig. 1 illustrates an example of a sensing functional framework and procedures.
  • the sensing functional framework and the flowchart shown in Fig. 1 include five parts. These parts are discussed below.
  • Sensing Data Collection is a function that provides input data to the Sensing Modelling, Sensing Management, and Sensing Application functions.
  • Examples of input data may include measurements from UEs or different network entities, where a measurement may be an RF sensing measurement or a non-RF sensing measurement (LIDAR (Light Detection and Ranging), camera, video, sensor, and so on).
  • Sensing Data Collection is used for ease of reference, but other names or terminology may be used.
  • Sensing Data Collection can also be referred to as data collection, 3GPP sensing data collection, 3GPP and non-3GPP sensing data collection, data measurement, or sensing measurement.
  • the name “data collector” is also used herein as a general term for a Sensing Data Collection element in a sensing system.
  • Three types of data are shown by way of example in Fig. 1 for illustrative embodiment 1, and are described below:
  • Training data: data needed as input for the Sensing Modelling function, such as data for sensing analysis, including assistance information.
  • Monitoring data: data needed as input for the Sensing Management function.
  • Action data: data needed as input for the Sensing Application function.
  • Training data can also be referred to by other names, such as sensing modeling data (in some embodiments that involve sensing modeling), data for deriving (or data related to, or associated with) sensing results, or data related to sensing.
  • first input data is also used herein as a general term for such data.
  • Monitoring data can be referred to by other names as well, such as data related to (or data associated with) sensing management.
  • Sensing management may include, for example, sensing performance monitoring and/or sensing update control.
  • second input data is also used herein as a general term for such data.
  • Action data can also be referred to by other names, such as data for (or data related to, or data associated with) the sensing application.
  • third input data is also used herein as a general term for such data.
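The three input-data types above can be illustrated with a small routing sketch. The function name, the 'purpose' field, and the routing keys are assumptions made for this example only; they are not signalling defined by the disclosure:

```python
def route_collected_data(measurements):
    """Split collected measurements into the three input-data types.

    Each measurement is a dict with an illustrative 'purpose' field that
    says which consumer function the measurement is destined for.
    """
    routed = {"training_data": [], "monitoring_data": [], "action_data": []}
    for m in measurements:
        if m["purpose"] == "modelling":      # first input data, for Sensing Modelling
            routed["training_data"].append(m)
        elif m["purpose"] == "management":   # second input data, for Sensing Management
            routed["monitoring_data"].append(m)
        elif m["purpose"] == "application":  # third input data, for Sensing Application
            routed["action_data"].append(m)
    return routed
```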
  • Sensing Modelling is a function that reconstructs the physical world (that is, gets a model for the physical world), including environment reconstruction, channel reconstruction (by a ray tracing scheme, for example), target reconstruction, digital twin, and so on. Other features or functions that may be supported or provided in the context of physical world reconstruction include target detection and target tracking.
  • the Sensing Modelling function should be able to request specific information to be used to train the sensing model and to avoid reception of unnecessary information.
  • the Sensing Modelling function may train a sensing model in some embodiments and training is one way to obtain a sensing model, but more generally a model may be trained or otherwise obtained.
  • Trained/Updated Model in Fig. 1: used to deliver a trained sensing model, or an updated sensing model, to the Sensing Results Storage function.
  • Sensing Modeling can also be referred to as Sensing Results Processing, Sensing Information Processing, Sensing Data Processing, Sensing Measurement Processing, Environment Information Processing, Object Information Processing, Environment and Object Information Processing, for example.
  • Sensing Modelling can also be called sensing functionalities, sensing tasks, or sensing use cases. It is a function to process data from the Sensing Data Collection function to get information about characteristics of the environment and/or objects within the environment.
  • the name “sensing result generator” is also used herein as a general term for a Sensing Modelling element in a sensing system.
  • Sensing tasks or Sensing use cases may include, for example: environment reconstruction, channel prediction, intruder detection, pedestrian/animal intrusion detection, rainfall monitoring, Transparent Sensing, sensing for flooding, intruder detection in surroundings of smart home, sensing for railway intrusion detection, Sensing Assisted Automotive Maneuvering and Navigation, automated guided vehicle (AGV) detection and tracking in factories, unmanned aerial vehicle (UAV) flight trajectory tracing, sensing at crossroads with/without obstacle, Network assisted sensing to avoid UAV collision, sensing for UAV intrusion detection, sensing for tourist spot traffic management, contactless sleep monitoring service, Protection of Sensing Information, health monitoring, service continuity of unobtrusive health monitoring, use case on Sensor Groups, Sensing for Parking Space Determination, Seamless extended reality (XR) streaming, UAVs/vehicles/pedestrians detection near Smart Grid equipment, Autonomous mobile robots collision avoidance in smart factories, roaming for sensing service of sports monitoring, on immersive experience based on sensing, accurate sensing
  • Sensing Modelling is not in any way limited to providing or generating a model; it can also output sensing results of other types.
  • Sensing Modelling can be enabled by AI, for example using AI to derive the sensing results.
  • An output of Sensing Modelling is provided to Sensing Results Storage in the example shown in Fig. 1, but full or partial sensing results can also or instead be delivered to a core network or a 3rd party, so as to provide a sensing service by a RAN, for example.
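The delivery of full or partial sensing results to storage, a core network, or a 3rd party might be sketched as follows. The function name, destination names, and the key-based notion of "partial results" are illustrative assumptions, not interfaces defined by the disclosure:

```python
def deliver_sensing_results(results, destinations, partial_keys=None):
    """Deliver full or partial sensing results to one or more destinations.

    `destinations` maps a destination name (e.g. 'storage', 'core_network',
    'third_party') to a callable that accepts the payload. Selecting a
    subset of result fields via `partial_keys` models partial delivery.
    """
    payload = results if partial_keys is None else {k: results[k] for k in partial_keys}
    for send in destinations.values():
        send(payload)
    return payload
```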
  • Sensing Management is a function that is responsible for performing sensing control over the Sensing Modelling and Sensing Application functions. Sensing Management monitors the sensing output; if the sensing results are no longer applicable, it requests the Sensing Modelling function to re-train the model, and it indicates to the Sensing Application function to switch the model.
  • Sensing Management can also be referred to as Sensing Control, Sensing Results Management, or Management.
  • the name “sensing manager” is also used herein as a general term for a Sensing Management element in a sensing system.
  • the Sensing Management function receives the monitoring data, for example ground truth data, from the Sensing Data Collection function; after comparing the sensing output with the ground truth, the sensing performance can be evaluated.
  • the Sensing Management function receives the output of the Sensing Application function, and the output includes the performance of the sensing application.
  • the Sensing Management function may observe that the sensing performance of the current sensing model is not good enough. For example, for channel reconstruction, a sensing model may be generated according to a static environment map; when there are many moving targets in the environment, causing too much signal reflection, that channel reconstruction model becomes inapplicable. In this case, the Sensing Management function sends the current sensing performance to the Sensing Modelling function, including the current sensing output and its accuracy, resolution, and so on. In addition, the Sensing Management function requests the Sensing Modelling function to retrain the model, in order to get an updated sensing model.
  • When the Sensing Management function observes that the sensing performance of the current sensing model is not good enough, it can send model switching signalling to the Sensing Application function to switch to another sensing model, or send fallback signalling to indicate that the Sensing Application function is to use a non-sensing mode.
  • the Sensing Management function can indicate to the Sensing Application function which sensing model the Sensing Application is to use, and activate or de-activate one or multiple of the candidate sensing models.
  • the Sensing Management function sends a sensing model transfer request to the Sensing Results Storage function to request a model for the Sensing Application function; the request can be an initial model transfer request or an updated model transfer request.
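The management behaviour described above (evaluate performance against ground truth, then keep the model, request retraining, or fall back to non-sensing mode) can be sketched as a simple decision rule. The mean-absolute-error metric and the thresholds are illustrative assumptions; the disclosure does not prescribe a specific metric:

```python
def manage_sensing(sensing_output, ground_truth, accuracy_threshold):
    """Evaluate sensing performance and decide a control action.

    Returns one of three decisions mirroring the text: 'keep' the current
    model, 'retrain' (request an updated model from Sensing Modelling), or
    'fallback' (indicate non-sensing mode to the Sensing Application).
    """
    error = sum(abs(o - g) for o, g in zip(sensing_output, ground_truth)) / len(ground_truth)
    if error <= accuracy_threshold:
        return "keep"
    if error <= 2 * accuracy_threshold:
        return "retrain"   # performance degraded: request model retraining
    return "fallback"      # performance too poor: use non-sensing mode
```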
  • Sensing Application is a function that provides sensing decision output or sensing inference output (predictions or detections, for example). Some examples are target detection and channel prediction.
  • the sensing application function is also responsible for performing actions according to sensing results, for example it triggers or performs corresponding actions according to sensing decision or prediction, and it may trigger actions directed to other entities or to itself.
  • the sensing application function is also responsible for data preparation (such as data pre-processing and cleaning, formatting, and transformation) based on Action Data delivered by a Data Collection function.
  • the Sensing Application can also be referred to as Sensing Action, Sensing in RAN, Sensing usage, Sensing use cases, sensing assisted communication, or sensing service.
  • the name “output generator” is also used herein as a general term for a Sensing Application element in a sensing system.
  • Sensing Application uses sensing results to assist communication and/or perform actions according to the sensing results.
  • a sensing application for assisting communication may perform any of various communication-related functions or operations based on sensing results (such as an RF environment map) .
  • Such functions may include, for example, any one or more of the following: beam prediction, CSI prediction, mobility prediction, beam management.
  • If, for example, a sensing result from Sensing Modelling is the detection of an intruder in a smart home, the Sensing Application may send an alarm message to a user (the home owner and/or police, for example) to inform the user of the incident.
  • Output from the Sensing Application function in Fig. 1 is the output of the sensing model as produced by the Sensing Application function.
  • the Sensing Application function should signal the sensing outputs to nodes that have explicitly requested them (via subscription, for example), or to nodes that are subject to actions based on the output from the Sensing Application.
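The subscription-based delivery of sensing outputs might be sketched as a minimal publish/subscribe helper. The class, method, topic, and node names are invented for this illustration and are not signalling defined by the disclosure:

```python
class SensingOutputPublisher:
    """Deliver sensing outputs only to nodes that subscribed for them."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, node, topic):
        # A node explicitly requests outputs for a given sensing topic.
        self._subscribers.setdefault(topic, []).append(node)

    def publish(self, topic, output):
        # Deliver the output to every node subscribed to this topic.
        return [(node, output) for node in self._subscribers.get(topic, [])]
```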
  • Sensing Results Storage is a function that stores the sensing models, for example the reconstructed physical world (an environment map, or a target and its location).
  • the storage location can be within the RAN (at the BS and/or UE side, for example) or outside the RAN (at the core network or a third party, for example). It receives the sensing model from the Sensing Modelling function; the model may be the first trained model or a re-trained/updated model.
  • Sensing Results Storage can also be referred to as Sensing Storage, RAN Storage, local RAN storage, or RAN and Core Network storage.
  • Storage subsystem is also used herein as a general term for a Sensing Results Storage element in a sensing system.
  • a model is one type of sensing result shown in Fig. 1. More generally, Sensing Results Storage stores sensing results, which may, but need not necessarily, include a model.
  • the Sensing Results Storage function may receive a sensing model transfer request from the Sensing Management function; if it receives the request, the Sensing Results Storage function will send the corresponding model to the Sensing Application function:
  • if the request indicates a requested model ID, the Sensing Results Storage function sends the model with the requested ID;
  • if the request indicates a sensing functionality ID and/or a sensing performance requirement (sensing accuracy, or sensing distance/speed/angle resolution, for example), the Sensing Results Storage function delivers a model satisfying the indicated sensing functionality and performance requirement.
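The two request forms just described (by model ID, or by functionality ID plus a performance requirement) can be sketched as a storage lookup. The field names, and the single 'accuracy' number standing in for the performance requirement, are assumptions made for this example:

```python
def select_model(stored_models, request):
    """Resolve a sensing-model transfer request against stored models.

    A request carries either a model ID, or a sensing functionality ID
    plus a required accuracy; returns the matching model, or None.
    """
    if "model_id" in request:
        for model in stored_models:
            if model["model_id"] == request["model_id"]:
                return model
        return None
    # Otherwise match on functionality and the performance requirement.
    for model in stored_models:
        if (model["functionality_id"] == request["functionality_id"]
                and model["accuracy"] >= request["required_accuracy"]):
            return model
    return None
```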
  • multiple TRPs may perform sensing measurement to collect data and perform sensing modelling (to construct the surrounding environment sensed by a TRP, for example).
  • in some embodiments, there is a header TRP, which is responsible for sensing modelling fusion to obtain a large unified sensing model (of the whole environment, for example); the other TRPs send their sensing modelling results to the header TRP.
  • a sensing function in Fig. 1 can be located at a UE, a BS, the core network, or a 3rd party.
  • the sensing functions can be located at the same physical entity or at different physical entities.
  • a sensing functional framework (including the signaling interface among functions) to support sensing procedures in the network is defined.
  • Fig. 1 and embodiment 1 provide an example, and other examples are provided by other embodiments herein.
  • Embodiment 1 and some other embodiments refer to a sensing model, which is one example of a sensing result.
  • the present disclosure is not in any way restricted to a sensing model, and features disclosed herein may also or instead be applied to other types of sensing results.
  • Fig. 2 illustrates another example of a sensing functional framework and procedures.
  • the sensing functional framework and the flowchart shown in Fig. 2 include five parts.
  • Sensing Data Collection is a function that provides input data to the Sensing Modelling, Sensing Management, and Sensing Application functions.
  • Examples of input data may include measurements from UEs or different network entities, where a measurement may be an RF sensing measurement or a non-RF sensing measurement (LIDAR (Light Detection and Ranging), camera, video, sensor, and so on).
  • The three types of data shown by way of example in Fig. 1 are also shown in Fig. 2 for illustrative embodiment 2:
  • Training data: data needed as input for the Sensing Modelling function, such as data for sensing analysis, including assistance information.
  • Monitoring data: data needed as input for the Sensing Management function.
  • Action data: data needed as input for the Sensing Application function.
  • Sensing Modelling is a function that reconstructs the physical world (that is, gets a model for the physical world), including environment reconstruction, channel reconstruction (by a ray tracing scheme, for example), target reconstruction, digital twin, and so on.
  • the Sensing Modelling function should be able to request specific information to be used to train the sensing model and to avoid reception of unnecessary information.
  • the Sensing Modelling function can receive a sensing model, such as a foundation sensing model, from the Sensing Management function; the Sensing Modelling function then fine-tunes the model to get an updated model.
  • the Sensing Modelling function in Fig. 2 can receive a sensing model (in the example shown) , or more generally a sensing result, from the Sensing Management function and fine-tune the model (result) to get an updated model (result) .
  • a sensing result that is provided to the Sensing Modelling function from the Sensing Management function may be referred to as a foundation sensing result (or a foundation set of data) or a base sensing result (or a base set of data), for example.
  • the Sensing Modelling function is also responsible for data processing (such as data pre-processing and cleaning, formatting, and transformation) based on Training Data delivered by the Sensing Data Collection function, if required.
  • Sensing Management is a function that is responsible for performing sensing control over the Sensing Modelling and Sensing Application functions. Sensing Management monitors the sensing output; if the sensing results are no longer applicable, it requests the Sensing Modelling function to re-train the model, and it indicates to the Sensing Application function to switch the model.
  • the Sensing Management function receives the monitoring data, for example ground truth data, from the Sensing Data Collection function; after comparing the sensing output with the ground truth, the sensing performance can be evaluated.
  • the Sensing Management function receives the output of the Sensing Application function, and the output includes the performance of the sensing application.
  • the Sensing Management function may observe that the sensing performance of the current sensing model is not good enough. For example, for channel reconstruction, a sensing model may be generated according to a static environment map; when there are many moving targets in the environment, causing too much signal reflection, that channel reconstruction model becomes inapplicable. In this case, the Sensing Management function sends the current sensing performance to the Sensing Modelling function, including the current sensing output and its accuracy, resolution, and so on. In addition, the Sensing Management function requests the Sensing Modelling function to retrain the model, in order to get an updated sensing model.
  • When the Sensing Management function observes that the sensing performance of the current sensing model is not good enough, it can send model switching signalling to the Sensing Application function to switch to another sensing model, or send fallback signalling to indicate that the Sensing Application function is to use a non-sensing mode.
  • the Sensing Management function can indicate to the Sensing Application function which sensing model the Sensing Application is to use, and activate or de-activate one or multiple of the candidate sensing models.
  • the Sensing Management function sends a sensing model transfer request to the Sensing Results Storage function to request a model for the Sensing Application function; the request can be an initial model transfer request or an updated model transfer request.
  • the Sensing Management function may have some basic sensing models (such as a foundation model), from the RAN, the core network, or a 3rd party, for example. To enable fast training at the Sensing Modelling function, it can send one or multiple basic sensing models to the Sensing Modelling function, and the Sensing Modelling function can fine-tune a model rather than modelling from the very beginning.
  • This type of model transfer, by the Sensing Management function to the Sensing Modelling function in Fig. 2, is another difference of embodiment 2 relative to embodiment 1.
  • a model and model transfer are referenced in the context of the example shown in Fig. 2, but features related to a model and model transfer may apply more generally to sensing results.
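Fine-tuning a transferred base model rather than modelling from scratch might be sketched as follows. The model is reduced to a single scalar parameter fitted to scalar targets by gradient steps, purely for illustration; the disclosure does not specify any particular model form or training rule:

```python
def fine_tune(base_model, training_data, learning_rate=0.1, steps=50):
    """Fine-tune a transferred base ('foundation') model on local data.

    The parameter starts from the transferred value instead of zero,
    illustrating that modelling does not begin from the very beginning.
    Squared-error gradient steps pull it toward the training targets.
    """
    param = base_model["param"]
    for _ in range(steps):
        grad = sum(2 * (param - t) for t in training_data) / len(training_data)
        param -= learning_rate * grad
    return {"param": param, "base": base_model["param"]}
```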
  • Sensing Application is a function that provides sensing decision output or sensing inference output (predictions or detections, for example). Some examples are target detection and channel prediction.
  • the sensing application function is also responsible for performing actions according to sensing results, for example it triggers or performs corresponding actions according to sensing decision or prediction, and it may trigger actions directed to other entities or to itself.
  • the sensing application function is also responsible for data preparation (such as data pre-processing and cleaning, formatting, and transformation) based on Action Data delivered by a Data Collection function.
  • Output from the Sensing Application function in Fig. 2 is the output of the sensing model as produced by the Sensing Application function.
  • the Sensing Application function should signal the sensing outputs to nodes that have explicitly requested them (via subscription, for example), or to nodes that are subject to actions based on the output from the Sensing Application.
  • Sensing Results Storage is a function that stores the sensing models, for example the reconstructed physical world (an environment map, or a target and its location).
  • the storage location can be within the RAN (at the BS and/or UE side, for example) or outside the RAN (at the core network or a third party, for example). It receives the sensing model from the Sensing Modelling function; the model may be the first trained model or a re-trained/updated model.
  • the Sensing Results Storage function may receive a sensing model transfer request from the Sensing Management function; if it receives the request, the Sensing Results Storage function will send the corresponding model to the Sensing Application function:
  • if the request indicates a requested model ID, the Sensing Results Storage function sends the model with the requested ID;
  • if the request indicates a sensing functionality ID and/or a sensing performance requirement (sensing accuracy, or sensing distance/speed/angle resolution, for example), the Sensing Results Storage function delivers a model satisfying the indicated sensing functionality and performance requirement.
  • a sensing function in Fig. 2, as in Fig. 1, can be located at a UE, a BS, the core network, or a 3rd party.
  • the sensing functions can be located at the same physical entity or at different physical entities.
  • Fig. 1 and embodiment 1 provide an example, and Fig. 2 and embodiment 2 provide another example.
  • Fig. 3 illustrates a further example of a sensing functional framework and procedures.
  • the sensing functional framework supports multiple sensing nodes and non-sensing nodes, and supports data fusion from multiple nodes to provide processed and combined data to the Sensing Modelling, Sensing Management, and Sensing Application functions.
  • the sensing functional framework and the flowchart shown in Fig. 3 include the parts described below.
  • Data Fusion is a function that provides input data to the Sensing Modelling, Sensing Management, and Sensing Application functions. It should be noted that data fusion could also be called data collection.
  • the name “input data generator” may be used to refer to a Data Fusion element of a sensing system, for example. Data Fusion may be considered a special case or embodiment of data collection.
  • the Data Fusion function receives input from one or multiple sensing functions, including RF sensing function and non-RF sensing function.
  • the RF sensing function uses RF sensing to collect data, including 3GPP-based RF sensing and non-3GPP-based RF sensing (Wi-Fi sensing or radar sensing, for example), and it can be a TRP sensing measurement or a UE sensing measurement.
  • the Non-RF sensing function uses non-RF sensing to collect data, such as LIDAR (Light Detection and Ranging), camera, video, other sensors, and so on.
  • Fig. 3 illustrates, by way of example, N RF sensing functions and M Non-RF sensing functions.
  • the number of RF sensing functions may be different from the number of Non-RF sensing functions (for example, there may be N RF sensing functions and M Non-RF sensing functions, where N is not equal to M) . More generally, there may be one or more sensing functions, and those sensing functions may include either or both of RF sensing functions and Non-RF sensing functions.
  • Sensing functions may also be referred to as sensors or sensing devices, for example.
  • the data fusion function is responsible for data processing, including data pre-processing and cleaning, formatting, and transformation, integrating multiple data sources to produce more useful information than that provided by any individual data source.
  • the data fusion function in Fig. 3 combines RF sensing data and non-RF sensing data derived from multiple functions such that the resulting information has less uncertainty than when these data sources are used individually.
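One standard way to realize the uncertainty reduction described above is inverse-variance weighting of independent estimates (e.g. one range estimate from RF sensing, one from LIDAR). This is an illustrative sketch only; the framework does not mandate any particular fusion scheme, and the function name and interface are assumptions.

```python
def fuse_estimates(estimates):
    """estimates: list of (value, variance) pairs from independent sensors.
    Returns (fused_value, fused_variance) via inverse-variance weighting,
    so the fused variance is always lower than any individual input variance."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused_value = sum(w * v for (v, _), w in zip(estimates, weights)) / total
    fused_variance = 1.0 / total
    return fused_value, fused_variance

# Example: an RF sensing function reports a range of 10.2 m with variance
# 0.04, and a LIDAR reports 10.0 m with variance 0.01; the fused estimate
# leans toward the more certain sensor and has variance below both inputs.
```

The same pattern extends to vector states (e.g. a Kalman filter update) when the fused quantity is a position or velocity rather than a scalar.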
  • the three types of data shown by way of example in Figs. 1 and 2 are also shown in Fig. 3 for illustrative embodiment 3:
  • Training Data: Data needed as input for the Sensing Modelling function, such as data for sensing analysis, including assistance information.
  • Monitoring Data: Data needed as input for the Sensing Management function.
  • Action Data: Data needed as input for the Sensing Application function.
  • Embodiment 3 differs from embodiment 1 in that embodiment 3 includes the data fusion function and sensing functions.
  • Sensing Modelling is a function that reconstructs the physical world (that is, obtains a model of the physical world), including environment reconstruction, channel reconstruction (by a ray tracing scheme, for example), target reconstruction, digital twin, and so on.
  • the Sensing Modelling function should be able to request specific information to be used to train the sensing model and to avoid reception of unnecessary information.
  • the sensing modelling function is also responsible for data processing, such as data pre-processing and cleaning, formatting, and transformation based on Training Data delivered by a Sensing Data Collection function, if required.
  • Sensing Management is a function that is responsible for performing sensing control over the Sensing Modelling and Sensing Application functions. Sensing Management monitors the sensing output; if the sensing results are no longer applicable, it will request the Sensing Modelling function to re-train the model and will indicate to the Sensing Application function to switch the model.
  • the Sensing Management function receives the monitoring data, for example ground truth data, from Sensing Data Collection (and specifically from the Data Fusion function in the example shown in Fig. 3); after comparing the sensing output and the ground truth, the sensing performance can be evaluated.
  • the Sensing Management function receives the output of the Sensing Application function, and the output includes the performance of the sensing application.
  • the Sensing Management function may observe that the sensing performance of the current sensing model is not good enough. For example, for channel reconstruction, a sensing model may be generated according to a static environment map; when there are many moving targets in the environment, causing too much signal reflection, the channel reconstruction model becomes inapplicable. In this case, the Sensing Management function will send the current sensing performance to the Sensing Modelling function, including the current sensing output and its accuracy, resolution, and so on. In addition, the Sensing Management function also requests the Sensing Modelling function to re-train the model and to provide an updated sensing model.
  • when the Sensing Management function observes that the sensing performance of the current sensing model is not good enough, it can send model switching signalling to the Sensing Application function to switch to another sensing model, or send fallback signalling to indicate that the Sensing Application function is to use a non-sensing mode.
  • the Sensing Management function can indicate to the Sensing Application function which sensing model the Sensing Application is to use, and activate or de-activate one or multiple of the candidate sensing models.
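The monitoring and control logic above (evaluate performance against ground truth, then re-train, switch models, or fall back) can be sketched as a single decision routine. This is a hedged illustration under assumed names: the scalar error metric, the threshold, and the callback interfaces (`request_retrain`, `switch_model`, `fallback`) are hypothetical stand-ins for the signalling described in the text.

```python
def manage(sensing_output, ground_truth, error_threshold,
           request_retrain, switch_model, fallback, candidate_models):
    """One evaluation step of a hypothetical Sensing Management function."""
    error = abs(sensing_output - ground_truth)   # toy scalar performance metric
    if error <= error_threshold:
        return "ok"                              # current model still applicable
    # Performance not good enough: report it to Sensing Modelling and request
    # re-training (a real request would also carry accuracy, resolution, etc.).
    request_retrain(error)
    if candidate_models:
        # Model switching signalling: activate another candidate sensing model.
        switch_model(candidate_models[0])
        return "switched"
    # No candidate available: fallback signalling to use non-sensing mode.
    fallback()
    return "fallback"
```

In practice each callback would be a message over the framework's signalling interface rather than a local function call; the control flow, however, follows the text directly.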
  • a sensing functional framework (including the signaling interface among functions) to support sensing procedure in the network is defined.
  • Fig. 4 illustrates yet another example of a sensing functional framework and procedures.
  • the sensing functional framework supports anchor management function, where an anchor is a specific type of node (for example a sensing UE deployed by an operator) , and the anchor could report ground truth to the network to assist network to calibrate/adjust sensing results.
  • a node is one example of an anchor; an anchor may alternatively be a different type of device rather than a node.
  • Anchor Management is one feature or function by which embodiment 4 differs from embodiment 1.
  • Anchor Management is a function that is responsible for performing control to anchors and non-anchors.
  • the Anchor Management function can configure which node is the anchor, and indicate to the anchor to perform data collection and the corresponding collected data type.
  • the Anchor Management function could also indicate to a non-anchor to perform data collection and corresponding collected data type.
  • Anchor Management may also be referred to, for example, as Sensing Anchor Management or, in the case of a sensing anchor being a sensing device or sensor, as Sensing Device Management, Sensor (or Sensing Device) Management, or Sensing Function Management, for example.
  • Anchor manager is also used herein to refer generally to an Anchor Management element of a sensing system.
  • An anchor is the node that can report ground truth to other functions, such as Sensing Modelling, Sensing Management, Sensing Application functions.
  • the anchor is deployed by the network operator at a known location, and the anchor performs sensing measurement and reports the sensed data to the network, including the measurement data and the ground truth, where the ground truth includes the position information.
  • a node is one example of an anchor device. More generally, an anchor may be a device, which may be a node but need not necessarily be a node, that can report ground truth.
  • a second example of an anchor is a device, deployed by the network operator at a known location for example, that transmits a sensing signal to assist one or more other sensing devices to perform sensing measurement.
  • an anchor is a passive object.
  • an anchor may be an object with known information such as shape, size, orientation, speed, location, distances or relative motion between objects.
  • the anchor information can be indicated from a BS to a UE for example, in which case the UE can perform sensing measurement and compare its sensing results with the anchor information, so as to calibrate its sensing results.
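The calibration step above, in which a UE compares its own sensing results against indicated anchor information, can be sketched as a simple bias correction. This is an illustrative assumption (1-D geometry, additive bias, hypothetical function names); real calibration may involve multi-dimensional offsets, clock errors, and more elaborate models.

```python
def calibrate_offset(measured_anchor_pos, known_anchor_pos):
    """Systematic bias of the UE's sensing measurements, estimated by
    comparing its measurement of the anchor with the anchor's known
    (ground truth) position indicated by the BS."""
    return known_anchor_pos - measured_anchor_pos

def corrected(measurement, offset):
    """Apply the calibration offset to a subsequent sensing measurement."""
    return measurement + offset

# Example: the UE measures the anchor at 100.4 m, while the BS indicated the
# anchor's true position as 100.0 m; the 0.4 m bias is then removed from
# later measurements of ordinary (non-anchor) targets.
```

Averaging the offset over several anchor measurements would make the estimate robust to measurement noise; a single measurement is used here only for brevity.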
  • Anchor Data Collection is another difference between embodiment 4 and embodiment 1.
  • Anchor Data Collection is a function that provides input data to Sensing Modelling, Sensing Management, Sensing Application functions.
  • the input data includes ground truth information.
  • Ground truth refers to the true answer to a specific problem or question. For example, when performing target sensing, the ground truth is the target's exact location and exact shape. For environment reconstruction, the ground truth is the exact environment, including building locations/shapes, street locations, and so on.
  • Examples of input data may include measurements from UEs or different network entities, where the measurement may be RF sensing measurement, non-RF sensing (LIDAR (Light Detection and Ranging) , camera, video, sensor, and so on) measurement.
  • the input data from anchor data collection may include the three types of data shown by way of example in Fig. 1:
  • Training Data: Data needed as input for the Sensing Modelling function, such as data for sensing analysis, including assistance information.
  • Monitoring Data: Data needed as input for the Sensing Management function.
  • Action Data: Data needed as input for the Sensing Application function.
  • Embodiment 4 also differs from embodiment 1 in that it includes Non-anchor Data Collection.
  • Non-anchor Data Collection is a function that provides input data to Sensing Modelling, Sensing Management, Sensing Application functions.
  • the input data does not include the ground truth information.
  • Examples of input data may include measurements from UEs or different network entities, where the measurement may be RF sensing measurement, non-RF sensing (LIDAR (Light Detection and Ranging) , camera, video, sensor, and so on) measurement.
  • the input data from non-anchor data collection may include the three types of data shown by way of example in Fig. 1:
  • Training Data: Data needed as input for the Sensing Modelling function, such as data for sensing analysis, including assistance information.
  • Monitoring Data: Data needed as input for the Sensing Management function.
  • Action Data: Data needed as input for the Sensing Application function.
  • Anchor Data collection and Non-anchor Data collection may be considered a special case or embodiment of data collection.
  • Sensors in a sensing system may include one or more sensors to provide (to Anchor Data collection) anchor data related to one or more sensing anchors, and one or more sensors to provide (to Non-anchor Data collection) non-anchor data that is not related to a sensing anchor.
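The anchor/non-anchor split described above boils down to whether a report carries ground truth alongside the measurement. A minimal sketch, under assumed field and function names (none of which are standardized), could look like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensingReport:
    source_id: str
    measurement: float                     # e.g. a measured range in metres
    ground_truth: Optional[float] = None   # present only for anchor reports

    @property
    def is_anchor(self) -> bool:
        # An anchor report carries ground truth (e.g. a known position).
        return self.ground_truth is not None

def route(reports):
    """Split incoming reports between the Anchor Data Collection and
    Non-anchor Data Collection functions."""
    anchor = [r for r in reports if r.is_anchor]
    non_anchor = [r for r in reports if not r.is_anchor]
    return anchor, non_anchor
```

Downstream, the anchor list feeds performance evaluation and calibration (ground truth is available), while the non-anchor list still contributes training and action data.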
  • Sensing Modelling is a function that reconstructs the physical world (that is, obtains a model of the physical world), including environment reconstruction, channel reconstruction (by a ray tracing scheme, for example), target reconstruction, digital twin, and so on.
  • the Sensing Modelling function should be able to request specific information to be used to train the sensing model and to avoid reception of unnecessary information.
  • the sensing modelling function is also responsible for data processing, such as data pre-processing and cleaning, formatting, and transformation based on Training Data delivered by an Anchor Data Collection and/or a Non-Anchor Data Collection function, if required.
  • Sensing Management is a function that is responsible for performing sensing control over the Sensing Modelling and Sensing Application functions. Sensing Management monitors the sensing output; if the sensing results are no longer applicable, it will request the Sensing Modelling function to re-train the model and will indicate to the Sensing Application function to switch the model.
  • the Sensing Management function receives the monitoring data, for example ground truth data, from an Anchor Data Collection and/or a Non-Anchor Data Collection function; after comparing the sensing output and the ground truth, the sensing performance can be evaluated.
  • the Sensing Management function receives the output of the Sensing Application function, and the output includes the performance of the sensing application.
  • the Sensing Management function may observe that the sensing performance of the current sensing model is not good enough. For example, for channel reconstruction, a sensing model may be generated according to a static environment map; when there are many moving targets in the environment, causing too much signal reflection, the channel reconstruction model becomes inapplicable. In this case, the Sensing Management function will send the current sensing performance to the Sensing Modelling function, including the current sensing output and its accuracy, resolution, and so on. In addition, the Sensing Management function also requests the Sensing Modelling function to re-train the model and to provide an updated sensing model.
  • when the Sensing Management function observes that the sensing performance of the current sensing model is not good enough, it can send model switching signalling to the Sensing Application function to switch to another sensing model, or send fallback signalling to indicate that the Sensing Application function is to use a non-sensing mode.
  • the Sensing Management function can indicate to the Sensing Application function which sensing model the Sensing Application is to use, and activate or de-activate one or multiple of the candidate sensing models.
  • the Sensing Management function sends a sensing model transfer request to the Sensing Results Storage function to request a model for the Sensing Application function; the request can be an initial model transfer request or an updated model transfer request.
  • Sensing Application is a function that provides sensing decision output or sensing inference output (predictions or detections, for example). Some examples are target detection and channel prediction.
  • the sensing application function is also responsible for performing actions according to sensing results, for example it triggers or performs corresponding actions according to sensing decision or prediction, and it may trigger actions directed to other entities or to itself.
  • the sensing application function is also responsible for data preparation (such as data pre-processing and cleaning, formatting, and transformation) based on Action Data delivered by an Anchor Data Collection and/or a Non-Anchor Data Collection function.
  • Output from the Sensing Application function is the output of the sensing model produced by a Sensing Application function.
  • the Sensing Application function should signal the outputs of the sensing to nodes that have explicitly requested them (via subscription for example) , or nodes that are subject to actions based on the output from Sensing Application.
  • Sensing Results Storage is a function that stores the sensing models, for example the reconstructed physical world (an environment map, or a target and its location, for example).
  • the storage location can be within the RAN (e.g. at the BS and/or UE side) or outside the RAN (the core network or a third party, for example). The Sensing Results Storage function receives the sensing model from the Sensing Modelling function; the model may be the first trained model or a re-trained/updated model.
  • the Sensing Results Storage function may receive a Sensing Model transfer request from the Sensing Management function, and if it receives the request, it will send the corresponding model to the Sensing Application function:
  • if the request indicates the requested model ID, the Sensing Results Storage function sends the model with that ID;
  • if the request indicates the sensing functionality ID and/or a sensing performance requirement (sensing accuracy, or sensing distance/speed/angle resolution, for example), the Sensing Results Storage function delivers a model satisfying the indicated sensing functionality and performance requirement.
  • as in Figs. 1 to 3, a sensing function in Fig. 4 can be located at a UE, BS, core network, or a third party.
  • each can be located at the same physical entity or different physical entities.
  • the anchor management function can be combined with the data fusion function proposed in embodiment 3, where the data fusion function can be placed after the anchor data collection and non-anchor data collection functions, so as to obtain combined, more accurate data.
  • a sensing functional framework (including the signaling interface among functions) to support sensing procedure in the network is defined.
  • Figs. 1 to 3 and embodiments 1 to 3 provide examples, and Fig. 4 and embodiment 4 provide another example.
  • Alt-1: including Sensing Data Collection, Sensing Modelling, Sensing Management, Sensing Inference, and Sensing Model Storage.
  • LTE: Long Term Evolution; NR: New Radio; BWP: Bandwidth part; BS: Base Station; CA: Carrier Aggregation; CC: Component Carrier; CG: Cell Group; CSI: Channel state information; CSI-RS: Channel state information Reference Signal; DC: Dual Connectivity; DCI: Downlink control information; DL: Downlink; DL-SCH: Downlink shared channel; EN-DC: E-UTRA NR dual connectivity with MCG using E-UTRA and SCG using NR; gNB: Next generation (or 5G) base station; HARQ-ACK: Hybrid automatic repeat request acknowledgement; MCG: Master cell group; MCS: Modulation and coding scheme; MAC-CE: Medium Access Control-Control Element; PBCH: Physical broadcast channel; PCell: Primary cell; PDCCH: Physical downlink control channel; PDSCH: Physical downlink shared channel; PRACH: Physical Random Access Channel; PRG: Physical resource block group; PSCell: Primary SCG Cell; PSS: Primary synchronization signal; PUCCH: Physical uplink control channel; PUSCH: Physical uplink shared channel; RACH: Random access channel; RAPID: Random access preamble identity; RB: Resource block; RE: Resource element; RRM: Radio Resource Management
  • the communication system 100 comprises a radio access network 120.
  • the radio access network 120 may be a next generation (sixth generation (6G) or later for example) radio access network, or a legacy (5G, 4G, 3G or 2G for example) radio access network.
  • One or more communication electronic devices (ED) 110a, 110b, 110c, 110d, 110e, 110f, 110g, 110h, 110i, 110j (generically referred to as 110) may be interconnected to one another or connected to one or more network nodes (170a, 170b, generically referred to as 170) in the radio access network 120.
  • a core network 130 may be a part of the communication system and may be dependent or independent of the radio access technology used in the communication system 100.
  • the communication system 100 comprises a public switched telephone network (PSTN) 140, the internet 150, and other networks 160.
  • Fig. 6 illustrates an example communication system 100.
  • the communication system 100 enables multiple wireless or wired elements to communicate data and other content.
  • the purpose of the communication system 100 may be to provide content, such as voice, data, video, and/or text, via broadcast, multicast and unicast, and so on.
  • the communication system 100 may operate by sharing resources, such as carrier spectrum bandwidth, between its constituent elements.
  • the communication system 100 may include a terrestrial communication system and/or a non-terrestrial communication system.
  • the communication system 100 may provide a wide range of communication services and applications (such as earth monitoring, remote sensing, passive sensing and positioning, navigation and tracking, autonomous delivery and mobility, and so on) .
  • the communication system 100 may provide a high degree of availability and robustness through a joint operation of the terrestrial communication system and the non-terrestrial communication system.
  • integrating a non-terrestrial communication system (or components thereof) into a terrestrial communication system can result in what may be considered a heterogeneous network comprising multiple layers.
  • the heterogeneous network may achieve better overall performance through efficient multi-link joint operation, more flexible functionality sharing, and faster physical layer link switching between terrestrial networks and non-terrestrial networks.
  • the communication system 100 includes electronic devices (ED) 110a-110d (generically referred to as ED 110) , radio access networks (RANs) 120a-120b, non-terrestrial communication network 120c, a core network 130, a public switched telephone network (PSTN) 140, the internet 150, and other networks 160.
  • the RANs 120a-120b include respective base stations (BSs) 170a-170b, which may be generically referred to as terrestrial transmit and receive points (T-TRPs) 170a-170b.
  • the non-terrestrial communication network 120c includes an access node 172, which may be generically referred to as a non-terrestrial transmit and receive point (NT-TRP) 172.
  • Any ED 110 may be alternatively or additionally configured to interface, access, or communicate with any other T-TRP 170a-170b and NT-TRP 172, the internet 150, the core network 130, the PSTN 140, the other networks 160, or any combination of the preceding.
  • ED 110a may communicate an uplink and/or downlink transmission over an interface 190a with T-TRP 170a.
  • the EDs 110a, 110b and 110d may also communicate directly with one another via one or more sidelink air interfaces 190b.
  • ED 110d may communicate an uplink and/or downlink transmission over an interface 190c with NT-TRP 172.
  • the air interfaces 190a and 190b may use similar communication technology, such as any suitable radio access technology.
  • the communication system 100 may implement one or more channel access methods, such as code division multiple access (CDMA) , time division multiple access (TDMA) , frequency division multiple access (FDMA) , orthogonal FDMA (OFDMA) , or single-carrier FDMA (SC-FDMA) in the air interfaces 190a and 190b.
  • the air interfaces 190a and 190b may utilize other higher dimension signal spaces, which may involve a combination of orthogonal and/or non-orthogonal dimensions.
  • the air interface 190c can enable communication between the ED 110d and one or multiple NT-TRPs 172 via a wireless link or simply a link.
  • the link may be a dedicated connection for unicast transmission, a connection for broadcast transmission, or a connection between a group of EDs and one or multiple NT-TRPs for multicast transmission.
  • the RANs 120a and 120b are in communication with the core network 130 to provide the EDs 110a, 110b, and 110c with various services such as voice, data, and other services.
  • the RANs 120a and 120b and/or the core network 130 may be in direct or indirect communication with one or more other RANs (not shown) , which may or may not be directly served by core network 130, and may or may not employ the same radio access technology as RAN 120a, RAN 120b or both.
  • the core network 130 may also serve as a gateway access between (i) the RANs 120a and 120b or the EDs 110a, 110b, and 110c or both, and (ii) other networks (such as the PSTN 140, the internet 150, and the other networks 160) .
  • the EDs 110a, 110b, and 110c may include functionality for communicating with different wireless networks over different wireless links using different wireless technologies and/or protocols. Instead of wireless communication (or in addition thereto) , the EDs 110a, 110b, and 110c may communicate via wired communication channels to a service provider or switch (not shown) , and to the internet 150.
  • PSTN 140 may include circuit switched telephone networks for providing plain old telephone service (POTS) .
  • Internet 150 may include a network of computers and subnets (intranets) or both, and incorporate protocols, such as Internet Protocol (IP) , Transmission Control Protocol (TCP) , User Datagram Protocol (UDP) .
  • EDs 110a, 110b, and 110c may be multimode devices capable of operation according to multiple radio access technologies, and may incorporate the multiple transceivers necessary to support such operation.
  • Fig. 7 illustrates another example of an ED 110 and a base station 170a, 170b and/or 172.
  • the ED 110 is used to connect persons, objects, machines, and so on.
  • the ED 110 may be widely used in various scenarios, for example, cellular communications, device-to-device (D2D) , vehicle to everything (V2X) , peer-to-peer (P2P) , machine-to-machine (M2M) , machine-type communications (MTC) , internet of things (IOT) , virtual reality (VR) , augmented reality (AR) , industrial control, self-driving, remote medical, smart grid, smart furniture, smart office, smart wearable, smart transportation, smart city, drones, robots, remote sensing, passive sensing, positioning, navigation and tracking, autonomous delivery and mobility, and so on.
  • Each ED 110 represents any suitable end user device for wireless operation and may include such devices (or may be referred to) as a user equipment/device (UE) , a wireless transmit/receive unit (WTRU) , a mobile station, a fixed or mobile subscriber unit, a cellular telephone, a station (STA) , a machine type communication (MTC) device, a personal digital assistant (PDA) , a smartphone, a laptop, a computer, a tablet, a wireless sensor, a consumer electronics device, a smart book, a vehicle, a car, a truck, a bus, a train, or an IoT device, an industrial device, or apparatus (communication module, modem, or chip for example) in the foregoing devices, among other possibilities.
  • the base stations 170a and 170b are T-TRPs and will hereafter be referred to as T-TRP 170. Also shown in Fig. 7, an NT-TRP will hereafter be referred to as NT-TRP 172.
  • Each ED 110 connected to T-TRP 170 and/or NT-TRP 172 can be dynamically or semi-statically turned-on (that is, established, activated, or enabled) , turned-off (that is, released, deactivated, or disabled) and/or configured in response to one or more of: connection availability and connection necessity.
  • the ED 110 includes a transmitter 201 and a receiver 203 coupled to one or more antennas 204. Only one antenna 204 is illustrated. One, some, or all of the antennas may alternatively be panels.
  • the transmitter 201 and the receiver 203 may be integrated, as a transceiver for example.
  • the transceiver is configured to modulate data or other content for transmission by at least one antenna 204 or network interface controller (NIC) .
  • the transceiver is also configured to demodulate data or other content received by the at least one antenna 204.
  • Each transceiver includes any suitable structure for generating signals for wireless or wired transmission and/or processing signals received wirelessly or by wire.
  • Each antenna 204 includes any suitable structure for transmitting and/or receiving wireless or wired signals.
  • the ED 110 includes at least one memory 208.
  • the memory 208 stores instructions and data used, generated, or collected by the ED 110.
  • the memory 208 could store software instructions or modules configured to implement some or all of the functionality and/or embodiments described herein and that are executed by processing unit (s) , shown by way of example as a processor 210.
  • Each memory 208 includes any suitable volatile and/or non-volatile storage and retrieval device (s) . Any suitable type of memory may be used, such as random access memory (RAM) , read only memory (ROM) , hard disk, optical disc, subscriber identity module (SIM) card, memory stick, secure digital (SD) memory card, on-processor cache, and the like.
  • the ED 110 may further include one or more input/output devices (not shown) or interfaces (such as a wired interface to the internet 150 in Fig. 5) .
  • the input/output devices permit interaction with a user or other devices in the network.
  • Each input/output device includes any suitable structure for providing information to or receiving information from a user, such as a speaker, microphone, keypad, keyboard, display, or touch screen, including network interface communications.
  • the ED 110 further includes a processor 210 for performing operations including those related to preparing a transmission for uplink transmission to the NT-TRP 172 and/or T-TRP 170, those related to processing downlink transmissions received from the NT-TRP 172 and/or T-TRP 170, and those related to processing sidelink transmission to and from another ED 110.
  • Processing operations related to preparing a transmission for uplink transmission may include operations such as encoding, modulating, transmit beamforming, and generating symbols for transmission.
  • Processing operations related to processing downlink transmissions may include operations such as receive beamforming, demodulating and decoding received symbols.
  • a downlink transmission may be received by the receiver 203, possibly using receive beamforming, and the processor 210 may extract signaling from the downlink transmission (for example by detecting and/or decoding the signaling) .
  • An example of signaling may be a reference signal transmitted by NT-TRP 172 and/or T-TRP 170.
  • the processor 210 implements the transmit beamforming and/or receive beamforming based on the indication of beam direction, beam angle information (BAI) for example, received from T-TRP 170.
  • the processor 210 may perform operations relating to network access (such as initial access) and/or downlink synchronization, such as operations relating to detecting a synchronization sequence, decoding and obtaining the system information, and so on. In some embodiments, the processor 210 may perform channel estimation, using a reference signal received from the NT-TRP 172 and/or T-TRP 170 for example.
  • the processor 210 may form part of the transmitter 201 and/or receiver 203.
  • the memory 208 may form part of the processor 210.
  • the processor 210, and the processing components of the transmitter 201 and receiver 203 may each be implemented by the same or different one or more processors that are configured to execute instructions stored in a memory (in memory 208 for example) .
  • some or all of the processor 210, and the processing components of the transmitter 201 and receiver 203 may be implemented using dedicated circuitry, such as a programmed field-programmable gate array (FPGA) , a graphical processing unit (GPU) , or an application-specific integrated circuit (ASIC) .
  • the T-TRP 170 may be known by other names in some implementations, such as a base station, a base transceiver station (BTS) , a radio base station, a network node, a network device, a device on the network side, a transmit/receive node, a Node B, an evolved NodeB (eNodeB or eNB) , a Home eNodeB, a next Generation NodeB (gNB) , a transmission point (TP) , a site controller, an access point (AP) , a wireless router, a relay station, a remote radio head, a terrestrial node, a terrestrial network device, a terrestrial base station, a base band unit (BBU) , a remote radio unit (RRU) , an active antenna unit (AAU) , a remote radio head (RRH) , a central unit (CU) , a distributed unit (DU) , or a positioning node, among other possibilities.
  • the T-TRP 170 may be a macro BS, a pico BS, a relay node, a donor node, or the like, or combinations thereof.
  • the T-TRP 170 may refer to the foregoing devices, or to an apparatus (a communication module, modem, or chip, for example) in the foregoing devices.
  • the parts of the T-TRP 170 may be distributed.
  • some of the modules of the T-TRP 170 may be located remote from the equipment housing the antennas of the T-TRP 170, and may be coupled to the equipment housing the antennas over a communication link (not shown) sometimes known as fronthaul, such as the common public radio interface (CPRI).
  • the term T-TRP 170 may also refer to modules on the network side that perform processing operations, such as determining the location of the ED 110, resource allocation (scheduling) , message generation, and encoding/decoding, and that are not necessarily part of the equipment housing the antennas of the T-TRP 170.
  • the modules may also be coupled to other T-TRPs.
  • the T-TRP 170 may actually be a plurality of T-TRPs that are operating together to serve the ED 110, through coordinated multipoint transmissions for example.
  • the T-TRP 170 includes at least one transmitter 252 and at least one receiver 254 coupled to one or more antennas 256. Only one antenna 256 is illustrated. One, some, or all of the antennas may alternatively be panels. The transmitter 252 and the receiver 254 may be integrated as a transceiver.
  • the T-TRP 170 further includes a processor 260 for performing operations including those related to: preparing a transmission for downlink transmission to the ED 110, processing an uplink transmission received from the ED 110, preparing a transmission for backhaul transmission to NT-TRP 172, and processing a transmission received over backhaul from the NT-TRP 172.
  • Processing operations related to preparing a transmission for downlink or backhaul transmission may include operations such as encoding, modulating, precoding (MIMO precoding for example) , transmit beamforming, and generating symbols for transmission.
  • Processing operations related to processing received transmissions in the uplink or over backhaul may include operations such as receive beamforming, and demodulating and decoding received symbols.
  • the processor 260 may also perform operations relating to network access (such as initial access) and/or downlink synchronization, such as generating the content of synchronization signal blocks (SSBs) , generating the system information, etc.
  • the processor 260 also generates the indication of beam direction, such as BAI, which may be scheduled for transmission by scheduler 253.
  • the processor 260 performs other network-side processing operations described herein, such as determining the location of the ED 110, determining where to deploy NT-TRP 172, and so on.
  • the processor 260 may generate signaling, to configure one or more parameters of the ED 110 and/or one or more parameters of the NT-TRP 172 for example. Any signaling generated by the processor 260 is sent by the transmitter 252. Note that “signaling” , as used herein, may alternatively be called control signaling.
  • a scheduler 253 may be coupled to the processor 260.
  • the scheduler 253 may be included within or operated separately from the T-TRP 170. The scheduler 253 may schedule uplink, downlink, and/or backhaul transmissions, including issuing scheduling grants and/or configuring scheduling-free ( “configured grant” ) resources.
  • the T-TRP 170 further includes a memory 258 for storing information and data.
  • the memory 258 stores instructions and data used, generated, or collected by the T-TRP 170.
  • the memory 258 could store software instructions or modules configured to implement some or all of the functionality and/or embodiments described herein and that are executed by the processor 260.
  • the processor 260, the scheduler 253, and the processing components of the transmitter 252 and receiver 254 may each be implemented by the same or different one or more processors that are configured to execute instructions stored in a memory, in memory 258 for example.
  • some or all of the processor 260, the scheduler 253, and the processing components of the transmitter 252 and receiver 254 may be implemented using dedicated circuitry, such as a FPGA, a GPU, or an ASIC.
  • the NT-TRP 172 is illustrated as a drone only as an example; the NT-TRP 172 may be implemented in any suitable non-terrestrial form. Also, the NT-TRP 172 may be known by other names in some implementations, such as a non-terrestrial node, a non-terrestrial network device, or a non-terrestrial base station.
  • the NT-TRP 172 includes a transmitter 272 and a receiver 274 coupled to one or more antennas 280. Only one antenna 280 is illustrated. One, some, or all of the antennas may alternatively be panels.
  • the transmitter 272 and the receiver 274 may be integrated as a transceiver.
  • the NT-TRP 172 further includes a processor 276 for performing operations including those related to: preparing a transmission for downlink transmission to the ED 110, processing an uplink transmission received from the ED 110, preparing a transmission for backhaul transmission to T-TRP 170, and processing a transmission received over backhaul from the T-TRP 170.
  • Processing operations related to preparing a transmission for downlink or backhaul transmission may include operations such as encoding, modulating, precoding (MIMO precoding for example) , transmit beamforming, and generating symbols for transmission.
  • Processing operations related to processing received transmissions in the uplink or over backhaul may include operations such as receive beamforming, and demodulating and decoding received symbols.
  • the processor 276 implements the transmit beamforming and/or receive beamforming based on beam direction information (BAI for example) received from T-TRP 170. In some embodiments, the processor 276 may generate signaling, to configure one or more parameters of the ED 110 for example.
  • the NT-TRP 172 implements physical layer processing, but does not implement higher layer functions such as functions at the medium access control (MAC) or radio link control (RLC) layer. This is only an example; more generally, the NT-TRP 172 may implement higher layer functions in addition to physical layer processing.
  • the NT-TRP 172 further includes a memory 278 for storing information and data.
  • the processor 276 may form part of the transmitter 272 and/or receiver 274.
  • the memory 278 may form part of the processor 276.
  • the processor 276 and the processing components of the transmitter 272 and receiver 274 may each be implemented by the same or different one or more processors that are configured to execute instructions stored in a memory, e.g. in memory 278. Alternatively, some or all of the processor 276 and the processing components of the transmitter 272 and receiver 274 may be implemented using dedicated circuitry, such as a programmed FPGA, a GPU, or an ASIC. In some embodiments, the NT-TRP 172 may actually be a plurality of NT-TRPs that are operating together to serve the ED 110, through coordinated multipoint transmissions for example.
  • the T-TRP 170, the NT-TRP 172, and/or the ED 110 may include other components, but these have been omitted for the sake of clarity.
  • Fig. 8 illustrates units or modules in a device, such as in ED 110, in T-TRP 170, or in NT-TRP 172.
  • a signal may be transmitted by a transmitting unit or a transmitting module.
  • a signal may be received by a receiving unit or a receiving module.
  • a signal may be processed by a processing unit or a processing module.
  • Other steps may be performed by an artificial intelligence (AI) or machine learning (ML) module.
  • the respective units or modules may be implemented using hardware, one or more components or devices that execute software, or a combination thereof.
  • one or more of the units or modules may be an integrated circuit, such as a programmed FPGA, a GPU, or an ASIC.
  • the modules may be retrieved by a processor, in whole or part as needed, individually or together for processing, in single or multiple instances, and that the modules themselves may include instructions for further deployment and instantiation.
  • An air interface generally includes a number of components and associated parameters that collectively specify how a transmission is to be sent and/or received over a wireless communications link between two or more communicating devices.
  • an air interface may include one or more components defining the waveform (s) , frame structure (s) , multiple access scheme (s) , protocol (s) , coding scheme (s) and/or modulation scheme (s) for conveying information (data for example) over a wireless communications link.
  • the wireless communications link may support a link between a radio access network and user equipment (a “Uu” link for example) , and/or the wireless communications link may support a link between device and device, such as between two user equipments (a “sidelink” for example) , and/or the wireless communications link may support a link between a non-terrestrial (NT) -communication network and user equipment (UE) .
  • a waveform component may specify a shape and form of a signal being transmitted.
  • Waveform options may include orthogonal multiple access waveforms and non-orthogonal multiple access waveforms.
  • Non-limiting examples of such waveform options include Orthogonal Frequency Division Multiplexing (OFDM) , Filtered OFDM (f-OFDM) , Time windowing OFDM, Filter Bank Multicarrier (FBMC) , Universal Filtered Multicarrier (UFMC) , Generalized Frequency Division Multiplexing (GFDM) , Wavelet Packet Modulation (WPM) , Faster Than Nyquist (FTN) Waveform, and low Peak to Average Power Ratio Waveform (low PAPR WF) .
  • a frame structure component may specify a configuration of a frame or group of frames.
  • the frame structure component may indicate one or more of a time, frequency, pilot signature, code, or other parameter of the frame or group of frames. More details of frame structure will be discussed below.
  • a multiple access scheme component may specify multiple access technique options, including technologies defining how communicating devices share a common physical channel, such as: Time Division Multiple Access (TDMA) , Frequency Division Multiple Access (FDMA) , Code Division Multiple Access (CDMA) , Single Carrier Frequency Division Multiple Access (SC-FDMA) , Low Density Signature Multicarrier Code Division Multiple Access (LDS-MC-CDMA) , Non-Orthogonal Multiple Access (NOMA) , Pattern Division Multiple Access (PDMA) , Lattice Partition Multiple Access (LPMA) , Resource Spread Multiple Access (RSMA) , and Sparse Code Multiple Access (SCMA) .
  • multiple access technique options may include: scheduled access versus non-scheduled access (also known as grant-free access); non-orthogonal multiple access versus orthogonal multiple access via a dedicated channel resource, for example (such as no sharing between multiple communicating devices); contention-based versus non-contention-based shared channel resources; and cognitive radio-based access.
  • a hybrid automatic repeat request (HARQ) protocol component may specify how a transmission and/or a re-transmission is to be made.
  • Non-limiting examples of transmission and/or re-transmission mechanism options include those that specify a scheduled data pipe size, a signaling mechanism for transmission and/or re-transmission, and a re-transmission mechanism.
  • a coding and modulation component may specify how information being transmitted may be encoded/decoded and modulated/demodulated for transmission/reception purposes.
  • Coding may refer to methods of error detection and forward error correction.
  • Non-limiting examples of coding options include turbo trellis codes, turbo product codes, fountain codes, low-density parity check codes, and polar codes.
  • Modulation may refer, simply, to the constellation (including, for example, the modulation technique and order) , or more specifically to various types of advanced modulation methods such as hierarchical modulation and low PAPR modulation.
  • the air interface may be a “one-size-fits-all concept” .
  • the components within the air interface cannot be changed or adapted once the air interface is defined.
  • only limited parameters or modes of an air interface such as a cyclic prefix (CP) length or a multiple input multiple output (MIMO) mode, can be configured.
  • an air interface design may provide a unified or flexible framework to support below 6GHz and beyond 6GHz frequency (such as mmWave) bands for both licensed and unlicensed access.
  • flexibility of a configurable air interface provided by a scalable numerology and symbol duration may allow for transmission parameter optimization for different spectrum bands and for different services/devices.
  • a unified air interface may be self-contained in a frequency domain, and a frequency domain self-contained design may support more flexible radio access network (RAN) slicing through channel resource sharing between different services in both frequency and time.
  • a frame structure is a feature of the wireless communication physical layer that defines a time domain signal transmission structure, for example to allow for timing reference and timing alignment of basic time domain transmission units.
  • Wireless communication between communicating devices may occur on time-frequency resources governed by a frame structure.
  • the frame structure may sometimes instead be called a radio frame structure.
  • FDD communication is when transmissions in different directions (uplink versus downlink for example) occur in different frequency bands.
  • TDD communication is when transmissions in different directions (uplink versus downlink for example) occur over different time durations.
  • FD communication is when transmission and reception occurs on the same time-frequency resource, that is, a device can both transmit and receive on the same frequency resource concurrently in time.
  • a frame structure is the frame structure in long-term evolution (LTE) having the following specifications: each frame is 10 ms in duration; each frame has 10 subframes, which are each 1 ms in duration; each subframe includes two slots, each of which is 0.5 ms in duration; each slot is for transmission of 7 OFDM symbols (assuming normal CP); each OFDM symbol has a symbol duration and a particular bandwidth (or partial bandwidth or bandwidth partition) related to the number of subcarriers and subcarrier spacing; the frame structure is based on OFDM waveform parameters such as subcarrier spacing and CP length (where the CP has a fixed length or limited length options); and the switching gap between uplink and downlink in TDD must be an integer multiple of the OFDM symbol duration.
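The LTE timing relationships listed above can be checked with simple arithmetic. The following sketch is illustrative only (the constant and variable names are hypothetical, not drawn from any specification text):

```python
# Illustrative arithmetic for the LTE frame structure described above.
FRAME_MS = 10.0           # each frame is 10 ms in duration
SUBFRAMES_PER_FRAME = 10  # each frame has 10 subframes
SLOTS_PER_SUBFRAME = 2    # each subframe includes two slots
SYMBOLS_PER_SLOT = 7      # 7 OFDM symbols per slot, assuming normal CP

subframe_ms = FRAME_MS / SUBFRAMES_PER_FRAME  # 1.0 ms per subframe
slot_ms = subframe_ms / SLOTS_PER_SUBFRAME    # 0.5 ms per slot
symbols_per_frame = (SUBFRAMES_PER_FRAME
                     * SLOTS_PER_SUBFRAME
                     * SYMBOLS_PER_SLOT)      # 140 symbols per frame

print(subframe_ms, slot_ms, symbols_per_frame)  # 1.0 0.5 140
```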
  • a frame structure is a frame structure in new radio (NR) having the following specifications: multiple subcarrier spacings are supported, each subcarrier spacing corresponding to a respective numerology; the frame structure depends on the numerology, but in any case the frame length is set at 10ms, and consists of ten subframes of 1ms each; a slot is defined as 14 OFDM symbols, and slot length depends upon the numerology.
  • the NR frame structure for normal CP 15 kHz subcarrier spacing ( “numerology 1” ) and the NR frame structure for normal CP 30 kHz subcarrier spacing ( “numerology 2” ) are different. For 15 kHz subcarrier spacing a slot length is 1ms, and for 30 kHz subcarrier spacing a slot length is 0.5ms.
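As an illustration of the numerology-dependent slot length described above, the following sketch computes the slot length from the subcarrier spacing (SCS) for normal CP. The function names are hypothetical; the sketch assumes the slot length simply halves each time the SCS doubles, as in the 15 kHz / 30 kHz example:

```python
# Illustrative: NR-style slot length as a function of SCS, normal CP.
# A slot is 14 OFDM symbols; slot length halves as the SCS doubles.
def slot_length_ms(scs_khz: float) -> float:
    """Slot length in ms: 1 ms at 15 kHz SCS, scaling inversely with SCS."""
    return 1.0 / (scs_khz / 15.0)

def slots_per_10ms_frame(scs_khz: float) -> int:
    """Number of slots in one 10 ms frame for the given SCS."""
    return int(round(10.0 / slot_length_ms(scs_khz)))

print(slot_length_ms(15))        # 1.0 (ms)
print(slot_length_ms(30))        # 0.5 (ms)
print(slots_per_10ms_frame(30))  # 20
```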
  • the NR frame structure may have more flexibility than the LTE frame structure.
  • a frame structure is an example flexible frame structure, e.g. for use in a 6G network or later.
  • a symbol block may be defined as the minimum duration of time that may be scheduled in the flexible frame structure.
  • a symbol block may be a unit of transmission having an optional redundancy portion (CP portion for example) and an information (data for example) portion.
  • An OFDM symbol is an example of a symbol block.
  • a symbol block may alternatively be called a symbol.
  • Embodiments of flexible frame structures include different parameters that may be configurable, such as frame length, subframe length, symbol block length, and so on.
  • a non-exhaustive list of possible configurable parameters in some embodiments of a flexible frame structure include:
  • each frame includes one or multiple downlink synchronization channels and/or one or multiple downlink broadcast channels, and each synchronization channel and/or broadcast channel may be transmitted in a different direction by different beamforming.
  • the frame length may be more than one possible value and configured based on the application scenario. For example, autonomous vehicles may require relatively fast initial access, in which case the frame length may be set as 5ms for autonomous vehicle applications. As another example, smart meters on houses may not require fast initial access, in which case the frame length may be set as 20ms for smart meter applications.
  • a subframe might or might not be defined in the flexible frame structure, depending upon the implementation.
  • a frame may be defined to include slots, but no subframes.
  • the duration of the subframe may be configurable.
  • a subframe may be configured to have a length of 0.1 ms or 0.2 ms or 0.5 ms or 1 ms or 2 ms or 5 ms, etc.
  • the subframe length may be defined to be the same as the frame length or not defined.
  • a slot might or might not be defined in the flexible frame structure, depending upon the implementation. In frames in which a slot is defined, the definition of a slot (in time duration and/or in number of symbol blocks, for example) may be configurable.
  • the slot configuration is common to all UEs or a group of UEs.
  • the slot configuration information may be transmitted to UEs in a broadcast channel or common control channel (s) .
  • the slot configuration may be UE specific, in which case the slot configuration information may be transmitted in a UE-specific control channel.
  • the slot configuration signaling can be transmitted together with frame configuration signaling and/or subframe configuration signaling.
  • the slot configuration can be transmitted independently from the frame configuration signaling and/or subframe configuration signaling.
  • the slot configuration may be system common, base station common, UE group common, or UE specific.
  • SCS is one parameter of a scalable numerology, and may range from 15 kHz to 480 kHz.
  • the SCS may vary with the frequency of the spectrum and/or maximum UE speed to minimize the impact of the Doppler shift and phase noise.
  • there may be separate transmission and reception frames and the SCS of symbols in the reception frame structure may be configured independently from the SCS of symbols in the transmission frame structure.
  • the SCS in a reception frame may be different from the SCS in a transmission frame.
  • the SCS of each transmission frame may be half the SCS of each reception frame.
  • the difference does not necessarily have to scale by a factor of two, e.g. if more flexible symbol durations are implemented using an inverse discrete Fourier transform (IDFT) instead of a fast Fourier transform (FFT).
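To illustrate the SCS scaling described above, the sketch below computes the useful OFDM symbol duration as the reciprocal of the SCS, so that halving the transmission-frame SCS doubles the symbol duration relative to the reception frame. This is illustrative only; the names are hypothetical and CP overhead is ignored:

```python
# Illustrative: useful OFDM symbol duration scales as 1/SCS (CP ignored).
def useful_symbol_duration_us(scs_khz: float) -> float:
    """Useful OFDM symbol duration in microseconds, given SCS in kHz."""
    return 1e3 / scs_khz  # 1/(SCS in Hz) seconds, expressed in microseconds

rx_scs_khz = 30.0              # SCS of symbols in the reception frame
tx_scs_khz = rx_scs_khz / 2.0  # transmission-frame SCS is half, per the example

print(useful_symbol_duration_us(rx_scs_khz))  # ~33.33 us
print(useful_symbol_duration_us(tx_scs_khz))  # ~66.67 us, i.e. twice as long
```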
  • the symbol block length may be adjusted according to: channel condition (for example multi-path delay, Doppler) ; and/or latency requirement; and/or available time duration.
  • a symbol block length may be adjusted to fit an available time duration in the frame.
  • a BWP is a set of contiguous or non-contiguous frequency subcarriers on a carrier, or a set of contiguous or non-contiguous frequency subcarriers on multiple carriers, or a set of non-contiguous or contiguous frequency subcarriers, which may have one or more carriers.
  • a carrier may have one or more BWPs, e.g. a carrier may have a bandwidth of 20 MHz and consist of one BWP, or a carrier may have a bandwidth of 80 MHz and consist of two adjacent contiguous BWPs, etc.
  • a BWP may have one or more carriers, e.g. a BWP may have a bandwidth of 40 MHz and consist of two adjacent contiguous carriers, where each carrier has a bandwidth of 20 MHz.
  • a BWP may comprise non-contiguous spectrum resources consisting of multiple non-contiguous carriers, where the first carrier of the non-contiguous multiple carriers may be in a mmWave band, the second carrier may be in a low band (such as the 2 GHz band), the third carrier (if it exists) may be in a THz band, and the fourth carrier (if it exists) may be in a visible light band.
  • Resources in one carrier which belong to the BWP may be contiguous or non-contiguous.
  • a BWP has non-contiguous spectrum resources on one carrier.
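One way to picture the BWP alternatives above is as a set of frequency segments that may sit on one carrier or span several, contiguous or not. The following sketch is a hypothetical data model for illustration, not an implementation of any standardized structure:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CarrierSegment:
    """A contiguous range of spectrum (in MHz) on one carrier."""
    carrier_id: int
    start_mhz: float
    end_mhz: float

    def width_mhz(self) -> float:
        return self.end_mhz - self.start_mhz

@dataclass
class BWP:
    """A bandwidth part: one or more segments, possibly on multiple carriers,
    and possibly non-contiguous."""
    segments: list

    def total_width_mhz(self) -> float:
        return sum(s.width_mhz() for s in self.segments)

# Example: a 40 MHz BWP spanning two adjacent contiguous 20 MHz carriers.
bwp = BWP([CarrierSegment(0, 3500.0, 3520.0),
           CarrierSegment(1, 3520.0, 3540.0)])
print(bwp.total_width_mhz())  # 40.0
```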
  • Wireless communication may occur over an occupied bandwidth.
  • the occupied bandwidth may be defined as the width of a frequency band such that, below the lower and above the upper frequency limits, the mean powers emitted are each equal to a specified percentage, β/2, of the total mean transmitted power; for example, the value of β/2 may be taken as 0.5%.
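The occupied-bandwidth definition above can be illustrated numerically on a discretized power spectrum: find the band edges such that a fraction β/2 of the total mean power lies below the lower edge and β/2 above the upper edge. The sketch below is illustrative only (the function and its interface are hypothetical):

```python
def occupied_bandwidth(freqs, powers, beta=0.01):
    """Band edges such that beta/2 of the total power lies below the lower
    edge and beta/2 above the upper edge (beta = 0.01 means beta/2 = 0.5%).
    freqs/powers are parallel lists forming a discretized power spectrum."""
    total = sum(powers)
    tail = (beta / 2.0) * total
    # Lower edge: first bin where cumulative power from below exceeds the tail.
    acc, lo = 0.0, 0
    for i, p in enumerate(powers):
        acc += p
        if acc > tail:
            lo = i
            break
    # Upper edge: first bin (from above) where cumulative power exceeds it.
    acc, hi = 0.0, len(powers) - 1
    for j in range(len(powers) - 1, -1, -1):
        acc += powers[j]
        if acc > tail:
            hi = j
            break
    return freqs[lo], freqs[hi]

# Toy spectrum: 1000 bins with power concentrated in the middle 800 bins.
freqs = list(range(1000))
powers = [0.001] * 100 + [1.0] * 800 + [0.001] * 100
lo, hi = occupied_bandwidth(freqs, powers)
```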
  • the carrier, the BWP, or the occupied bandwidth may be signaled by a network device (a base station for example) dynamically, for example in physical layer control signaling such as DCI, or semi-statically, for example in radio resource control (RRC) signaling or in the medium access control (MAC) layer, or be predefined based on the application scenario; or be determined by the UE as a function of other parameters that are known by the UE, or may be fixed, by a standard for example.
  • frame timing and synchronization is established based on synchronization signals, such as a primary synchronization signal (PSS) and a secondary synchronization signal (SSS) .
  • known frame timing and synchronization strategies involve adding a timestamp, for example, (xx0: yy0: zz) , to a frame boundary, where xx0, yy0, zz in the timestamp may represent a time format such as hour, minute, and second, respectively.
  • the present disclosure relates, generally, to mobile, wireless communication and, in particular embodiments, to frame timing alignment/realignment, where the frame timing alignment/realignment may comprise a timing alignment/realignment in terms of a boundary of a symbol, a slot, or a sub-frame within a frame, or of a frame itself (thus the frame timing alignment/realignment here is more general, and is not limited to cases where a timing alignment/realignment is from a frame boundary only).
  • relative timing to a frame or frame boundary should be interpreted in a more general sense, that is, the frame boundary means a timing point of a frame element within the frame, such as (the starting or ending of) a symbol, a slot, or a subframe within a frame, or of a frame.
  • the phrases “ (frame) timing alignment or timing realignment” and “relative timing to a frame boundary” are used in the more general sense described above.
  • a network device such as a base station 170, referenced hereinafter as a TRP 170, may transmit signaling that carries a timing realignment indication message.
  • the timing realignment indication message includes information allowing a receiving UE 110 to determine a timing reference point.
  • transmission of frames by the UE 110 may be aligned.
  • the frames that become aligned may be in different sub-bands of one carrier frequency band.
  • the frames that become aligned may instead be found in neighboring carrier frequency bands.
  • one or more types of signaling may be used to indicate the timing realignment (or/and timing correction) message.
  • Two example types of signaling are provided here to show the schemes.
  • the first example type of signaling may be referenced as cell-specific signaling, examples of which include group common signaling and broadcast signaling.
  • the second example type of signaling may be referenced as UE-specific signaling.
  • One of these two types of signaling or a combination of the two types of signaling may be used to transmit a timing realignment indication message.
  • the timing realignment indication message may serve to notify one or more UEs 110 of a configuration of a timing reference point.
  • references, hereinafter, to the term “UE 110” may be understood to represent reference to a broad class of generic wireless communication devices within a cell (a network receiving node, such as a wireless device, a sensor, a gateway, a router, and so on) , that is, being served by the TRP 170.
  • a timing reference point is a timing reference instant and may be expressed in terms of a relative timing, in view of a timing point in a frame, such as (starting or ending boundary of) a symbol, a slot or a sub-frame within a frame; or a frame.
  • a frame boundary is used to represent a boundary of possibly a symbol, a slot or a sub-frame within a frame; or a frame.
  • the timing reference point may be expressed in terms of a relative timing, in view of a current frame boundary, for example, the start of the current frame.
  • the timing reference point may be expressed in terms of an absolute timing based on certain standards timing reference such as a GNSS (GPS for example) , Coordinated Universal Time ( “UTC” ) , and so on.
  • the timing reference point may allow for timing adjustments to be implemented at the UEs 110.
  • the timing adjustments may be implemented for improvement of accuracy for a clock at the UE 110.
  • the timing reference point may allow for adjustments to be implemented in future transmissions made from the UEs 110.
  • the adjustments may cause realignment of transmitted frames at the timing reference point.
  • the realignment of transmitted frames at the timing reference point may comprise the timing realignment from (the starting boundary of) a symbol, a slot, or a sub-frame within a frame, or a frame, at the timing reference point for one or more UEs and one or more BSs (in a cell or a group of cells); this applies throughout the application below.
  • the UE 110 may monitor for the timing realignment indication message. Responsive to receiving the timing realignment indication message, the UE 110 may obtain the timing reference point and take steps to cause frame realignment at the timing reference point. Those steps may, for example, include commencing transmission of a subsequent frame at the timing reference point.
  • the UE 110 may cause the TRP 170 to transmit the timing realignment indication message by transmitting, to the TRP 170, a request for a timing realignment, that is, a timing realignment request message.
  • the TRP 170 may transmit, to the UE 110, a timing realignment indication message including information on a timing reference point, thereby allowing the UE 110 to implement a timing realignment (or/and a timing adjustment including clock timing error correction) , wherein the timing realignment is in terms of (for example a starting boundary of) a symbol, a slot or a sub-frame within a frame; or a frame for UEs and base station (s) in a cell (or a group of cells) .
  • a TRP 170 associated with a given cell may transmit a timing realignment indication message.
  • the timing realignment indication message may include enough information to allow a receiver of the message to obtain a timing reference point.
  • the timing reference point may be used, by one or more UEs 110 in the given cell, when performing a timing realignment (or/and a timing adjustment including clock timing error correction) .
  • the timing reference point may be expressed, within the timing realignment indication message, relative to a frame boundary (where, as previously described and as applicable throughout the application below, a frame boundary can be a boundary of a symbol, a slot, or a sub-frame within a frame; or of a frame).
  • the timing realignment indication message may include a relative timing indication, Δt. The relative timing indication, Δt, expresses the timing reference point as occurring a particular duration, Δt, subsequent to a frame boundary of a given frame. Since the frame boundary is what allows the UE 110 to determine the timing reference point, the UE 110 must be aware of the given frame that has the frame boundary of interest. Accordingly, the timing realignment indication message may also include a system frame number (SFN) for the given frame.
  • the SFN is a value in the range from 0 to 1023, inclusive. Accordingly, 10 bits may be used to represent an SFN.
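As an illustration of how a UE might combine the SFN and the relative timing indication Δt to locate the timing reference point, the sketch below assumes 10 ms frames and modulo-1024 SFN wraparound (the function and its interface are hypothetical, not drawn from the disclosure):

```python
FRAME_MS = 10.0
SFN_BITS = 10
SFN_MOD = 1 << SFN_BITS  # SFN values range 0..1023 and wrap modulo 1024

def timing_reference_point_ms(current_sfn: int, target_sfn: int,
                              delta_t_ms: float) -> float:
    """Time from the current frame boundary until the timing reference point:
    the boundary of the frame with SFN `target_sfn`, plus the offset Δt."""
    frames_ahead = (target_sfn - current_sfn) % SFN_MOD  # handles wraparound
    return frames_ahead * FRAME_MS + delta_t_ms

# Example: current SFN is 1020; the message indicates SFN 2 with Δt = 3 ms.
# The SFN wraps 1020 -> 1023 -> 0 -> 2, i.e. 6 frames ahead.
print(timing_reference_point_ms(1020, 2, 3.0))  # 63.0 (ms)
```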
  • the timing realignment indication message may include other parameters.
  • the other parameters may, for example, include a minimum time offset.
  • the minimum time offset may establish a duration of time preceding the timing reference point.
  • the UE 110 may rely upon the minimum time offset as an indication that DL signaling, including the timing realignment indication message, will allow the UE 110 enough time to detect the timing realignment indication message and obtain information on the timing reference point.
  • UE position information is often used in cellular communication networks to improve various performance metrics for the network.
  • performance metrics may, for example, include capacity, agility, and efficiency.
  • the improvement may be achieved when elements of the network exploit the position, the behavior, the mobility pattern, and so on, of the UE in the context of a priori information describing a wireless environment in which the UE is operating.
  • a sensing system may be used to help gather UE pose information, including its location in a global coordinate system, its velocity and direction of movement in the global coordinate system, orientation information, and the information about the wireless environment. “Location” is also known as “position” and these two terms may be used interchangeably herein. Examples of well-known sensing systems include RADAR (Radio Detection and Ranging) and LIDAR (Light Detection and Ranging) . While the sensing system can be separate from the communication system, it could be advantageous to gather the information using an integrated system, which reduces the hardware (and cost) in the system as well as the time, frequency, or spatial resources needed to perform both functionalities.
  • the difficulty of the problem relates to factors such as the limited resolution of the communication system, the dynamicity of the environment, and the huge number of objects whose electromagnetic properties and position are to be estimated.
  • integrated sensing and communication is also known as integrated communication and sensing.
  • integrated communication and sensing is a desirable feature in existing and future communication systems.
  • Sensing Node and Sensing Management Function
  • sensing nodes are network entities that perform sensing by transmitting and receiving sensing signals. Some sensing nodes are communication equipment that perform both communications and sensing. However, it is possible that some sensing nodes do not perform communications, and are instead dedicated to sensing.
  • the sensing agent 174 (Fig. 9) is an example of a sensing node that is dedicated to sensing. Unlike the EDs 110 and BS 170, the sensing agent 174 does not transmit or receive communication signals. However, the sensing agent 174 may communicate configuration information, sensing information, signaling information, or other information within the communication system 100.
  • the sensing agent 174 may be in communication with the core network 130 to communicate information with the rest of the communication system 900.
  • the sensing agent 174 may determine the location of the ED 110a, and transmit this information to the base station 170a via the core network 130.
  • any number of sensing agents may be implemented in the communication system 900.
  • one or more sensing agents may be implemented at one or more of the RANs 120.
  • a sensing node may combine sensing-based techniques with reference signal-based techniques to enhance UE pose determination.
  • This type of sensing node may also be known as a sensing management function (SMF) .
  • the SMF may also be known as a location management function (LMF) .
  • the SMF may be implemented as a physically independent entity located at the core network 130 with connection to the multiple BSs 170.
  • the SMF may be implemented as a logical entity co-located inside a BS 170 through logic carried out by the processor 260.
  • the SMF 176 when implemented as a physically independent entity, includes at least one processor 290, at least one transmitter 282, at least one receiver 284, one or more antennas 286, and at least one memory 288.
  • a transceiver not shown, may be used instead of the transmitter 282 and receiver 284.
  • a scheduler 283 may be coupled to the processor 290. The scheduler 283 may be included within or operated separately from the SMF 176.
  • the processor 290 implements various processing operations of the SMF 176, such as signal coding, data processing, power control, input/output processing, or any other functionality.
  • the processor 290 can also be configured to implement some or all of the functionality and/or embodiments described in more detail above.
  • Each processor 290 includes any suitable processing or computing device configured to perform one or more operations.
  • Each processor 290 could, for example, include a microprocessor, microcontroller, digital signal processor, field programmable gate array, or application specific integrated circuit.
  • a reference signal-based pose determination technique belongs to an “active” pose estimation paradigm.
  • the enquirer of pose information (the UE) takes part in the process of determining the pose of the enquirer.
  • the enquirer may transmit or receive (or both) a signal specific to the pose determination process.
  • Positioning techniques based on a global navigation satellite system (GNSS) such as Global Positioning System (GPS) are other examples of the active pose estimation paradigm.
  • a sensing technique based on radar, for example, may be considered as belonging to a “passive” pose determination paradigm.
  • in a passive pose determination paradigm, the target is oblivious to the pose determination process.
  • by integrating sensing and communications in one system, the system need not operate according to only a single paradigm. Thus, the combination of sensing-based techniques and reference signal-based techniques can yield enhanced pose determination.
  • the enhanced pose determination may, for example, include obtaining UE channel sub-space information, which is particularly useful for UE channel reconstruction at the sensing node, especially for a beam-based operation and communication.
  • the UE channel sub-space is a subset of the entire algebraic space, defined over the spatial domain, in which the entire channel from the TP to the UE lies. Accordingly, the UE channel sub-space defines the TP-to-UE channel with very high accuracy.
  • the signals transmitted over other sub-spaces result in a negligible contribution to the UE channel.
  • Knowledge of the UE channel sub-space helps to reduce the effort needed for channel measurement at the UE and channel reconstruction at the network-side. Therefore, the combination of sensing-based techniques and reference signal-based techniques may enable the UE channel reconstruction with much less overhead as compared to traditional methods.
  • Sub-space information can also facilitate sub-space based sensing to reduce sensing complexity and improve sensing accuracy.
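The overhead reduction described above can be sketched numerically (an illustration only, not part of the disclosed embodiments; the 4-element channel, the 2-dimensional basis, and all values are assumed for illustration). A channel lying almost entirely in a low-dimensional sub-space can be described by only its sub-space coefficients, at the cost of a small reconstruction error:

```python
# Toy 4-dimensional spatial channel whose energy lies almost entirely
# in a 2-dimensional sub-space spanned by an orthonormal basis.
basis = [[1, 0, 0, 0], [0, 1, 0, 0]]          # orthonormal sub-space basis
channel = [0.8, 0.6, 0.01, -0.02]             # nearly contained in the basis

# Measure only the sub-space coefficients (projections onto the basis).
coeffs = [sum(b_i * h_i for b_i, h_i in zip(b, channel)) for b in basis]

# Reconstruct the channel from the coefficients and compute the error.
reconstructed = [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(4)]
error = sum((h - r) ** 2 for h, r in zip(channel, reconstructed)) ** 0.5
print(len(coeffs), round(error, 3))  # 2 coefficients, error ~0.022
```

Here two coefficients suffice instead of four full channel measurements, mirroring the reduced channel measurement and reconstruction effort described above.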
  • a same radio access technology is used for sensing and communication. This avoids the need to multiplex two different RATs under one carrier spectrum, or to use two different carrier spectrums for the two different RATs.
  • a first set of channels may be used to transmit a sensing signal
  • a second set of channels may be used to transmit a communications signal.
  • each channel in the first set of channels and each channel in the second set of channels is a logical channel, a transport channel, or a physical channel.
  • communication and sensing may be performed via separate physical channels.
  • a first physical downlink shared channel PDSCH-C is defined for data communication
  • a second physical downlink shared channel PDSCH-S is defined for sensing.
  • separate physical uplink shared channels (PUSCH) , PUSCH-C and PUSCH-S could be defined for uplink communication and sensing.
  • control channel (s) and data channel (s) for sensing can have the same or different channel structure (format) , and can occupy the same or different frequency bands or bandwidth parts.
  • a common physical downlink control channel (PDCCH) and a common physical uplink control channel (PUCCH) may be used to carry control information for both sensing and communication.
  • separate physical layer control channels may be used to carry separate control information for communication and sensing.
  • PUCCH-S and PUCCH-C could be used for uplink control for sensing and communication respectively, and PDCCH-S and PDCCH-C for downlink control for sensing and communication respectively.
  • RADAR originates from the phrase Radio Detection and Ranging; however, expressions with different forms of capitalization (Radar and radar) are equally valid and now more common. Radar is typically used for detecting a presence and a location of an object.
  • a radar system radiates radio frequency energy and receives echoes of the energy reflected from one or more targets. The system determines the pose of a given target based on the echoes returned from the given target.
  • the radiated energy can be in the form of an energy pulse or a continuous wave, which can be expressed or defined by a particular waveform. Examples of waveforms used in radar include frequency modulated continuous wave (FMCW) and ultra-wideband (UWB) waveforms.
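The echo-based ranging described above can be sketched as follows (an illustration only, not part of the disclosed embodiments). For a monostatic radar, the range R of a target follows from the round-trip delay τ of the echo as R = cτ/2, since the energy travels to the target and back:

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_delay(round_trip_delay_s: float) -> float:
    """Monostatic radar: the echo travels to the target and back,
    so the one-way range is half the round-trip distance."""
    return C * round_trip_delay_s / 2.0

# A 1 microsecond round-trip delay corresponds to roughly 150 m.
print(range_from_delay(1e-6))
```

Bi-static and multi-static geometries require more elaborate computations (the delay then traces an ellipse with the transmitter and receiver at the foci), but the same delay-to-distance principle applies.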
  • Radar systems can be monostatic, bi-static, or multi-static.
  • in a monostatic radar system, the radar signal transmitter and receiver are co-located, such as being integrated in a transceiver.
  • in a bi-static radar system, the transmitter and receiver are spatially separated, and the distance of separation is comparable to, or larger than, the expected target distance (often referred to as the range) .
  • in a multi-static radar system, two or more radar components are spatially diverse but with a shared area of coverage.
  • a multi-static radar is also referred to as a multisite or netted radar.
  • Terrestrial radar applications encounter challenges such as multipath propagation and shadowing impairments. Another challenge is the problem of identifiability because terrestrial targets have similar physical attributes. Integrating sensing into a communication system is likely to suffer from these same challenges, and more.
  • Communication nodes can be either half-duplex or full-duplex.
  • a half-duplex node cannot both transmit and receive using the same physical resources (time, frequency, and so on) ; conversely, a full-duplex node can transmit and receive using the same physical resources.
  • Existing commercial wireless communications networks are all half-duplex. Even if full-duplex communications networks become practical in the future, it is expected that at least some of the nodes in the network will still be half-duplex nodes because half-duplex devices are less complex, and have lower cost and lower power consumption. In particular, full-duplex implementation is more challenging at higher frequencies (in the millimeter wave bands for example) , and very challenging for small and low-cost devices, such as femtocell base stations and UEs.
  • the presence of half-duplex nodes in the communications network presents further challenges toward integrating sensing and communications into the devices and systems of the communications network.
  • both half-duplex and full-duplex nodes can perform bi-static or multi-static sensing, but monostatic sensing typically requires that the sensing node have full-duplex capability.
  • a half-duplex node may perform monostatic sensing with certain limitations, such as in a pulsed radar with a specific duty cycle and ranging capability.
  • Properties of a sensing signal include the waveform of the signal and the frame structure of the signal.
  • the frame structure defines the time-domain boundaries of the signal.
  • the waveform describes the shape of the signal as a function of time and frequency. Examples of waveforms that can be used for a sensing signal include ultra-wide band (UWB) pulse, Frequency-Modulated Continuous Wave (FMCW) or “chirp” , orthogonal frequency-division multiplexing (OFDM) , cyclic prefix (CP) -OFDM, and Discrete Fourier Transform spread (DFT-s) -OFDM.
  • the sensing signal is a linear chirp signal with bandwidth B and time duration T.
  • a linear chirp signal is generally known from its use in FMCW radar systems.
  • Such a linear chirp signal can be presented in the baseband representation.
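One common baseband convention for such a chirp (an assumption for illustration; other conventions center the sweep around zero frequency) is s(t) = exp(jπ(B/T)t²) for t in [0, T), whose instantaneous frequency f(t) = (B/T)·t sweeps from 0 to B over the duration T. A minimal sampled version:

```python
import cmath

def linear_chirp(B: float, T: float, n_samples: int):
    """Sampled baseband linear chirp s(t) = exp(j*pi*(B/T)*t^2), t in [0, T).
    The chirp rate k = B/T (Hz per second) makes the instantaneous
    frequency f(t) = k*t sweep linearly from 0 to B."""
    k = B / T
    dt = T / n_samples
    return [cmath.exp(1j * cmath.pi * k * (n * dt) ** 2) for n in range(n_samples)]

samples = linear_chirp(B=1e6, T=1e-3, n_samples=8)
# Every baseband sample has unit magnitude; only the phase varies.
print(all(abs(abs(s) - 1.0) < 1e-9 for s in samples))
```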
  • Precoding as used herein may refer to any coding operation (s) or modulation (s) that transform an input signal into an output signal. Precoding may be performed in different domains, and typically transforms an input signal in a first domain into an output signal in a second domain. Precoding may include linear operations.
  • a terrestrial communication system may also be referred to as a land-based or ground-based communication system, although a terrestrial communication system can also, or instead, be implemented on or in water.
  • the non-terrestrial communication system may bridge coverage gaps for underserved areas by extending the coverage of cellular networks through non-terrestrial nodes, which will be key to ensuring global seamless coverage and providing mobile broadband services to unserved/underserved regions. In such regions, for example oceans, mountains, forests, or other remote areas, it is hardly possible to implement terrestrial access-point/base-station infrastructure.
  • the terrestrial communication system may be a wireless communication system using 5G technology and/or later generation wireless technology (for example, 6G or later) .
  • the terrestrial communication system may also accommodate some legacy wireless technology (for example, 3G or 4G wireless technology) .
  • the non-terrestrial communication system may be a communication system using satellite constellations, such as: conventional geostationary orbit (GEO) satellites, which broadcast public/popular contents to a local server; low earth orbit (LEO) satellites, which establish a better balance between large coverage area and propagation path-loss/delay; satellites in very low earth orbits (VLEO) , whose enabling technologies substantially reduce the costs of launching satellites to lower orbits; high altitude platforms (HAPs) , which provide a low path-loss air interface for users with a limited power budget; or unmanned aerial vehicles (UAVs) (or unmanned aerial systems (UAS) ) , such as airborne platforms, balloons, quadcopters, and drones, which can achieve a dense deployment since their coverage can be limited to a local area.
  • networks comprising GEO satellites, LEO satellites, UAVs, HAPs and VLEOs may be horizontal and two-dimensional.
  • with UAVs, HAPs and VLEOs coupled to integrate satellite communications into cellular networks, emerging 3D vertical networks consist of many moving (other than geostationary satellites) and high altitude access points such as UAVs, HAPs and VLEOs.
  • MIMO Multiple input multiple-output
  • the above ED 110 and T-TRP 170, and/or NT-TRP 172 may use MIMO to communicate over the wireless resource blocks.
  • MIMO utilizes multiple antennas at the transmitter and/or receiver to transmit wireless resource blocks over parallel wireless signals.
  • MIMO may beamform parallel wireless signals for reliable multipath transmission of a wireless resource block.
  • MIMO may bond parallel wireless signals that transport different data to increase the data rate of the wireless resource block.
  • the T-TRP 170, and/or NT-TRP 172 is generally configured with more than ten antenna units (such as 128 or 256) , and serves dozens of EDs 110 (such as 40 EDs) at the same time.
  • a large number of antenna units of the T-TRP 170, and NT-TRP 172 can greatly increase the degree of spatial freedom of wireless communication, greatly improve the transmission rate, spectrum efficiency and power efficiency, and eliminate the interference between cells to a large extent.
  • the large number of antenna units allows each antenna unit to be made in a smaller size with a lower cost.
  • the T-TRP 170, and NT-TRP 172 of each cell can communicate with many ED 110 in the cell on the same time-frequency resource at the same time, thus greatly increasing the spectrum efficiency.
  • a large number of antenna units of the T-TRP 170, and/or NT-TRP 172 also enable each user to have better spatial directivity for uplink and downlink transmission, so that the transmitting power of the T-TRP 170, and/or NT-TRP 172 and a ED 110 is obviously reduced, and the power efficiency is greatly increased.
  • a MIMO system may include a receiver connected to a receive (Rx) antenna, a transmitter connected to a transmit (Tx) antenna, and a signal processor connected to the transmitter and the receiver.
  • Each of the Rx antenna and the Tx antenna may include a plurality of antennas.
  • the Rx antenna may have a uniform linear array (ULA) in which the plurality of antennas are arranged in a line at even intervals.
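The spatial directivity of such an array can be sketched with the standard ULA steering vector (an illustration only, not part of the disclosed embodiments; the element count, spacing, and angle are assumed). With element spacing d and wavelength λ, element n sees a phase offset of 2πn(d/λ)sinθ toward angle θ from broadside, and conjugate-matched weights give the full array gain:

```python
import cmath, math

def ula_steering(n_ant: int, d_over_lambda: float, theta_rad: float):
    """Steering vector of a uniform linear array (ULA) with element spacing d
    (in wavelengths) toward angle theta measured from broadside."""
    phase = 2.0 * math.pi * d_over_lambda * math.sin(theta_rad)
    return [cmath.exp(1j * n * phase) for n in range(n_ant)]

def beamforming_gain(weights, steering):
    """Array response power |w^H a|^2; conjugate-matched weights yield N^2."""
    resp = sum(w.conjugate() * a for w, a in zip(weights, steering))
    return abs(resp) ** 2

a = ula_steering(n_ant=8, d_over_lambda=0.5, theta_rad=0.3)
print(beamforming_gain(a, a))  # matched beam toward theta: gain N^2 = 64
```

This is the mechanism behind the improved spatial directivity and reduced transmit power noted for large antenna arrays above.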
  • a non-exhaustive list of possible units or possible configurable parameters of a MIMO system, in some embodiments, includes:
  • panel: a unit of an antenna group, or antenna array, or antenna sub-array, which can control its Tx or Rx beam independently.
  • a beam is formed by performing amplitude and/or phase weighting on data transmitted or received by at least one antenna port, or may be formed by using another method, for example, adjusting a related parameter of an antenna unit.
  • the beam may include a Tx beam and/or a Rx beam.
  • the transmit beam indicates the distribution of signal strength formed in different directions in space after a signal is transmitted through an antenna.
  • the receive beam indicates the distribution, in different directions in space, of the strength of a wireless signal received from an antenna.
  • the beam information may be a beam identifier, or antenna port (s) identifier, or CSI-RS resource identifier, or SSB resource identifier, or SRS resource identifier, or other reference signal resource identifier.
  • Artificial Intelligence technologies can be applied in communication, including artificial intelligence or machine learning (AI/ML) based communication in the physical layer and/or AI/ML based communication in the higher layer, for example medium access control (MAC) layer.
  • the AI/ML based communication may aim to optimize component design and/or improve the algorithm performance.
  • the AI/ML based communication may aim to utilize the AI/ML capability for learning, prediction, and/or making a decision to solve a complicated optimization problem with possible better strategy and/or optimal solution, for example to optimize the functionality in the MAC layer, such as intelligent TRP management, intelligent beam management, intelligent channel resource allocation, intelligent power control, intelligent spectrum utilization, intelligent modulation and coding scheme (MCS) , intelligent hybrid automatic repeat request (HARQ) strategy, intelligent transmit/receive (Tx/Rx) mode adaption, and so on.
  • Data is a very important component of AI/ML techniques.
  • Data collection is a process of collecting data by the network nodes, management entity, or UE for the purpose of AI/ML model training, data analytics and inference.
  • AI/ML model training is a process of training an AI/ML model by learning the input/output relationship in a data-driven manner, to obtain the trained AI/ML model for inference.
  • AI/ML model inference is a process of using a trained AI/ML model to produce a set of outputs based on a set of inputs.
  • validation is used to evaluate the quality of an AI/ML model using a dataset different from the one used for model training. Validation can help select model parameters that generalize beyond the dataset used for model training. The model parameters obtained after training can be adjusted further by the validation process.
  • testing is also a sub-process of training, and it is used to evaluate the performance of a final AI/ML model using a dataset different from the ones used for model training and validation. Differently from AI/ML model validation, testing does not assume subsequent tuning of the model.
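The training/validation/testing roles above rely on disjoint datasets, which can be sketched as follows (an illustration only, not part of the disclosed embodiments; the 70/15/15 fractions and the seed are assumed for illustration):

```python
import random

def split_dataset(data, train_frac=0.7, val_frac=0.15, seed=0):
    """Disjoint training/validation/test split: the validation set guides
    parameter selection after training; the test set evaluates the final
    model only, with no subsequent tuning of the model."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(list(range(100)))
print(len(train), len(val), len(test))  # 70 15 15
```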
  • Online training means an AI/ML training process where the model being used for inference is typically continuously trained in (near) real-time with the arrival of new training samples.
  • Offline training means an AI/ML training process where the model is trained based on a collected dataset, and where the trained model is later used or delivered for inference.
  • the lifecycle management (LCM) of AI/ML models is essential for sustainable operation of AI/ML in NR air-interface.
  • Life cycle management covers the whole procedure of AI/ML technologies applied on one or more nodes.
  • it includes at least one of the following sub-processes: data collection, model training, model identification, model registration, model deployment, model configuration, model inference, model selection, model activation, deactivation, model switching, model fallback, model monitoring, model update, model transfer/delivery and UE capability report.
  • Model monitoring can be based on inference accuracy, including metrics related to intermediate key performance indicator (KPI) s, and it can also be based on system performance, including metrics related to system performance KPIs, such as accuracy and relevance, overhead, complexity (computation and memory cost) , latency (timeliness of monitoring result, from model failure to action) and power consumption.
  • data distribution may shift after deployment due to environment changes; thus, monitoring based on input or output data distribution should also be considered.
  • the goal of supervised learning algorithms is to train a model that maps feature vectors (inputs) to labels (outputs) , based on training data which includes example feature-label pairs.
  • the supervised learning can analyze the training data and produce an inferred function, which can be used for mapping the inference data.
  • Supervised learning can be further divided into two types: Classification and Regression.
  • Classification is used when the output of the AI/ML model is categorical, with two or more classes.
  • Regression is used when the output of the AI/ML model is a real or continuous value.
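The classification/regression distinction can be sketched with a single toy predictor (an illustration only, not part of the disclosed embodiments; the nearest-neighbour rule, feature values, and labels are assumed). The same rule serves both cases: categorical labels make it classification, real-valued labels make it regression:

```python
def nearest_neighbour(train_pairs, x):
    """Predict the label of the training example whose (scalar) feature
    is closest to the query x — a minimal supervised inferred function."""
    return min(train_pairs, key=lambda p: abs(p[0] - x))[1]

classes = [(0.1, "low"), (0.9, "high")]   # categorical labels -> classification
values = [(0.1, 12.0), (0.9, 88.0)]       # real-valued labels -> regression
print(nearest_neighbour(classes, 0.2))    # "low"
print(nearest_neighbour(values, 0.8))     # 88.0
```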
  • unsupervised methods learn concise representations of the input data without labelled data, which can be used for data exploration or to analyze or generate new data.
  • one typical unsupervised learning method is clustering, which explores the hidden structure of input data and provides classification results for the data.
  • Reinforcement learning is used to solve sequential decision-making problems.
  • Reinforcement learning is a process of training the action of an intelligent agent from an input (state) and a feedback signal (reward) in an environment.
  • an intelligent agent interacts with an environment by taking an action to maximize the cumulative reward. Whenever the intelligent agent takes one action, the current state in the environment may transfer to a new state, and the new state resulting from the action will bring an associated reward. Then the intelligent agent can take the next action based on the received reward and the new state in the environment.
  • the agent interacts with the environment to collect experience. The environment is often mimicked by a simulator, since it is expensive to directly interact with the real system.
  • the agent can use the optimal decision-making rule learned from the training phase to achieve the maximal accumulated reward.
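The state/action/reward loop above can be sketched as follows (an illustration only, not part of the disclosed embodiments; the environment, reward shape, and trivial policy are assumed, standing in for a learned decision-making rule):

```python
def step(state, action, target=5):
    """Environment transition: action in {-1, +1} moves the state; the
    reward is higher (closer to 0) the nearer the new state is to target."""
    new_state = state + action
    reward = -abs(target - new_state)
    return new_state, reward

state, total_reward = 0, 0
for _ in range(10):
    # A trivial stand-in policy: move toward the target state.
    action = 1 if state < 5 else -1
    state, reward = step(state, action)
    total_reward += reward  # the agent accumulates reward over the episode

print(state)  # the agent reaches and then stays near the target state
```

A trained agent replaces the hard-coded policy with the decision-making rule learned from experience, chosen to maximize the accumulated reward.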
  • Federated learning is a machine learning technique that is used to train an AI/ML model by a central node (such as a server) and a plurality of decentralized edge nodes (for example UEs, next Generation NodeBs, “gNBs” ) .
  • a central node such as a server
  • gNBs next Generation NodeBs
  • a server may provide, to an edge node, a set of model parameters (weights, biases, gradients for example) that describe a global AI/ML model.
  • the edge node may initialize a local AI/ML model with the received global AI/ML model parameters.
  • the edge node may then train the local AI/ML model using local data samples to, thereby, produce a trained local AI/ML model.
  • the edge node may then provide, to the server, a set of AI/ML model parameters that describe the local AI/ML model.
  • the server may aggregate the local AI/ML model parameters reported from the plurality of UEs and, based on such aggregation, update the global AI/ML model. A subsequent iteration progresses much like the first iteration.
  • the server may transmit the aggregated global model to a plurality of edge nodes. The above procedure is performed for multiple iterations until the global AI/ML model is considered to be finalized, for example when the AI/ML model has converged or the training stopping conditions are satisfied.
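The iterative procedure above can be sketched with a toy federated-averaging round (an illustration only, not part of the disclosed embodiments; the scalar model, one-step local training, and unweighted aggregation are assumptions — practical federated averaging typically weights nodes by local dataset size):

```python
def local_update(global_params, local_data, lr=0.1):
    """Toy local training at an edge node: one gradient step of a scalar
    mean-squared-error model, pulling the parameter toward mean(local_data)."""
    grad = sum(global_params - x for x in local_data) / len(local_data)
    return global_params - lr * grad

def federated_round(global_params, datasets):
    """Server side: distribute the global parameter, collect the locally
    trained parameters, and aggregate them (unweighted average here)."""
    local_params = [local_update(global_params, d) for d in datasets]
    return sum(local_params) / len(local_params)

theta = 0.0
for _ in range(200):  # repeat until convergence / stopping condition
    theta = federated_round(theta, [[1.0, 2.0], [3.0], [4.0, 5.0]])
print(round(theta, 2))  # converges near the average of the per-node means
```

Note that only model parameters, never the local data samples, cross the edge-to-server link, which is the privacy property motivating federated learning.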
  • AI technologies may be applied in communication, including AI-based communication in the physical layer and/or AI-based communication in the MAC layer.
  • the AI communication may aim to optimize component design and/or improve the algorithm performance.
  • AI may be applied in relation to the implementation of: channel coding, channel modelling, channel estimation, channel decoding, modulation, demodulation, MIMO, waveform, multiple access, physical layer element parameter optimization and update, beam forming, tracking, sensing, and/or positioning, and so on.
  • the AI communication may aim to utilize the AI capability for learning, prediction, and/or making a decision to solve a complicated optimization problem with possible better strategy and/or optimal solution, for example to optimize the functionality in the MAC layer.
  • AI may be applied to implement: intelligent TRP management, intelligent beam management, intelligent channel resource allocation, intelligent power control, intelligent spectrum utilization, intelligent MCS, intelligent HARQ strategy, and/or intelligent transmission/reception mode adaption, and so on.
  • An AI architecture may involve multiple nodes, where the multiple nodes may possibly be organized in one of two modes (centralized and distributed) , both of which may be deployed in an access network, a core network, or an edge computing system or third party network.
  • a centralized training and computing architecture is restricted by possibly large communication overhead and strict user data privacy.
  • a distributed training and computing architecture may comprise several frameworks, for example distributed machine learning and federated learning.
  • an AI architecture may comprise an intelligent controller which can perform as a single agent or a multi-agent, based on joint optimization or individual optimization. New protocols and signaling mechanisms are desired so that the corresponding interface link can be personalized with customized parameters to meet particular requirements while minimizing signaling overhead and maximizing the whole system spectrum efficiency by personalized AI technologies.
  • New protocols and signaling mechanisms are provided for operating within and switching between different modes of operation, including between AI and non-AI modes, and for measurement and feedback to accommodate the different possible measurements and information that may need to be fed back, depending upon the implementation.
  • an air interface that uses AI as part of the implementation, for example to optimize one or more components of the air interface, will be referred to herein as an “AI enabled air interface” .
  • there may be two types of AI operation in an AI enabled air interface: both the network and the UE implement learning; or learning is only applied by the network.
  • Figs. 1-4 show sensing functional frameworks and procedures
  • Fig. 11 illustrates another example of a sensing functional framework and procedures.
  • the example in Fig. 11 includes many of the same or similar parts and features as the example in Fig. 1, but is provided to illustrate and explain features that may be supported in some embodiments.
  • the Sensing Modelling function can output sensing results, such as information about sensed objects, a reconstructed physical environment, or a reconstructed RF map, for example.
  • the Sensing Modelling function can be enabled by AI, for example by using AI to derive sensing results.
  • the Sensing Modelling function may be implemented with or without AI, and accordingly in Fig. 11 one output from the Sensing Data collection is shown as sensing data for modelling rather than Training data as in Fig. 1.
  • the reference to the sensing data being “for modelling” in this instance is referring to the Sensing Modelling function.
  • the Sensing Modelling function may, but need not necessarily, generate an AI model (or other model) for prediction by the Sensing Application function.
  • a model is just one example of sensing results that may be generated by the Sensing Modelling function.
  • sensing results may be provided to a core network and/or a 3rd party, as one way to provide a sensing service.
  • an output of sensing modeling which may include partial sensing results for example, can be delivered to a core network or a 3rd party.
  • the Sensing Modelling function may generate sensing results including moving target information and a static environment map. The Sensing Modelling function could send the static environment map, as a partial sensing result, to the core network and/or the 3rd party.
  • the sensing framework may be implemented in a RAN, so as to provide a sensing service by the RAN with reporting of sensing results to a core network or 3rd party.
  • Sensing as disclosed herein may be applied in any of various ways, to any of a wide variety of applications.
  • the Sensing Modelling function may generate sensing results in the form of an operating environment map for example, and the Sensing Application function may then use the environment map to assist communications, by performing beam prediction for example.
  • the data for the Sensing Application function (also referred to herein as Action data, but shown in Fig. 11 as “Data for application” ) may be a reference signal (RS) with low density for beam management, and potential benefits of sensing in this example may include reducing the RS overhead by allowing a lower density of RS signaling.
  • the Sensing Modelling function may again generate sensing results, in the form of an environment map for example, but it generates multiple sensing results. Multiple sensing results (or sets of sensing results) may be multiple sensing models, for example one model for a static environment and one model for a moving objects environment.
  • the Sensing Management function may then indicate, to the Sensing Results Storage function and/or to the Sensing Application function, which model the Sensing Application function is to use.
  • a sensing service is referenced at least above, and is another example application of sensing.
  • the Sensing Modelling function may generate sensing results (whether there is an object and object information, which would be intruder information for intruder detection, for example) based on received input data.
  • the sensing results can be provided to a core network or 3rd party as shown.
  • the sensing service procedure may end here.
  • the sensing results may also be stored. Object information from such detection (location, shape, and so on) may be used for other purposes, to assist communications for example.
  • the Sensing Application function can use the sensing results for beam management, in which case the Data for application in Fig. 11 may be the RS for beam management as in another example above.
  • the Sensing Modelling function determines whether there is an object, and the object information.
  • Object detection may instead be a feature of the Sensing Application function, and the Sensing Application function determines whether there is an object.
  • the Sensing Modelling function generates a sensing model for object data (according to received input data based on one or more sensing signals to determine object information) , and the Sensing Application function uses the model for object detection.
  • any of embodiments 1 to 4 may implement a Sensing Modelling function with or without AI, provide a sensing service with reporting of sensing results to a core network or 3rd party, and/or support sensing for any of a variety of purposes or applications.
  • Fig. 12 is a block diagram illustrating a sensing system according to an embodiment.
  • the sensing framework parts and their operation as described primarily above are examples of the sensing system elements in Fig. 12 and their operation.
  • the example sensing system 1200 includes a data collector 1210, a sensing result generator 1212, a storage subsystem 1214, an output generator 1216, and a sensing manager 1218, interconnected as shown.
  • Other embodiments may include additional, fewer, and/or different elements, interconnected together in a similar or different way.
  • a sensing system element or a component thereof such as a processor may be configured or otherwise operable to perform (or for performing) , or programming may include instructions to perform (or for performing) or to cause a processor to perform operations as disclosed herein.
  • programming may include instructions to perform (or for performing) or to cause a processor to perform operations as disclosed herein.
  • present disclosure is not in any way limited to any particular type of implementation.
  • sensing system elements shown in Fig. 12 may be implemented in any of various ways, at one or more than one device in a communication system.
  • the sensing system elements may be described as being configured or otherwise operable to perform various operations. These operations are also described by way of example at least above, with reference to functions. Such functions are commonly used to describe communication network or communication system frameworks or architectures, and in the context of system embodiments such functions would be implemented in a device (or multiple devices) operable to perform operations associated with those functions. To the extent that functions are referenced in describing a system or apparatus embodiment, it is to be understood that such references are intended to denote the device (s) operable to perform operations associated with the functions.
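  • By way of illustration only, one pass through the Fig. 12 interconnections may be sketched as follows. All class names, method names, and data values here are assumptions introduced for exposition and are not defined by the present disclosure; the element numerals appear only in comments for cross-reference:

```python
# Illustrative sketch of the Fig. 12 sensing system; names and values
# are hypothetical and chosen only to show the interconnections.

class DataCollector:                         # cf. data collector 1210
    def first_input(self):                   # data related to sensing
        return [1.0, 2.0, 3.0]
    def second_input(self):                  # data related to sensing management
        return {"target_accuracy": 0.9}

class SensingResultGenerator:                # cf. sensing result generator 1212
    def generate(self, first_input):
        # the "sensing result" could be a model, a map, lookup information, ...
        return {"result_id": 1, "model": sum(first_input)}

class StorageSubsystem:                      # cf. storage subsystem 1214
    def __init__(self):
        self._results = {}
    def store(self, result):
        self._results[result["result_id"]] = result
    def transfer(self, result_id):
        return self._results[result_id]

class OutputGenerator:                       # cf. output generator 1216
    def generate(self, sensing_result):
        # a further output based on the sensing result
        return {"output": sensing_result["model"] * 2}

class SensingManager:                        # cf. sensing manager 1218
    def monitor(self, output, second_input):
        # monitoring based on the second input data and the output
        return {"performance_ok": output["output"] > 0}

collector, generator = DataCollector(), SensingResultGenerator()
storage, out_gen, manager = StorageSubsystem(), OutputGenerator(), SensingManager()

result = generator.generate(collector.first_input())   # generate sensing result
storage.store(result)                                  # store sensing result
output = out_gen.generate(storage.transfer(1))         # generate further output
feedback = manager.monitor(output, collector.second_input())  # manage sensing
```

The pass mirrors the data flow described above: collection, result generation, storage, output generation, then management based on the second input data and the output.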
  • a sensing system includes a data collector 1210 and a sensing result generator 1212.
  • the data collector is configured or otherwise operable to provide first input data that is related to sensing in a communication system, and second input data related to sensing management.
  • the Sensing Data Collection function in Fig. 1 is an example of a data collector 1210.
  • Features disclosed herein with reference to a Sensing Data Collection function may be implemented by the data collector 1210.
  • the Training data referenced in respect of Figs. 1-4 and the sensing data for modeling referenced in respect of Fig. 11 are examples of the first input data.
  • the Monitoring data referenced in respect of Figs. 1-4 and 11 is an example of the second input data.
  • the sensing result generator 1212 is configured or otherwise operable to receive the first input data from the data collector 1210 and to generate a sensing result based on the first input data.
  • the sensing result may be or include one or more of the following: a model of an operating environment of the communication system; a map of the operating environment; lookup information associated with the operating environment; one or more characteristics of the operating environment; characteristics of one or more objects within the operating environment; or data derived from radio signals impacted by an object or the operating environment.
  • a data collector 1210 and a sensing results generator 1212 support collection of sensing data and generating of sensing results.
  • a storage subsystem 1214 may also be provided.
  • a Sensing Results Storage function as shown in Figs. 1-4 and 11 is an example of the storage subsystem 1214.
  • the storage subsystem 1214 is coupled to the sensing result generator 1212 in Fig. 12, and is configured or otherwise operable to receive the sensing result from the sensing result generator and to store the sensing result.
  • the storage subsystem 1214 may also be configured or otherwise operable to perform other operations as well, such as sensing result transfer as described elsewhere herein.
  • An output generator 1216 may be provided in some embodiments.
  • the output generator 1216 is coupled to the data collector 1210 and to the storage subsystem 1214, to receive a sensing result from the storage subsystem.
  • the output generator 1216 may be coupled to the sensing result generator 1212, to receive the sensing result from the sensing result generator.
  • a Sensing Application function as shown in Figs. 1-4 and 11 is an example of the output generator 1216.
  • the output generator 1216 is configured to or otherwise operable to generate a further output based on the sensing result.
  • an output generated by the output generator 1216 may be for assisting communication or providing a sensing service (such as intruder detection in smart home, or UE positioning for example) .
  • Generating an output may involve using a sensing result, such as a model, to make an inference or prediction that is provided as the output.
  • a sensing result may be an object detection or condition detection, in which case the sensing result may trigger an alarm or alert.
  • lookup information such as a lookup table as the sensing result
  • generating an output may involve receiving or otherwise obtaining input data for lookup and the output is a lookup entry that is mapped to the input data in the lookup information.
  • the latter example above refers to input data
  • the Action data referenced in the context of Figs. 1-4 and the Data for application in Fig. 11 are examples of such input data that may be used by a Sensing Application function, or more generally by an output generator, in some embodiments.
  • the data collector 1210 may be configured or operable to provide this input data, which may be referred to as third input data related to a further output to be generated by the output generator 1216, to the output generator.
  • the output generator 1216 may then generate an output based on the third input data, and the sensing result.
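  • By way of illustration, the lookup-information case above may be sketched as follows, where the sensing result is a lookup table and the third input data is mapped to a lookup entry; the table contents and the beam-management interpretation are hypothetical:

```python
# Hypothetical sensing result in the form of lookup information:
# a table mapping coarse input data (e.g., a UE position zone) to a
# stored output entry (e.g., a preferred beam).
beam_lookup = {
    ("cell_A", "zone_1"): "beam_3",
    ("cell_A", "zone_2"): "beam_7",
}

def generate_output(lookup, third_input):
    """Output generator: return the lookup entry mapped to the input data,
    with a non-sensing fallback when no entry exists."""
    return lookup.get(third_input, "fallback_beam")

selected = generate_output(beam_lookup, ("cell_A", "zone_2"))   # "beam_7"
```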
  • a sensing manager 1218 may be coupled to the data collector 1210 and to the sensing result generator 1212.
  • the sensing manager may be configured to or otherwise operable to monitor the sensing result, or sensing more generally, based on the second input data and to provide feedback to the sensing result generator.
  • a Sensing Management function as shown in Figs. 1-4 and 11 is an example of the sensing manager 1218.
  • Monitoring of a sensing result may involve monitoring any of various conditions or criteria related to a sensing result, sensing, or a communication system or other system in which or in conjunction with which sensing is implemented. Such monitoring may be referred to as performance monitoring, or monitoring performance of (or performance associated with) a sensing result or sensing, for example. Performance may include, for example, sensing performance (such as sensing (or sensing result) accuracy, resolution, missed detection probability and/or frequency, false alarm probability and/or frequency) , communication performance assisted by sensing (link and/or system level performance) , or both.
  • sensing performance such as sensing (or sensing result) accuracy, resolution, missed detection probability and/or frequency, false alarm probability and/or frequency
  • communication performance assisted by sensing link and/or system level performance
  • Sensing monitoring is based in part on the above-referenced second input data.
  • Ground truth data is one type of data that may be used for sensing monitoring. Another type is non ground truth data.
  • One example of non ground truth data that may be used in sensing monitoring is data from a communication device (one or more BSs and/or UEs, for example) monitoring system performance of sensing.
  • based on such monitoring, a further action may be initiated.
  • feedback provided by the sensing manager 1218 to the sensing result generator 1212 may be or include an indication to the sensing result generator to update the sensing result in response to performance associated with the sensing result being below a target.
  • the sensing result generator may be instructed or indicated to update the sensing result, by re-training a model for example.
  • the sensing manager 1218 may interact with a sensing result generator 1212 in other ways.
  • the sensing manager 1218 may be configured to or otherwise operable to transfer (or to control transfer of) the sensing result to the output generator 1216 from the sensing result generator 1212.
  • the sensing manager 1218 may be coupled to the storage subsystem as shown, and be configured to or otherwise operable to transfer (or control transfer of) the sensing result to the output generator 1216 from the storage subsystem.
  • a model transfer request as referenced in the context of Figs. 1-4 and a sensing results transfer request as referenced in Fig. 11 are examples of signaling that may be sent by the sensing manager 1218 to the storage subsystem 1214 (or to the sensing result generator in other embodiments) to transfer the sensing result to the output generator 1216.
  • the sensing manager 1218 may request or otherwise obtain a sensing result from the sensing result generator 1212 or the storage subsystem 1214, and then provide the sensing result to the output generator 1216.
  • a sensing result is one of multiple sensing results generated by a sensing result generator, and the sensing manager 1218 may be configured to or otherwise operable to select, from the multiple sensing results, the sensing result that is to be used by the output generator 1216.
  • the selected sensing result may be indicated in signaling that is sent from the sensing manager 1218 to one or more of the sensing result generator 1212, the storage subsystem 1214, or the output generator 1216, depending on how the selected sensing result is to be provided to the output generator.
  • the sensing manager 1218 may, for example, be configured to or otherwise operable to monitor the sensing result, or sensing more generally, based on the second input data and the output from the output generator 1216, and to provide feedback to the output generator to control usage of the sensing result by the output generator.
  • the sensing manager 1218 may be configured to or otherwise operable to also or instead provide feedback to the sensing result generator 1212, based on monitoring that involves an output from the output generator 1216.
  • Controlling usage of a sensing result may involve, for example, sending signaling to control selection of a sensing result that the output generator 1216 is to use in generating its output, to control de-activation (or activation) of a current sensing result that was used by the output generator to generate its output, to control switching to a different sensing result (a different model for example) to be used by the output generator in generating its output, and/or to control fallback of the output generator to a previously used different sensing result or a non-sensing mode.
  • when the Sensing Management function observes that the sensing performance of a current sensing model is not good enough, it can send model switching signaling to the Sensing Application function to switch to another sensing model, or send fallback signaling to indicate that the Sensing Application function is to use non-sensing mode.
  • This is one example of how a sensing manager and an output generator may interact to control usage of sensing results in generating further outputs.
  • the Sensing Management function can indicate to the Sensing Application function which sensing model the Sensing Application is to use, and activate or de-activate one or multiple of the candidate sensing models.
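  • The switching, fallback, and (de-) activation signaling described above may be sketched as follows; the signaling structure and model identifiers are assumptions made for illustration only:

```python
# Hypothetical output generator (Sensing Application) reacting to
# sensing-manager signaling: switch to another sensing model, or fall
# back to a non-sensing mode.

class OutputGenerator:
    def __init__(self, candidate_models):
        self.models = candidate_models    # candidate sensing models by ID
        self.active_id = None             # None => non-sensing fallback mode

    def handle_signaling(self, signal):
        if signal["type"] == "switch":    # model switching signaling
            self.active_id = signal["model_id"]
        elif signal["type"] == "fallback":  # fallback signaling
            self.active_id = None

    def generate(self, data):
        if self.active_id is None:        # non-sensing mode
            return "non_sensing_output"
        return self.models[self.active_id](data)

gen = OutputGenerator({"m1": lambda d: d + 1, "m2": lambda d: d * 10})
gen.handle_signaling({"type": "switch", "model_id": "m2"})
switched = gen.generate(3)                # generated using model m2
gen.handle_signaling({"type": "fallback"})
fallen_back = gen.generate(3)             # generated in non-sensing mode
```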
  • Embodiment 2 above illustrates an example in which a sensing manager 1218 may be configured or otherwise operable to provide a set of sensing results to a sensing result generator 1212, and the sensing result generator is configured to or otherwise operable to generate the sensing result by updating the set of sensing results based on the first input data.
  • the foundation model referenced in Fig. 2 is an example of such a set of sensing results that may be provided to and updated by a sensing result generator.
  • the data collector 1210 is configured to or otherwise operable to receive sensing data from multiple devices, and to generate one or more of the first input data or the second input data (or third input data) based on the received sensing data.
  • Embodiment 3 shows an example of this.
  • the multiple devices are sensing functions, which as described at least above may also be referred to as sensors or sensing devices for example.
  • These devices from which sensing data is received may include sensors of multiple different types.
  • the RF sensing functions and non-RF sensing functions in Fig. 3 are examples of different types of sensors. More generally, an RF sensor that uses RF sensing to collect sensing data is one example of a device from which a data collector may receive sensing data, and a non-RF sensor is another example of a device from which a data collector may receive sensing data.
  • RF and non-RF sensors are examples of different types of sensors that use different types of sensing.
  • Devices that provide different types of sensing data are another example of devices that may be considered devices of different types.
  • a data collector may receive sensing data from devices that are the same or similar to each other in some respects, and/or from devices that are different from each other in some respects.
  • the present disclosure is not restricted to sensing data that is provided by any particular type of device.
  • the Data Fusion function in Embodiment 3 provides input data to the Sensing Modelling, Sensing Management, and Sensing Application functions.
  • data fusion could also be called data collection.
  • another possible name for Data Fusion is input data generation
  • the name “input data generator” may be used to refer to a Data Fusion element of a sensing system, for example.
  • the data collector 1210 in Fig. 12 may provide data fusion features, or in another embodiment the data collector is (or includes) an input data generator. Data Fusion may thus be considered a special case or embodiment of data collection, as also described at least above.
  • Data fusion features may include data processing, and accordingly the data collector 1210 or an element thereof such as an input data generator may be configured to or otherwise operable to perform data processing of received sensing data.
  • data processing may include, for example, data pre-processing and cleaning, formatting, transformation, and integration of multiple data sources to produce more useful information than that provided by any individual data source.
  • Fig. 3 provides an example in which sensing data received from multiple devices is processed in such a way that the resulting input data has less uncertainty than when the sensing data is used individually.
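  • One concrete illustration of such fusion, under the assumption of independent sensor estimates of a common quantity with known variances, is inverse-variance weighting: the fused estimate has a lower variance (less uncertainty) than any individual reading. The numerical values below are hypothetical:

```python
# Inverse-variance fusion of readings of the same quantity from
# different sensors (an RF sensor and a non-RF sensor, say).

def fuse(readings):
    """readings: list of (value, variance) pairs, one per sensor."""
    weights = [1.0 / var for _, var in readings]
    fused_value = sum(w * v for (v, _), w in zip(readings, weights)) / sum(weights)
    fused_variance = 1.0 / sum(weights)   # always below the smallest input variance
    return fused_value, fused_variance

value, variance = fuse([(10.2, 4.0), (9.8, 1.0)])
# fused variance: 1 / (0.25 + 1.0) = 0.8, less than the best sensor's 1.0
```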
  • Embodiment 4 and Fig. 4 provide another example of devices from which sensing data may be received.
  • devices from which sensing data may be received by the data collector 1210 in Fig. 12 may include a device to provide anchor data related to a sensing anchor and a device to provide non-anchor data not related to a sensing anchor.
  • the data collector 1210 may be configured to or otherwise operable to receive sensing data from devices that may include one or more devices to provide anchor data related to one or more sensing anchors and/or one or more devices to provide non-anchor data not related to a sensing anchor.
  • a device may provide sensing data related to one, or more than one, sensing anchor or non-anchor.
  • a device may interact with multiple sensing anchors and/or non-anchors, and provide related sensing data to a data collector.
  • a sensing anchor may or may not generate sensing data.
  • a sensing anchor or non-anchor may be or include a device that is configured to or otherwise operable to provide sensing data to a data collector, or a passive object.
  • a device to provide anchor data related to a sensing anchor is (or includes) the sensing anchor, or that a sensing anchor is (or includes) such a device.
  • in the case of a passive object as a sensing anchor, a device senses information about the object and provides sensing data to a data collector.
  • the name “sensing anchor” is used for ease of reference for an element that is an anchor for the purpose of sensing. An anchor may, but need not necessarily, perform any sensing operations.
  • anchor devices are provided at least above, in the context of Embodiment 4. These examples include a node, a device that can report ground truth, a device that transmits a sensing signal to assist one or more other sensing devices to perform sensing measurement, and a passive object. These examples also apply to non-anchors, with the exception that sensing data related to non-anchors would not be ground truth data.
  • Anchor Data Collection is a function that provides input data to Sensing Modelling, Sensing Management, Sensing Application functions.
  • the input data includes ground truth information.
  • Ground truth refers to the true answer to a specific problem or question.
  • for target sensing, the ground truth is the target's exact location and exact shape, for example.
  • for environment sensing, the ground truth is the exact environment, and may include building locations/shapes, street locations, and so on. Examples of input data are also provided at least above.
  • Anchor Data Collection is described above as a function, and the name “anchor data collector” is also used herein to refer generally to an Anchor Data Collection element of a sensing system.
  • Non-anchor Data Collection is a function that provides input data to Sensing Modelling, Sensing Management, Sensing Application functions. The input data does not include the ground truth information. Examples of input data from non-anchor data collection are also provided at least above.
  • Non-anchor Data Collection is described above as a function, and the name “non-anchor data collector” is also used herein to refer generally to a Non-anchor Data Collection element of a sensing system.
  • Anchor Data Collection and Non-anchor Data Collection may be considered a special case or embodiment of data collection.
  • Devices in a sensing system may include one or more devices to provide anchor data related to one or more sensing anchors, and one or more devices to provide non-anchor data that is not related to a sensing anchor.
  • a data collector such as the data collector 1210 may include: an anchor data collector to collect, from multiple devices of the plurality of devices, anchor data related to multiple sensing anchors; and a non-anchor data collector to collect, from multiple devices of the plurality of devices, non-anchor data not related to a sensing anchor.
  • anchor and non-anchor data collection are implemented separately, in respective elements or functions as shown by way of example in Fig. 4.
  • Fig. 4 also illustrates an Anchor Management function, which is responsible for controlling anchors and non-anchors.
  • the Anchor Management function can configure which node is the anchor, and indicates to the anchor to perform data collection and corresponding collected data type.
  • the Anchor Management function could also indicate to a non-anchor to perform data collection and corresponding collected data type.
  • These examples refer to data collection by anchors and non-anchors, but anchors and non-anchors may or may not provide sensing data. Devices that provide sensing data related to anchors and non-anchors may or may not be the sensing anchors or non-anchors.
  • anchor management may involve configuring anchors, and/or non-anchors, and indicating to one or more devices (which may or may not be anchors or non-anchors for sensing) to perform sensing data collection and provide sensing data to a data collector.
  • Anchor Management may also be referred to, for example, as Sensing Anchor Management or, in the case of a sensing anchor being a sensing device or sensor, as Sensing Device Management, Sensor (or Sensing Device) Management, or Sensing Function Management, for example.
  • the name “anchor manager” is also used herein to refer generally to an Anchor Management element of a sensing system.
  • a sensing system may include an anchor manager, coupled to an anchor data collector and to a non-anchor data collector, to manage sensing anchors and/or non-anchors.
  • Managing anchors and/or non-anchors may include, for example, the configuring and indicating features described at least above.
  • Embodiments as disclosed herein may provide or support life cycle management (LCM) in sensing.
  • LCM as a whole may involve an entire sensing procedure, from data collection to modelling (more generally, sensing result generation) , storage, receiving sensing results at a Sensing Application function (or more generally, an output generator) , generating an output based on a sensing result, monitoring, and other sensing management such as indicating to switch or fallback for output generation and/or to update sensing results.
  • LCM of sensing may therefore include sensing data collection, sensing modelling (or other sensing result generation) , sensing application (to generate an output) , and sensing management including monitoring and possible updating of sensing result generation or application.
  • LCM may involve the data collector 1210, the sensing result generator 1212, the storage subsystem 1214 to receive and store a sensing result from the sensing result generator, the output generator 1216 to generate a further output based on the sensing result, and the sensing manager 1218 for management of sensing based on one or more of the second input data or the output generated by the output generator.
  • Sensing LCM may include sensing functionality-based LCM and sensing model-based (or more generally sensing result-based) LCM.
  • Sensing functionality-based LCM refers to an embodiment of LCM procedure in which a given functionality is provided by sensing operations.
  • Sensing result-based LCM refers to an embodiment of LCM procedure in which a sensing result such as a model (which may be referred to as a sensing type) has a sensing result ID or model ID (or type ID) , and associated information is provided by sensing operations.
  • Sensing functionality identification refers to a process or method of identifying a sensing functionality, for a common understanding between a network and UEs for example. Information regarding a sensing functionality may be shared during functionality identification.
  • a sensing functionality may be configured with an ID, and a UE may have one sensing model or multiple sensing models for the functionality, which may depend on UE implementation for example.
  • Sensing result identification refers to a process or method of identifying a sensing result (such as a model) , for a common understanding between a network and UEs for example. Information regarding a sensing result may be shared during identification.
  • a sensing result may be configured with a result ID.
  • the NW and the UE align the sensing result according to the result ID, and sensing result management (including LCM) may be under the control of the network.
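  • By way of illustration, result-ID-based alignment between the network (NW) and a UE may be sketched as follows; the registry structure and identifiers are hypothetical:

```python
# Hypothetical registry keyed by sensing result ID, used so that the
# network and a UE refer to the same sensing result (e.g., a model).

class ResultRegistry:
    def __init__(self):
        self._results = {}
    def register(self, result_id, info):
        self._results[result_id] = info   # information shared at identification
    def lookup(self, result_id):
        return self._results.get(result_id)

nw_registry = ResultRegistry()
nw_registry.register("result_42", {"functionality": "positioning", "version": 3})

# UE side: align on the sensing result using the ID configured by the network
ue_view = nw_registry.lookup("result_42")
```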
  • the management of sensing by the sensing manager 1218 may involve transferring a sensing result, which is generated by the sensing result generator 1212, to the output generator 1216.
  • a Sensing Management function may send a sensing result to a Sensing Application function, for example by sending a sensing result request to a Sensing Results Storage function, and then the Sensing Results Storage function sends the sensing result to the Sensing Application function. More generally, with reference to the example in Fig. 12:
  • the sensing manager 1218 may be configured to or otherwise operable to transfer the sensing result to the output generator 1216 by sending a request to the storage subsystem 1214, in which case the storage subsystem is configured to or otherwise operable to receive the request from the sensing manager and to transfer the sensing result to the output generator responsive to the request.
  • a Sensing Management function may also receive the output of a Sensing Application function.
  • the output may include information about performance of the Sensing Application function and/or performance of the communication system.
  • a Sensing Management function may receive monitoring data from a Sensing Data Collection function, such as ground truth data, and after comparing the sensing output and the ground truth, the sensing performance can be assessed or evaluated.
  • Such features may be embodied in the example shown in Fig. 12 in that the management of sensing by the sensing manager 1218 may involve monitoring sensing performance based on the second input data and the output from the output generator 1216.
  • the sensing manager 1218 may be configured to or otherwise operable to receive the output of the output generator 1216, and that output may include information about performance of the output generator and/or performance of the communication system.
  • the sensing manager may also or instead receive monitoring data (also referred to as second input data herein) from the data collector 1210, and monitor the sensing performance based on comparing the output and the monitoring data.
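  • By way of a hypothetical numerical sketch, such an assessment may compare outputs against ground-truth monitoring data and derive an accuracy figure; the tolerance and data values are assumptions:

```python
# Sensing performance assessment: fraction of outputs that fall within
# a tolerance of the corresponding ground-truth monitoring data.

def assess_performance(outputs, ground_truth, tolerance=0.5):
    correct = sum(1 for o, g in zip(outputs, ground_truth)
                  if abs(o - g) <= tolerance)
    return correct / len(outputs)

accuracy = assess_performance([1.0, 2.2, 3.9], [1.1, 2.0, 3.0])
# two of the three outputs are within tolerance of the ground truth
```

An accuracy below a target could then trigger the feedback described above, such as an indication to update the sensing result.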
  • a Sensing Management function can send model switching signaling to a Sensing Application function to switch to another sensing model, or send fallback signaling to indicate to the Sensing Application function to use a non-sensing mode.
  • the management of sensing by the sensing manager 1218 may involve sending, to the output generator 1216, first signaling to indicate a switch to another sensing result for generating the further output, or second signaling to indicate a non-sensing mode for generating the further output.
  • the output generator 1216 is configured to or otherwise operable to switch to the other sensing result for generating the further output responsive to receiving the first signaling from the sensing manager 1218, or to use the non-sensing mode for generating the further output responsive to receiving the second signaling from the sensing manager.
  • a Sensing Management function may send current sensing performance to a Sensing Modelling function, including a current sensing output and/or one or more properties or characteristics of that output, such as accuracy, resolution, and so on.
  • the Sensing Management function may also request the Sensing Modelling function to retrain the model (in an AI-based embodiment) , and request to get an updated sensing model. This may involve one request or two requests. For the two requests case, the Sensing Management function sends a request to retrain or otherwise update the model, but it does not need the updated model right away. After a certain time, such as when the model is to be used, the Sensing Management function sends another request, to get the updated model.
  • feedback provided by the sensing manager 1218 may involve sending, to the sensing result generator 1212, any one or more of the following:
  • first feedback signaling to indicate current sensing performance
  • second feedback signaling to indicate that the sensing result is to be updated
  • the sensing result generator is configured to or otherwise operable to update the sensing result responsive to receiving the second feedback signaling from the sensing manager.
  • a sensing system or elements thereof may also provide or support other features.
  • Embodiments may incorporate, individually or in combinations, the features disclosed herein.
  • An apparatus or system element may be configured to or otherwise operable to perform operations or implement features disclosed herein.
  • An apparatus or system element may include a processor or other component that is configured, by executing programming for example, to cause the apparatus or system element to perform operations or implement features disclosed herein.
  • An apparatus or system element may also include a non-transitory computer readable storage medium, coupled to the processor, storing programming or instructions for execution by the processor.
  • the processors 210, 260, 276 may each be or include one or more processors, and each memory 208, 258, 278 is an example of a non-transitory computer readable storage medium, in an ED 110 and a TRP 170, 172.
  • a non-transitory computer readable storage medium need not necessarily be provided only in combination with a processor, and may be provided separately in a computer program product, for example.
  • programming stored in or on a non-transitory computer readable storage medium may include instructions to or to cause a processor to, or a processor, device, or other component may otherwise be configured to: provide, by a data collection function in a communication system: first input data related to sensing in the communication system; and second input data related to sensing management; and generate, by a sensing result generation function in the communication system, a sensing result based on the first input data.
  • Apparatus embodiments are not limited to the foregoing examples, or to processor-based or programming-based embodiments.
  • Fig. 13 is a flow diagram illustrating an example method.
  • a sensing method consistent with the example method 1300 may include some or all of the illustrated features.
  • an embodiment supporting LCM, for example, may include most or all of the illustrated features, whereas other embodiments may include fewer than all of those features.
  • a sensing method involves: providing at 1304, by a data collection function in a communication system for example: first input data related to sensing in the communication system; and second input data related to sensing management; and generating at 1306, by a sensing result generation function in the communication system for example, a sensing result based on the first input data.
  • Method embodiments may include other features, such as any one or more of the following features, for example, which are also discussed elsewhere herein:
  • the sensing result may be or include any one or more of the following: a model of an operating environment of the communication system; a map of the operating environment; lookup information associated with the operating environment; one or more characteristics of the operating environment; characteristics of one or more objects within the operating environment; data derived from radio signals impacted by an object or the operating environment;
  • the generating at 1310 may involve generating the further output based on the sensing result and the third input data;
  • the feedback provided to the sensing result generation function may be or include an indication to the sensing result generation function to update the sensing result in response to performance associated with the sensing result being below a target;
  • the sensing result may be or include one of multiple sensing results generated by the sensing result generation function, in which case a method may involve selecting, by the sensing management function for example, the sensing result from the multiple sensing results;
  • a method may also involve providing feedback at 1314, by the sensing management function for example, to the sensing result generation function, as shown by way of example in Fig. 13 by the arrow between 1314 and 1306;
  • a method may involve providing, by the sensing management function for example, a set of sensing results to the sensing result generation function, in which case generating the sensing result at 1306 may involve updating the set of sensing results based on the first input data;
  • Fig. 13 illustrates providing sensing data at 1302 – this may involve multiple devices, and a sensing method may involve receiving sensing data from those devices by the data collection function at 1304, in which case one or more of the first input data or the second input data (or the third input data in some embodiments) provided at 1304 may be generated by the data collection function based on the received sensing data;
  • the devices may include, for example, sensors of multiple different types;
  • the devices may include a device to provide anchor data related to a sensing anchor and a device to provide non-anchor data not related to a sensing anchor;
  • the device to provide anchor data related to a sensing anchor may be or include the sensing anchor;
  • the sensing anchor may be or include a passive object;
  • a method may involve managing the sensing anchors, by an anchor management function in the communication system for example;
  • a method may include providing input data at 1304 and generating a sensing result at 1306, as well as: receiving and storing at 1308, by a storage subsystem for example, the sensing result from the sensing result generation function; generating at 1310, by an output generation function in the communication system for example, a further output based on the sensing result; and managing, by a sensing management function in the communication system for example, sensing in the communication system based on one or more of the second input data or the further output;
  • the managing may involve transferring the sensing result to the output generation function;
  • the transferring may involve sending, by the sensing management function for example, a request to the storage subsystem, in which case a method may also involve receiving, by the storage subsystem, the request from the sensing management function and transferring the sensing result to the output generation function by the storage subsystem responsive to the request as shown by way of example by the arrow between 1308 and 1310 in Fig. 13;
  • the managing may involve monitoring sensing performance at 1312 based on the second input data and the further output;
  • the managing may also involve sending, to the output generation function for example, first signaling to indicate a switch to another sensing result for generating the further output, or second signaling to indicate a non-sensing mode for generating the further output –sending the first or second signaling is an example of providing feedback at 1314 to 1310;
  • a method may also involve updating the sensing result at 1306, by the sensing result generation function for example, responsive to receiving the second feedback signaling from the sensing management function.
  • any module, component, or device exemplified herein that executes instructions may include or otherwise have access to a non-transitory computer readable or processor readable storage medium or media for storage of information, such as computer readable or processor readable instructions, data structures, program modules, and/or other data.
  • non-transitory computer readable or processor readable storage media include magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, optical disks such as compact disc read-only memory (CD-ROM), digital video discs or digital versatile discs (DVDs), Blu-ray Disc™, or other optical storage, volatile and non-volatile, removable and non-removable media implemented in any method or technology, random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology. Any such non-transitory computer readable or processor readable storage media may be part of a device or accessible or connectable thereto. Any application or module herein described may be implemented using instructions that are readable and executable by a computer or processor, and may be stored or otherwise held by such non-transitory computer readable or processor readable storage media.
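By way of illustration only, the method steps enumerated above with reference to Fig. 13 (providing data at 1302/1304, generating a sensing result at 1306, storing at 1308, generating a further output at 1310, monitoring at 1312, and feedback at 1314) can be sketched as a minimal control loop. All function and field names below are hypothetical and are not part of the disclosure; this is a sketch of one possible realization, not a definitive implementation.

```python
# Illustrative sketch of the Fig. 13 flow. All names are hypothetical;
# the disclosure does not mandate this API.

def data_collection(raw_samples):
    """1304: split collected data into first (result generation) and
    second (management/monitoring) input data."""
    first_input = [s for s in raw_samples if s["kind"] == "measurement"]
    second_input = [s for s in raw_samples if s["kind"] == "ground_truth"]
    return first_input, second_input

def generate_sensing_result(first_input):
    """1306: derive a sensing result from the first input data."""
    return {"object_count": len(first_input)}

def generate_output(sensing_result):
    """1310: derive a further output from the stored sensing result."""
    return sensing_result["object_count"] > 0

def monitor(second_input, output):
    """1312: compare the further output against ground-truth data."""
    truth = any(s["value"] for s in second_input)
    return output == truth

raw = [{"kind": "measurement", "value": 1},
       {"kind": "ground_truth", "value": True}]
first, second = data_collection(raw)
result = generate_sensing_result(first)        # 1306
storage = {"latest": result}                   # 1308: storage subsystem
output = generate_output(storage["latest"])    # 1310
performance_ok = monitor(second, output)       # 1312; feedback at 1314
                                               # would be sent if not ok
```

The feedback at 1314 (not shown as signalling here) would, per the bullets above, prompt the sensing result generation function to update the sensing result when performance is below target.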

Abstract

A sensing framework, and related system embodiments, method embodiments, and other embodiments, are disclosed. A data collector or data collection function in a communication system provides first input data related to sensing in the communication system and second input data related to sensing management. A sensing result is generated, by a sensing result generator or sensing result generation function in the communication system, based on the first input data. The second input data, and/or a further output that is generated based on the sensing result, may be used in monitoring sensing performance, and monitoring results may be used in the sensing management.

Description

6G Sensing Framework
CROSS-REFERENCE TO RELATED APPLICATION
The present application is related to, and claims priority to, United States provisional patent application Serial No. 63/580,525, entitled "6G Sensing Framework", filed on September 5, 2023, the entire contents of which are hereby incorporated by reference.
TECHNICAL FIELD
The present application relates generally to communications, and in particular to sensing.
BACKGROUND
Sensing is a process of obtaining surrounding information, and it can be broadly classified as
· RF sensing: Send an RF signal and obtain the surrounding information by receiving and processing either this RF signal or its echoed (reflected) RF signal.
· Non-RF sensing: Surrounding information is obtained via non-RF signals, such as from a video camera or other sensors.
Herein, “sensing” refers to RF sensing unless otherwise specified.
RF refers to radio frequency.
Sensing can be used to detect the information of an object, such as location, speed, distance, orientation, shape, texture, and so on.
Sensing can be classified as
· Active sensing (also called device-based sensing): A sensor sends an RF signal to the sensed object, which is capable of detecting the RF signal and either obtaining sensed information from the RF signal or measuring some intermediate information that is fed back to the sensor to assist the sensor in obtaining the sensed information.
· Passive sensing (also called device-free sensing): A sensor sends an RF signal to the sensed object, detects the reflected echo of the RF signal, and obtains the sensed information from the echo.
o The object may or may not contain certain ID information (an RF tag), for example an Ambient IoT device.
ID refers to identifier. IoT refers to Internet of things.
Generally, from the transmitter and receiver point of view, there are 3 types of sensing:
· Monostatic sensing: the transmitter and receiver are the same device
· Bi-static sensing: the transmitter and receiver are different devices, for example a BS sends the sensing signals, and a UE receives the sensing signals.
· Multi-static sensing: can be decomposed into a set of N bi-static Tx-Rx pairs, where N>1, for example a BS sends the sensing signals, and two UEs (UE1, UE2) receive the sensing signals, pair-1: BS and UE1, pair-2: BS and UE2.
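The decomposition described above can be sketched as follows; the entity names ("BS", "UE1", "UE2") follow the example in the text, and the helper function names are illustrative assumptions, not part of the disclosure:

```python
# Illustrative sketch: a multi-static configuration with one transmitter
# and N receivers decomposes into N bi-static Tx-Rx pairs. Monostatic
# sensing is the special case where Tx and Rx are the same device.

def sensing_pairs(transmitter, receivers):
    """Return the set of bi-static Tx-Rx pairs for a sensing setup."""
    return [(transmitter, rx) for rx in receivers]

def sensing_type(transmitter, receivers):
    """Classify the setup as monostatic, bi-static, or multi-static."""
    pairs = sensing_pairs(transmitter, receivers)
    if len(pairs) == 1:
        return "monostatic" if pairs[0][0] == pairs[0][1] else "bi-static"
    return "multi-static"

# The example from the text: a BS transmits, UE1 and UE2 receive,
# giving pair-1 (BS, UE1) and pair-2 (BS, UE2).
pairs = sensing_pairs("BS", ["UE1", "UE2"])
```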
Radar
· A radar system sends an RF signal to localize, detect and track a target. It is a passive sensing system.
Typically, a radar system is a standalone system and designed for a specific application.
ISAC (Integrated Sensing And Communication): A system which reuses the communication RF signal for sensing, that is, an integrated sensing and communication system. In addition, the system is a networked and cooperative sensing system instead of a single standalone radar system. Cooperative sensing can be done via the integrated communication protocols.
It may be desirable to provide a functional framework for sensing, for example sensing in Integrated Sensing And Communication.
There is no sensing functional framework in the 5G network. In 6G ISAC, the sensing functional framework and the corresponding air interface procedure remain to be designed.
SUMMARY
The present disclosure includes embodiments that define a sensing functional framework and its procedure.
According to an aspect of the present disclosure a sensing system includes a data collector to provide: first input data related to sensing in a communication system; and second input data related to sensing management; and a sensing result generator, coupled to the data collector, to receive the first input data from the data collector and to generate or obtain, based on the first input data, a sensing result.
Another aspect of the present disclosure relates to a sensing method that involves providing, by a data collection function in a communication system: first input data related to sensing in the communication system; and second input data related to sensing management; and generating, by a sensing result generation function in the communication system, a sensing result based on the first input data.
In another embodiment, a sensing system includes one or more processors configured to perform a method as disclosed herein. The one or more processors may be or include processors in different devices.
Another example system may include one or more processors coupled with one or more non-transitory computer readable storage media that store programming for execution by the one or more processors. The programming includes instructions to perform a method as disclosed herein.
A storage medium need not necessarily or only be implemented in or in conjunction with a system or an apparatus. A computer program product, for example, may be or include a non-transitory computer readable medium storing programming for execution by a processor.
Programming stored by a computer readable storage medium may include instructions to, or to cause a processor to, perform, implement, support, or enable any of the methods disclosed herein.
For example, a non-transitory computer readable medium (or more than one such medium) may store programming for execution by a processor (or by more than one processor) , and the programming includes instructions to: provide, by a data collection function in a communication system: first input data related to sensing in the communication system; and second input data related to sensing management; and generate, by a sensing result generation function in the communication system, a sensing result based on the first input data.
Other embodiments are also possible. For example, one or more integrated circuits, which may also be referred to as a chip or a chipset, may implement features disclosed herein. These may also be considered examples of system elements or apparatus as disclosed herein.
The present disclosure encompasses these and other aspects or embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the present embodiments, and the advantages thereof, reference is now made, by way of example, to the following descriptions taken in conjunction with the accompanying drawings.
Fig. 1 illustrates an example of a sensing functional framework and procedures.
Fig. 2 illustrates another example of a sensing functional framework and procedures.
Fig. 3 illustrates a further example of a sensing functional framework and procedures.
Fig. 4 illustrates yet another example of a sensing functional framework and procedures.
Fig. 5 is a simplified schematic illustration of a communication system.
Fig. 6 is a block diagram illustration of the example communication system in Fig. 5.
Fig. 7 illustrates an example electronic device and examples of base stations.
Fig. 8 illustrates units or modules in a device.
Fig. 9 is a block diagram illustration of another example communication system.
Fig. 10 is a block diagram illustration of an example sensing management function (SMF) .
Fig. 11 illustrates another example of a sensing functional framework and procedures.
Fig. 12 is a block diagram illustrating a sensing system according to an embodiment.
Fig. 13 is a flow diagram illustrating a method according to an embodiment.
DETAILED DESCRIPTION
For illustrative purposes, specific example embodiments will now be explained in greater detail in conjunction with the figures.
The embodiments set forth herein represent information sufficient to practice the claimed subject matter and illustrate ways of practicing such subject matter. Upon reading the following description in light of the accompanying figures, those of skill in the art will understand the concepts of the claimed subject matter and will recognize applications of these concepts not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the disclosure and the accompanying claims.
In some embodiments herein:
RF sensing or wireless sensing may refer to obtaining information about characteristics of an environment and/or objects within the environment (such as any one or more of the following: shape, size, orientation, speed, location, distances or relative motion between objects, and so on) using RF signals;
3GPP (3rd generation partnership project) sensing data may refer to data derived from 3GPP radio signals impacted (reflected, refracted, diffracted, for example) by an object or environment of interest for sensing purposes, and optionally processed within a 5G system;
Non-3GPP sensing data may refer to data provided by non-3GPP sensors (video, LiDAR, sonar, for example) about an object or environment of interest for sensing purposes;
a sensing result may refer to processed 3GPP sensing data requested by a service consumer;
a sensing transmitter may refer to an entity that sends out a sensing signal that may be used by a sensing service in its operation. A sensing transmitter may be an NR radio access network (RAN) node or a UE. A sensing transmitter can be located in the same entity as, or a different entity from, the sensing receiver.
ISAC is used herein for ease of reference in respect of reuse of a communication signal for sensing. Other names or terms may be used for this feature, and/or others, disclosed herein.
EMBODIMENT 1
Fig. 1 illustrates an example of a sensing functional framework and procedures.
The sensing functional framework and the flowchart shown in Fig. 1 include 5 parts. These parts are discussed below.
Sensing Data Collection
Sensing Data Collection is a function that provides input data to the Sensing Modelling, Sensing Management, and Sensing Application functions. Examples of input data may include measurements from UEs or different network entities, where a measurement may be an RF sensing measurement or a non-RF sensing (LIDAR (Light Detection and Ranging), camera, video, sensor, and so on) measurement.
Like other terminology herein, Sensing Data Collection is used for ease of reference, but other names or terminology may be used. For example, Sensing Data Collection can also be referred to as data collection, 3GPP sensing data collection, 3GPP and non-3GPP sensing data collection, data measurement, or sensing measurement. The name “data collector” is also used herein as a general term for a Sensing Data Collection element in a sensing system.
Three types of data are shown by way of example in Fig. 1 for illustrative embodiment 1, and are described below:
Training Data: Data needed as input for the Sensing Modelling function, such as data for sensing analysis, including assistance information.
Monitoring Data: Data needed as input for the Sensing Management function.
Action data: Data needed as input for the Sensing Application function.
Training data can also be referred to by other names, such as sensing modeling data in some embodiments that involve sensing modeling, data for deriving (or data related to, or associated with) sensing results, or data related to sensing. The name “first input data” is also used herein as a general term for such data.
Monitoring data can be referred to by other names as well, such as data related to (or data associated with) sensing management. Sensing management may include, for example, sensing performance monitoring and/or, sensing update control. The name “second input data” is also used herein as a general term for such data.
Like the other data types, Action data can also be referred to by other names, such as data for (or data related to, or data associated with) the sensing application. The name “third input data” is also used herein as a general term for such data.
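The three data types above and their destination functions (Fig. 1) can be sketched as a simple routing step; the tag strings and dictionary layout below are illustrative assumptions, not part of the disclosure:

```python
# Illustrative sketch: the Sensing Data Collection function routes each
# collected item to the function that consumes it (Fig. 1). The tag and
# destination names are hypothetical.

ROUTES = {
    "training":   "Sensing Modelling",    # first input data
    "monitoring": "Sensing Management",   # second input data
    "action":     "Sensing Application",  # third input data
}

def route(collected):
    """Group collected items by the destination function."""
    out = {dest: [] for dest in ROUTES.values()}
    for item in collected:
        out[ROUTES[item["type"]]].append(item["payload"])
    return out

routed = route([
    {"type": "training",   "payload": "RF measurement"},
    {"type": "monitoring", "payload": "ground truth"},
    {"type": "action",     "payload": "LIDAR frame"},
])
```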
Sensing Modelling
Sensing Modelling is a function that reconstructs the physical world (that is, gets a model of the physical world), including environment reconstruction, channel reconstruction (by a ray tracing scheme, for example), target reconstruction, digital twin, and so on. Other features or functions that may be supported or provided in the context of physical world reconstruction include target detection and target tracking. The Sensing Modelling function should be able to request specific information to be used to train the sensing model and to avoid reception of unnecessary information. The Sensing Modelling function may train a sensing model in some embodiments; training is one way to obtain a sensing model, but more generally a model may be trained or otherwise obtained.
In addition, the sensing modelling function is also responsible for data processing, such as data pre-processing and cleaning, formatting, and transformation based on Training Data delivered by a Sensing Data Collection function, if required.
Trained/Updated Model in Fig. 1: Used to deliver a trained sensing model to the Sensing Results Storage function or to deliver an updated sensing model to the Sensing Results Storage function.
Sensing Modeling can also be referred to as Sensing Results Processing, Sensing Information Processing, Sensing Data Processing, Sensing Measurement Processing, Environment Information Processing, Object Information Processing, Environment and Object Information Processing, for example.
Sensing Modelling can be called sensing functionalities, sensing tasks, or sensing use cases. It is a function that processes data from the sensing data collection function to obtain information about characteristics of the environment and/or objects within the environment.
The name “sensing result generator” is also used herein as a general term for a Sensing Modeling element in a sensing system.
Sensing tasks or Sensing use cases may include, for example: environment reconstruction, channel prediction, intruder detection, pedestrian/animal intrusion detection, rainfall monitoring, Transparent Sensing, sensing for flooding, intruder detection in surroundings of smart home, sensing for railway intrusion detection, Sensing Assisted Automotive Maneuvering and Navigation, automated guided vehicle (AGV) detection and tracking in factories, unmanned aerial vehicle (UAV) flight trajectory tracing, sensing at crossroads with/without obstacle, Network assisted sensing to avoid UAV collision, sensing for UAV intrusion detection, sensing for tourist spot traffic management, contactless sleep monitoring service, Protection of Sensing Information, health monitoring, service continuity of unobtrusive health monitoring, use case on Sensor Groups, Sensing for Parking Space Determination, Seamless extended reality (XR) streaming, UAVs/vehicles/pedestrians detection near Smart Grid equipment, Autonomous mobile robots collision avoidance in smart factories, roaming for sensing service of sports monitoring, on immersive experience based on sensing, accurate sensing for automotive maneuvering and navigation service, public safety search and rescue or apprehend, Vehicles Sensing for Advanced Driving Assistance System, Gesture Recognition for Application Navigation and Immersive Interaction, sensing for automotive maneuvering and navigation service when not served by RAN, blind spot detection, integrated sensing and positioning in a factory hall. These are examples only, and the present disclosure is not in any way limited to these or any other types of sensing or sensing tasks.
Sensing Modelling is not in any way limited to providing or generating a model. Sensing Modelling can output sensing results, which may include, for example:
one or more sensed objects,
a reconstructed physical environment,
a reconstructed RF map.
Sensing Modelling can be enabled by AI, for example using AI to derive the sensing results. An output of Sensing Modeling is provided to sensing results storage in the example shown in Fig. 1, but full or partial sensing results can also or instead be delivered to a core network or a 3rd party, so as to provide a sensing service by a RAN for example.
Sensing Management
Sensing Management is a function that is responsible for performing sensing control for the Sensing Modelling and Sensing Application functions. Sensing Management monitors the sensing output, and if the sensing results are no longer applicable, it will request the Sensing Modelling function to re-train the model, and it will indicate to the Sensing Application function to switch the model.
Sensing Management can also be referred to as Sensing Control, Sensing Results Management, or Management. The name “sensing manager” is also used herein as a general term for a Sensing Management element in a sensing system.
For the sensing performance monitoring:
the Sensing Management function receives the monitoring data from the Sensing Data Collection function, for example ground truth data, and after comparing the sensing output and the ground truth, the sensing performance can be evaluated.
the Sensing Management function receives the output of the Sensing Application function, and the output includes the performance of the sensing application.
Performance feedback/re-model request:
Applied if certain information derived from sensing application function or information derived from the performance monitoring in the sensing management function is suitable for improvement of the sensing model trained in Sensing Modelling function.
The Sensing Management function may observe that the sensing performance of the current sensing model is not good enough. For example, for channel construction, a sensing model is generated according to a static environment map, but when there are many moving targets in the environment, causing too much signal reflection, the channel construction model is inapplicable. In this case, the Sensing Management function will send the current sensing performance to the Sensing Modelling function, including the current sensing output and its accuracy, resolution, and so on. In addition, the Sensing Management function also requests the Sensing Modelling function to retrain the model, so as to get an updated sensing model.
Sensing model selection/ (de) activation/switching/fallback:
When the Sensing Management function observes that the sensing performance of the current sensing model is not good enough, it can send model switching signalling to the Sensing Application function to switch to another sensing model, or send fallback signalling to indicate that the Sensing Application function is to use a non-sensing mode.
When there are multiple candidate sensing models, the Sensing Management function can indicate to the Sensing Application function which sensing model the Sensing Application is to use, and activate or de-activate one or multiple of the candidate sensing models.
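The monitoring and switching/fallback decisions above can be sketched as follows. The performance metric (fraction of matches against ground truth), the threshold, and the message fields are illustrative assumptions only; the disclosure does not prescribe a specific metric or message format:

```python
# Illustrative sketch of the Sensing Management decisions: evaluate
# performance against monitoring (ground-truth) data, then keep the
# current model, switch to a candidate model, or fall back to a
# non-sensing mode. Thresholds and message names are hypothetical.

def evaluate(sensing_output, ground_truth):
    """Fraction of sensing outputs matching the ground truth."""
    matches = sum(o == g for o, g in zip(sensing_output, ground_truth))
    return matches / len(ground_truth)

def manage(performance, target, candidate_models):
    """Return the signalling the Sensing Management function would send."""
    if performance >= target:
        return {"action": "keep"}
    if candidate_models:                       # model switching signalling
        return {"action": "switch", "model": candidate_models[0],
                "remodel_request": True}       # also ask for a re-train
    return {"action": "fallback"}              # non-sensing mode

perf = evaluate([1, 1, 0, 1], [1, 1, 1, 1])   # 3 of 4 correct -> 0.75
decision = manage(perf, target=0.9, candidate_models=["model-B"])
```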
Sensing Model transfer request:
Sensing Management function sends a sensing model transfer request to the Sensing Results Storage function to request a model for the Sensing Application function, and the request can be an initial model transfer request or an updated model transfer request.
Sensing Application
Sensing Application is a function that provides sensing decision output or sensing inference output (predictions or detections, for example). Some examples are target detection and channel prediction. The sensing application function is also responsible for performing actions according to sensing results; for example, it triggers or performs corresponding actions according to a sensing decision or prediction, and it may trigger actions directed to other entities or to itself. The sensing application function is also responsible for data preparation (such as data pre-processing and cleaning, formatting, and transformation) based on Action Data delivered by a Data Collection function.
The Sensing Application can also be referred to as Sensing Action, Sensing in RAN, Sensing usage, Sensing use cases, sensing assisted communication, or sensing service. The name “output generator” is also used herein as a general term for a Sensing Application element in a sensing system.
Sensing Application uses sensing results to assist communication and/or perform actions according to the sensing results. For example, a sensing application for assisting communication may perform any of various communication-related functions or operations based on sensing results (such as an RF environment map). Such functions may include, for example, any one or more of the following: beam prediction, CSI prediction, mobility prediction, beam management. As another example, when a sensing result from Sensing Modelling detects an intruder in a smart home, the Sensing Application may send an alarm message to a user (the home owner and/or police, for example) to inform the user of the incident.
Output from the Sensing Application function is the output of the sensing model produced by the Sensing Application function in Fig. 1. The Sensing Application function should signal the outputs of the sensing to nodes that have explicitly requested them (via subscription, for example), or to nodes that are subject to actions based on the output from the Sensing Application.
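The delivery rule just described (subscribed nodes plus action-target nodes) can be sketched as a small publish/subscribe step; the node identifiers and function names are illustrative assumptions only:

```python
# Illustrative sketch: the Sensing Application delivers its output to
# nodes that subscribed to it and to nodes that are subject to actions
# based on it. Names are hypothetical.

subscribers = set()       # nodes that explicitly requested the output
action_targets = set()    # nodes subject to actions based on the output

def subscribe(node):
    """Register a node that explicitly requests sensing outputs."""
    subscribers.add(node)

def recipients():
    """Nodes that should receive the Sensing Application output."""
    return subscribers | action_targets

subscribe("UE1")
action_targets.add("BS2")        # e.g. a node whose beam is adjusted
delivered = sorted(recipients())
```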
Sensing Results Storage
Sensing Results Storage is a function that stores the sensing models, for example the reconstructed physical world (environment map, target and its location, for example). The storage location can be within the RAN (BS and/or UE side, for example), or outside the RAN (core network or a third party, for example). It receives the sensing model from the Sensing Modelling function; the model may be the first trained model or the re-trained/updated model.
Sensing Results Storage can also be referred to as Sensing Storage, RAN Storage, local RAN storage, or RAN and Core Network storage. The name “storage subsystem” is also used herein as a general term for a Sensing Results Storage element in a sensing system. A model is one type of sensing result shown in Fig. 1. More generally, Sensing Results Storage stores sensing results, which may, but need not necessarily, include a model.
Model transfer:
The Sensing Results Storage function may receive Sensing Model transfer request from the Sensing Management function, and if it received the request:
The Sensing Results Storage function will send the corresponding model to the Sensing Application function. For example, the request indicates the requested model ID, and the Sensing Results Storage function sends the model with the requested ID. Or the request indicates the sensing functionality ID and/or a sensing performance requirement (sensing accuracy, sensing distance/speed/angle resolution, for example), and the Sensing Results Storage function delivers a model satisfying the indicated sensing functionality and the performance requirement.
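The two request forms described above (by model ID, or by functionality plus performance requirement) can be sketched as a catalogue lookup. The catalogue contents, field names, and the accuracy metric are illustrative assumptions, not part of the disclosure:

```python
# Illustrative sketch of the model-transfer lookup: Sensing Results
# Storage serves a request either by model ID, or by sensing
# functionality plus a minimum performance requirement.

CATALOGUE = [
    {"model_id": "m1", "functionality": "channel_reconstruction",
     "accuracy": 0.90},
    {"model_id": "m2", "functionality": "channel_reconstruction",
     "accuracy": 0.97},
    {"model_id": "m3", "functionality": "target_detection",
     "accuracy": 0.95},
]

def transfer(request):
    """Return the model matching the transfer request, or None."""
    if "model_id" in request:                       # request by model ID
        for m in CATALOGUE:
            if m["model_id"] == request["model_id"]:
                return m
        return None
    # request by functionality ID and performance requirement
    matches = [m for m in CATALOGUE
               if m["functionality"] == request["functionality"]
               and m["accuracy"] >= request["min_accuracy"]]
    return matches[0] if matches else None

by_id = transfer({"model_id": "m3"})
by_req = transfer({"functionality": "channel_reconstruction",
                   "min_accuracy": 0.95})
```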
It is noted that there may be one or multiple devices within a function (implementing or supporting a function, fully in the case of one device or partially in the case of multiple devices) . For example, for multiple TRP sensing, multiple TRPs perform sensing measurement to collect data and perform sensing modelling (to construct the surrounding environment sensed by a TRP for example) . There may be a header TRP which is responsible for sensing modelling fusion to obtain a large unified sensing model (the whole environment for example) , where the other multiple TRPs send their sensing modelling results to the header TRP.
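The multi-TRP case above, where non-header TRPs send their local modelling results to a header TRP for fusion into one unified model, can be sketched as follows. The representation of a partial model as a dictionary of sensed regions is an illustrative assumption only:

```python
# Illustrative sketch of the multi-TRP case: each TRP builds a partial
# model of its own surroundings and sends it to a header TRP, which
# fuses the partial models into one unified environment model.

def local_model(trp_id, sensed_region):
    """Partial sensing model constructed by one TRP."""
    return {"trp": trp_id, "region": sensed_region}

def fuse(partial_models):
    """Header TRP: merge partial models into one unified environment."""
    unified = {}
    for pm in partial_models:
        unified[pm["trp"]] = pm["region"]
    return unified

partials = [local_model("TRP1", {"north": "clear"}),
            local_model("TRP2", {"south": "obstacle"})]
environment = fuse(partials)   # header TRP output: the whole environment
```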
Physical entity of the sensing functions:
Each sensing function in Fig. 1 can be located at a UE, a BS, the core network, or a 3rd party.
Different sensing functions can be located at the same physical entity or at different physical entities.
A sensing functional framework (including the signaling interface among functions) to support sensing procedures in the network is defined.
Fig. 1 and embodiment 1 provide an example, and other examples are provided by other embodiments herein.
Embodiment 1 and some other embodiments refer to a sensing model, which is one example of a sensing result. The present disclosure is not in any way restricted to a sensing model, and features disclosed herein may also or instead be applied to other types of sensing results.
EMBODIMENT 2
Fig. 2 illustrates another example of a sensing functional framework and procedures.
The sensing functional framework and the flowchart shown in Fig. 2 include 5 parts.
These parts are discussed below, and several of these parts are common parts that may be the same or substantially the same as parts shown in Fig. 1 and described above.
Variations of elements or features as disclosed above with reference to Fig. 1 and embodiment 1 also apply to the same or similar elements or features of Fig. 2 and embodiment 2.
Sensing Data Collection
Sensing Data Collection is a function that provides input data to the Sensing Modelling, Sensing Management, and Sensing Application functions. Examples of input data may include measurements from UEs or different network entities, where a measurement may be an RF sensing measurement or a non-RF sensing (LIDAR (Light Detection and Ranging), camera, video, sensor, and so on) measurement.
The three types of data shown by way of example in Fig. 1 are also shown in Fig. 2 for illustrative embodiment 2:
Training Data: Data needed as input for the Sensing Modelling function, such as data for sensing analysis, including assistance information.
Monitoring Data: Data needed as input for the Sensing Management function.
Action data: Data needed as input for the Sensing Application function.
Sensing Modelling
Sensing Modelling is a function that reconstructs the physical world (that is, gets a model of the physical world), including environment reconstruction, channel reconstruction (by a ray tracing scheme, for example), target reconstruction, digital twin, and so on. The Sensing Modelling function should be able to request specific information to be used to train the sensing model and to avoid reception of unnecessary information.
Sensing Modelling function can receive a sensing model from the Sensing Management function, such as a foundation sensing model, then the Sensing Modelling function fine-tunes the model to get an updated model.
One difference between embodiment 2 and embodiment 1 is this feature, that the Sensing Modelling function in Fig. 2 can receive a sensing model (in the example shown) , or more generally a sensing result, from the Sensing Management function and fine-tune the model (result) to get an updated model (result) . A sensing result that is provided to the Sensing Modelling function from the Sensing Management function may be referred to as a foundation sensing result (or a foundation set of data) or a base sensing result or (or a base set of data) , for example.
In addition, the sensing modelling function is also responsible for data processing, such as data pre-processing and cleaning, formatting, and transformation based on Training Data delivered by a Sensing Data Collection function, if required.
Trained/Updated Model in Fig. 2, as in Fig. 1: Used to deliver a trained sensing model to the Sensing Results Storage function or to deliver an updated sensing model to the Sensing Results Storage function.
Sensing Modelling features and examples described above for embodiment 1 may also or instead apply to embodiment 2.
Sensing Management
Sensing Management is a function that is responsible for performing sensing control for the Sensing Modelling and Sensing Application functions. Sensing Management monitors the sensing output, and if the sensing results are no longer applicable, it will request the Sensing Modelling function to re-train the model, and it will indicate to the Sensing Application function to switch the model.
For the sensing performance monitoring:
the Sensing Management function receives the monitoring data from Sensing Data Collection function, for example the ground truth data, and after comparing the sensing output and the ground truth, the sensing performance can be evaluated.
the Sensing Management function receives the output of the Sensing Application function, and the output includes the performance of the sensing application.
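The performance monitoring described above can be sketched as follows. This is a minimal illustrative sketch only: the metric (mean position error between the sensing output and the ground truth, for 2-D target positions) and the function names are assumptions, not part of the framework definition.

```python
# Minimal sketch of sensing performance monitoring: the Sensing Management
# function compares the sensing output with ground truth from data collection.
import math

def evaluate_sensing_performance(outputs, ground_truth):
    """Mean Euclidean position error between sensing outputs and ground truth."""
    errors = [math.dist(o, t) for o, t in zip(outputs, ground_truth)]
    return sum(errors) / len(errors)

# Two sensed target positions compared against their ground-truth positions.
outputs = [(10.0, 5.0), (3.0, 4.0)]
truth = [(10.0, 5.5), (3.0, 4.0)]
mean_error = evaluate_sensing_performance(outputs, truth)  # 0.25 (metres)
```

A low mean error indicates that the current model remains applicable; a high error can trigger a performance feedback/re-model request.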
Performance feedback/re-model request:
Applied if certain information derived from sensing application function or information derived from the performance monitoring in the sensing management function is suitable for improvement of the sensing model trained in Sensing Modelling function.
The Sensing Management function may observe that the sensing performance of the current sensing model is not good enough. For example, for channel reconstruction, a sensing model is generated according to a static environment map, but when there are many moving targets in the environment, causing too much signal reflection, the channel reconstruction model is inapplicable. In this case, the Sensing Management function will send the current sensing performance to the Sensing Modelling function, including the current sensing output and its accuracy, resolution, and so on. In addition, the Sensing Management function also requests the Sensing Modelling function to retrain the model and to provide an updated sensing model.
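As an illustrative sketch, the re-model request described above may be represented as follows; the message field names and the accuracy threshold of 0.9 are assumptions for the sketch and are not defined by the framework.

```python
# Illustrative sketch of a performance feedback / re-model request. The field
# names and the default accuracy threshold are assumptions, not specified values.

def build_remodel_request(model_id, accuracy, resolution, required_accuracy=0.9):
    """Return a retrain request when the current sensing model misses its
    accuracy target; return None when no retraining is needed."""
    if accuracy >= required_accuracy:
        return None
    return {
        "type": "re-model-request",
        "model_id": model_id,
        "current_accuracy": accuracy,
        "current_resolution_m": resolution,
        "reason": "sensing performance below requirement",
    }

# A channel-reconstruction model degraded by moving targets triggers retraining.
request = build_remodel_request("channel-recon-01", accuracy=0.72, resolution=0.5)
```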
Sensing model selection/ (de) activation/switching/fallback:
When the Sensing Management function observes that the sensing performance of the current sensing model is not good enough, it can send model switching signalling to the Sensing Application function to switch to another sensing model, or send fallback signalling to indicate to the Sensing Application function to use a non-sensing mode.
When there are multiple candidate sensing models, the Sensing Management function can indicate to the Sensing Application function which sensing model the Sensing Application is to use, and activate or de-activate one or multiple of the candidate sensing models.
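The selection/switching/fallback behaviour described above can be sketched as follows, assuming (hypothetically) that each candidate sensing model is tracked together with a monitored accuracy:

```python
def select_model(candidates, required_accuracy):
    """Activate the first candidate sensing model meeting the accuracy
    requirement; if none qualifies, fall back to non-sensing mode."""
    for model_id, accuracy in candidates.items():
        if accuracy >= required_accuracy:
            return {"action": "switch", "model_id": model_id}
    return {"action": "fallback", "mode": "non-sensing"}

# model-A has degraded, so the Sensing Application is switched to model-B.
candidates = {"model-A": 0.70, "model-B": 0.93}
decision = select_model(candidates, required_accuracy=0.9)
```

When no candidate meets the requirement, the returned fallback decision corresponds to the fallback signalling described above.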
Sensing Model transfer request:
Sensing Management function sends a sensing model transfer request to the Sensing Results Storage function to request a model for the Sensing Application function, and the request can be an initial model transfer request or an updated model transfer request.
Sensing Model transfer:
The sensing management function may have some basic sensing models (such as a foundation model), from the RAN or core network or a 3rd party for example. To enable fast training at the Sensing Modelling function, it can send one or multiple basic sensing models to the Sensing Modelling function, and the Sensing Modelling function can fine-tune the model rather than modelling from the very beginning.
This type of model transfer, by the Sensing Management function to the Sensing Modelling function in Fig. 2, is another difference of embodiment 2 relative to embodiment 1. A model and model transfer are referenced in the context of the example shown in Fig. 2, but features related to a model and model transfer may apply more generally to sensing results.
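To illustrate why transferring a foundation model enables fast training, the following sketch fine-tunes a deliberately trivial one-parameter model from a received foundation weight instead of training from scratch; the learning rate, epoch count, and training data are assumptions for illustration only.

```python
# Sketch of fine-tuning a foundation sensing model (embodiment 2). The model is
# reduced to a single linear weight y = w * x purely for illustration.

def fine_tune(weight, data, lr=0.1, epochs=50):
    """Run a few gradient-descent steps starting from the foundation weight."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (weight * x - y) * x   # d/dw of (w*x - y)^2
            weight -= lr * grad
    return weight

foundation_weight = 1.0                  # received from Sensing Management
local_data = [(1.0, 2.0), (2.0, 4.0)]    # locally collected samples (y = 2x)
updated_weight = fine_tune(foundation_weight, local_data)  # converges near 2.0
```

Starting from a foundation weight near the target typically needs far fewer updates than modelling from the very beginning, which is the motivation for this model transfer.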
Sensing Application
Sensing Application is a function that provides sensing decision output or sensing inference output (predictions or detections, for example). Some examples are target detection and channel prediction. The sensing application function is also responsible for performing actions according to sensing results; for example, it triggers or performs corresponding actions according to a sensing decision or prediction, and it may trigger actions directed to other entities or to itself. The sensing application function is also responsible for data preparation (such as data pre-processing and cleaning, formatting, and transformation) based on Action Data delivered by a Data Collection function.
Output from the Sensing Application function is the output of the sensing model produced by a Sensing Application function in Fig. 2. The Sensing Application function should signal the outputs of the sensing to nodes that have explicitly requested them (via subscription for example) , or nodes that are subject to actions based on the output from Sensing Application.
Sensing Results Storage
Sensing Results Storage is a function that stores the sensing models, for example the reconstructed physical world (environment map, target and its location, for example). The storage location can be within the RAN (e.g. BS and/or UE side), or outside the RAN (core network or a third party, for example). It receives the sensing model from the Sensing Modelling function; the model may be the first trained model or the re-trained/updated model.
Model transfer:
The Sensing Results Storage function may receive a Sensing Model transfer request from the Sensing Management function, and if it receives the request:
The Sensing Results Storage function will send the corresponding model to the Sensing Application function. For example, the request indicates the requested model ID, and the Sensing Results Storage function sends the model with the requested ID. Alternatively, the request indicates the sensing functionality ID and/or a sensing performance requirement (sensing accuracy, sensing distance/speed/angle resolution, for example), and the Sensing Results Storage function delivers a model satisfying the indicated sensing functionality and performance requirement.
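The two request-resolution modes described above (by explicit model ID, or by functionality ID plus a performance requirement) can be sketched as follows; the dictionary layout and field names are assumptions for illustration.

```python
def retrieve_model(storage, request):
    """Resolve a model transfer request either by explicit model ID or by
    sensing functionality ID plus a minimum-accuracy requirement."""
    if "model_id" in request:
        return storage.get(request["model_id"])
    for model in storage.values():
        if (model["functionality"] == request["functionality_id"]
                and model["accuracy"] >= request["min_accuracy"]):
            return model
    return None

storage = {
    "m1": {"functionality": "target-detection", "accuracy": 0.85},
    "m2": {"functionality": "target-detection", "accuracy": 0.95},
}
by_id = retrieve_model(storage, {"model_id": "m1"})
by_req = retrieve_model(storage, {"functionality_id": "target-detection",
                                  "min_accuracy": 0.9})   # selects m2
```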
The examples of one or multiple devices within a function, as discussed at least above in the context of embodiment 1, may also or instead apply to embodiment 2.
Physical entity of the sensing functions:
For a sensing function in Fig. 2, as in Fig. 1, it can be located at UE, BS, core network, or a 3rd party.
For different sensing functions, each can be located at the same physical entity or different physical entities.
A sensing functional framework (including the signaling interface among functions) to support sensing procedure in the network is defined.
Fig. 1 and embodiment 1 provide an example, and Fig. 2 and embodiment 2 provide another example.
EMBODIMENT 3
Fig. 3 illustrates a further example of a sensing functional framework and procedures.
The sensing functional framework supports multiple sensing nodes and non-sensing nodes, and supports data fusion from multiple nodes to get processed and combined data to Sensing Modelling, Sensing Management, and Sensing Application functions.
The sensing functional framework and the flowchart shown in Fig. 3 includes the parts described below.
Several of these parts are the same or substantially the same as those in Fig. 1 and/or Fig. 2 and described above. Variations of elements or features as disclosed above with reference to Fig. 1 and embodiment 1 and/or with reference to Fig. 2 and embodiment 2 also apply to the same or similar elements or features of Fig. 3 and embodiment 3.
Data Fusion
Data Fusion is a function that provides input data to the Sensing Modelling, Sensing Management, and Sensing Application functions. It should be noted that data fusion could also be called data collection.
Another possible name for Data Fusion is input data generation. The name “input data generator” may be used to refer to a Data Fusion element of a sensing system, for example. Data Fusion may be considered a special case or embodiment of data collection.
The Data Fusion function receives input from one or multiple sensing functions, including RF sensing functions and non-RF sensing functions. An RF sensing function uses RF sensing to collect data, including 3GPP based RF sensing and non-3GPP based RF sensing (WiFi sensing or radar sensing, for example), and it can be TRP sensing or UE sensing measurement. A non-RF sensing function uses non-RF sensing to collect data, such as LIDAR (Light Detection and Ranging), camera, video, or other sensors.
Fig. 3 illustrates, by way of example, N RF sensing functions and N Non-RF sensing functions. Although Fig. 3 illustrates equal numbers of each, the number of RF sensing functions may be different from the number of Non-RF sensing functions (for example, there may be N RF sensing functions and M Non-RF sensing functions, where N is not equal to M). More generally, there may be one or more sensing functions, and those sensing functions may include either or both of RF sensing functions and Non-RF sensing functions. Sensing functions may also be referred to as sensors or sensing devices, for example.
The data fusion function is responsible for data processing, including data pre-processing and cleaning, formatting, and transformation, integrating multiple data sources to produce more useful information than that provided by any individual data source. For example, the data fusion function in Fig. 3 combines RF sensing data and non-RF sensing data derived from multiple functions such that the resulting information has less uncertainty than when the data are used individually.
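As one concrete (assumed) example of such fusion, inverse-variance weighting combines an RF range estimate with a LIDAR range estimate so that the fused variance is lower than either input variance; the specific weighting scheme and numbers are illustrative assumptions.

```python
# Sketch of inverse-variance data fusion: the fused estimate is more certain
# (lower variance) than any individual RF or non-RF input.

def fuse(estimates):
    """estimates: list of (value, variance) pairs. Returns (value, variance)."""
    weights = [1.0 / var for _, var in estimates]
    fused_value = sum(w * v for w, (v, _) in zip(weights, estimates)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused_value, fused_variance

rf_range = (10.2, 0.04)     # metres, RF sensing estimate and its variance
lidar_range = (10.0, 0.01)  # metres, LIDAR estimate and its variance
value, variance = fuse([rf_range, lidar_range])  # variance 0.008 < both inputs
```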
The three types of data shown by way of example in Figs. 1 and 2 are also shown in Fig. 3 for illustrative embodiment 3:
Training Data: Data needed as input for the Sensing Modelling function, such as data for sensing analysis, including assistance information.
Monitoring Data: Data needed as input for the Sensing Management function.
Action data: Data needed as input for the Sensing Application function.
Embodiment 3 differs from embodiment 1 in that embodiment 3 includes the data fusion function and sensing functions.
Sensing Modelling
Sensing Modelling is a function that reconstructs the physical world (that is, obtains a model of the physical world), including environment reconstruction, channel reconstruction (by a ray tracing scheme, for example), target reconstruction, digital twin, and so on. The Sensing Modelling function should be able to request specific information to be used to train the sensing model and to avoid reception of unnecessary information.
In addition, the sensing modelling function is also responsible for data processing, such as data pre-processing and cleaning, formatting, and transformation based on Training Data delivered by a Sensing Data Collection function, if required.
Trained/Updated Model in Fig. 3, as in Figs. 1 and 2: Used to deliver a trained sensing model to the Sensing Results Storage function or to deliver an updated sensing model to the Sensing Results Storage function.
Sensing Modelling features and examples described above for embodiment 1 may also or instead apply to embodiment 3.
Sensing Management
Sensing Management is a function that is responsible for performing sensing control of the Sensing Modelling and Sensing Application functions. Sensing Management monitors the sensing output, and if the sensing results are no longer applicable, it will request the Sensing Modelling function to re-train the model, and it will indicate to the Sensing Application function to switch the model.
For the sensing performance monitoring:
the Sensing Management function receives the monitoring data from Sensing Data Collection (and specifically from the Data Fusion function in the example shown in Fig. 3) , for example the ground truth data, and after comparing the sensing output and the ground truth, the sensing performance can be evaluated.
the Sensing Management function receives the output of the Sensing Application function, and the output includes the performance of the sensing application.
Performance feedback/re-model request:
Applied if certain information derived from sensing application function or information derived from the performance monitoring in the sensing management function is suitable for improvement of the sensing model trained in Sensing Modelling function.
The Sensing Management function may observe that the sensing performance of the current sensing model is not good enough. For example, for channel reconstruction, a sensing model is generated according to a static environment map, but when there are many moving targets in the environment, causing too much signal reflection, the channel reconstruction model is inapplicable. In this case, the Sensing Management function will send the current sensing performance to the Sensing Modelling function, including the current sensing output and its accuracy, resolution, and so on. In addition, the Sensing Management function also requests the Sensing Modelling function to retrain the model and to provide an updated sensing model.
Sensing model selection/ (de) activation/switching/fallback:
When the Sensing Management function observes that the sensing performance of the current sensing model is not good enough, it can send model switching signalling to the Sensing Application function to switch to another sensing model, or send fallback signalling to indicate to the Sensing Application function to use a non-sensing mode.
When there are multiple candidate sensing models, the Sensing Management function can indicate to the Sensing Application function which sensing model the Sensing Application is to use, and activate or de-activate one or multiple of the candidate sensing models.
Sensing Model transfer request:
Sensing Management function sends a sensing model transfer request to the Sensing Results Storage function to request a model for the Sensing Application function, and the request can be an initial model transfer request or an updated model transfer request.
Sensing Application
Sensing Application is a function that provides sensing decision output or sensing inference output (predictions or detections, for example). Some examples are target detection and channel prediction. The sensing application function is also responsible for performing actions according to sensing results; for example, it triggers or performs corresponding actions according to a sensing decision or prediction, and it may trigger actions directed to other entities or to itself. The sensing application function is also responsible for data preparation (such as data pre-processing and cleaning, formatting, and transformation) based on Action Data delivered by a Data Collection function.
Output from the Sensing Application function is the output of the sensing model produced by a Sensing Application function. The Sensing Application function should signal the outputs of the sensing to nodes that have explicitly requested them (via subscription for example) , or nodes that are subject to actions based on the output from Sensing Application.
Sensing Results Storage
Sensing Results Storage is a function that stores the sensing models, for example the reconstructed physical world (environment map, target and its location, for example). The storage location can be within the RAN (e.g. BS and/or UE side), or outside the RAN (core network or a third party, for example). It receives the sensing model from the Sensing Modelling function; the model may be the first trained model or the re-trained/updated model.
Model transfer:
The Sensing Results Storage function may receive a Sensing Model transfer request from the Sensing Management function, and if it receives the request:
The Sensing Results Storage function will send the corresponding model to the Sensing Application function. For example, the request indicates the requested model ID, and the Sensing Results Storage function sends the model with the requested ID. Alternatively, the request indicates the sensing functionality ID and/or a sensing performance requirement (sensing accuracy, sensing distance/speed/angle resolution, for example), and the Sensing Results Storage function delivers a model satisfying the indicated sensing functionality and performance requirement.
The examples of one or multiple devices within a function, as discussed at least above in the context of embodiment 1, may also or instead apply to embodiment 3.
Physical entity of the sensing functions:
For a sensing function in Fig. 3, as in Figs. 1 and 2, it can be located at UE, BS, core network, or a 3rd party.
For different sensing functions, each can be located at the same physical entity or different physical entities.
A sensing functional framework (including the signaling interface among functions) to support sensing procedure in the network is defined.
Figs. 1 and 2 and embodiments 1 and 2 provide examples, and Fig. 3 and embodiment 3 provide another example.
EMBODIMENT 4
Fig. 4 illustrates yet another example of a sensing functional framework and procedures.
The sensing functional framework supports an anchor management function, where an anchor is a specific type of node (for example a sensing UE deployed by an operator), and the anchor can report ground truth to the network to assist the network in calibrating/adjusting sensing results.
As described at least below, a node is one example of an anchor, and an anchor may alternatively be a type of device other than a node.
The sensing functional framework and the flowchart shown in Fig. 4 includes the parts described below.
Several of these parts are the same or substantially the same as those in one or more of Figs. 1 to 3 and described above. Variations of elements or features as disclosed above with reference to Fig. 1 and embodiment 1, with reference to Fig. 2 and embodiment 2, and/or with reference to Fig. 3 and embodiment 3 also apply to the same or similar elements or features of Fig. 4 and embodiment 4.
Anchor Management
Anchor Management is one feature or function by which embodiment 4 differs from embodiment 1.
Anchor Management is a function that is responsible for performing control of anchors and non-anchors. The Anchor Management function can configure which node is the anchor, and indicate to the anchor to perform data collection and the corresponding type of data to collect. In addition, the Anchor Management function can also indicate to a non-anchor to perform data collection and the corresponding type of data to collect.
Anchor Management may also be referred to, for example, as Sensing Anchor Management or, in the case of a sensing anchor being a sensing device or sensor, as Sensing Device Management, Sensor (or Sensing Device) Management, or Sensing Function Management, for example. The name “anchor manager” is also used herein to refer generally to an Anchor Management element of a sensing system.
An anchor is the node that can report ground truth to other functions, such as Sensing Modelling, Sensing Management, Sensing Application functions. For example, the anchor is deployed by the network operator at a known location, and the anchor performs sensing measurement and reports the sensed data to the network, including the measurement data and the ground truth, where the ground truth includes the position information.
A node is one example of an anchor device. More generally, an anchor may be a device, which may be a node but need not necessarily be a node, that can report ground truth.
A second example of an anchor is a device, deployed by the network operator at a known location for example, that transmits a sensing signal to assist one or more other sensing devices to perform sensing measurement.
As a third example, an anchor is a passive object. For example, an anchor may be an object with known information such as shape, size, orientation, speed, location, distances or relative motion between objects. The anchor information can be indicated from a BS to a UE for example, in which case the UE can perform sensing measurement and compare its sensing results with the anchor information, so as to calibrate its sensing results.
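The calibration in the third example can be sketched as follows; the bias model (a constant additive range bias derived from one anchor of known distance) is an assumption for illustration, as are the function names and numbers.

```python
# Sketch of anchor-based calibration (third anchor example): a UE measures the
# range to a passive anchor whose true distance is known, derives the bias,
# and applies the correction to subsequent sensing measurements.

def estimate_bias(measured_anchor_range, true_anchor_range):
    """Systematic range bias derived from a known anchor."""
    return measured_anchor_range - true_anchor_range

def calibrate(measurements, bias):
    """Apply the anchor-derived bias correction to raw sensing measurements."""
    return [m - bias for m in measurements]

bias = estimate_bias(measured_anchor_range=50.6, true_anchor_range=50.0)
corrected = calibrate([12.6, 33.1], bias)  # approximately [12.0, 32.5]
```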
Anchor Data Collection
Anchor Data Collection is another difference between embodiment 4 and embodiment 1.
Anchor Data Collection is a function that provides input data to the Sensing Modelling, Sensing Management, and Sensing Application functions. The input data includes ground truth information. Ground truth refers to the true answer to a specific problem or question. For example, when performing target sensing, the ground truth is the target's exact location and exact shape. For environment reconstruction, the ground truth is the exact environment, including building locations/shapes, street locations, and so on. Examples of input data may include measurements from UEs or different network entities, where the measurement may be an RF sensing measurement or a non-RF sensing (LIDAR (Light Detection and Ranging), camera, video, sensor, and so on) measurement.
The input data from anchor data collection may include the three types of data shown by way of example in Fig. 1:
Training Data: Data needed as input for the Sensing Modelling function, such as data for sensing analysis, including assistance information.
Monitoring Data: Data needed as input for the Sensing Management function.
Action data: Data needed as input for the Sensing Application function.
Non-Anchor Data Collection
Embodiment 4 also differs from embodiment 1 in that it includes Non-anchor Data Collection.
Non-anchor Data Collection is a function that provides input data to Sensing Modelling, Sensing Management, Sensing Application functions. The input data does not include the ground truth information. Examples of input data may include measurements from UEs or different network entities, where the measurement may be RF sensing measurement, non-RF sensing (LIDAR (Light Detection and Ranging) , camera, video, sensor, and so on) measurement.
The input data from non-anchor data collection may include the three types of data shown by way of example in Fig. 1:
Training Data: Data needed as input for the Sensing Modelling function, such as data for sensing analysis, including assistance information.
Monitoring Data: Data needed as input for the Sensing Management function.
Action data: Data needed as input for the Sensing Application function.
Anchor Data collection and Non-anchor Data collection may be considered special cases or embodiments of data collection. Sensors in a sensing system may include one or more sensors to provide (to Anchor Data collection) anchor data related to one or more sensing anchors, and one or more sensors to provide (to Non-anchor Data collection) non-anchor data that is not related to a sensing anchor.
Sensing Modelling
Sensing Modelling is a function that reconstructs the physical world (that is, obtains a model of the physical world), including environment reconstruction, channel reconstruction (by a ray tracing scheme, for example), target reconstruction, digital twin, and so on. The Sensing Modelling function should be able to request specific information to be used to train the sensing model and to avoid reception of unnecessary information.
In addition, the sensing modelling function is also responsible for data processing, such as data pre-processing and cleaning, formatting, and transformation based on Training Data delivered by an Anchor Data Collection and/or a Non-Anchor Data Collection function, if required.
Trained/Updated Model in Fig. 4, as in Figs. 1 to 3: Used to deliver a trained sensing model to the Sensing Results Storage function or to deliver an updated sensing model to the Sensing Results Storage function.
Sensing Modelling features and examples described above for embodiment 1 may also or instead apply to embodiment 4.
Sensing Management
Sensing Management is a function that is responsible for performing sensing control of the Sensing Modelling and Sensing Application functions. Sensing Management monitors the sensing output, and if the sensing results are no longer applicable, it will request the Sensing Modelling function to re-train the model, and it will indicate to the Sensing Application function to switch the model.
For the sensing performance monitoring:
the Sensing Management function receives the monitoring data from an Anchor Data Collection and/or a Non-Anchor Data Collection function, for example the ground truth data, and after comparing the sensing output and the ground truth, the sensing performance can be evaluated.
the Sensing Management function receives the output of the Sensing Application function, and the output includes the performance of the sensing application.
Performance feedback/re-model request:
Applied if certain information derived from sensing application function or information derived from the performance monitoring in the sensing management function is suitable for improvement of the sensing model trained in Sensing Modelling function.
The Sensing Management function may observe that the sensing performance of the current sensing model is not good enough. For example, for channel reconstruction, a sensing model is generated according to a static environment map, but when there are many moving targets in the environment, causing too much signal reflection, the channel reconstruction model is inapplicable. In this case, the Sensing Management function will send the current sensing performance to the Sensing Modelling function, including the current sensing output and its accuracy, resolution, and so on. In addition, the Sensing Management function also requests the Sensing Modelling function to retrain the model and to provide an updated sensing model.
Sensing model selection/ (de) activation/switching/fallback:
When the Sensing Management function observes that the sensing performance of the current sensing model is not good enough, it can send model switching signalling to the Sensing Application function to switch to another sensing model, or send fallback signalling to indicate to the Sensing Application function to use a non-sensing mode.
When there are multiple candidate sensing models, the Sensing Management function can indicate to the Sensing Application function which sensing model the Sensing Application is to use, and activate or de-activate one or multiple of the candidate sensing models.
Sensing Model transfer request:
Sensing Management function sends a sensing model transfer request to the Sensing Results Storage function to request a model for the Sensing Application function, and the request can be an initial model transfer request or an updated model transfer request.
Sensing Application
Sensing Application is a function that provides sensing decision output or sensing inference output (predictions or detections, for example). Some examples are target detection and channel prediction. The sensing application function is also responsible for performing actions according to sensing results; for example, it triggers or performs corresponding actions according to a sensing decision or prediction, and it may trigger actions directed to other entities or to itself. The sensing application function is also responsible for data preparation (such as data pre-processing and cleaning, formatting, and transformation) based on Action Data delivered by an Anchor Data Collection and/or a Non-Anchor Data Collection function.
Output from the Sensing Application function is the output of the sensing model produced by a Sensing Application function. The Sensing Application function should signal the outputs of the sensing to nodes that have explicitly requested them (via subscription for example) , or nodes that are subject to actions based on the output from Sensing Application.
Sensing Results Storage
Sensing Results Storage is a function that stores the sensing models, for example the reconstructed physical world (environment map, target and its location, for example). The storage location can be within the RAN (e.g. BS and/or UE side), or outside the RAN (core network or a third party, for example). It receives the sensing model from the Sensing Modelling function; the model may be the first trained model or the re-trained/updated model.
Model transfer:
The Sensing Results Storage function may receive a Sensing Model transfer request from the Sensing Management function, and if it receives the request:
The Sensing Results Storage function will send the corresponding model to the Sensing Application function. For example, the request indicates the requested model ID, and the Sensing Results Storage function sends the model with the requested ID. Alternatively, the request indicates the sensing functionality ID and/or a sensing performance requirement (sensing accuracy, sensing distance/speed/angle resolution, for example), and the Sensing Results Storage function delivers a model satisfying the indicated sensing functionality and performance requirement.
The examples of one or multiple devices within a function, as discussed at least above in the context of embodiment 1, may also or instead apply to embodiment 4.
Physical entity of the sensing functions:
For a sensing function in Fig. 4, as in Figs. 1 to 3, it can be located at UE, BS, core network, or a 3rd party.
For different sensing functions, each can be located at the same physical entity or different physical entities.
It should be noted that the anchor management function can be combined with the data fusion function proposed in embodiment 3, where the data fusion function can be placed after the anchor data collection and non-anchor data collection functions, so as to obtain combined, more accurate data.
A sensing functional framework (including the signaling interface among functions) to support sensing procedure in the network is defined.
Figs. 1 to 3 and embodiments 1 to 3 provide examples, and Fig. 4 and embodiment 4 provide another example.
Sensing Framework
Alt-1: including Sensing data collection, Sensing Modelling, Sensing Management, Sensing Inference, Sensing Model Storage
This example (Alt-1) is consistent with embodiment 1 and Fig. 1.
Alt-2: Sensing Management function performs model transfer to Sensing Modelling function.
This example (Alt-2) is consistent with embodiment 2 and Fig. 2.
Alt-3: add Data Fusion module
This example (Alt-3) is consistent with embodiment 3 and Fig. 3.
Alt-4: add Anchor Management module, anchor(s) are deployed by an operator
This example (Alt-4) is consistent with embodiment 4 and Fig. 4.
Other embodiments are also possible, including:
embodiments that implement or support some, but not all, features of any of these disclosed embodiments;
embodiments that implement or support features from two or more of these disclosed embodiments;
embodiments that implement or support different features;
embodiments that implement or support similar features.
DEFINITIONS OF ACRONYMS & GLOSSARIES
The following acronyms may appear herein, and expansions are provided below.
LTE       Long Term Evolution
NR        New Radio
BWP       Bandwidth Part
BS        Base Station
CA        Carrier Aggregation
CC        Component Carrier
CG        Cell Group
CSI       Channel State Information
CSI-RS    Channel State Information Reference Signal
DC        Dual Connectivity
DCI       Downlink Control Information
DL        Downlink
DL-SCH    Downlink Shared Channel
EN-DC     E-UTRA NR Dual Connectivity with MCG using E-UTRA and SCG using NR
gNB       Next Generation (or 5G) Base Station
HARQ-ACK  Hybrid Automatic Repeat Request Acknowledgement
MCG       Master Cell Group
MCS       Modulation and Coding Scheme
MAC-CE    Medium Access Control-Control Element
PBCH      Physical Broadcast Channel
PCell     Primary Cell
PDCCH     Physical Downlink Control Channel
PDSCH     Physical Downlink Shared Channel
PRACH     Physical Random Access Channel
PRG       Physical Resource Block Group
PSCell    Primary SCG Cell
PSS       Primary Synchronization Signal
PUCCH     Physical Uplink Control Channel
PUSCH     Physical Uplink Shared Channel
RACH      Random Access Channel
RAPID     Random Access Preamble Identity
RB        Resource Block
RE        Resource Element
RRM       Radio Resource Management
RMSI      Remaining System Information
RS        Reference Signal
RSRP      Reference Signal Received Power
RRC       Radio Resource Control
SCG       Secondary Cell Group
SFN       System Frame Number
SL        Sidelink
SCell     Secondary Cell
SPS       Semi-Persistent Scheduling
SR        Scheduling Request
SRI       SRS Resource Indicator
SRS       Sounding Reference Signal
SSS       Secondary Synchronization Signal
SSB       Synchronization Signal Block
SUL       Supplementary Uplink
TA        Timing Advance
TAG       Timing Advance Group
TUE       Target UE
UCI       Uplink Control Information
UE        User Equipment
UL        Uplink
UL-SCH    Uplink Shared Channel
6G System Structure
Referring to Fig. 5, as an illustrative example without limitation, a simplified schematic illustration of a communication system is provided. The communication system 100 comprises a radio access network 120. The radio access network 120 may be a next generation (sixth generation (6G) or later, for example) radio access network, or a legacy (5G, 4G, 3G or 2G, for example) radio access network. One or more communication electronic devices (EDs) 110a, 110b, 110c, 110d, 110e, 110f, 110g, 110h, 110i, 110j (generically referred to as ED 110) may be interconnected to one another or connected to one or more network nodes (170a, 170b, generically referred to as 170) in the radio access network 120. A core network 130 may be a part of the communication system and may be dependent on or independent of the radio access technology used in the communication system 100. The communication system 100 also comprises a public switched telephone network (PSTN) 140, the internet 150, and other networks 160.
Fig. 6 illustrates an example communication system 100. In general, the communication system 100 enables multiple wireless or wired elements to communicate data and other content. The purpose of the communication system 100 may be to provide content, such as voice, data, video, and/or text, via broadcast, multicast and unicast, and so on. The communication system 100 may operate by sharing resources, such as carrier spectrum bandwidth, between its constituent elements. The communication system 100 may include a terrestrial communication system and/or a non-terrestrial communication system. The communication system 100 may provide a wide range of communication services and applications (such as earth monitoring, remote sensing, passive sensing and positioning, navigation and tracking, autonomous delivery and mobility, and so on) . The communication system 100 may provide a high degree of availability and robustness through a joint operation of the terrestrial communication system and the non-terrestrial communication system. For example, integrating a non-terrestrial communication system (or components thereof) into a terrestrial communication system can result in what may be considered a heterogeneous network comprising multiple layers. Compared to conventional communication networks, the heterogeneous network may achieve better overall performance through efficient multi-link joint operation, more flexible functionality sharing, and faster physical layer link switching between terrestrial networks and non-terrestrial networks.
The terrestrial communication system and the non-terrestrial communication system could be considered sub-systems of the communication system. In the example shown, the communication system 100 includes electronic devices (ED) 110a-110d (generically referred to as ED 110) , radio access networks (RANs) 120a-120b, non-terrestrial communication network 120c, a core network 130, a public switched telephone network (PSTN) 140, the internet 150, and other networks 160. The RANs 120a-120b include respective base stations (BSs) 170a-170b, which may be generically referred to as terrestrial transmit and receive points (T-TRPs) 170a-170b. The non-terrestrial communication network 120c includes an access node 172, which may be generically referred to as a non-terrestrial transmit and receive point (NT-TRP) 172.
Any ED 110 may be alternatively or additionally configured to interface, access, or communicate with any other T-TRP 170a-170b and NT-TRP 172, the internet 150, the core network 130, the PSTN 140, the other networks 160, or any combination of the preceding. In some examples, ED 110a may communicate an uplink and/or downlink transmission over an interface 190a with T-TRP 170a. In some examples, the EDs 110a, 110b and 110d may also communicate directly with one another via one or more sidelink air interfaces 190b. In some examples, ED 110d may communicate an uplink and/or downlink transmission over an interface 190c with NT-TRP 172.
The air interfaces 190a and 190b may use similar communication technology, such as any suitable radio access technology. For example, the communication system 100 may implement one or more channel access methods, such  as code division multiple access (CDMA) , time division multiple access (TDMA) , frequency division multiple access (FDMA) , orthogonal FDMA (OFDMA) , or single-carrier FDMA (SC-FDMA) in the air interfaces 190a and 190b. The air interfaces 190a and 190b may utilize other higher dimension signal spaces, which may involve a combination of orthogonal and/or non-orthogonal dimensions.
The air interface 190c can enable communication between the ED 110d and one or multiple NT-TRPs 172 via a wireless link, or simply a link. In some examples, the link is a dedicated connection for unicast transmission, a connection for broadcast transmission, or a connection between a group of EDs and one or multiple NT-TRPs for multicast transmission.
The RANs 120a and 120b are in communication with the core network 130 to provide the EDs 110a, 110b, and 110c with various services such as voice, data, and other services. The RANs 120a and 120b and/or the core network 130 may be in direct or indirect communication with one or more other RANs (not shown), which may or may not be directly served by the core network 130, and may or may not employ the same radio access technology as RAN 120a, RAN 120b, or both. The core network 130 may also serve as a gateway access between (i) the RANs 120a and 120b or the EDs 110a, 110b, and 110c or both, and (ii) other networks (such as the PSTN 140, the internet 150, and the other networks 160). In addition, some or all of the EDs 110a, 110b, and 110c may include functionality for communicating with different wireless networks over different wireless links using different wireless technologies and/or protocols. Instead of wireless communication (or in addition thereto), the EDs 110a, 110b, and 110c may communicate via wired communication channels to a service provider or switch (not shown), and to the internet 150. The PSTN 140 may include circuit-switched telephone networks for providing plain old telephone service (POTS). The internet 150 may include a network of computers and subnets (intranets), or both, and may incorporate protocols such as Internet Protocol (IP), Transmission Control Protocol (TCP), and User Datagram Protocol (UDP). The EDs 110a, 110b, and 110c may be multimode devices capable of operation according to multiple radio access technologies, and may incorporate the multiple transceivers necessary to support such operation.
6G Basic Component Structure
Fig. 7 illustrates another example of an ED 110 and a base station 170a, 170b and/or 172. The ED 110 is used to connect persons, objects, machines, and so on. The ED 110 may be widely used in various scenarios, for example, cellular communications, device-to-device (D2D) , vehicle to everything (V2X) , peer-to-peer (P2P) , machine-to-machine (M2M) , machine-type communications (MTC) , internet of things (IOT) , virtual reality (VR) , augmented reality (AR) , industrial control, self-driving, remote medical, smart grid, smart furniture, smart office, smart wearable, smart transportation, smart city, drones, robots, remote sensing, passive sensing, positioning, navigation and tracking, autonomous delivery and mobility, and so on.
Each ED 110 represents any suitable end user device for wireless operation and may include such devices (or may be referred to) as a user equipment/device (UE), a wireless transmit/receive unit (WTRU), a mobile station, a fixed or mobile subscriber unit, a cellular telephone, a station (STA), a machine type communication (MTC) device, a personal digital assistant (PDA), a smartphone, a laptop, a computer, a tablet, a wireless sensor, a consumer electronics device, a smart book, a vehicle, a car, a truck, a bus, a train, an IoT device, an industrial device, or an apparatus (a communication module, modem, or chip, for example) in the foregoing devices, among other possibilities. Future generation EDs 110 may be referred to using other terms. The base stations 170a and 170b are each a T-TRP and will hereafter be referred to as T-TRP 170. Also shown in Fig. 7, an NT-TRP will hereafter be referred to as NT-TRP 172. Each ED 110 connected to the T-TRP 170 and/or the NT-TRP 172 can be dynamically or semi-statically turned on (that is, established, activated, or enabled), turned off (that is, released, deactivated, or disabled) and/or configured in response to one or more of: connection availability and connection necessity.
The ED 110 includes a transmitter 201 and a receiver 203 coupled to one or more antennas 204. Only one antenna 204 is illustrated. One, some, or all of the antennas may alternatively be panels. The transmitter 201 and the receiver 203 may be integrated, as a transceiver for example. The transceiver is configured to modulate data or other content for transmission by at least one antenna 204 or network interface controller (NIC) . The transceiver is also configured to demodulate data or other content received by the at least one antenna 204. Each transceiver includes any suitable structure for generating signals for wireless or wired transmission and/or processing signals received wirelessly or by wire. Each antenna 204 includes any suitable structure for transmitting and/or receiving wireless or wired signals.
The ED 110 includes at least one memory 208. The memory 208 stores instructions and data used, generated, or collected by the ED 110. For example, the memory 208 could store software instructions or modules configured to implement some or all of the functionality and/or embodiments described herein and that are executed by processing unit (s) , shown by way of example as a processor 210. Each memory 208 includes any suitable volatile and/or non-volatile storage and retrieval device (s) . Any suitable type of memory may be used, such as random access memory (RAM) , read only memory (ROM) , hard disk, optical disc, subscriber identity module (SIM) card, memory stick, secure digital (SD) memory card, on-processor cache, and the like.
The ED 110 may further include one or more input/output devices (not shown) or interfaces (such as a wired interface to the internet 150 in Fig. 5) . The input/output devices permit interaction with a user or other devices in the network. Each input/output device includes any suitable structure for providing information to or receiving information from a user, such as a speaker, microphone, keypad, keyboard, display, or touch screen, including network interface communications.
The ED 110 further includes a processor 210 for performing operations including those related to preparing a transmission for uplink transmission to the NT-TRP 172 and/or T-TRP 170, those related to processing downlink transmissions received from the NT-TRP 172 and/or T-TRP 170, and those related to processing sidelink transmission to and from another ED 110. Processing operations related to preparing a transmission for uplink transmission may include operations such as encoding, modulating, transmit beamforming, and generating symbols for transmission. Processing operations related to processing downlink transmissions may include operations such as receive beamforming, demodulating and decoding received symbols. Depending upon the embodiment, a downlink transmission may be received by the receiver 203, possibly using receive beamforming, and the processor 210 may extract signaling from the downlink transmission (for example by detecting and/or decoding the signaling) . An example of signaling may be a reference signal transmitted by NT-TRP 172 and/or T-TRP 170. In some embodiments, the processor 210 implements the transmit beamforming and/or receive beamforming based on the indication of beam direction, beam angle information (BAI) for example, received from T-TRP 170. In some embodiments, the processor 210 may perform operations relating to network access (such as initial access) and/or downlink synchronization, such as operations relating to detecting a synchronization sequence, decoding and obtaining the system information, and so on. In some embodiments, the processor 210 may perform channel estimation, using a reference signal received from the NT-TRP 172 and/or T-TRP 170 for example.
Although not illustrated, the processor 210 may form part of the transmitter 201 and/or receiver 203. Although not illustrated, the memory 208 may form part of the processor 210.
The processor 210, and the processing components of the transmitter 201 and receiver 203 may each be implemented by the same or different one or more processors that are configured to execute instructions stored in a memory  (in memory 208 for example) . Alternatively, some or all of the processor 210, and the processing components of the transmitter 201 and receiver 203 may be implemented using dedicated circuitry, such as a programmed field-programmable gate array (FPGA) , a graphical processing unit (GPU) , or an application-specific integrated circuit (ASIC) .
The T-TRP 170 may be known by other names in some implementations, such as a base station, a base transceiver station (BTS), a radio base station, a network node, a network device, a device on the network side, a transmit/receive node, a Node B, an evolved NodeB (eNodeB or eNB), a Home eNodeB, a next Generation NodeB (gNB), a transmission point (TP), a site controller, an access point (AP), a wireless router, a relay station, a remote radio head, a terrestrial node, a terrestrial network device, a terrestrial base station, a base band unit (BBU), a remote radio unit (RRU), an active antenna unit (AAU), a remote radio head (RRH), a central unit (CU), a distributed unit (DU), or a positioning node, among other possibilities. The T-TRP 170 may be a macro BS, a pico BS, a relay node, a donor node, or the like, or combinations thereof. The T-TRP 170 may refer to the foregoing devices, or to an apparatus (a communication module, modem, or chip, for example) in the foregoing devices.
In some embodiments, the parts of the T-TRP 170 may be distributed. For example, some of the modules of the T-TRP 170 may be located remotely from the equipment housing the antennas of the T-TRP 170, and may be coupled to the equipment housing the antennas over a communication link (not shown) sometimes known as fronthaul, such as a common public radio interface (CPRI) link. Therefore, in some embodiments, the term T-TRP 170 may also refer to modules on the network side that perform processing operations, such as determining the location of the ED 110, resource allocation (scheduling), message generation, and encoding/decoding, and that are not necessarily part of the equipment housing the antennas of the T-TRP 170. The modules may also be coupled to other T-TRPs. In some embodiments, the T-TRP 170 may actually be a plurality of T-TRPs that are operating together to serve the ED 110, through coordinated multipoint transmissions for example.
The T-TRP 170 includes at least one transmitter 252 and at least one receiver 254 coupled to one or more antennas 256. Only one antenna 256 is illustrated. One, some, or all of the antennas may alternatively be panels. The transmitter 252 and the receiver 254 may be integrated as a transceiver. The T-TRP 170 further includes a processor 260 for performing operations including those related to: preparing a transmission for downlink transmission to the ED 110, processing an uplink transmission received from the ED 110, preparing a transmission for backhaul transmission to NT-TRP 172, and processing a transmission received over backhaul from the NT-TRP 172. Processing operations related to preparing a transmission for downlink or backhaul transmission may include operations such as encoding, modulating, precoding (MIMO precoding for example) , transmit beamforming, and generating symbols for transmission. Processing operations related to processing received transmissions in the uplink or over backhaul may include operations such as receive beamforming, and demodulating and decoding received symbols. The processor 260 may also perform operations relating to network access (such as initial access) and/or downlink synchronization, such as generating the content of synchronization signal blocks (SSBs) , generating the system information, etc. In some embodiments, the processor 260 also generates the indication of beam direction, such as BAI, which may be scheduled for transmission by scheduler 253. The processor 260 performs other network-side processing operations described herein, such as determining the location of the ED 110, determining where to deploy NT-TRP 172, and so on. In some embodiments, the processor 260 may generate signaling, to configure one or more parameters of the ED 110 and/or one or more parameters of the NT-TRP 172 for example. Any signaling generated by the processor 260 is sent by the transmitter 252. 
Note that “signaling” , as used herein, may alternatively be called control signaling. Dynamic signaling may be transmitted in a control channel, a physical downlink control channel (PDCCH) for example, and static or semi-static higher layer signaling may be included in a packet transmitted in a data channel, in a physical downlink shared channel (PDSCH) for example.
A scheduler 253 may be coupled to the processor 260. The scheduler 253 may be included within or operated separately from the T-TRP 170, and may schedule uplink, downlink, and/or backhaul transmissions, including issuing scheduling grants and/or configuring scheduling-free ("configured grant") resources. The T-TRP 170 further includes a memory 258 for storing information and data. The memory 258 stores instructions and data used, generated, or collected by the T-TRP 170. For example, the memory 258 could store software instructions or modules configured to implement some or all of the functionality and/or embodiments described herein and that are executed by the processor 260.
Although not illustrated, the processor 260 may form part of the transmitter 252 and/or receiver 254. Also, although not illustrated, the processor 260 may implement the scheduler 253. Although not illustrated, the memory 258 may form part of the processor 260.
The processor 260, the scheduler 253, and the processing components of the transmitter 252 and receiver 254 may each be implemented by the same or different one or more processors that are configured to execute instructions stored in a memory, in memory 258 for example. Alternatively, some or all of the processor 260, the scheduler 253, and the processing components of the transmitter 252 and receiver 254 may be implemented using dedicated circuitry, such as a FPGA, a GPU, or an ASIC.
Although the NT-TRP 172 is illustrated as a drone only as an example, the NT-TRP 172 may be implemented in any suitable non-terrestrial form. Also, the NT-TRP 172 may be known by other names in some implementations, such as a non-terrestrial node, a non-terrestrial network device, or a non-terrestrial base station. The NT-TRP 172 includes a transmitter 272 and a receiver 274 coupled to one or more antennas 280. Only one antenna 280 is illustrated. One, some, or all of the antennas may alternatively be panels. The transmitter 272 and the receiver 274 may be integrated as a transceiver. The NT-TRP 172 further includes a processor 276 for performing operations including those related to: preparing a transmission for downlink transmission to the ED 110, processing an uplink transmission received from the ED 110, preparing a transmission for backhaul transmission to T-TRP 170, and processing a transmission received over backhaul from the T-TRP 170. Processing operations related to preparing a transmission for downlink or backhaul transmission may include operations such as encoding, modulating, precoding (MIMO precoding for example) , transmit beamforming, and generating symbols for transmission. Processing operations related to processing received transmissions in the uplink or over backhaul may include operations such as receive beamforming, and demodulating and decoding received symbols. In some embodiments, the processor 276 implements the transmit beamforming and/or receive beamforming based on beam direction information (BAI for example) received from T-TRP 170. In some embodiments, the processor 276 may generate signaling, to configure one or more parameters of the ED 110 for example. In some embodiments, the NT-TRP 172 implements physical layer processing, but does not implement higher layer functions such as functions at the medium access control (MAC) or radio link control (RLC) layer. 
This is only an example; more generally, the NT-TRP 172 may implement higher layer functions in addition to physical layer processing.
The NT-TRP 172 further includes a memory 278 for storing information and data. Although not illustrated, the processor 276 may form part of the transmitter 272 and/or receiver 274. Although not illustrated, the memory 278 may form part of the processor 276.
The processor 276 and the processing components of the transmitter 272 and receiver 274 may each be implemented by the same or different one or more processors that are configured to execute instructions stored in a memory, e.g. in memory 278. Alternatively, some or all of the processor 276 and the processing components of the transmitter 272 and receiver 274 may be implemented using dedicated circuitry, such as a programmed FPGA, a GPU, or an ASIC. In some  embodiments, the NT-TRP 172 may actually be a plurality of NT-TRPs that are operating together to serve the ED 110, through coordinated multipoint transmissions for example.
The T-TRP 170, the NT-TRP 172, and/or the ED 110 may include other components, but these have been omitted for the sake of clarity.
6G Basic Module Structure
One or more steps of the embodiment methods provided herein may be performed by corresponding units or modules, according to Fig. 8. Fig. 8 illustrates units or modules in a device, such as in ED 110, in T-TRP 170, or in NT-TRP 172. For example, a signal may be transmitted by a transmitting unit or a transmitting module. A signal may be received by a receiving unit or a receiving module. A signal may be processed by a processing unit or a processing module. Other steps may be performed by an artificial intelligence (AI) or machine learning (ML) module. The respective units or modules may be implemented using hardware, one or more components or devices that execute software, or a combination thereof. For instance, one or more of the units or modules may be an integrated circuit, such as a programmed FPGA, a GPU, or an ASIC. It will be appreciated that where the modules are implemented using software for execution by a processor for example, they may be retrieved by a processor, in whole or part as needed, individually or together for processing, in single or multiple instances, and that the modules themselves may include instructions for further deployment and instantiation.
Additional details regarding the EDs 110, T-TRP 170, and NT-TRP 172 are known to those of skill in the art. As such, these details are omitted here.
6G Intelligent Air Interface
An air interface generally includes a number of components and associated parameters that collectively specify how a transmission is to be sent and/or received over a wireless communications link between two or more communicating devices. For example, an air interface may include one or more components defining the waveform(s), frame structure(s), multiple access scheme(s), protocol(s), coding scheme(s) and/or modulation scheme(s) for conveying information (data for example) over a wireless communications link. The wireless communications link may support a link between a radio access network and user equipment (a "Uu" link for example), and/or the wireless communications link may support a link between device and device, such as between two user equipments (a "sidelink" for example), and/or the wireless communications link may support a link between a non-terrestrial (NT) communication network and user equipment (UE). The following are some examples of the above components:
A waveform component may specify a shape and form of a signal being transmitted. Waveform options may include orthogonal multiple access waveforms and non-orthogonal multiple access waveforms. Non-limiting examples of such waveform options include Orthogonal Frequency Division Multiplexing (OFDM) , Filtered OFDM (f-OFDM) , Time windowing OFDM, Filter Bank Multicarrier (FBMC) , Universal Filtered Multicarrier (UFMC) , Generalized Frequency Division Multiplexing (GFDM) , Wavelet Packet Modulation (WPM) , Faster Than Nyquist (FTN) Waveform, and low Peak to Average Power Ratio Waveform (low PAPR WF) .
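The OFDM family of waveforms listed above shares one core construction: data symbols are placed on orthogonal subcarriers via an inverse DFT, and a cyclic prefix (CP) copies the tail of the time-domain symbol to its front. The following sketch is purely illustrative (the function name and parameters are assumptions, not from any embodiment), using a pure-Python IDFT for clarity where a real transmitter would use an IFFT:

```python
# Minimal, illustrative OFDM symbol builder (hypothetical, for explanation).
import cmath

def ofdm_symbol(subcarriers, cp_len):
    """Return one time-domain OFDM symbol with a cyclic prefix prepended."""
    n = len(subcarriers)
    # Inverse DFT: combine the data symbols on n orthogonal subcarriers.
    time = [sum(subcarriers[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]
    # Cyclic prefix: the last cp_len time samples are copied to the front.
    return time[-cp_len:] + time

sym = ofdm_symbol([1, -1, 1j, -1j], cp_len=1)
assert len(sym) == 5                  # 4 time samples + 1 CP sample
assert abs(sym[0] - sym[-1]) < 1e-12  # CP equals the symbol tail
```

The CP makes the channel's linear convolution appear circular at the receiver (provided the delay spread fits within the CP), which is what lets OFDM equalize each subcarrier independently.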
A frame structure component may specify a configuration of a frame or group of frames. The frame structure component may indicate one or more of a time, frequency, pilot signature, code, or other parameter of the frame or group of frames. More details of frame structure will be discussed below.
A multiple access scheme component may specify multiple access technique options, including technologies defining how communicating devices share a common physical channel, such as: Time Division Multiple Access (TDMA) , Frequency Division Multiple Access (FDMA) , Code Division Multiple Access (CDMA) , Single Carrier Frequency Division Multiple Access (SC-FDMA) , Low Density Signature Multicarrier Code Division Multiple Access (LDS-MC-CDMA) , Non-Orthogonal Multiple Access (NOMA) , Pattern Division Multiple Access (PDMA) , Lattice Partition Multiple Access (LPMA) , Resource Spread Multiple Access (RSMA) , and Sparse Code Multiple Access (SCMA) . Furthermore, multiple access technique options may include: scheduled access versus non-scheduled access, also known as grant-free access; non-orthogonal multiple access versus orthogonal multiple access, via a dedicated channel resource for example (such as no sharing between multiple communicating devices) ; contention-based shared channel resources vs. non-contention-based shared channel resources, and cognitive radio-based access.
A hybrid automatic repeat request (HARQ) protocol component may specify how a transmission and/or a re-transmission is to be made. Non-limiting examples of transmission and/or re-transmission mechanism options include those that specify a scheduled data pipe size, a signaling mechanism for transmission and/or re-transmission, and a re-transmission mechanism.
A coding and modulation component may specify how information being transmitted may be encoded/decoded and modulated/demodulated for transmission/reception purposes. Coding may refer to methods of error detection and forward error correction. Non-limiting examples of coding options include turbo trellis codes, turbo product codes, fountain codes, low-density parity check codes, and polar codes. Modulation may refer, simply, to the constellation (including, for example, the modulation technique and order) , or more specifically to various types of advanced modulation methods such as hierarchical modulation and low PAPR modulation.
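To make the constellation notion above concrete, here is a small Gray-mapped QPSK modulator with a hard-decision demodulator. The specific bit-to-point mapping and unit-energy scaling are illustrative assumptions for this sketch, not taken from any standard or embodiment:

```python
import math

# Illustrative Gray-mapped QPSK constellation: bit pair -> complex point.
# The mapping is an assumption for this sketch, not from a specification.
QPSK = {
    (0, 0): complex(1, 1),
    (0, 1): complex(1, -1),
    (1, 1): complex(-1, -1),
    (1, 0): complex(-1, 1),
}
SCALE = 1 / math.sqrt(2)  # normalize to unit average symbol energy

def modulate(bits):
    """Map an even-length bit list to complex QPSK symbols."""
    return [QPSK[(bits[i], bits[i + 1])] * SCALE
            for i in range(0, len(bits), 2)]

def demodulate(symbols):
    """Hard decision: pick the nearest constellation point per symbol."""
    inv = {point: pair for pair, point in QPSK.items()}
    bits = []
    for s in symbols:
        nearest = min(QPSK.values(), key=lambda p: abs(s - p * SCALE))
        bits.extend(inv[nearest])
    return bits

tx = [0, 1, 1, 0, 0, 0, 1, 1]
assert demodulate(modulate(tx)) == tx  # noiseless round trip
```

Gray mapping is chosen so that adjacent constellation points differ in one bit, which minimizes bit errors when noise pushes a symbol to a neighboring decision region.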
In some embodiments, the air interface may be a "one-size-fits-all" concept, in which the components within the air interface cannot be changed or adapted once the air interface is defined. In some implementations, only limited parameters or modes of an air interface, such as a cyclic prefix (CP) length or a multiple input multiple output (MIMO) mode, can be configured. In other embodiments, an air interface design may provide a unified or flexible framework to support frequency bands below 6 GHz and beyond 6 GHz (such as mmWave bands) for both licensed and unlicensed access. As an example, the flexibility of a configurable air interface provided by a scalable numerology and symbol duration may allow for transmission parameter optimization for different spectrum bands and for different services/devices. As another example, a unified air interface may be self-contained in the frequency domain, and a frequency domain self-contained design may support more flexible radio access network (RAN) slicing through channel resource sharing between different services in both frequency and time.
Frame Structure
A frame structure is a feature of the wireless communication physical layer that defines a time domain signal transmission structure, for example to allow for timing reference and timing alignment of basic time domain transmission units. Wireless communication between communicating devices may occur on time-frequency resources governed by a frame structure. The frame structure may sometimes instead be called a radio frame structure.
Depending upon the frame structure and/or configuration of frames in the frame structure, frequency division duplex (FDD) and/or time-division duplex (TDD) and/or full duplex (FD) communication may be possible. FDD communication is when transmissions in different directions (uplink versus downlink for example) occur in different  frequency bands. TDD communication is when transmissions in different directions (uplink versus downlink for example) occur over different time durations. FD communication is when transmission and reception occurs on the same time-frequency resource, that is, a device can both transmit and receive on the same frequency resource concurrently in time.
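The three duplex modes above differ only in which (time, frequency) resources each direction may use. The toy model below expresses that distinction; all band edges, slot indices, and the function name are invented for illustration:

```python
# Toy model of FDD / TDD / FD duplexing (hypothetical values throughout).

def duplex_capability(mode, freq_ghz, slot, ul_band=(1.9, 2.0),
                      dl_band=(2.1, 2.2), dl_slots=frozenset({0, 1, 2})):
    """Return the set of operations ('TX'/'RX') allowed on a resource."""
    if mode == "FDD":  # direction is fixed per frequency band
        if ul_band[0] <= freq_ghz < ul_band[1]:
            return {"TX"}
        if dl_band[0] <= freq_ghz < dl_band[1]:
            return {"RX"}
        return set()
    if mode == "TDD":  # direction is fixed per time slot
        return {"RX"} if slot in dl_slots else {"TX"}
    if mode == "FD":   # both directions on the same time-frequency resource
        return {"TX", "RX"}
    raise ValueError(mode)

assert duplex_capability("FDD", 1.95, slot=0) == {"TX"}
assert duplex_capability("TDD", 1.95, slot=5) == {"TX"}
assert duplex_capability("FD", 1.95, slot=0) == {"TX", "RX"}
```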
One example of a frame structure is the frame structure in long-term evolution (LTE), having the following specifications: each frame is 10 ms in duration; each frame has 10 subframes, which are each 1 ms in duration; each subframe includes two slots, each of which is 0.5 ms in duration; each slot is for transmission of 7 OFDM symbols (assuming normal CP); each OFDM symbol has a symbol duration and a particular bandwidth (or partial bandwidth or bandwidth partition) related to the number of subcarriers and subcarrier spacing; the frame structure is based on OFDM waveform parameters such as subcarrier spacing and CP length (where the CP has a fixed length or limited length options); and the switching gap between uplink and downlink in TDD has to be an integer multiple of the OFDM symbol duration.
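The LTE timing hierarchy above reduces to simple arithmetic. The following sketch (constant names are illustrative) derives the slot duration and per-frame symbol count from the stated values:

```python
# Derive LTE timing quantities from the hierarchy described above:
# 10 ms frame -> 10 subframes -> 2 slots each -> 7 OFDM symbols (normal CP).
FRAME_MS = 10.0
SUBFRAMES_PER_FRAME = 10
SLOTS_PER_SUBFRAME = 2
SYMBOLS_PER_SLOT = 7  # normal CP

subframe_ms = FRAME_MS / SUBFRAMES_PER_FRAME   # 1.0 ms
slot_ms = subframe_ms / SLOTS_PER_SUBFRAME     # 0.5 ms
symbols_per_frame = (SUBFRAMES_PER_FRAME * SLOTS_PER_SUBFRAME
                     * SYMBOLS_PER_SLOT)       # 140 symbols per 10 ms frame

assert (subframe_ms, slot_ms, symbols_per_frame) == (1.0, 0.5, 140)
```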
Another example of a frame structure is the frame structure in new radio (NR), having the following specifications: multiple subcarrier spacings are supported, each subcarrier spacing corresponding to a respective numerology; the frame structure depends on the numerology, but in any case the frame length is set at 10 ms and consists of ten subframes of 1 ms each; a slot is defined as 14 OFDM symbols, and the slot length depends upon the numerology. For example, the NR frame structure for normal CP 15 kHz subcarrier spacing ("numerology 1") and the NR frame structure for normal CP 30 kHz subcarrier spacing ("numerology 2") are different. For 15 kHz subcarrier spacing the slot length is 1 ms, and for 30 kHz subcarrier spacing the slot length is 0.5 ms. The NR frame structure may have more flexibility than the LTE frame structure.
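The NR scaling behavior described above (the slot length halves each time the subcarrier spacing doubles, with 14 symbols per slot) can be sketched as follows; the function names and the set of spacings are illustrative assumptions:

```python
SYMBOLS_PER_SLOT = 14  # NR slot, as described above

def slot_length_ms(scs_khz: int) -> float:
    """Slot duration in ms: halves each time the subcarrier spacing
    doubles, relative to the 15 kHz / 1 ms baseline in the text."""
    scaling = {15: 0, 30: 1, 60: 2, 120: 3}[scs_khz]  # illustrative set
    return 1.0 / (2 ** scaling)

def slots_per_frame(scs_khz: int) -> int:
    """Number of slots in a 10 ms NR frame at the given spacing."""
    return round(10.0 / slot_length_ms(scs_khz))

assert slot_length_ms(15) == 1.0 and slot_length_ms(30) == 0.5
assert slots_per_frame(30) == 20
```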
Another example of a frame structure is an example flexible frame structure, e.g. for use in a 6G network or later. In a flexible frame structure, a symbol block may be defined as the minimum duration of time that may be scheduled in the flexible frame structure. A symbol block may be a unit of transmission having an optional redundancy portion (CP portion for example) and an information (data for example) portion. An OFDM symbol is an example of a symbol block. A symbol block may alternatively be called a symbol. Embodiments of flexible frame structures include different parameters that may be configurable, such as frame length, subframe length, symbol block length, and so on. A non-exhaustive list of possible configurable parameters in some embodiments of a flexible frame structure include:
(1) Frame: The frame length need not be limited to 10ms, and the frame length may be configurable and change over time. In some embodiments, each frame includes one or multiple downlink synchronization channels and/or one or multiple downlink broadcast channels, and each synchronization channel and/or broadcast channel may be transmitted in a different direction by different beamforming. The frame length may be more than one possible value and configured based on the application scenario. For example, autonomous vehicles may require relatively fast initial access, in which case the frame length may be set as 5ms for autonomous vehicle applications. As another example, smart meters on houses may not require fast initial access, in which case the frame length may be set as 20ms for smart meter applications.
(2) Subframe duration: A subframe might or might not be defined in the flexible frame structure, depending upon the implementation. For example, a frame may be defined to include slots, but no subframes. In frames in which a subframe is defined, e.g. for time domain alignment, the duration of the subframe may be configurable. For example, a subframe may be configured to have a length of 0.1 ms, 0.2 ms, 0.5 ms, 1 ms, 2 ms, 5 ms, etc. In some embodiments, if a subframe is not needed in a particular scenario, then the subframe length may be defined to be the same as the frame length, or left undefined.
(3) Slot configuration: A slot might or might not be defined in the flexible frame structure, depending upon the implementation. In frames in which a slot is defined, the definition of a slot (in time duration and/or in number of symbol blocks, for example) may be configurable. In one embodiment, the slot configuration is common to all UEs or to a group of UEs. In this case, the slot configuration information may be transmitted to UEs in a broadcast channel or common control channel (s) . In other embodiments, the slot configuration may be UE specific, in which case the slot configuration information may be transmitted in a UE-specific control channel. In some embodiments, the slot configuration signaling can be transmitted together with frame configuration signaling and/or subframe configuration signaling. In other embodiments, the slot configuration can be transmitted independently from the frame configuration signaling and/or subframe configuration signaling. In general, the slot configuration may be system common, base station common, UE group common, or UE specific.
(4) Subcarrier spacing (SCS) : SCS is one parameter of scalable numerology which may allow the SCS to range from 15 kHz to 480 kHz. The SCS may vary with the frequency of the spectrum and/or maximum UE speed to minimize the impact of Doppler shift and phase noise. In some examples, there may be separate transmission and reception frames, and the SCS of symbols in the reception frame structure may be configured independently from the SCS of symbols in the transmission frame structure. The SCS in a reception frame may be different from the SCS in a transmission frame. In some examples, the SCS of each transmission frame may be half the SCS of each reception frame. If the SCS differs between a reception frame and a transmission frame, the difference does not necessarily have to scale by a factor of two, e.g. if more flexible symbol durations are implemented using inverse discrete Fourier transform (IDFT) instead of fast Fourier transform (FFT) . Additional examples of frame structures can be used with different SCSs.
(5) Flexible transmission duration of basic transmission unit: The basic transmission unit may be a symbol block (alternatively called a symbol) , which in general includes a redundancy portion (referred to as the CP) and an information (data for example) portion, although in some embodiments the CP may be omitted from the symbol block. The CP length may be flexible and configurable. The CP length may be fixed within a frame or flexible within a frame, and the CP length may possibly change from one frame to another, or from one group of frames to another group of frames, or from one subframe to another subframe, or from one slot to another slot, or dynamically from one scheduling to another scheduling. The information (data for example) portion may be flexible and configurable. Another possible parameter relating to a symbol block that may be defined is ratio of CP duration to information (data for example) duration. In some embodiments, the symbol block length may be adjusted according to: channel condition (for example multi-path delay, Doppler) ; and/or latency requirement; and/or available time duration. As another example, a symbol block length may be adjusted to fit an available time duration in the frame.
(6) Flexible switch gap: A frame may include both a downlink portion for downlink transmissions from a base station, and an uplink portion for uplink transmissions from UEs. A gap may be present between each uplink and downlink portion, which is referred to as a switching gap. The switching gap length (duration) may be configurable. A switching gap duration may be fixed within a frame or flexible within a frame, and a switching gap duration may possibly change from one frame to another, or from one group of frames to another group of frames, or from one subframe to another subframe, or from one slot to another slot, or dynamically from one scheduling to another scheduling.
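The configurable symbol block of item (5) above can be sketched as a simple data structure. This is an illustrative assumption, not part of the disclosed embodiments; the field names and the example durations (a useful symbol of roughly 66.67 μs with a 4.69 μs CP, as would follow from a 15 kHz SCS) are hypothetical.

```python
# A symbol block = optional redundancy (CP) portion + information portion.
# The CP-to-information ratio is one of the configurable parameters above.

from dataclasses import dataclass

@dataclass
class SymbolBlock:
    cp_us: float      # redundancy (CP) portion duration, microseconds
    info_us: float    # information (data) portion duration, microseconds

    @property
    def total_us(self) -> float:
        # Total symbol block duration.
        return self.cp_us + self.info_us

    @property
    def cp_ratio(self) -> float:
        # Ratio of CP duration to information (data) duration.
        return self.cp_us / self.info_us

# A longer CP could be configured for channels with large multi-path delay.
blk = SymbolBlock(cp_us=4.69, info_us=66.67)
print(round(blk.total_us, 2), round(blk.cp_ratio, 3))  # 71.36 0.07
```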
Cell/Carrier/Bandwidth Parts (BWPs) /Occupied Bandwidth
A device, such as a base station, may provide coverage over a cell. Wireless communication with the device may occur over one or more carrier frequencies. A carrier frequency will be referred to as a carrier. A carrier may  alternatively be called a component carrier (CC) . A carrier may be characterized by its bandwidth and a reference frequency, for example the center or lowest or highest frequency of the carrier. A carrier may be on licensed or unlicensed spectrum. Wireless communication with the device may also or instead occur over one or more bandwidth parts (BWPs) . For example, a carrier may have one or more BWPs. More generally, wireless communication with the device may occur over spectrum. The spectrum may comprise one or more carriers and/or one or more BWPs.
A cell may include one or multiple downlink resources and optionally one or multiple uplink resources, or a cell may include one or multiple uplink resources and optionally one or multiple downlink resources, or a cell may include both one or multiple downlink resources and one or multiple uplink resources. As an example, a cell might only include one downlink carrier/BWP, or only include one uplink carrier/BWP, or include multiple downlink carriers/BWPs, or include multiple uplink carriers/BWPs, or include one downlink carrier/BWP and one uplink carrier/BWP, or include one downlink carrier/BWP and multiple uplink carriers/BWPs, or include multiple downlink carriers/BWPs and one uplink carrier/BWP, or include multiple downlink carriers/BWPs and multiple uplink carriers/BWPs. In some embodiments, a cell may instead or additionally include one or multiple sidelink resources, including sidelink transmitting and receiving resources.
A BWP is a set of contiguous or non-contiguous frequency subcarriers on a carrier, or a set of contiguous or non-contiguous frequency subcarriers spanning one or more carriers.
In some embodiments, a carrier may have one or more BWPs, e.g. a carrier may have a bandwidth of 20 MHz and consist of one BWP, or a carrier may have a bandwidth of 80 MHz and consist of two adjacent contiguous BWPs, etc. In other embodiments, a BWP may have one or more carriers, e.g. a BWP may have a bandwidth of 40 MHz and consist of two adjacent contiguous carriers, where each carrier has a bandwidth of 20 MHz. In some embodiments, a BWP may comprise non-contiguous spectrum resources consisting of multiple non-contiguous carriers, where the first carrier of the non-contiguous multiple carriers may be in the mmW band, the second carrier may be in a low band (such as the 2 GHz band) , the third carrier (if it exists) may be in the THz band, and the fourth carrier (if it exists) may be in the visible light band. Resources in one carrier which belong to the BWP may be contiguous or non-contiguous. In some embodiments, a BWP has non-contiguous spectrum resources on one carrier.
Wireless communication may occur over an occupied bandwidth. The occupied bandwidth may be defined as the width of a frequency band such that, below the lower and above the upper frequency limits, the mean powers emitted are each equal to a specified percentage β/2 of the total mean transmitted power, for example, the value of β/2 is taken as 0.5%.
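The occupied-bandwidth definition above can be illustrated numerically. The sketch below is an assumption-laden example (numpy, a Gaussian-shaped power spectral density, and the function name are all illustrative, not part of this disclosure): it finds the frequency limits below and above which β/2 of the total mean power lies, with β/2 = 0.5%.

```python
import numpy as np

def occupied_bandwidth(freqs, psd, beta=0.01):
    """Width of the band whose lower and upper limits each exclude a
    fraction beta/2 of the total mean transmitted power (beta/2 = 0.5%
    for beta = 1%)."""
    p = np.asarray(psd, dtype=float)
    cdf = np.cumsum(p) / p.sum()          # cumulative fraction of power
    lo = np.searchsorted(cdf, beta / 2)   # lower frequency limit index
    hi = np.searchsorted(cdf, 1 - beta / 2)  # upper frequency limit index
    return freqs[hi] - freqs[lo]

# Illustrative example: a Gaussian-shaped PSD centred at 0 Hz with
# sigma = 2 MHz; 99% of the power lies within about +/- 2.58 sigma.
f = np.linspace(-10e6, 10e6, 2001)     # Hz
psd = np.exp(-0.5 * (f / 2e6) ** 2)    # arbitrary units
print(occupied_bandwidth(f, psd))      # roughly 10.3e6 Hz
```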
The carrier, the BWP, or the occupied bandwidth may be signaled by a network device (a base station for example) dynamically, for example in physical layer control signaling such as DCI, or semi-statically, for example in radio resource control (RRC) signaling or in the medium access control (MAC) layer, or be predefined based on the application scenario; or be determined by the UE as a function of other parameters that are known by the UE, or may be fixed, by a standard for example.
Timing Reference Point
In current networks, frame timing and synchronization is established based on synchronization signals, such as a primary synchronization signal (PSS) and a secondary synchronization signal (SSS) . Notably, known frame timing and synchronization strategies involve adding a timestamp, for example (xx: yy: zz) , to a frame boundary, where xx, yy and zz in the timestamp may represent a time format such as hour, minute and second, respectively.
It is anticipated that diverse applications and use cases in future networks may involve usage of different periods of frames, slots and symbols to satisfy the different requirements, functionalities and Quality of Service (QoS) types. It follows that usage of different periods of frames to satisfy these applications may present challenges for frame timing alignment among diverse frame structures. Consider, for example, frame timing alignment for a TDD configuration in neighboring carrier frequency bands or among sub-bands (or bandwidth parts) of one channel/carrier bandwidth.
The present disclosure relates, generally, to mobile, wireless communication and, in particular embodiments, to frame timing alignment/realignment, where the frame timing alignment/realignment may comprise a timing alignment/realignment in terms of a boundary of a symbol, a slot or a sub-frame within a frame, or of a frame itself (thus the frame timing alignment/realignment here is more general, and is not limited to cases where a timing alignment/realignment is from a frame boundary only) . Also, in this application, relative timing to a frame or frame boundary should be interpreted in this more general sense; that is, the frame boundary means a timing point of a frame element within the frame, such as the starting or ending of a symbol, a slot or a subframe within a frame, or of a frame. In the following, the phrases “ (frame) timing alignment or timing realignment” and “relative timing to a frame boundary” are used in the more general sense described above.
A network device, such as a base station 170, referenced hereinafter as a TRP 170, may transmit signaling that carries a timing realignment indication message. The timing realignment indication message includes information allowing a receiving UE 110 to determine a timing reference point. On the basis of the timing reference point, transmission of frames, by the UE 110, may be aligned. The frames that become aligned may be in different sub-bands of one carrier frequency band. The frames that become aligned may instead be found in neighboring carrier frequency bands.
On the TRP 170 side, one or more types of signaling may be used to indicate the timing realignment (or/and timing correction) message. Two example types of signaling are provided here to show the schemes. The first example type of signaling may be referenced as cell-specific signaling, examples of which include group common signaling and broadcast signaling. The second example type of signaling may be referenced as UE-specific signaling. One of these two types of signaling or a combination of the two types of signaling may be used to transmit a timing realignment indication message. The timing realignment indication message may be shown to notify one or more UEs 110 of a configuration of a timing reference point. References, hereinafter, to the term “UE 110” may be understood to represent reference to a broad class of generic wireless communication devices within a cell (a network receiving node, such as a wireless device, a sensor, a gateway, a router, and so on) , that is, being served by the TRP 170. A timing reference point is a timing reference instant and may be expressed in terms of a relative timing, in view of a timing point in a frame, such as (starting or ending boundary of) a symbol, a slot or a sub-frame within a frame; or a frame. For a simple description in the following, the term “a frame boundary” is used to represent a boundary of possibly a symbol, a slot or a sub-frame within a frame; or a frame. Thus, the timing reference point may be expressed in terms of a relative timing, in view of a current frame boundary, for example, the start of the current frame. Alternatively, the timing reference point may be expressed in terms of an absolute timing based on certain standards timing reference such as a GNSS (GPS for example) , Coordinated Universal Time ( “UTC” ) , and so on. In the absolute timing version of the timing reference point, a timing reference point may be explicitly stated.
The timing reference point may be shown to allow for timing adjustments to be implemented at the UEs 110. The timing adjustments may be implemented for improvement of accuracy for a clock at the UE 110. Alternatively, or additionally, the timing reference point may be shown to allow for adjustments to be implemented in future transmissions made from the UEs 110. The adjustments may be shown to cause realignment of transmitted frames at the timing reference  point. Note that the realignment of transmitted frames at the timing reference point may comprise the timing realignment from (the starting boundary of) a symbol, a slot or a sub-frame within a frame; or a frame at the timing reference point for one or more UEs and one or more BSs (in a cell or a group of cells) , which applies across the application below.
At UE 110 side, the UE 110 may monitor for the timing realignment indication message. Responsive to receiving the timing realignment indication message, the UE 110 may obtain the timing reference point and take steps to cause frame realignment at the timing reference point. Those steps may, for example, include commencing transmission of a subsequent frame at the timing reference point.
Furthermore, or alternatively, before monitoring for the timing realignment indication message, the UE 110 may cause the TRP 170 to transmit the timing realignment indication message by transmitting, to the TRP 170, a request for a timing realignment, that is, a timing realignment request message. Responsive to receiving the timing realignment request message, the TRP 170 may transmit, to the UE 110, a timing realignment indication message including information on a timing reference point, thereby allowing the UE 110 to implement a timing realignment (or/and a timing adjustment including clock timing error correction) , wherein the timing realignment is in terms of (for example a starting boundary of) a symbol, a slot or a sub-frame within a frame; or a frame for UEs and base station (s) in a cell (or a group of cells) .
A TRP 170 associated with a given cell may transmit a timing realignment indication message. The timing realignment indication message may include enough information to allow a receiver of the message to obtain a timing reference point. The timing reference point may be used, by one or more UEs 110 in the given cell, when performing a timing realignment (or/and a timing adjustment including clock timing error correction) .
The timing reference point may be expressed, within the timing realignment indication message, relative to a frame boundary (where, as previously described and as applicable below across the application, a frame boundary can be a boundary of a symbol, a slot or a sub-frame within a frame; or a frame) . The timing realignment indication message may include a relative timing indication, Δt. The relative timing indication expresses the timing reference point as occurring a particular duration, Δt, subsequent to a frame boundary of a given frame. Since the frame boundary is important to allowing the UE 110 to determine the timing reference point, it is important that the UE 110 be aware of the given frame that has the frame boundary of interest. Accordingly, the timing realignment indication message may also include a system frame number (SFN) for the given frame.
It is known, in 5G NR, that the SFN is a value in range from 0 to 1023, inclusive. Accordingly, 10 bits may be used to represent a SFN. When a SFN is carried by an SSB, six of the 10 bits for the SFN may be carried in a Master Information Block (MIB) and the remaining four bits of the 10 bits for the SFN may be carried in a Physical Broadcast Channel (PBCH) payload.
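The 10-bit SFN split described above (six bits in the MIB, four bits in the PBCH payload) amounts to separating the six most-significant bits from the four least-significant bits. A minimal sketch; the function names are illustrative only.

```python
# Split a 10-bit SFN (0..1023) into the 6 MSBs carried in the MIB and the
# 4 LSBs carried in the PBCH payload, and reassemble it at the receiver.

def split_sfn(sfn: int):
    assert 0 <= sfn <= 1023, "SFN is a 10-bit value"
    mib_bits = (sfn >> 4) & 0x3F   # six most-significant bits (MIB)
    pbch_bits = sfn & 0xF          # four least-significant bits (PBCH payload)
    return mib_bits, pbch_bits

def merge_sfn(mib_bits: int, pbch_bits: int) -> int:
    return (mib_bits << 4) | pbch_bits

mib, pbch = split_sfn(805)
print(mib, pbch, merge_sfn(mib, pbch))  # 50 5 805
```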
Optionally, the timing realignment indication message may include other parameters. The other parameters may, for example, include a minimum time offset. The minimum time offset may establish a duration of time preceding the timing reference point. The UE 110 may rely upon the minimum time offset as an indication that DL signaling, including the timing realignment indication message, will allow the UE 110 enough time to detect the timing realignment indication message to obtain information on the timing reference point.
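Putting the fields above together, the UE's computation of the timing reference point from the message contents can be sketched as follows. This is a hypothetical illustration, not the disclosed signaling: the function name is invented, a 10 ms frame length is assumed, and time is measured from the boundary of SFN 0.

```python
# Timing reference point = boundary of the frame identified by its SFN,
# plus the relative timing indication delta_t from the message.

FRAME_MS = 10.0  # assumed frame length

def timing_reference_point_ms(sfn: int, delta_t_ms: float,
                              min_offset_ms: float = 0.0) -> float:
    """Absolute time (ms, measured from the SFN-0 boundary) of the
    timing reference point carried by a realignment indication message."""
    # The optional minimum time offset guarantees the UE enough time to
    # detect the message before the reference point arrives.
    assert delta_t_ms >= min_offset_ms, "reference point arrives too early"
    frame_boundary = sfn * FRAME_MS
    return frame_boundary + delta_t_ms

print(timing_reference_point_ms(sfn=12, delta_t_ms=3.5))  # 123.5
```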
6G Integrated Sensing and Communication
Generic Background
User Equipment (UE) position information is often used in cellular communication networks to improve various performance metrics for the network. Such performance metrics may, for example, include capacity, agility, and efficiency. The improvement may be achieved when elements of the network exploit the position, the behavior, the mobility pattern, and so on, of the UE in the context of a priori information describing a wireless environment in which the UE is operating.
A sensing system may be used to help gather UE pose information, including its location in a global coordinate system, its velocity and direction of movement in the global coordinate system, orientation information, and the information about the wireless environment. “Location” is also known as “position” and these two terms may be used interchangeably herein. Examples of well-known sensing systems include RADAR (Radio Detection and Ranging) and LIDAR (Light Detection and Ranging) . While the sensing system can be separate from the communication system, it could be advantageous to gather the information using an integrated system, which reduces the hardware (and cost) in the system as well as the time, frequency, or spatial resources needed to perform both functionalities. However, using the communication system hardware to perform sensing of UE pose and environment information is a highly challenging and open problem. The difficulty of the problem relates to factors such as the limited resolution of the communication system, the dynamicity of the environment, and the huge number of objects whose electromagnetic properties and position are to be estimated.
Accordingly, integrated sensing and communication (also known as integrated communication and sensing) is a desirable feature in existing and future communication systems.
Sensing Node, Sensing Management Function
Any or all of the EDs 110 and BS 170 may be sensing nodes in the system 100. Sensing nodes are network entities that perform sensing by transmitting and receiving sensing signals. Some sensing nodes are communication equipment that perform both communications and sensing. However, it is possible that some sensing nodes do not perform communications, and are instead dedicated to sensing. The sensing agent 174 (Fig. 9) is an example of a sensing node that is dedicated to sensing. Unlike the EDs 110 and BS 170, the sensing agent 174 does not transmit or receive communication signals. However, the sensing agent 174 may communicate configuration information, sensing information, signaling information, or other information within the communication system 100. The sensing agent 174 may be in communication with the core network 130 to communicate information with the rest of the communication system 900. By way of example, the sensing agent 174 may determine the location of the ED 110a, and transmit this information to the base station 170a via the core network 130. Although only one sensing agent 174 is shown in Fig. 9, any number of sensing agents may be implemented in the communication system 900. In some embodiments, one or more sensing agents may be implemented at one or more of the RANs 120.
A sensing node may combine sensing-based techniques with reference signal-based techniques to enhance UE pose determination. This type of sensing node may also be known as a sensing management function (SMF) . In some networks, the SMF may also be known as a location management function (LMF) . The SMF may be implemented as a physically independent entity located at the core network 130 with connection to the multiple BSs 170. In other aspects of the present application, the SMF may be implemented as a logical entity co-located inside a BS 170 through logic carried out by the processor 260.
As shown in Fig. 10, the SMF 176, when implemented as a physically independent entity, includes at least one processor 290, at least one transmitter 282, at least one receiver 284, one or more antennas 286, and at least one memory  288. A transceiver, not shown, may be used instead of the transmitter 282 and receiver 284. A scheduler 283 may be coupled to the processor 290. The scheduler 283 may be included within or operated separately from the SMF 176. The processor 290 implements various processing operations of the SMF 176, such as signal coding, data processing, power control, input/output processing, or any other functionality. The processor 290 can also be configured to implement some or all of the functionality and/or embodiments described in more detail above. Each processor 290 includes any suitable processing or computing device configured to perform one or more operations. Each processor 290 could, for example, include a microprocessor, microcontroller, digital signal processor, field programmable gate array, or application specific integrated circuit.
A reference signal-based pose determination technique belongs to an “active” pose estimation paradigm. In an active pose estimation paradigm, the enquirer of pose information (the UE) takes part in the process of determining its own pose. The enquirer may transmit or receive (or both) a signal specific to the pose determination process. Positioning techniques based on a global navigation satellite system (GNSS) , such as the Global Positioning System (GPS) , are other examples of the active pose estimation paradigm.
In contrast, a sensing technique, based on radar for example, may be considered as belonging to a “passive” pose determination paradigm. In a passive pose determination paradigm, the target is oblivious to the pose determination process.
By integrating sensing and communications in one system, the system need not operate according to only a single paradigm. Thus, the combination of sensing-based techniques and reference signal-based techniques can yield enhanced pose determination.
The enhanced pose determination may, for example, include obtaining UE channel sub-space information, which is particularly useful for UE channel reconstruction at the sensing node, especially for a beam-based operation and communication. The UE channel sub-space is a subset of the entire algebraic space, defined over the spatial domain, in which the entire channel from the TP to the UE lies. Accordingly, the UE channel sub-space defines the TP-to-UE channel with very high accuracy. The signals transmitted over other sub-spaces result in a negligible contribution to the UE channel. Knowledge of the UE channel sub-space helps to reduce the effort needed for channel measurement at the UE and channel reconstruction at the network-side. Therefore, the combination of sensing-based techniques and reference signal-based techniques may enable the UE channel reconstruction with much less overhead as compared to traditional methods. Sub-space information can also facilitate sub-space based sensing to reduce sensing complexity and improve sensing accuracy.
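The sub-space idea above can be illustrated with a small linear-algebra sketch. This is a toy example under stated assumptions (numpy; arbitrary dimensions; real-valued channel): if the TP-to-UE channel lies in a low-dimensional sub-space of the full spatial domain, a few sub-space coefficients reconstruct the channel that would otherwise need the full-dimensional measurement.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx = 64    # full spatial dimension at the TP (illustrative)
rank = 4     # dimension of the UE channel sub-space (illustrative)

# Orthonormal basis U of the sub-space, and a channel h lying inside it.
U = np.linalg.qr(rng.standard_normal((n_tx, rank)))[0]
h = U @ rng.standard_normal(rank)

# Measure only `rank` sub-space coefficients instead of n_tx entries,
# then reconstruct the channel by projecting back onto the sub-space.
coeffs = U.conj().T @ h
h_rec = U @ coeffs

print(np.allclose(h, h_rec))  # True: 4 coefficients suffice, not 64
```

Signals transmitted over the orthogonal complement of U would contribute negligibly to h, which is the sense in which sub-space knowledge reduces measurement and reconstruction overhead.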
Sensing Channel
In some embodiments of integrated sensing and communication, a same radio access technology (RAT) is used for sensing and communication. This avoids the need to multiplex two different RATs under one carrier spectrum, or to allocate two different carrier spectrums to the two different RATs.
In embodiments that integrate sensing and communication under one RAT, a first set of channels may be used to transmit a sensing signal, and a second set of channels may be used to transmit a communications signal. In some embodiments, each channel in the first set of channels and each channel in the second set of channels is a logical channel, a transport channel, or a physical channel.
At the physical layer, communication and sensing may be performed via separate physical channels. For example, a first physical downlink shared channel PDSCH-C is defined for data communication, while a second physical downlink shared channel PDSCH-S is defined for sensing. Similarly, separate physical uplink shared channels (PUSCH) , PUSCH-C and PUSCH-S, could be defined for uplink communication and sensing.
In another example, the same PDSCH and PUSCH could also be used for both communication and sensing, with separate logical layer channels and/or transport layer channels defined for communication and sensing. Note also that control channel (s) and data channel (s) for sensing can have the same or different channel structure (format) , and can occupy the same or different frequency bands or bandwidth parts.
In a further example, a common physical downlink control channel (PDCCH) and a common physical uplink control channel (PUCCH) is used to carry control information for both sensing and communication. Alternatively, separate physical layer control channels may be used to carry separate control information for communication and sensing. For example, PUCCH-S and PUCCH-C could be used for uplink control for sensing and communication respectively, and PDCCH-S and PDCCH-C for downlink control for sensing and communication respectively.
Different combinations of shared and dedicated channels for sensing and communication, at each of the physical, transport, and logical layers, are possible.
Radar
The term RADAR originates from the phrase Radio Detection and Ranging; however, expressions with different forms of capitalization (Radar and radar) are equally valid and now more common. Radar is typically used for detecting a presence and a location of an object. A radar system radiates radio frequency energy and receives echoes of the energy reflected from one or more targets. The system determines the pose of a given target based on the echoes returned from the given target. The radiated energy can be in the form of an energy pulse or a continuous wave, which can be expressed or defined by a particular waveform. Examples of waveforms used in radar include frequency modulated continuous wave (FMCW) and ultra-wideband (UWB) waveforms.
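The ranging step described above follows directly from the round-trip delay of the echo. A minimal sketch (function name illustrative), assuming a monostatic geometry where the echo travels to the target and back:

```python
# Radar ranging: the echo's round-trip delay maps to range via the speed
# of light, halved because the signal traverses the path twice.

C = 299_792_458.0  # speed of light, m/s

def range_from_echo_delay(round_trip_s: float) -> float:
    """Monostatic target range (m) from the echo round-trip delay (s)."""
    return C * round_trip_s / 2

print(range_from_echo_delay(1e-6))  # ~149.9 m for a 1 microsecond echo
```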
Radar systems can be monostatic, bi-static, or multi-static. In a monostatic radar system, the radar signal transmitter and receiver are co-located, such as being integrated in a transceiver. In a bi-static radar system, the transmitter and receiver are spatially separated, and the distance of separation is comparable to, or larger than, the expected target distance (often referred to as the range) . In a multi-static radar system, two or more radar components are spatially diverse but with a shared area of coverage. A multi-static radar is also referred to as a multisite or netted radar.
Terrestrial radar applications encounter challenges such as multipath propagation and shadowing impairments. Another challenge is the problem of identifiability because terrestrial targets have similar physical attributes. Integrating sensing into a communication system is likely to suffer from these same challenges, and more.
Half-Duplex and Full-Duplex
Communication nodes can be either half-duplex or full-duplex. A half-duplex node cannot both transmit and receive using the same physical resources (time, frequency, and so on) ; conversely, a full-duplex node can transmit and receive using the same physical resources. Existing commercial wireless communications networks are all half-duplex. Even if full-duplex communications networks become practical in the future, it is expected that at least some of the nodes in the  network will still be half-duplex nodes because half-duplex devices are less complex, and have lower cost and lower power consumption. In particular, full-duplex implementation is more challenging at higher frequencies (in the millimeter wave bands for example) , and very challenging for small and low-cost devices, such as femtocell base stations and UEs.
The limitation of half-duplex nodes in the communications network presents further challenges toward integrating sensing and communications into the devices and systems of the communications network. For example, both half-duplex and full-duplex nodes can perform bi-static or multi-static sensing, but monostatic sensing typically requires the sensing node have full-duplex capability. A half-duplex node may perform monostatic sensing with certain limitations, such as in a pulsed radar with a specific duty cycle and ranging capability.
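The pulsed-radar limitation mentioned above can be quantified: while the half-duplex node is transmitting a pulse it cannot receive, so echoes from close targets are missed (a blind range), and the pulse repetition interval bounds the maximum unambiguous range. The values below are illustrative assumptions, not parameters from this disclosure.

```python
# Half-duplex monostatic pulsed radar: blind range set by the pulse width,
# maximum unambiguous range set by the pulse repetition interval (PRI).

C = 299_792_458.0  # speed of light, m/s

def blind_range_m(pulse_width_s: float) -> float:
    """Minimum detectable range: echoes arriving during transmission are lost."""
    return C * pulse_width_s / 2

def max_unambiguous_range_m(pri_s: float) -> float:
    """Farthest range whose echo returns before the next pulse is sent."""
    return C * pri_s / 2

# Example: a 1 us pulse at a 10% duty cycle (PRI = 10 us).
print(round(blind_range_m(1e-6), 1))             # 149.9 m
print(round(max_unambiguous_range_m(10e-6), 1))  # 1499.0 m
```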
Sensing Signal Waveform and Frame Structure
Properties of a sensing signal, or a signal used for both sensing and communication, include the waveform of the signal and the frame structure of the signal. The frame structure defines the time-domain boundaries of the signal. The waveform describes the shape of the signal as a function of time and frequency. Examples of waveforms that can be used for a sensing signal include ultra-wide band (UWB) pulse, Frequency-Modulated Continuous Wave (FMCW) or “chirp” , orthogonal frequency-division multiplexing (OFDM) , cyclic prefix (CP) -OFDM, and Discrete Fourier Transform spread (DFT-s) -OFDM.
In an embodiment, the sensing signal is a linear chirp signal with bandwidth B and time duration T. Such a linear chirp signal is generally known from its use in FMCW radar systems. A linear chirp signal is defined by an increase in frequency from an initial frequency, fchirp0, at an initial time, tchirp0, to a final frequency, fchirp1, at a final time, tchirp1, where the relation between the frequency (f) and time (t) can be expressed as the linear relation f - fchirp0 = α (t - tchirp0) , where α = (fchirp1 - fchirp0) / (tchirp1 - tchirp0) = B/T is defined as the chirp slope. The bandwidth of the linear chirp signal may be defined as B = fchirp1 - fchirp0 and the time duration of the linear chirp signal may be defined as T = tchirp1 - tchirp0. Such a linear chirp signal can be presented as s (t) = exp (jπαt^2) , 0 ≤ t ≤ T, in the baseband representation.
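A baseband linear chirp of the kind described above can be generated numerically. This sketch assumes numpy and the standard baseband form s(t) = exp(jπαt²) with chirp slope α = B/T; the specific bandwidth, duration, and sampling rate are illustrative choices, not values from this disclosure.

```python
import numpy as np

B = 10e6       # chirp bandwidth, Hz (illustrative)
T = 100e-6     # chirp duration, s (illustrative)
alpha = B / T  # chirp slope, Hz/s

fs = 4 * B                 # sampling rate, oversampled above the sweep
t = np.arange(0, T, 1 / fs)
s = np.exp(1j * np.pi * alpha * t**2)   # baseband linear chirp samples

# Sanity check: the instantaneous frequency f(t) = alpha * t recovered from
# the sample phase sweeps linearly from 0 up to B.
inst_freq = np.diff(np.unwrap(np.angle(s))) * fs / (2 * np.pi)
print(round(inst_freq[-1] / 1e6, 1))  # 10.0 (MHz): the final frequency ~ B
```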
Precoding
Precoding as used herein may refer to any coding operation (s) or modulation (s) that transform an input signal into an output signal. Precoding may be performed in different domains, and typically transform the input signal in a first domain to an output signal in a second domain. Precoding may include linear operations.
6G Integrated TN &NTN
A terrestrial communication system may also be referred to as a land-based or ground-based communication system, although a terrestrial communication system can also, or instead, be implemented on or in water. The non-terrestrial communication system may bridge coverage gaps for underserved areas by extending the coverage of cellular networks through non-terrestrial nodes, which will be key to ensuring global seamless coverage and providing mobile broadband services to unserved/underserved regions where it is hardly possible to deploy terrestrial access-point/base-station infrastructure, such as oceans, mountains, forests, or other remote areas.
The terrestrial communication system may be a wireless communication system using 5G technology and/or later generation wireless technology (for example, 6G or later) . In some examples, the terrestrial communication system may also accommodate some legacy wireless technologies (for example, 3G or 4G wireless technology) . The non-terrestrial communication system may be a communication system using satellite constellations, such as: conventional geostationary orbit (GEO) satellites, which broadcast public/popular content to a local server; low earth orbit (LEO) satellites, which establish a better balance between large coverage area and propagation path loss/delay; satellites in very low earth orbits (VLEO) , for which enabling technologies substantially reduce the costs of launching satellites to lower orbits; high altitude platforms (HAPs) , which provide a low path-loss air interface for users with a limited power budget; or unmanned aerial vehicles (UAVs) (or unmanned aerial systems (UAS) ) , such as airborne platforms, balloons, quadcopters, and drones, which can achieve a dense deployment since their coverage can be limited to a local area. In some examples, GEO satellites, LEO satellites, UAVs, HAPs and VLEOs may be horizontal and two-dimensional. In some examples, UAVs, HAPs and VLEOs may be coupled to integrate satellite communications into cellular networks, forming emerging 3D vertical networks that consist of many moving (other than geostationary) and high-altitude access points such as UAVs, HAPs and VLEOs.
6G MIMO
Multiple-input multiple-output (MIMO) technology allows an antenna array of multiple antennas to perform signal transmissions and receptions to meet high transmission rate requirements. The above ED 110 and T-TRP 170, and/or NT-TRP 172, use MIMO to communicate over the wireless resource blocks. MIMO utilizes multiple antennas at the transmitter and/or receiver to transmit wireless resource blocks over parallel wireless signals. MIMO may beamform parallel wireless signals for reliable multipath transmission of a wireless resource block. MIMO may bond parallel wireless signals that transport different data to increase the data rate of the wireless resource block.
In recent years, MIMO (large-scale MIMO) wireless communication systems, with the above T-TRP 170 and/or NT-TRP 172 configured with a large number of antennas, have gained wide attention from academia and industry. In a large-scale MIMO system, the T-TRP 170 and/or NT-TRP 172 is generally configured with more than ten antenna units (such as 128 or 256) and simultaneously serves dozens of EDs 110 (such as 40 EDs) . The large number of antenna units of the T-TRP 170 and NT-TRP 172 can greatly increase the degree of spatial freedom of wireless communication, greatly improve the transmission rate, spectrum efficiency and power efficiency, and largely eliminate interference between cells. The increase in the number of antennas allows each antenna unit to be made in a smaller size at a lower cost. Using the degree of spatial freedom provided by the large-scale antenna units, the T-TRP 170 and NT-TRP 172 of each cell can communicate with many EDs 110 in the cell on the same time-frequency resource at the same time, thus greatly increasing the spectrum efficiency. The large number of antenna units of the T-TRP 170 and/or NT-TRP 172 also enables each user to have better spatial directivity for uplink and downlink transmission, so that the transmit power of the T-TRP 170 and/or NT-TRP 172 and of an ED 110 is appreciably reduced, and the power efficiency is greatly increased. When the number of antennas of the T-TRP 170 and/or NT-TRP 172 is sufficiently large, the random channels between each ED 110 and the T-TRP 170 and/or NT-TRP 172 approach orthogonality, so that interference between cells and users and the effect of noise can be largely eliminated. The plurality of advantages described above give large-scale MIMO a promising application prospect.
A MIMO system may include a receiver connected to a receive (Rx) antenna, a transmitter connected to a transmit (Tx) antenna, and a signal processor connected to the transmitter and the receiver. Each of the Rx antenna and the Tx antenna may include a plurality of antennas. For instance, the Rx antenna may have a uniform linear array (ULA) antenna array in which the plurality of antennas are arranged in a line at even intervals. When a radio frequency (RF) signal is transmitted through the Tx antenna, the Rx antenna may receive a signal reflected and returned from a forward target.
A non-exhaustive list of possible units or possible configurable parameters in some embodiments of a MIMO system includes:
Panel: a unit of an antenna group, antenna array, or antenna sub-array that can control its Tx or Rx beam independently.
Beam: A beam is formed by performing amplitude and/or phase weighting on data transmitted or received by at least one antenna port, or may be formed by using another method, for example, adjusting a related parameter of an antenna unit. The beam may include a Tx beam and/or an Rx beam. The transmit beam indicates the distribution of signal strength formed in different directions in space after a signal is transmitted through an antenna. The receive beam indicates the distribution, in different directions in space, of the signal strength of a wireless signal received through an antenna. The beam information may be a beam identifier, or antenna port (s) identifier, or CSI-RS resource identifier, or SSB resource identifier, or SRS resource identifier, or other reference signal resource identifier.
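Beam formation by phase weighting, as described above, can be sketched numerically for the ULA case. The array size, element spacing, and steering angle below are illustrative assumptions; the sketch shows how per-antenna phase weights concentrate signal strength in one direction.

```python
import numpy as np

def steering_vector(n_ant, d_over_lambda, theta_deg):
    """Per-antenna phase weights pointing an n_ant-element ULA toward theta (broadside = 0 deg)."""
    n = np.arange(n_ant)
    phase = 2 * np.pi * d_over_lambda * n * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * phase) / np.sqrt(n_ant)      # unit-norm Tx weight vector

def array_gain(weights, d_over_lambda, theta_deg):
    """Signal-strength distribution of the weighted array in direction theta."""
    n_ant = len(weights)
    a = steering_vector(n_ant, d_over_lambda, theta_deg) * np.sqrt(n_ant)
    return np.abs(np.vdot(weights, a)) ** 2

# 8 antennas, half-wavelength spacing, Tx beam steered to 30 degrees (illustrative values):
w = steering_vector(8, 0.5, 30.0)
print(round(array_gain(w, 0.5, 30.0), 3))     # peak gain in the steered direction (equals n_ant)
print(round(array_gain(w, 0.5, -30.0), 3))    # near-zero gain away from the beam
```

The peak gain in the steered direction equals the number of antenna units, illustrating the spatial directivity discussed in the large-scale MIMO paragraph above.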
6G AI/ML
Artificial Intelligence technologies can be applied in communication, including artificial intelligence or machine learning (AI/ML) based communication in the physical layer and/or AI/ML based communication in the higher layer, for example medium access control (MAC) layer. For example, in the physical layer, the AI/ML based communication may aim to optimize component design and/or improve the algorithm performance. For the MAC layer, the AI/ML based communication may aim to utilize the AI/ML capability for learning, prediction, and/or making a decision to solve a complicated optimization problem with possible better strategy and/or optimal solution, for example to optimize the functionality in the MAC layer, such as intelligent TRP management, intelligent beam management, intelligent channel resource allocation, intelligent power control, intelligent spectrum utilization, intelligent modulation and coding scheme (MCS) , intelligent hybrid automatic repeat request (HARQ) strategy, intelligent transmit/receive (Tx/Rx) mode adaption, and so on.
The following are some terminologies which are used in AI/ML field:
Data collection
Data is a very important component of AI/ML techniques. Data collection is a process of collecting data, by network nodes, a management entity, or a UE, for the purposes of AI/ML model training, data analytics and inference.
AI/ML model training
AI/ML model training is a process of training an AI/ML model by learning the input/output relationship in a data-driven manner, to obtain the trained AI/ML model for inference.
AI/ML model inference
A process of using a trained AI/ML model to produce a set of outputs based on a set of inputs.
AI/ML model validation
As a sub-process of training, validation is used to evaluate the quality of an AI/ML model using a dataset different from the one used for model training. Validation can help select model parameters that generalize beyond the dataset used for model training. The model parameters obtained from training can be further adjusted by the validation process.
AI/ML model testing
Similar to validation, testing is also a sub-process of training, and it is used to evaluate the performance of a final AI/ML model using a dataset different from those used for model training and validation. Unlike AI/ML model validation, testing does not assume subsequent tuning of the model.
Online training:
Online training means an AI/ML training process in which the model being used for inference is typically trained continuously in (near) real-time as new training samples arrive.
Offline training:
An AI/ML training process in which the model is trained on a collected dataset, and the trained model is later used or delivered for inference.
AI/ML model delivery/transfer
A generic term referring to delivery of an AI/ML model from one entity to another entity in any manner. Delivery of an AI/ML model over the air interface includes either the parameters of a model structure known at the receiving end, or a new model with its parameters. Delivery may contain a full model or a partial model.
Life cycle management (LCM)
When an AI/ML model is trained and/or used for inference at a device, it is necessary to monitor and manage the whole AI/ML process to guarantee the performance gain obtained by AI/ML technologies. For example, due to the randomness of wireless channels and the mobility of UEs, the propagation environment of wireless signals changes frequently. It is therefore difficult for an AI/ML model to maintain optimal performance in all scenarios at all times, and performance may even deteriorate sharply in some scenarios. Therefore, the life cycle management (LCM) of AI/ML models is essential for sustainable operation of AI/ML in the NR air interface.
Life cycle management covers the whole procedure of AI/ML technologies applied at one or more nodes. Specifically, it includes at least one of the following sub-processes: data collection, model training, model identification, model registration, model deployment, model configuration, model inference, model selection, model activation, model deactivation, model switching, model fallback, model monitoring, model update, model transfer/delivery and UE capability reporting.
Model monitoring can be based on inference accuracy, including metrics related to intermediate key performance indicators (KPIs) , and it can also be based on system performance, including metrics related to system performance KPIs, such as accuracy and relevance, overhead, complexity (computation and memory cost) , latency (timeliness of the monitoring result, from model failure to action) and power consumption. Moreover, the data distribution may shift after deployment due to environment changes; thus, monitoring the model based on the input or output data distribution should also be considered.
Supervised learning:
The goal of supervised learning algorithms is to train a model that maps feature vectors (inputs) to labels (outputs) , based on training data that includes example feature-label pairs. Supervised learning can analyze the training data and produce an inferred function, which can be used for mapping the inference data.
Supervised learning can be further divided into two types: Classification and Regression. Classification is used when the output of the AI/ML model is categorical, with two or more classes. Regression is used when the output of the AI/ML model is a real or continuous value.
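The regression case above (mapping feature vectors to a continuous label) can be sketched with a minimal least-squares fit. The data, weights, and feature dimensions below are synthetic assumptions chosen only to illustrate the train-then-infer flow.

```python
import numpy as np

# Synthetic training data: example feature-label pairs (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 2))        # feature vectors (inputs)
true_w = np.array([2.0, -3.0])
y = X @ true_w + 0.5                         # labels (outputs), a real/continuous value

# "Training" the inferred function: least-squares fit of weights and a bias term.
A = np.hstack([X, np.ones((100, 1))])        # append a bias column
w_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

# Inference: map a new feature vector to a predicted label.
x_new = np.array([1.0, 0.0, 1.0])            # features [1, 0] plus the bias term
prediction = float(x_new @ w_hat)
print(round(prediction, 3))                  # recovers 2*1 - 3*0 + 0.5 = 2.5
```

Replacing the continuous label with a class index and the fit with a classifier would give the classification case.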
Unsupervised learning:
In contrast to supervised learning, where the AI/ML models learn to map the input to the target output, unsupervised methods learn concise representations of the input data without labelled data, which can be used for data exploration or to analyze or generate new data. One typical unsupervised learning technique is clustering, which explores the hidden structure of the input data and provides classification results for the data.
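Clustering, as mentioned above, can be sketched with a minimal k-means implementation. The two-blob synthetic data, cluster count, and iteration budget are illustrative assumptions; the point is that the hidden two-group structure is recovered without any labels.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: explore the hidden cluster structure of unlabelled data."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]   # init centers from data points
    for _ in range(iters):
        # Assign each point to the nearest center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned points.
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, centers

# Two well-separated synthetic blobs around 0 and 5 (illustrative data).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (50, 2)), rng.normal(5, 0.2, (50, 2))])
labels, centers = kmeans(X, 2)
print(np.sort(centers.mean(1)).round(1))   # one center near 0, the other near 5
```

The recovered centers act as the "classification results for the data" referenced above, with each input assigned the label of its nearest center.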
Reinforcement learning:
Reinforcement learning is used to solve sequential decision-making problems. Reinforcement learning is a process of training the actions of an intelligent agent from an input (state) and a feedback signal (reward) in an environment. In reinforcement learning, an intelligent agent interacts with an environment by taking actions to maximize the cumulative reward. Whenever the intelligent agent takes an action, the current state of the environment may transition to a new state, and the new state resulting from the action brings an associated reward. The intelligent agent can then take the next action based on the received reward and the new state of the environment. During the training phase, the agent interacts with the environment to collect experience. The environment is often mimicked by a simulator, since it is expensive to interact directly with the real system. In the inference phase, the agent can use the optimal decision-making rule learned in the training phase to achieve the maximal accumulated reward.
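The state/action/reward loop described above can be sketched with tabular Q-learning on a toy simulated environment. Q-learning is one common reinforcement learning algorithm assumed here for illustration (the description above does not prescribe any particular algorithm), and the chain environment, learning rate, and episode count are hypothetical.

```python
import numpy as np

# Toy 5-state chain (a stand-in for a simulator): the agent starts at state 0;
# action 1 moves right, action 0 moves left; reaching state 4 yields reward 1
# and ends the episode.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

for episode in range(200):
    s = 0
    while s != 4:
        # Epsilon-greedy action selection (exploration vs. exploitation).
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = min(s + 1, 4) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == 4 else 0.0
        # Update Q toward the reward plus the discounted value of the new state.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

policy = np.argmax(Q, axis=1)
print(policy[:4])   # learned decision rule: move right (action 1) from every non-terminal state
```

The greedy policy extracted from the learned Q-table is the "optimal decision-making rule" used in the inference phase described above.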
Federated learning:
Federated learning (FL) is a machine learning technique that is used to train an AI/ML model by a central node (such as a server) and a plurality of decentralized edge nodes (for example UEs, next Generation NodeBs, “gNBs” ) .
According to the wireless FL technique, a server may provide, to an edge node, a set of model parameters (weights, biases, or gradients, for example) that describe a global AI/ML model. The edge node may initialize a local AI/ML model with the received global AI/ML model parameters. The edge node may then train the local AI/ML model using local data samples to, thereby, produce a trained local AI/ML model. The edge node may then provide, to the server, a set of AI/ML model parameters that describe the local AI/ML model.
Upon receiving, from a plurality of edge nodes, a plurality of sets of AI/ML model parameters that describe respective local AI/ML models at the plurality of edge nodes, the server may aggregate the local AI/ML model parameters reported from the plurality of edge nodes and, based on such aggregation, update the global AI/ML model. A subsequent iteration progresses much like the first iteration. The server may transmit the aggregated global model to a plurality of edge nodes. The above procedure is performed for multiple iterations until the global AI/ML model is considered finalized, for example when the AI/ML model has converged or the training stopping conditions are satisfied.
Notably, the wireless FL technique does not involve exchange of local data samples. Indeed, the local data samples remain at respective edge nodes.
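The FL round structure described above (local training at edge nodes, parameter aggregation at the server, no exchange of local samples) can be sketched as follows. Federated averaging weighted by local dataset size is assumed here as the aggregation rule, and the two-node linear-model setup is an illustrative assumption.

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1, steps=50):
    """Edge node: initialize from the global model, then train on local samples only."""
    w = global_w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg(local_models, sizes):
    """Server: aggregate local parameters, weighted by local dataset size."""
    return np.average(local_models, axis=0, weights=np.asarray(sizes, dtype=float))

# Two edge nodes with private data drawn from the same underlying model (synthetic).
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
data = []
for _ in range(2):
    X = rng.normal(size=(40, 2))
    data.append((X, X @ true_w))                # local samples never leave the node

global_w = np.zeros(2)
for rnd in range(10):                           # several federated rounds
    locals_ = [local_update(global_w, X, y) for X, y in data]
    global_w = fedavg(locals_, [len(y) for _, y in data])

print(np.round(global_w, 2))                    # converges toward true_w = [1, -2]
```

Only parameter vectors cross the server-node boundary in each round, mirroring the privacy property noted above.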
AI technologies (which encompass ML technologies) may be applied in communication, including AI-based communication in the physical layer and/or AI-based communication in the MAC layer. For the physical layer, the AI communication may aim to optimize component design and/or improve the algorithm performance. For example, AI may be applied in relation to the implementation of: channel coding, channel modelling, channel estimation, channel decoding, modulation, demodulation, MIMO, waveform, multiple access, physical layer element parameter optimization and update, beam forming, tracking, sensing, and/or positioning, and so on. For the MAC layer, the AI communication may aim to utilize the AI capability for learning, prediction, and/or making a decision to solve a complicated optimization problem with possible better strategy and/or optimal solution, for example to optimize the functionality in the MAC layer. For example, AI may be applied to implement: intelligent TRP management, intelligent beam management, intelligent channel resource allocation, intelligent power control, intelligent spectrum utilization, intelligent MCS, intelligent HARQ strategy, and/or intelligent transmission/reception mode adaption, and so on.
An AI architecture may involve multiple nodes, where the multiple nodes may possibly be organized in one of two modes (centralized and distributed) , both of which may be deployed in an access network, a core network, or an edge computing system or third party network. A centralized training and computing architecture is restricted by possibly large communication overhead and strict user data privacy. A distributed training and computing architecture may comprise several frameworks, for example distributed machine learning and federated learning. In some embodiments, an AI architecture may comprise an intelligent controller which can perform as a single agent or a multi-agent, based on joint optimization or individual optimization. New protocols and signaling mechanisms are desired so that the corresponding interface link can be personalized with customized parameters to meet particular requirements while minimizing signaling overhead and maximizing the whole system spectrum efficiency by personalized AI technologies.
New protocols and signaling mechanisms are provided for operating within and switching between different modes of operation, including between AI and non-AI modes, and for measurement and feedback to accommodate the different possible measurements and information that may need to be fed back, depending upon the implementation.
An air interface that uses AI as part of the implementation, for example to optimize one or more components of the air interface, will be referred to herein as an “AI enabled air interface” . In some embodiments, there may be two types of AI operation in an AI enabled air interface: both the network and the UE implement learning; or learning is only applied by the network.
Overview
Various aspects of the present disclosure are described herein and shown in the drawings by way of example. Embodiments are not in any way limited to the details that are provided as examples above; this is discussed above and is also illustrated in the further examples below.
Figs. 1-4, for example, show sensing functional frameworks and procedures, and Fig. 11 illustrates another example of a sensing functional framework and procedures. The example in Fig. 11 includes many of the same or similar parts and features as the example in Fig. 1, but is provided to illustrate and explain features that may be supported in some embodiments.
As shown in Fig. 11, the Sensing Modelling function can output sensing results, such as information about sensed objects, a reconstructed physical environment, or a reconstructed RF map, for example.
The Sensing Modelling function can be enabled by AI, for example by using AI to derive sensing results. The Sensing Modelling function may be implemented with or without AI, and accordingly in Fig. 11 one output from the Sensing Data collection is shown as sensing data for modelling rather than Training data as in Fig. 1. The reference to the sensing data being “for modelling” in this instance is referring to the Sensing Modelling function. The Sensing Modelling may, but need not necessarily, generate an AI model (or other model) for prediction by the Sensing Application function. A model is just one example of sensing results that may be generated by the Sensing Modelling function.
In addition to a Sensing Results Storage function for storing sensing results, Fig. 11 also shows that sensing results may be provided to a core network and/or a 3rd party, as one way to provide a sensing service. As shown, an output of sensing modeling, which may include partial sensing results for example, can be delivered to a core network or a 3rd party. As an example of partial sensing results, the Sensing Modelling function may generate sensing results including moving target information and a static environment map. The Sensing Modelling function could send the static environment map, as a partial sensing result, to the core network and/or the 3rd party.
In the example shown in Fig. 11, the sensing framework may be implemented in a RAN, so as to provide a sensing service by the RAN with reporting of sensing results to a core network or 3rd party.
Sensing as disclosed herein may be applied in any of various ways, to any of a wide variety of applications. Consider sensing assisted communications as an example. The Sensing Modelling function may generate sensing results in the form of an operating environment map for example, and the Sensing Application function may then use the environment map to assist communications, by performing beam prediction for example. In the case of beam prediction by the Sensing Application function, the data for the Sensing Application function (also referred to herein as Action data but shown in Fig. 11 as “Data for application” ) may be a reference signal (RS) with low density for beam management, and potential benefits of sensing in this example may include reducing the RS overhead by allowing a lower density of RS signaling. Although there is data for the Sensing Application function in this example, such data is optional. Whether data in addition to sensing results is used by the Sensing Application function may depend upon the sensing results and the output to be generated by the Sensing Application function, for example.
In another example of sensing assisted communications, the Sensing Modelling function may again generate sensing results, in the form of an environment map for example, but it generates multiple sensing results. Multiple sensing results (or sets of sensing results) may be multiple sensing models, for example, such as one model for a static environment and one model for a moving-objects environment. The Sensing Management function may then indicate, to the Sensing Results Storage function and/or to the Sensing Application function, which model the Sensing Application function is to use.
A sensing service is referenced at least above, and is another example application of sensing. For object detection, for example, the Sensing Modelling function may generate sensing results (whether there is an object and object information, which would be intruder information for intruder detection, for example) based on received input data. The sensing results can be provided to a core network or 3rd party as shown. For intruder detection, the sensing service procedure may end here. In some embodiments, the sensing results may also be stored. Object information from such detection (location, shape, and so on) may be used for other purposes, to assist communications for example. In this latter example, the Sensing Application function can use the sensing results for beam management, in which case the Data for application in Fig. 11 may be the RS for beam management as in another example above.
In these examples of object detection, the Sensing Modelling function determines whether there is an object, and the object information. Object detection may instead be a feature of the Sensing Application function, in which case the Sensing Application function determines whether there is an object. In an embodiment, the Sensing Modelling function generates a sensing model for object data (according to received input data based on one or more sensing signals to determine object information) , and the Sensing Application function uses the model for object detection.
Variations of elements or features as disclosed above with reference to Fig. 1 and embodiment 1 may also apply to the same or similar elements or features of Fig. 11. In addition, the example in Fig. 11 may be implemented with features that are disclosed herein with reference to embodiment 2, embodiment 3, embodiment 4, or elsewhere. For example, any of embodiments 1 to 4 may implement a Sensing Modelling function with or without AI, provide a sensing service with reporting of sensing results to a core network or 3rd party, and/or support sensing for any of a variety of purposes or applications.
Fig. 12 is a block diagram illustrating a sensing system according to an embodiment. The sensing framework parts and their operation as described primarily above are examples of the sensing system elements in Fig. 12 and their operation.
As shown in Fig. 12, the example sensing system 1200 includes a data collector 1210, a sensing result generator 1212, a storage subsystem 1214, an output generator 1216, and a sensing manager 1218, interconnected as shown. Other embodiments may include additional, fewer, and/or different elements, interconnected together in a similar or different way.
In general, a sensing system element or a component thereof, such as a processor, may be configured or otherwise operable to perform (or for performing) operations as disclosed herein, or programming may include instructions to perform (or for performing) , or to cause a processor to perform, such operations. The present disclosure is not in any way limited to any particular type of implementation.
For example, as described at least above, there may be one or multiple devices within a function (implementing or supporting a function, fully in the case of one device or partially in the case of multiple devices) . It should therefore be appreciated that the sensing system elements shown in Fig. 12 may be implemented in any of various ways, at one or more than one device in a communication system.
In the following description of Fig. 12, the sensing system elements may be described as being configured or otherwise operable to perform various operations. These operations are also described by way of example at least above, with reference to functions. Such functions are commonly used to describe communication network or communication system frameworks or architectures, and in the context of system embodiments such functions would be implemented in a device (or multiple devices) operable to perform operations associated with those functions. To the extent that functions are referenced in describing a system or apparatus embodiment, it is to be understood that such references are intended to denote the device (s) operable to perform operations associated with the functions.
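The interconnection of the Fig. 12 elements and their data flow can be illustrated with a minimal structural sketch. All class names, method names, and data values below are hypothetical, since the disclosure describes functions and devices rather than any particular API.

```python
# Structural sketch of the Fig. 12 sensing system elements (names are illustrative).

class DataCollector:                        # corresponds to data collector 1210
    def first_input(self):
        return [1.0, 2.0, 3.0]              # first input data, related to sensing

    def second_input(self):
        return [1.0, 2.0, 3.1]              # second input data, related to sensing management

class SensingResultGenerator:               # corresponds to sensing result generator 1212
    def generate(self, data):
        # Derive a trivial "model" of the environment from the input data.
        return {"mean": sum(data) / len(data)}

class StorageSubsystem:                     # corresponds to storage subsystem 1214
    def __init__(self):
        self._store = {}

    def put(self, key, result):
        self._store[key] = result

    def get(self, key):
        return self._store[key]

class OutputGenerator:                      # corresponds to output generator 1216
    def generate(self, result):
        # Further output based on the sensing result.
        return f"environment level: {result['mean']:.1f}"

collector, generator = DataCollector(), SensingResultGenerator()
storage, output_gen = StorageSubsystem(), OutputGenerator()

result = generator.generate(collector.first_input())   # 1210 -> 1212
storage.put("latest", result)                          # 1212 -> 1214
print(output_gen.generate(storage.get("latest")))      # 1214 -> 1216
```

In a real deployment each class could be implemented at one device or split across multiple devices, consistent with the function-to-device mapping discussed above.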
In an embodiment, a sensing system includes a data collector 1210 and a sensing result generator 1212.
The data collector is configured or otherwise operable to provide first input data that is related to sensing in a communication system, and second input data related to sensing management. The Sensing Data Collection function in Fig. 1 is an example of a data collector 1210. Features disclosed herein with reference to a Sensing Data Collection function may be implemented by the data collector 1210. The Training data referenced in respect of Figs. 1-4 and the sensing data for modeling referenced in respect of Fig. 11 are examples of the first input data, and the Monitoring data referenced in respect of Figs. 1-4 and 11 is an example of the second input data.
The sensing result generator 1212 is configured or otherwise operable to receive the first input data from the data collector 1210 and to generate a sensing result based on the first input data.
The present disclosure provides examples of sensing results that may be generated. In some embodiments, the sensing result may be or include one or more of the following:
a model of an operating environment of the communication system;
a map of the operating environment;
lookup information associated with the operating environment;
one or more characteristics of the operating environment;
characteristics of one or more objects within the operating environment;
data derived from radio signals impacted by an object or the operating environment.
A data collector 1210 and a sensing result generator 1212 support collection of sensing data and generating of sensing results. In some embodiments, a storage subsystem 1214 may also be provided. A Sensing Results Storage function as shown in Figs. 1-4 and 11 is an example of the storage subsystem 1214.
The storage subsystem 1214 is coupled to the sensing result generator 1212 in Fig. 12, and is configured or otherwise operable to receive the sensing result from the sensing result generator and to store the sensing result. The storage subsystem 1214 may also be configured or otherwise operable to perform other operations as well, such as sensing result transfer as described elsewhere herein.
An output generator 1216 may be provided in some embodiments. In the sensing system 1200, the output generator 1216 is coupled to the data collector 1210 and to the storage subsystem 1214, to receive a sensing result from the  storage subsystem. However, in other embodiments the output generator 1216 may be coupled to the sensing result generator 1212, to receive the sensing result from the sensing result generator. A Sensing Application function as shown in Figs. 1-4 and 11 is an example of the output generator 1216.
The output generator 1216 is configured to or otherwise operable to generate a further output based on the sensing result. For example, an output generated by the output generator 1216 may be for assisting communication or providing a sensing service (such as intruder detection in smart home, or UE positioning for example) . Generating an output may involve using a sensing result, such as a model, to make an inference or prediction that is provided as the output. A sensing result may be an object detection or condition detection, in which case the sensing result may trigger an alarm or alert. In the case of lookup information such as a lookup table as the sensing result, generating an output may involve receiving or otherwise obtaining input data for lookup and the output is a lookup entry that is mapped to the input data in the lookup information.
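The lookup-information case above can be sketched as follows. The table contents, key names, and default behavior are illustrative assumptions; the sketch shows an output generator mapping input data for lookup to the entry stored in the lookup information.

```python
# Sensing result in the form of lookup information (contents are illustrative):
sensing_result = {
    "object_detected": "raise_alert",
    "no_object": "no_action",
    "condition_unknown": "request_more_sensing",
}

def generate_output(lookup_info, input_data, default="request_more_sensing"):
    """Output generator: return the lookup entry mapped to the input data."""
    return lookup_info.get(input_data, default)

print(generate_output(sensing_result, "object_detected"))   # mapped entry: raise_alert
print(generate_output(sensing_result, "unmapped_input"))    # falls back to the default entry
```

A model-based sensing result would replace the dictionary lookup with an inference or prediction step, but the input-to-output flow of the output generator is the same.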
The latter example above refers to input data, and the Action data referenced in the context of Figs. 1-4 and the Data for application in Fig. 11 are examples of such input data that may be used by a Sensing Application function, or more generally by an output generator, in some embodiments. The data collector 1210 may be configured or operable to provide this input data, which may be referred to as third input data related to a further output to be generated by the output generator 1216, to the output generator. The output generator 1216 may then generate an output based on the third input data, and the sensing result.
As shown by way of example in Fig. 12, a sensing manager 1218 may be coupled to the data collector 1210 and to the sensing result generator 1212. The sensing manager may be configured to or otherwise operable to monitor the sensing result, or sensing more generally, based on the second input data and to provide feedback to the sensing result generator. A Sensing Management function as shown in Figs. 1-4 and 11 is an example of the sensing manager 1218.
Monitoring of a sensing result, or monitoring sensing more generally, may involve monitoring any of various conditions or criteria related to a sensing result, sensing, or a communication system or other system in which or in conjunction with which sensing is implemented. Such monitoring may be referred to as performance monitoring, or monitoring performance of (or performance associated with) a sensing result or sensing, for example. Performance may include, for example, sensing performance (such as sensing (or sensing result) accuracy, resolution, missed detection probability and/or frequency, false alarm probability and/or frequency) , communication performance assisted by sensing (link and/or system level performance) , or both.
Sensing monitoring is based in part on the above-referenced second input data. Ground truth data is one type of data that may be used for sensing monitoring. Another type is non ground truth data. One example of non ground truth data that may be used in sensing monitoring is data from a communication device (BS (s) and/or UE (s) , for example) monitoring the system performance of sensing.
If the performance associated with a sensing result or sensing is below a target or threshold, a further action may be initiated. For example, feedback provided by the sensing manager 1218 to the sensing result generator 1212 may be or include an indication to the sensing result generator to update the sensing result in response to performance associated with the sensing result being below a target. In this manner, the sensing result generator may be instructed or indicated to update the sensing result, by re-training a model for example.
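As an illustrative sketch only, the threshold-based feedback described above might be expressed as follows. The metric names, target fields, and signaling strings are hypothetical assumptions rather than anything specified herein:

```python
from dataclasses import dataclass

# Hypothetical performance target for a sensing result; the actual metrics
# and thresholds are deployment-specific.
@dataclass
class PerformanceTarget:
    min_accuracy: float
    max_false_alarm_rate: float

def monitor_sensing_result(accuracy: float, false_alarm_rate: float,
                           target: PerformanceTarget) -> str:
    """Return the feedback a sensing manager might send to the sensing
    result generator based on measured performance."""
    if (accuracy < target.min_accuracy
            or false_alarm_rate > target.max_false_alarm_rate):
        # Performance below target: indicate that the sensing result
        # should be updated (e.g. by re-training a model).
        return "UPDATE_SENSING_RESULT"
    return "OK"
```

For example, an accuracy of 0.9 against a minimum of 0.95 would trigger the update indication, whereas performance at or above target would not.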
The sensing manager 1218 may interact with a sensing result generator 1212 in other ways. For example, the sensing manager 1218 may be configured to or otherwise operable to transfer (or to control transfer of) the sensing result to the output generator 1216 from the sensing result generator 1212. In the embodiment shown in Fig. 12, which also includes the storage subsystem 1214, the sensing manager 1218 may be coupled to the storage subsystem as shown, and be configured to or otherwise operable to transfer (or control transfer of) the sensing result to the output generator 1216 from the storage subsystem. A model transfer request as referenced in the context of Figs. 1-4 and a sensing results transfer request as referenced in Fig. 11 are examples of signaling that may be sent by the sensing manager 1218 to the storage subsystem 1214 (or to the sensing result generator in other embodiments) to transfer the sensing result to the output generator 1216.
There are also other sensing result transfer options. For example, the sensing manager 1218 may request or otherwise obtain a sensing result from the sensing result generator 1212 or the storage subsystem 1214, and then provide the sensing result to the output generator 1216.
In some embodiments, a sensing result is one of multiple sensing results generated by a sensing result generator, and the sensing manager 1218 may be configured to or otherwise operable to select, from the multiple sensing results, the sensing result that is to be used by the output generator 1216. The selected sensing result may be indicated in signaling that is sent from the sensing manager 1218 to one or more of the sensing result generator 1212, the storage subsystem 1214, or the output generator 1216, depending on how the selected sensing result is to be provided to the output generator.
Interaction between the sensing manager 1218 and the output generator 1216 is not limited to the output generator providing its output to the sensing manager and the sensing manager receiving that output. The sensing manager 1218 may, for example, be configured to or otherwise operable to monitor the sensing result, or sensing more generally, based on the second input data and the output from the output generator 1216, and to provide feedback to the output generator to control usage of the sensing result by the output generator. The sensing manager 1218 may be configured to or otherwise operable to also or instead provide feedback to the sensing result generator 1212, based on monitoring that involves an output from the output generator 1216.
Controlling usage of a sensing result may involve, for example, sending signaling to control selection of a sensing result that the output generator 1216 is to use in generating its output, to control de-activation (or activation) of a current sensing result that was used by the output generator to generate its output, to control switching to a different sensing result (a different model for example) to be used by the output generator in generating its output, and/or to control fallback of the output generator to a previously used different sensing result or a non-sensing mode.
In an embodiment described at least above, when the Sensing Management function observes that the sensing performance of a current sensing model is not good enough, it can send model switching signaling to the Sensing Application function to switch to another sensing model, or send fallback signaling to indicate that the Sensing Application function is to use a non-sensing mode. This is one example of how a sensing manager and an output generator may interact to control usage of sensing results in generating further outputs.
As also described at least above, when there are multiple candidate sensing models, the Sensing Management function can indicate to the Sensing Application function which sensing model the Sensing Application function is to use, and activate or de-activate one or multiple of the candidate sensing models. This is another example of how a sensing manager and an output generator may interact to control usage of sensing results in generating further outputs. Activation or de-activation of a sensing result (one or more models in this example) may also be considered an example of selection of a sensing result.
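The selection, activation/de-activation, switching, and fallback behaviors described above might be sketched as follows. The class, method, and signaling names are illustrative assumptions, not specified interfaces:

```python
class SensingManager:
    """Minimal sketch of sensing-result usage control: candidate model
    activation/de-activation, switching, and fallback to non-sensing mode."""

    def __init__(self, candidate_models):
        # Map each candidate model ID to an "active" flag (initially inactive).
        self.candidates = dict.fromkeys(candidate_models, False)
        self.current = None  # model ID currently used by the output generator

    def activate(self, model_id):
        self.candidates[model_id] = True

    def deactivate(self, model_id):
        self.candidates[model_id] = False

    def signal_for(self, performance, threshold):
        """Decide what to signal to the output generator (Sensing
        Application function) after monitoring sensing performance."""
        if performance >= threshold:
            return ("KEEP", self.current)
        # Performance too low: try switching to another active candidate.
        for model_id, active in self.candidates.items():
            if active and model_id != self.current:
                self.current = model_id
                return ("SWITCH", model_id)
        # No usable candidate: indicate fallback to non-sensing mode.
        return ("FALLBACK", None)
```

In this sketch, adequate performance keeps the current model, degraded performance triggers a switch to another active candidate if one exists, and otherwise fallback signaling is produced.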
Embodiment 2 above illustrates an example in which a sensing manager 1218 may be configured or otherwise operable to provide a set of sensing results to a sensing result generator 1212, and the sensing result generator is configured to or otherwise operable to generate the sensing result by updating the set of sensing results based on the first input data. The foundation model referenced in Fig. 2 is an example of such a set of sensing results that may be provided to and updated by a sensing result generator. These features may be provided in combination with, or independently of, one or more other sensing management features.
In some embodiments, the data collector 1210 is configured to or otherwise operable to receive sensing data from multiple devices, and to generate one or more of the first input data or the second input data (or third input data) based on the received sensing data. Embodiment 3 shows an example of this. In Embodiment 3, the multiple devices are sensing functions, which as described at least above may also be referred to as sensors or sensing devices for example.
These devices from which sensing data is received may include sensors of multiple different types. The RF sensing functions and non-RF sensing functions in Fig. 3 are examples of different types of sensors. More generally, an RF sensor that uses RF sensing to collect sensing data is one example of a device from which a data collector may receive sensing data, and a non-RF sensor is another example of a device from which a data collector may receive sensing data. RF and non-RF sensors are examples of different types of sensors that use different types of sensing. Devices that provide different types of sensing data are another example of devices that may be considered devices of different types. More generally, a data collector may receive sensing data from devices that are the same or similar to each other in some respects, and/or from devices that are different from each other in some respects. The present disclosure is not restricted to sensing data that is provided by any particular type of device.
The Data Fusion function in Embodiment 3 provides input data to the Sensing Modelling, Sensing Management, and Sensing Application functions. As described at least above, data fusion could also be called data collection; another possible name for Data Fusion is input data generation, and the name “input data generator” may be used to refer to a Data Fusion element of a sensing system, for example. The data collector 1210 in Fig. 12 may provide data fusion features, or in another embodiment the data collector is (or includes) an input data generator. Data Fusion may thus be considered a special case or embodiment of data collection, as also described at least above.
Data fusion features may include data processing, and accordingly the data collector 1210 or an element thereof such as an input data generator may be configured to or otherwise operable to perform data processing of received sensing data. Such data processing may include, for example, data pre-processing and cleaning, formatting, and transformation, as well as integrating multiple data sources to produce more useful information than is provided by any individual data source. Fig. 3 provides an example in which sensing data received from multiple devices is processed in such a way that the resulting input data has less uncertainty than when the sensing data is used individually.
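One concrete illustration of fusion reducing uncertainty is inverse-variance weighted combination of independent measurements. This is a minimal sketch of only one of many possible fusion techniques, and is not a technique specified herein:

```python
def fuse_measurements(measurements):
    """Inverse-variance weighted fusion of independent sensor readings.
    Each measurement is a (value, variance) pair. The fused estimate has
    lower variance than any individual input, illustrating how combining
    multiple data sources reduces uncertainty."""
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    fused_value = sum(w * v for (v, _), w in zip(measurements, weights)) / total
    fused_variance = 1.0 / total  # always below the smallest input variance
    return fused_value, fused_variance
```

For example, fusing two independent readings of 10.0 and 12.0, each with variance 4.0, yields an estimate of 11.0 with variance 2.0, i.e. half the uncertainty of either reading alone.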
Embodiment 4 and Fig. 4 provide another example of devices from which sensing data may be received. Consistent with this example, devices from which sensing data may be received by the data collector 1210 in Fig. 12 may include a device to provide anchor data related to a sensing anchor and a device to provide non-anchor data not related to a sensing anchor. More generally, the data collector 1210 may be configured to or otherwise operable to receive sensing data from devices that may include one or more devices to provide anchor data related to one or more sensing anchors and/or one or more devices to provide non-anchor data not related to a sensing anchor.
A device may provide sensing data related to one, or more than one, sensing anchor or non-anchor. For example, a device may interact with multiple sensing anchors and/or non-anchors, and provide related sensing data to a data collector.
A sensing anchor, or similarly a non-anchor, may or may not generate sensing data. In other words, a sensing anchor or non-anchor may be or include a device that is configured to or otherwise operable to provide sensing data to a data collector, or a passive object. In the former example, it may be said that a device to provide anchor data related to a sensing anchor is (or includes) the sensing anchor, or that a sensing anchor is (or includes) such a device. In the case of a passive object as a sensing anchor, a device senses information about the object and provides sensing data to a data collector. The name “sensing anchor” is used for ease of reference for an element that is an anchor for the purpose of sensing. An anchor may, but need not necessarily, perform any sensing operations.
Several examples of anchor devices are provided at least above, in the context of Embodiment 4. These examples include a node, a device that can report ground truth, a device that transmits a sensing signal to assist one or more other sensing devices to perform sensing measurement, and a passive object. These examples also apply to non-anchors, with the exception that sensing data related to non-anchors would not be ground truth data.
As described at least above in the context of Embodiment 4 and Fig. 4, Anchor Data Collection is a function that provides input data to the Sensing Modelling, Sensing Management, and Sensing Application functions. The input data includes ground truth information. Ground truth refers to the true answer to a specific problem or question. For example, when performing target sensing, the ground truth is the target’s exact location and exact shape. For environment reconstruction, the ground truth is the exact environment, and may include building locations/shapes, street locations, and so on. Examples of input data are also provided at least above. Anchor Data Collection is described above as a function, and the name “anchor data collector” is also used herein to refer generally to an Anchor Data Collection element of a sensing system.
Non-anchor Data Collection, as also described at least above, is a function that provides input data to the Sensing Modelling, Sensing Management, and Sensing Application functions. The input data does not include ground truth information. Examples of input data from non-anchor data collection are also provided at least above. Non-anchor Data Collection is described above as a function, and the name “non-anchor data collector” is also used herein to refer generally to a Non-anchor Data Collection element of a sensing system.
Anchor Data Collection and Non-anchor Data Collection may be considered a special case or embodiment of data collection. Devices in a sensing system may include one or more devices to provide anchor data related to one or more sensing anchors, and one or more devices to provide non-anchor data that is not related to a sensing anchor. A data collector such as the data collector 1210 may include: an anchor data collector to collect, from multiple devices of the plurality of devices, anchor data related to multiple sensing anchors; and a non-anchor data collector to collect, from multiple devices of the plurality of devices, non-anchor data not related to a sensing anchor. In other embodiments, anchor and non-anchor data collection are implemented separately, in respective elements or functions as shown by way of example in Fig. 4.
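A minimal sketch of a data collector that separates anchor data (which may carry ground truth) from non-anchor data might look as follows. The record fields and the dictionary-based records are hypothetical assumptions for illustration only:

```python
class DataCollector:
    """Sketch of a data collector combining an anchor data collector and a
    non-anchor data collector, per the structure described above."""

    def __init__(self):
        self.anchor_data = []      # records related to sensing anchors
        self.non_anchor_data = []  # records without ground truth

    def collect(self, record):
        # A record related to a sensing anchor may carry ground truth;
        # non-anchor records do not. Routing here is by a hypothetical
        # "anchor_id" field in each record.
        if record.get("anchor_id") is not None:
            self.anchor_data.append(record)
        else:
            self.non_anchor_data.append(record)
```

In other embodiments, per the text above, the two collection paths could instead be implemented as entirely separate elements or functions.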
Fig. 4 also illustrates an Anchor Management function, which is responsible for performing control of anchors and non-anchors. As described at least above, the Anchor Management function can configure which node is the anchor, and indicate to the anchor to perform data collection and the corresponding collected data type. In addition, the Anchor Management function could also indicate to a non-anchor to perform data collection and the corresponding collected data type. These examples refer to data collection by anchors and non-anchors, but anchors and non-anchors may or may not provide sensing data. Devices that provide sensing data related to anchors and non-anchors may or may not be the sensing anchors or non-anchors. Thus, more generally, anchor management may involve configuring anchors, and/or non-anchors, and indicating to one or more devices (which may or may not be anchors or non-anchors for sensing) to perform sensing data collection and provide sensing data to a data collector.
Anchor Management may also be referred to, for example, as Sensing Anchor Management or, in the case of a sensing anchor being a sensing device or sensor, as Sensing Device Management, Sensor (or Sensing Device) Management, or Sensing Function Management, for example. The name “anchor manager” is also used herein to refer generally to an Anchor Management element of a sensing system. Thus, in some embodiments, a sensing system may include an anchor manager, coupled to an anchor data collector and to a non-anchor data collector, to manage sensing anchors and/or non-anchors. Managing anchors and/or non-anchors may include, for example, the configuring and indicating features described at least above.
Embodiments as disclosed herein may provide or support life cycle management (LCM) in sensing. LCM as a whole may involve an entire sensing procedure, from data collection to modelling (more generally, sensing result generation) , storage, receiving sensing results at a Sensing Application function (or more generally, an output generator) , generating an output based on a sensing result, monitoring, and other sensing management such as indicating to switch or fallback for output generation and/or to update sensing results.
LCM of sensing may therefore include sensing data collection, sensing modelling (or other sensing result generation) , sensing application (to generate an output) , and sensing management including monitoring and possible updating of sensing result generation or application.
In the context of a sensing system, LCM may involve the data collector 1210, the sensing result generator 1212, the storage subsystem 1214 to receive and store a sensing result from the sensing result generator, the output generator 1216 to generate a further output based on the sensing result, and the sensing manager 1218 for management of sensing based on one or more of the second input data or the output generated by the output generator.
Sensing LCM may include sensing functionality-based LCM and sensing model-based (or more generally sensing result-based) LCM. Sensing functionality-based LCM refers to an embodiment of LCM procedure in which a given functionality is provided by sensing operations. Sensing result-based LCM refers to an embodiment of LCM procedure in which a sensing result such as a model (which may be referred to as a sensing type) has a sensing result ID or model ID (or type ID) , and associated information is provided by sensing operations.
Sensing functionality identification refers to a process or method of identifying a sensing functionality, for a common understanding between a network and UEs for example. Information regarding a sensing functionality may be shared during functionality identification. A sensing functionality may be configured with an ID, and a UE may have one sensing model or multiple sensing models for the functionality, which may depend on UE implementation for example.
Sensing result identification refers to a process or method of identifying a sensing result (such as a model) , for a common understanding between a network and UEs for example. Information regarding a sensing result may be shared during identification. A sensing result may be configured with a result ID. In a network /UE example, the NW and the UE align the sensing result according to the result ID, and sensing result management (including LCM) may be under the control of the network.
The management of sensing by the sensing manager 1218 may involve transferring a sensing result, which is generated by the sensing result generator 1212, to the output generator 1216. A Sensing Management function may send a sensing result to a Sensing Application function, for example by sending a sensing result request to a Sensing Results Storage function, and then the Sensing Results Storage function sends the sensing result to the Sensing Application function. More generally, with reference to the example in Fig. 12, the sensing manager 1218 may be configured to or otherwise operable to transfer the sensing result to the output generator 1216 by sending a request to the storage subsystem 1214, in which case the storage subsystem is configured to or otherwise operable to receive the request from the sensing manager and to transfer the sensing result to the output generator responsive to the request.
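The request/transfer exchange described above might be sketched as follows, with illustrative interfaces for the storage subsystem and output generator (none of these class or method names are specified herein):

```python
class OutputGenerator:
    """Stand-in for a Sensing Application function / output generator."""

    def __init__(self):
        self.sensing_result = None

    def receive(self, result):
        self.sensing_result = result

class StorageSubsystem:
    """Sketch of the exchange in which the sensing manager sends a sensing
    result request to the storage subsystem, and the storage subsystem then
    transfers the stored result to the output generator."""

    def __init__(self):
        self.results = {}  # result ID -> stored sensing result

    def store(self, result_id, result):
        self.results[result_id] = result

    def handle_transfer_request(self, result_id, output_generator):
        # Responsive to the sensing manager's request, transfer the stored
        # sensing result to the output generator.
        output_generator.receive(self.results[result_id])
```

Here the sensing manager never handles the result itself; it only triggers the transfer, which matches the request-based option described above.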
A Sensing Management function may also receive the output of a Sensing Application function. The output may include information about performance of the Sensing Application function and/or performance of the communication system. In addition, a Sensing Management function may receive monitoring data from a Sensing Data Collection function, such as ground truth data, and after comparing the sensing output and the ground truth, the sensing performance can be assessed or evaluated.
Such features may be embodied in the example shown in Fig. 12 in that the management of sensing by the sensing manager 1218 may involve monitoring sensing performance based on the second input data and the output from the output generator 1216. The sensing manager 1218 may be configured to or otherwise operable to receive the output of the output generator 1216, and that output may include information about performance of the output generator and/or performance of the communication system. The sensing manager may also or instead receive monitoring data (also referred to as second input data herein) from the data collector 1210, and monitor the sensing performance based on comparing the output and the monitoring data.
After monitoring, a Sensing management function can send model switching signaling to a Sensing Application function to switch to another sensing model, or send fallback signaling to indicate to the Sensing Application function to use a non-sensing mode. In an implementation of these features consistent with the example of Fig. 12, the management of sensing by the sensing manager 1218 may involve sending, to the output generator 1216, first signaling to indicate a switch to another sensing result for generating the further output, or second signaling to indicate a non-sensing mode for generating the further output. In such an embodiment, the output generator 1216 is configured to or otherwise operable to switch to the other sensing result for generating the further output responsive to receiving the first signaling from the sensing manager 1218, or to use the non-sensing mode for generating the further output responsive to receiving the second signaling from the sensing manager.
A Sensing Management function may send current sensing performance to a Sensing Modelling function, including a current sensing output and/or one or more properties or characteristics of that output, such as any of the following: accuracy, resolution, and so on. In addition, the Sensing Management function may also request the Sensing Modelling function to retrain the model (in an AI-based embodiment) , and request to get an updated sensing model. This may involve one request or two requests. In the two-request case, the Sensing Management function sends a request to retrain or otherwise update the model, but does not need the updated model right away. After a certain time, such as when the model is to be used, the Sensing Management function sends another request, to get the updated model.
In the context of the example in Fig. 12, such features may be described as involving the sensing manager 1218 and the sensing result generator 1212. Management of sensing by the sensing manager 1218 may involve sending, to the sensing result generator 1212, any one or more of the following:
first feedback signaling to indicate current sensing performance;
second feedback signaling to indicate to the sensing result generator to update the sensing result,
and the sensing result generator is configured to or otherwise operable to update the sensing result responsive to receiving the second feedback signaling from the sensing manager.
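The two-request update flow described above might be sketched as follows. The method names, and the use of a version counter standing in for actual model re-training, are illustrative assumptions:

```python
class SensingResultGenerator:
    """Sketch of the two-request flow: the sensing manager first requests a
    re-train (update), and only later requests the updated result when it
    is actually needed."""

    def __init__(self):
        self.version = 0
        self.pending_update = False

    def request_update(self):
        # First request: schedule a re-train; the updated result is not
        # needed right away, so nothing else happens yet.
        self.pending_update = True

    def request_result(self):
        # Second request: complete any pending update, then return the
        # (possibly updated) sensing result.
        if self.pending_update:
            self.version += 1  # stands in for re-training a model
            self.pending_update = False
        return {"model_version": self.version}
```

The one-request case corresponds to calling `request_result` alone; the two-request case defers the cost of producing the updated result until it is needed.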
A sensing system or elements thereof may also provide or support other features.
The present disclosure encompasses various embodiments, including not only system embodiments, but also other embodiments such as method embodiments and embodiments related to non-transitory computer readable storage media. Embodiments may incorporate, individually or in combinations, the features disclosed herein.
An apparatus or system element may be configured to or otherwise operable to perform operations or implement features disclosed herein. An apparatus or system element may include a processor or other component that is configured, by executing programming for example, to cause the apparatus or system element to perform operations or implement features disclosed herein. An apparatus or system element may also include a non-transitory computer readable storage medium, coupled to the processor, storing programming or instructions for execution by the processor. In Fig. 7, for example, the processors 210, 260, 276 may each be or include one or more processors, and each memory 208, 258, 278 is an example of a non-transitory computer readable storage medium, in an ED 110 and a TRP 170, 172. A non-transitory computer readable storage medium need not necessarily be provided only in combination with a processor, and may be provided separately in a computer program product, for example.
As an illustrative example, programming stored in or on a non-transitory computer readable storage medium may include instructions to or to cause a processor to, or a processor, device, or other component may otherwise be configured to: provide, by a data collection function in a communication system: first input data related to sensing in the communication system; and second input data related to sensing management; and generate, by a sensing result generation function in the communication system, a sensing result based on the first input data.
Apparatus embodiments are not limited to the foregoing examples, or to processor-based or programming-based embodiments.
Regarding method embodiments, Fig. 13 is a flow diagram illustrating an example method.
A sensing method consistent with the example method 1300 may include some or all of the illustrated features. LCM, for example, may include most or all of the illustrated features, whereas other embodiments may include fewer than all of those features.
For example, in one embodiment, a sensing method involves: providing at 1304, by a data collection function in a communication system for example: first input data related to sensing in the communication system; and second input data related to sensing management; and generating at 1306, by a sensing result generation function in the communication system for example, a sensing result based on the first input data.
Method embodiments may include other features, such as any one or more of the following features, for example, which are also discussed elsewhere herein:
the sensing result may be or include any one or more of the following: a model of an operating environment of the communication system; a map of the operating environment; lookup information associated with the operating environment; one or more characteristics of the operating environment; characteristics of one or more objects within the operating environment; data derived from radio signals impacted by an object or the operating environment;
receiving and storing at 1308, by a storage subsystem in the communication system for example, the sensing result from the sensing result generation function;
receiving, by an output generation function in the communication system for example, the sensing result from the sensing result generation function, or from the storage subsystem as shown by way of example by the arrow between 1308 and 1310 in Fig. 13;
generating at 1310, by the output generation function, a further output based on the sensing result;
providing at 1304, by the data collection function for example, third input data related to the further output;
the generating at 1310 may involve generating the further output based on the sensing result and the third input data;
monitoring at 1312, by a sensing management function in the communication system for example, the sensing result based on the second input data;
providing at 1314, by the sensing management function, feedback to the sensing result generation function, as shown by way of example in Fig. 13 by the arrow between 1314 and 1306;
the feedback provided to the sensing result generation function may be or include an indication to the sensing result generation function to update the sensing result in response to performance associated with the sensing result being below a target;
transferring, by the sensing management function, the sensing result to the output generation function from the sensing result generation function, or from the storage subsystem;
the sensing result may be or include one of multiple sensing results generated by the sensing result generation function, in which case a method may involve selecting, by the sensing management function for example, the sensing result from the multiple sensing results;
receiving, by the sensing management function for example, the further output from the output generation function, as illustrated by way of example in Fig. 13 by the arrow between 1310 and 1312;
monitoring the sensing result at 1312, by the sensing management function for example, based on the second input data and the further output from the output generation function;
providing feedback at 1314, by the sensing management function for example, to the output generation function to control usage of the sensing result by the output generation function, as shown by way of example in Fig. 13 by the arrow between 1314 and 1310;
a method may also involve providing feedback at 1314, by the sensing management function for example, to the sensing result generation function, as shown by way of example in Fig. 13 by the arrow between 1314 and 1306;
a method may involve providing, by the sensing management function for example, a set of sensing results to the sensing result generation function, in which case generating the sensing result at 1306 may involve updating the set of sensing results based on the first input data;
Fig. 13 illustrates providing sensing data at 1302; this may involve multiple devices, and a sensing method may involve receiving sensing data from those devices by the data collection function at 1304, in which case one or more of the first input data or the second input data (or the third input data in some embodiments) provided at 1304 may be generated by the data collection function based on the received sensing data;
the devices may include, for example, sensors of multiple different types;
the devices may include a device to provide anchor data related to a sensing anchor and a device to provide non-anchor data not related to a sensing anchor;
the device to provide anchor data related to a sensing anchor may be or include the sensing anchor;
the sensing anchor may be or include a passive object;
the data collection function may include: an anchor data collection function to collect, from two or more of the devices, anchor data related to multiple sensing anchors; and a non-anchor data collection function to collect, from two or more of the devices, non-anchor data not related to a sensing anchor;
a method may involve managing the sensing anchors, by an anchor management function in the communication system for example;
in the context of LCM, a method may include providing input data at 1304 and generating a sensing result at 1306, as well as: receiving and storing at 1308, by a storage subsystem for example, the sensing result from the sensing result generation function; generating at 1310, by an output generation function in the communication system for example, a further output based on the sensing result; and managing, by a sensing management function in the communication system for example, sensing in the communication system based on one or more of the second input data or the further output;
the managing may involve transferring the sensing result to the output generation function;
the transferring may involve sending, by the sensing management function for example, a request to the storage subsystem, in which case a method may also involve receiving, by the storage subsystem, the request from the sensing management function and transferring the sensing result to the output generation function by the storage subsystem responsive to the request as shown by way of example by the arrow between 1308 and 1310 in Fig. 13;
the managing may involve monitoring sensing performance at 1312 based on the second input data and the further output;
the managing may also involve sending, to the output generation function for example, first signaling to indicate a switch to another sensing result for generating the further output, or second signaling to indicate a non-sensing mode for generating the further output –sending the first or second signaling is an example of providing feedback at 1314 to 1310;
a method may also involve switching, by the output generation function for example, to the other sensing result for generating the further output at 1310 responsive to receiving the first signaling from the sensing manager;
a method may also involve using, by the output generation function for example, the non-sensing mode at 1310 for generating the further output responsive to receiving the second signaling from the sensing manager;
the managing may involve sending, to the sensing result generation function for example, any one or more of the following: first feedback signaling to indicate current sensing performance; second feedback signaling to indicate to the sensing result generation function to update the sensing result –sending the first or second signaling is an example of providing feedback at 1314 to 1306;
a method may also involve updating the sensing result at 1306, by the sensing result generation function for example, responsive to receiving the second feedback signaling from the sensing management function.
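One pass through the life cycle outlined above and in Fig. 13 (data collection, sensing result generation, output generation, monitoring, and feedback) might be sketched as follows, with every function supplied as a placeholder callable; the structure is illustrative, not a specified implementation:

```python
def run_sensing_lcm_cycle(sensing_data, generate, apply, monitor, target):
    """One life cycle pass per Fig. 13, with placeholder callables:
    generate ~ sensing result generation (1306), apply ~ output
    generation (1310), monitor ~ sensing management monitoring (1312)."""
    first_input = sensing_data          # 1302/1304: collect/provide input data
    result = generate(first_input)      # 1306: generate a sensing result
    output = apply(result)              # 1310: generate a further output
    performance = monitor(output)       # 1312: monitor based on the output
    if performance < target:            # 1314: feedback -> update and re-apply
        result = generate(first_input)
        output = apply(result)
    return output
```

In the test of this sketch, the first generated result performs below target, so the feedback path produces an updated result whose output is returned instead.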
A sensing system or elements thereof may also provide or support other features. It should also be noted that the division of operations among functions in the example described above is intended solely for illustrative purposes. Operations may be distributed, in a similar or different manner, among more, fewer, or different functions or system elements in other embodiments.
More generally, other features disclosed herein may also or instead be provided in method, apparatus, system, and/or other embodiments.
Features disclosed herein in the context of method embodiments, for example, may also or instead be implemented in apparatus, system, or computer program product embodiments. In addition, although embodiments are described primarily in the context of methods and systems, other implementations are also contemplated, such as instructions stored on one or more non-transitory computer-readable media. Such media could store programming or instructions to perform any of various methods consistent with the present disclosure.
Although this disclosure refers to illustrative embodiments, this is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the disclosure, will be apparent to persons skilled in the art upon reference to the description.
Features disclosed herein in the context of any particular embodiments may also or instead be implemented in other embodiments. Method embodiments, for example, may also or instead be implemented in apparatus, system, and/or computer program product embodiments.
Although aspects of the present invention have been described with reference to specific features and embodiments thereof, various modifications and combinations can be made thereto without departing from the invention. The description and drawings are, accordingly, to be regarded simply as an illustration of some embodiments of the invention as defined by the appended claims, and are contemplated to cover any and all modifications, variations, combinations or equivalents that fall within the scope of the present invention. Therefore, although embodiments and potential advantages have been described in detail, various changes, substitutions and alterations can be made herein without departing from the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps,  presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
Moreover, any module, component, or device exemplified herein that executes instructions may include or otherwise have access to a non-transitory computer readable or processor readable storage medium or media for storage of information, such as computer readable or processor readable instructions, data structures, program modules, and/or other data. A non-exhaustive list of examples of non-transitory computer readable or processor readable storage media includes magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, optical disks such as compact disc read-only memory (CD-ROM), digital video discs or digital versatile discs (DVDs), Blu-ray Disc™, or other optical storage, volatile and non-volatile, removable and non-removable media implemented in any method or technology, random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory, or other memory technology. Any such non-transitory computer readable or processor readable storage media may be part of a device or accessible or connectable thereto. Any application or module herein described may be implemented using instructions that are readable and executable by a computer or processor, and that may be stored or otherwise held by such non-transitory computer readable or processor readable storage media.

Claims (61)

  1. A sensing system comprising:
    a data collector to provide: first input data related to sensing in a communication system; and second input data related to sensing management; and
    a sensing result generator, coupled to the data collector, to receive the first input data from the data collector and to generate or obtain, based on the first input data, a sensing result.
  2. The sensing system of claim 1, wherein the sensing result comprises one or more of the following:
    a model of an operating environment of the communication system;
    a map of the operating environment;
    lookup information associated with the operating environment;
    one or more characteristics of the operating environment;
    characteristics of one or more objects within the operating environment;
    data derived from radio signals impacted by an object or the operating environment.
  3. The sensing system of claim 1 or claim 2, further comprising:
    a storage subsystem, coupled to the sensing result generator, to receive the sensing result from the sensing result generator and to store the sensing result.
  4. The sensing system of claim 1 or claim 2, further comprising:
    an output generator, coupled to the data collector and to the sensing result generator, to generate a further output based on the sensing result.
  5. The sensing system of claim 3, further comprising:
    an output generator, coupled to the data collector and to the storage subsystem, to receive the sensing result from the storage subsystem and to generate a further output based on the sensing result.
  6. The sensing system of claim 4 or claim 5,
    wherein the data collector is to provide third input data related to the further output,
    wherein the output generator is to generate the further output based on the third input data.
  7. The sensing system of any one of claims 1 to 6, further comprising:
    a sensing manager, coupled to the data collector and to the sensing result generator, to monitor the sensing result based on the second input data and to provide feedback to the sensing result generator.
  8. The sensing system of claim 7, wherein the feedback comprises an indication to the sensing result generator to update the sensing result in response to performance associated with the sensing result being below a target.
  9. The sensing system of claim 4, further comprising:
    a sensing manager, coupled to the sensing result generator, to transfer the sensing result to the output generator from the sensing result generator.
  10. The sensing system of claim 5, further comprising:
    a sensing manager, coupled to the storage subsystem, to transfer the sensing result to the output generator from the storage subsystem.
  11. The sensing system of claim 9 or claim 10,
    wherein the sensing result comprises one of a plurality of sensing results generated by the sensing result generator,
    wherein the sensing manager is to select the sensing result from the plurality of sensing results.
  12. The sensing system of any one of claims 4 to 6, further comprising:
    a sensing manager, coupled to the data collector and to the output generator, to receive the further output from the output generator and to monitor the sensing result based on the second input data and the further output from the output generator.
  13. The sensing system of claim 12, wherein the sensing manager is further to provide feedback to the output generator to control usage of the sensing result by the output generator.
  14. The sensing system of claim 12 or claim 13, wherein the sensing manager is further coupled to the sensing result generator, to provide feedback to the sensing result generator.
  15. The sensing system of any one of claims 1 to 6, further comprising:
    a sensing manager, coupled to the sensing result generator, to provide a set of sensing results to the sensing result generator,
    wherein the sensing result generator is configured to generate the sensing result by updating the set of sensing results based on the first input data.
  16. The sensing system of any one of claims 7 to 14,
    wherein the sensing manager is further to provide a set of sensing results to the sensing result generator,
    wherein the sensing result generator is configured to generate the sensing result by updating the set of sensing results based on the first input data.
  17. The sensing system of any one of claims 1 to 16, wherein the data collector is to receive sensing data from a plurality of devices, and to generate one or more of the first input data or the second input data based on the received sensing data.
  18. The sensing system of claim 17, wherein the plurality of devices comprises sensors of multiple different types.
  19. The sensing system of claim 17 or claim 18, wherein the plurality of devices comprises a device to provide anchor data related to a sensing anchor and a device to provide non-anchor data not related to a sensing anchor.
  20. The sensing system of claim 19, wherein
    the device to provide anchor data related to a sensing anchor comprises the sensing anchor, or
    the sensing anchor comprises a passive object.
  21. The sensing system of claim 19 or claim 20, wherein the data collector comprises:
    an anchor data collector to collect, from multiple devices of the plurality of devices, anchor data related to multiple sensing anchors; and
    a non-anchor data collector to collect, from multiple devices of the plurality of devices, non-anchor data not related to a sensing anchor.
  22. The sensing system of claim 21, further comprising:
    an anchor manager, coupled to the anchor data collector and to the non-anchor data collector, to manage the sensing anchors.
  23. The sensing system of claim 1 or claim 2, further comprising:
    a storage subsystem, coupled to the sensing result generator, to receive the sensing result from the sensing result generator and to store the sensing result;
    an output generator, coupled to the data collector and to the storage subsystem, to generate a further output based on the sensing result; and
    a sensing manager, coupled to the data collector, to the sensing result generator, to the storage subsystem, and to the output generator, for management of sensing based on one or more of the second input data or the further output.
  24. The sensing system of claim 23, wherein the management of sensing by the sensing manager comprises transferring the sensing result to the output generator.
  25. The sensing system of claim 24,
    wherein the sensing manager is to transfer the sensing result to the output generator by sending a request to the storage subsystem,
    wherein the storage subsystem is to receive the request from the sensing manager, and to transfer the sensing result to the output generator responsive to the request.
  26. The sensing system of claim 24 or claim 25,
    wherein the management of sensing by the sensing manager comprises monitoring sensing performance based on the second input data and the further output.
  27. The sensing system of claim 26,
    wherein the management of sensing by the sensing manager further comprises sending, to the output generator, first signaling to indicate a switch to another sensing result for generating the further output, or second signaling to indicate a non-sensing mode for generating the further output,
    wherein the output generator is to switch to the other sensing result for generating the further output responsive to receiving the first signaling from the sensing manager, or to use the non-sensing mode for generating the further output responsive to receiving the second signaling from the sensing manager.
  28. The sensing system of claim 27,
    wherein the management of sensing by the sensing manager further comprises sending, to the sensing result generator, any one or more of the following:
    first feedback signaling to indicate current sensing performance;
    second feedback signaling to indicate to the sensing result generator to update the sensing result,
    wherein the sensing result generator is to update the sensing result responsive to receiving the second feedback signaling from the sensing manager.
  29. A sensing method comprising:
    providing, by a data collection function in a communication system: first input data related to sensing in the communication system; and second input data related to sensing management; and
    generating, by a sensing result generation function in the communication system, a sensing result based on the first input data.
  30. The sensing method of claim 29, wherein the sensing result comprises one or more of the following:
    a model of an operating environment of the communication system;
    a map of the operating environment;
    lookup information associated with the operating environment;
    one or more characteristics of the operating environment;
    characteristics of one or more objects within the operating environment;
    data derived from radio signals impacted by an object or the operating environment.
  31. The sensing method of claim 29 or claim 30, further comprising:
    receiving and storing, by a storage subsystem in the communication system, the sensing result from the sensing result generation function.
  32. The sensing method of claim 29 or claim 30, further comprising:
    receiving, by an output generation function in the communication system, the sensing result from the sensing result generation function;
    generating, by the output generation function, a further output based on the sensing result.
  33. The sensing method of claim 31, further comprising:
    receiving, by an output generation function in the communication system, the sensing result from the storage subsystem;
    generating, by the output generation function, a further output based on the sensing result.
  34. The sensing method of claim 32 or claim 33, further comprising:
    providing, by the data collection function, third input data related to the further output,
    wherein generating the further output comprises generating the further output based on the sensing result and the third input data.
  35. The sensing method of any one of claims 29 to 34, further comprising:
    monitoring, by a sensing management function in the communication system, the sensing result based on the second input data;
    providing, by the sensing management function, feedback to the sensing result generation function.
  36. The sensing method of claim 35, wherein the feedback comprises an indication to the sensing result generation function to update the sensing result in response to performance associated with the sensing result being below a target.
  37. The sensing method of claim 32, further comprising:
    transferring, by a sensing management function in the communication system, the sensing result to the output generation function from the sensing result generation function.
  38. The sensing method of claim 33, further comprising:
    transferring, by a sensing management function in the communication system, the sensing result to the output generation function from the storage subsystem.
  39. The sensing method of claim 37 or claim 38,
    wherein the sensing result comprises one of a plurality of sensing results generated by the sensing result generation function, the method further comprising:
    selecting, by the sensing management function, the sensing result from the plurality of sensing results.
  40. The sensing method of any one of claims 32 to 34, further comprising:
    receiving, by a sensing management function in the communication system, the further output from the output generation function;
    monitoring the sensing result, by the sensing management function, based on the second input data and the further output from the output generation function.
  41. The sensing method of claim 40, further comprising:
    providing feedback, by the sensing management function, to the output generation function to control usage of the sensing result by the output generation function.
  42. The sensing method of claim 40 or claim 41, further comprising:
    providing feedback, by the sensing management function, to the sensing result generation function.
  43. The sensing method of any one of claims 29 to 34, further comprising:
    providing, by a sensing management function in the communication system, a set of sensing results to the sensing result generation function,
    wherein generating the sensing result comprises updating the set of sensing results based on the first input data.
  44. The sensing method of any one of claims 35 to 42, further comprising:
    providing, by the sensing management function in the communication system, a set of sensing results to the sensing result generation function,
    wherein generating the sensing result comprises updating the set of sensing results based on the first input data.
  45. The sensing method of any one of claims 29 to 44, further comprising:
    receiving, by the data collection function, sensing data from a plurality of devices;
    generating, by the data collection function, one or more of the first input data or the second input data based on the received sensing data.
  46. The sensing method of claim 45, wherein the plurality of devices comprises sensors of multiple different types.
  47. The sensing method of claim 45 or claim 46, wherein the plurality of devices comprises a device to provide anchor data related to a sensing anchor and a device to provide non-anchor data not related to a sensing anchor.
  48. The sensing method of claim 47, wherein
    the device to provide anchor data related to a sensing anchor comprises the sensing anchor,
    or
    the sensing anchor comprises a passive object.
  49. The sensing method of claim 47 or claim 48, wherein the data collection function comprises:
    an anchor data collection function to collect, from multiple devices of the plurality of devices, anchor data related to multiple sensing anchors; and
    a non-anchor data collection function to collect, from multiple devices of the plurality of devices, non-anchor data not related to a sensing anchor.
  50. The sensing method of claim 49, further comprising:
    managing the sensing anchors, by an anchor management function in the communication system.
  51. The sensing method of claim 29 or claim 30, further comprising:
    receiving and storing, by a storage subsystem in the communication system, the sensing result from the sensing result generation function;
    generating, by an output generation function in the communication system, a further output based on the sensing result; and
    managing, by a sensing management function in the communication system, sensing in the communication system based on one or more of the second input data or the further output.
  52. The sensing method of claim 51, wherein the managing comprises transferring the sensing result to the output generation function.
  53. The sensing method of claim 52,
    wherein the transferring comprises sending, by the sensing management function, a request to the storage subsystem, the method further comprising:
    receiving, by the storage subsystem, the request from the sensing management function; and
    transferring the sensing result to the output generation function by the storage subsystem responsive to the request.
  54. The sensing method of claim 52 or claim 53,
    wherein the managing comprises monitoring sensing performance based on the second input data and the further output.
  55. The sensing method of claim 54,
    wherein the managing further comprises sending, to the output generation function, first signaling to indicate a switch to another sensing result for generating the further output, or second signaling to indicate a non-sensing mode for generating the further output,
    the method further comprising:
    switching, by the output generation function, to the other sensing result for generating the further output responsive to receiving the first signaling from the sensing management function;
    using, by the output generation function, the non-sensing mode for generating the further output responsive to receiving the second signaling from the sensing management function.
  56. The sensing method of claim 55,
    wherein the managing further comprises sending, to the sensing result generation function, any one or more of the following:
    first feedback signaling to indicate current sensing performance;
    second feedback signaling to indicate to the sensing result generation function to update the sensing result,
    the method further comprising:
    updating the sensing result, by the sensing result generation function, responsive to receiving the second feedback signaling from the sensing management function.
  57. A sensing system comprising one or more processors configured to perform the method of any one of claims 29 to 56.
  58. The sensing system of claim 57, wherein the one or more processors comprise processors in a plurality of different devices.
  59. A computer program comprising programming for execution by a processor, the programming including instructions to perform the method of any one of claims 29 to 56.
  60. A non-transitory computer readable medium storing programming for execution by a processor, the programming including instructions to:
    provide, by a data collection function in a communication system: first input data related to sensing in the communication system; and second input data related to sensing management; and
    generate, by a sensing result generation function in the communication system, a sensing result based on the first input data.
  61. A non-transitory computer readable medium storing programming for execution by a processor, the programming including instructions to perform the method of any one of claims 29 to 56.
PCT/CN2023/143256 2023-09-05 2023-12-29 6G Sensing Framework Pending WO2025050578A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363580525P 2023-09-05 2023-09-05
US63/580,525 2023-09-05

Publications (1)

Publication Number Publication Date
WO2025050578A1 (en) 2025-03-13




Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021047279A1 (en) * 2019-09-09 2021-03-18 Huawei Technologies Co., Ltd. Systems and methods for configuring sensing signals in a wireless communication network
US20220371614A1 (en) * 2021-05-24 2022-11-24 Zenseact Ab Ads perception development
WO2023014276A1 (en) * 2021-08-06 2023-02-09 Beammwave Ab A control unit for sensing measurement report configuration, a wireless device, a method, and a computer program product therefor
WO2023071931A1 (en) * 2021-10-27 2023-05-04 维沃移动通信有限公司 Sensing signal processing method and apparatus, and communication device
CN116347327A (en) * 2021-12-24 2023-06-27 维沃移动通信有限公司 Location perception method, perception measurement method, device, terminal and network side equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BEHRAVAN, A.; YAJNANARAYANA, V.; KESKIN, M. F.; CHEN, H.; SHRESTHA, D.; ABRUDAN, T. E.; SVENSSON, T.; SCHINDHELM, K.: "Positioning and Sensing in 6G: Gaps, Challenges, and Opportunities", IEEE Vehicular Technology Magazine, vol. 18, no. 1, March 2023, pp. 40-48, XP011935448, ISSN: 1556-6072, DOI: 10.1109/MVT.2022.3219999 *

