WO2025172899A1 - AI/ML model inference context and signaling - Google Patents
- Publication number
- WO2025172899A1 (PCT/IB2025/051569)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- prs
- models
- model
- lmf
- positioning
- Prior art date
- 2024-02-16
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G01S5/02—Position-fixing by co-ordinating two or more direction or position line determinations, or two or more distance determinations, using radio waves
- G06N20/00—Machine learning
- G06N20/10—Machine learning using kernel methods, e.g. support vector machines [SVM]
- G06N20/20—Ensemble learning
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/045—Combinations of networks
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06N3/048—Activation functions
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
- G06N3/09—Supervised learning
- G06N5/01—Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
- H04L5/0048—Allocation of pilot signals, i.e. of signals known to the receiver
- H04W24/10—Scheduling measurement reports; Arrangements for measurement reports
- H04W64/00—Locating users or terminals or network equipment for network management purposes, e.g. mobility management
- H04W8/22—Processing or transfer of terminal data, e.g. status or physical capabilities
Definitions
- the input/output interface 306 may be configured to provide an interface or interfaces to an input device, output device, or one or more input and/or output devices.
- Examples of an output device include a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof.
- An input device may allow a user to capture information into the UE 300.
- Examples of an input device include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like.
- the presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user.
- a sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, a biometric sensor, etc., or any combination thereof.
- An output device may use the same type of interface port as an input device. For example, a Universal Serial Bus (USB) port may be used to provide an input device and an output device.
- the processing circuitry 302 may be configured to communicate with an access network or other network using the communication interface 312.
- the communication interface 312 may comprise one or more communication subsystems and may include or be communicatively coupled to an antenna 322.
- the communication interface 312 may include one or more transceivers used to communicate, such as by communicating with one or more remote transceivers of another device capable of wireless communication (e.g., another UE or a network node in an access network).
- Each transceiver may include a transmitter 318 and/or a receiver 320 appropriate to provide network communications (e.g., optical, electrical, frequency allocations, and so forth).
- the transmitter 318 and receiver 320 may be coupled to one or more antennas (e.g., antenna 322) and may share circuit components, software or firmware, or alternatively be implemented separately.
- communication functions of the communication interface 312 may include cellular communication, Wi-Fi communication, LPWAN communication, data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof.
- a UE comprises an actuator, a motor, or a switch, related to a communication interface configured to receive wireless input from a network node via a wireless connection.
- the states of the actuator, the motor, or the switch may change.
- the UE may comprise a motor that adjusts the control surfaces or rotors of a drone in flight according to the received input, or a robotic arm performing a medical procedure according to the received input.
- a UE may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another UE and/or a network node.
- the UE may in this case be an M2M device, which may in a 3GPP context be referred to as an MTC device.
- the UE may implement the 3GPP NB-IoT standard.
- a UE may represent a vehicle, such as a car, a bus, a truck, a ship and an airplane, or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.
- a first UE might be or be integrated in a drone and provide the drone’s speed information (obtained through a speed sensor) to a second UE that is a remote controller operating the drone.
- the first UE may adjust the throttle on the drone (e.g. by controlling an actuator) to increase or decrease the drone’s speed.
- the first and/or the second UE can also include more than one of the functionalities described above.
- a UE might comprise the sensor and the actuator, and handle communication of data for both the speed sensor and the actuator.
- FIG. 4 shows a network node 400 in accordance with some embodiments.
- network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment, in a telecommunication network.
- network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)), O-RAN nodes or components of an O-RAN node (e.g., O-RU, O-DU, O-CU).
- Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and so, depending on the provided amount of coverage, may be referred to as femto base stations, pico base stations, micro base stations, or macro base stations.
- a base station may be a relay node or a relay donor node controlling a relay.
- a network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units, distributed units (e.g., in an O-RAN access node) and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio.
- Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).
- network nodes include multiple transmission point (multi-TRP) 5G access nodes, multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), Operation and Maintenance (O&M) nodes, Operations Support System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g., Evolved Serving Mobile Location Centers (E-SMLCs)), and/or Minimization of Drive Tests (MDTs).
- the network node 400 may be configured to support multiple radio access technologies (RATs).
- some components may be duplicated (e.g., separate memory 404 for different RATs) and some components may be reused (e.g., a same antenna 410 may be shared by different RATs).
- the network node 400 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 400, for example GSM, WCDMA, LTE, NR, WiFi, Zigbee, Z-wave, LoRaWAN, Radio Frequency Identification (RFID) or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 400.
- the processing circuitry 402 includes a system on a chip (SOC). In some embodiments, the processing circuitry 402 includes one or more of radio frequency (RF) transceiver circuitry 412 and baseband processing circuitry 414. In some embodiments, the radio frequency (RF) transceiver circuitry 412 and the baseband processing circuitry 414 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 412 and baseband processing circuitry 414 may be on the same chip or set of chips, boards, or units.
- the memory 404 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 402.
- the memory 404 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 402 and utilized by the network node 400.
- the memory 404 may be used to store any calculations made by the processing circuitry 402 and/or any data received via the communication interface 406.
- the processing circuitry 402 and memory 404 are integrated.
- the network node 400 does not include separate radio front-end circuitry 418; instead, the processing circuitry 402 includes radio front-end circuitry and is connected to the antenna 410.
- all or some of the RF transceiver circuitry 412 is part of the communication interface 406.
- the communication interface 406 includes one or more ports or terminals 416, the radio front-end circuitry 418, and the RF transceiver circuitry 412, as part of a radio unit (not shown), and the communication interface 406 communicates with the baseband processing circuitry 414, which is part of a digital unit (not shown).
- FIG. 5 is a block diagram illustrating a virtualization environment 500 in which functions implemented by some embodiments may be virtualized.
- virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources.
- virtualization can be applied to any device described herein, or components thereof, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components.
- Some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines (VMs) implemented in one or more virtual environments 500 hosted by one or more of hardware nodes, such as a hardware computing device that operates as a network node, UE, core network node, or host.
- the node may be entirely virtualized.
- the virtualization environment 500 includes components defined by the O-RAN Alliance, such as an O-Cloud environment orchestrated by a Service Management and Orchestration Framework via an O-2 interface. Virtualization may facilitate distributed implementations of a network node, UE, core network node, or host.
- Applications 502 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) are run in the virtualization environment 500 to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.
- the VMs 508 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer 506. Different embodiments of the instance of a virtual appliance 502 may be implemented on one or more of VMs 508, and the implementations may be made in different ways. Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.
- a VM 508 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine.
- Each of the VMs 508, and that part of the hardware 504 that executes that VM, be it hardware dedicated to that VM and/or hardware shared by that VM with others of the VMs, forms a separate virtual network element.
- a virtual network function is responsible for handling specific network functions that run in one or more VMs 508 on top of the hardware 504 and corresponds to the application 502.
- Hardware 504 may be implemented in a standalone network node with generic or specific components. Hardware 504 may implement some functions via virtualization. Alternatively, hardware 504 may be part of a larger cluster of hardware (e.g. such as in a data center or CPE) where many hardware nodes work together and are managed via management and orchestration 510, which, among others, oversees lifecycle management of applications 502.
- hardware 504 is coupled to one or more radio units that each include one or more transmitters and one or more receivers that may be coupled to one or more antennas. Radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station.
- some signaling can be provided with the use of a control system 512 which may alternatively be used for communication between hardware nodes and radio units.
- computing devices described herein may include the illustrated combination of hardware components, or may comprise multiple different physical components that make up a single illustrated component, with functionality partitioned between separate components.
- a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface.
- non-computationally intensive functions of any of such components may be implemented in software or firmware and computationally intensive functions may be implemented in hardware.
- processing circuitry executing instructions stored in memory, which in certain embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium.
- some or all of the functionality may be provided by the processing circuitry without executing instructions stored on a separate or discrete device-readable storage medium, such as in a hard-wired manner.
- the processing circuitry can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry alone or to other components of the computing device, but are enjoyed by the computing device as a whole, and/or by end users and a wireless network generally.
- AI/ML models are increasingly being used in wireless communications.
- One AI/ML PHY (physical layer) use case is the positioning of a target user equipment (UE).
- the methods described in this disclosure use the positioning of a target UE as an example. It is understood, however, that the described methods can also be applied to other use cases that involve using AI/ML models in wireless communications.
- the first approach relates to direct AI/ML positioning, where the AI/ML model output is a UE location.
- Direct AI/ML positioning typically refers to radio fingerprinting, where channel observation is used as the input of the AI/ML model.
- the second approach relates to AI/ML assisted positioning, where the AI/ML model output is a new measurement and/or enhancement of existing measurements.
- the model output can be, for example, LOS/NLOS (line-of-sight/non-line-of-sight) identification, timing and/or angle measurement, or the likelihood or reliability of the measurement.
- the model input is also channel observations.
- the model output can be used for performing UE positioning methods such as triangulation or trilateration, as illustrated in the sketch below.
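To make that step concrete, below is a minimal Python sketch of how model-refined range measurements from several TRPs could be combined by linearized least squares. The TRP coordinates, measurement values, and function names are illustrative assumptions, not taken from the disclosure.

```python
# Hedged sketch: AI/ML-assisted positioning outputs (e.g., enhanced timing
# measurements converted to distances from several TRPs) feeding a classical
# trilateration step. The linearized least-squares formulation is a standard
# technique, not a method prescribed by this disclosure.
import numpy as np

def trilaterate(trp_xy: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    # Linearize by subtracting the first TRP's circle equation from the rest:
    # 2 (p_i - p_0)^T x = r_0^2 - r_i^2 + ||p_i||^2 - ||p_0||^2  ->  A x = b.
    p0, r0 = trp_xy[0], ranges[0]
    A = 2.0 * (trp_xy[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(trp_xy[1:]**2, axis=1) - np.sum(p0**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

trps = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_ue = np.array([30.0, 40.0])
meas = np.linalg.norm(trps - true_ue, axis=1)  # model-refined ranges
print(trilaterate(trps, meas))                 # ~[30. 40.]
```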
- an entity (e.g., a UE) may assist in performing a target UE positioning, or may perform the positioning itself. If the entity is assisting, but not directly performing, the positioning, it is referred to as “entity-assisted positioning” (e.g., UE-assisted positioning). If the entity directly performs the positioning (e.g., using triangulation/trilateration based on model output), it is referred to as “entity-based positioning” (e.g., UE-based positioning).
- the first case (denoted as Case 1 below) relates to UE-based positioning with one or more UE-side models.
- the positioning can be direct AI/ML positioning or AI/ML assisted positioning.
- the second case (denoted as Case 2a below) relates to UE-assisted/LMF-based positioning with one or more UE-side models.
- the positioning can be AI/ML assisted positioning.
- the third case (denoted as Case 2b below) relates to UE-assisted/LMF-based positioning with one or more LMF-side models.
- the positioning can be direct AI/ML positioning.
- the fourth case (denoted as Case 3a below) relates to network node (e.g., NG-RAN, or next generation radio access network) assisted positioning with one or more network node-side models (e.g., gNB-side models, where gNB is a base station in NR).
- the positioning can be AI/ML assisted positioning.
- the fifth case (denoted as Case 3b below) relates to network node assisted (e.g., NG-RAN) positioning with one or more LMF-side models.
- the positioning can be direct AI/ML positioning. It is understood that the above cases 1, 2a, 2b, 3a, and 3b are illustrative examples. There may be other cases or scenarios, or other positioning methods for a given case. For example, while the aforementioned case 2a uses AI/ML assisted positioning as an example, it is not so limited and may also use direct AI/ML positioning. These cases are summarized in the compact sketch below.
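For quick reference, the five illustrative cases above can be captured in a small lookup structure. The field names and wording are editorial shorthand chosen here for readability, not terms defined by the disclosure.

```python
# Compact summary of the illustrative positioning cases enumerated above,
# keyed by the case label used in this disclosure.
POSITIONING_CASES = {
    "1":  {"model_side": "UE", "positioning": "UE-based",
           "ai_ml_type": "direct or assisted"},
    "2a": {"model_side": "UE", "positioning": "UE-assisted/LMF-based",
           "ai_ml_type": "assisted (direct also possible)"},
    "2b": {"model_side": "LMF", "positioning": "UE-assisted/LMF-based",
           "ai_ml_type": "direct"},
    "3a": {"model_side": "network node (e.g., gNB)",
           "positioning": "NG-RAN node assisted", "ai_ml_type": "assisted"},
    "3b": {"model_side": "LMF", "positioning": "NG-RAN node assisted",
           "ai_ml_type": "direct"},
}
print(POSITIONING_CASES["2b"]["model_side"])  # -> "LMF"
```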
- positioning a target UE in a challenging radio environment can be an issue.
- conventional methods rely on a sufficient number of line-of-sight (LOS) links, typically at least three to five LOS links, depending on the positioning method and on whether vertical position is estimated in addition to horizontal position.
- configurations for wireless signal transmission (e.g., the configuration of downlink positioning reference signal (DL PRS) transmission, or the configuration of uplink sounding reference signal (UL SRS) transmission) also form part of this context.
- the receiver end (e.g., a UE, or a network node such as a gNB) needs to be aware of the requirements of the AI/ML model with regard to the measurements of the wireless signal, when such measurements are used as model input during the model inference stage. All these aspects provide context information for model inference. Before model inference can start, context information is exchanged, and system parameters are configured according to the desired context if needed. This improves or ensures that the trained AI/ML model operates (e.g., makes inferences) within the context it was trained for. A minimal sketch of such a context check follows.
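The following is a hedged sketch of a consistency check between training-time context and inference-time context. The context fields (validity area, TRP set, PRS configuration) and the simple matching rule are assumptions chosen for illustration; the disclosure does not prescribe this exact logic.

```python
# Hedged sketch of a model inference context requirement check: before
# inference starts, the training-time context of each candidate model is
# compared with the intended inference context; a model is selected only if
# the contexts are consistent, otherwise the caller updates the positioning
# configuration or sends a notification message.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelContext:
    validity_area: str   # e.g., an area identifier (illustrative)
    trp_ids: frozenset   # TRPs whose signals the model expects
    prs_config: str      # DL PRS configuration identifier (illustrative)

def select_model_for_inference(models: dict, inference_ctx: ModelContext):
    for model_id, training_ctx in models.items():
        consistent = (
            training_ctx.validity_area == inference_ctx.validity_area
            # Assumed rule: every TRP used at inference was seen in training.
            and inference_ctx.trp_ids <= training_ctx.trp_ids
            and training_ctx.prs_config == inference_ctx.prs_config)
        if consistent:   # model inference context requirement satisfied
            return model_id
    return None          # caller updates configurations or notifies

models = {"m1": ModelContext("areaA", frozenset({1, 2, 3, 4}), "prs-cfg-7")}
ctx = ModelContext("areaA", frozenset({1, 2, 3}), "prs-cfg-7")
print(select_model_for_inference(models, ctx))  # -> "m1"
```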
Abstract
Methods and systems for enhancing context information consistency are provided. A method performed by a user equipment (UE) includes receiving, or sending, context information related to one or more of: training data collection associated with AI/ML models, training of the models, and inference based on the models. The models are configured to provide measurements associated with positioning the UE. The method further includes, if a model inference context requirement is satisfied based on consistency between the context information related to the training of at least one of the models and the context information related to the inference based on the at least one of the models, selecting the at least one of the models for performing model inference, or sending a report including at least one of configurations associated with positioning the UE or the capabilities of the UE.
Description
AI/ML MODEL INFERENCE CONTEXT AND SIGNALING
CROSS REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to U.S. Provisional Patent Application No. 63/554,918 filed on February 16, 2024, titled “AI/ML MODEL INFERENCE CONTEXT AND SIGNALING”.
FIELD
[0002] The present disclosure relates generally to communication systems and, more specifically, to methods and systems for enhancing consistency among various context information related to training data collection associated with one or more artificial intelligence/machine learning (AI/ML) models, training of the one or more AI/ML models, and inference based on the one or more AI/ML models.
BACKGROUND
[0003] Artificial intelligence (AI) and machine learning (ML) have been investigated, in both academia and industry, as promising tools to optimize the design of the air interface in wireless communication networks. Example use cases include using autoencoders for channel state information (CSI) compression to reduce the feedback overhead and improve channel prediction accuracy; using deep neural networks for classifying line-of-sight (LOS) and non-LOS (NLOS) conditions to enhance the positioning accuracy; using reinforcement learning for beam selection at the network side and/or the UE side to reduce the signaling overhead and beam alignment latency; and using deep reinforcement learning to learn an optimal precoding policy for complex MIMO precoding problems.
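As a concrete illustration of the CSI-compression use case mentioned above, the following is a minimal Python/PyTorch sketch of an autoencoder whose encoder could run on the UE side and whose decoder could run on the network side. The dimensions, architecture, and training objective are illustrative assumptions, not details from this disclosure.

```python
# Minimal sketch of CSI compression with an autoencoder: the encoder
# compresses a CSI vector into a short feedback codeword (reducing feedback
# overhead) and the decoder reconstructs it. Sizes are invented for the
# example.
import torch
import torch.nn as nn

class CsiAutoencoder(nn.Module):
    def __init__(self, csi_dim: int = 256, code_dim: int = 32):
        super().__init__()
        # Encoder (UE side): CSI vector -> compact codeword.
        self.encoder = nn.Sequential(
            nn.Linear(csi_dim, 128), nn.ReLU(), nn.Linear(128, code_dim))
        # Decoder (network side): codeword -> reconstructed CSI.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(), nn.Linear(128, csi_dim))

    def forward(self, csi: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(csi))

model = CsiAutoencoder()
csi = torch.randn(8, 256)                       # batch of example CSI vectors
loss = nn.functional.mse_loss(model(csi), csi)  # reconstruction objective
```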
[0004] In 3rd Generation Partnership Project (3GPP) new radio (NR) technology development, the benefits of augmenting the air interface with features enabling improved support of AI/ML-based algorithms for enhanced performance and/or reduced complexity/overhead have been, and are still being, explored. By analyzing a few selected use cases (e.g., CSI feedback, beam management, and positioning), the technology development work aims at laying the foundation for future air-interface use cases leveraging AI/ML techniques.
SUMMARY
[0005] Various computer-implemented systems, methods, and articles for enhancing consistency among various context information are described herein. In one embodiment, a method performed by a user equipment (UE) is provided. The method comprises receiving, or sending, context information related to one or more of: training data collection associated with one or more artificial intelligence/machine learning (AI/ML) models, training of the one or more AI/ML models, and inference based on the one or more AI/ML models. The one or more AI/ML models are configured to provide measurements associated with positioning the UE. The method further comprises, in accordance with a determination that a model inference context requirement is satisfied based on consistency between the context information related to the training of at least one of the one or more AI/ML models and the context information related to the inference based on the at least one of the one or more AI/ML models, selecting the at least one of the one or more AI/ML models for performing model inference, or sending a report including at least one of configurations associated with positioning the UE or the UE’s capabilities in relation to performing inference based on the at least one of the one or more AI/ML models. The method further comprises that, in accordance with a determination that the model inference context requirement is not satisfied, the configurations associated with positioning the UE are updated or a notification message is provided.
[0006] In another embodiment, a method performed by a network node is provided. The method comprises sending, or receiving, context information related to one or more of: training data collection associated with one or more artificial intelligence/machine learning (AI/ML) models, training of the one or more AI/ML models, and inference based on the one or more AI/ML models. The one or more AI/ML models are configured to provide measurements associated with positioning a user equipment (UE). The method comprises, in accordance with a determination that a model inference context requirement is satisfied based on consistency between the context information related to the training of at least one of the one or more AI/ML models and the context information related to the inference based on the at least one of the one or more AI/ML models, selecting the at least one of the one or more AI/ML models for performing model inference, or sending a report including at least one of configurations associated with positioning the UE or capabilities of the UE in relation to performing inference based on the at least one of the one or more AI/ML models. The method comprises that, in accordance with a determination that the model inference context requirement is not satisfied, the configurations associated with positioning the UE are updated or a notification message is provided.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] For a better understanding of the various described embodiments, reference should be made to the Detailed Description below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
[0008] Figure 1 illustrates exemplary AI/ML model training and inference pipelines and their interactions within an AI/ML model lifecycle management procedure, in accordance with some embodiments.
[0009] Figure 2 shows an example of a communication system in accordance with some embodiments.
[0010] Figure 3 shows a user equipment (UE) in accordance with some embodiments.
[0011] Figure 4 shows a network node in accordance with some embodiments.
[0012] Figure 5 is a block diagram illustrating a virtualization environment in which functions implemented by some embodiments may be virtualized.
[0013] Figure 6 is an illustration of training dataset validity area A_dataset, model training validity area A_training, and model inference validity area A_inference, and TRPs (transmission and reception points) included in these validity areas.
[0014] Figure 7 is an illustration of training dataset validity area A_dataset, model training validity area A_training, and model inference validity area A_inference, with some TRPs excluded in some of the validity areas.
[0015] Figure 8 is a flowchart illustrating an example method performed by a UE for enhancing the context information consistency according to some embodiments.
[0016] Figure 9 is a flowchart illustrating example methods for enhancing the context information consistency using one or more UE-side models with UE-based positioning or UE- assisted/LMF-based (location management function-based) positioning, according to some embodiments.
[0017] Figure 10 is a flowchart illustrating example methods for enhancing the context information consistency using one or more LMF-side models with UE-assisted/LMF-based positioning, according to some embodiments.
[0018] Figure 11 is a flowchart illustrating example methods performed by a network node for enhancing the context information consistency, according to some embodiments.
[0019] Figure 12 is a flowchart illustrating example methods for enhancing the context information consistency using one or more LMF-side models with network-assisted positioning, according to some embodiments.
[0020] Figure 13 is a flowchart illustrating example methods for enhancing the context information consistency using one or more network-side models with network-assisted positioning, according to some embodiments.
[0021] Figure 14 is a flowchart illustrating an example method for enhancing the context information consistency using one or more LMF-side models with network-assisted positioning, according to some embodiments.
[0022] Figure 15 is a flowchart illustrating an example method for enhancing the context information consistency using one or more network node-side models with network node-assisted positioning, according to some embodiments.
DETAILED DESCRIPTION
[0023] To provide a more thorough understanding of the present invention, the following description sets forth numerous specific details, such as specific configurations, parameters, examples, and the like. It should be recognized, however, that such description is not intended as a limitation on the scope of the present invention but is intended to provide a better description of the exemplary embodiments.
[0024] Throughout the specification and claims, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise:
[0025] The phrase “in one embodiment” as used herein does not necessarily refer to the same embodiment, though it may. Thus, as described below, various embodiments of the invention may be readily combined, without departing from the scope or spirit of the invention.
[0026] As used herein, the term “or” is an inclusive “or” operator and is equivalent to the term “and/or,” unless the context clearly dictates otherwise.
[0027] The term “based on” is not exclusive and allows for being based on additional factors not described unless the context clearly dictates otherwise.
[0028] As used herein, and unless the context dictates otherwise, the term “coupled to” is intended to include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements). Therefore, the terms “coupled to” and “coupled with” are used synonymously. Within the context of a networked environment where two or more components or devices are able to exchange data, the terms “coupled to” and “coupled with” are also used to mean “communicatively coupled with”, possibly via one or more intermediary devices.
[0029] In addition, throughout the specification, the meaning of “a”, “an”, and “the” includes plural references, and the meaning of “in” includes “in” and “on”.
[0030] Although some of the various embodiments presented herein constitute a single combination of inventive elements, it should be appreciated that the inventive subject matter is considered to include all possible combinations of the disclosed elements. As such, if one embodiment comprises elements A, B, and C, and another embodiment comprises elements B and D, then the inventive subject matter is also considered to include other remaining combinations of A, B, C, or D, even if not explicitly discussed herein. Further, the transitional term “comprising” means to have as parts or members, or to be those parts or members. As used herein, the transitional term “comprising” is inclusive or open-ended and does not exclude additional, unrecited elements or method steps.
[0031] A functional framework for AI/ML model lifecycle management (LCM) is described using Figure 1.
[0032] Building an (AI/ML) model includes several development steps where the actual training of the AI/ML model is one step in a training pipeline. Developing an AI/ML model also involves the AI/ML model’s lifecycle management. This is illustrated in Figure 1, which illustrates exemplary AI/ML model training and inference pipelines, and their interactions within a model lifecycle management procedure. As illustrated in Figure 1, an AI/ML model lifecycle management typically comprises a training (re-training) pipeline 120, a model deployment stage 130, an inference pipeline 140, and a drift detection stage 150.
[0033] In some embodiments, training (re-training) pipeline 120 includes several steps such as a data ingestion step 122, a data pre-processing step 124, a model training step 126, a model evaluation step 128, and a model registration step 129. In the data ingestion step 122, a device operating an AI/ML model (e.g., a user equipment (UE), a server, or a network node) gathers raw data (e.g., training data) from a data storage such as a database. Training data can be used by the AI/ML model to learn patterns and relationships that exist within the data, so that a trained AI/ML model can make accurate predictions of classifications on inference data (e.g., new data). Training data may include input data and corresponding output data. In some examples, after the ingestion of data to the device, there may also be an additional step that controls the validity of the gathered data. In the data pre-processing step 124, the device can apply some feature engineering to the gathered data. The feature engineering may include data normalization and possibly a data transformation required for the input data of the AI/ML
model. In the model training phase 126, the AI/ML model can be trained based on the pre-processed data.
[0034] With reference still to Figure 1, in the model evaluation step 128, the AI/ML model’s performance is evaluated (e.g., benchmarked with respect to a certain model baseline performance). The performance evaluation results can be used to make adjustments to the model training. Thus, the model training step 126 and the model evaluation step 128 can be iteratively performed until an acceptable level of performance (as previously exemplified) is achieved. Afterwards, the AI/ML model is considered to be sufficiently trained to satisfy a performance requirement. The model registration step 129 then registers the AI/ML model, including any corresponding AI/ML metadata that provides information on how the AI/ML model was developed, and possibly AI/ML model evaluation performance outcomes. A hedged sketch of this train-evaluate-register loop is given below.
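The following Python sketch illustrates the iterative loop of model training (step 126) and model evaluation (step 128), followed by model registration (step 129). All helper callables, the metric, and the threshold are hypothetical stand-ins rather than interfaces defined by the disclosure.

```python
# Illustrative sketch of the training pipeline's iterative loop: train and
# evaluate repeat until the performance requirement is met, then the model
# and its metadata are registered (steps 126, 128, 129 in Figure 1).
def run_training_pipeline(train_step, eval_step, register_step,
                          target_metric=0.95, max_rounds=20):
    history = []
    for _ in range(max_rounds):
        train_step()                 # model training step 126
        metric = eval_step()         # model evaluation step 128
        history.append(metric)
        if metric >= target_metric:  # acceptable performance achieved
            break
    # Register the model with metadata on how it was developed and how it
    # performed (step 129).
    register_step({"rounds": len(history), "eval_history": history})
    return history

# Toy usage with stand-in callables:
state = {"metric": 0.5}
run_training_pipeline(
    train_step=lambda: state.update(metric=state["metric"] + 0.1),
    eval_step=lambda: state["metric"],
    register_step=lambda meta: print("registered:", meta),
)
```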
[0035] Figure 1 further illustrates an AI/ML model deployment stage 130, in which the trained (or re-trained) AI/ML model is deployed as a part of the inference pipeline 140. For example, the trained (or re-trained) AI/ML model may be deployed to a UE for making inferences or predictions based on certain collected data. In one embodiment, the inference pipeline 140 includes a data ingestion step 142, a data pre-processing step 144, a model operation step 146, and a data and model monitoring step 148. In the data ingestion step 142, a device operating an AI/ML model (e.g., a UE, a server, or a network node) gathers raw data (e.g., inference data) from a data storage. Unlike training data, raw data or inference data can be new data that have not been encountered or used by the AI/ML model. A trained AI/ML model can make predictions or classifications based on the raw data or inference data.
[0036] The data pre-processing step 144 is typically identical to the corresponding data pre-processing step 124 that occurs in the training pipeline 120. In the model operation step 146, the AI/ML model uses the trained and deployed model in an operational mode, such that it makes predictions or classifications from the pre-processed inference data (and/or any features obtained based on the raw inference data). In the data and model monitoring step 148, the device can validate that the inference data are from a distribution that aligns well with the training data, as well as monitor the AI/ML model outputs for detecting any performance drifts or operational drifts. At the drift detection stage 150, the device can provide information about any drifts in the model operations. For instance, the device can provide such information to a device implementing the training pipeline 120 such that the AI/ML model can be re-trained to at least partially correct the performance drifts or operational drifts. A minimal sketch of such an input-distribution drift check is given below.
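The following is a hedged sketch of the monitoring idea in steps 148/150: validate that inference inputs come from a distribution aligned with the training data, and flag a drift so re-training can be requested. The per-feature mean-shift statistic and threshold are illustrative assumptions; any suitable distribution test could play this role.

```python
# Hedged sketch of data/model monitoring (step 148) and drift detection
# (stage 150): compare the inference-data distribution to the training-data
# distribution and flag a drift.
import numpy as np

def detect_input_drift(train_inputs, inference_inputs, threshold=3.0):
    mu = train_inputs.mean(axis=0)
    sigma = train_inputs.std(axis=0) + 1e-9   # guard against zero variance
    # Standardized shift of the inference-data mean, per input feature.
    z = np.abs((inference_inputs.mean(axis=0) - mu) / sigma)
    return bool((z > threshold).any())

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 4))
shifted = rng.normal(5.0, 1.0, size=(200, 4))  # distribution has drifted
if detect_input_drift(train, shifted):
    print("drift detected: request re-training via the training pipeline")
```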
[0037] Figure 2 shows an example of a communication system 200 in accordance with some embodiments.
[0038] In the example, the communication system 200 includes a telecommunication network 202 that includes an access network 204, such as a radio access network (RAN), and a core network 206, which includes one or more core network nodes 208. The access network 204 includes one or more access network nodes, such as network nodes 210a and 210b (one or more of which may be generally referred to as network nodes 210), or any other similar 3rd Generation Partnership Project (3GPP) access nodes or non-3GPP access points. Moreover, as will be appreciated by those of skill in the art, a network node is not necessarily limited to an implementation in which a radio portion and a baseband portion are supplied and integrated by a single vendor. Thus, it will be understood that network nodes include disaggregated implementations or portions thereof. For example, in some embodiments, the telecommunication network 202 includes one or more Open-RAN (ORAN) network nodes. An ORAN network node is a node in the telecommunication network 202 that supports an ORAN specification (e.g., a specification published by the O-RAN Alliance, or any similar organization) and may operate alone or together with other nodes to implement one or more functionalities of any node in the telecommunication network 202, including one or more network nodes 210 and/or core network nodes 208.
[0039] Examples of an ORAN network node include an open radio unit (O-RU), an open distributed unit (O-DU), an open central unit (O-CU), including an O-CU control plane (O-CU-CP) or an O-CU user plane (O-CU-UP), a RAN intelligent controller (near-real time or non-real time) hosting software or software plug-ins, such as a near-real time control application (e.g., xApp) or a non-real time control application (e.g., rApp), or any combination thereof (the adjective “open” designating support of an ORAN specification). The network node may support a specification by, for example, supporting an interface defined by the ORAN specification, such as an A1, F1, W1, E1, E2, X2, or Xn interface, an open fronthaul user plane interface, or an open fronthaul management plane interface. Moreover, an ORAN access node may be a logical node in a physical node. Furthermore, an ORAN network node may be implemented in a virtualization environment (described further below) in which one or more network functions are virtualized. For example, the virtualization environment may include an O-Cloud computing platform orchestrated by a Service Management and Orchestration Framework via an O-2 interface defined by the O-RAN Alliance or comparable technologies. The network nodes 210 facilitate direct or indirect connection of user equipment (UE), such as by connecting UEs 212a, 212b, 212c, and 212d (one or more of which may be generally referred to as UEs 212) to the core network 206 over one or more wireless connections.
[0040] Example wireless communications over a wireless connection include transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information without the use of wires, cables, or other material conductors. Moreover, in different embodiments, the communication system 200 may include any number of wired or wireless networks, network nodes, UEs, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections. The communication system 200 may include and/or interface with any type of communication, telecommunication, data, cellular, radio network, and/or other similar type of system.
[0041] The UEs 212 may be any of a wide variety of communication devices, including wireless devices arranged, configured, and/or operable to communicate wirelessly with the network nodes 210 and other communication devices. Similarly, the network nodes 210 are arranged, capable, configured, and/or operable to communicate directly or indirectly with the UEs 212 and/or with other network nodes or equipment in the telecommunication network 202 to enable and/or provide network access, such as wireless network access, and/or to perform other functions, such as administration in the telecommunication network 202.
[0042] In the depicted example, the core network 206 connects the network nodes 210 to one or more host computing systems, such as host 216. These connections may be direct or indirect via one or more intermediary networks or devices. In other examples, network nodes may be directly coupled to hosts. The core network 206 includes one or more core network nodes (e.g., core network node 208) that are structured with hardware and software components. Features of these components may be substantially similar to those described with respect to the UEs, network nodes, and/or hosts, such that the descriptions thereof are generally applicable to the corresponding components of the core network node 208. Example core network nodes include functions of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), and/or a User Plane Function (UPF).
[0043] The host 216 may be under the ownership or control of a service provider other than an operator or provider of the access network 204 and/or the telecommunication network 202. The host 216 may host a variety of applications to provide one or more services. Examples of such applications include live and pre-recorded audio/video content, data collection services
such as retrieving and compiling data on various ambient conditions detected by a plurality of UEs, analytics functionality, social media, functions for controlling or otherwise interacting with remote devices, functions for an alarm and surveillance center, or any other such function performed by a server.
[0044] As a whole, the communication system 200 of Figure 2 enables connectivity between the UEs, network nodes, and hosts. In that sense, the communication system may be configured to operate according to predefined rules or procedures, such as specific standards that include, but are not limited to: Global System for Mobile Communications (GSM); Universal Mobile Telecommunications System (UMTS); Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, 5G standards, or any applicable future generation standard (e.g., 6G); wireless local area network (WLAN) standards, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards (WiFi); and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave, Near Field Communication (NFC), ZigBee, LiFi, and/or any low-power wide-area network (LPWAN) standards such as LoRa and Sigfox.
[0045] In some examples, the telecommunication network 202 is a cellular network that implements 3GPP standardized features. Accordingly, the telecommunication network 202 may support network slicing to provide different logical networks to different devices that are connected to the telecommunication network 202. For example, the telecommunication network 202 may provide Ultra Reliable Low Latency Communication (URLLC) services to some UEs, while providing Enhanced Mobile Broadband (eMBB) services to other UEs, and/or Massive Machine Type Communication (mMTC)/Massive IoT services to yet further UEs.
[0046] In some examples, the UEs 212 are configured to transmit and/or receive information without direct human interaction. For instance, a UE may be designed to transmit information to the access network 204 on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the access network 204. Additionally, a UE may be configured for operating in single- or multi-RAT or multi-standard mode. For example, a UE may operate with any one or combination of Wi-Fi, NR (New Radio) and LTE, i.e., being configured for multi-radio dual connectivity (MR-DC), such as E-UTRAN (Evolved-UMTS Terrestrial Radio Access Network) New Radio - Dual Connectivity (EN-DC).
[0047] In the example, the hub 214 communicates with the access network 204 to facilitate indirect communication between one or more UEs (e.g., UE 212c and/or 212d) and network nodes (e.g., network node 210b). In some examples, the hub 214 may be a controller, router, content source and analytics, or any of the other communication devices described herein
regarding UEs. For example, the hub 214 may be a broadband router enabling access to the core network 206 for the UEs. As another example, the hub 214 may be a controller that sends commands or instructions to one or more actuators in the UEs. Commands or instructions may be received from the UEs, network nodes 210, or by executable code, script, process, or other instructions in the hub 214. As another example, the hub 214 may be a data collector that acts as temporary storage for UE data and, in some embodiments, may perform analysis or other processing of the data. As another example, the hub 214 may be a content source. For example, for a UE that is a VR device, display, loudspeaker, or other media delivery device, the hub 214 may retrieve VR assets, video, audio, or other media or data related to sensory information via a network node, which the hub 214 then provides to the UE either directly, after performing local processing, and/or after adding additional local content. In still another example, the hub 214 acts as a proxy server or orchestrator for the UEs, in particular if one or more of the UEs are low-energy IoT devices.
[0048] The hub 214 may have a constant/persistent or intermittent connection to the network node 210b. The hub 214 may also allow for a different communication scheme and/or schedule between the hub 214 and UEs (e.g., UE 212c and/or 212d), and between the hub 214 and the core network 206. In other examples, the hub 214 is connected to the core network 206 and/or one or more UEs via a wired connection. Moreover, the hub 214 may be configured to connect to an M2M service provider over the access network 204 and/or to another UE over a direct connection. In some scenarios, UEs may establish a wireless connection with the network nodes 210 while still connected via the hub 214 via a wired or wireless connection. In some embodiments, the hub 214 may be a dedicated hub - that is, a hub whose primary function is to route communications to/from the UEs from/to the network node 210b. In other embodiments, the hub 214 may be a non-dedicated hub - that is, a device which is capable of operating to route communications between the UEs and network node 210b, but which is additionally capable of operating as a communication start and/or end point for certain data channels.
[0049] Figure 3 shows a UE 300 in accordance with some embodiments. The UE 300 presents additional details of some embodiments of the UE 212 of Figure 2. As used herein, a UE refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other UEs. Examples of a UE include, but are not limited to, a smart phone, mobile phone, cell phone, voice over IP (VoIP) phone, wireless local loop phone, desktop computer, personal digital assistant (PDA), wireless camera, gaming console or device, music storage/playback device, wearable terminal device, wireless endpoint, mobile
station, tablet, laptop, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), an Augmented Reality (AR) or Virtual Reality (VR) device, wireless customer-premise equipment (CPE), vehicle, vehicle-mounted or vehicle embedded/integrated wireless device, etc. Other examples include any UE identified by the 3rd Generation Partnership Project (3GPP), including a narrow band internet of things (NB-IoT) UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE.
[0050] A UE may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, Dedicated Short-Range Communication (DSRC), vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), or vehicle-to-everything (V2X). In other examples, a UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device. Instead, a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller). Alternatively, a UE may represent a device that is not intended for sale to, or operation by, an end user but which may be associated with or operated for the benefit of a user (e.g., a smart power meter).
[0051] The UE 300 includes processing circuitry 302 that is operatively coupled via a bus 304 to an input/output interface 306, a power source 308, a memory 310, a communication interface 312, and/or any other component, or any combination thereof. Certain UEs may utilize all or a subset of the components shown in Figure 3. The level of integration between the components may vary from one UE to another UE. Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc.
[0052] The processing circuitry 302 is configured to process instructions and data and may be configured to implement any sequential state machine operative to execute instructions stored as machine-readable computer programs in the memory 310. The processing circuitry 302 may be implemented as one or more hardware-implemented state machines (e.g., in discrete logic, field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), etc.); programmable logic together with appropriate firmware; one or more stored computer programs, general-purpose processors, such as a microprocessor or digital signal processor (DSP), together with appropriate software; or any combination of the above. For example, the processing circuitry 302 may include multiple central processing units (CPUs).
[0053] In the example, the input/output interface 306 may be configured to provide an interface or interfaces to an input device, output device, or one or more input and/or output
devices. Examples of an output device include a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof. An input device may allow a user to capture information into the UE 300. Examples of an input device include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like. The presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user. A sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, a biometric sensor, etc., or any combination thereof. An output device may use the same type of interface port as an input device. For example, a Universal Serial Bus (USB) port may be used to provide an input device and an output device.
[0054] In some embodiments, the power source 308 is structured as a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic device, or power cell, may be used. The power source 308 may further include power circuitry for delivering power from the power source 308 itself, and/or an external power source, to the various parts of the UE 300 via input circuitry or an interface such as an electrical power cable. Delivering power may be, for example, for charging of the power source 308. Power circuitry may perform any formatting, converting, or other modification to the power from the power source 308 to make the power suitable for the respective components of the UE 300 to which power is supplied.
[0055] The memory 310 may be or be configured to include memory such as random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, hard disks, removable cartridges, flash drives, and so forth. In one example, the memory 310 includes one or more application programs 314, such as an operating system, web browser application, a widget, gadget engine, or other application, and corresponding data 316. The memory 310 may store, for use by the UE 300, any of a variety of operating systems or combinations of operating systems.
[0056] The memory 310 may be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic
digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as a tamper resistant module in the form of a universal integrated circuit card (UICC) including one or more subscriber identity modules (SIMs), such as a USIM and/or ISIM, other memory, or any combination thereof. The UICC may for example be an embedded UICC (eUICC), integrated UICC (iUICC) or a removable UICC commonly known as a ‘SIM card.’ The memory 310 may allow the UE 300 to access instructions, application programs and the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data. An article of manufacture, such as one utilizing a communication system, may be tangibly embodied as or in the memory 310, which may be or comprise a device-readable storage medium.
[0057] The processing circuitry 302 may be configured to communicate with an access network or other network using the communication interface 312. The communication interface 312 may comprise one or more communication subsystems and may include or be communicatively coupled to an antenna 322. The communication interface 312 may include one or more transceivers used to communicate, such as by communicating with one or more remote transceivers of another device capable of wireless communication (e.g., another UE or a network node in an access network). Each transceiver may include a transmitter 318 and/or a receiver 320 appropriate to provide network communications (e.g., optical, electrical, frequency allocations, and so forth). Moreover, the transmitter 318 and receiver 320 may be coupled to one or more antennas (e.g., antenna 322) and may share circuit components, software or firmware, or alternatively be implemented separately.
[0058] In the illustrated embodiment, communication functions of the communication interface 312 may include cellular communication, Wi-Fi communication, LPWAN communication, data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof. Communications may be implemented according to one or more communication protocols and/or standards, such as IEEE 802.11, Code Division Multiplexing Access (CDMA), Wideband Code Division Multiple Access (WCDMA), GSM, LTE, New Radio (NR), UMTS, WiMax, Ethernet, transmission control protocol/internet protocol (TCP/IP), synchronous optical networking (SONET), Asynchronous Transfer Mode (ATM), QUIC, Hypertext Transfer Protocol (HTTP), and so forth.
[0059] Regardless of the type of sensor, a UE may provide an output of data captured by its sensors, through its communication interface 312, via a wireless connection to a network node. Data captured by sensors of a UE can be communicated through a wireless connection to a network node via another UE. The output may be periodic (e.g., once every 15 minutes if it reports the sensed temperature), random (e.g., to even out the load from reporting from several sensors), in response to a triggering event (e.g., when moisture is detected an alert is sent), in response to a request (e.g., a user initiated request), or a continuous stream (e.g., a live video feed of a patient).
[0060] As another example, a UE comprises an actuator, a motor, or a switch, related to a communication interface configured to receive wireless input from a network node via a wireless connection. In response to the received wireless input, the states of the actuator, the motor, or the switch may change. For example, the UE may comprise a motor that adjusts the control surfaces or rotors of a drone in flight, or a robotic arm performing a medical procedure, according to the received input.
[0061] A UE, when in the form of an Internet of Things (IoT) device, may be a device for use in one or more application domains, these domains comprising, but not limited to, city wearable technology, extended industrial application and healthcare. Non-limiting examples of such an IoT device are a device which is or which is embedded in: a connected refrigerator or freezer, a TV, a connected lighting device, an electricity meter, a robot vacuum cleaner, a voice controlled smart speaker, a home security camera, a motion detector, a thermostat, a smoke detector, a door/window sensor, a flood/moisture sensor, an electrical door lock, a connected doorbell, an air conditioning system like a heat pump, an autonomous vehicle, a surveillance system, a weather monitoring device, a vehicle parking monitoring device, an electric vehicle charging station, a smart watch, a fitness tracker, a wearable for tactile augmentation or sensory enhancement, a water sprinkler, an animal- or item-tracking device, a sensor for monitoring a plant or animal, an industrial robot, an Unmanned Aerial Vehicle (UAV), and any kind of medical device, like a heart rate monitor or a remote controlled surgical robot. A UE in the form of an IoT device comprises circuitry and/or software in dependence of the intended application of the IoT device in addition to other components as described in relation to the UE 300 shown in Figure 3.
[0062] As yet another specific example, in an IoT scenario, a UE may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another UE and/or a network node. The UE may in this case be an M2M device, which may in a 3GPP context be referred to as an MTC device.
As one particular example, the UE may implement the 3GPP NB-IoT standard. In other scenarios, a UE may represent a vehicle, such as a car, a bus, a truck, a ship and an airplane, or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.
[0063] In practice, any number of UEs may be used together with respect to a single use case. For example, a first UE might be or be integrated in a drone and provide the drone’s speed information (obtained through a speed sensor) to a second UE that is a remote controller operating the drone. When the user makes changes from the remote controller, the first UE may adjust the throttle on the drone (e.g. by controlling an actuator) to increase or decrease the drone’s speed. The first and/or the second UE can also include more than one of the functionalities described above. For example, a UE might comprise the sensor and the actuator, and handle communication of data for both the speed sensor and the actuators.
[0064] Figure 4 shows a network node 400 in accordance with some embodiments. As used herein, network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment, in a telecommunication network. Examples of network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)), O-RAN nodes or components of an O-RAN node (e.g., O-RU, O-DU, O-CU).
[0065] Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and so, depending on the provided amount of coverage, may be referred to as femto base stations, pico base stations, micro base stations, or macro base stations. A base station may be a relay node or a relay donor node controlling a relay. A network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units, distributed units (e.g., in an O-RAN access node) and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio. Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).
[0066] Other examples of network nodes include multiple transmission point (multi-TRP) 5G access nodes, multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), Operation and Maintenance (O&M) nodes, Operations Support
System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g., Evolved Serving Mobile Location Centers (E-SMLCs)), and/or Minimization of Drive Tests (MDTs).
[0067] The network node 400 includes processing circuitry 402, a memory 404, a communication interface 406, and a power source 408. The network node 400 may be composed of multiple physically separate components (e.g., a NodeB component and an RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components. In certain scenarios in which the network node 400 comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components may be shared among several network nodes. For example, a single RNC may control multiple NodeBs. In such a scenario, each unique NodeB and RNC pair may in some instances be considered a single separate network node. In some embodiments, the network node 400 may be configured to support multiple radio access technologies (RATs). In such embodiments, some components may be duplicated (e.g., separate memory 404 for different RATs) and some components may be reused (e.g., a same antenna 410 may be shared by different RATs). The network node 400 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 400, for example GSM, WCDMA, LTE, NR, WiFi, Zigbee, Z-wave, LoRaWAN, Radio Frequency Identification (RFID) or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 400.
[0068] The processing circuitry 402 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node 400 components, such as the memory 404, network node 400 functionality.
[0069] In some embodiments, the processing circuitry 402 includes a system on a chip (SOC). In some embodiments, the processing circuitry 402 includes one or more of radio frequency (RF) transceiver circuitry 412 and baseband processing circuitry 414. In some embodiments, the radio frequency (RF) transceiver circuitry 412 and the baseband processing circuitry 414 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 412 and baseband processing circuitry 414 may be on the same chip or set of chips, boards, or units.
[0070] The memory 404 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 402. The memory 404 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 402 and utilized by the network node 400. The memory 404 may be used to store any calculations made by the processing circuitry 402 and/or any data received via the communication interface 406. In some embodiments, the processing circuitry 402 and the memory 404 are integrated.
[0071] The communication interface 406 is used in wired or wireless communication of signaling and/or data between a network node, access network, and/or UE. As illustrated, the communication interface 406 comprises port(s)/terminal(s) 416 to send and receive data, for example to and from a network over a wired connection. The communication interface 406 also includes radio front-end circuitry 418 that may be coupled to, or in certain embodiments a part of, the antenna 410. Radio front-end circuitry 418 comprises filters 420 and amplifiers 422. The radio front-end circuitry 418 may be connected to an antenna 410 and processing circuitry 402. The radio front-end circuitry may be configured to condition signals communicated between antenna 410 and processing circuitry 402. The radio front-end circuitry 418 may receive digital data that is to be sent out to other network nodes or UEs via a wireless connection. The radio front-end circuitry 418 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 420 and/or amplifiers 422. The radio signal may then be transmitted via the antenna 410. Similarly, when receiving data, the antenna 410 may collect radio signals which are then converted into digital data by the radio front-end circuitry 418. The digital data may be passed to the processing circuitry 402. In other embodiments, the communication interface may comprise different components and/or different combinations of components.
[0072] In certain alternative embodiments, the network node 400 does not include separate radio front-end circuitry 418; instead, the processing circuitry 402 includes radio front-end circuitry and is connected to the antenna 410. Similarly, in some embodiments, all or some of
the RF transceiver circuitry 412 is part of the communication interface 406. In still other embodiments, the communication interface 406 includes one or more ports or terminals 416, the radio front-end circuitry 418, and the RF transceiver circuitry 412, as part of a radio unit (not shown), and the communication interface 406 communicates with the baseband processing circuitry 414, which is part of a digital unit (not shown).
[0073] The antenna 410 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals. The antenna 410 may be coupled to the radio front-end circuitry 418 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly. In certain embodiments, the antenna 410 is separate from the network node 400 and connectable to the network node 400 through an interface or port.
[0074] The antenna 410, communication interface 406, and/or the processing circuitry 402 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by the network node. Any information, data and/or signals may be received from a UE, another network node and/or any other network equipment. Similarly, the antenna 410, the communication interface 406, and/or the processing circuitry 402 may be configured to perform any transmitting operations described herein as being performed by the network node. Any information, data and/or signals may be transmitted to a UE, another network node and/or any other network equipment.
[0075] The power source 408 provides power to the various components of network node 400 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component). The power source 408 may further comprise, or be coupled to, power management circuitry to supply the components of the network node 400 with power for performing the functionality described herein. For example, the network node 400 may be connectable to an external power source (e.g., the power grid, an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry of the power source 408. As a further example, the power source 408 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry. The battery may provide backup power should the external power source fail.
[0076] Embodiments of the network node 400 may include additional components beyond those shown in Figure 4 for providing certain aspects of the network node’s functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein. For example, the network node 400 may include user interface equipment to allow input of information into the network node 400 and to allow
output of information from the network node 400. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for the network node 400. In some embodiments providing a core network node, such as core network node 208 of Figure 2, some components, such as the radio front-end circuitry 418 and the RF transceiver circuitry 412, may be omitted.
[0077] Figure 5 is a block diagram illustrating a virtualization environment 500 in which functions implemented by some embodiments may be virtualized. In the present context, virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources. As used herein, virtualization can be applied to any device described herein, or components thereof, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components. Some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines (VMs) implemented in one or more virtual environments 500 hosted by one or more hardware nodes, such as a hardware computing device that operates as a network node, UE, core network node, or host. Further, in embodiments in which the virtual node does not require radio connectivity (e.g., a core network node or host), the node may be entirely virtualized. In some embodiments, the virtualization environment 500 includes components defined by the O-RAN Alliance, such as an O-Cloud environment orchestrated by a Service Management and Orchestration Framework via an O2 interface. Virtualization may facilitate distributed implementations of a network node, UE, core network node, or host.
[0078] Applications 502 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) are run in the virtualization environment 500 to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.
[0079] Hardware 504 includes processing circuitry, memory that stores software and/or instructions executable by hardware processing circuitry, and/or other hardware devices as described herein, such as a network interface, input/output interface, and so forth. Software may be executed by the processing circuitry to instantiate one or more virtualization layers 506 (also referred to as hypervisors or virtual machine monitors (VMMs)), provide VMs 508a and 508b (one or more of which may be generally referred to as VMs 508), and/or perform any of the functions, features and/or benefits described in relation with some embodiments described herein. The virtualization layer 506 may present a virtual operating platform that appears like networking hardware to the VMs 508.
[0080] The VMs 508 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer 506. Different embodiments of the instance of a virtual appliance 502 may be implemented on one or more of VMs 508, and the implementations may be made in different ways. Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.
[0081] In the context of NFV, a VM 508 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine. Each of the VMs 508, and that part of hardware 504 that executes that VM, be it hardware dedicated to that VM and/or hardware shared by that VM with others of the VMs, forms a separate virtual network element. Still in the context of NFV, a virtual network function is responsible for handling specific network functions that run in one or more VMs 508 on top of the hardware 504 and corresponds to the application 502.
[0082] Hardware 504 may be implemented in a standalone network node with generic or specific components. Hardware 504 may implement some functions via virtualization. Alternatively, hardware 504 may be part of a larger cluster of hardware (e.g., in a data center or CPE) where many hardware nodes work together and are managed via management and orchestration 510, which, among others, oversees lifecycle management of applications 502. In some embodiments, hardware 504 is coupled to one or more radio units that each include one or more transmitters and one or more receivers that may be coupled to one or more antennas. Radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station. In some embodiments, some signaling can be provided with the use of a control system 512 which may alternatively be used for communication between hardware nodes and radio units.
[0083] Although the computing devices described herein (e.g., UEs, network nodes) may include the illustrated combination of hardware components, other embodiments may comprise computing devices with different combinations of components. It is to be understood that these computing devices may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Determining, calculating, obtaining or similar operations described herein may be performed by processing circuitry, which may process information by, for example, converting the obtained information
into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination. Moreover, while components are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, computing devices may comprise multiple different physical components that make up a single illustrated component, and functionality may be partitioned between separate components. For example, a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface. In another example, non-computationally intensive functions of any of such components may be implemented in software or firmware and computationally intensive functions may be implemented in hardware.
[0084] In certain embodiments, some or all of the functionality described herein may be provided by processing circuitry executing instructions stored in memory, which in certain embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium. In alternative embodiments, some or all of the functionality may be provided by the processing circuitry without executing instructions stored on a separate or discrete device-readable storage medium, such as in a hard-wired manner. In any of those particular embodiments, whether executing instructions stored on a non-transitory computer-readable storage medium or not, the processing circuitry can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry alone or to other components of the computing device, but are enjoyed by the computing device as a whole, and/or by end users and a wireless network generally.
[0085] As described above, AI/ML models are increasingly being used in wireless communications. One AI/ML PHY (physical layer) use case is the positioning of a target user equipment (UE). The methods described in this disclosure use the positioning of a target UE as an example. It is understood, however, that the described methods can also be applied to other use cases that involve using AI/ML models in wireless communications.
[0086] There may be various UE positioning approaches. Both positioning approaches described below have been shown to be effective in obtaining a target UE’s location. The first approach relates to direct AI/ML positioning, where the AI/ML model output is a UE location. Direct AI/ML positioning typically refers to radio fingerprinting, where channel observation is used as the input of the AI/ML model. The second approach relates to AI/ML assisted positioning, where the AI/ML model output is a new measurement and/or enhancement of
existing measurements. The model output can be, for example, LOS/NLOS (line-of-sight/non-line-of-sight) identification, a timing and/or angle measurement, or the likelihood or reliability of the measurement. The model input is likewise channel observations. The model output can be used for performing UE positioning methods such as triangulation or trilateration.
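By way of a non-limiting illustration, the following Python sketch contrasts the interfaces of the two approaches. All names, array shapes, and placeholder computations (e.g., the zero weight matrix and the crude first-path pick) are assumptions made for illustration only; a deployed model would be a trained neural network rather than these stand-ins.

import numpy as np

# Assumed shapes: N_TRP TRPs, each providing a channel impulse response (CIR)
# of T taps as the channel observation used for model input.
N_TRP, T = 18, 256

def direct_positioning_model(cir: np.ndarray) -> np.ndarray:
    """Direct AI/ML positioning: channel observation in, UE location out."""
    features = np.abs(cir).reshape(-1)   # radio 'fingerprint' feature vector
    W = np.zeros((2, features.size))     # stand-in for trained model weights
    return W @ features                  # estimated (x, y) location

def assisted_positioning_model(cir_one_trp: np.ndarray) -> dict:
    """AI/ML assisted positioning: intermediate per-link measurements out."""
    toa_index = int(np.argmax(np.abs(cir_one_trp)))  # crude first-path pick
    return {"los_probability": 0.5,                  # placeholder outputs
            "timing_estimate": toa_index,
            "reliability": 0.5}

cir = np.random.randn(N_TRP, T) + 1j * np.random.randn(N_TRP, T)
location = direct_positioning_model(cir)             # first approach
per_link = [assisted_positioning_model(cir[i]) for i in range(N_TRP)]  # second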
[0087] For the AI/ML assisted positioning approach, multiple constructions or configurations are possible. Such constructions or configurations include, for example, (a) AI/ML assisted positioning with a multi-TRP (transmission and reception points) construction; (b) assisted positioning with a single-TRP construction and one model for multi-TRPs (e.g., one model for N TRPs); or (c) assisted positioning with a single-TRP construction and multiple models for multi-TRPs (e.g., N models for N TRPs).
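The three constructions can be sketched as follows; the lambda "models" below are trivial stand-ins for trained AI/ML models, and the shapes are assumptions for illustration.

import numpy as np

N_TRP, T = 4, 64
cir = np.random.randn(N_TRP, T)            # per-TRP channel observations

# (a) multi-TRP construction: one model jointly consumes all N TRP inputs
multi_trp_model = lambda x: x.reshape(-1)[:2]
out_a = multi_trp_model(cir)

# (b) single-TRP construction, one model shared by all N TRPs
shared_model = lambda x: {"toa_index": int(np.argmax(np.abs(x)))}
out_b = [shared_model(cir[i]) for i in range(N_TRP)]

# (c) single-TRP construction, N models for N TRPs (distinct trained models
# in practice; duplicated here only to keep the sketch short)
per_trp_models = [shared_model for _ in range(N_TRP)]
out_c = [per_trp_models[i](cir[i]) for i in range(N_TRP)]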
[0088] When applying the direct and assisted AI/ML positioning to an NR wireless communication network, the following cases may be identified. Methods for enhancing the context information consistency are described for each of the below cases 1, 2a, 2b, 3a, and 3b. In each of the cases below, an entity (e.g., UE) may assist in performing a target UE positioning, or may perform the positioning itself. If the entity is assisting, but not directly performing the positioning, it is referred to as the “entity-assisted positioning” (e.g., UE-assisted positioning). If the entity directly performs the positioning (e.g., using triangulation/trilateration based on model output), it is referred to as the “entity-based positioning” (e.g., UE-based positioning). Furthermore, in the below cases, the one or more AI/ML models used for providing direct positioning (e.g., the first approach described above) or providing information for performing positioning (e.g., the second approach described above) may be located or deployed at different entities and are referred to as the “entity-side model” (e.g., a UE-side model is deployed in the UE).
[0089] The first case (denoted as Case 1 below) relates to UE-based positioning with one or more UE-side models. The positioning can be direct AI/ML positioning or AI/ML assisted positioning. The second case (denoted as Case 2a below) relates to UE-assisted/LMF-based positioning with one or more UE-side models. The positioning can be AI/ML assisted positioning. The third case (denoted as Case 2b below) relates to UE-assisted/LMF-based positioning with one or more LMF-side models. The positioning can be direct AI/ML positioning. The fourth case (denoted as Case 3a below) relates to network node (e.g., NG-RAN or next generation radio access network) assisted positioning with one or more network node side models (e.g., gNB-side models, where gNB is a base station in NR). The positioning can be AI/ML assisted positioning. The fifth case (denoted as Case 3b below) relates to network node assisted (e.g., NG-RAN) positioning with one or more LMF-side models. The
positioning can be direct AI/ML positioning. It is understood that the above cases 1, 2a, 2b, 3a, and 3b are illustrative examples. There may be other cases or scenarios, or other positioning methods for a given case. For example, while the aforementioned Case 2a uses AI/ML assisted positioning as an example, it is not so limited and may instead use direct AI/ML positioning.
[0090] In some scenarios, positioning a target UE in a challenging radio environment can be an issue. For radio signal based positioning methods, conventional methods rely on a sufficient number of line-of-sight (LoS) links, typically at least three to five LoS links, depending on the positioning method and whether vertical position is estimated in addition to horizontal position.
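As a hedged illustration of why at least three links are typically needed for horizontal positioning, the following sketch solves 2-D trilateration from three ideal LoS range measurements; the anchor coordinates and UE position are made-up values.

import numpy as np

anchors = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 30.0]])  # TRP positions (m)
ue_true = np.array([20.0, 10.0])
ranges = np.linalg.norm(anchors - ue_true, axis=1)          # ideal LoS ranges

# Linearize by subtracting the first circle equation from the others:
# 2 (a_i - a_0) . p = r_0^2 - r_i^2 + |a_i|^2 - |a_0|^2
A = 2 * (anchors[1:] - anchors[0])
b = (ranges[0] ** 2 - ranges[1:] ** 2
     + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
print(estimate)   # ~ [20.0, 10.0]; with fewer than 3 anchors the 2-D
                  # system is underdetermined (and 3-D needs a fourth link)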
[0091] In a cluttered environment, there is often a low probability of line-of-sight for a radio link between a UE and a TRP (a transmission and reception point). As an example of a cluttered environment, for an InF-DH (Indoor Factory with Dense clutter and High base station height (Tx or Rx elevated above the clutter)) environment, the LoS probability can be quite poor, e.g., ranging from 44.9% in a mildly cluttered environment to only 0.8% in a heavily cluttered environment.
[0092] Thus, conventional positioning methods may struggle to locate a target UE in a heavily cluttered environment. Evaluations show that the 90%-tile positioning accuracy of conventional positioning methods is more than 15 meters in an InF-DH {60%, 6m, 2m} environment, due to the unavailability of sufficient LoS links. The low positioning accuracy motivates the application of AI/ML based positioning in such challenging deployment environments. In some examples, an AI/ML model can be trained to deliver 90%-tile positioning accuracy below 1 meter.
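For clarity, the "90%-tile" accuracy metric cited above is the horizontal positioning error that is not exceeded for 90% of the evaluated UE locations, as in this brief sketch (the error samples here are synthetic, for illustration only):

import numpy as np

errors_m = np.random.lognormal(mean=0.0, sigma=1.0, size=10_000)  # synthetic
accuracy_90 = np.percentile(errors_m, 90)   # 90%-tile horizontal error (m)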
[0093] There currently exist certain challenge(s). For an AI/ML model at the physical layer of a wireless communication system, currently there is no description of model inference context information. There is no method to ensure the consistency among the contexts of different life cycle stages of an AI/ML model, including training data collection associated with one or more AI/ML models, training of the one or more AI/ML models, and inference based on the one or more AI/ML models. There is a need for methods to improve or ensure the consistency among the context information related to the training data collection, the context information related to the model training, and the context information related to the model inference.
[0094] Certain aspects of the disclosure and their embodiments may provide solutions to these or other challenges. In some examples, to support an AI/ML model for a physical layer of a wireless network, three types of validity areas are defined for an AI/ML model: training
dataset validity area A_dataset, model training validity area A_training, and model inference validity area A_inference. These three areas are illustrated in Figures 6 and 7 below in more detail. Methods to define the validity areas are provided, including (a) via a list of TRPs and (b) via a list of downlink positioning reference signal (DL PRS) resources.
[0095] Similar to the validity areas, configurations for wireless signal transmission (e.g., configuration of DL PRS transmission, configuration of uplink sounding reference signal (UL SRS) transmission) need to be consistent between, for example, the training data collection stage and the model inference stage. At the receiver end (e.g., a UE, or a network node such as a gNB), the receiver needs to be aware of the requirements of the AI/ML model with regard to the measurements of the wireless signal, when such measurements are used as model input during the model inference stage. All of these aspects provide context information for model inference. Before model inference can start, context information is exchanged, and system parameters are configured according to the desired context if needed. This improves or ensures that the trained AI/ML model operates (e.g., makes inferences) within the context it was trained for.
[0096] Methods and embodiments are provided in this disclosure for improving or enhancing consistency of context information related to one or more of: training data collection associated with one or more artificial intelligence/machine learning (AI/ML) models, training of the one or more AI/ML models, and inference based on the one or more AI/ML models. In some embodiments, for an AI/ML model at a physical layer of a wireless communication system, description of training data collection context, model training context, and model inference context is provided. Methods are provided to improve or ensure the consistency among the context where the training data was collected, the context where the model is trained, and the context where model inference is performed.
[0097] Furthermore, in this disclosure, methods are provided to coordinate between the UE and network nodes, as well as among network nodes. The coordination is to exchange information that affects model inference performance, and to provide the desired context information to support model inference. The exchanged information includes, for example: deployment environment identifiers (e.g., model inference validity area) of the deployment, one or more configurations of wireless signal transmission (e.g., configuration of downlink positioning reference signal (DL PRS) transmission, configuration of uplink sounding reference signal (UL SRS) transmission, etc.), one or more configurations for receiving wireless signals and generating the desired measurement reports, the positioning capabilities of the UE, and/or the positioning capabilities of a network node (e.g., NG-RAN or gNB). The positioning
capabilities of the UE and/or the network node are related to transmission/reception of the wireless signals to support positioning.
[0098] The AI/ML model can be activated for model inference if the model inference context requirement is satisfied. Otherwise, the AI/ML model may not be activated. If the model cannot be activated or needs to be deactivated, one of the following actions can be taken. These actions may be performed at one or more of a UE, a network node, an LMF, and/or any other entities associated with the network. In one such action, parameters for transmission and/or reception can be updated or reconfigured, so that the model inference context requirement associated with the AI/ML model can be satisfied. After that, the model can be activated. In another such action, the AI/ML model may be declared, annotated, or flagged as inappropriate for the deployment scenario or deployment environment. The model is not activated, or is deactivated if it is active. Optionally, a new AI/ML model can be provided for the deployment scenario or environment, where the model inference context of the new model is consistent (e.g., compatible) with the deployment scenario or environment. The new model can be obtained by model delivery/transfer/downloading, model switching, model updating, model fine-tuning, and/or model re-training.
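A minimal sketch of this activation logic is given below; the method and attribute names (e.g., context_requirement_satisfied, flag_inappropriate) are hypothetical and merely stand in for the signaling and procedures described in this disclosure.

def handle_model_activation(model, deployment_ctx, reconfigure,
                            obtain_new_model):
    # Activate only if the model inference context requirement is satisfied.
    if model.context_requirement_satisfied(deployment_ctx):
        model.activate()
        return model
    # Action 1: update/reconfigure transmission and/or reception parameters
    # so that the model inference context requirement can be satisfied.
    new_ctx = reconfigure(model.required_context, deployment_ctx)
    if model.context_requirement_satisfied(new_ctx):
        model.activate()
        return model
    # Action 2: flag the model as inappropriate for this deployment and keep
    # it inactive (or deactivate it if it is currently active).
    model.deactivate()
    model.flag_inappropriate(deployment_ctx)
    # Optionally obtain a new model whose inference context is consistent
    # with the deployment (via delivery/transfer/downloading, switching,
    # updating, fine-tuning, or re-training).
    return obtain_new_model(deployment_ctx)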
[0099] Certain embodiments described in this disclosure may provide one or more of the following technical advantage(s). The disclosed methods describe context information that can noticeably affect model inference performance. Signaling to coordinate between the UE and network nodes, as well as among network nodes, is described. The coordination improves the consistency among the context information for training the model and for making model inference, and/or ensures that the trained model is activated within the context it was trained for. The teachings of certain embodiments may improve the AI/ML model prediction accuracy, the properness of using the AI/ML model, data transmission/evaluation accuracy, and data rate; and may reduce latency and power consumption.
[0100] Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art.
[0101] In some embodiments, the non-limiting terms UE or a wireless device are used interchangeably. The UE herein can be any type of wireless device capable of communicating with a network node or another UE over radio signals. The UE may also be a radio communication device, a target device, a device to device (D2D) UE, a machine type UE or a UE capable of machine to machine communication (M2M), a low-cost and/or low-complexity UE, a sensor equipped with UE, a tablet, mobile terminals, a smart phone, a laptop embedded
equipment (LEE), a laptop mounted equipment (LME), USB dongles, Customer Premises Equipment (CPE), an Internet of Things (IoT) device, or a Narrowband IoT (NB-IoT) device, etc.
[0102] In some embodiments, the generic term “anchor node” is used, which refers to a node used as a reference point for determining the location of a target UE. In general, the anchor nodes for positioning can be a variety of nodes in the wireless network. For positioning using the radio link between the target UE and a radio network node, the anchor node can be any kind of radio network node, which may comprise any of a base station, radio base station, base transceiver station, Node B, evolved Node B (eNB), Next-Generation Node B (gNodeB or gNB), NG-RAN node, Transmission Point (TP), Transmission-Reception Point (TRP), Multi-cell/multicast Coordination Entity (MCE), relay node, access point (AP), Antenna Reference Point (ARP), radio access point, Remote Radio Unit (RRU), or Remote Radio Head (RRH). For positioning using sidelink between two UEs, the anchor node is a UE or a wireless device.
[0103] For simplicity, the methods are described using the radio links between a UE and an anchor node. TRP is used as a representative example of the anchor node, where the TRP is connected to a network node like a gNB. A TRP refers to a specific location with antennas that can both transmit and receive signals. A TRP may act as a single point within a cell tower where data is sent and received from UEs. Multiple TRPs within a network can be used to improve coverage, reliability, and capacity by allowing coordinated transmission and reception from different locations. It is understood by those skilled in the art that the same methodology can be easily applied to many other wireless communication scenarios, e.g., sidelink-based positioning.
[0104] In this disclosure, methods and systems are provided to support AI/ML models to improve or ensure consistent context among training data collection, model training, and model inferences. The methods and systems are described using the positioning use case as a representative example. It is understood that the same methodology applies to other AI/ML models for other physical layer functionalities in the wireless communication system.
[0105] As described above, various types of context information are associated with an AI/ML model, including context information for training data collection, model training, and model inference. In general, the context information describes the conditions (e.g., environment, configuration, parameters, etc.) under which a model can perform inference. For an AI/ML model at a physical layer of a wireless communication system, the context information includes, for instance, deployment environment identifiers (e.g., model inference validity area) of the AI/ML model deployment, one or more configurations of wireless signal transmission
(e.g., configuration of DL PRS transmission, configuration of UL SRS transmission, etc.), and/or one or more configurations for receiving wireless signals and generating the desired measurement reports.
[0106] The context information related to the model deployment environment is described next. Figure 6 is an illustration of the training dataset validity area A_dataset, the model training validity area A_training, and the model inference validity area A_inference, and the TRPs included in these validity areas. Regarding deployment environment identifiers as the context information, three types of validity areas can be defined for an AI/ML model.
[0107] The first type of validity area is the training dataset validity area 602, denoted by A_dataset. The training dataset validity area 602 represents the area (e.g., geographical area) in which the training dataset is collected. A_dataset is thus also referred to as the training dataset collection validity area, and is from the perspective of the training data collection entity. The training data collection entity may be any entity, such as a UE, a network node, a core network, a training vendor, etc. As illustrated in Figure 6, for example, the training data for a particular AI/ML model may include measurements associated with 18 TRPs (e.g., TRP {0, 1, ..., 17}). Hence, the training dataset validity area A_dataset = TRP {0, 1, ..., 17}, which may cover the entire factory floor as shown in Figure 6.
[0108] The second type of validity area is the model training validity area 604, denoted by A_training. This model training validity area 604 represents the coverage area that the model is trained for. The model training validity area A_training is from the perspective of the model training entity. The model training entity may be any entity, such as a UE, a network node, a core network, a training vendor, etc. The model training entity may or may not be the same as the training data collection entity. Thus, A_training can be the same as A_dataset, or a subset of A_dataset. This depends on whether the data samples used in model training cover the same area as A_dataset, or whether partial training data is extracted to build a new dataset for training, covering a sub-area A_training. For example, in Figure 6, the part of the training data with measurements associated with the 12 TRPs on the left-hand side is extracted and used in model training. Hence A_training = TRP {0, 1, ..., 11}, which is a sub-area of A_dataset.
[0109] The third type of validity area is the model inference validity area 606, denoted by A_inference. This model inference validity area 606 represents the coverage area that the trained model is used to perform inference for. The model inference validity area A_inference is from the perspective of the model inference entity, which can be a UE, a network node, an LMF, etc. In some examples, the model inference entity is the same as the model training entity if the entity
performs online training, and different from the model training entity if the entity performs offline training. Thus, A_inference can be the same as A_training, or a subset of A_training. For example, in Figure 6, the model inference entity may use measurements associated with the 9 TRPs on the left-hand side for inference. That is, A_inference = TRP {0, 1, ..., 8}, which is a sub-area of A_training. In this example, the model inference entity may assign null values to the model input corresponding to TRP {9, 10, 11}, since their associated measurement data are not provided for model inference.
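The nesting of the three validity areas in Figure 6, and the null-value handling for TRPs absent at inference, can be sketched as follows; encoding the areas as sets of TRP IDs reflects one of the two representations described further below (a list of TRPs), and the measurement placeholder is an assumption for illustration.

def get_measurement(trp_id):
    return float(trp_id)        # placeholder for a real PRS measurement

A_dataset = set(range(18))      # TRP {0, ..., 17}: training data collection
A_training = set(range(12))     # TRP {0, ..., 11}: model training
A_inference = set(range(9))     # TRP {0, ..., 8}:  model inference

assert A_inference <= A_training <= A_dataset    # subset relations hold

# The model input is dimensioned for A_training; TRPs {9, 10, 11} receive
# null values because their measurements are not provided for inference.
measurements = {trp: get_measurement(trp) for trp in A_inference}
model_input = [measurements.get(trp) for trp in sorted(A_training)]  # None = null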
[0110] The example illustrated in Figure 6 is only used to illustrate the concept, and for simplicity, measurements of all TRPs in the validity area are assumed as the model input for training data collection, model training, and model inference. Other variations are possible without departing from the spirit of the methodology. For example, in one variation, only a subset of the TRPs in the geographic area is included in the training dataset validity area 602, and/or the model training validity area 604, and/or the model inference validity area 606 (e.g., some TRPs in the geographic area may not be included). This may be done to reduce reference signal transmission, measurement burden, etc.
[0111] Figure 7 is an illustration of the training dataset validity area A_dataset, the model training validity area A_training, and the model inference validity area A_inference, with some TRPs excluded from some of the validity areas. For instance, in Figure 7, an example is shown where all TRPs {0, ..., 17} are included in the training dataset validity area 702, but TRP {1, 3, 5, 7} are excluded from the list of TRPs associated with the model training validity area 704 and the model inference validity area 706. In this embodiment, the same geographic area as in Figure 6 is still covered by the dataset collection related to the AI/ML model, while four fewer TRPs are involved in model training and model inference. For Figure 7, it is also possible that TRP {1, 3, 5, 7} are excluded from the list of TRPs associated with the model inference validity area 706, but not excluded from the training dataset validity area 702 or the model training validity area 704. In this embodiment, the same geographic area is still covered by the trained AI/ML model with the same set of TRPs as in Figure 6, while four fewer TRPs are involved in model inference.
[0112] With reference still to Figures 6 and 7, in one embodiment, the context information representing the validity area is defined as a list of TRPs, where the TRPs transmit DL PRS and cover the surrounding area to support positioning. In one example, the TRP identifiers (IDs) are explicitly defined, e.g., numbered from 0 to 255. In another example, the TRPs are identified via a combination of other identifiers and system parameter values. For example, the TRP is identified via a combination of: dl-PRS-ID, nr-PhysCellID (optional), nr-CellGlobalID (optional), nr-ARFCN (optional). An example of signaling a TRP ID in this manner is shown below in the TRP location information element.
TRP-LocationInfoElement-r16 ::= SEQUENCE {
    dl-PRS-ID-r16             INTEGER (0..255),
    nr-PhysCellID-r16         NR-PhysCellID-r16       OPTIONAL,   -- Need ON
    nr-CellGlobalID-r16       NCGI-r15                OPTIONAL,   -- Need ON
    nr-ARFCN-r16              ARFCN-ValueNR-r15       OPTIONAL,   -- Need ON
    associated-DL-PRS-ID-r16  INTEGER (0..255)        OPTIONAL,   -- Need OP
    trp-Location-r16          RelativeLocation-r16    OPTIONAL    -- Need OP
}
[0113] In another embodiment, the context information representing the validity area is defined as a list of DL PRS resources. In one example, each PRS resource is identified by a combination of: dl-PRS-ID, DL-PRS Resource Set ID, and a DL-PRS Resource ID.
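For illustration purposes, the two representations of a validity area described in paragraphs [0112] and [0113] can be sketched as follows. This Python fragment is a hypothetical, non-normative sketch; the field names mirror the identifiers in the text, while the data structures themselves are illustrative.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class TrpId:
        # A TRP identified via a combination of identifiers ([0112])
        dl_prs_id: int                           # INTEGER (0..255)
        nr_phys_cell_id: Optional[int] = None    # optional
        nr_cell_global_id: Optional[str] = None  # optional
        nr_arfcn: Optional[int] = None           # optional

    @dataclass(frozen=True)
    class PrsResourceId:
        # A DL PRS resource identified by the triple in [0113]
        dl_prs_id: int
        resource_set_id: int
        resource_id: int

    # A validity area expressed as a list of TRPs, or as a list of DL PRS resources
    area_by_trp = [TrpId(dl_prs_id=i) for i in range(12)]
    area_by_resource = [PrsResourceId(dl_prs_id=0, resource_set_id=0, resource_id=r)
                        for r in range(4)]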
[0114] In one example, all three validity areas are the same, e.g., Adataset = Atraining = Ainference. Using the entire floor area in Figure 6 or Figure 7 as an example, one example may use Adataset = Atraining = Ainference as the entire factory floor covered by all 18 TRPs.
[0115] In other examples, Adataset, Atraining, and Ainference may be different, as illustrated by the example in Figure 6. The selections or determinations of the different validity areas Adataset, Atraining, and Ainference can be made by the training data collection entity, the model training entity, and the model inference entity, respectively.
[0116] In some examples, from a given training dataset validity area Adataset, one or more model training validity areas Atraining may be defined, each for training a different model. When two or more model training validity areas Atraining are defined, they may or may not overlap. In some examples, from a given model training validity area Atraining, one or more model inference validity areas Ainference may be defined. When two or more model inference validity areas Ainference are defined, they may or may not overlap. Both of the above-described examples are possible, and similar principles described herein apply. In the description below, the latter example, which is for the stage of model inference, is used as illustration. It is understood by those skilled in the art that the same principle applies to other variations.
[0117] As described above, the context information related to a downlink reference signal and its measurements is another type of context information. Such context information is described next in detail. Continuing with the positioning of a target UE as the example use case, the downlink reference signal in such a use case is typically DL PRS. While other DL signals (e.g., a system information block (SIB), CSI-RS, a tracking reference signal) can be leveraged as well, without loss of generality, the description below focuses on DL PRS to illustrate the methodology.
[0118] Some configurations and identifiers of PRS may significantly affect the measurements obtained from PRS. When such measurements are used as the AI/ML model input, they should be consistent between model training and model inference, and consistent among the various network-side nodes and UE-side nodes (e.g., positioning reference units (PRUs) and UEs) involved. Consequently, at least some configurations and identifiers of PRS need to stay constant between model training and model inference, and stay constant regardless of the various PRUs and UEs involved.
[0119] For DL PRS, such configurations and identifiers include one or more of the following examples. For instance, identifiers and information that can act as identifiers include dl-PRS-ID, nr-DL-PRS-ResourceSetID, NR-DL-PRS-ResourceID, nr-PhysCellID, nr-CellGlobalID, nr-ARFCN, and NR-TRP-LocationInfo. As examples, such configurations include nr-SSB-Config, nr-DL-PRS-Info, NR-DL-PRS-BeamInfo, NR-DL-PRS-AssistanceDataPerTRP (e.g., including NR-DL-PRS-SFN0-Offset, NR-DL-PRS-Info), and NR-DL-PRS-PositioningFrequencyLayer (including dl-PRS-SubcarrierSpacing, dl-PRS-ResourceBandwidth, dl-PRS-StartPRB, dl-PRS-PointA, dl-PRS-CombSizeN, dl-PRS-CyclicPrefix).
[0120] At the receiver end (e.g., at the UE side, either a PRU or a normal UE), at least some configurations on PRS measurements need to stay consistent when such measurements are used as model input. Such configurations related to PRS measurements include one or more of the following examples. One example of such configurations relates to the type of measurements, e.g., channel impulse response (CIR), power delay profile (PDP), or delay profile (DP). Alternatively, such a configuration relates to whether timing information, power information, and/or phase information is reported. Another example of such configurations relates to the format of the measurement report. For instance, the reported measurement is an absolute value, a value relative to a pre-defined reference point (e.g., reference time, reference power, reference phase), or a difference value between two measurements (e.g., timing difference, power difference, phase difference). Another example of such configurations relates to the size of measurements for one PRS resource, for example, the number of samples to be measured and reported, or the number of paths to be measured and reported. Another example of such configurations relates to the quantization resolution or granularity used in measurement reporting. Another example of such configurations relates to the timeliness of the measurement, for example, whether the measurement is performed within the last 5 ms. This is to ensure that the measurement is not obsolete, or to reduce the likelihood that the measurement is obsolete. This information can be conveyed by a time stamp of the measurement. Another example of such configurations relates to the quality, reliability, certainty/uncertainty, and trustworthiness of the measurement.
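The consistency requirement of paragraph [0118], applied to the measurement-configuration fields just listed, can be sketched as a simple equality check between the training-time and inference-time configurations. The following Python fragment is a non-normative sketch; the field names and example values are hypothetical.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PrsMeasConfig:
        meas_type: str          # e.g., "CIR", "PDP", or "DP"
        report_format: str      # e.g., "absolute", "relative", or "difference"
        num_samples: int        # samples reported per PRS resource
        quantization_bits: int  # quantization resolution of the report
        max_age_ms: int         # timeliness bound, e.g., measured within the last 5 ms

    training_cfg = PrsMeasConfig("PDP", "absolute", 16, 8, 5)
    inference_cfg = PrsMeasConfig("PDP", "absolute", 16, 8, 5)

    # Consistency between the model training and model inference contexts
    if training_cfg == inference_cfg:
        print("PRS measurement context consistent: inference may proceed")
    else:
        print("context mismatch: update configuration or report an error")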
[0121] As described above, the context information related to an uplink reference signal and its measurements is another type of context information. Such context information is described below, again using the positioning of a target UE as illustration. For the positioning of a target UE use case, the uplink reference signal is typically a positioning sounding reference signal (SRS).
[0122] Some configurations and identifiers of SRS may significantly affect the measurements obtained from SRS. When such measurements are used as model input, they should be consistent between model training and model inference, and consistent among the various network-side nodes and UE-side nodes (e.g., PRUs and UEs) involved. The network-side nodes include, for instance, NG-RAN nodes (e.g., gNBs), TRPs, the LMF, etc. Consequently, at least some configurations and identifiers of SRS need to stay constant between model training and model inference, and stay constant regardless of the various network-side nodes and UE-side nodes involved.
[0123] For UL SRS, such configurations and identifiers include one or more of the following examples: NR PCI (e.g., the physical cell ID of the cell that contains the SRS carrier); SRS carrier (including Point A, Offset To Carrier, Carrier, Subcarrier Spacing, Bandwidth); SRS frequency; SRS resource set information; positioning SRS resource set information; SSB (synchronization signal block) information; spatial relation information; and pathloss reference information.
[0124] At the receiver end (e.g., at an NG-RAN node, gNB, gNB distributed unit (gNB-DU), or TRP), at least some configurations related to SRS measurements need to stay consistent when such measurements are used as model input. Such configurations on SRS measurements include one or more of the following examples. One example of such configurations relates to the type of measurements, e.g., channel impulse response (CIR), power delay profile (PDP), or delay profile (DP). Alternatively, such configurations include whether timing information, power information, and/or phase information is reported. Another example of such configurations relates to the format of the measurement report. For instance, the reported measurement is an absolute value, a value relative to a pre-defined reference point (e.g., reference time, reference power, reference phase), or a difference value between two measurements (e.g., timing difference, power difference, phase difference). Another example of such configurations relates to the number of samples to be measured and reported, or alternatively, the number of paths to be measured and reported. Another example of such configurations relates to the quantization resolution used in measurement reporting. Another example of such configurations relates to the timeliness of the measurement, for example, whether the measurement is performed within the last 5 ms. This is to ensure that the measurement is not obsolete, or to reduce the likelihood that the measurement is obsolete. This information can be conveyed by a time stamp of the measurement. Another example of such configurations relates to the quality, reliability, certainty/uncertainty, and trustworthiness of the measurement.
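The timeliness requirement mentioned for both PRS and SRS measurements can be checked against the time stamp that accompanies each measurement. A minimal, non-normative Python sketch follows; the 5 ms bound is taken from the example above, and the function name is hypothetical.

    def is_fresh(meas_timestamp_ms, now_ms, max_age_ms=5.0):
        # True if the measurement was taken within the last max_age_ms
        return (now_ms - meas_timestamp_ms) <= max_age_ms

    print(is_fresh(meas_timestamp_ms=1000.0, now_ms=1004.0))  # True: 4 ms old
    print(is_fresh(meas_timestamp_ms=1000.0, now_ms=1010.0))  # False: 10 ms old, obsolete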
[0125] Described next are signaling aspects to improve consistency between the model training context and the model inference context, using the use case of positioning a target UE as illustration. As described above, there may be several scenarios where the one or more AI/ML models are located at different entities (e.g., at the UE, at the network node, or at the LMF), where one or more entities initiate and/or assist with the positioning of the target UE, and where the positioning actually occurs at different entities. Different entities (e.g., a UE or a network node) can perform certain methods to enhance the context information consistency. The description below provides details of several example scenarios.
[0126] Figure 8 is a flowchart illustrating an example method 800 performed by a UE for enhancing the context information consistency according to some embodiments. Method 800 can be performed by a user equipment (e.g., a UE shown in Figures 2 and 3). With reference to Figure 8, the method 800 includes a block 802, in which the UE receives, or sends, context information related to one or more of: training data collection associated with one or more artificial intelligence/machine learning (AI/ML) models, training of the one or more AI/ML models, and inference based on the one or more AI/ML models. In one example, the one or more AI/ML models are configured to provide measurements associated with positioning the UE. As described above, the type of context information that the UE may receive or send includes the deployment environment identifiers, configurations of wireless signal transmissions, configurations of receiving wireless signals and generating the desired measurement reports, the positioning capabilities of the UE, and the positioning capabilities of a network node. The UE may send the context information to, or receive the context information from, a network node (e.g., a gNB) and/or an LMF in the core network.
[0127] With the context information, an entity (e.g., the UE, the network node, or the LMF) may determine whether a model inference context requirement is satisfied based on consistency between the context information related to the training of at least one of the one or more AI/ML models and the context information related to the inference based on the at least one of the one or more AI/ML models. At block 804, in accordance with a determination that a model inference context requirement is satisfied based on consistency between the context information related to the training of at least one of the one or more AI/ML models and the context information related to the inference based on the at least one of the one or more AI/ML models, the UE may select (block 804a) the at least one of the one or more AI/ML models for performing model inference, or the UE may send (block 804b) a report including at least one of configurations associated with the positioning of the UE or the UE's capabilities in relation to performing inference based on the at least one of the one or more AI/ML models. For instance, if the AI/ML models are deployed at the UE, the UE may select the models for performing model inference because it is determined that the model training and model inference context information are consistent. If the AI/ML models are deployed at another entity, such as the network node or the LMF, the UE may send a report to the entity where the models are deployed.
[0128] At block 806, in accordance with a determination that the model inference context requirement is not satisfied, the configurations associated with the positioning of the UE are updated or a notification message is provided. Similarly, different entities may perform the updating of the configurations or the providing of the notification. Such an entity may be the UE, the network node, or the LMF. The configurations can be updated to satisfy the model inference context requirement for the one or more candidate AI/ML models. After the updating, the model inference can be performed. In another example, if the configurations cannot be updated to satisfy the model inference context requirement (e.g., if the configurations related to the inference context cannot be updated to within a consistency threshold of the training context), one of the entities may send a notification message (e.g., an error message) on the failure of the model inference due to mismatched context between the model training and the model inference. Some examples of detailed signaling among the various entities for enhancing context information consistency are described below in connection with Figure 9.
[0129] As described above, in general, for an AI/ML model to work properly, the model inference context needs to be consistent with the model training context. The model training context describes at least some of the settings that noticeably affect the model performance and cannot be ignored. For example, in Figure 6, when the model inference validity area 606 corresponds to TRP {0, 1, ..., 8}, the model cannot function properly (e.g., make accurate inferences) for a completely different area, e.g., the area corresponding to TRP {15, 16, 17}. Similarly, if the model is trained with PRS configured with one beam pattern, but the PRS beam pattern is different during model inference, then the model is not expected to work properly.
[0130] When the AI/ML model takes DL PRS measurements as input, for the configuration of PRS transmission and measurement, the same configuration should be used in the training data collection stage and the model inference stage. Also, the same configuration should be used for all PRUs and UEs. For example, the IDs (e.g., TRP ID, dl-PRS-ID, DL PRS resource set ID, DL PRS resource ID) are assigned identically to all PRUs and UEs, and do not vary in a UE-specific manner.
[0131] Similarly, when the AI/ML model takes UL SRS measurement as input, for configuration of SRS transmission and measurement, the same configuration should be used in the training data collection stage and the model inference stage.
[0132] For the model inference stage, there is a need to exchange information between UE-side nodes and network-side nodes (when needed), and among network-side nodes (when needed), so that all nodes involved in the UE location determination are aware of the model inference context, including the validity area, the configuration of reference signal transmission and measurement, and other configuration information about the model. If the model inference context requirement is satisfied, then the model inference can proceed. If the model inference context requirement is not satisfied, then the configuration can be updated to satisfy the model inference context requirement; after that, the model inference can proceed. If the model inference context requirement is not satisfied, and the configuration cannot be updated to satisfy the model inference context requirement for some reason, then the model inference cannot proceed, and a notification message (e.g., an error message) can be sent on the failure of the model inference due to mismatched context between model training and model inference.
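The three possible outcomes described in paragraph [0132] (proceed, update then proceed, or fail with a notification) can be summarized in the following non-normative Python sketch. The helper callables stand in for whatever entity-specific checks and signaling apply; all names are hypothetical.

    def coordinate_inference(context_ok, try_update_config, notify_failure, run_inference):
        if context_ok():
            run_inference()                        # requirement satisfied: proceed
        elif try_update_config() and context_ok():
            run_inference()                        # configuration updated: proceed
        else:                                      # cannot be updated: notify failure
            notify_failure("model inference failed: training/inference context mismatch")

    # Example: the requirement is not satisfied and cannot be updated
    coordinate_inference(
        context_ok=lambda: False,
        try_update_config=lambda: False,
        notify_failure=print,
        run_inference=lambda: print("running inference"),
    )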
[0133] Here the model inference context requirement includes, for example, one or more validity areas, one or more configurations of reference signal transmission and measurement, and one or more related configurations required to support the model inference.
[0134] To support the exchange of information, coordination can be performed between UEs and network nodes, and among network nodes. Such coordination may include providing information on the model inference validity area (e.g., via assistance data). Such coordination may include providing configuration information of reference signals and measurements. The configuration information can be provided via assistance data. For the positioning use case, the reference signal is typically PRS for the downlink and SRS for the uplink. Such coordination may also include adjusting the configuration of reference signals and measurements, and/or starting and/or stopping transmission of one or more reference signal resources.
[0135] In the following, signaling details are provided to support the coordination between UE-side nodes and network-side nodes (when needed), and among network-side nodes (when needed). Coordination on the model inference context requirement for a UE-side model is described first. Figure 9 is a flowchart illustrating example method 900 for enhancing the context information consistency using one or more UE-side models with UE-based positioning or UE-assisted/LMF-based positioning, according to some embodiments. Method 900 is performed by various entities including the UE, the network node, and the LMF. Method 900 illustrates the signaling aspects of enhancing the consistency of the context information for model training and model inference. Method 900 illustrated in Figure 9 relates to the scenarios where the AI/ML models are deployed at the UE, corresponding to case 1 and case 2a. As described above, case 1 relates to UE-based positioning with one or more UE-side models; the positioning can be direct AI/ML positioning or AI/ML-assisted positioning. Case 2a relates to UE-assisted/LMF-based positioning with one or more UE-side models.
[0136] For the case 1 and case 2a scenarios (both having AI/ML models deployed at the UE), the UE signals to the LMF, via the LTE positioning protocol (LPP) interface, the UE's model inference validity area Ainference, which is, for instance, a list of TRP identifiers. In response, the LMF configures PRS transmission to cover at least Ainference. Additionally, the validity area of the PRS consistent data (e.g., the area where the PRS system data is valid) is provided as context information to cover at least the UE's model inference validity area Ainference.
[0137] In some examples, the UE makes a request for the DL PRS that the UE-side model needs, so that the LMF can configure the PRS (positioning reference signal) accordingly. Here the PRS are those associated with the model inference validity area Ainference. In one example, the UE can use method 900 outlined below to request the set of PRS needed for model inference of the UE-side model.
[0138] With reference to method 900, at block 902, the LMF may configure the UE with pre-defined PRS configurations via an LPP Provide Assistance Data message or via posSI. Thus, the UE receives the pre-defined PRS configurations from the LMF. At block 904, if it is UE-initiated coordination on model inference context information, the UE sends an On-Demand PRS request to the LMF via an LPP Request Assistance Data message. The On-Demand PRS request can be a request for a pre-defined PRS configuration indicated with a pre-defined PRS configuration ID or explicit parameters for the PRS configuration, and may be a request for PRS transmission or a change to the PRS transmission characteristics for positioning measurements. At block 906, if it is LMF-initiated coordination on model inference context information, the LMF and the UE may exchange LPP messages, e.g., to obtain UE measurements or the DL-PRS positioning capabilities of the UE. Thus, the UE can receive from the LMF, or exchange with the LMF, one or more LTE positioning protocol (LPP) messages associated with UE measurements or the DL-PRS positioning capabilities of the UE.
[0139] At block 910, the LMF determines the need for PRS transmission or a change to the transmission characteristics of an ongoing PRS transmission. At block 912, the LMF requests the serving and non-serving network nodes (e.g., gNBs) and/or TRPs for new PRS transmission or PRS transmission with changes to the PRS configuration via an NRPPa PRS CONFIGURATION REQUEST message. At block 914, the network nodes (e.g., gNBs) and/or TRPs provide, and the LMF receives, the successfully configured or updated PRS transmission in the NRPPa PRS CONFIGURATION RESPONSE message accordingly. At block 916, the LMF may provide the PRS configuration used for PRS transmission, or an error cause, via an LPP Provide Assistance Data message to the UE. Thus, the UE may also receive, from the LMF, the PRS configurations used for PRS transmission or an error cause.
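The message ordering of method 900 can be summarized as a simple trace. The following Python fragment is for illustration only; the message names follow the text, while the data structure is merely a sketch of the ordering, not a protocol implementation.

    # Hypothetical trace of the on-demand PRS flow of method 900 (blocks 902-916)
    flow_900 = [
        ("LMF -> UE",  "LPP Provide Assistance Data (pre-defined PRS configurations)"),
        ("UE -> LMF",  "LPP Request Assistance Data (On-Demand PRS request)"),
        ("LMF -> gNB", "NRPPa PRS CONFIGURATION REQUEST"),
        ("gNB -> LMF", "NRPPa PRS CONFIGURATION RESPONSE"),
        ("LMF -> UE",  "LPP Provide Assistance Data (PRS configuration or error cause)"),
    ]
    for endpoints, message in flow_900:
        print(f"{endpoints:12s} {message}")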
[0140] Figure 10 is a flowchart illustrating example method 1000 for enhancing the context information consistency using one or more LMF-side models with UE-assisted/LMF-based positioning, according to some embodiments. Method 1000 corresponds to scenario case 2b. As described above, case 2b relates to UE-assisted/LMF-based positioning with one or more LMF-side models (e.g., models deployed at the LMF of a core network).
[0141] With reference to Figure 10, at block 1002, the UE sends information of the capabilities of the UE to the LMF for performing channel measurements associated with the one or more AI/ML models deployed at the LMF. Thus, the LMF obtains information on the UE's capability to perform the channel measurement desired by the LMF-side model(s). The channel measurement requirement may include the type of measurement (e.g., PDP, power delay profile), the format of the measurement report, the size of measurements for one PRS resource (e.g., at least 16 samples for one observation of the TRP-UE link), the granularity, the timeliness, and the quality of the measurement.
[0142] At block 1004, the LMF sends assistance data to the UE, where the assistance data is for supporting the LMF-side model(s) deployed at the LMF. Thus, the UE receives the assistance data for supporting the one or more AI/ML models deployed at the LMF. At block 1006, if the UE is capable of performing the desired measurement on DL PRS from at least NTRP,input,min TRPs, the LMF configures the UE to perform PRS measurement for the TRPs in the Ainference of the one or more LMF-side models. Accordingly, the UE performs the PRS measurements for performing model inference by the LMF. Otherwise, the LMF does not activate model inference of the one or more LMF-side models.
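The activation rule of block 1006 can be sketched as a simple count over the intersection of the UE's measurable TRPs and the model's Ainference. The Python fragment below is non-normative; NTRP,input,min = 8 and the TRP sets are hypothetical example values.

    N_TRP_INPUT_MIN = 8  # hypothetical minimum number of measurable TRPs

    def should_activate(ue_measurable_trps, a_inference):
        # Activate LMF-side model inference only if the UE can measure DL PRS
        # from at least N_TRP_INPUT_MIN TRPs within A_inference
        return len(ue_measurable_trps & a_inference) >= N_TRP_INPUT_MIN

    a_inf = set(range(9))                          # A_inference: TRP {0..8}
    print(should_activate(set(range(9)), a_inf))   # True: 9 >= 8, configure measurements
    print(should_activate(set(range(5)), a_inf))   # False: LMF does not activate inference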
[0143] Figure 11 is a flowchart illustrating example method 1100 performed by a network node for enhancing the context information consistency, according to some embodiments. Method 1100 can be performed by a network node (e.g., a network node shown in Figures 2 and 4). With reference to Figure 11, the method 1100 includes a block 1102, in which the network node receives, or sends, context information related to one or more of: training data collection associated with one or more artificial intelligence/machine learning (AI/ML) models, training of the one or more AI/ML models, and inference based on the one or more AI/ML models. In one example, the one or more AI/ML models are configured to provide measurements associated with positioning the UE. As described above, the type of context information that the network node may receive or send includes the deployment environment identifiers, configurations of wireless signal transmissions, configurations of receiving wireless signals and generating the desired measurement reports, the positioning capabilities of the UE, and the positioning capabilities of a network node. The network node may send the context information to, or receive the context information from, a UE and/or an LMF in the core network.
[0144] With the context information, an entity (e.g., the UE, the network node, or the LMF) may determine whether a model inference context requirement is satisfied based on consistency between the context information related to the training of at least one of the one or more AI/ML models and the context information related to the inference based on the at least one of the one or more AI/ML models. At block 1104, in accordance with a determination that a model inference context requirement is satisfied based on consistency between the context information related to the training of at least one of the one or more AI/ML models and the context information related to the inference based on the at least one of the one or more AI/ML models, the network node may select (block 1104a) the at least one of the one or more AI/ML models for performing model inference, or the network node may send (block 1104b) a report including at least one of configurations associated with the positioning of the UE or the UE's capabilities in relation to performing inference based on the at least one of the one or more AI/ML models. For instance, if the AI/ML models are deployed at the network node, the network node may select the models for performing model inference because it is determined that the model training and model inference context information are consistent. If the AI/ML models are deployed at another entity, such as the UE or the LMF, the network node may send a report to the entity where the models are deployed.
[0145] At block 1106, in accordance with a determination that the model inference context requirement is not satisfied, the configurations associated with the positioning of the UE are updated or a notification message is provided. Similarly, different entities may perform the updating of the configurations or the providing of the notification. Such an entity may be the UE, the network node, or the LMF. The configurations can be updated to satisfy the model inference context requirement for the one or more candidate AI/ML models. After the updating, the model inference can be performed. In another example, if the configurations cannot be updated to satisfy the model inference context requirement (e.g., if the configurations related to the inference context cannot be updated to within a consistency threshold of the training context), one of the entities may send a notification message (e.g., an error message) on the failure of the model inference due to mismatched context between the model training and the model inference. Some examples of signaling for network-assisted or network-based positioning are described next.
[0146] Figure 12 is a flowchart illustrating example method 1200 for enhancing the context information consistency using one or more LMF-side models with network-assisted positioning, according to some embodiments. Method 1200 corresponds to scenario case 3b. As described above, case 3b relates to network node assisted positioning with one or more LMF-side models. Method 1200 illustrates the coordination between the LMF and the UE.
[0147] With reference to Figure 12, at block 1202, the UE sends information of the capability of the UE to transmit positioning SRS associated with an LMF-side model. Thus, the LMF obtains information on the capability of the UE to transmit positioning SRS as desired by the one or more LMF-side models. For example, the UE needs to be able to transmit periodic positioning SRS towards at least 8 TRPs. At block 1204, the LMF sends assistance data to the UE, where the assistance data is for supporting the one or more LMF-side models deployed at the LMF. Thus, the UE receives the assistance data for supporting the LMF-side model. At block 1206, the UE sends an SRS measurement report for performing model inference by the LMF. If the UE is capable of transmitting the desired positioning SRS towards at least NTRP,input,min TRPs, the LMF may proceed with the model inference using the AI/ML models deployed at the LMF, with the SRS measurement report received from the UE or a network node (e.g., an NG-RAN node). Otherwise, the LMF does not activate model inference.
[0148] Figure 13 is a flowchart illustrating example method 1300 for enhancing the context information consistency using one or more LMF-side models with network-assisted positioning, according to some embodiments. Figure 13 also corresponds to scenario case 3b. As described above, case 3b relates to network node assisted positioning with one or more
LMF-side models. Method 1300 illustrates the coordination between the LMF and the network node.
[0149] With reference to Figure 13, for case 3b, the signaling aspects of the LMF coordination with the network node (e.g., gNB) may include the following steps. At block 1302, the LMF sends a POSITIONING INFORMATION REQUEST message to the network node (e.g., NG-RAN node), which contains the requested SRS transmission according to the needs of the one or more LMF-side models. Correspondingly, the network node receives, from the LMF, a POSITIONING INFORMATION REQUEST message including the requested SRS transmission according to the requirements of the one or more AI/ML models deployed at the LMF.
[0150] If the network node (e.g., NG-RAN node) takes this information into account when configuring SRS transmissions for the UE, it shall include the SRS Configuration IE and the SFN Initialisation Time IE in the POSITIONING INFORMATION RESPONSE message. Otherwise (e.g., the NG-RAN node is unable to configure any SRS transmissions for the UE), the network node (e.g., the NG-RAN node) shall respond (block 1304) with a POSITIONING INFORMATION FAILURE message to the LMF.
[0151] After receiving the responses from the relevant network nodes (e.g., NG-RAN nodes), the LMF determines whether there are enough SRS transmissions receivable at the network node (e.g., gNB) to satisfy the requirements of the one or more LMF-side models. In some examples, the LMF also determines whether the SRS measurement report (e.g., type, format, granularity, timeliness, quality, etc.) supported by the network node (e.g., gNB) satisfies the context requirement of the one or more LMF-side models. If there are sufficient SRS transmissions configured, and the SRS measurement report requirements are satisfied, the LMF may proceed with performing model inference (block 1306) using SRS measurements from TRPs covering the Ainference of the one or more LMF-side models. Otherwise, the LMF does not activate (block 1306) model inference based on the one or more LMF-side models.
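The two checks of paragraph [0151] (sufficient receivable SRS transmissions, and an SRS measurement report matching the model's context requirement) can be combined into one decision, sketched below in non-normative Python. The dictionary keys and values are hypothetical placeholders for the report attributes named in the text.

    def lmf_decision(num_srs_trps, min_trps, report_caps, report_req):
        # Enough SRS transmissions receivable, and the supported report
        # matches every attribute of the model's context requirement
        enough_srs = num_srs_trps >= min_trps
        report_ok = all(report_caps.get(k) == v for k, v in report_req.items())
        return ("proceed with model inference" if (enough_srs and report_ok)
                else "do not activate model inference")

    requirement = {"type": "PDP", "granularity": "high", "max_age_ms": 5}
    capability = {"type": "PDP", "granularity": "high", "max_age_ms": 5}
    print(lmf_decision(10, 8, capability, requirement))  # proceed with model inference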
[0152] Figure 14 is a flowchart illustrating example method 1400 for enhancing the context information consistency using one or more LMF-side models with UE-assisted/LMF-based positioning, according to some embodiments. Figure 14 corresponds to scenario case 2b. As described above, case 2b relates to UE-assisted/LMF-based positioning with one or more LMF-side models. Method 1400 illustrates the coordination between the LMF and the network node.
[0153] With reference to Figure 14, signaling aspects for coordination on the model inference context requirement for an LMF-side model are shown. In this case, the LMF needs to coordinate with both the network node (e.g., gNB) and the UE to support model inference. The coordination between the LMF and the UE for case 2b is illustrated in connection with Figure 10, as described above.
[0154] At block 1402, the LMF sends a PRS CONFIGURATION REQUEST message to the network node (e.g., NG-RAN node), e.g., to request the NG-RAN node to configure the DL PRS transmission by the indicated TRP(s). The LMF request is to obtain the desired PRS transmission that the one or more LMF-side models require. If DL-PRS transmission is successfully configured or updated for at least one of the TRPs, the network node (e.g., NG-RAN node) responds (block 1404) to the LMF with a PRS CONFIGURATION RESPONSE message. Otherwise (e.g., the NG-RAN node cannot configure or update DL-PRS transmission for any of the TRPs as requested by the LMF), the network node (e.g., NG-RAN node) responds (block 1404) to the LMF with a PRS CONFIGURATION FAILURE message.
[0155] After receiving the responses from the relevant NG-RAN nodes, the LMF determines whether there are enough DL PRS transmissions to satisfy the requirements of the LMF-side model. For example, the model is trained to take measurements from NTRP,input TRPs as model input, but the model can still function satisfactorily if at least NTRP,input,min = 8 TRPs can transmit PRS to support the model. In this case, the LMF can determine whether to proceed (block 1406) with model inference depending on whether the network node(s) (e.g., NG-RAN node(s)) will transmit DL PRS from at least NTRP,input,min TRPs. Note that if a smaller number of TRPs (i.e., at or close to NTRP,input,min TRPs) is available, the distribution of the TRPs needs to be considered as well. In some examples, it is desirable to have approximately evenly distributed TRPs over the targeted area. If the LMF determines that there are not enough DL PRS transmissions to satisfy the needs of the LMF-side model, the LMF does not activate (block 1406) the model inference based on the one or more AI/ML models deployed at the LMF.
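The fallback rule of paragraph [0155] (prefer the full NTRP,input TRPs but accept at least NTRP,input,min = 8, provided the available TRPs are reasonably spread over the area) can be sketched as follows. The Python fragment is non-normative; in particular, the mean nearest-neighbour-distance heuristic for evenness is an assumption made only for illustration.

    import math

    def can_run(trp_xy, min_trps=8, min_mean_nn_dist_m=10.0):
        # Not enough TRPs transmitting DL PRS: do not activate inference
        if len(trp_xy) < min_trps:
            return False
        # Crude evenness check: mean distance to the nearest neighbouring TRP
        nn = [min(math.dist(p, q) for q in trp_xy if q != p) for p in trp_xy]
        return sum(nn) / len(nn) >= min_mean_nn_dist_m

    # 8 TRPs on a 4 x 2 grid with 20 m spacing: enough TRPs, roughly even coverage
    grid = [(x * 20.0, y * 20.0) for x in range(4) for y in range(2)]
    print(can_run(grid))  # True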
[0156] Figure 15 is a flowchart illustrating example method 1500 for enhancing the context information consistency using one or more network node-side models with network node assisted positioning, according to some embodiments. Figure 15 corresponds to scenario case 3a. As described above, case 3a relates to network node assisted positioning with one or more network node side models. Method 1500 illustrates the coordination between the LMF and the network node.
[0157] With reference to Figure 15, signaling aspects for coordination on the model inference context requirement for a network node-side model are shown. At block 1502, the network node sends, to the LMF, a model inference validity area associated with the network node. For instance, the gNB signals to the LMF, via the NRPPa interface, the gNB's model inference validity area Ainference. Thus, the LMF is aware that intermediate measurements (e.g., a LOS/NLOS indicator, relative time of arrival (RTOA), etc.) reported for TRPs in Ainference are produced by the AI/ML model, not by conventional measurements of SRS. Also, the LMF may coordinate with the network node (e.g., gNB) on the SRS transmission from the target UE to cover at least the Ainference of the gNB model(s).
[0158] At block 1504, if the one or more AI/ML models are hosted by a central unit of the network node (e.g., gNB-CU), the network node (e.g., via the gNB-CU) configures the SRS transmission from the target UE so that the SRS is transmitted at least towards the TRPs in the Ainference of the one or more AI/ML models deployed at the network node. The SRS configuration is typically via RRC (radio resource control) signaling, and it may be assisted by MAC (medium access control) signaling.
[0159] At block 1506, if the one or more AI/ML models are hosted by a distributed unit of the network node (e.g., gNB-DU), the network node (e.g., via the gNB-DU) requests the CU of the network node (e.g., gNB-CU) to configure the SRS transmission from the target UE, so that the SRS is transmitted at least towards the TRPs in the Ainference of the one or more AI/ML models deployed at the network node. Alternatively, the DU of the network node (e.g., gNB-DU) sends the Ainference of its model to the CU of the network node (e.g., gNB-CU), so that the CU of the network node can configure the SRS transmission accordingly. The SRS configuration is typically via RRC signaling, and it may be assisted by MAC signaling.
[0160] In some of the examples described above, it is assumed that one AI/ML model is being prepared for AI/ML positioning. When multiple candidate models are available for the same functionality, then the methods and processes described above can be extended. For example, the coordination at the model inference stage can be extended as described below.
[0161] If the model inference context requirement is satisfied for at least one of the candidate models, then one of the satisfying candidate models can be selected for performing model inference for the functionality. If the model inference context requirement is not satisfied for any of the candidate models, then the configuration can be updated to satisfy the model inference context requirement for one or more of the candidate models. After that, the model inference can proceed with one of the satisfying candidate models for the functionality.
[0162] If the model inference context requirement is not satisfied, and the configuration cannot be updated to satisfy the model inference context requirement for any of the candidate models, then model inference cannot proceed. A notification message (e.g., an error message) can be sent on the failure of model inference due to mismatched context between model training and model inference. A similar extension can be applied in the other procedures discussed above, so that each procedure takes into consideration that multiple candidate models are available for the given functionality (e.g., positioning).
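The multi-candidate extension of paragraphs [0161] and [0162] can be sketched as a selection loop: pick the first candidate whose inference context requirement is satisfied; otherwise attempt a configuration update; otherwise notify failure. The Python fragment below is non-normative and all hooks are hypothetical.

    def select_model(candidates, context_satisfied, update_config, notify):
        for model in candidates:
            if context_satisfied(model):
                return model                    # use this candidate for inference
        for model in candidates:
            if update_config(model) and context_satisfied(model):
                return model                    # proceed after reconfiguration
        notify("model inference failed: no candidate matches its training context")
        return None

    chosen = select_model(["model-A", "model-B"],
                          context_satisfied=lambda m: m == "model-B",
                          update_config=lambda m: False,
                          notify=print)
    print(chosen)  # model-B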
[0163] The foregoing specification is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the disclosure herein is not to be determined from the specification, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present disclosure and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the present disclosure. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the disclosure.
Claims
1. A method performed by a user equipment (UE), the method comprising:
receiving, or sending, context information related to one or more of: training data collection associated with one or more artificial intelligence/machine learning (AI/ML) models, training of the one or more AI/ML models, and inference based on the one or more AI/ML models, the one or more AI/ML models being configured to provide measurements associated with positioning the UE;
in accordance with a determination that a model inference context requirement is satisfied based on consistency between the context information related to the training of at least one of the one or more AI/ML models and the context information related to the inference based on the at least one of the one or more AI/ML models, selecting the at least one of the one or more AI/ML models for performing model inference, or sending a report including at least one of configurations associated with the positioning of the UE or the UE's capabilities in relation to performing inference based on the at least one of the one or more AI/ML models; and
in accordance with a determination that the model inference context requirement is not satisfied, the configurations associated with the positioning of the UE are updated or a notification message is provided.
2. The method of claim 1, wherein the context information comprises one or more of:
deployment environment identifiers of deployment of the one or more AI/ML models;
one or more configurations of wireless signal transmission;
one or more configurations of receiving wireless signals and generating the desired measurement reports;
positioning capabilities of the UE; and
positioning capabilities of a network node.
3. The method of claim 2, wherein the deployment environment identifiers comprise one or more validity areas including at least one of:
a training dataset validity area (Adataset);
a model training validity area (Atraining) associated with the at least one of the one or more AI/ML models; and
a model inference validity area (Ainference) associated with the at least one of the one or more AI/ML models.
4. The method of claim 3, wherein the one or more validity areas are associated with one or more of:
a list of transmission and reception points (TRPs) configured to transmit a downlink positioning reference signal (DL PRS) and cover a surrounding area for supporting positioning of the UE; and
a list of DL PRS resources.
5. The method of any of claims 3 or 4, wherein a subset of the list of TRPs is included in at least one of the training dataset validity area, the model training validity area, or the model inference validity area.
6. The method of any of claims 3-5, wherein the model training validity area (Atraining) and the model inference validity area (Ainference) are the same.
7. The method of any of claims 3-5, wherein the model inference validity area (Ainference) is a subset of the model training validity area (Atraining).
8. The method of any of claims 1-7, wherein the context information comprises at least one of: a downlink positioning reference signal (DL PRS); a system information block (SIB) signal; a channel state information reference signal (CSI-RS); or a tracking reference signal.
9. The method of claim 8, wherein the DL PRS is associated with one or more identifiers and configurations affecting PRS measurements,
the one or more identifiers including one or more of dl-PRS-ID, nr-DL-PRS-ResourceSetID, NR-DL-PRS-ResourceID, nr-PhysCellID, nr-CellGlobalID, nr-ARFCN, and NR-TRP-LocationInfo;
the configurations including one or more of: nr-SSB-Config, nr-DL-PRS-Info, NR-DL-PRS-BeamInfo, NR-DL-PRS-AssistanceDataPerTRP, and NR-DL-PRS-PositioningFrequencyLayer.
10. The method of any of claims 8-9, wherein the DL PRS is associated with one or more PRS measurements including one or more of:
a type of the measurements;
a format of the measurement report;
a size of the measurements for a PRS resource;
a quantization resolution or granularity used in the measurement report;
a timeliness of the measurement; and
a quality, reliability, certainty/uncertainty, or trustworthiness of the measurement.
11. The method of any of claims 1-10, wherein the context information comprises uplink (UL) reference signals including a positioning sounding reference signal (SRS).
12. The method of claim 11, wherein the uplink reference signals are associated with one or more identifiers and configurations affecting SRS measurements, the one or more identifiers and configurations including at least one of:
SRS carrier characteristics;
a physical cell ID of the cell that includes the SRS carrier;
SRS frequency;
SRS resource set information;
positioning SRS resource set information;
SSB (synchronization signal block) information;
spatial relation information; and
pathloss reference information.
13. The method of any of claims 11 or 12, wherein the UL reference signals are associated with one or more SRS measurements including one or more of:
a type of measurements;
a format of the measurement report;
a number of samples or paths to be measured and reported;
a quantization resolution used in the measurement report;
a timeliness of the measurement; and
a quality, reliability, certainty/uncertainty, or trustworthiness of the measurement.
14. The method of any of claims 1-13, wherein the one or more AI/ML models are deployed at the UE, the method further comprising: receiving, from a location management function (LMF), pre-defined PRS configurations.
15. The method of claim 14, further comprising:
sending an on-demand positioning reference signal (PRS) request to the location management function (LMF), the on-demand PRS request being a request for a pre-defined PRS configuration;
wherein receiving the context information comprises receiving, from the LMF, PRS configurations used for PRS transmission or an error cause.
16. The method of claim 14, wherein receiving the context information comprises:
receiving from the LMF, or exchanging with the LMF, one or more LTE positioning protocol (LPP) messages associated with UE measurements or DL PRS positioning capabilities of the UE; and
receiving, from the LMF, PRS configurations used for PRS transmission or an error cause.
17. The method of claims 15 or 16, wherein the LMF is configured to perform:
determining a need for PRS transmission or a change to the transmission characteristics of an ongoing PRS transmission;
requesting serving and non-serving network nodes/TRPs for new PRS transmission or PRS transmission with changes to the PRS configuration; and
receiving, from the network nodes/TRPs, the successfully configured or updated PRS transmission.
18. The method of claim 1, wherein the one or more AI/ML models are deployed at an LMF, and wherein sending the context information comprises:
sending information of capabilities of the UE to the LMF for performing channel measurements associated with the one or more AI/ML models deployed at the LMF;
the method further comprising:
receiving assistance data for supporting the one or more AI/ML models deployed at the LMF; and
performing PRS measurements for performing model inference by the LMF.
19. The method of claim 1, wherein sending the context information comprises:
sending information of the capability of the UE to transmit positioning SRS associated with an LMF-side model;
the method further comprising:
receiving assistance data for supporting the LMF-side model; and
sending an SRS measurement report for performing model inference by the LMF.
20. The method of any of claims 1-19, wherein the one or more AI/ML models comprise a plurality of candidate models, the method further comprising:
selecting a candidate model from the plurality of candidate models for performing model inference, wherein the selected candidate model satisfies the model inference context requirements;
updating configurations to satisfy the model inference context requirements and performing model inference; or
sending a notification message indicating failure to satisfy the model inference context requirements and failure of updating the configurations.
21. A method performed by a network node, the method comprising:
sending, or receiving, context information related to one or more of: training data collection associated with one or more artificial intelligence/machine learning (AI/ML) models, training of the one or more AI/ML models, and inference based on the one or more AI/ML models, the one or more AI/ML models being configured to provide measurements associated with positioning a user equipment (UE);
in accordance with a determination that a model inference context requirement is satisfied based on consistency between the context information related to the training of at least one of the one or more AI/ML models and the context information related to the inference based on the at least one of the one or more AI/ML models, selecting the at least one of the one or more AI/ML models for performing model inference, or sending a report including at least one of configurations associated with the positioning of the UE or capabilities of the UE in relation to performing inference based on the at least one of the one or more AI/ML models; and
in accordance with a determination that the model inference context requirement is not satisfied, the configurations associated with the positioning of the UE are updated or a notification message is provided.
22. The method of claim 21, wherein the context information comprises one or more of:
deployment environment identifiers of deployment of the one or more AI/ML models;
one or more configurations of wireless signal transmission;
one or more configurations of receiving wireless signals and generating the desired measurement reports;
the UE's positioning capabilities; and
the network node's positioning capabilities.
23. The method of claim 22, wherein the deployment environment identifiers comprise one or more validity areas including at least one of:
a training data collection validity area (Adataset);
a model training validity area (Atraining) associated with the at least one of the one or more AI/ML models; and
a model inference validity area (Ainference) associated with the at least one of the one or more AI/ML models.
24. The method of claim 23, wherein the one or more validity areas are associated with one or more of:
a list of transmission and reception points (TRPs) configured to transmit a downlink positioning reference signal (DL PRS) and cover a surrounding area for supporting positioning; and
a list of DL PRS resources.
25. The method of any of claims 23 or 24, wherein a subset of the list of TRPs is included in at least one of the training data collection validity area, the model training validity area, or the model inference validity area.
26. The method of any of claims 23-25, wherein the model training validity area (Atraining) and the model inference validity area (Ainference) are the same.
27. The method of any of claims 23-25, wherein the model inference validity area (Ainference) is a subset of the model training validity area (Atraining).
28. The method of any of claims 21-27, wherein the context information comprises at least one of: a downlink positioning reference signal (DL PRS); a system information block (SIB) signal; a channel state information reference signal (CSI-RS); or a tracking reference signal.
29. The method of claim 28, wherein the DL PRS is associated with one or more identifiers and configurations affecting PRS measurements,
the one or more identifiers including one or more of dl-PRS-ID, nr-DL-PRS-ResourceSetID, NR-DL-PRS-ResourceID, nr-PhysCellID, nr-CellGlobalID, nr-ARFCN, and NR-TRP-LocationInfo;
the configurations including one or more of: nr-SSB-Config, nr-DL-PRS-Info, NR-DL-PRS-BeamInfo, NR-DL-PRS-AssistanceDataPerTRP, and NR-DL-PRS-PositioningFrequencyLayer.
30. The method of any of claims 28 or 29, wherein the DL PRS is associated with one or more PRS measurements including one or more of:
a type of the measurements;
a format of the measurement report;
a size of the measurements for a PRS resource;
a quantization resolution or granularity used in the measurement report;
a timeliness of the measurement; and
a quality, reliability, certainty/uncertainty, or trustworthiness of the measurement.
31. The method of any of claims 21-30, wherein the context information comprises uplink (UL) reference signals including a positioning sounding reference signal (SRS).
32. The method of claim 31, wherein the uplink reference signals are associated with one or more identifiers and configurations affecting SRS measurements, the one or more identifiers and configurations including at least one of:
SRS carrier characteristics;
a physical cell ID of the cell that includes the SRS carrier;
SRS frequency;
SRS resource set information;
positioning SRS resource set information;
SSB (synchronization signal block) information;
spatial relation information; and
pathloss reference information.
33. The method of any of claims 31 or 32, wherein the UL reference signals are associated with one or more SRS measurements including one or more of:
a type of measurements;
a format of the measurement report;
a number of samples or paths to be measured and reported;
a quantization resolution used in the measurement report;
a timeliness of the measurement; and
a quality, reliability, certainty/uncertainty, or trustworthiness of the measurement.
34. The method of any of claims 21-33, wherein the one or more AI/ML models are deployed at the network node or a location management function (LMF).
35. The method of claim 34, wherein sending the context information comprises:
sending, to the LMF, a model inference validity area associated with the network node;
the method further comprising:
if the one or more AI/ML models are hosted by a central unit (CU) of the network node, configuring SRS transmission from the UE; and
if the one or more AI/ML models are hosted by a distributed unit (DU) of the network node, requesting the CU of the network node to configure SRS transmission from the UE.
36. The method of claim 34, wherein receiving the context information comprises:
receiving a PRS CONFIGURATION REQUEST message from the LMF for configuring DL PRS transmission by indicated TRP(s); and
sending a response to the LMF, wherein the LMF is configured to determine if the model inference context requirement is satisfied for the one or more AI/ML models deployed at the LMF.
37. The method of claim 34, wherein receiving the context information comprises:
receiving, from the LMF, a POSITIONING INFORMATION REQUEST message including requested SRS transmission according to requirements of the one or more AI/ML models deployed at the LMF;
the method further comprising:
sending a response to the LMF, wherein the LMF is configured to determine if the model inference context requirement is satisfied for the one or more AI/ML models deployed at the LMF.
38. A user equipment, comprising:
processing circuitry configured to perform any of the steps of any of claims 1-20; and
power supply circuitry configured to supply power to the processing circuitry.
39. A network node, comprising:
processing circuitry configured to perform any of the steps of any of claims 21-37; and
power supply circuitry configured to supply power to the processing circuitry.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202463554918P | 2024-02-16 | 2024-02-16 | |
| US63/554,918 | 2024-02-16 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025172899A1 (en) | 2025-08-21 |