WO2025063878A1 - Radio access network awareness of user equipment-side model training - Google Patents
Radio access network awareness of user equipment-side model training
- Publication number
- WO2025063878A1 (PCT/SE2024/050798)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data collection
- model training
- side model
- perform
- indication
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/02—Arrangements for optimising operational condition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B7/00—Radio transmission systems, i.e. using radiation field
- H04B7/02—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
- H04B7/04—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
- H04B7/06—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
- H04B7/0686—Hybrid systems, i.e. switching and simultaneous transmission
- H04B7/0695—Hybrid systems, i.e. switching and simultaneous transmission using beam selection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
- H04L1/0001—Systems modifying transmission characteristics according to link quality, e.g. power backoff
- H04L1/0023—Systems modifying transmission characteristics according to link quality, e.g. power backoff characterised by the signalling
- H04L1/0026—Transmission of channel quality indication
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
- H04L1/0001—Systems modifying transmission characteristics according to link quality, e.g. power backoff
- H04L1/0023—Systems modifying transmission characteristics according to link quality, e.g. power backoff characterised by the signalling
- H04L1/0028—Formatting
- H04L1/0029—Reduction of the amount of signalling, e.g. retention of useful signalling or differential signalling
Definitions
- the present disclosure generally relates to the technical field of wireless communication and, more particularly, to Lifecycle Management (LCM) of an Artificial Intelligence (AI) and/or Machine Learning (ML) model in which inferences are performed at a User Equipment (UE).
- Example use cases include using autoencoders for Channel State Information (CSI) compression to reduce the feedback overhead and improve channel prediction accuracy; using deep neural networks for classifying Line-of-Sight (LOS) and Non-LOS (NLOS) conditions to enhance the positioning accuracy; using reinforcement learning for beam selection at the network side and/or the UE side to reduce the signaling overhead and beam alignment latency; and using deep reinforcement learning to learn an optimal precoding policy for complex Multiple Input Multiple Output (MIMO) precoding problems.
- the present disclosure generally relates to LCM of a model in which inferences are performed at a UE.
- the indication comprises a request to perform the UE-side model training. Performing the data collection is responsive to receiving a response to the indication from the network node.
- the response comprises a configuration for the UE to use for the data collection and/or an acceptance indication.
- the method further comprises receiving, from the network node or a different network node, a notification to stop or pause the data collection for UE-side model.
- the method further comprises transmitting, to the network node or the different network node, a further indication indicating that the data collection for the UE-side model training has stopped or paused.
- the method further comprises informing the network node or a different network node that the data collection for the UE-side model training has been completed; or needs to be resumed and/or re-started and/or re-configured.
- a UE comprising processing circuitry and memory.
- the memory contains instructions executable by the processing circuitry whereby the UE is configured to transmit, to a network node, an indication indicating that the UE needs to perform UE-side model training.
- the UE is further configured to perform data collection for the UE-side model training.
- the UE is further configured to perform any one of the first methods described above.
- Other embodiments include a second method, performed by a UE.
- the method comprises receiving, from a first entity, a first message indicating a need to perform UE-side model training for one or more lower layer functions and/or one or more higher layer functions.
- the one or more lower layer functions comprise PHY and/or MAC layer functions.
- the one or more lower layer functions comprise beam management, providing CSI and/or positioning.
- the one or more higher layer functions comprise RRM measurements and/or an L3 mobility function.
- the method further comprises responsive to receiving the first message, indicating to a RAN node, in a second message, a request to perform the UE-side model training.
- the method further comprises receiving, from the RAN node, a third message indicating whether or not the UE is allowed to perform the UE-side model training.
- the method further comprises starting data collection at the one or more lower layers for the UE-side model training in response to the third message indicating that the UE is allowed to perform the UE-side model training.
- the method further comprises responsive to receiving the third message, sending, to the first entity, a fourth message indicating whether the request was accepted or rejected.
- the method further comprises indicating to the first entity that data collection has started.
- UE comprising processing circuitry and memory.
- the memory contains instructions executable by the processing circuitry whereby the UE is configured to receive, from a first entity, a first message indicating a need to perform UE-side model training for one or more lower layer functions and/or one or more higher layer functions.
- the one or more lower layer functions comprise PHY and/or MAC layer functions.
- the UE is further configured to perform any one of the second methods described above.
- Other embodiments include a third method implemented by a UE.
- the method comprises performing data collection for UE-side model training.
- the method further comprises receiving, from a RAN node, a first message indicating to stop the data collection.
- the method further comprises stopping the data collection for the UE-side model training at one or more lower layers in response to receiving the first message.
- the one or more lower layers comprises a PHY layer and/or a MAC layer.
- the method further comprises transmitting, to a first entity, a second message indicating that the data collection for UE-side model training is stopped.
- the method further comprises receiving, from the RAN node, a third message indicating to resume the stopped data collection.
- the method further comprises resuming the data collection for the UE-side model training at one or more lower layers in response to receiving the third message.
- the one or more lower layers comprises a PHY layer and/or a MAC layer.
- the method further comprises transmitting, to the first entity, a fourth message indicating that the data collection for UE-side model training is resumed.
- a UE comprising processing circuitry and memory.
- the memory contains instructions executable by the processing circuitry whereby the UE is configured to perform data collection for UE-side model training.
- the UE is further configured to receive, from a RAN node, a first message indicating to stop the data collection.
- the UE is further configured to perform any one of the third methods described above.
- Other embodiments include a fourth method implemented by a UE.
- the method comprises stopping data collection for UE-side model training.
- the method further comprises transmitting to a RAN node, a first message indicating that the data collection for the UE-side model training is stopped.
- stopping the data collection for the UE-side model training comprises stopping the data collection at one or more lower layers.
- the one or more lower layers comprises a PHY layer and/or a MAC layer.
- the method further comprises transmitting, to a first entity, a second message indicating that the data collection for the UE-side model training is stopped.
- the method further comprises determining that the data collection for the UE-side model training can be resumed.
- the method further comprises transmitting, to the RAN node, a third message indicating that the data collection for the UE-side model training can be resumed.
- the method further comprises receiving, from the RAN node, a fourth message indicating whether or not to resume the data collection.
- the method further comprises resuming the data collection for the UE-side model training at one or more lower layers in response to the fourth message indicating to resume the data collection.
- the method further comprises transmitting, to a first entity, a fifth message indicating whether or not the data collection for the UE-side model training is resumed depending on the fourth message.
- a UE comprising processing circuitry and memory.
- the memory contains instructions executable by the processing circuitry whereby the UE is configured to stop data collection for UE-side model training.
- the UE is further configured to transmit to a RAN node, a first message indicating that the data collection for the UE-side model training is stopped.
- the UE is further configured to perform any one of the fourth methods described above.
- Other embodiments include a method implemented by a network node.
- the method comprises receiving, from a UE, an indication indicating that the UE needs to perform UE-side model training.
- the method further comprises, responsive to receiving the indication, transmitting a response to the UE indicating that the UE is allowed to perform the data collection for the UE-side model training or shall perform the data collection for the UE-side model training.
- the response comprises an acceptance indication and/or a configuration for the UE to use for the data collection.
- the indication comprises a request from the UE to perform the UE-side model training.
- the method further comprises determining whether to accept or reject the request.
- the method further comprises receiving, from the UE, a request to stop or pause the data collection for the UE-side model training or a notification that the UE has stopped or paused the data collection for the UE-side model training, or an indication that the data collection for the UE-side model training needs to be resumed and/or restarted and/or reconfigured.
- the method further comprises transmitting, to the UE, an indication to stop or pause the data collection for the UE-side model.
- a network node comprising processing circuitry and memory.
- the memory contains instructions executable by the processing circuitry whereby the network node is configured to receive an indication from a UE indicating that the UE needs to perform UE-side model training.
- the network node is further configured to, responsive to receiving the indication, transmit a response to the UE indicating that the UE is allowed to perform the data collection for the UE-side model training or shall perform the data collection for the UE-side model training.
- the response comprises an acceptance indication and/or a configuration for the UE to use for the data collection.
- the network node is further configured to perform any of the network node methods described above.
- Other embodiments include a second method implemented by a network node.
- the method comprises receiving, from a UE, a request to perform data collection for UE-side model training.
- the method further comprises determining whether to accept or reject the request.
- the method further comprises indicating to the UE whether the request is accepted or rejected.
- the method further comprises determining that the UE should stop the data collection for the UE-side model training.
- the method further comprises indicating, to the UE, to stop the data collection for UE-side model training.
- the method further comprises determining that the UE should resume the data collection.
- the method further comprises indicating to the UE to resume the data collection.
- the method further comprises receiving, from the UE, a notification indicating that the data collection for the UE-side model training is stopped or paused; and/or notice that the data collection for UE-side model training is resumable.
- the method further comprises indicating, to the UE, whether or not to resume the data collection.
- a network node comprising processing circuitry and memory.
- the memory contains instructions executable by the processing circuitry whereby the network node is configured to receive, from a UE, a request to perform data collection for UE-side model training.
- the network node is further configured to determine whether to accept or reject the request.
- the network node is further configured to indicate to the UE whether the request is accepted or rejected.
- the network node is further configured to perform any one of the second network node methods described above.
- Other embodiments include a computer program comprising instructions which, when executed on processing circuitry of a network node, cause the network node to carry out any one of the network node methods described above.
- FIG. 1 is a schematic block diagram illustrating an example model LCM procedure, according to one or more embodiments of the present disclosure.
- FIG. 2 is a schematic block diagram illustrating an example framework for studying model LCM aspects, according to one or more embodiments of the present disclosure.
- FIG. 3 is a schematic block diagram illustrating an example autoencoder for CSI, according to one or more embodiments of the present disclosure.
- FIGS. 5-11 are signaling diagrams illustrating examples of signaling exchanged according to one or more embodiments of the present disclosure.
- FIGS. 12-15 are flow diagrams illustrating example methods implemented by a UE according to one or more embodiments of the present disclosure.
- FIGS. 16-17 are flow diagrams illustrating example methods implemented by a network node according to one or more embodiments of the present disclosure.
- FIG. 18 illustrates an example UE, according to one or more embodiments of the present disclosure.
- FIG. 19 illustrates an example network node, according to one or more embodiments of the present disclosure.
- model refers to one or more data structures and/or algorithms used to generate a prediction from collected input data.
- the terms ML model, AI model, AI/ML model, and AI and/or ML model should be considered to have equivalent meanings to each other and therefore to be interchangeable.
- a model may be deployed, implemented, and/or configured in a UE, in a network node, or both.
- a model may receive a reference signal measurement (e.g., a measurement of a Synchronization Signal Block (SSB)) at time instance t0 as input and provide as output a prediction of the reference signal at time t0+T.
- a model may receive as input a measurement of a reference signal transmitted on a first beam and provide as outcome a prediction of another reference signal transmitted on another beam.
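- A minimal sketch of such a predictor, assuming a simple one-hidden-layer network with random (untrained) weights; the dimensions, sample values, and function names below are hypothetical.
```python
# Toy sketch: map a window of past SSB RSRP samples (dBm) measured up to time t0
# to a predicted RSRP at t0 + T. Weights are random here; in practice they would
# come from model training.
import numpy as np

rng = np.random.default_rng(0)

def predict_rsrp(history_dbm: np.ndarray,
                 w1: np.ndarray, b1: np.ndarray,
                 w2: np.ndarray, b2: np.ndarray) -> float:
    """One hidden-layer MLP: N past samples -> one future sample."""
    h = np.tanh(history_dbm @ w1 + b1)   # hidden representation
    return float(h @ w2 + b2)            # predicted RSRP at t0 + T

# Hypothetical dimensions: 8 past samples, 16 hidden units.
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=16), 0.0

history = np.array([-92.1, -91.8, -93.0, -92.5, -91.2, -90.9, -91.5, -90.7])
print(predict_rsrp(history, w1, b1, w2, b2))
```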
- the model may comprise a UE-side specific model and a network (NW)-side specific model that operate jointly.
- the function of the UE-side model may be to compress a channel input and the function of the NW-side model may be to decompress the received output from the UE.
- a model is a model to aid a UE in channel estimation (or interference estimation for channel estimation).
- the channel estimation may, for example, be for the Physical Downlink Shared Channel (PDSCH) and be associated with a specific set of reference signals patterns that are transmitted from the NW to the UE.
- the model may be part of the receiver chain within the UE and may not be directly visible within the reference signal pattern, and be configured or scheduled to be used between the NW and UE.
- Another example of a model for CSI estimation is to predict a suitable Channel Quality Indicator (CQI), Precoding Matrix Indicator (PMI), Rank Indicator (RI) or similar value in the future.
- the future may be a certain number of slots after the UE has performed the last measurement, or may target a specific future slot.
- the UE is connected to a network (e.g., it may receive and transmit data and/or control information).
- the UE may further be in RRC_CONNECTED state and be configured to use the model for a specific function.
- the specific function may include one or more of the following “functionality areas:”
- RRM measurement includes mobility measurement, such as Reference Signal Received Power (RSRP), Reference Signal Received Quality (RSRQ), Received Signal Strength Indicator (RSSI), Radio Link Failure (RLF) predictions, and other aspects related to radio link failure.
- RRM measurement may be performed using a measurement framework as will be discussed further below.
- the measurement framework may govern how a UE performs measurements (e.g., through a measurement configuration), what triggers measurement reports (e.g., whether measurements reports are event-triggered or are sent periodically), and what content is included in the measurement reports.
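- As a toy illustration of the two reporting triggers just mentioned (event-triggered versus periodic), the sketch below assumes a simple offset-plus-hysteresis comparison; the condition and the numeric values are hypothetical and do not correspond to any specific 3GPP measurement event.
```python
# Hypothetical event-triggered report check: report when a neighbor becomes
# offset + hysteresis better than the serving cell.
def should_send_report(neighbor_rsrp_dbm: float,
                       serving_rsrp_dbm: float,
                       offset_db: float = 3.0,
                       hysteresis_db: float = 1.0) -> bool:
    return neighbor_rsrp_dbm > serving_rsrp_dbm + offset_db + hysteresis_db

# Periodic reporting alternative: send a report every `period_ms` regardless of events.
def next_periodic_report_time(last_report_ms: int, period_ms: int = 480) -> int:
    return last_report_ms + period_ms

print(should_send_report(neighbor_rsrp_dbm=-88.0, serving_rsrp_dbm=-93.0))  # True
print(next_periodic_report_time(last_report_ms=1000))                       # 1480
```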
- FIG. 1 is an illustration of training and inference pipelines 10, 20 and their interactions within a model LCM procedure 30.
- the model LCM procedure 30 typically comprises a training pipeline 10 (which may or may not be used for retraining, either partly or wholly), a model deployment stage 16, an inference pipeline 20, and a drift detection stage 25.
- the training pipeline 10 may include data ingestion 11, data preprocessing 12, model training 13, model evaluation 14, and/or model registration 15 stages.
- Data ingestion 11 refers to gathering raw data (e.g., training data) from data storage. After data ingestion 11, there may also be a step that controls the validity of the gathered data.
- Data preprocessing 12 refers to feature engineering that is applied to the gathered data.
- data preprocessing 12 may include data normalization and/or data transformation required for the input data to the model.
- Model evaluation 14 refers to benchmarking model performance against a model baseline. The iterative steps of model training 13 and model evaluation 14 may continue until an acceptable level of performance is achieved.
- Model registration 15 refers to registering the model, including any corresponding metadata that provides information on how the model was developed, and possibly model evaluation performance outcomes.
- the model deployment stage 16 makes the trained (e.g., retrained) model part of the inference pipeline 20.
- the inference pipeline 20 may include data ingestion 21, data preprocessing 22, model operation 23, and data and/or model monitoring 24 stages.
- Model operation 23 refers to using the trained and deployed model in an operational mode.
- Data and model monitoring 24 refers to validating that the inference data are from a distribution that aligns well with the training data, as well as monitoring model outputs for detecting any performance, or operational, drifts.
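- The pipeline stages of FIG. 1 can be pictured, purely as a sketch, as a chain of plain functions; the function names, the linear model, and the toy data below are hypothetical and no particular implementation is prescribed.
```python
# Illustrative sketch of the LCM stages of FIG. 1 as chained functions.
import numpy as np

def data_ingestion(storage):                 # stage 11 / 21: gather raw data
    return np.asarray(storage, dtype=float)

def data_preprocessing(raw):                 # stage 12 / 22: normalization
    return (raw - raw.mean()) / (raw.std() + 1e-9)

def model_training(features, targets):       # stage 13: fit a toy linear model
    w, *_ = np.linalg.lstsq(features, targets, rcond=None)
    return w

def model_evaluation(w, features, targets):  # stage 14: benchmark vs. a baseline
    mse = float(np.mean((features @ w - targets) ** 2))
    baseline = float(np.mean((targets.mean() - targets) ** 2))
    return mse <= baseline                   # acceptable level of performance?

def model_registration(w):                   # stage 15: store model + metadata
    return {"weights": w, "metadata": {"trained_on": "toy data"}}

raw = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]]
y = np.array([3.0, 3.0, 7.0, 7.0])
x = data_preprocessing(data_ingestion(raw))
w = model_training(x, y)
if model_evaluation(w, x, y):
    registry_entry = model_registration(w)   # stage 16: make it part of inference
```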
- the models being discussed in the Rel-18 study item on AI/ML for the NR air interface can be categorized into the following two types, namely a one-sided AI/ML model and a two-sided AI/ML model.
- FIG. 3 shows an example autoencoder (AE)-based two-sided CSI compression use case.
- a UE uses an encoder 42 (i.e., the UE part of the two-sided AE model 40) to compress measured CSI 41 for a wireless channel.
- the output of the encoder 42 (i.e., the compressed CSI 43) is transmitted to a gNB.
- the gNB uses a decoder 44 (i.e., the NW-part of the two-sided AE model 40) to generate reconstructed CSI 45 for the wireless channel.
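- A minimal sketch of such a two-sided autoencoder split, assuming random weights and hypothetical dimensions; a real system would train the encoder and decoder jointly and quantize the compressed CSI before feedback.
```python
import numpy as np

rng = np.random.default_rng(1)
CSI_DIM, CODE_DIM = 64, 8                      # hypothetical: 64 coefficients -> 8 values

W_enc = rng.normal(size=(CSI_DIM, CODE_DIM))   # UE part of the model (encoder 42)
W_dec = rng.normal(size=(CODE_DIM, CSI_DIM))   # NW part of the model (decoder 44)

def ue_encode(measured_csi):
    """Runs at the UE: compress measured CSI (41) into compressed CSI (43)."""
    return np.tanh(measured_csi @ W_enc)

def gnb_decode(compressed_csi):
    """Runs at the gNB: produce reconstructed CSI (45) from the feedback."""
    return compressed_csi @ W_dec

measured_csi = rng.normal(size=CSI_DIM)        # stand-in for a channel estimate
feedback = ue_encode(measured_csi)             # reported over the air interface
reconstructed = gnb_decode(feedback)
```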
- UE-side model monitoring: There are several methods for UE-side model monitoring.
- the training of UE-side model is performed at the UE itself. That is, the UE performs both the training and the inference.
- this approach might be too complex in practice. For example, given the limited computational resources of the UE and the large computational complexity that training might require, it may not be feasible for the UE to perform either or both of the training and inference.
- when models are dependent on location and/or region, a single UE is unlikely to be able to cover an entire coverage area. Indeed, the models that the UE trains by itself would be limited to the areas in which the UE moves. Accordingly, every time the UE enters a new area its trained models could become outdated.
- a network node e.g., a Radio Access Network (RAN) node like a gNB or a Core Network (CN) node (e.g., like the Network Data Analytics Function (NWDAF)), collects data from a UE and trains a model that at some point should be transferred to that UE or other UEs that will then apply it.
- an Over-the-Top (OTT) server outside of the 3GPP environment may be in charge of performing the training.
- This server could be for example a UE-vendor specific server.
- This latter approach might be a reasonable candidate because, in order to have optimal performance, the training data set should fit the inference operations at the device, which may depend on UE-vendor-specific implementation (e.g., software/hardware properties/capabilities).
- FIG. 4 is a schematic diagram illustrating an example wireless communication network 100 comprising a RAN node 120, a core network node 140, an Over-the-Top (OTT) node 150, and a UE 110.
- the RAN node 120 serves a cell 130 to the UE 110.
- the cell 130 supports a RAT (e.g., NR, 6G) that provides the UE 110 with access to a core network 160 that comprises the core network node 140.
- the core network 160 provides access to the OTT server 150, which may be external to the core network 160.
- when the UE-side model training is performed by a network node outside the RAN 130, e.g., in a CN node 140, outside of the 3GPP network (such as at a UE-vendor specific server), at an OTT server 150, or by the UE 110 itself (e.g., in the UE's application layer), there is no possibility, in the current 3GPP specifications, for a RAN node 120 to become aware of this operation. This might lead to undesired behavior that ultimately may affect the overall system performance (and, potentially, UE performance). In some scenarios, training operations performed by the UE might require specific configurations and settings at the UE that collide with the NW-provided configuration.
- the UE 110 may need to upload the collected data to the node performing the training, e.g., to the UE-vendor specific server. Uploading this collected data may impact the RAN 130 system performance in terms of available uplink (UL) radio resources, and the delivery of the collected data itself may be delayed if the cell is congested.
- the indication may correspond to a request from the UE to the network node, or an indication for informing the network node.
- in response to the transmitted indication, the UE receives a response from the network node, such as an acceptance indication, a rejection indication, a configuration for the UE to use for data collection, etc.
- while the UE is performing data collection for UE-side model training (e.g., in response to the acceptance indication), the UE transmits an indication to stop and/or pause data collection for the UE-side model. Alternatively, the UE transmits an indication that the UE has stopped and/or paused data collection for the UE-side model.
- the indication is transmitted to a network node (e.g., the same network node to which the UE has transmitted the indication that it needed to perform data collection for UE-side model training, or a different network node).
- the UE receives an indication to stop and/or pause data collection for the UE-side model from a network node (e.g., the same network node to which the UE has transmitted the indication that it needed to perform data collection for UE-side model training, or a different network node).
- the UE transmits an indication informing the NW that data collection for UE-side model training has been completed.
- the UE transmits an indication informing the NW that data collection for UE-side model training needs to be resumed and/or restarted and/or re-configured.
- Embodiments also include methods for a network node (e.g., a RAN node 120) to determine if the UE can start, stop, or resume the data collection for the UE-side model training and to transmit responses to the transmitted indications from the UE.
- a network node may become aware of whether the UE 110 needs to perform model training by performing data collection, which may require the UE 110 to perform one or more measurements that may serve as input for model training. Accordingly, if the UE request to perform UE-side model training is accepted, the network can, for example, provide the necessary configuration to the UE for proper UE-side model training, e.g., to enable the UE to perform data collection for the purpose of UE-side model training, and/or to understand whether a degradation in performance is to be expected due to the data collection process the UE performs.
- the solution proposed herein enables the network to avoid one or more side effects or unintended consequences of the model training on the performance of the UE and/or network, e.g., by avoiding conflict between the network-expected configuration/behavior at the UE and the AI/ML-based training activities at the UE.
- FIG. 5 illustrates a first example of signaling according to one or more embodiments of the present disclosure.
- the UE transmits an indication to a network node indicating that the UE needs to perform UE-side model training (step 1510).
- the action of “model training” may comprise the UE performing measurements and/or performing data collection (which may include the performed measurements) for the purpose of training one or more UE-sided AI/ML model(s), e.g., in the case data is collected at the UE and reported to an OTT server for the AI/ML model training and/or in the case data is collected at the UE and the model training is performed at the UE itself.
- the network node may correspond to a Radio Access Network (RAN) node such as a gNodeB, an eNodeB, a 6G radio access network node, a centralized unit in the RAN, a server running a baseband, and/or a Core Network (CN) node such as an Access and Mobility Management Function (AMF).
- the indication the UE transmits may correspond to a request to the network node for performing UE-side model training. In other words, the UE does not perform UE-side model training and/or data collection before it receives a response from the network node.
- the UE starts a supervision timer when it transmits the request to the network node. While the timer is running, the UE expects a response; if the response is not received before the timer expires, the UE aborts the procedure and does not perform data collection and/or UE-side model training.
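- The supervision-timer behavior described above can be sketched as follows, assuming a hypothetical timer value and callback names.
```python
import time

SUPERVISION_TIMER_S = 5.0   # hypothetical value

def request_training_permission(send_request, poll_response):
    """Send the request, then wait for a response until the supervision timer expires."""
    send_request()
    deadline = time.monotonic() + SUPERVISION_TIMER_S
    while time.monotonic() < deadline:
        response = poll_response()           # None until a response arrives
        if response is not None:
            return response                  # proceed per accept/reject content
        time.sleep(0.1)
    return None  # timer expired: abort, do not start data collection/training

# Hypothetical usage with stub callbacks:
print(request_training_permission(lambda: None, lambda: {"result": "accept"}))
```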
- the request comprises an RRC message e.g., in case the network node corresponds to a RAN node 120, like a gNodeB or 6G RAN node.
- the RRC message may be a UE Assistance Information message including an indication that the UE needs to perform UE-side training of an AI/ML model and/or data collection for AI/ML model training.
- the request to the network node for performing UE-side model training and/or data collection for AI/ML model training may contain one or more of the following information elements (an illustrative sketch follows this list):
- the indication may indicate, e.g., the need for the UE to perform relaxed measurements on certain frequencies, the need to reduce MIMO layers, or the need to avoid DAPS handover;
- time period or interval may be provided as one or more time units, such as: number of radio frames and/or subframes and/or OFDM symbols, seconds, minutes, hours, etc;
- Indications that identify the training use case and/or AI/ML functionality, e.g., training for channel state information (CSI) compression, for positioning, or for beam management.
- the indication might be a list indicating different training requests received via the over-the-top server for the UE-side model training;
- SSB-related data such as measurements (e.g., SS-RSRP, SS-RSRQ, SS-SINR as defined in TS 38.215), SSB indexes, physical cell identifiers (PCIs), information obtained from system information, etc.; another example is CSI-RS-related measurements.
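- Purely for illustration, the request contents listed above could be gathered into a container along the following lines; the field names are hypothetical and do not correspond to actual RRC information elements.
```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UeSideTrainingRequest:
    needed_relaxations: List[str] = field(default_factory=list)    # e.g., "relaxed measurements on f1"
    duration_s: Optional[int] = None                               # requested time period/interval
    use_case: Optional[str] = None                                 # e.g., "CSI compression", "positioning"
    training_request_ids: List[int] = field(default_factory=list)  # per-OTT-server training requests
    collected_data_types: List[str] = field(default_factory=list)  # e.g., "SS-RSRP", "CSI-RS"

req = UeSideTrainingRequest(
    needed_relaxations=["avoid DAPS handover"],
    duration_s=600,
    use_case="beam management",
    collected_data_types=["SS-RSRP", "PCI"],
)
```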
- FIGS. 6 and 7 illustrate different examples of signaling according to embodiments of the present disclosure.
- the indication may correspond to a request to the network node for performing UE-side model training (step 1610) and the UE may receive as a response an indication of acceptance (step 1620, as shown in FIG. 6) or rejection (step 1720, as shown in FIG. 7).
- the UE receives the response with an indication of acceptance and, if the UE request is accepted, the indication indicates an RRC configuration to use for the duration of the training, wherein the RRC configuration may comprise the CSI-RS/SSB resources to use for measurements and data collection, the DRX configuration, a radio bearer reconfiguration, a MIMO configuration, the frequencies in which the data collection is allowed, the duration of the data collection, and/or the point in time at which the data collection can start.
- a cause value indicating the reason for the rejection, e.g., network overload, lack of radio resources, or high-priority traffic executed at the UE
- the indication of acceptance or rejection may comprise multiple indications, each entry indicating the network's response to a respective one of a plurality of training requests received from the UE.
- Each training request might be identified by a specific identifier (ID).
- the indication of acceptance or rejection is provided in a DL MAC CE or in PHY-level signaling from the network node to the UE. This option enables more frequent operation of the training request/response procedure based on the status of L1 operation in the lower layers.
- the indication of acceptance or rejection is provided in an RRC message.
- This option enables the network to make the training request/response procedure a slower process compared to the MAC/PHY based reply option and thus possibly reduces the need for frequent communication between the UE and the network node.
- the UE receives the accept/reject decision multiple times for a single request that it had transmitted. Such an embodiment is especially useful when the UE includes an indication of the duration for which it intends to perform the training. Based on such a ‘duration’ indication in the request from the UE, the network node can reply with accept/reject several times until the expiry of such a timer.
- the UE receives an accept/reject from the network node but such an accept/reject decision is appended with a validity duration which indicates the time for which such an accept/reject decision is valid.
- when the response indicates reject, the response message includes a timer value based on which the UE starts a timer (set to the received value), and: i) while the timer is running, the UE is not allowed to send another indication of the need to perform UE-side model training; ii) when the timer expires, the UE is allowed to send another indication of the need to perform UE-side model training; iii) the timer is stopped under one or more conditions, such as entering RRC_IDLE or RRC_INACTIVE.
- when the response indicates acceptance, the response message includes a timer value based on which the UE starts a timer (set to the received value), and: i) while the timer is running, the UE is allowed to perform UE-side model training; ii) when the timer expires and the UE-side model training is not completed, the UE is allowed to send another indication of the need to perform UE-side model training; iii) the timer is stopped under one or more conditions, such as completing the training.
- in response to receiving the indication from the network node that the UE request is accepted, the UE performs data collection for the UE-side model training.
- the indication that the request is rejected includes a timer value based on which the UE starts a first timer (with the indicated value). While the timer is running, the UE does not transmit another request. After the timer expires, the UE is allowed to transmit another request for data collection for the UE-side model training. In some such embodiments, the UE stops the timer upon the occurrence of an event such as a handover, or reception of a message from the network indicating that the UE is allowed to perform data collection for AI/ML model training and/or model training.
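- The prohibit timer applied on rejection and the validity timer applied on acceptance can be tracked as sketched below; the class, method names, and timer values are hypothetical.
```python
import time

class TrainingPermissionState:
    def __init__(self):
        self.prohibit_until = 0.0   # while in the future: may not send a new request
        self.allowed_until = 0.0    # while in the future: UE-side training is allowed

    def on_reject(self, timer_value_s: float):
        self.prohibit_until = time.monotonic() + timer_value_s

    def on_accept(self, timer_value_s: float):
        self.allowed_until = time.monotonic() + timer_value_s

    def may_send_new_request(self) -> bool:
        return time.monotonic() >= self.prohibit_until

    def may_train(self) -> bool:
        return time.monotonic() < self.allowed_until

    def on_rrc_idle_or_inactive(self):
        self.prohibit_until = 0.0   # prohibit timer stopped on the state change

state = TrainingPermissionState()
state.on_reject(timer_value_s=30.0)
assert not state.may_send_new_request()
```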
- a UE triggers the transmission to the network node indicating the need for performing data collection for model training in response to receiving an indication, in a first message from a first entity, indicating the need to perform UE-side model training (e.g., at lower layers, PHY/MAC, or at RAN higher layers, e.g., for RRM or layer-3 mobility).
- the first entity may comprise one or more higher layers 50 of the UE (e.g., the UE application layer) as shown in FIG. 8.
- higher layers 50 of the UE may be in charge of performing the UE-side model training.
- there may be an internal communication in the UE in which an OTT client indicates within the UE (e.g., to the UE RAN layers such as PHY/MAC/RRC layers) that data collection is needed, so that the UE triggers the transmission to the network node of the indication that data collection is needed, e.g., by one or more lower layers 55 of the UE (step 1810).
- the UE has entered an area (e.g., geographical area, or cell, or cells controlled by a certain gNB) in which another UE-sided model training session is needed, e.g., the first entity does not have an available trained model for the concerned area.
- the first entity may request the UE to start data collection for UE-side model training.
- the area may be known by the first entity from the location information provided by the UE to the first entity.
- This message may also include an indication associated with a specific UE-vendor (UE set) for which training is allowed.
- the UE may indicate to the RAN node 120 that it is capable of AI/ML training. Then the RAN node 120 may provide a dedicated RRC message indicating whether the UE can request resources for UE-side model training.
- this message may also include an indication of the specific training session for which the training can start/be resumed, wherein each training session may be associated with a specific set of training resource configurations, or with a specific model training (in which case the indication of the specific data collection can be the model ID), or with a specific AI/ML functionality (such as beam management, CSI prediction, positioning predictions, etc.) (in which case the indication of the specific data collection can be the functionality ID).
- This message may also include an indication associated with a specific UE-vendor for which training is allowed. In such a case, all UEs of the specific UE vendor are allowed to perform UE-side model training in the cell.
- in response to receiving the indication from the RAN node 120, in a first message, to stop/pause the data collection, the UE may stop performing data collection for the UE-side model training (step 1540). In case of receiving a plurality of stop/pause indications for multiple training operations at the UE, the UE may apply the stop/pause of data collection to each of the indicated AI/ML training models.
- the UE may transmit an indication in a second message to the first entity that data collection for UE-side model training is stopped/paused, e.g., as shown in FIG. 11 (step 1560).
- in response to receiving the indication from the RAN node 120, in a third message, to resume the data collection, the UE may resume performing data collection at lower layers 55 (e.g., PHY/MAC) for the UE-side model training. In case of receiving a plurality of resume indications for multiple stopped/paused training operations at the UE, the UE may apply the resumption of data collection to each of the indicated training models.
- the UE may transmit an indication in a fourth message to the first entity that data collection for UE-side model training is resumed.
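- The start/stop/resume interactions between the RAN node, the UE lower layers, and the first entity could be tracked at the UE roughly as follows; the state names and the callback are hypothetical.
```python
from enum import Enum, auto

class CollectionState(Enum):
    IDLE = auto()
    COLLECTING = auto()
    PAUSED = auto()

class UeDataCollection:
    """Tracks one UE-side data collection session (hypothetical helper)."""

    def __init__(self, notify_first_entity):
        self.state = CollectionState.IDLE
        self.notify_first_entity = notify_first_entity  # e.g., callback toward the OTT client

    def on_ran_accept(self):
        # RAN node allowed the training: start collecting and inform the first entity.
        self.state = CollectionState.COLLECTING
        self.notify_first_entity("data collection started")

    def on_ran_stop(self):
        # Stop/pause indication from the RAN node.
        if self.state is CollectionState.COLLECTING:
            self.state = CollectionState.PAUSED
            self.notify_first_entity("data collection stopped/paused")

    def on_ran_resume(self):
        # Resume indication from the RAN node for a paused collection.
        if self.state is CollectionState.PAUSED:
            self.state = CollectionState.COLLECTING
            self.notify_first_entity("data collection resumed")

session = UeDataCollection(notify_first_entity=print)
session.on_ran_accept()
session.on_ran_stop()
session.on_ran_resume()
```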
- embodiments include a method performed by a UE, e.g., as illustrated in FIG. 11.
- the method comprises transmitting an indication to the RAN node 120, e.g., gNB, in a first message, that data collection for the UE-side model training is stopped.
- the transmission of the first message may be in response to receiving an indication from the first entity to stop the data collection for UE-sided model training, e.g., depending on whether the UE is in coverage suitable/expected for the AI/ML training operations.
- the method may further comprise stopping performing data collection at lower layers 55 (e.g., PHY/MAC) for the UE-side model training.
- the method may further comprise transmitting an indication in a second message to the first entity that data collection for UE-side model training is stopped.
- the method may further comprise determining that data collection for UE-side model training can be resumed.
- the method may further comprise transmitting an indication to the RAN node 120, e.g., gNB, in a third message, that data collection for the UE-side model training can be resumed.
- the method may further comprise receiving an indication from the RAN node 120, e.g., gNB, in a fourth message, to resume the stopped data collection, or not resume the data collection.
- the method may further comprise, in response to receiving the indication from the RAN node 120, in the fourth message, to resume the data collection, resuming performing data collection at lower layers 55 (e.g., PHY/MAC) for the UE-side model training.
- the method may further comprise, in response to receiving the fourth message from the RAN node 120, transmitting an indication in a fifth message to the first entity that data collection for UE-side model training is resumed or not resumed, depending on the response in the fourth message.
- the method may further comprise receiving an indication from the network node (e.g., RAN node 120, e.g., gNB), in a first message, to stop/pause the started data collection, e.g., under a RAN overload condition.
- the indication might comprise a plurality of indications each indicating to the UE to stop/pause each of the ongoing AI/ML training operations at the UE. Each indication might indicate an AI/ML training operation by an identifier.
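- Where several AI/ML training operations run in parallel, the per-identifier stop/pause and resume handling described above can be sketched as follows; the identifiers and state labels are hypothetical.
```python
# Apply stop/pause and resume indications per training-operation identifier.
ongoing = {101: "collecting", 102: "collecting", 103: "collecting"}  # hypothetical IDs

def apply_stop_indications(ongoing, ids_to_stop):
    for op_id in ids_to_stop:
        if op_id in ongoing:
            ongoing[op_id] = "paused"

def apply_resume_indications(ongoing, ids_to_resume):
    for op_id in ids_to_resume:
        if ongoing.get(op_id) == "paused":
            ongoing[op_id] = "collecting"

apply_stop_indications(ongoing, [101, 103])    # e.g., under RAN overload
apply_resume_indications(ongoing, [103])       # overload cleared for one operation
```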
- Other embodiments include another method performed by a UE.
- the method comprises, in response to receiving the indication from the first entity to stop/pause the data collection for UE-side model training in a first message, stopping performing data collection for the UE-side model training.
- the UE may apply stop/pause of data collection to each of the indicated AI/ML training models.
- the transmission of the first message by the first entity may be in response of fulfilling one or more of the following conditions in the first entity:
- the UE has entered an area (e.g., geographical area, or cell, or cells controlled by a certain gNB) in which another UE-sided model training session is needed, e.g., the first entity does not have an available trained model for the concerned area.
- the first entity may request the UE to stop the current data collection session for UE-side model training and possibly start a new data collection session for the training of another UE-sided model.
- the UE has entered an area (e.g., geographical area, or cell, or cells controlled by a certain gNB) in which the started UE-sided model training session is not valid, e.g., the outcome of the UE-sided model training is not applicable to such area.
- the UE has entered an area (e.g., geographical area, or cell, or cells controlled by a certain gNB) in which the resources (CSI-RS/SSB, ground truth) necessary for the UE to perform the started UE-side model training are not available.
- the UE has entered an area (e.g., geographical area, or cell, or cells controlled by a certain gNB) in which UE-side model training is not allowed, e.g., the target gNB not supporting provisioning of configurations for UE-side model training.
- the first entity may request the UE to stop the data collection for UE-side model training.
- the method may further comprise transmitting an indication in a second message to the network node that data collection for UE-side model training is stopped/paused.
- the method may further comprise receiving an indication from the first entity, in a third message, to resume the stopped/paused data collection.
- the indication might comprise a plurality of indications, each indicating to the UE to resume each of the stopped/paused AI/ML training operations at the UE.
- Each indication might indicate an AI/ML training operation by an identifier.
- the method may further comprise, in response to receiving the indication from the first entity, in the third message, that the data collection for UE-side model training is resumed, transmitting an indication in a fourth message to the network node that data collection for UE-side model training can be resumed.
- the UE may apply resume of data collection to each of the indicated AI/ML training models.
- the method may further comprise receiving an indication from the network node, in a fifth message, to resume the stopped/paused data collection.
- the network may request the UE to stop the training.
- the method may further comprise determining that the UE should resume a stopped data collection for UE-side model training, wherein the conditions for resuming a stopped data collection can be the same as the conditions for starting the data collection.
- the method may further comprise receiving an indication from the UE that the data collection for UE-side model training is stopped.
- the method may further comprise receiving an indication from the UE that the data collection for UE-side model training can be resumed.
- the method may further comprise transmitting an indication to the UE whether the UE can resume the started data collection.
- Further network node methods may include providing a message indicating that model training operations are supported in the cell, and that one or more specific sets of UEs can request resources for UE-side model training.
- This message may be provided for example in SIB signaling.
- this message may also include an indication of the specific training session for which the training can start/be resumed, wherein each training session may be associated with a specific set of training resource configurations, or with a specific model training (in which case the indication of the specific data collection can be the model ID), or with a specific AI/ML functionality (such as beam management, CSI prediction, positioning predictions, etc.) (in which case the indication of the specific data collection can be the functionality ID).
- This message may also include an indication associated with a specific UE-vendor (UE set) for which training is allowed.
- Network node methods may additionally or alternatively include providing a message indicating that model training operations are supported for this specific UE.
- the UE may indicate to the RAN node 120 that it is capable of model training.
- the RAN node 120 may provide a dedicated RRC message indicating whether the UE can request resources for UE-side model training.
- this message may also include an indication of the specific training session for which the training can start/be resumed, wherein each training session may be associated with a specific set of training resource configurations, or with a specific model training (in which case the indication of the specific data collection can be the model ID), or with a specific AI/ML functionality (such as beam management, CSI prediction, positioning predictions, etc.) (in which case the indication of the specific data collection can be the functionality ID).
- This message may also include an indication associated with a specific UE-vendor for which training is allowed. In such a case, all UEs of the specific UE vendor are allowed to perform UE-side model training in the cell.
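- For illustration, the broadcast (SIB) or dedicated indication described above could carry fields along the following lines; the structure and field names are hypothetical and no ASN.1 encoding is implied.
```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrainingSupportIndication:
    training_supported: bool = True
    allowed_model_ids: List[int] = field(default_factory=list)         # per-model training sessions
    allowed_functionality_ids: List[str] = field(default_factory=list) # e.g., "beam management"
    allowed_ue_vendors: List[str] = field(default_factory=list)        # UE sets allowed to train

sib_indication = TrainingSupportIndication(
    allowed_functionality_ids=["CSI prediction", "positioning"],
    allowed_ue_vendors=["vendor-A"],
)
```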
- FIG. 12 is a flow diagram illustrating an example method 200 implemented by a UE 110.
- the method 200 comprises transmitting, to a network node 120, an indication indicating that the UE 110 needs to perform UE-side model training (block 210).
- the method 200 further comprises performing data collection for the UE-side model training (block 220).
- FIG. 13 is a flow diagram illustrating another example method 300 implemented by a UE 110.
- the method 300 may be performed additionally or alternatively to the method 200.
- the method 300 comprises receiving, from a first entity, a first message indicating a need to perform UE-side model training, e.g., for one or more lower layer functions and/or one or more higher layer functions (block 310).
- the one or more lower layer functions may comprise Physical, PHY, and/or Medium Access Control, MAC, layer functions.
- the method 300 further comprises performing data collection for the UE-side model training (block 320).
- FIG. 14 is a flow diagram illustrating another example method 400 implemented by a UE 110.
- the method 400 may be performed additionally or alternatively to either or both of methods 200, 300.
- the method 400 comprises performing data collection for UE-side model training (block 410).
- the method 400 further comprises receiving, from a RAN node 120, a first message indicating to stop the data collection (block 420).
- FIG. 15 is a flow diagram illustrating another example method 500 implemented by a UE 110.
- the method 500 may be performed additionally or alternatively to any one or more of methods 200, 300, 400.
- the method 500 comprises stopping data collection for UE-side model training (block 510).
- the method 500 further comprises transmitting, to a RAN node 120, a first message indicating that the data collection for the UE-side model training is stopped (block 520).
- FIG. 17 is a flow diagram illustrating another example method 900 implemented by a network node 190.
- the method 900 may be performed additionally or alternatively to the method 800.
- the method 900 comprises receiving, from a UE 110, a request to perform data collection for UE-side model training (block 910).
- the method 900 further comprises determining whether to accept or reject the request (block 920).
- the method 900 further comprises indicating to the UE 110 whether the request is accepted or rejected (block 930).
- the UE 110 may, for example, be implemented as schematically illustrated in the example of FIG. 18.
- the UE 110 of FIG. 18 comprises processing circuitry 610, memory circuitry 620, and interface circuitry 630.
- the processing circuitry 610 is communicatively coupled to the memory circuitry 620 and the interface circuitry 630, e.g., via a bus 604.
- the processing circuitry 610 may comprise one or more microprocessors, microcontrollers, hardware circuits, discrete logic circuits, hardware registers, digital signal processors (DSPs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), or a combination thereof.
- the processing circuitry 610 may be programmable hardware capable of executing software instructions stored, e.g., as a machine-readable computer program 640 in the memory circuitry 620.
- the memory circuitry 620 of the various embodiments may comprise any non-transitory machine-readable media known in the art or that may be developed, whether volatile or non-volatile, including but not limited to solid state media (e.g., SRAM, DRAM, DDRAM, ROM, PROM, EPROM, flash memory, solid state drive, etc.), removable storage devices (e.g., Secure Digital (SD) card, miniSD card, microSD card, memory stick, thumb-drive, USB flash drive, ROM cartridge, Universal Media Disc), fixed drive (e.g., magnetic hard disk drive), or the like, wholly or in any combination.
- the interface circuitry 630 may be a controller hub configured to control the input and output (I/O) data paths of the UE 110. Such I/O data paths may include data paths for exchanging signals over a network.
- the interface circuitry 630 may be implemented as a unitary physical component, or as a plurality of physical components that are contiguously or separately arranged, any of which may be communicatively coupled to any other, or may communicate with any other via the processing circuitry 610.
- the interface circuitry 630 may comprise a transmitter 632 configured to send wireless communication signals and a receiver 634 configured to receive wireless communication signals.
- the UE 110 may be configured to perform any one or more of the UE methods 200, 300, 400, 500 described above.
- the memory 620 contains instructions executable by the processing circuitry 610 whereby the UE 110 is so configured.
- Still other embodiments include a control program 640 comprising instructions that, when executed on processing circuitry 610 of a UE 110, cause the UE 110 to carry out any of the UE methods 200, 300, 400, 500 described above.
- Yet other embodiments include a carrier containing the control program 640.
- the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
- a network node 190 may be implemented as schematically illustrated in the example of FIG. 19.
- the network node 190 of FIG. 19 comprises processing circuitry 710, memory circuitry 720, and interface circuitry 730.
- the processing circuitry 710 is communicatively coupled to the memory circuitry 720 and the interface circuitry 730, e.g., via a bus 704.
- the processing circuitry 710 may comprise one or more microprocessors, microcontrollers, hardware circuits, discrete logic circuits, hardware registers, digital signal processors (DSPs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), or a combination thereof.
- the processing circuitry 710 may be programmable hardware capable of executing software instructions stored, e.g., as a machine- readable computer program 740 in the memory circuitry 720.
- the memory circuitry 720 of the various embodiments may comprise any non-transitory machine-readable media known in the art or that may be developed, whether volatile or non-volatile, including but not limited to solid state media (e.g., SRAM, DRAM, DDRAM, ROM, PROM, EPROM, flash memory, solid state drive, etc.), removable storage devices (e.g., Secure Digital (SD) card, miniSD card, microSD card, memory stick, thumb-drive, USB flash drive, ROM cartridge, Universal Media Disc), fixed drive (e.g., magnetic hard disk drive), or the like, wholly or in any combination.
- the interface circuitry 730 may be a controller hub configured to control the input and output (I/O) data paths of the network node 120. Such I/O data paths may include data paths for exchanging signals over a network.
- the interface circuitry 730 may be implemented as a unitary physical component, or as a plurality of physical components that are contiguously or separately arranged, any of which may be communicatively coupled to any other or may communicate with any other via the processing circuitry 710.
- the interface circuitry 730 may comprise a transmitter 732 configured to send wireless communication signals and a receiver 734 configured to receive wireless communication signals.
- the network node 190 may be configured to perform any one or more of the network node methods 800, 900 described above.
- the processing circuitry 710 is configured to perform any of the network node methods described above.
- Still other embodiments include a control program 740 comprising instructions that, when executed on processing circuitry 710 of a network node 120, cause the network node 120 to carry out any of the network node methods described above.
- Yet other embodiments include a carrier containing the control program 740.
- the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
- the various communication devices described herein may include the illustrated combination of hardware components, other embodiments may comprise computing and/or communication hardware with different combinations of components. It is to be understood that these computing devices may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions, and methods disclosed herein. Further, while components are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, the devices described herein may comprise multiple different physical components that make up a single illustrated component, and functionality may be partitioned between separate components.
- While the computing devices described herein may include the illustrated combination of hardware components, other embodiments may comprise computing devices with different combinations of components. It is to be understood that these computing devices may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions, and methods disclosed herein. Determining, calculating, obtaining or similar operations described herein may be performed by processing circuitry, which may process information by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
- a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface.
- non-computationally intensive functions of any of such components may be implemented in software or firmware and computationally intensive functions may be implemented in hardware.
Abstract
A User Equipment, UE (110), and a network node (190) within a wireless communication network exchange messages to perform Lifecycle Management, LCM, of an Artificial Intelligence, AI, model. The model is a UE-sided model and inferences for the UE-sided model are performed at the UE (110). The UE (110) performs data collection in support of the UE-sided model.
Description
RADIO ACCESS NETWORK AWARENESS OF USER EQUIPMENT-SIDE MODEL TRAINING
RELATED APPLICATIONS
This application claims priority to U.S. Provisional Patent Application Number 63/539,387 filed September 20, 2023, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure generally relates to the technical field of wireless communication and, more particularly, to Lifecycle Management (LCM) of an Artificial Intelligence (Al) and/or Machine Learning (ML) model in which inferences are performed at a User Equipment (UE).
BACKGROUND
Al and ML have been investigated, both in academia and industry, as promising tools to optimize the design of the air-interface in wireless communication networks. Example use cases include using autoencoders for Channel State Information (CSI) compression to reduce the feedback overhead and improve channel prediction accuracy; using deep neural networks for classifying Line-of-Sight (LOS) and Non-LOS (NLOS) conditions to enhance the positioning accuracy; and using reinforcement learning for beam selection at the network side and/or the UE side to reduce the signaling overhead and beam alignment latency; and using deep reinforcement learning to learn an optimal precoding policy for complex Multiple Input Multiple Output (MIMO) precoding problems.
In 3rd Generation Partnership Project (3GPP) New Radio (NR) standardization work, a new release 18 study item on AI/ML for the NR air interface started in May 2022. This study item will explore the benefits of augmenting the air interface with features enabling improved support of AI/ML based algorithms for enhanced performance and/or reduced complexity/overhead. Through studying a few selected use cases (CSI feedback, beam management, and positioning), this study item aims at laying the foundation for future airinterface use cases leveraging AI/ML techniques.
SUMMARY
The present disclosure generally relates to LCM of a model in which inferences are performed at a UE.
Embodiments described herein include a first method implemented by a UE. The method comprises transmitting, to a network node, an indication indicating that the UE needs to perform UE-side model training. The method further comprises performing data collection for the UE-side model training.
In some embodiments, the indication comprises a request to perform the UE-side model training. Performing the data collection is responsive to receiving a response to the indication from the network node. The response comprises a configuration for the UE to use for the data collection and/or an acceptance indication.
In some embodiments, the method further comprises receiving, from the network node or a different network node, a notification to stop or pause the data collection for UE-side model. The method further comprises transmitting, to the network node or the different network node, a further indication indicating that the data collection for the UE-side model training has stopped or paused.
In some embodiments, the method further comprises informing the network node or a different network node that the data collection for the UE-side model training has been completed; or needs to be resumed and/or re-started and/or re-configured.
Other embodiments include a UE comprising processing circuitry and memory. The memory contains instructions executable by the processing circuitry whereby the UE is configured to transmit, to a network node, an indication indicating that the UE needs to perform UE-side model training. The UE is further configured to perform data collection for the UE-side model training.
In some embodiments, the UE is further configured to perform any one of the first methods described above.
Other embodiments include a second method, performed by a UE. The method comprises receiving, from a first entity, a first message indicating a need to perform UE-side model training for one or more lower layer functions and/or one or more higher layer functions. The one or more lower layer functions comprise PHY and/or MAC layer functions.
In some embodiments, the one or more lower layer functions comprise beam management, providing CSI and/or positioning. The one or more higher layer functions comprise RRM measurements and/or an L3 mobility function.
In some embodiments, the method further comprises responsive to receiving the first message, indicating to a RAN node, in a second message, a request to perform the UE-side model training. The method further comprises receiving, from the RAN node, a third message indicating whether or not the UE is allowed to perform the UE-side model training. The method further comprises starting data collection at the one or more lower layers for the UE-side model training in response to the third message indicating that the UE is allowed to perform the UE- side model training. The method further comprises responsive to receiving the third message, sending, to the first entity, a fourth message indicating whether the request was accepted or rejected.
In some embodiments, the method further comprises indicating to the first entity that data collection has started.
Other embodiments include a UE comprising processing circuitry and memory. The memory contains instructions executable by the processing circuitry whereby the UE is configured to receive, from a first entity, a first message indicating a need to perform UE-side model training for one or more lower layer functions and/or one or more higher layer functions. The one or more lower layer functions comprise PHY and/or MAC layer functions.
In some embodiments, the UE is further configured to perform any one of the second methods described above.
Other embodiments include a third method implemented by a UE. The method comprises performing data collection for UE-side model training. The method further comprises receiving, from a RAN node, a first message indicating to stop the data collection.
In some embodiments, the method further comprises stopping the data collection for the UE-side model training at one or more lower layers in response to receiving the first message. The one or more lower layers comprises a PHY layer and/or a MAC layer.
In some embodiments, the method further comprises transmitting, to a first entity, a second message indicating that the data collection for UE-side model training is stopped.
In some embodiments, the method further comprises receiving, from the RAN node, a third message indicating to resume the stopped data collection. The method further comprises resuming the data collection for the UE-side model training at one or more lower layers in response to receiving the third message. The one or more lower layers comprises a PHY layer and/or a MAC layer.
In some embodiments, the method further comprises transmitting, to the first entity, a fourth message indicating that the data collection for UE-side model training is resumed.
Other embodiments include a UE comprising processing circuitry and memory. The memory contains instructions executable by the processing circuitry whereby the UE is configured to perform data collection for UE-side model training. The UE is further configured to receive, from a RAN node, a first message indicating to stop the data collection.
In some embodiments, the UE is further configured to perform any one of the third methods described above.
Other embodiments include a fourth method implemented by a UE. The method comprises stopping data collection for UE-side model training. The method further comprises transmitting to a RAN node, a first message indicating that the data collection for the UE-side model training is stopped.
In some embodiments, stopping the data collection for the UE-side model training comprises stopping the data collection at one or more lower layers. The one or more lower layers comprises a PHY layer and/or a MAC layer.
In some embodiments, the method further comprises transmitting, to a first entity, a second message indicating that the data collection for the UE-side model training is stopped.
In some embodiments, the method further comprises determining that the data collection for the UE-side model training can be resumed. The method further comprises transmitting, to the RAN node, a third message indicating that the data collection for the UE-side model training can be resumed.
In some embodiments, the method further comprises receiving, from the RAN node, a fourth message indicating whether or not to resume the data collection. The method further comprises resuming the data collection for the UE-side model training at one or more lower layers in response to the fourth message indicating to resume the data collection. The method further comprises transmitting, to a first entity, a fifth message indicating whether or not the data collection for the UE-side model training is resumed depending on the fourth message.
Other embodiments include a UE comprising processing circuitry and memory. The memory contains instructions executable by the processing circuitry whereby the UE is configured to stop data collection for UE-side model training. The UE is further configured to transmit to a RAN node, a first message indicating that the data collection for the UE-side model training is stopped.
In some embodiments, the UE is further configured to perform any one of the fourth methods described above.
Other embodiments include a computer program comprising instructions which, when executed on processing circuitry of a UE, cause the processing circuitry to carry out any one of the UE methods described above.
Other embodiments include a method implemented by a network node. The method comprises receiving, from a UE, an indication indicating that the UE needs to perform UE-side model training. The method further comprises, responsive to receiving the indication, transmitting a response to the UE indicating that the UE is allowed to perform the data collection for the UE-side model training or shall perform the data collection for the UE-side model training. The response comprises an acceptance indication and/or a configuration for the UE to use for the data collection.
In some embodiments, the indication comprises a request from the UE to perform the UE-side model training. The method further comprises determining whether to accept or reject the request.
In some embodiments, the method further comprises receiving, from the UE, a request to stop or pause the data collection for the UE-side model training or a notification that the UE has stopped or paused the data collection for the UE-side model training, or an indication that the data collection for the UE-side model training needs to be resumed and/or restarted and/or reconfigured.
In some embodiments, the method further comprises transmitting, to the UE, an indication to stop or pause the data collection for the UE-side model.
Other embodiments include a network node comprising processing circuitry and memory. The memory contains instructions executable by the processing circuitry whereby the network node is configured to receive an indication from a UE indicating that the UE needs to perform UE-side model training. The network node is further configured to, responsive to receiving the indication, transmit a response to the UE indicating that the UE is allowed to perform the data collection for the UE-side model training or shall perform the data collection for the UE-side model training. The response comprises an acceptance indication and/or a configuration for the UE to use for the data collection.
In some embodiments, the network node is further configured to perform any of the network node methods described above.
Other embodiments include a second method implemented by a network node. The method comprises receiving, from a UE, a request to perform data collection for UE-side model training. The method further comprises determining whether to accept or reject the request. The method further comprises indicating to the UE whether the request is accepted or rejected.
In some embodiments, the method further comprises determining that the UE should stop the data collection for the UE-side model training. The method further comprises indicating, to the UE, to stop the data collection for UE-side model training. The method further comprises determining that the UE should resume the data collection. The method further comprises indicating to the UE to resume the data collection.
In some embodiments, the method further comprises receiving, from the UE, a notification indicating that the data collection for the UE-side model training is stopped or paused; and/or notice that the data collection for UE-side model training is resumable. The method further comprises indicating, to the UE, whether or not to resume the data collection.
Other embodiments include a network node comprising processing circuitry and memory. The memory contains instructions executable by the processing circuitry whereby the network node is configured to receive, from a UE, a request to perform data collection for UE-side model training. The network node is further configured to determine whether to accept or reject the request. The network node is further configured to indicate to the UE whether the request is accepted or rejected.
In some embodiments, the network node is further configured to perform any one of the second network node methods described above.
Other embodiments include a computer program comprising instructions which, when executed on processing circuitry of a network node, cause the network node to carry out any one of the network node methods described above.
Other embodiments include a carrier containing any one of the computer programs described above. The carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
BRIEF DESCRIPTION OF THE DRAWINGS
Aspects of the present disclosure are illustrated by way of example and are not limited by the accompanying figures with like references indicating like elements. In general, the use of a reference numeral should be regarded as referring to the depicted subject matter according to one or more embodiments, whereas discussion of a specific instance of an illustrated element will append a letter designation thereto (e.g., discussion of a computing device 110, generally, as opposed to discussion of particular instances of computing devices 110a, 110b).
FIG. 1 is a schematic block diagram illustrating an example model LCM procedure, according to one or more embodiments of the present disclosure.
FIG. 2 is a schematic block diagram illustrating an example framework for studying model LCM aspects, according to one or more embodiments of the present disclosure.
FIG. 3 is a schematic block diagram illustrating an example autoencoder for CSI, according to one or more embodiments of the present disclosure.
FIG. 4 is a schematic block diagram illustrating an example wireless communication network, according to one or more embodiments of the present disclosure.
FIGS. 5-11 are signaling diagrams illustrating examples of signaling exchanged according to one or more embodiments of the present disclosure.
FIGS. 12-15 are flow diagrams illustrating example methods implemented by a UE according to one or more embodiments of the present disclosure.
FIGS. 16-17 are flow diagrams illustrating example methods implemented by a network node according to one or more embodiments of the present disclosure.
FIG. 18 illustrates an example UE, according to one or more embodiments of the present disclosure.
FIG. 19 illustrates an example network node, according to one or more embodiments of the present disclosure.
DETAILED DESCRIPTION
As used herein, the term “model” refers to one or more data structures and/or algorithms used to generate a prediction from collected input data. The terms “model,” “ML model,” “AI model,” “AI/ML model,” and “AI and/or ML model” should be considered to have equivalent meanings to each other and therefore be interchangeable. As will be discussed in greater detail below, a model may be deployed, implemented, and/or configured in a UE, in a network node, or both.
In one example, a model may receive a reference signal measurement (e.g., a measurement of a Synchronization Signal Block (SSB)) at time instance t0 as input and provide as output a prediction of the reference signal at time t0+T. In another example, a model may receive as input a measurement of a reference signal transmitted on a first beam and provide as output a prediction of another reference signal transmitted on another beam.
Another example is a model for aiding in Channel State Information (CSI) estimation. In such an example, the model may comprise a UE-side specific model and a network (NW)-side specific model that operate jointly. The function of the UE-side model may be to compress a channel input and the function of the NW-side model may be to decompress the received output from the UE.
Other examples may operate similarly for positioning. For example, the input to the model may be a channel impulse response related to a certain reference point in time. The NW side of the model may detect different peaks within the impulse response corresponding to different reception directions of radio signals on the UE side. Another example relevant to positioning is to input multiple sets of measurements into an ML network and, based on that, derive an estimated positioning.
Another example of a model is a model to aid a UE in channel estimation (or interference estimation for channel estimation). The channel estimation may, for example, be for the Physical Downlink Shared Channel (PDSCH) and be associated with a specific set of reference signal patterns that are transmitted from the NW to the UE. The model may be part of the receiver chain within the UE, may not be directly visible within the reference signal pattern, and may be configured or scheduled to be used between the NW and UE.
Another example of a model for CSI estimation is to predict a suitable Channel Quality Indicator (CQI), Precoding Matrix Indicator (PMI), Rank Indicator (RI) or similar value in the future. The future may be a certain number of slots after the UE has performed the last measurement or may target a specific slot in time in the future.
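By way of non-limiting illustration, the following Python sketch shows how a UE-side predictor of this kind might be structured, here predicting a CQI value a configurable number of slots ahead from a window of past samples. The class and method names, the linear extrapolation, and the window size are illustrative assumptions and are not part of the present disclosure or of any specification.

```python
# Illustrative sketch only: a toy UE-side predictor that maps recent CQI
# samples to a predicted CQI a configurable number of slots ahead.
# Names (CqiPredictor, predict_ahead) are hypothetical, not from the disclosure.

from collections import deque


class CqiPredictor:
    """Keeps a sliding window of CQI samples and extrapolates linearly."""

    def __init__(self, window: int = 8):
        self.samples = deque(maxlen=window)

    def observe(self, cqi: int) -> None:
        """Record the CQI measured in the current slot."""
        self.samples.append(cqi)

    def predict_ahead(self, slots: int) -> int:
        """Predict the CQI 'slots' slots after the last observed sample."""
        if len(self.samples) < 2:
            return self.samples[-1] if self.samples else 0
        # Simple linear trend over the window; a trained ML model would
        # replace this step in a real UE-side implementation.
        slope = (self.samples[-1] - self.samples[0]) / (len(self.samples) - 1)
        predicted = self.samples[-1] + slope * slots
        return max(0, min(15, round(predicted)))  # clamp to the 4-bit CQI range


if __name__ == "__main__":
    predictor = CqiPredictor()
    for cqi in [7, 8, 8, 9, 10, 10, 11, 12]:
        predictor.observe(cqi)
    print(predictor.predict_ahead(slots=4))  # prints 15 for this toy trend
```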
The UE is connected to a network (e.g., it may receive and transmit data and/or control information). The UE may further be in RRC_CONNECTED state and be configured to use the model for a specific function. The specific function may include one or more of the following “functionality areas:”
CSI reporting
Beam management
RRM measurement
L3 mobility
Conditional Handover
Lower Layer Triggered Mobility (LTM)
HARQ transmission
Data transmission
Data reception
Power control
An example of RRM measurement includes mobility measurement, such as Reference Signal Received Power (RSRP), Reference Signal Received Quality (RSRQ), Received Signal Strength Indicator (RSSI), Radio Link Failure (RLF) predictions, and other aspects related to radio link failure. RRM measurement may be performed using a measurement framework as will be discussed further below. The measurement framework may govern how a UE performs measurements (e.g., through a measurement configuration), what triggers measurement reports (e.g., whether measurements reports are event-triggered or are sent periodically), and what content is included in the measurement reports.
A “UE-side model” is a model in which the UE performs the inference. Training for a UE-side model may either be performed at the UE and/or outside the mobile network, such as in an Over the Top (OTT) server (e.g., managed by the UE manufacturer). According to one or more embodiments of the present disclosure, the UE indicates that it needs to perform data collection for UE-side model training. The data is collected by the UE and it may either be used by the UE to perform the training and/or communicated to the OTT server so that the OTT server performs model training. A model may then be provided back to the UE and/or to other UEs, e.g., to update an existing model at the UE(s).
Building a model may include several development steps. The actual training of the model may be just one step in a training pipeline. An important part of model development is model LCM. FIG. 1 is an illustration of training and inference pipelines 10, 20 and their interactions within a model LCM procedure 30. The model LCM procedure 30 typically comprises a training pipeline 10 (which may or may not be used for retraining, either partly or wholly), a model deployment stage 16, an inference pipeline 20, and a drift detection stage 25.
The training pipeline 10 may include data ingestion 11, data preprocessing 12, model training 13, model evaluation 14, and/or model registration 15 stages.
Data ingestion 11 refers to gathering raw data (e.g., training data) from data storage. After data ingestion 11, there may also be a step that controls the validity of the gathered data.
Data preprocessing 12 refers to feature engineering that is applied to the gathered data. For example, data preprocessing 12 may include data normalization and/or data transformation required for the input data to the model.
Model training 13 refers to the actual model training steps.
Model evaluation 14 refers to benchmarking model performance to some model baseline. The iterative steps of model training 13 and model evaluation 14 may continue until an acceptable level of performance is achieved.
Model registration 15 refers to registering the model, including any corresponding metadata that provides information on how the model was developed, and possibly model evaluation performance outcomes.
The model deployment stage 16 makes the trained (e.g., retrained) model part of the inference pipeline 20.
The inference pipeline 20 may include data ingestion 21, data preprocessing 22, model operation 23, and data and/or model monitoring 24 stages.
Data ingestion 21 refers to gathering raw data (e.g., inference data) from a data storage.
Data preprocessing 22 for the inference pipeline 20 is substantially similar to the corresponding processing that occurs in the training pipeline 10.
Model operation 23 refers to using the trained and deployed model in an operational mode.
Data and model monitoring 24 refers to validating that the inference data are from a distribution that aligns well with the training data, as well as monitoring model outputs for detecting any performance, or operational, drifts.
The drift detection stage 25 informs about any drifts in the model operations.
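By way of non-limiting illustration, the training-pipeline stages of FIG. 1 could be arranged as in the following Python sketch. The stage names follow FIG. 1; the toy data, the mean-based "model", and the registry structure are illustrative assumptions only.

```python
# Illustrative sketch only: the training-pipeline stages of the LCM procedure
# (data ingestion, preprocessing, training, evaluation, registration) expressed
# as plain functions. Stage names follow FIG. 1; everything else is hypothetical.

def ingest() -> list[float]:
    """Data ingestion: gather raw training data from storage (here, hard-coded)."""
    return [0.1, 0.4, 0.35, 0.8]

def preprocess(raw: list[float]) -> list[float]:
    """Data preprocessing: e.g. min-max normalization of the gathered data."""
    lo, hi = min(raw), max(raw)
    return [(x - lo) / (hi - lo) for x in raw]

def train(data: list[float]) -> dict:
    """Model training: here just fits the mean as a stand-in 'model'."""
    return {"mean": sum(data) / len(data)}

def evaluate(model: dict, data: list[float]) -> float:
    """Model evaluation: benchmark against a baseline (mean squared error)."""
    return sum((x - model["mean"]) ** 2 for x in data) / len(data)

def register(model: dict, score: float, registry: dict) -> None:
    """Model registration: store the model with metadata on how it was built."""
    registry["csi_predictor_v1"] = {"model": model, "mse": score}

if __name__ == "__main__":
    registry: dict = {}
    raw = ingest()
    data = preprocess(raw)
    model = train(data)
    score = evaluate(model, data)
    if score < 0.5:               # iterate train/evaluate until acceptable
        register(model, score, registry)
    print(registry)
```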
FIG. 2 illustrates a functional framework for studying model LCM aspects. The framework may, for example, be used for studying different network (NW)-UE collaboration levels for physical layer use cases.
The models being discussed in the Rel-18 study item on AI/ML for the NR air interface can be categorized into the following two types, namely a one-sided AI/ML model and a two-sided AI/ML model.
The one-sided AI/ML model may be a UE-sided model in which inference is performed entirely at the UE or an NW-sided model whose inference is performed entirely at the NW.
The two-sided AI/ML model refers to paired models over which joint inference is performed across the UE and the NW. That is, the first part of the inference is firstly performed by a UE and the remaining part is performed by a network node (e.g., at a next generation Node B (gNB)), or vice versa.
FIG. 3 shows an example autoencoder (AE)-based two-sided CSI compression use case. In this example, a UE uses an encoder 42 (i.e., the UE-part of the two-sided AE model 40) operated at a UE to compress measured CSI 41 for a wireless channel. The output of the encoder 42 (i.e., compressed CSI 43) is reported from the UE to a gNB. The gNB uses a decoder 44 (i.e., the NW-part of the two-sided AE model 40) to generate reconstructed CSI 45 for the wireless channel.
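By way of non-limiting illustration, the data flow of the two-sided model of FIG. 3 might be sketched in Python as follows, with a trivially truncating "encoder" standing in for a trained neural network. The class names and the truncation-based compression are assumptions for illustration only.

```python
# Illustrative sketch only: the data flow of a two-sided autoencoder for CSI
# compression (FIG. 3). The toy encoder keeps a truncated representation in
# place of a learned neural network; class names are hypothetical.

class UeSideEncoder:
    """UE part of the two-sided model: compresses measured CSI before reporting."""
    def __init__(self, latent_dim: int):
        self.latent_dim = latent_dim

    def compress(self, csi: list[float]) -> list[float]:
        # A trained encoder network would produce the latent vector here.
        return csi[: self.latent_dim]


class NwSideDecoder:
    """NW part of the two-sided model: reconstructs CSI from the UE report."""
    def __init__(self, full_dim: int):
        self.full_dim = full_dim

    def reconstruct(self, compressed: list[float]) -> list[float]:
        # A trained decoder network would infer the missing components here.
        return compressed + [0.0] * (self.full_dim - len(compressed))


if __name__ == "__main__":
    measured_csi = [0.9, 0.7, 0.2, 0.1, 0.05, 0.01]     # CSI measured at the UE
    encoder = UeSideEncoder(latent_dim=3)                # UE-side inference
    decoder = NwSideDecoder(full_dim=len(measured_csi))  # gNB-side inference
    report = encoder.compress(measured_csi)              # feedback over the air
    print(decoder.reconstruct(report))
```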
When applying AI and/or ML on air interface use cases, different levels of collaboration between network nodes and UEs can be considered. In one example, there is no collaboration between network nodes and UEs. In this case, a proprietary model operating with the existing standard air-interface is applied at one end of the communication chain (e.g., at the UE side), and the model LCM (e.g., model selection/training, model monitoring, model retraining, model update) is done at this node without inter-node assistance (e.g., assistance information provided by the network node).
In another example, there is limited collaboration between network nodes and UEs for one-sided models. In this case, a model is operating at one end of the communication chain (e.g., at the UE side), but this node gets assistance from one or more nodes at the other end of the communication chain (e.g., at a gNB) for its model LCM to some extent (e.g., for training/retraining the AI model, model update, model monitoring, model selection, fallback, and/or switching).
In another example, there is joint operation between network nodes and UEs for two- sided models. In this case, it is assumed that the model is split with one part located on the NW side and the other part located at the UE side. Hence, the model requires joint inference between the NW and UE, and the model LCM involves both ends of a communication chain.
There are several methods for UE-side model training. In one approach, the training of the UE-side model is performed at the UE itself. That is, the UE performs both the training and the inference. However, this approach might be too complex in practice. For example, given the limited computational resources of the UE and the large computational complexity that training might require, it may not be feasible for the UE to perform either or both of the training and inference. Also, if models are dependent on location and/or region, a single UE is unlikely to be able to cover an entire coverage area. Indeed, the models that the UE trains by itself would be limited to the areas in which the UE moves. Accordingly, every time the UE enters a new area its trained models could become outdated.
In view of the above, alternative approaches for training UE-sided models include the possibility that a network node, e.g., a Radio Access Network (RAN) node like a gNB or a Core Network (CN) node (e.g., like the Network Data Analytics Function (NWDAF)), collects data from a UE and trains a model that at some point should be transferred to that UE or other UEs that will then apply it.
According to another example, an Over-the-Top (OTT) server, outside of the 3GPP environment, may be in charge of performing the training. This server could be for example a UE-vendor specific server. This latter approach might be a reasonable candidate because in order to have optimal performance, the trained data set should fit the inference operations at the device, which may depend on UE-vendor specific implementation (e.g., software/hardware properties/capabilities).
In view of the above, FIG. 4 is a schematic diagram illustrating an example wireless communication network 100 comprising a RAN node 120, a core network node 140, an Over- the-Top (OTT) node 150, and a UE 110. The RAN node 120 serves a cell 130 to the UE 110. The cell 130 supports a RAT (e.g., NR, 6G) that provides the UE 110 with access to a core network 160 that comprises the core network node 140. The core network 160 provides access to the OTT server 150, which may be external to the core network 160.
If the UE-side model training is performed by a network node outside the RAN 130, e.g., in a CN node 140, outside of the 3GPP network (such as at a UE-vendor specific server), at an OTT server 150, or by the UE 110 itself (e.g., in the UE’s application layer), there is no possibility for a RAN node 120, in the current 3GPP specification, to become aware of this operation. This might lead to undesired behavior that ultimately may affect the overall system performance (and, potentially, UE performance). In some scenarios, it might be possible that training operations performed by the UE require specific configurations and settings at the UE that might collide with the NW provided configuration. For example, the NW 100 may expect the UE 110 to perform measurements with certain accuracy or performance (e.g., relaxed mode) while the training request from the OTT server 150 may require a different accuracy. In some other scenarios, the NW may require performance of specific operations (e.g., for purposes of energy savings) that might affect the training performance at the UE. Accordingly, training operations at the UE 110 without RAN-awareness might cause unexpected and/or inaccurate behaviors and/or results for either or both of the UE 110 and network nodes.
Another example of training affecting performance is if the UE 110 needs to train a model that requires measurements in frequencies not configured by the network 100 for typical Radio Resource Management (RRM) measurements to support mobility decisions and carrier aggregation, e.g., for frequencies that are not serving frequencies or neighbor frequencies for which a measurement object is configured. In that case, the UE 110 may need to leave the serving cell to switch the frequency that needs to be measured and the UE 110 may not be configured with measurement gaps or too short measurement gaps. This could lead to frequent handovers or reconfigurations, impacting the performance within the network 100.
Additionally, it is expected that at some point, e.g., when the data collection is completed, the UE 110 may need to upload the collected data to the node performing the training, e.g., to the UE-vendor specific server. Uploading this collected data may impact the RAN 130 system performance, in terms of available uplink (UL) radio resources, and the delivery of the collected data itself that may be delayed if the cell is congested.
Embodiments of the present disclosure include a method implemented by a UE. The method comprises transmitting an indication, to a network node (e.g., a RAN node 120, like a gNB), indicating that the UE needs to perform data collection for model training of at least one functionality (e.g., beam management, CSI, positioning, L3 mobility, RRM measurements) for UE-side model training. Performing data collection in this context may comprise performing the model training itself at the UE. Throughout this disclosure, “data collection for UE-side model training” refers to data collection for UE-side model training of at least one functionality.
In some embodiments (and as will be further discussed below), the indication may correspond to a request from the UE to the network node, or an indication for informing the network node.
In some embodiments, in response to the transmitted indication, the UE receives a response from the network node, such as an acceptance indication, a rejection indication, a configuration for the UE to use for data collection, etc. When the UE receives the acceptance indication, the UE performs data collection for UE side model training.
In some embodiments, while the UE is performing data collection for UE-side model training (e.g., in response to the acceptance indication), the UE transmits an indication to stop and/or pause data collection for UE side model. Alternatively, the UE transmits an indication that the UE has stopped and/or paused data collection for the UE-side model. The indication is transmitted to a network node (e.g., the same network node to which the UE has transmitted the indication that it needed to perform data collection for UE-side model training, or a different network node).
In some embodiments, after the UE has started performing data collection for UE-side model training (e.g., in response to the acceptance indication), the UE receives an indication to stop and/or pause data collection for the UE-side model from a network node (e.g., the same network node to which the UE has transmitted the indication that it needed to perform data collection for UE-side model training, or a different network node).
In some embodiments, after the UE has started performing data collection for UE-side model training (e.g., in response to the acceptance indication), the UE transmits an indication informing the NW that data collection for UE-side model training has been completed.
In some embodiments, after the UE has stopped performing data collection for UE-side model training (e.g., in response to the acceptance indication), the UE transmits an indication informing the NW that data collection for UE-side model training needs to be resumed and/or restarted and/or re-configured.
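By way of non-limiting illustration, a UE implementation might track the above procedure with a simple state machine such as the following Python sketch. The state names and message labels are hypothetical placeholders, not standardized messages.

```python
# Illustrative sketch only: a state machine a UE implementation might keep for
# the data-collection procedure described above (indicate need, collect,
# stop/pause, complete). States and message names are hypothetical.

from enum import Enum, auto


class CollectionState(Enum):
    IDLE = auto()
    AWAITING_RESPONSE = auto()
    COLLECTING = auto()
    PAUSED = auto()
    COMPLETED = auto()


class UeDataCollection:
    def __init__(self):
        self.state = CollectionState.IDLE

    def indicate_need(self) -> str:
        self.state = CollectionState.AWAITING_RESPONSE
        return "IndicationOfUeSideModelTrainingNeed"    # sent to the network node

    def on_response(self, accepted: bool) -> None:
        self.state = CollectionState.COLLECTING if accepted else CollectionState.IDLE

    def on_stop_or_pause(self) -> str:
        self.state = CollectionState.PAUSED
        return "DataCollectionStoppedOrPaused"          # reported back to the network

    def on_completed(self) -> str:
        self.state = CollectionState.COMPLETED
        return "DataCollectionCompleted"                # informs the network node


if __name__ == "__main__":
    ue = UeDataCollection()
    print(ue.indicate_need())
    ue.on_response(accepted=True)
    print(ue.state)                 # CollectionState.COLLECTING
    print(ue.on_stop_or_pause())
```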
It should be noted that a UE in this context generally refers to a mobile terminal or device that comprises a UE performing one or more specified functionalities.
Embodiments also include methods for a network node (e.g., a RAN node 120) to determine if the UE can start, stop, or resume the data collection for the UE-side model training and to transmit responses to the transmitted indications from the UE.
Thus, a network node may become aware of whether the UE 110 needs to perform model training by performing data collection, which may require the UE 110 to perform one or more measurements which may serve as input for model training. Accordingly, if the UE's request to perform UE-side model training is accepted, the network can, for example, provide the necessary configuration to the UE to perform a proper UE-side model training, e.g., to enable the UE to perform data collection for the purpose of UE-side model training and/or to understand whether it expects a degradation in performance due to the data collection process the UE performs.
In addition, the solution proposed herein enables the network to avoid one or more side effects or unintended consequences of the model training on the performance of the UE and/or network, e.g., by avoiding conflict between the network expected configuration/behavior at the UE and the AI/ML based training activities at the UE.
FIG. 5 illustrates a first example of signaling according to one or more embodiments of the present disclosure. As shown in FIG. 5, the UE transmits an indication to a network node indicating that the UE needs to perform UE-side model training (step 1510). The action of “model training” (step 1520) may comprise the UE performing measurements and/or performing data collection (which may include the performed measurements), for the purpose of training one or more UE-sided AI/ML model(s), e.g., in the case data is collected at the UE and reported to an OTT server for the AI/ML model training and/or in the case data is collected at the UE and the model training is performed at the UE itself.
The network node may correspond to a Radio Access Network (RAN) node such as a gNodeB, an eNodeB, a 6G radio access network node, a centralized unit in the RAN, or a server running a baseband, and/or a Core Network (CN) node such as an Access and Mobility Management Function (AMF).
The indication the UE transmits may correspond to a request to the network node for performing UE-side model training. In other words, the UE does not perform UE-side model training and/or data collection before it receives a response from the network node.
In one option, the UE starts a supervision timer when it transmits the request to the network node. While the timer is running the UE expects a response and, when the response is not received and the timer expires, the UE aborts the procedure and does not perform data collection and/or UE-side model training.
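By way of non-limiting illustration, the supervision-timer behaviour described above might be realized as in the following Python sketch, which aborts the procedure if no response arrives before expiry. The timer value, polling approach, and function names are illustrative assumptions.

```python
# Illustrative sketch only: the supervision-timer behaviour described above,
# using wall-clock time. The timer value and function names are hypothetical.

import time
from typing import Optional


class SupervisionTimer:
    def __init__(self, duration_s: float):
        self.duration_s = duration_s
        self.started_at: Optional[float] = None

    def start(self) -> None:
        self.started_at = time.monotonic()

    def expired(self) -> bool:
        return (self.started_at is not None
                and time.monotonic() - self.started_at >= self.duration_s)


def request_training(timer: SupervisionTimer, response_received: bool) -> str:
    timer.start()                       # started when the request is transmitted
    while not timer.expired():
        if response_received:           # in a real UE this would poll lower layers
            return "proceed with data collection"
        time.sleep(0.01)
    return "abort: no response before expiry, no data collection performed"


if __name__ == "__main__":
    print(request_training(SupervisionTimer(duration_s=0.05), response_received=False))
```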
In one option, the request comprises an RRC message e.g., in case the network node corresponds to a RAN node 120, like a gNodeB or 6G RAN node. In that case the RRC message may be a UE Assistance Information message including an indication that the UE needs to perform UE sided training of an AI/ML model and/or data collection for AI/ML model training.
The request to the network node for performing UE-side model training and/or data collection for AI/ML model training may contain one or more of the following items of information (an illustrative sketch of such a request structure follows the list below):
• Indication of whether reduced capabilities are expected or needed, wherein the indication may indicate e.g., the need for the UE to perform relaxed measurements on certain frequencies, the need to reduce MIMO layers, or the need to avoid DAPS handover;
• Indication of desired configuration to perform proper data collection, such as desired DRX configuration, desired CSI-RS/SSBs resources, desired frequencies;
• Indication of the frequencies in which the UE needs to perform data collection for UE-side model training;
• Indication of the time period or interval during which the UE needs to perform data collection for UE-side model training;
• the time period or interval may be provided as one or more time units, such as: number of radio frames and/or subframes and/or OFDM symbols, seconds, minutes, hours, etc;
• The indication of time may comprise an initial time unit in which the UE is expected to start data collection e.g., first radio frame and/or subframe and/or OFDM symbol in which the UE is expected to start data collection;
• Indication of the DRBs or QoS flows or PDU sessions for which the UE needs to perform data collection for the UE-sided model training;
• Indications that identify the training use case and/or AI/ML functionality e.g., training for channel state information (CSI) compression or for positioning or for beam management. The indication might be a list indicating different training requests received via the over-the-top server for the UE-sided model training;
• Indication of the type of data the UE is expected to collect for AI/ML model training. Examples of data that may be indicated are SSB related data such as measurements (e.g., SS-RSRP, SS-RSRQ, SS-SINR as defined in TS 38.215), SSB indexes, physical cell identifiers (PCIs), information obtained from system information, etc; another example is CSI-RS related measurements.
• Indication of the duration for which the said request is valid. Using such an indication the UE can avoid sending multiple requests during this duration if the network wants to accept/reject the said request for a duration which is shorter than the requested duration from the UE.
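By way of non-limiting illustration, the request contents listed above might be carried in a structure such as the following Python sketch. All field names and types are hypothetical and do not correspond to any ASN.1 definition.

```python
# Illustrative sketch only: a container for the request contents enumerated in
# the list above. All field and class names are hypothetical, not ASN.1 from
# any specification.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class UeSideTrainingRequest:
    reduced_capabilities_needed: bool = False      # e.g. relaxed measurements, fewer MIMO layers
    desired_drx_config: Optional[str] = None       # desired configuration for data collection
    frequencies: list[int] = field(default_factory=list)   # ARFCNs to collect on
    collection_duration_s: Optional[int] = None    # time period for data collection
    start_frame: Optional[int] = None              # initial radio frame for starting collection
    functionality: Optional[str] = None            # e.g. "csi-compression", "positioning"
    data_types: list[str] = field(default_factory=list)    # e.g. ["SS-RSRP", "PCI"]
    request_validity_s: Optional[int] = None       # how long the request remains valid


if __name__ == "__main__":
    req = UeSideTrainingRequest(
        frequencies=[640000],
        collection_duration_s=600,
        functionality="beam-management",
        data_types=["SS-RSRP", "SSB index"],
        request_validity_s=1800,
    )
    print(req)
```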
FIGS. 6 and 7 illustrate different examples of signaling according to embodiments of the present disclosure. The indication may correspond to a request to the network node for performing UE-side model training (step 1610) and the UE may receive as a response an indication of acceptance (step 1620, as shown in FIG. 6) or rejection (step 1720, as shown in FIG. 7).
In one option, the UE receives the response with an indication of acceptance and, if the UE request is accepted, the indication indicates an RRC configuration to use for the duration of the training, wherein the said RRC configuration may comprise the CSI-RS/SSBs resources to use for measurements and data collection, the DRX configuration, a radio bearer reconfiguration, a MIMO configuration, the frequencies in which the data collection is allowed, the duration of the data collection, and the point in time at which the data collection can start.
In one option, if the request is rejected, the response includes a cause value indicating the reason for the rejection, e.g., network overload, lack of radio resources, or high priority traffic executed at the UE.
In one option, the indication of acceptance or rejection comprises multiple indications, each entry indicating the response by the network for each training request among the plurality of training requests received from the UE. Each training request might be identified by a specific identifier (ID).
In one option, the indication of acceptance or rejection is provided in a DL MAC CE or PHY-level signaling from the network node to the UE. This option enables more frequent operation of the training request/response procedure based on the status of L1 operation in the lower layers.
In one option, the indication of acceptance or rejection is provided in an RRC message. This option enables the network to make the training request/response procedure a slower process compared to the MAC/PHY based reply option and thus possibly reduces the need for frequent communication between the UE and the network node.
In one option, the UE receives the accept/reject decision multiple times for a single request that it had transmitted. Such an embodiment is especially useful when the UE includes an indication of the duration for which it intends to perform the training. Based on such a ‘duration’ indication in the request from the UE, the network node can reply with accept/reject several times until that duration expires.
In one option, the UE receives an accept/reject from the network node but such an accept/reject decision is appended with a validity duration which indicates the time for which such an accept/reject decision is valid.
In one option, when the response indicates rejection, the response message includes a timer value based on which the UE starts a timer (set to the received value) and: i) while the timer is running the UE is not allowed to send another indication of the need to perform UE-side model training; ii) when the timer expires the UE is allowed to send another indication of the need to perform UE-side model training; iii) and the timer is stopped under one or more conditions such as entering RRC_IDLE or RRC_INACTIVE.
In one option, when the response indicates acceptance, the response message includes a timer value based on which the UE starts a timer (set to the received value) and: i) while the timer is running the UE is allowed to perform UE-side model training; ii) when the timer expires and the UE-sided model training is not completed, the UE is allowed to send another indication of the need to perform UE-side model training; iii) the timer is stopped under one or more conditions such as completing the training.
In response to receiving the indication from the network node that the UE request is accepted, the UE performs data collection for the UE-side model training.
In response to receiving the indication from the network node that the UE request is rejected, the UE does not perform data collection for the UE-side model training.
In one option, the indication that the request is rejected includes a timer value based on which the UE starts a first timer (set to the indicated value). While the timer is running the UE does not transmit another request. After the timer expires, the UE is allowed to transmit another request for data collection for the UE-side model training. In some such embodiments, the UE stops the timer upon the occurrence of an event such as a handover, or a reception of a message indication from the network indicating the UE is allowed to perform data collection for AI/ML model training and/or model training.
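By way of non-limiting illustration, the prohibit and validity timer behaviour of the preceding options might be handled as in the following Python sketch. The timer names and stop conditions shown are illustrative assumptions.

```python
# Illustrative sketch only: prohibit/validity timer handling for the options
# described above. Timer names mirror the behaviour in the text but are not
# standardized identifiers.

import time


class TrainingTimerControl:
    def __init__(self):
        self.prohibit_until: float = 0.0   # rejection case: no new request while running
        self.valid_until: float = 0.0      # acceptance case: training allowed while running

    def on_reject(self, timer_value_s: float) -> None:
        self.prohibit_until = time.monotonic() + timer_value_s

    def on_accept(self, timer_value_s: float) -> None:
        self.valid_until = time.monotonic() + timer_value_s

    def may_send_new_request(self) -> bool:
        return time.monotonic() >= self.prohibit_until

    def may_perform_training(self) -> bool:
        return time.monotonic() < self.valid_until

    def on_handover_or_training_complete(self) -> None:
        # Conditions such as handover or completed training stop the timers.
        self.prohibit_until = 0.0
        self.valid_until = 0.0


if __name__ == "__main__":
    ctrl = TrainingTimerControl()
    ctrl.on_reject(timer_value_s=10.0)
    print(ctrl.may_send_new_request())   # False while the prohibit timer runs
    ctrl.on_handover_or_training_complete()
    print(ctrl.may_send_new_request())   # True after the timer is stopped
```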
In some embodiments, a UE triggers the transmission to the network node indicating the need for performing data collection for model training in response to receiving an indication in a first message from a first entity indicating the need to perform UE-side model training (e.g., at lower layers, PHY/MAC, or at a RAN higher layer, e.g., for RRM or layer-3 mobility).
The first entity may comprise one or more higher layers 50 of the UE (e.g., the UE application layer) as shown in FIG. 8. For example, higher layers 50 of the UE may be in charge of performing the UE-side model training. In that case, there may be an internal communication in the UE, in which an OTT client indicates within the UE (e.g., to the UE RAN layers such as PHY/MAC/RRC layers) that data collection is needed, so that the UE triggers the transmission to the network node of the indication that data collection is needed, e.g., by one or more lower layers 55 of the UE (step 1810).
In another embodiment, the first entity could be an OTT server if the OTT server is in charge of performing the UE-side model training (e.g., as shown in the example of FIG. 9).
In another embodiment, the first entity could be a core network node, e.g., the NWDAF, if such a core network node is in charge of performing the UE-side model training. In this latter embodiment, the indication of the need to perform UE-side model training at lower layers 55 is signaled to the UE via NAS signaling.
In one embodiment, the UE performing UE-side model training implies the lower layers 55 (e.g., for CSI/beam prediction or positioning) or the RAN higher layers (e.g., for RRM or layer-3 mobility) performing data collection.
In one embodiment, prior to transmitting the request/indication, the UE may determine whether certain conditions for the transmission of the request/indication are fulfilled (an illustrative sketch of such a check follows the list below). For example, the condition could be:
Whether an indication is received from the first entity (OTT server, core network node, UE application layer) to start or resume the data collection for UE-side model training. The first entity may provide this indication to the UE in response to any of:
• The UE has entered an area (e.g., geographical area, or cell, or cells controlled by a certain gNB) in which another UE-sided model training session is needed, e.g., the first entity does not have an available trained model for the concerned area. Hence, the first entity may request the UE to start data collection for UE-side model training. The area may be known by the first entity from the location information provided by the UE to the first entity.
• The UE has entered an area (e.g., geographical area, or cell, or cells controlled by a certain gNB) in which UE-side model training is allowed, e.g., the target gNB supporting provisioning of configurations for UE-side model training. Hence, the first entity may request the UE to initiate the data collection for UE-side model training.
• Whether the UE is already configured with the necessary resources to perform the training. For example, if the UE needs to perform data collection on certain CSI- RS/SSBs resources, or frequencies in which the UE has been already configured to perform measurements, the UE may not send the request since the UE can already perform the data collection for the UE-side model training. In another embodiment, if the UE is not configured with the necessary CSI-RS/SSBs resources, or frequencies to perform the training according to the first entity indication, the UE may transmit the request to the gNB.
• Whether reduced capabilities are expected if the UE starts the data collection for the UE-side model training. For example, in some cases, in order to perform data collection for the UE-side model training, impact on normal operations may be expected, e.g., the UE may need to perform relaxed measurements in certain frequencies, or it may need a different antenna configuration (reduced MIMO layers), or it may need a different DRX configuration, or it may need to be configured with a different number of serving cells, or it may need to be released with MR-DC.
• Whether enough battery is left. For example, if the UE has limited battery left, the UE lower layers 55 may reject the request from the first entity of starting the data collection for the UE-side model training. Otherwise, the UE may send the request of performing UE-side model training to the gNB.
• Whether the UE is performing certain user plane or control plane operations that do not allow the UE to start the UE-side model training. For example, the reception of the indication from a first entity indicating the need to perform UE-side model training at lower layers 55, may be received when the UE is configured with certain high priority radio bearers, or the UE may be performing PCell HO, or performing RRC Reestablishment procedure or performing fast MCG link recovery procedure or performing the reporting of SCG failures.
• Whether the RAN node 120 provides a message indicating that AI/ML training operations are supported in the cell, and that one or more specific sets of UEs can request resources for UE-side model training. This message may be provided, for example, in SIB signalling. For example, this message may also include an indication of the specific training session for which the training can start/be resumed, wherein each training session may be associated with a specific set of training resource configurations, or with a specific model training (in which case the indication of the specific data collection can be the model ID), or with a specific AI/ML functionality (such as beam management, CSI prediction, positioning predictions, etc.) (in which case the indication of the specific data collection can be the functionality ID). This message may also include an indication associated with a specific UE-vendor (UE set) for which training is allowed. In such a case, all UEs of the specific UE vendor, i.e. of the same UE set, are allowed to perform UE-side model training in the cell.
• Whether the RAN node 120 provides a message indicating that AI/ML training operations are supported for this specific UE. For example, the UE may indicate to the RAN node 120 that it is capable of AI/ML training. Then the RAN node 120 may provide a dedicated RRC message indicating whether the UE can request resources for UE-side model training. For example, this message may also include an indication of the specific training session for which the training can start/be resumed, wherein each training session may be associated with a specific set of training resource configurations, or with a specific model training (in which case the indication of the specific data collection can be the model ID), or with a specific AI/ML functionality (such as beam management, CSI prediction, positioning predictions, etc.) (in which case the indication of the specific data collection can be the functionality ID). This message may also include an indication associated with a specific UE-vendor for which training is allowed. In such a case, all UEs of the specific UE vendor are allowed to perform UE-side model training in the cell.
• Whether the UE is in a coverage suitable/expected for the AI/ML training operations. For example, if the measured RSRP/RSRQ/SINR/RSSI is determined to be lower than a certain threshold (possibly configured), the UE should not start the UE-side model training, or if certain timers such as T310/T312/T304 are running, the UE should not start the data collection for UE-sided model training.
• Whether the UE is connected to a PLMN in which it is possible for the UE to perform data collection for UE-side model training.
• Whether the UE is connected to a radio access technology (e.g., NR) in which it is possible for the UE to perform data collection for UE-side model training.
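By way of non-limiting illustration, a UE might evaluate a subset of the conditions listed above as in the following Python sketch before transmitting the request. The thresholds, field names, and the particular subset of conditions are illustrative assumptions.

```python
# Illustrative sketch only: evaluating a subset of the conditions listed above
# before transmitting the request. Thresholds, field names, and the condition
# set itself are hypothetical.

from dataclasses import dataclass


@dataclass
class UeContext:
    start_requested_by_first_entity: bool
    already_configured_resources: bool
    battery_percent: int
    serving_rsrp_dbm: float
    t310_running: bool
    training_supported_in_cell: bool


def should_send_training_request(ctx: UeContext,
                                 min_battery: int = 20,
                                 min_rsrp_dbm: float = -110.0) -> bool:
    """Return True if the UE should request resources for UE-side model training."""
    if not ctx.start_requested_by_first_entity:
        return False
    if ctx.already_configured_resources:
        return False          # data collection can start without a new request
    if ctx.battery_percent < min_battery:
        return False
    if ctx.serving_rsrp_dbm < min_rsrp_dbm or ctx.t310_running:
        return False          # coverage not suitable for training operations
    return ctx.training_supported_in_cell


if __name__ == "__main__":
    ctx = UeContext(True, False, 85, -95.0, False, True)
    print(should_send_training_request(ctx))   # True
```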
Once the UE has started data collection for model training, the UE monitors one or more indications from the network, e.g., as shown in FIG. 10. In some such embodiments, the UE may receive an indication from the network node (e.g., RAN node 120, e.g., gNB), in a first message, to stop/pause the started data collection, e.g., under a RAN overload condition (step 1530). The indication might comprise a plurality of indications each indicating to the UE to stop/pause each of the ongoing AI/ML training operations at the UE. Each indication might indicate an AI/ML training operation by an identifier.
In some embodiments, in response to receiving the indication from the RAN node 120 to stop/pause the data collection in a first message, the UE may stop performing data collection for the UE-side model training (step 1540). In case of receiving a plurality of stop/pause indications for multiple training operations at the UE, the UE may apply stop/pause of data collection to each of the indicated AI/ML training models.
In some embodiments, the UE may transmit an indication in a second message to the first entity that data collection for UE-side model training is stopped/paused, e.g., as shown in FIG. 11 (step 1560).
In some embodiments, the UE may receive an indication from the RAN node 120, e.g., gNB, in a third message, to resume the stopped/paused data collection. The indication may comprise a plurality of indications each indicating to the UE to resume each of the stopped/paused training operations at the UE. Each indication might indicate an AI/ML training operation by an identifier.
In some embodiments, in response to receiving the indication from the RAN node 120 that the data collection is resumed in a third message, the UE may resume performing data collection at lower layers 55 (e.g., PHY/MAC) for the UE-side model training. In case of receiving a plurality of resume indications for multiple stopped/paused training operations at the UE, the UE may apply resume of data collection to each of the indicated training models.
In some embodiments, the UE may transmit an indication in a fourth message to the first entity that data collection for UE-side model training is resumed.
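By way of non-limiting illustration, per-identifier handling of stop/pause and resume indications might look like the following Python sketch. The session identifiers, states, and notification strings are hypothetical.

```python
# Illustrative sketch only: handling stop/pause and resume indications that each
# identify an ongoing AI/ML training operation by an identifier. The session
# bookkeeping and notification strings are hypothetical.

class TrainingSessionManager:
    def __init__(self):
        self.sessions: dict[str, str] = {}    # session id -> "collecting" | "paused"

    def start(self, session_id: str) -> None:
        self.sessions[session_id] = "collecting"

    def on_stop_or_pause(self, session_ids: list[str]) -> list[str]:
        """Apply a (possibly plural) stop/pause indication and build notifications."""
        notifications = []
        for sid in session_ids:
            if sid in self.sessions:
                self.sessions[sid] = "paused"
                notifications.append(f"data collection for {sid} stopped/paused")
        return notifications                   # e.g. forwarded to the first entity

    def on_resume(self, session_ids: list[str]) -> list[str]:
        notifications = []
        for sid in session_ids:
            if self.sessions.get(sid) == "paused":
                self.sessions[sid] = "collecting"
                notifications.append(f"data collection for {sid} resumed")
        return notifications


if __name__ == "__main__":
    mgr = TrainingSessionManager()
    mgr.start("beam-mgmt-model-1")
    mgr.start("csi-model-2")
    print(mgr.on_stop_or_pause(["csi-model-2"]))
    print(mgr.on_resume(["csi-model-2"]))
```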
In view of the above, embodiments include a method performed by a UE, e.g., as illustrated in FIG. 11. The method comprises transmitting an indication to the RAN node 120, e.g., gNB, in a first message, that data collection for the UE-side model training is stopped. The transmission of the first message may be in response to receiving an indication from the first entity to stop the data collection for UE-sided model training, or in response to a determination of whether the UE is in coverage suitable/expected for the AI/ML training operations. For example, if the measured RSRP/RSRQ/SINR/RSSI is determined to be lower than a certain threshold (possibly configured), the UE stops the UE-side model training, or if certain timers such as T310/T312/T304 are running, the model training stops.
The method may further comprise stopping performing data collection at lower layers 55 (e.g., PHY/MAC) for the UE-side model training.
The method may further comprise transmitting an indication in a second message to the first entity that data collection for UE-side model training is stopped.
The method may further comprise determining that data collection for UE-side model training can be resumed.
The method may further comprise transmitting an indication to the RAN node 120, e.g., gNB, in a third message, that data collection for the UE-side model training can be resumed.
The method may further comprise receiving an indication from the RAN node 120, e.g., gNB, in a fourth message, to resume the stopped data collection, or not resume the data collection.
The method may further comprise, in response to receiving the indication from the RAN node 120 that the data collection is resumed in a fourth message, resuming performing data collection at lower layers 55 (e.g., PHY/MAC) for the UE-side model training.
The method may further comprise, in response to receiving the indication from the RAN node 120 regarding the data collection in the fourth message, transmitting an indication in a fifth message to the first entity that data collection for UE-side model training is resumed or not resumed, depending on the response in the fourth message.
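By way of illustration only, the following is a minimal sketch of the five-message flow described above (report stop, inform the first entity, request resume, act on the RAN node's decision, inform the first entity); the function names and message encodings are hypothetical.

```python
# Illustrative only: the five-message flow for stopping and resuming data
# collection for UE-side model training. send_to_ran / send_to_first_entity
# stand for whatever signaling is used toward the RAN node 120 and the
# first entity.

def report_stop(send_to_ran, send_to_first_entity) -> None:
    # First message: inform the RAN node that data collection is stopped.
    send_to_ran({"msg": 1, "data_collection": "stopped"})
    # Second message: inform the first entity that data collection is stopped.
    send_to_first_entity({"msg": 2, "data_collection": "stopped"})

def request_resume(send_to_ran) -> None:
    # Third message: tell the RAN node that data collection can be resumed.
    send_to_ran({"msg": 3, "data_collection": "can_be_resumed"})

def on_resume_decision(resume: bool, resume_lower_layers, send_to_first_entity) -> None:
    # Fourth message (received): the RAN node indicates resume / do not resume.
    if resume:
        resume_lower_layers()  # resume PHY/MAC data collection
    # Fifth message: report the outcome to the first entity.
    send_to_first_entity(
        {"msg": 5, "data_collection": "resumed" if resume else "not_resumed"})
```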
The method may further comprise receiving an indication from the network node (e.g., RAN node 120, e.g., gNB), in a first message, to stop/pause the started data collection, e.g., under a RAN overload condition. The indication might comprise a plurality of indications, each indicating to the UE to stop/pause one of the ongoing AI/ML training operations at the UE. Each indication might indicate an AI/ML training operation by an identifier.
Other embodiments include another method performed by a UE. The method comprises, in response to receiving the indication from the first entity to stop/pause the data collection for UE-side model training in a first message, stopping performing data collection for the UE-side model training. In case of receiving a plurality of stop/pause indications for multiple training operations at the UE, the UE may apply stop/pause of data collection to each of the indicated AI/ML training models.
The transmission of the first message by the first entity may be in response to fulfilling one or more of the following conditions in the first entity (a minimal sketch of these checks follows the list):
• The UE has entered an area (e.g., geographical area, or cell, or cells controlled by a certain gNB) in which another UE-side model training session is needed, e.g., the first entity does not have an available trained model for the concerned area. Hence, the first entity may request the UE to stop the current data collection session for UE-side model training and possibly start a new data collection session for the training of another UE-side model.
• The UE has entered an area (e.g., geographical area, or cell, or cells controlled by a certain gNB) in which the started UE-side model training session is not valid, e.g., the outcome of the UE-side model training is not applicable to such an area.
• The UE has entered an area (e.g., geographical area, or cell, or cells controlled by a certain gNB) in which the resources (CSI-RS/SSB, ground truth) necessary for the UE to perform the started UE-side model training are not available.
• The UE has entered an area (e.g., geographical area, or cell, or cells controlled by a certain gNB) in which UE-side model training is not allowed, e.g., the target gNB does not support provisioning of configurations for UE-side model training. Hence, the first entity may request the UE to stop the data collection for UE-side model training.
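By way of illustration only, the following is a minimal sketch of the area-based checks listed above that the first entity might apply before requesting the UE to stop a data collection session; the data structure and field names are hypothetical.

```python
# Illustrative only: area-based checks a first entity might apply when the UE
# enters a new area (geographical area, cell, or cells of a certain gNB).

from dataclasses import dataclass

@dataclass
class AreaInfo:
    has_trained_model: bool = True             # a trained model is already available here
    session_valid_here: bool = True            # the ongoing session's outcome applies here
    training_resources_available: bool = True  # CSI-RS/SSB and ground truth available
    training_allowed: bool = True              # target gNB supports UE-side training configs

def should_request_stop(area: AreaInfo) -> bool:
    """Return True if entering this area should trigger a stop/pause request
    for the UE's current data collection session."""
    return not (area.has_trained_model
                and area.session_valid_here
                and area.training_resources_available
                and area.training_allowed)
```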
The method may further comprise transmitting an indication in a second message to the network node that data collection for UE-side model training is stopped/paused.
The method may further comprise receiving an indication from the first entity, in a third message, to resume the stopped/paused data collection. The indication might comprise a plurality of indications, each indicating to the UE to resume one of the stopped/paused AI/ML training operations at the UE. Each indication might indicate an AI/ML training operation by an identifier.
The method may further comprise, in response to receiving the indication from the first entity that the data collection for UE-side model training is resumed in a third message, transmitting an indication in a fourth message to the network node that data collection for UE-side model training can be resumed. In case of receiving a plurality of resume indications for multiple stopped/paused training operations at the UE, the UE may apply the resume of data collection to each of the indicated AI/ML training models.
The method may further comprise receiving an indication from the network node, in a fifth message, to resume the stopped/paused data collection.
The method may further comprise resuming performing data collection at lower layers 55 (e.g., PHY/MAC/RRC) for the UE-side model training. In case of receiving a plurality of resume indications for multiple stopped/paused training operations at the UE, the UE may apply resume of data collection to each of the indicated training models.
The method may further comprise transmitting an indication in a sixth message to the first entity that data collection for UE-side model training is resumed.
Other embodiments include a further method performed by a UE. The method comprises clearing the stored collected data associated to a certain UE-side model training session when the data collection for the said UE-side model training session is stopped. Alternatively, the method comprises keeping the stored collected data associated to a certain UE-side model training session when the data collection for the said UE-side model training session is stopped.
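By way of illustration only, the following is a minimal sketch of the two alternatives just described, i.e., clearing or keeping the stored data of a stopped UE-side model training session; the names are hypothetical.

```python
# Illustrative only: discard or retain the data collected for a stopped
# UE-side model training session.

def on_data_collection_stopped(store: dict[str, list], session_id: str,
                               clear_on_stop: bool) -> None:
    """Either clear or keep the stored collected data for the stopped session."""
    if clear_on_stop:
        store.pop(session_id, None)  # clear the stored collected data
    # else: keep the stored data so it can be reused if collection resumes
```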
Correspondingly, other methods include a method performed by a RAN node 120. The method comprises receiving, in a first message from the UE, a request indicating the need to perform UE-side model training.
The method may further comprise, in response to receiving the request in the first message, determining whether to accept or reject the UE request. The determination can depend on any of the following (a minimal sketch of such a decision follows the list):
• Load conditions in the cells
• Availability of radio resources (e.g., CSI-RS, SSBs resources) in which the UE needs to perform the UE-side model training
• Amount of UEs in the cell already performing UE-side model training
• Whether the UE is already configured with the necessary resources to perform the training. For example, if the UE needs to perform data collection on certain CSI-RS/SSB resources, or frequencies on which the UE has already been configured to perform measurements, RAN node 120 may accept the UE request to perform data collection for UE-side model training. In another embodiment, if the UE is not configured with the necessary CSI-RS/SSB resources, or frequencies, to perform the training according to the first entity indication, the RAN node 120 may reject the request.
• Whether reduced capabilities are expected if the UE starts the data collection for the UE-side model training. For example, in some cases, in order to perform data collection for the UE-side model training, impact on normal operations may be expected, e.g., the UE may need to perform relaxed measurements on certain frequencies, or it may need a different antenna configuration (reduced MIMO layers), or it may need a different DRX configuration, or it may need to be configured with a different number of serving cells, or it may need MR-DC to be released.
• Whether the UE is performing certain user plane or control plane operations that do not allow the UE to start the UE-side model training, e.g., configured with higher priority DRBs or SRBs.
• Depending on the UE radio conditions. For example, the UE request for performing UE-side model training may be received when the UE is close to the cell edge, or the latest RRM measurement reports show poor UE coverage. In such a case, the request for UE-side model training may be rejected.
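By way of illustration only, the following is a minimal sketch of an acceptance decision based on the factors listed above; all thresholds and field names are hypothetical and would in practice be implementation- or deployment-specific.

```python
# Illustrative only: a minimal acceptance decision for a UE request to perform
# data collection for UE-side model training.

from dataclasses import dataclass

@dataclass
class CellContext:
    load: float                      # current cell load, 0.0 .. 1.0
    training_resources_free: bool    # CSI-RS/SSB resources available for training
    ues_training: int                # UEs in the cell already performing training
    ue_already_configured: bool      # UE already configured with the needed resources
    capability_impact: bool          # training would reduce normal-operation capability
    blocking_bearers: bool           # higher-priority DRBs/SRBs configured
    poor_coverage: bool              # cell edge / poor RRM measurement reports

MAX_LOAD = 0.8          # hypothetical load threshold
MAX_TRAINING_UES = 16   # hypothetical per-cell limit

def accept_training_request(ctx: CellContext) -> bool:
    """Return True if the RAN node should accept the UE request."""
    if ctx.load > MAX_LOAD or not ctx.training_resources_free:
        return False
    if ctx.ues_training >= MAX_TRAINING_UES:
        return False
    if not ctx.ue_already_configured:
        return False
    if ctx.capability_impact or ctx.blocking_bearers or ctx.poor_coverage:
        return False
    return True
```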
In some embodiments, the method further comprises transmitting an indication to the UE in a second message whether the UE request is accepted or rejected.
In some embodiments, the method further comprises determining that the UE should stop a started data collection for UE-side model training. The determination can depend on any of the following:
• Load conditions in the cells
• Availability of radio resources (e.g., CSI-RS, SSBs resources) in which the UE needs to perform the UE-side model training
• Amount of UEs in the cell already performing UE-side model training
• Whether the UE already needs to be configured with different CSI-RS/SSB resources
• Whether reduced capabilities are expected if the UE continues the data collection for the UE-side model training.
• Whether the UE is configured with certain user plane or control plane operations that do not allow the UE to continue the UE-side model training, e.g., configured with higher priority DRBs or SRBs.
• Depending on the UE radio conditions. For example, if the UE is determined to be close to the cell edge, or the latest RRM measurement reports show poor UE coverage, the network may request the UE to stop the training.
The method may further comprise transmitting an indication to the UE to stop the started data collection.
The method may further comprise determining that the UE should resume a stopped data collection for UE-side model training, wherein the conditions for resuming a stopped data collection can be the same as the conditions for starting the data collection.
The method may further comprise transmitting an indication to the UE to resume the stopped data collection.
The method may further comprise receiving an indication from the UE that the data collection for UE-side model training is stopped.
The method may further comprise receiving an indication from the UE that the data collection for UE-side model training can be resumed.
The method may further comprise transmitting an indication to the UE indicating whether the UE can resume the stopped data collection.
Further network node methods may include providing a message indicating that model training operations are supported in the cell, and that one or more specific sets of UEs can request resources for UE-side model training. This message may be provided, for example, in SIB signaling. For example, this message may also include an indication of the specific training session for which the training can start/be resumed, wherein each training session may be associated to a specific set of training resource configurations, or to a specific model training (in which case the indication of the specific data collection can be the model ID), or to a specific AI/ML functionality (such as beam management, CSI prediction, positioning predictions, etc.) (in which case the indication of the specific data collection can be the functionality ID). This message may also include an indication associated to a specific UE-vendor (UE set) for which training is allowed. In such a case, all UEs of the specific UE vendor, i.e., of the same UE set, are allowed to perform UE-side model training in the cell.
Network node methods may additionally or alternatively include providing a message indicating that model training operations are supported for this specific UE. For example, the UE may indicate to the RAN node 120 that it is capable of model training. Then the RAN node 120 may provide a dedicated RRC message indicating whether the UE can request resources for UE-side model training. For example, this message may also include an indication of the specific training session for which the training can start/be resumed, wherein each training session may be associated to a specific set of training resource configurations, or to a specific model training (in which case the indication of the specific data collection can be the model ID), or to a specific AI/ML functionality (such as beam management, CSI prediction, positioning predictions, etc.) (in which case the indication of the specific data collection can be the functionality ID). This message may also include an indication associated to a specific UE-vendor for which training is allowed. In such a case, all UEs of the specific UE vendor are allowed to perform UE-side model training in the cell.
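By way of illustration only, the following is a minimal sketch of the information such a broadcast or dedicated message could carry (training support, allowed UE sets, and per-session model or functionality identifiers); the structure and field names are hypothetical and do not correspond to any standardized information element.

```python
# Illustrative only: contents of a broadcast (SIB) or dedicated RRC message
# advertising support for UE-side model training.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TrainingSessionInfo:
    session_id: int
    model_id: Optional[str] = None          # set when tied to a specific model
    functionality_id: Optional[str] = None  # e.g., "beam-management", "csi-prediction"

@dataclass
class ModelTrainingSupportInfo:
    training_supported: bool
    allowed_ue_sets: list[str] = field(default_factory=list)  # e.g., UE-vendor identifiers
    sessions: list[TrainingSessionInfo] = field(default_factory=list)

def ue_may_request_training(info: ModelTrainingSupportInfo, ue_set: str) -> bool:
    """Return True if a UE of the given UE set may request training resources
    in this cell (an empty allowed list is read here as 'all UEs allowed')."""
    return info.training_supported and (
        not info.allowed_ue_sets or ue_set in info.allowed_ue_sets)
```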
In view of the above, FIG. 12 is a flow diagram illustrating an example method 200 implemented by a UE 110. The method 200 comprises transmitting, to a network node 190, an indication indicating that the UE 110 needs to perform UE-side model training (block 210). In some embodiments, the method 200 further comprises performing data collection for the UE-side model training (block 220).
FIG. 13 is a flow diagram illustrating another example method 300 implemented by a UE 110. The method 300 may be performed additionally or alternatively to the method 200. The method 300 comprises receiving, from a first entity, a first message indicating a need to perform UE-side model training, e.g., for one or more lower layer functions and/or one or more higher layer functions (block 310). The one or more lower layer functions may comprise Physical, PHY, and/or Medium Access Control, MAC, layer functions. In some embodiments, the method 300 further comprises performing data collection for the UE-side model training (block 320).
FIG. 14 is a flow diagram illustrating another example method 400 implemented by a UE 110. The method 400 may be performed additionally or alternatively to either or both of methods 200, 300. The method 400 comprises performing data collection for UE-side model training (block 410). The method 400 further comprises receiving, from a RAN node 120, a first message indicating to stop the data collection (block 420).
FIG. 15 is a flow diagram illustrating another example method 500 implemented by a UE 110. The method 500 may be performed additionally or alternatively to any one or more of methods 200, 300, 400. The method 500 comprises stopping data collection for UE-side model training (block 510). The method 500 further comprises transmitting, to a RAN node 120, a first message indicating that the data collection for the UE-side model training is stopped (block 520).
FIG. 16 is a flow diagram illustrating an example method 800 implemented by a network node 190. The method 800 comprises receiving, from a UE 110, an indication indicating that the UE 110 needs to perform UE-side model training (block 810). The method 800 further comprises, responsive to receiving the indication, transmitting a response to the UE 110 indicating that the UE 110 is allowed to perform the data collection for the UE-side model training or shall perform the data collection for the UE-side model training (block 820). The response comprises an acceptance indication and/or a configuration for the UE 110 to use for the data collection.
FIG. 17 is a flow diagram illustrating another example method 900 implemented by a network node 190. The method 900 may be performed additionally or alternatively to the method 800. The method 900 comprises receiving, from a UE 110, a request to perform data collection for UE-side model training (block 910). The method 900 further comprises determining whether to accept or reject the request (block 920). The method 900 further comprises indicating to the UE 110 whether the request is accepted or rejected (block 930).
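By way of illustration only, the following is a minimal sketch of the request/response exchange of FIGS. 16-17, in which the network node answers a UE training request with an acceptance (possibly including a data collection configuration) or a rejection; message names and fields are hypothetical.

```python
# Illustrative only: the request/response exchange between a UE and a network
# node for data collection for UE-side model training.

from dataclasses import dataclass
from typing import Optional

@dataclass
class TrainingRequest:
    ue_id: int
    functionality_id: str  # e.g., "beam-management"

@dataclass
class TrainingResponse:
    accepted: bool
    data_collection_config: Optional[dict] = None  # resources the UE shall use

def handle_training_request(req: TrainingRequest, accept: bool) -> TrainingResponse:
    """Network-node side: build the response transmitted back to the UE."""
    if not accept:
        return TrainingResponse(accepted=False)
    # On acceptance the response may carry the configuration for data collection.
    return TrainingResponse(accepted=True,
                            data_collection_config={"csi_rs_resources": [0, 1]})

def on_training_response(resp: TrainingResponse) -> bool:
    """UE side: start data collection only if the request was accepted."""
    return resp.accepted
```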
The UE 110 may, for example, be implemented as schematically illustrated in the example of FIG. 18. The UE 110 of FIG. 18 comprises processing circuitry 610, memory circuitry 620, and interface circuitry 630. The processing circuitry 610 is communicatively coupled to the memory circuitry 620 and the interface circuitry 630, e.g., via a bus 604. The processing circuitry 610 may comprise one or more microprocessors, microcontrollers, hardware circuits, discrete logic circuits, hardware registers, digital signal processors (DSPs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), or a combination thereof. For example, the processing circuitry 610 may be programmable hardware capable of executing software instructions stored, e.g., as a machine-readable computer program 640 in the memory circuitry 620. The memory circuitry 620 of the various
embodiments may comprise any non-transitory machine-readable media known in the art or that may be developed, whether volatile or non-volatile, including but not limited to solid state media (e.g., SRAM, DRAM, DDRAM, ROM, PROM, EPROM, flash memory, solid state drive, etc.), removable storage devices (e.g., Secure Digital (SD) card, miniSD card, microSD card, memory stick, thumb-drive, USB flash drive, ROM cartridge, Universal Media Disc), fixed drive (e.g., magnetic hard disk drive), or the like, wholly or in any combination.
The interface circuitry 630 may be a controller hub configured to control the input and output (I/O) data paths of the UE 110. Such I/O data paths may include data paths for exchanging signals over a network. The interface circuitry 630 may be implemented as a unitary physical component, or as a plurality of physical components that are contiguously or separately arranged, any of which may be communicatively coupled to any other, or may communicate with any other via the processing circuitry 610. For example, the interface circuitry 630 may comprise a transmitter 632 configured to send wireless communication signals and a receiver 634 configured to receive wireless communication signals.
The UE 110 may be configured to perform any one or more of the UE methods 200, 300, 400, 500 described above. In one example, the memory circuitry 620 contains instructions executable by the processing circuitry 610 whereby the UE 110 is so configured.
Still other embodiments include a computer program 640 comprising instructions that, when executed on processing circuitry 610 of a UE 110, cause the UE 110 to carry out any of the UE methods 200, 300, 400, 500 described above.
Yet other embodiments include a carrier containing the computer program 640. The carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
Correspondingly, a network node 190 may be implemented as schematically illustrated in the example of FIG. 19. The network node 190 of FIG. 19 comprises processing circuitry 710, memory circuitry 720, and interface circuitry 730. The processing circuitry 710 is communicatively coupled to the memory circuitry 720 and the interface circuitry 730, e.g., via a bus 704. The processing circuitry 710 may comprise one or more microprocessors, microcontrollers, hardware circuits, discrete logic circuits, hardware registers, digital signal processors (DSPs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), or a combination thereof. For example, the processing circuitry 710 may be programmable hardware capable of executing software instructions stored, e.g., as a machine- readable computer program 740 in the memory circuitry 720. The memory circuitry 720 of the various embodiments may comprise any non-transitory machine-readable media known in the art or that may be developed, whether volatile or non-volatile, including but not limited to solid state media (e.g., SRAM, DRAM, DDRAM, ROM, PROM, EPROM, flash memory, solid state drive, etc.), removable storage devices (e.g., Secure Digital (SD) card, miniSD card, microSD card, memory stick, thumb-drive, USB flash drive, ROM cartridge, Universal Media Disc), fixed drive (e.g., magnetic hard disk drive), or the like, wholly or in any combination.
The interface circuitry 730 may be a controller hub configured to control the input and output (I/O) data paths of the network node 190. Such I/O data paths may include data paths for exchanging signals over a network. The interface circuitry 730 may be implemented as a unitary physical component, or as a plurality of physical components that are contiguously or separately arranged, any of which may be communicatively coupled to any other or may communicate with any other via the processing circuitry 710. For example, the interface circuitry 730 may comprise a transmitter 732 configured to send wireless communication signals and a receiver 734 configured to receive wireless communication signals.
The network node 190 may be configured to perform any one or more of the network node methods 800, 900 described above. According to particular embodiments, the processing circuitry 710 is configured to perform any of the network node methods described above.
Still other embodiments include a computer program 740 comprising instructions that, when executed on processing circuitry 710 of a network node 190, cause the network node 190 to carry out any of the network node methods described above.
Yet other embodiments include a carrier containing the computer program 740. The carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
Although the various communication devices described herein may include the illustrated combination of hardware components, other embodiments may comprise computing and/or communication hardware with different combinations of components. It is to be understood that these computing devices may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions, and methods disclosed herein. Further, while components are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, the devices described herein may comprise multiple different physical components that make up a single illustrated component, and functionality may be partitioned between separate components.
Although the computing devices described herein (e.g., UEs, network nodes, hosts) may include the illustrated combination of hardware components, other embodiments may comprise computing devices with different combinations of components. It is to be understood that these computing devices may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Determining, calculating, obtaining or similar operations described herein may be performed by processing circuitry, which may process information by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination. Moreover, while components are depicted as single boxes located within a larger box, or nested within multiple boxes, in practice, computing devices may comprise multiple different physical components that make up a single illustrated component, and functionality
may be partitioned between separate components. For example, a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface. In another example, non-computationally intensive functions of any of such components may be implemented in software or firmware and computationally intensive functions may be implemented in hardware.
Claims
What is claimed is:
1. A method (200), implemented by a User Equipment, UE (110), the method comprising: transmitting (210), to a network node (190), an indication indicating that the UE (110) needs to perform UE-side model training; and performing (220) data collection for the UE-side model training.
2. The method of claim 1, wherein: the indication comprises a request to perform the UE-side model training; performing the data collection is responsive to receiving a response to the indication from the network node (190); and the response comprises a configuration for the UE (110) to use for the data collection and/or an acceptance indication.
3. The method of any one of claims 1-2, further comprising: receiving, from the network node (190) or a different network node, a notification to stop or pause the data collection for UE-side model; and transmitting, to the network node (190) or the different network node, a further indication indicating that the data collection for the UE-side model training has stopped or paused.
4. The method of any one of claims 1-3, further comprising informing the network node (190) or a different network node that the data collection for the UE-side model training: has been completed; or needs to be resumed and/or re-started and/or re-configured.
5. A User Equipment, UE (110), comprising: processing circuitry (610) and memory (620), the memory (620) containing instructions executable by the processing circuitry (610) whereby the UE (110) is configured to: transmit, to a network node (190), an indication indicating that the UE (110) needs to perform UE-side model training; and perform data collection for the UE-side model training.
6. The UE of the preceding claim, further configured to perform the method (200) of any one of claims 2-4.
7. A method (300), performed by a user equipment, UE (110), the method comprising: receiving (310), from a first entity, a first message indicating a need to perform UE-side model training for one or more lower layer functions and/or one or more higher layer functions; wherein the one or more lower layer functions comprise Physical, PHY, and/or Medium Access Control, MAC, layer functions.
8. The method of claim 7, wherein: the one or more lower layer functions comprise beam management, providing Channel State Information, CSI, and/or positioning; and the one or more higher layer functions comprise Radio Resource Management, RRM, measurements and/or a L3 mobility function.
9. The method of any one of claims 7-8, further comprising: responsive to receiving the first message, indicating to a Radio Access Network, RAN, node, in a second message, a request to perform the UE-side model training; receiving, from the RAN node (120), a third message indicating whether or not the UE (110) is allowed to perform the UE-side model training; starting data collection at the one or more lower layers (55) for the UE-side model training in response to the third message indicating that the UE (110) is allowed to perform the UE-side model training; and responsive to receiving the third message, sending, to the first entity, a fourth message indicating whether the request was accepted or rejected.
10. The method of any one of claims 7-9, further comprising indicating to the first entity that data collection has started.
11. A User Equipment, UE (110), comprising: processing circuitry (610) and memory (620), the memory (620) containing instructions executable by the processing circuitry (610) whereby the UE (110) is configured to receive, from a first entity, a first message indicating a need to perform UE-side model training for one or more lower layer functions and/or one or more higher layer functions; wherein the one or more lower layer functions comprise Physical, PHY, and/or Medium Access Control, MAC, layer functions.
12. The UE of the preceding claim, further configured to perform the method (300) of any one of claims 8-10.
13. A method (400), implemented by a User Equipment, UE (110), the method comprising: performing (410) data collection for UE-side model training; and receiving (420), from a Radio Access Network, RAN, node, a first message indicating to stop the data collection.
14. The method of claim 13, further comprising stopping the data collection for the UE-side model training at one or more lower layers (55) in response to receiving the first message, wherein the one or more lower layers (55) comprises a Physical, PHY, layer and/or a Medium Access Control, MAC, layer.
15. The method of any one of claims 13-14, further comprising transmitting, to a first entity, a second message indicating that the data collection for UE-side model training is stopped.
16. The method of any one of claims 13-15, further comprising: receiving, from the RAN node (120), a third message indicating to resume the stopped data collection; and resuming the data collection for the UE-side model training at one or more lower layers (55) in response to receiving the third message, wherein the one or more lower layers (55) comprises a PHY layer and/or a MAC layer.
17. The method of any one of claims 13-16, further comprising transmitting, to the first entity, a fourth message indicating that the data collection for UE-side model training is resumed.
18. A User Equipment, UE (110), comprising: processing circuitry (610) and memory (620), the memory (620) containing instructions executable by the processing circuitry (610) whereby the UE (110) is configured to: perform data collection for UE-side model training; and receive, from a Radio Access Network, RAN, node, a first message indicating to stop the data collection.
19. The UE of the preceding claim, further configured to perform the method (400) of any one of claims 14-17.
20. A method (500), implemented by a User Equipment, UE (110), the method comprising: stopping (510) data collection for UE-side model training; and transmitting (520) to a Radio Access Network, RAN, node (120), a first message indicating that the data collection for the UE-side model training is stopped.
21. The method of claim 20, wherein: stopping the data collection for the UE-side model training comprises stopping the data collection at one or more lower layers (55); and the one or more lower layers (55) comprises a Physical, PHY, layer and/or a Medium Access Control, MAC, layer.
22. The method of any one of claims 20-21, further comprising transmitting, to a first entity, a second message indicating that the data collection for the UE-side model training is stopped.
23. The method of any one of claims 20-22, further comprising: determining that the data collection for the UE-side model training can be resumed; and transmitting, to the RAN node (120), a third message indicating that the data collection for the UE-side model training can be resumed.
24. The method of any one of claims 20-23, further comprising: receiving, from the RAN node (120), a fourth message indicating whether or not to resume the data collection; and resuming the data collection for the UE-side model training at one or more lower layers (55) in response to the fourth message indicating to resume the data collection; and transmitting, to a first entity, a fifth message indicating whether or not the data collection for the UE-side model training is resumed depending on the fourth message.
25. A User Equipment, UE (110), comprising: processing circuitry (610) and memory (620), the memory (620) containing instructions executable by the processing circuitry (610) whereby the UE (110) is configured to: stop data collection for UE-side model training; and transmit to a Radio Access Network, RAN, node (120), a first message indicating that the data collection for the UE-side model training is stopped.
26. The UE of the preceding claim, further configured to perform the method of any one of claims 21-24.
27. A computer program (640), comprising instructions which, when executed on processing circuitry (610) of a User Equipment, UE (110), cause the processing circuitry (610) to carry out the method of any one of claims 1-4, 7-10, 13-17, or 20-24.
28. A method (800), implemented by a network node (190), the method comprising: receiving (810), from a User Equipment, UE (110), an indication indicating that the UE
(110) needs to perform UE-side model training; and responsive to receiving the indication, transmitting (820) a response to the UE (110) indicating that the UE (110): is allowed to perform the data collection for the UE-side model training; or shall perform the data collection for the UE-side model training; wherein the response comprises an acceptance indication and/or a configuration for the UE (110) to use for the data collection.
29. The method of claim 28, wherein: the indication comprises a request from UE (110) to perform the UE-side model training; and the method further comprises determining whether to accept or reject the request.
30. The method of any one of claims 28-29, further comprising receiving, from the UE (110): a request to stop or pause the data collection for the UE-side model training; or a notification that the UE (110) has stopped or paused the data collection for the UE-side model training; or an indication that the data collection for the UE-side model training needs to be resumed and/or restarted and/or reconfigured.
31. The method of any one of claims 28-30, further comprising transmitting, to the UE (110), an indication to stop or pause the data collection for the UE-side model.
32. A network node (190) comprising: processing circuitry (710) and memory (720), the memory (720) containing instructions executable by the processing circuitry (710) whereby the network node (190) is configured to: receive an indication from a User Equipment, UE (110), indicating that the UE (110) needs to perform UE-side model training; and responsive to receiving the indication, transmit a response to the UE (110) indicating that the UE (110): is allowed to perform the data collection for the UE-side model training; or shall perform the data collection for the UE-side model training; wherein the response comprises an acceptance indication and/or a configuration for the UE (110) to use for the data collection.
33. The network node of the preceding claim, further configured to perform the method (800) of any one of claims 29-31.
34. A method (900), implemented by a network node (190), comprising: receiving (910), from a User Equipment, UE (110), a request to perform data collection for UE-side model training; determining (920) whether to accept or reject the request; and indicating (930) to the UE (110) whether the request is accepted or rejected.
35. The method of claim 34, further comprising: determining that the UE (110) should stop the data collection for the UE-side model training; indicating, to the UE (110), to stop the data collection for UE-side model training, determining that the UE (110) should resume the data collection; and indicating to the UE (110) to resume the data collection.
36. The method of any one of claims 34-35, further comprising: receiving, from the UE (110): a notification indicating that the data collection for the UE-side model training is stopped or paused; and/or notice that the data collection for UE-side model training is resumable; and indicating, to the UE (110), whether or not to resume the data collection.
37. A network node (190) comprising: processing circuitry (710) and memory (720), the memory (720) containing instructions executable by the processing circuitry (710) whereby the network node (190) is configured to: receive, from a User Equipment, UE (110), a request to perform data collection for UE-side model training; determine whether to accept or reject the request; and indicate to the UE (110) whether the request is accepted or rejected.
38. The network node of the preceding claim, further configured to perform the method (900) of any one of claims 35-36.
39. A computer program (740), comprising instructions which, when executed on processing circuitry (710) of a network node (190), cause the network node (190) to carry out the method of any one of claims 28-31 or 34-36.
40. A carrier containing the computer program of claim 27 or 39, wherein the carrier is one of an electronic signal, optical signal, radio signal, or computer readable storage medium.