WO2025195815A1 - Method of advanced ai/ml model assignment signaling - Google Patents
Method of advanced AI/ML model assignment signaling
- Publication number
- WO2025195815A1 (PCT/EP2025/056387)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- alignment
- assignment
- previous
- assigned
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/02—Arrangements for optimising operational condition
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Definitions
- the present disclosure relates to AI/ML based model alignment, where techniques for pre-configuring and signaling the specific information about aligning ML models applicable to radio access network are presented.
- AI/ML artificial intelligence/machine learning
- RP-213599 3GPP TSG (Technical Specification Group) RAN (Radio Access Network) meeting #94e.
- the official title of AI/ML study item is “Study on AI/ML for NR Air Interface”.
- the goal of this study item is to identify a common AI/ML framework and areas of obtaining gains using AI/ML based techniques with use cases.
- the main objective of this study item is to study AI/ML framework for air-interface with target use cases by considering performance, complexity, and potential specification impact.
- AI/ML model terminology and descriptions to identify common and specific characteristics for the framework are included as one of the key work scopes.
- regarding the AI/ML framework, various aspects are under investigation, and one of the key items is lifecycle management of the AI/ML model, where multiple stages such as model training, model deployment, model inference, model monitoring and model updating are included as mandatory.
- two-sided (AI/ML) model is defined as a paired AI/ML model(s) over which joint inference is performed, where joint inference comprises AI/ML Inference whose inference is performed jointly across the UE and the network.
- AI/ML UE-side
- AI/ML network-side
- UE-side (AI/ML) model is defined as an AI/ML model whose inference is performed entirely at the UE
- network-side (AI/ML) model is defined as an AI/ML model whose inference is performed entirely at the network
- UE user equipment
- in 3GPP TR 37.817 for Release 17, titled "Study on enhancement for Data Collection for NR and EN-DC", UE (user equipment) mobility was also considered as one of the AI/ML use cases, and one of the scenarios for model training/inference is that both functions are located within the RAN node.
- model training is one of the most important parts of model deployment, and currently there is no specification defining signaling methods and network-UE behaviors to identify the required dataset for model updating/re-training, as any activated model can also be impacted by model/data drift.
- the performance of the enabled AI/ML model(s) can be impacted by data/model drift.
- model re-training/updating can then be executed. For example, when the trained ML model is deployed in the RAN, model performance for inferencing can be easily degraded if the target ML condition is not well aligned with the real ML condition measured for the specific model operation. For example:
- US 2022400373 describes the method of determining neural network functions and configuring models for performing wireless communications management procedures.
- US 2022337487 shows that a network entity determines at least one model parameter of a model for digitally analyzing input data depending on the at least one model parameter of a model, the network entity being configured to receive a model request.
- WO 2023277780 describes a method of downloading a compiled machine-code version of a ML model to a wireless communication device.
- WO 2022258149 provides a way for training a model in a server device based on training data in a user device.
- WO 2022228666 describes influencing the training of a ML model based on a training policy provided by an actor node.
- WO 2022161624 describes the method of receiving a request for retrieving or executing a ML model or a combination of ML models.
- the present application describes methods of using the pre-configured AI/ML (artificial intelligence/machine learning) based model alignment in wireless mobile communication system including base station (e.g., gNB, TN, NTN) and mobile station (e.g., UE).
- base station e.g., gNB, TN, NTN
- mobile station e.g., UE
- when an AI/ML model is applied to the radio access network, signaling of the model information exchange can be mismatched, resulting in model operation failure.
- model operation e.g., model training/inferencing/monitoring/updating
- model alignment with assignment of model identification
- the present disclosure solves the cited problem by the proposed embodiments and describes a method for advanced model assignment signaling by configuring model alignment ID or index information to indicate varying model ID assignment events in a wireless communication system, comprising determining the assigned models with the associated model alignment; setting threshold values for validity of the assigned model ID(s) with the associated model alignment ID; providing the configured model alignment information with the assigned model information; updating mapping relation information about model assignment and model alignment changes.
- the method is characterized in that model identification using the pre-configured model alignment ID can proceed to model ID re-alignment if necessary.
- the method is characterized in that the pre-configured model alignment ID information can be sent to the UE via L1/L2 or RRC signaling.
- the method is characterized in that any additional update of the model alignment ID information can also be sent via L1/L2 or RRC signaling.
- the method is characterized in that the mapping relation between the model alignment ID and the assigned model ID list can also be sent via system information or a dedicated RRC message.
- the method is characterized in that 1-bit signaling can be used to determine the availability of offline model ID assignment before online model ID assignment is processed, i.e., with or without offline model ID assignment.
- the method is characterized in that indication signaling with any specific model alignment ID(s) can be sent to check whether the UE has the aligned model ID(s), along with a confirmation message sent back by the UE with or without the aligned model ID information.
- the method is characterized in that either the network side or the UE side can send the request message if online model ID assignment is needed.
- the method is characterized in that online model ID assignment can be triggered for model ID re-alignment when the aligned model ID information at the UE side changes or offline model ID assignment is not available.
- the method is characterized in that the mapping relation table between the model alignment ID and the assigned model ID list can be referenced for the further offline/online model identification process between the network side and the UE side.
- the method is characterized in that the model alignment ID can also indicate whether the assigned model ID(s) are valid or need to be updated depending on the timing record of the model ID assignment (e.g., using a threshold value).
- the method is characterized in that there can be multiple mapping relation tables based on different time/site in association with other parameters such as ML conditions, functionalities, datasets, and applications.
- the method is characterized in that the mapping relation table can be dynamically updated via L1/L2 signaling when any new assigned model ID(s) become available.
- the method is characterized in that model identification via offline/online is processed to update the model ID assignment with the associated model alignment ID if any assigned model ID(s) are outdated.
- the method is characterized in that any measure to determine the validity of the assigned model ID(s) with the associated model alignment ID can be based on the configured threshold value.
- the method is characterized in that the network side provides the configured ML information with the model alignment ID.
- the method is characterized in that the UE side identifies whether the assigned model IDs are available to match the indicated model alignment ID, and a confirmation message is sent to the network side so that either the offline or the online model identification process can proceed.
- the method is characterized in that an indication message about the offline or online model identification process can be sent to the UE side if necessary.
- the method is characterized in that the assigned model ID(s) matched with the model alignment ID can be activated for LCM operation for the requested model alignment ID.
- the present disclosure relates to an apparatus for advanced model assignment signaling by configuring model alignment ID or index information to indicate varying model ID assignment events in a wireless communication system
- the apparatus comprising a wireless transceiver, a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement steps to carry out a method according to any one of the embodiments of the present disclosure.
- the present disclosure relates to User Equipment comprising an apparatus according to the second aspect.
- the present disclosure relates to gNB comprising an apparatus according to the second aspect.
- the present disclosure relates to a wireless communication system for advanced model assignment signaling by configuring model alignment ID or index information to indicate varying model ID assignment events
- the wireless communication system comprises at least a user equipment according to the third aspect and at least a gNB according to the fourth aspect, whereby the user equipment and the gNB each comprise a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to carry out a method according to any one of the embodiments of the first aspect.
- the present disclosure relates to a computer program product comprising instructions which, when executed by at least one processor, configure said at least one processor to carry out a method according to any one of the embodiments of the first aspect.
- the computer program product can use any programming language, and can be in the form of source code, object code, or in any intermediate form between source code and object code, such as in a partially compiled form, or in any other desirable form.
- the present disclosure relates to a computer-readable storage medium comprising instructions which, when executed by at least one processor, configure said at least one processor to carry out a method according to any one of the embodiments of the present disclosure.
- the present disclosure relates to an apparatus for configuring a set of the supported operation modes for ML functionality in a wireless communication system, the apparatus comprising a wireless transceiver, a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to carry out a method according to any one of the embodiments of the present disclosure.
- Figure 1 is an exemplary table of mapping relation between the model alignment ID and the assigned model ID list.
- Figure 2 is an exemplary flow chart of configuring model alignment identification at network side.
- Figure 3 is an exemplary flow chart of confirming the assigned model(s) using model alignment ID.
- Figure 4 is an exemplary flow chart of updating the assigned model IDs.
- Figure 5 is an exemplary signaling flow of aligning models between network side and UE side.
- a more general term “network node” may be used and may correspond to any type of radio network node or any network node, which communicates with a UE (directly or via another node) and/or with another network node.
- network nodes are NodeB, MeNB, ENB, a network node belonging to MCG or SCG, base station (BS), multi-standard radio (MSR) radio node such as MSR BS, eNodeB, gNodeB, network controller, radio network controller (RNC), base station controller (BSC), relay, donor node controlling relay, base transceiver station (BTS), access point (AP), transmission points, transmission nodes, RRU, RRH, nodes in distributed antenna system (DAS), core network node (e.g. Mobile Switching Center (MSC), Mobility Management Entity (MME)), etc.
- MSC Mobile Switching Center
- MME Mobility Management Entity
- O&M Operations & Maintenance
- OSS Operations Support System
- SON Self Optimized Network
- positioning node e.g. Evolved Serving Mobile Location Centre (E-SMLC)
- E-SMLC Evolved Serving Mobile Location Centre
- MDT Minimization of Drive Tests
- test equipment physical node or software
- the non-limiting term user equipment (UE) or wireless device may be used and may refer to any type of wireless device communicating with a network node and/or with another UE in a cellular or mobile communication system.
- UE are target device, device to device (D2D) UE, machine type UE or UE capable of machine to machine (M2M) communication, PDA, PAD, Tablet, mobile terminals, smart phone, laptop embedded equipment (LEE), laptop mounted equipment (LME), USB dongles, UE category M1, UE category M2, ProSe UE, V2V UE, V2X UE, etc.
- terminologies such as base station/gNodeB and UE should be considered non-limiting and do in particular not imply a certain hierarchical relation between the two; in general, “gNodeB” could be considered as device 1 and “UE” could be considered as device 2 and these two devices communicate with each other over some radio channel. And in the following the transmitter or receiver could be either gNodeB (gNB), or UE.
- gNB gNodeB
- embodiments may be embodied as a system, apparatus, method, or program product. Accordingly, embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects.
- the disclosed embodiments may be implemented as a hardware circuit comprising custom very-large-scale integration (“VLSI”) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components.
- VLSI very-large-scale integration
- the disclosed embodiments may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like.
- the disclosed embodiments may include one or more physical or logical blocks of executable code which may, for instance, be organized as an object, procedure, or function.
- embodiments may take the form of a program product embodied in one or more computer readable storage devices storing machine readable code, computer readable code, and/or program code, referred hereafter as code.
- the storage devices may be tangible, non-transitory, and/or non-transmission.
- the storage devices may not embody signals. In a certain embodiment, the storage devices only employ signals for accessing code.
- the computer readable medium may be a computer readable storage medium.
- the computer readable storage medium may be a storage device storing the code.
- the storage device may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, holographic, micromechanical, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- More specific examples (a non-exhaustive list) of the storage device would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random-access memory (“RAM”), a read-only memory (“ROM”), an erasable programmable read-only memory (“EPROM” or Flash memory), a portable compact disc read-only memory (“CD-ROM”), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- Code for carrying out operations for embodiments may be any number of lines and may be written in any combination of one or more programming languages including an object-oriented programming language such as Python, Ruby, Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language, or the like, and/or machine languages such as assembly languages.
- the code may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user’s computer through any type of network, including a local area network (“LAN”), wireless LAN (“WLAN”), or a wide area network (“WAN”), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider (“ISP”)).
- LAN local area network
- WLAN wireless LAN
- WAN wide area network
- ISP Internet Service Provider
- the described features, structures, or characteristics of the embodiments may be combined in any suitable manner.
- numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments.
- the code may also be stored in a storage device that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the storage device produce an article of manufacture including instructions which implement the function/act specified in the flowchart diagrams and/or block diagrams.
- the code may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer implemented process such that the code which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams.
- each block in the flowchart diagrams and/or block diagrams may represent a module, segment, or portion of code, which includes one or more executable instructions of the code for implementing the specified logical function(s).
- the disclosure is related to a wireless communication system, which may be for example a 5G NR wireless communication system. More specifically, it represents a RAN of the wireless communication system, which is used to exchange data with UEs via radio signals. For example, the RAN may send data to the UEs (downlink, DL), for instance data received from a core network (CN). The RAN may also receive data from the UEs (uplink, UL), which data may be forwarded to the CN.
- DL downlink
- CN core network
- UL uplink
- the RAN comprises one base station, BS.
- the RAN may comprise more than one BS to increase the coverage of the wireless communication system.
- Each of these BSs may be referred to as NB, eNodeB (or eNB), gNodeB (or gNB, in the case of a 5G NR wireless communication system), an access point or the like, depending on the wireless communication standard(s) implemented.
- the UEs are located in a coverage of the BS.
- the coverage of the BS corresponds for example to the area in which UEs can decode a PDCCH transmitted by the BS.
- An example of a wireless device suitable for implementing any method, discussed in the present disclosure, performed at a UE corresponds to an apparatus that provides wireless connectivity with the RAN of the wireless communication system, and that can be used to exchange data with said RAN.
- a wireless device may be included in a UE.
- the UE may for instance be a cellular phone, a wireless modem, a wireless communication device, a handheld device, a laptop computer, or the like.
- the UE may also be an Internet of Things (IoT) equipment, like a wireless camera, a smart sensor, a smart meter, smart glasses, a vehicle (manned or unmanned), a global positioning system device, etc., or any other equipment that may run applications that need to exchange data with remote recipients, via the wireless device.
- IoT Internet of Things
- the wireless device comprises one or more processors and one or more memories.
- the one or more processors may include for instance a central processing unit (CPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.
- the one or more memories may include any type of computer readable volatile and non-volatile memories (magnetic hard disk, solid-state disk, optical disk, electronic memory, etc.).
- the one or more memories may store a computer program product, in the form of a set of program code instructions to be executed by the one or more processors to implement all or part of the steps of a method for exchanging data, performed at a UE’s side, according to any one of the embodiments disclosed herein.
- the wireless device can comprise also a main radio, MR, unit.
- the MR unit corresponds to a main wireless communication unit of the wireless device, used for exchanging data with BSs of the RAN using radio signals.
- the MR unit may implement one or more wireless communication protocols, and may for instance be a 3G, 4G, 5G, NR, WiFi, WiMax, etc. transceiver or the like.
- the MR unit corresponds to a 5G NR wireless communication unit.
- AI/ML Model is a data driven algorithm that applies AI/ML techniques to generate set of outputs based on set of inputs.
- AI/ML model delivery is a generic term referring to delivery of an AI/ML model from one entity to another entity in any manner.
- An entity could mean network node/function (e.g., gNB, LMF, etc.), UE, proprietary server, etc.
- AI/ML model Inference is a process of using trained AI/ML model to produce set of outputs based on set of inputs.
- AI/ML model testing is a subprocess of training, to evaluate the performance of final AI/ML model using dataset different from one used for model training and validation. Differently from AI/ML model validation, testing does not assume subsequent tuning of the model.
- AI/ML model training is a process to train an AI/ML Model [by learning the input/output relationship] in data driven manner and obtain the trained AI/ML Model for inference.
- AI/ML model transfer is a delivery of an AI/ML model over the air interface in a manner that is not transparent to 3GPP signalling, either parameters of a model structure known at the receiving end or a new model with parameters. Delivery may contain a full model or a partial model.
- AI/ML model validation is a subprocess of training, to evaluate the quality of an AI/ML model using dataset different from one used for model training, that helps selecting model parameters that generalize beyond the dataset used for model training.
- Data collection is a process of collecting data by the network nodes, management entity, or UE for the purpose of AI/ML model training, data analytics and inference.
- Federated learning / federated training is a machine learning technique that trains an AI/ML model across multiple decentralized edge nodes (e.g., UEs, gNBs), each performing local model training using local data samples.
- the technique requires multiple interactions of the model, but no exchange of local data samples.
- Functionality identification is a process/method of identifying an AI/ML functionality for the common understanding between the NW and the UE. Note: information regarding the AI/ML functionality may be shared during functionality identification. Where AI/ML functionality resides depends on the specific use cases and sub use cases.
- Model activation means enabling an AI/ML model for a specific AI/ML-enabled feature.
- Model deactivation means disabling an AI/ML model for a specific AI/ML-enabled feature.
- Model download means model transfer from the network to the UE.
- Model identification is a process/method of identifying an AI/ML model for the common understanding between the NW and the UE.
- the process/method of model identification may or may not be applicable, and information regarding the AI/ML model may be shared during model identification.
- Model monitoring is a procedure that monitors the inference performance of the AI/ML model.
- Model parameter update is a process of updating the model parameters of a model.
- Model selection is the process of selecting an AI/ML model for activation among multiple models for the same AI/ML-enabled feature. Model selection may or may not be carried out simultaneously with model activation. Model switching is deactivating a currently active AI/ML model and activating a different AI/ML model for a specific AI/ML-enabled feature.
- Model update is a process of updating the model parameters and/or model structure of a model.
- Model upload is model transfer from the UE to the network.
- AI/ML Network-side
- Offline field data is the data collected from field and used for offline training of the AI/ML model.
- Offline training is an AI/ML training process where the model is trained based on collected dataset, and where the trained model is later used or delivered for inference. Note: this definition only serves as guidance. There may be cases that may not exactly conform to this definition but could still be categorized as offline training by commonly accepted conventions.
- Online field data is the data collected from field and used for online training of the AI/ML model.
- Online training is an AI/ML training process where the model (being used for inference) is (typically continuously) trained in (near) real-time with the arrival of new training samples.
- Note: the notion of (near) real-time vs. non-real-time is context-dependent and is relative to the inference time-scale. This definition only serves as guidance.
- Reinforcement Learning is a process of training an AI/ML model from input (a.k.a. state) and feedback signal (a.k.a. reward) resulting from the model’s output (a.k.a. action) in an environment the model is interacting with.
- Semi-supervised learning is a process of training model with mix of labelled data and unlabelled data.
- Supervised learning is a process of training model from input and its corresponding labels.
- Two-sided (AI/ML) model is a paired AI/ML Model(s) over which joint inference is performed, where joint inference comprises AI/ML Inference whose inference is performed jointly across the UE and the network, i.e, the first part of inference is firstly performed by UE and then the remaining part is performed by gNB, or vice versa.
- AI/ML UE-side
- Unsupervised learning is a process of training model without labelled data.
- Proprietary-format models are ML models of vendor-/device-specific proprietary format, from a 3GPP perspective. They are not mutually recognizable across vendors and hide model design information from other vendors when shared.
- Open-format models are ML models of specified format that are mutually recognizable across vendors and allow interoperability, from a 3GPP perspective. They are mutually recognizable between vendors and do not hide model design information from other vendors when shared.
- AI/ML based techniques are currently applied to many different applications and 3GPP also started to work on its technical investigation to apply to multiple use cases based on the observed potential gains.
- AI/ML lifecycle can be split into several stages such as data collection/pre-processing, model training, model testing/validation, model deployment/update, model monitoring etc., where each stage is equally important to achieve target performance with any specific model(s).
- one of the challenging issues is to manage the lifecycle of AI/ML model.
- model training or re-training is one of key issues for model performance maintenance as model performance such as inferencing and/or training is dependent on different model execution environment with varying configuration parameters.
- AI/ML model needs model monitoring after deployment because model performance cannot be maintained continuously due to drift and update feedback is then provided to re-train/update the model or select alternative model.
- model alignment ID (or index information) can be pre-configured to indicate varying model ID assignment events. The assigned models are then determined with the associated model alignment. Using the pre-configured model alignment ID, model identification can be decided to proceed for model ID realignment if necessary.
- the pre-configured model alignment ID information can be sent to UE via L1/L2 or RRC signaling and any additional update about model alignment ID information can be also sent via L1/L2 or RRC signaling. Mapping relation between model alignment ID and the assigned model ID list can be also sent via system information or dedicated RRC message. Two scenarios to apply model alignment ID can be considered such that 1-bit signaling can be used to determine availability of offline model ID assignment before online model ID assignment is processed when 1) offline model ID is assigned or 2) offline model ID is not assigned. Indication signaling with any specific model alignment ID(s) can be sent to check if UE has the aligned model ID(s) and confirmation message is sent back by UE with or without the aligned model ID information. If online model ID assignment is needed, either network side or UE side can send the request message.
- model ID assignment can be triggered for model ID re-alignment when the aligned model ID information at UE side changes or offline model ID assignment is not available.
- this mapping relation table can be referenced for further offline/online model identification process between network side and UE side.
- Model alignment ID can also indicate if the assigned model ID(s) can be valid or need to be updated depending on timing record of model ID assignment (e.g., using threshold value).
- mapping relation table can be dynamically updated via L1/L2 signaling when any new assigned model ID(s) becomes available. If any assigned model ID(s) is outdated, model identification via offline/online is processed to update model ID assignment with the associated model alignment ID. Any measure to determine validity of the assigned model ID(s) with the associated model alignment ID can be based on the configured threshold value.
- Network side provides the configured ML information with model alignment ID.
- UE side identifies if the assigned model IDs are available to match with the indicated model alignment ID and confirmation message is sent to network side so that either offline or online model identification process can proceed. Indication message about offline or online model identification process can be sent to UE side if necessary.
- the assigned model ID(s) matched with model alignment ID can then be activated for LCM operation.
- Figure 1 shows an exemplary table of mapping relation between the model alignment ID and the assigned model ID list.
- this mapping relation table can be referenced for further offline/online model identification process between network side and UE side.
- Model alignment ID can also indicate if the assigned model ID(s) can be valid or need to be updated depending on timing record of model ID assignment (e.g., using threshold value).
- Mapping relation table can be dynamically updated via L1/L2 signaling when any new assigned model ID(s) becomes available.
- Figure 2 shows an exemplary flow chart of configuring model alignment identification at network side.
- model alignment ID (or index information) can be pre-configured to indicate varying model ID assignment events.
- model identification can be decided to proceed for model ID re-alignment if necessary.
- the pre-configured model alignment ID information can be sent to UE via L1/L2 or RRC signaling and any additional update about model alignment ID information can be also sent via L1/L2 or RRC signaling. Mapping relation between model alignment ID and the assigned model ID list can be also sent via system information or dedicated RRC message.
- Figure 3 shows an exemplary flow chart of confirming the assigned model(s) using model alignment ID.
- UE can activate any matched model ID(s) assigned based on model alignment ID provided by network side. If not, then model re-alignment can proceed to update model assignment.
- Figure 4 shows an exemplary flow chart of updating the assigned model IDs.
- model identification via offline/online is processed to update model ID assignment with the associated model alignment ID.
- Any measure to determine validity of the assigned model ID(s) with the associated model alignment ID can be based on the configured threshold value.
- Figure 5 shows an exemplary signaling flow of aligning models between network side and UE side.
- Network side provides the configured ML information with model alignment ID.
- UE side identifies if the assigned model IDs are available to match with the indicated model alignment ID.
- Confirmation message is sent to network side so that either offline or online model identification process can proceed.
- Indication message about offline or online model identification process can be sent to UE side if necessary.
- the assigned model ID(s) matched with model alignment ID can then be activated for LCM operation.
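To make the exchange summarized in the Figure 2-5 descriptions above concrete, the following is a minimal, illustrative Python sketch of the alignment handshake between a network side and a UE side. It is not part of the disclosure: the class names, message dictionaries and ID values are assumptions chosen only to show the order of steps (configure an alignment ID, indicate it to the UE, confirm matched model IDs or request re-assignment, activate for LCM).

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class NetworkSide:
    # Figure 1: mapping relation table, model alignment ID -> assigned model ID list
    mapping_table: Dict[int, List[int]] = field(default_factory=dict)

    def configure_alignment(self, alignment_id: int, model_ids: List[int]) -> None:
        # Figure 2: pre-configure a model alignment ID at the network side
        self.mapping_table[alignment_id] = list(model_ids)

    def indicate_alignment(self, alignment_id: int) -> Dict:
        # abstraction of the L1/L2 or RRC indication carrying the alignment ID
        return {"alignment_id": alignment_id,
                "assigned_model_ids": self.mapping_table.get(alignment_id, [])}


@dataclass
class UESide:
    local_model_ids: List[int] = field(default_factory=list)
    active_model_ids: List[int] = field(default_factory=list)

    def confirm_alignment(self, indication: Dict) -> Dict:
        # Figure 3: check whether the UE holds model IDs matching the alignment ID
        matched = [m for m in indication["assigned_model_ids"]
                   if m in self.local_model_ids]
        if matched:
            self.active_model_ids = matched  # Figure 5: activate for LCM operation
            return {"aligned": True, "model_ids": matched}
        # Figure 4: otherwise ask for online model ID (re-)assignment
        return {"aligned": False, "request_online_assignment": True}


if __name__ == "__main__":
    nw = NetworkSide()
    ue = UESide(local_model_ids=[11, 12])
    nw.configure_alignment(alignment_id=1, model_ids=[11, 13])
    print(ue.confirm_alignment(nw.indicate_alignment(1)))
    # -> {'aligned': True, 'model_ids': [11]}
```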
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
The present disclosure describes methods of using the pre-configured AI/ML (artificial intelligence/machine learning) based model alignment in a wireless mobile communication system including a base station (e.g., gNB, TN, NTN) and a mobile station (e.g., UE). When an AI/ML model is applied to the radio access network, signaling of the model information exchange can be mismatched, resulting in model operation failure. Therefore, model operation (e.g., model training/inferencing/monitoring/updating) can be set up between the network and the UE by configuring model alignment with assignment of model identification.
Description
TITLE
METHOD OF ADVANCED AI/ML MODEL ASSIGNMENT SIGNALING
TECHNICAL FIELD
The present disclosure relates to AI/ML based model alignment, where techniques for pre-configuring and signaling the specific information about aligning ML models applicable to radio access network are presented.
BACKGROUND
In 3GPP (Third Generation Partnership Project), one of the selected study items in the approved Release 18 package is AI/ML (artificial intelligence/machine learning), as described in the related document (RP-213599) addressed at 3GPP TSG (Technical Specification Group) RAN (Radio Access Network) meeting #94e. The official title of the AI/ML study item is “Study on AI/ML for NR Air Interface”. The goal of this study item is to identify a common AI/ML framework and areas where gains can be obtained using AI/ML based techniques, together with use cases. According to 3GPP, the main objective of this study item is to study the AI/ML framework for the air interface with target use cases by considering performance, complexity, and potential specification impact. In particular, AI/ML model terminology and descriptions to identify common and specific characteristics for the framework are included as one of the key work scopes. Regarding the AI/ML framework, various aspects are under investigation, and one of the key items is lifecycle management of the AI/ML model, where multiple stages such as model training, model deployment, model inference, model monitoring and model updating are included as mandatory. Also in 3GPP, a two-sided (AI/ML) model is defined as a paired AI/ML model(s) over which joint inference is performed, where joint inference comprises AI/ML inference whose inference is performed jointly across the UE and the network. Also for one-sided (AI/ML) models, a UE-side (AI/ML) model is defined as an AI/ML model whose inference is performed entirely at the UE, and a network-side (AI/ML) model is defined as an AI/ML model whose inference is performed entirely at the network. Currently, AI/ML specification work is at the stage of work item discussion for Release 19.
Earlier, in 3GPP TR 37.817 for Release 17, titled “Study on enhancement for Data Collection for NR and EN-DC”, UE (user equipment) mobility was also considered as one of the AI/ML use cases, and one of the scenarios for model training/inference is that both functions are located within the RAN node. Subsequently, in Release 18 the new work item “Artificial Intelligence (AI)/Machine Learning (ML) for NG-RAN” was initiated to specify data collection enhancements and signaling support within the existing NG-RAN interfaces and architecture. For the above active standardization work, the RAN-based AI/ML model is considered very significant for both the network and the UE to meet any desired model operations (e.g., model training, inference, selection, switching, update, monitoring, etc.). Model information can be signaled to pair network-side and UE-side models for various lifecycle management (LCM) operations.
However, signaling overhead indicating model information can be very high, especially when model-based LCM is processed between a base station (BS/gNB) and multiple UEs. In LCM, model training is one of the most important parts of model deployment, and currently there is no specification defining signaling methods and network-UE behaviors to identify the required dataset for model updating/re-training, as any activated model can also be impacted by model/data drift. When the ML condition changes, the performance of the enabled AI/ML model(s) can be impacted due to data/model drift. In this case, model re-training/updating can be executed. For example, when the trained ML model is deployed in the RAN, model performance for inferencing can be easily degraded if the target ML condition is not well aligned with the real ML condition measured for the specific model operation. For example:
US 2022400373 describes the method of determining neural network functions and configuring models for performing wireless communications management procedures.
US 2022108214 explains ML model management method for network data analytics function device.
US 2022337487 shows that a network entity determines at least one model parameter of a model for digitally analyzing input data depending on the at least one
model parameter of a model, the network entity being configured to receive a model request.
WO 2023277780 describes a method of downloading a compiled machine-code version of a ML model to a wireless communication device.
WO 2022258149 provides a way for training a model in a server device based on training data in a user device.
WO 2022228666 describes influencing the training of a ML model based on a training policy provided by an actor node.
WO 2022161624 describes the method of receiving a request for retrieving or executing a ML model or a combination of ML models.
The present application describes methods of using the pre-configured AI/ML (artificial intelligence/machine learning) based model alignment in a wireless mobile communication system including a base station (e.g., gNB, TN, NTN) and a mobile station (e.g., UE). When an AI/ML model is applied to the radio access network, signaling of the model information exchange can be mismatched, resulting in model operation failure.
Therefore, model operation (e.g., model training/inferencing/monitoring/updating) can be set up between network and UE by configuring model alignment with assignment of model identification.
The present disclosure solves the cited problem by the proposed embodiments and describes a method for advanced model assignment signaling by configuring model alignment ID or index information to indicate varying model ID assignment events in a wireless communication system, comprising determining the assigned models with the associated model alignment; setting threshold values for validity of the assigned model ID(s) with the associated model alignment ID; providing the configured model alignment information with the assigned model information; updating mapping relation information about model assignment and model alignment changes.
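As a rough illustration of the information that the above method steps manipulate, the sketch below models one entry of the configured model alignment information in Python. The field names, the use of a wall-clock timestamp and the seconds-based threshold are assumptions chosen for illustration only; the disclosure does not specify a concrete data format.

```python
import time
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ModelAlignmentEntry:
    alignment_id: int                     # pre-configured model alignment ID (or index)
    assigned_model_ids: List[int]         # model ID(s) assigned under this alignment
    assigned_at: float                    # timing record of the model ID assignment
    validity_threshold_s: float = 3600.0  # configured threshold for validity (assumed unit)


def update_mapping_relation(table: Dict[int, ModelAlignmentEntry],
                            alignment_id: int,
                            model_ids: List[int],
                            threshold_s: float = 3600.0) -> None:
    """Record a new or changed model assignment under its model alignment ID."""
    table[alignment_id] = ModelAlignmentEntry(alignment_id, list(model_ids),
                                              time.time(), threshold_s)
```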
In some embodiments of the method according to the first aspect, the method is characterized in that model identification using the pre-configured model alignment ID can proceed to model ID re-alignment if necessary.
In some embodiments of the method according to the first aspect, the method is characterized in that the pre-configured model alignment ID information can be sent to the UE via L1/L2 or RRC signaling.
In some embodiments of the method according to the first aspect, the method is characterized in that any additional update of the model alignment ID information can also be sent via L1/L2 or RRC signaling.
In some embodiments of the method according to the first aspect, the method is characterized in that the mapping relation between the model alignment ID and the assigned model ID list can also be sent via system information or a dedicated RRC message.
In some embodiments of the method according to the first aspect, the method is characterized in that 1-bit signaling can be used to determine the availability of offline model ID assignment before online model ID assignment is processed, i.e., with or without offline model ID assignment.
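The following sketch illustrates, under the assumption that the 1-bit indication is exposed as a boolean, how such a flag could select between the two scenarios just described. The function name and the returned labels are illustrative and not taken from the disclosure.

```python
def select_identification_path(offline_assignment_available: bool) -> str:
    """1-bit indication: True -> offline model ID assignment exists, False -> it does not."""
    if offline_assignment_available:
        # scenario 1: reuse the offline-assigned model ID(s); go online only on mismatch
        return "offline_assignment_then_online_if_needed"
    # scenario 2: no offline assignment, so online model ID assignment is processed
    return "online_assignment"
```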
In some embodiments of the method according to the first aspect, the method is characterized in that indication signaling with any specific model alignment ID(s) can be sent to check whether the UE has the aligned model ID(s), along with a confirmation message sent back by the UE with or without the aligned model ID information.
In some embodiments of the method according to the first aspect, the method is characterized in that either the network side or the UE side can send the request message if online model ID assignment is needed.
In some embodiments of the method according to the first aspect, the method is characterized in that online model ID assignment can be triggered for model ID re-alignment when the aligned model ID information at the UE side changes or offline model ID assignment is not available.
In some embodiments of the method according to the first aspect, the method is characterized in that the mapping relation table between the model alignment ID and the assigned model ID list can be referenced for the further offline/online model identification process between the network side and the UE side.
In some embodiments of the method according to the first aspect, the method is characterized in that the model alignment ID can also indicate whether the assigned model ID(s) are valid or need to be updated depending on the timing record of the model ID assignment (e.g., using a threshold value).
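A minimal sketch of such a threshold-based validity check is given below, assuming the timing record is kept as a timestamp and the configured threshold is expressed in seconds; both assumptions are illustrative rather than specified by the disclosure.

```python
import time
from typing import Optional


def assignment_is_valid(assigned_at: float,
                        threshold_s: float,
                        now: Optional[float] = None) -> bool:
    """True if the assigned model ID(s) are still valid, False if an update is needed."""
    now = time.time() if now is None else now
    return (now - assigned_at) <= threshold_s
```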
In some embodiments of the method according to the first aspect, the method is characterized in that there can be multiple mapping relation tables based on different time/site in association with other parameters such as ML conditions, functionalities, datasets, and applications.
In some embodiments of the method according to the first aspect, the method is characterized in that the mapping relation table can be dynamically updated via L1/L2 signaling when any new assigned model ID(s) become available.
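The sketch below shows one possible way to apply such a dynamic update, assuming the mapping relation table is held as a dictionary from model alignment ID to a list of assigned model IDs; the representation and the function name are illustrative only.

```python
from typing import Dict, List


def apply_l1l2_table_update(table: Dict[int, List[int]],
                            alignment_id: int,
                            new_model_ids: List[int]) -> None:
    """Merge newly assigned model ID(s) into the list mapped to this alignment ID."""
    current = table.setdefault(alignment_id, [])
    for model_id in new_model_ids:
        if model_id not in current:
            current.append(model_id)
```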
In some embodiments of the method according to the first aspect, the method is characterized in that model identification via offline/online is processed to update the model ID assignment with the associated model alignment ID if any assigned model ID(s) are outdated.
In some embodiments of the method according to the first aspect, the method is characterized in that any measure to determine the validity of the assigned model ID(s) with the associated model alignment ID can be based on the configured threshold value.
In some embodiments of the method according to the first aspect, the method is characterized in that the network side provides the configured ML information with the model alignment ID.
In some embodiments of the method according to the first aspect, the method is characterized in that the UE side identifies whether the assigned model IDs are available to match the indicated model alignment ID, and a confirmation message is sent to the network side so that either the offline or the online model identification process can proceed.
In some embodiments of the method according to the first aspect, the method is characterized in that an indication message about the offline or online model identification process can be sent to the UE side if necessary.
In some embodiments of the method according to the first aspect, the method is characterized in that the assigned model ID(s) matched with the model alignment ID can be activated for LCM operation for the requested model alignment ID.
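As an illustration of the UE-side behavior described in the preceding embodiments, the sketch below builds a confirmation for an indicated model alignment ID, optionally including the aligned model ID information, and marks the matched model ID(s) as activated for LCM operation. The message fields and function names are assumptions, not the specified signaling format.

```python
from typing import Dict, List


def build_confirmation(indicated_alignment_id: int,
                       indicated_model_ids: List[int],
                       ue_model_ids: List[int],
                       include_model_ids: bool = True) -> Dict:
    """UE-side check of the indicated alignment ID against locally assigned model IDs."""
    matched = [m for m in indicated_model_ids if m in ue_model_ids]
    confirmation = {"alignment_id": indicated_alignment_id, "aligned": bool(matched)}
    if include_model_ids:
        confirmation["aligned_model_ids"] = matched  # may be omitted by configuration
    return confirmation


def activate_for_lcm(matched_model_ids: List[int]) -> List[int]:
    """Placeholder for activating the matched model ID(s) for LCM operation."""
    return list(matched_model_ids)
```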
According to a second aspect, the present disclosure relates to an apparatus for advanced model assignment signaling by configuring model alignment ID or index information to indicate varying model ID assignment events in a wireless communication system, the apparatus comprising a wireless transceiver, a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement steps to carry out a method according to any one of the embodiments of the present disclosure.
According to a third aspect, the present disclosure relates to User Equipment comprising an apparatus according to the second aspect.
According to a fourth aspect, the present disclosure relates to gNB comprising an apparatus according to the second aspect.
According to a fifth aspect, the present disclosure relates to a wireless communication system for advanced model assignment signaling by configuring model alignment ID or index information to indicate varying
model ID assignment events, wherein the wireless communication system comprises at least a user equipment according to the third aspect and at least a gNB according to the fourth aspect, whereby the user equipment and the gNB each comprise a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to carry out a method according to any one of the embodiments of the first aspect.
According to a sixth aspect, the present disclosure relates to a computer program product comprising instructions which, when executed by at least one processor, configure said at least one processor to carry out a method according to any one of the embodiments of the first aspect. The computer program product can use any programming language, and can be in the form of source code, object code, or in any intermediate form between source code and object code, such as in a partially compiled form, or in any other desirable form.
According to a seventh aspect, the present disclosure relates to a computer-readable storage medium comprising instructions which, when executed by at least one processor, configure said at least one processor to carry out a method according to any one of the embodiments of the present disclosure.
According to a further aspect, the present disclosure relates to an apparatus for configuring a set of supported operation modes for ML functionality in a wireless communication system, the apparatus comprising a wireless transceiver, a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to carry out a method according to any one of the embodiments of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is an exemplary table of mapping relation between the model alignment ID and the assigned model ID list.
Figure 2 is an exemplary flow chart of configuring model alignment identification at network side.
Figure 3 is an exemplary flow chart of confirming the assigned model(s) using model alignment ID.
Figure 4 is an exemplary flow chart of updating the assigned model IDs.
Figure 5 is an exemplary signaling flow of aligning models between network side and UE side.
DETAILED DESCRIPTION
The detailed description set forth below, with reference to annexed drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In particular, although terminology from 3GPP 5G NR may be used in this disclosure to exemplify embodiments herein, this should not be seen as limiting the scope of the invention.
Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Other embodiments, however, are contained within the scope of the subject matter disclosed herein; the disclosed subject matter should not be construed as limited to only the embodiments set forth herein; rather, these embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art.
Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein may be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments may apply to any other embodiments, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the following description.
In some embodiments, a more general term “network node” may be used and may correspond to any type of radio network node or any network node, which communicates with a UE (directly or via another node) and/or with another network node. Examples of network nodes are NodeB, MeNB, ENB, a network node belonging to MCG or SCG, base station (BS), multi-standard radio (MSR) radio node such as MSR BS, eNodeB, gNodeB, network controller, radio network controller (RNC), base station controller (BSC), relay, donor node controlling relay, base transceiver station (BTS), access point (AP), transmission points, transmission nodes, RRU, RRH, nodes in distributed antenna system (DAS), core network node (e.g. Mobile Switching Center (MSC), Mobility Management Entity (MME), etc), Operations & Maintenance (O&M), Operations Support System (OSS), Self Optimized Network (SON), positioning node (e.g. Evolved- Serving Mobile Location Centre (E-SMLC)), Minimization of Drive Tests (MDT), test equipment (physical node or software), etc.
In some embodiments, the non-limiting term user equipment (UE) or wireless device may be used and may refer to any type of wireless device communicating with a network node and/or with another UE in a cellular or mobile communication system.
Examples of UE are target device, device to device (D2D) UE, machine type UE or UE capable of machine to machine (M2M) communication, PDA, PAD, Tablet, mobile terminals, smart phone, laptop embedded equipment (LEE), laptop mounted equipment (LME), USB dongles, UE category M1, UE category M2, ProSe UE, V2V UE, V2X UE, etc.
Additionally, terminologies such as base station/gNodeB and UE should be considered non-limiting and do in particular not imply a certain hierarchical relation between the two; in general, “gNodeB” could be considered as device 1 and “UE” could be considered as device 2 and these two devices communicate with each other over some radio channel. And in the following the transmitter or receiver could be either gNodeB (gNB), or UE.
As will be appreciated by one skilled in the art, aspects of the embodiments may be embodied as a system, apparatus, method, or program product. Accordingly, embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects.
For example, the disclosed embodiments may be implemented as a hardware circuit comprising custom very-large-scale integration (“VLSI”) circuits or gate arrays, off- the-shelf semiconductors such as logic chips, transistors, or other discrete components. The disclosed embodiments may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like. As another example, the disclosed embodiments may include one or more physical or logical blocks of executable code which may, for instance, be organized as an object, procedure, or function.
Furthermore, embodiments may take the form of a program product embodied in one or more computer readable storage devices storing machine readable code, computer readable code, and/or program code, referred hereafter as code. The storage devices may be tangible, non- transitory, and/or non-transmission. The
storage devices may not embody signals. In a certain embodiment, the storage devices only employ signals for accessing code
Any combination of one or more computer readable media may be utilized. The computer readable medium may be a computer readable storage medium. The computer readable storage medium may be a storage device storing the code. The storage device may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, holographic, micromechanical, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
More specific examples (a non-exhaustive list) of the storage device would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random-access memory (“RAM”), a read-only memory (“ROM”), an erasable programmable read-only memory (“EPROM” or Flash memory), a portable compact disc read-only memory (“CD-ROM”), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Code for carrying out operations for embodiments may be any number of lines and may be written in any combination of one or more programming languages including an object-oriented programming language such as Python, Ruby, Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language, or the like, and/or machine languages such as assembly languages. The code may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user’s computer through any type of network, including a local area network (“LAN”), wireless LAN (“WLAN”), or a wide area network (“WAN”), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider (“ISP”)).
Furthermore, the described features, structures, or characteristics of the embodiments may be combined in any suitable manner. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments. One skilled in the relevant art will recognize, however, that embodiments may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of an embodiment. Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment, but mean “one or more but not all embodiments” unless expressly specified otherwise. The terms “including,” “comprising,” “having,” and variations thereof mean “including but not limited to,” unless expressly specified otherwise. An enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms “a,” “an,” and “the” also refer to “one or more” unless expressly specified otherwise.
Aspects of the embodiments are described below with reference to schematic flowchart diagrams and/or schematic block diagrams of methods, apparatuses, systems, and program products according to embodiments. It will be understood that each block of the schematic flowchart diagrams and/or schematic block diagrams, and combinations of blocks in the schematic flowchart diagrams and/or schematic block diagrams, can be implemented by code. This code may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable
data processing apparatus, create means for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams.
The code may also be stored in a storage device that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the storage device produce an article of manufacture including instructions which implement the function/act specified in the flowchart diagrams and/or block diagrams.
The code may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer implemented process such that the code which executes on the computer or other programmable apparatus provides processes for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams.
The flowchart diagrams and/or block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, systems, methods, and program products according to various embodiments. In this regard, each block in the flowchart diagrams and/or block diagrams may represent a module, segment, or portion of code, which includes one or more executable instructions of the code for implementing the specified logical function(s).
It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more blocks, or portions thereof, of the illustrated Figures.
Although various arrow types and line types may be employed in the flowchart and/or block diagrams, they are understood not to limit the scope of the corresponding
embodiments. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the depicted embodiment. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted embodiment. It will also be noted that each block of the block diagrams and/or flowchart diagrams, and combinations of blocks in the block diagrams and/or flowchart diagrams, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and code.
The description of elements in each figure may refer to elements of preceding figures. Like numbers refer to like elements in all figures, including alternate embodiments of like elements.
The detailed description set forth below, with reference to the figures, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. For instance, although 3GPP terminology, from e.g., 5G NR, may be used in this disclosure to exemplify embodiments herein, this should not be seen as limiting the scope of the present disclosure.
The disclosure is related to a wireless communication system, which may be for example a 5G NR wireless communication system. More specifically, it relates to a RAN of the wireless communication system, which is used to exchange data with UEs via radio signals. For example, the RAN may send data to the UEs (downlink, DL), for instance data received from a core network (CN). The RAN may also receive data from the UEs (uplink, UL), which data may be forwarded to the CN.
In the examples illustrated, the RAN comprises one base station, BS. Of course, the RAN may comprise more than one BS to increase the coverage of the wireless communication system. Each of these BSs may be referred to as NB, eNodeB (or
eNB), gNodeB (or gNB, in the case of a 5G NR wireless communication system), an access point or the like, depending on the wireless communication standard(s) implemented.
The UEs are located in a coverage of the BS. The coverage of the BS corresponds for example to the area in which UEs can decode a PDCCH transmitted by the BS.
An example of a wireless device suitable for implementing any method, discussed in the present disclosure, performed at a UE corresponds to an apparatus that provides wireless connectivity with the RAN of the wireless communication system, and that can be used to exchange data with said RAN. Such a wireless device may be included in a UE. The UE may for instance be a cellular phone, a wireless modem, a wireless communication device, a handheld device, a laptop computer, or the like. The UE may also be an Internet of Things (IoT) equipment, like a wireless camera, a smart sensor, a smart meter, smart glasses, a vehicle (manned or unmanned), a global positioning system device, etc., or any other equipment that may run applications that need to exchange data with remote recipients, via the wireless device.
The wireless device comprises one or more processors and one or more memories. The one or more processors may include for instance a central processing unit (CPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc. The one or more memories may include any type of computer readable volatile and non-volatile memories (magnetic hard disk, solid-state disk, optical disk, electronic memory, etc.). The one or more memories may store a computer program product, in the form of a set of program code instructions to be executed by the one or more processors to implement all or part of the steps of a method for exchanging data, performed at a UE’s side, according to any one of the embodiments disclosed herein.
The wireless device can comprise also a main radio, MR, unit. The MR unit corresponds to a main wireless communication unit of the wireless device, used for exchanging data with BSs of the RAN using radio signals. The MR unit may
implement one or more wireless communication protocols, and may for instance be a 3G, 4G, 5G, NR, WiFi, WiMax, etc. transceiver or the like. In preferred embodiments, the MR unit corresponds to a 5G NR wireless communication unit.
AI/ML Model is a data driven algorithm that applies AI/ML techniques to generate a set of outputs based on a set of inputs.
AI/ML model delivery is a generic term referring to delivery of an AI/ML model from one entity to another entity in any manner. Note: An entity could mean a network node/function (e.g., gNB, LMF, etc.), UE, proprietary server, etc.
AI/ML model inference is a process of using a trained AI/ML model to produce a set of outputs based on a set of inputs.
AI/ML model testing is a subprocess of training, to evaluate the performance of a final AI/ML model using a dataset different from the one used for model training and validation. Differently from AI/ML model validation, testing does not assume subsequent tuning of the model.
AI/ML model training is a process to train an AI/ML model [by learning the input/output relationship] in a data driven manner and obtain the trained AI/ML model for inference.
AI/ML model transfer is a delivery of an AI/ML model over the air interface in a manner that is not transparent to 3GPP signalling, either parameters of a model structure known at the receiving end or a new model with parameters. Delivery may contain a full model or a partial model.
AI/ML model validation is a subprocess of training, to evaluate the quality of an AI/ML model using a dataset different from the one used for model training, that helps selecting model parameters that generalize beyond the dataset used for model training.
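To make the distinction between validation and testing above concrete, the following is a minimal sketch, not part of the disclosure, of partitioning a collected dataset into training, validation, and test subsets; the split ratios, seed, and function name are illustrative assumptions only.

```python
# Split collected samples into train/validation/test: validation guides model
# selection/tuning, while the test subset is held out and never fed back.
import random

def split_dataset(samples, train_frac=0.7, val_frac=0.15, seed=0):
    """Return (train, validation, test) partitions of `samples`."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    validation = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]      # held out; no subsequent tuning
    return train, validation, test

train, validation, test = split_dataset(list(range(100)))
print(len(train), len(validation), len(test))  # 70 15 15
```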
Data collection is a process of collecting data by the network nodes, management entity, or UE for the purpose of AI/ML model training, data analytics and inference.
Federated learning / federated training is a machine learning technique that trains an AI/ML model across multiple decentralized edge nodes (e.g., UEs, gNBs), each performing local model training using local data samples. The technique requires multiple interactions of the model, but no exchange of local data samples.
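As an illustration of the federated learning definition above, the following is a minimal federated-averaging sketch; the one-parameter linear model, learning rate, and per-node data are hypothetical and serve only to show that model parameters, not local data samples, are exchanged.

```python
# Each edge node (e.g., a UE) refines the shared model on its own data;
# the server averages the returned parameters. No local samples leave a node.
def local_update(weights, local_data, lr=0.1):
    """One gradient step of a 1-D linear model y = w*x on local samples."""
    w = weights
    grad = sum(2 * x * (w * x - y) for x, y in local_data) / len(local_data)
    return w - lr * grad

def federated_round(global_w, nodes):
    """Each node trains locally; only the parameters are aggregated."""
    local_ws = [local_update(global_w, data) for data in nodes]
    return sum(local_ws) / len(local_ws)

nodes = [[(1.0, 2.1), (2.0, 3.9)], [(1.5, 3.2), (3.0, 6.1)]]  # per-node samples
w = 0.0
for _ in range(20):
    w = federated_round(w, nodes)
print(round(w, 2))  # converges to about 2.0, close to the per-node least-squares fits
```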
Functionality identification is a process/method of identifying an AI/ML functionality for the common understanding between the NW and the UE. Note: Information regarding the AI/ML functionality may be shared during functionality identification. Where an AI/ML functionality resides depends on the specific use cases and sub use cases.
Model activation means enabling an AI/ML model for a specific AI/ML-enabled feature.
Model deactivation means disabling an AI/ML model for a specific AI/ML-enabled feature.
Model download means model transfer from the network to the UE.
Model identification is a process/method of identifying an AI/ML model for the common understanding between the NW and the UE. Note: The process/method of model identification may or may not be applicable, and information regarding the AI/ML model may be shared during model identification.
Model monitoring is a procedure that monitors the inference performance of the AI/ML model.
Model parameter update is the process of updating the model parameters of a model. Model selection is the process of selecting an AI/ML model for activation among multiple models for the same AI/ML-enabled feature. Model selection may or may not be carried out simultaneously with model activation.
Model switching is deactivating a currently active AI/ML model and activating a different AI/ML model for a specific AI/ML-enabled feature.
Model update is the process of updating the model parameters and/or model structure of a model.
Model upload is model transfer from the UE to the network.
Network-side (AI/ML) model is an AI/ML Model whose inference is performed entirely at the network.
Offline field data is the data collected from field and used for offline training of the AI/ML model.
Offline training is an AI/ML training process where the model is trained based on a collected dataset, and where the trained model is later used or delivered for inference. Note: This definition only serves as guidance. There may be cases that may not exactly conform to this definition but could still be categorized as offline training by commonly accepted conventions.
Online field data is the data collected from field and used for online training of the AI/ML model.
Online training is an AI/ML training process where the model (being used for inference) is (typically continuously) trained in (near) real-time with the arrival of new training samples. Note: The notion of (near) real-time vs. non real-time is context-dependent and is relative to the inference time-scale. This definition only serves as guidance.
There may be cases that may not exactly conform to this definition but could still be categorized as online training by commonly accepted conventions. Note: Fine-tuning/re-training may be done via online or offline training; this note could be removed once the term fine-tuning is defined.
Reinforcement Learning (RL) is a process of training an AI/ML model from input (a.k.a. state) and feedback signal (a.k.a. reward) resulting from the model’s output (a.k.a. action) in an environment the model is interacting with.
Semi-supervised learning is a process of training a model with a mix of labelled data and unlabelled data.
Supervised learning is a process of training a model from input and its corresponding labels.
Two-sided (AI/ML) model is a paired AI/ML model(s) over which joint inference is performed, where joint inference comprises AI/ML inference whose inference is performed jointly across the UE and the network, i.e., the first part of inference is firstly performed by the UE and then the remaining part is performed by the gNB, or vice versa.
UE-side (AI/ML) model is an AI/ML Model whose inference is performed entirely at the UE.
Unsupervised learning is a process of training model without labelled data.
Proprietary-format models are ML models of vendor-/device-specific proprietary format, from a 3GPP perspective. They are not mutually recognizable across vendors and hide model design information from other vendors when shared.
Open-format models are ML models of specified format, from a 3GPP perspective. They are mutually recognizable between vendors, allow interoperability, and do not hide model design information from other vendors when shared.
The following explanation provides a detailed description of the mechanism for pre-configuring and signaling the specific information about model assignment and alignment by configuring a set of UE behaviors. AI/ML based techniques are currently applied to many different applications, and 3GPP has also started its technical investigation into applying them to multiple use cases based on the observed potential gains. The AI/ML lifecycle can be split into several stages such as data collection/pre-processing, model training, model testing/validation, model deployment/update, model monitoring, etc., where each stage is equally important to achieve target performance with any specific model(s). In applying an AI/ML model to any use case or application, one of the challenging issues is to manage the lifecycle of the AI/ML model.
This is mainly because data/model drift occurs during model deployment/inference and results in performance degradation of the AI/ML model. Fundamentally, the statistical properties of the dataset change after the model is deployed, and the model's inference capability is also impacted by unseen data as input. In a similar aspect, the statistical property of the dataset and the relationship between input and output for the trained model can change when drift occurs. In this context, model training or re-training is one of the key issues for model performance maintenance, as model performance in inference and/or training depends on the model execution environment, which may vary with configuration parameters.
To handle this issue, collaboration between the UE and the gNB is highly important to track model performance and re-configure the model for different environments. An AI/ML model needs monitoring after deployment because model performance cannot be maintained continuously due to drift; update feedback is then provided to re-train/update the model or to select an alternative model.
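The following is a minimal sketch, not taken from the disclosure, of one way drift could be flagged during model monitoring: comparing simple statistics of recent inference inputs against those of the training dataset and reporting when the shift exceeds a configured margin. The statistic, margin, and function names are illustrative assumptions.

```python
# Flag drift when the mean of recent inputs leaves a +/- margin*sigma band
# around the training-data mean; a positive result would feed the update
# feedback loop (re-train/update or select an alternative model).
from statistics import mean, pstdev

def drift_detected(train_inputs, recent_inputs, margin=2.0):
    """Return True when the recent-input mean shift exceeds margin * sigma."""
    mu, sigma = mean(train_inputs), pstdev(train_inputs)
    shift = abs(mean(recent_inputs) - mu)
    return shift > margin * max(sigma, 1e-9)

train_inputs = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]
recent_inputs = [1.6, 1.7, 1.65]      # statistics changed after deployment
print(drift_detected(train_inputs, recent_inputs))  # True -> trigger re-training/update
```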
When an AI/ML-model-enabled wireless communication network is deployed, it is then important to consider how to handle AI/ML models in activation, with re-configuration for wireless devices under operations such as model training, inference, updating, etc. Model(s) for activation must be aligned between the network side and the UE side using model ID information. If not aligned, activation of the model(s) can fail due to mismatched ML conditions and/or ML operation configuration between the network side and the UE side.
In this method, a model alignment ID (or index information) can be pre-configured to indicate varying model ID assignment events. The assigned models are then determined with the associated model alignment. Using the pre-configured model alignment ID, it can be decided whether model identification should proceed for model ID re-alignment if necessary.
The pre-configured model alignment ID information can be sent to the UE via L1/L2 or RRC signaling, and any additional update of the model alignment ID information can also be sent via L1/L2 or RRC signaling. The mapping relation between the model alignment ID and the assigned model ID list can also be sent via system information or a dedicated RRC message. Two scenarios for applying the model alignment ID can be considered: 1-bit signaling can be used to determine the availability of offline model ID assignment before online model ID assignment is processed, i.e., when 1) an offline model ID is assigned or 2) an offline model ID is not assigned. Indication signaling with any specific model alignment ID(s) can be sent to check whether the UE has the aligned model ID(s), and a confirmation message is sent back by the UE with or without the aligned model ID information. If online model ID assignment is needed, either the network side or the UE side can send the request message.
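The following is a minimal data-structure sketch, under stated assumptions, of how the mapping between a model alignment ID and its assigned model ID list could be represented together with the 1-bit indication of offline-assignment availability. All field and function names are hypothetical; the disclosure does not mandate any particular encoding.

```python
# Mapping table as it might be pre-configured and delivered to the UE
# (e.g., via RRC or system information), plus the 1-bit availability flag
# consulted before online model ID assignment is requested.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AlignmentEntry:
    assigned_model_ids: List[int]
    offline_assignment_available: bool   # the 1-bit indication

alignment_table: Dict[int, AlignmentEntry] = {
    1: AlignmentEntry(assigned_model_ids=[101, 102], offline_assignment_available=True),
    2: AlignmentEntry(assigned_model_ids=[], offline_assignment_available=False),
}

def needs_online_assignment(alignment_id: int) -> bool:
    """Online model ID assignment is requested when no offline assignment exists."""
    entry = alignment_table.get(alignment_id)
    return entry is None or not entry.offline_assignment_available

print(needs_online_assignment(1))  # False: offline-assigned model IDs can be confirmed
print(needs_online_assignment(2))  # True: either side may send an online-assignment request
```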
For example, online model ID assignment can be triggered for model ID re-alignment when the aligned model ID information at the UE side changes or when offline model ID assignment is not available. As for the mapping relation between the model alignment ID and the assigned model ID list, this mapping relation table can be referenced for the further offline/online model identification process between the network side and the UE side. The model alignment ID can also indicate whether the assigned model ID(s) are valid or need to be updated, depending on the timing record of model ID assignment (e.g., using a threshold value).
There can be multiple mapping relation tables based on different time/site, in association with other parameters such as ML conditions, functionalities, datasets, and applications. The mapping relation table can be dynamically updated via L1/L2 signaling when any new assigned model ID(s) becomes available. If any assigned model ID(s) is outdated, offline/online model identification is processed to update the model ID assignment with the associated model alignment ID. Any measure to determine the validity of the assigned model ID(s) with the associated model alignment ID can be based on the configured threshold value.
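As a minimal sketch of the validity measure just described, the assigned model ID(s) tied to a model alignment ID could be judged against the timing record of the assignment and a configured threshold. The field names, time base, and threshold value below are illustrative assumptions, not taken from the disclosure.

```python
# Treat an assignment as outdated once its age exceeds the configured
# threshold; an outdated assignment triggers offline/online re-identification.
import time
from typing import Optional

def assignment_is_valid(assignment_time_s: float,
                        validity_threshold_s: float,
                        now_s: Optional[float] = None) -> bool:
    """Return True while the assignment age is within the configured threshold."""
    now_s = time.time() if now_s is None else now_s
    return (now_s - assignment_time_s) <= validity_threshold_s

# Recorded 30 minutes ago against a 1-hour threshold -> still valid.
print(assignment_is_valid(time.time() - 1800, 3600))  # True
print(assignment_is_valid(time.time() - 7200, 3600))  # False -> update the assignment
```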
The network side provides the configured ML information with the model alignment ID. The UE side identifies whether the assigned model IDs are available to match the indicated model alignment ID, and a confirmation message is sent to the network side so that either the offline or the online model identification process can proceed. An indication message about the offline or online model identification process can be sent to the UE side if necessary. Based on the requested model alignment ID, the assigned model ID(s) matched with the model alignment ID can then be activated for LCM operation.
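The following is a minimal sketch of the confirmation step just described, with hypothetical message and function names: the network indicates a model alignment ID, the UE checks whether it holds matching assigned model ID(s), and the confirmation outcome determines whether activation for LCM operation or a further offline/online identification step follows.

```python
# UE-side confirmation of the indicated model alignment ID and the resulting
# network-side decision (activation vs. offline/online re-alignment).
from typing import Dict, List, Optional, Tuple

ue_assigned_models: Dict[int, List[int]] = {1: [101, 102]}  # alignment ID -> model IDs at the UE

def ue_confirm(alignment_id: int) -> Tuple[bool, Optional[List[int]]]:
    """Return (aligned, model_ids); model_ids is None when no aligned IDs exist."""
    model_ids = ue_assigned_models.get(alignment_id)
    return (True, model_ids) if model_ids else (False, None)

def network_on_confirmation(alignment_id: int) -> str:
    aligned, model_ids = ue_confirm(alignment_id)
    if aligned:
        return f"activate {model_ids} for LCM operation"
    return "proceed with offline/online model identification (re-alignment)"

print(network_on_confirmation(1))  # activate [101, 102] for LCM operation
print(network_on_confirmation(2))  # proceed with offline/online model identification (re-alignment)
```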
Figure 1 shows an exemplary table of the mapping relation between the model alignment ID and the assigned model ID list. In this example, for the mapping relation between the model alignment ID and the assigned model ID list, this mapping relation table can be referenced for the further offline/online model identification process between the network side and the UE side. The model alignment ID can also indicate whether the assigned model ID(s) are valid or need to be updated, depending on the timing record of model ID assignment (e.g., using a threshold value). There can be multiple mapping relation tables based on different time/site, in association with other parameters such as ML conditions, functionalities, datasets, and applications. The mapping relation table can be dynamically updated via L1/L2 signaling when any new assigned model ID(s) becomes available.
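A minimal sketch of maintaining multiple mapping relation tables keyed by site and time window, with a dynamic update when a newly assigned model ID becomes available, is given below. The keys, table contents, and update function are hypothetical illustrations rather than 3GPP-defined signaling.

```python
# Several mapping tables, selected by (site, time window); each maps a model
# alignment ID to its assigned model ID list and can be updated dynamically
# (e.g., on an L1/L2-signaled update) when a new model ID is assigned.
from collections import defaultdict
from typing import Dict, List, Tuple

MappingTable = Dict[int, List[int]]                 # alignment ID -> model IDs
tables: Dict[Tuple[str, str], MappingTable] = defaultdict(dict)

tables[("site-A", "day")] = {1: [101, 102], 2: [103]}
tables[("site-A", "night")] = {1: [104]}

def dynamic_update(site: str, window: str, alignment_id: int, new_model_id: int) -> None:
    """Append a newly assigned model ID to the relevant table entry."""
    tables[(site, window)].setdefault(alignment_id, []).append(new_model_id)

dynamic_update("site-A", "day", 2, 105)
print(tables[("site-A", "day")])  # {1: [101, 102], 2: [103, 105]}
```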
Figure 2 shows an exemplary flow chart of configuring model alignment identification at the network side. In this example, a model alignment ID (or index information) can be pre-configured to indicate varying model ID assignment events. Using the pre-configured model alignment ID, it can be decided whether model identification should proceed for model ID re-alignment if necessary. The pre-configured model alignment ID information can be sent to the UE via L1/L2 or RRC signaling, and any additional update of the model alignment ID information can also be sent via L1/L2 or RRC signaling. The mapping relation between the model alignment ID and the assigned model ID list can also be sent via system information or a dedicated RRC message.
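As a small illustration of the signaling options mentioned above, one could imagine network-side logic that chooses a container for each piece of alignment information: L1/L2 or RRC signaling for the alignment ID configuration and its updates, and system information or a dedicated RRC message for the alignment-ID-to-model-ID mapping. The selection rule and payload labels below are purely hypothetical assumptions.

```python
# Hypothetical selection of the signaling container for alignment information.
def select_container(payload: str, ue_connected: bool, dynamic_update: bool) -> str:
    if payload == "alignment_to_model_mapping":
        # Mapping relation: dedicated RRC when connected, else system information.
        return "dedicated RRC message" if ue_connected else "system information"
    if payload == "alignment_id_config":
        # Alignment ID configuration/updates: L1/L2 for dynamic changes, else RRC.
        return "L1/L2 signaling" if dynamic_update else "RRC signaling"
    raise ValueError(f"unknown payload type: {payload}")

print(select_container("alignment_id_config", ue_connected=True, dynamic_update=False))
# RRC signaling
print(select_container("alignment_to_model_mapping", ue_connected=False, dynamic_update=False))
# system information
```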
Figure 3 shows an exemplary flow chart of confirming the assigned model(s) using the model alignment ID. In this example, the UE can activate any matched model ID(s) assigned based on the model alignment ID provided by the network side. If not, model re-alignment can proceed to update the model assignment.
Figure 4 shows an exemplary flow chart of updating the assigned model IDs. In this example, if any assigned model ID(s) is outdated, offline/online model identification is processed to update the model ID assignment with the associated model alignment ID. Any measure to determine the validity of the assigned model ID(s) with the associated model alignment ID can be based on the configured threshold value.
Figure 5 shows an exemplary signaling flow of aligning models between the network side and the UE side. In this example, the network side provides the configured ML information with the model alignment ID. The UE side identifies whether the assigned model IDs are available to match the indicated model alignment ID. A confirmation message is sent to the network side so that either the offline or the online model identification process can proceed. An indication message about the offline or online model identification process can be sent to the UE side if necessary. Based on the requested model alignment ID, the assigned model ID(s) matched with the model alignment ID can then be activated for LCM operation.
Claims
1. A method of advanced model assignment signaling by configuring model alignment ID or index information to indicate varying model ID assignment events in a wireless communication system, comprising:
• Determining the assigned models with the associated model alignment;
• Setting threshold values for validity of the assigned model ID(s) with the associated model alignment ID;
• Providing the configured model alignment information with the assigned model information;
• Updating mapping relation information about model assignment and model alignment changes.
2. The method according to previous claim 1, wherein model identification using the pre-configured model alignment ID can be decided to proceed for model ID re-alignment if necessary.
3. The method according to one of previous claims, wherein the pre-configured model alignment ID information can be sent to UE via L1/L2 or RRC signaling.
4. The method according to one of previous claims, wherein any additional update about model alignment ID information can be also sent via L1/L2 or RRC signaling.
5. The method according to one of previous claims, wherein mapping relation between model alignment ID and the assigned model ID list can be also sent via system information or dedicated RRC message.
6. The method according to one of previous claims, wherein 1-bit signaling can be used to determine availability of offline model ID assignment before online model ID assignment is processed with offline model ID assignment or without offline model ID assignment.
7. The method according to one of previous claims, wherein indication signaling with any specific model alignment ID(s) can be sent to check if UE has the aligned model ID(s) along with confirmation message sent back by UE with or without the aligned model ID information.
8. The method according to one of previous claims, wherein either network side or UE side can send the request message if online model ID assignment is needed.
9. The method according to one of previous claims, wherein online model ID assignment can be triggered for model ID re-alignment when the aligned model ID information at UE side changes or offline model ID assignment is not available.
10. The method according to one of previous claims, wherein mapping relation table between the model alignment ID and the assigned model ID list can be referenced for further offline/online model identification process between network side and UE side.
11. The method according to one of previous claims, wherein model alignment ID can also indicate if the assigned model ID(s) can be valid or need to be updated depending on timing record of model ID assignment (e.g., using threshold value).
12. The method according to one of previous claims, wherein there can be multiple mapping relation tables based on different time/site in association with other parameters such as ML conditions, functionalities, dataset, applications.
13. The method according to one of previous claims, wherein mapping relation table can be dynamically updated via L1/L2 signaling when any new assigned model ID(s) becomes available.
14. The method according to one of previous claims, wherein model identification via offline/online is processed to update model ID assignment with the associated model alignment ID if any assigned model ID(s) is outdated.
15. The method according to one of previous claims, wherein any measure to determine validity of the assigned model ID(s) with the associated model alignment ID can be based on the configured threshold value.
16. The method according to one of previous claims, wherein network side provides the configured ML information with model alignment ID.
17. The method according to one of previous claims, wherein UE side identifies if the assigned model IDs are available to match with the indicated model alignment ID and confirmation message is sent to network side so that either offline or online model identification process can proceed.
18. The method according to one of previous claims, wherein indication message about offline or online model identification process can be sent to UE side if necessary.
19. The method according to one of previous claims, wherein the assigned model ID(s) matched with model alignment ID can be activated for LCM operation for the requested model alignment ID.
20. Apparatus for advanced model assignment signaling by configuring model alignment ID or index information to indicate varying model ID assignment events in a wireless communication system, the apparatus comprising a wireless transceiver, a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement steps of the claims 1 to 19.
21. User Equipment comprising an apparatus according to claim 20.
22. gNB comprising an apparatus according to claim 20.
23. Wireless communication system for advanced model assignment signaling by configuring model alignment ID or index information to indicate varying model ID assignment events, wherein the wireless communication system comprises at least a user equipment according to claim 21 and at least a gNB according to claim 22, whereby the user equipment and the gNB each comprise a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement steps of the claims 1 to 19.