WO2025087719A1 - Method of offloading a ml lcm operation in a wireless network - Google Patents
- Publication number: WO2025087719A1 (application PCT/EP2024/078905)
- Authority: WIPO (PCT)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- H04W 24/02 — Wireless communication networks; supervisory, monitoring or testing arrangements; arrangements for optimising operational condition
- H04W 88/04 — Wireless communication networks; terminal devices adapted for relaying to or from another terminal or user
Definitions
- Figure 1 is an exemplary table of a mapping relationship about offloadable LCM operations and reference ML capabilities
- Figure 2 is an exemplary flow chart of a network sided behavior for LCM operation offloading
- Figure 3 is an exemplary flow chart of a relay UE sided behavior for LCM operation offloading
- Figure 4 is an exemplary flow chart of a target UE sided behavior for LCM operation offloading
- Figure 5 is an exemplary flow chart of a target UE sided behavior for LCM operation offloading while in an in-coverage location.
- a more general term “network node” may be used and may correspond to any type of radio network node or any network node, which communicates with a UE (directly or via another node) and/or with another network node.
- examples of network nodes are NodeB, MeNB, eNB, a network node belonging to MCG or SCG, base station (BS), multi-standard radio (MSR) radio node such as MSR BS, eNodeB, gNodeB, network controller, radio network controller (RNC), base station controller (BSC), relay, donor node controlling relay, base transceiver station (BTS), access point (AP), transmission points, transmission nodes, RRU, RRH, nodes in a distributed antenna system (DAS), core network node (e.g., Mobile Switching Center (MSC), Mobility Management Entity (MME)), Operations & Maintenance (O&M), Operations Support System (OSS), Self Optimized Network (SON), positioning node (e.g., Evolved Serving Mobile Location Centre (E-SMLC)), Minimization of Drive Tests (MDT), test equipment (physical node or software), another UE, etc.
- the non-limiting term user equipment (UE) or wireless device may be used and may refer to any type of wireless device communicating with a network node and/or with another UE in a cellular or mobile communication system.
- examples of a UE are target device, device to device (D2D) UE, machine type UE or UE capable of machine to machine (M2M) communication, PDA, PAD, tablet, mobile terminals, smart phone, laptop embedded equipment (LEE), laptop mounted equipment (LME), USB dongles, UE category M1, UE category M2, ProSe UE, V2V UE, V2X UE, etc.
- terminologies such as base station/gNodeB and UE should be considered non-limiting and in particular do not imply a certain hierarchical relation between the two; in general, “gNodeB” could be considered as device 1 and “UE” could be considered as device 2 and these two devices communicate with each other over some radio channel. And in the following the transmitter or receiver could be either gNodeB (gNB), or UE.
- aspects of the embodiments may be embodied as a system, apparatus, method, or computer program product.
- embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects.
- the disclosed embodiments may be implemented as a hardware circuit comprising custom very-large-scale integration (“VLSI”) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components.
- the disclosed embodiments may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like.
- the disclosed embodiments may include one or more physical or logical blocks of executable code which may, for instance, be organized as an object, procedure, or function.
- embodiments may take the form of a computer program product embodied in one or more computer readable storage devices storing machine readable code, computer readable code, and/or program code, referred to hereafter as code.
- the storage devices may be tangible, non-transitory, and/or non-transmission.
- the storage devices may not embody signals. In a certain embodiment, the storage devices only employ signals for accessing code.
- the computer readable medium may be a computer readable storage medium.
- the computer readable storage medium may be a storage device storing the code.
- the storage device may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, holographic, micromechanical, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- More specific examples (a non-exhaustive list) of the storage device would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random-access memory (“RAM”), a read-only memory (“ROM”), an erasable programmable read-only memory (“EPROM” or Flash memory), a portable compact disc read-only memory (“CD-ROM”), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- Code for carrying out operations for embodiments may be any number of lines and may be written in any combination of one or more programming languages including an object-oriented programming language such as Python, Ruby, Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language, or the like, and/or machine languages such as assembly languages.
- the code may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user’s computer through any type of network, including a local area network (“LAN”), wireless LAN (“WLAN”), or a wide area network (“WAN”), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider (“ISP”)).
- the code may also be stored in a storage device that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the storage device produce an article of manufacture including instructions which implement the function/act specified in the flowchart diagrams and/or block diagrams.
- the code may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer implemented process such that the code which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams.
- each block in the flowchart diagrams and/or block diagrams may represent a module, segment, or portion of code, which includes one or more executable instructions of the code for implementing the specified logical function(s).
- An AI/ML lifecycle can be split into several stages such as data collection/pre-processing, model training, model testing/validation, model deployment/update, model monitoring, model switching/selection etc., where each stage is equally important to achieve target performance with any specific model(s).
- one of the challenging issues is managing the lifecycle of the AI/ML model. This is mainly because data/model drift occurs during model deployment/inference, which results in performance degradation of the AI/ML model. Fundamentally, the statistical properties of the dataset change after the model is deployed, and model inference capability is also impacted when unseen data is used as input.
- the statistical properties of the dataset and the relationship between input and output for the trained model can change when drift occurs.
- model adaptation is required to support operations such as model switching, re-training, fallback, etc.
- in an AI/ML model enabled wireless communication network, it is then important to consider how to handle adaptation of the AI/ML model under operations such as model training, inference, monitoring, updating, etc.
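To make the drift-and-adaptation loop above concrete, the following is a minimal sketch (not part of the disclosure) of how a monitoring stage might flag dataset drift and pick an adaptation action such as fallback or re-training; the statistics, thresholds, and function names are illustrative assumptions.

```python
import statistics

def detect_drift(train_stats, recent_inputs, threshold=2.0):
    """Flag drift when the mean of recent inference inputs deviates from
    the training-time mean by more than `threshold` training std-devs.
    (A deliberately simple stand-in for a real drift detector.)"""
    mean, stdev = train_stats
    recent_mean = statistics.fmean(recent_inputs)
    return abs(recent_mean - mean) > threshold * stdev

def select_action(drifted, fallback_available):
    """Pick a model-adaptation action once monitoring has run."""
    if not drifted:
        return "keep"          # continue inference with the current model
    return "fallback" if fallback_available else "retrain"
```

A monitoring loop would call `detect_drift` on each batch of inference inputs and feed the result to `select_action` to drive model switching, fallback, or re-training.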
- ML applicable conditions for LCM (lifecycle management) operations can be significantly changed with different use cases and environmental properties.
- AI/ML based techniques are currently applied to many different applications and 3GPP also started to work on its technical investigation to apply to multiple use cases based on the observed potential gains.
- model performance such as inferencing and/or training is dependent on different model execution environments with varying configuration parameters.
- a first aspect of this invention discloses a method of offloading a ML LCM operation in a wireless network.
- the wireless network at least comprises a base station (gNB), a target UE being in an out-of-coverage location, and a set of relay UEs containing relay UEs not being in an out-of-coverage location. The method comprises the steps:
- the ML capability levels are a pre-configured mapping relationship between LCM operations and respective threshold values or ranges, the ML capability levels being transmitted via system information or a dedicated RRC message.
- the ML capabilities are measured by the base station or by the at least first relay UE.
- the offloading the LCM operation is performed across multiple of the relay UEs.
- the pre-configured mapping relationship comprises:
- offloadable LCM operations being one of: data collection, model training, model inferencing, or model monitoring; and/or
- matching reference ML capabilities, the matching reference ML capabilities being one of: threshold values or ranges indicating any combinations of parameters reflecting device ML applicable conditions and/or environmental ML applicable conditions.
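The pre-configured mapping relationship can be pictured as a small indexed table. The sketch below is a hypothetical encoding only; the operation indices, parameter names, and threshold values are invented for illustration and are not taken from the disclosure or its Figure 1.

```python
# Hypothetical encoding of the mapping relationship: each indexed
# offloadable LCM operation maps to reference ML capability thresholds
# covering device conditions (memory, compute) and environmental
# conditions (wireless link quality).
MAPPING = {
    0: {"operation": "data_collection",
        "min_memory_mb": 64,  "min_compute_gflops": 1,  "min_link_mbps": 5},
    1: {"operation": "model_training",
        "min_memory_mb": 512, "min_compute_gflops": 50, "min_link_mbps": 20},
    2: {"operation": "model_inferencing",
        "min_memory_mb": 128, "min_compute_gflops": 10, "min_link_mbps": 10},
    3: {"operation": "model_monitoring",
        "min_memory_mb": 64,  "min_compute_gflops": 2,  "min_link_mbps": 5},
}

def meets_reference(capability, index):
    """Check a reported relay ML capability against the pre-configured
    reference thresholds for the indexed offloadable LCM operation."""
    ref = MAPPING[index]
    return (capability["memory_mb"] >= ref["min_memory_mb"]
            and capability["compute_gflops"] >= ref["min_compute_gflops"]
            and capability["link_mbps"] >= ref["min_link_mbps"])
```

In this sketch a relay capable of inferencing but not training would pass the check for index 2 and fail it for index 1.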
- multiple target UEs are connected with the at least first relay UE to receive a re-configured ML configuration message via a dedicated RRC re-configuration message.
- the at least first relay UE sends an updated ML configuration message to the target UE or multiple target UEs to re-configure the LCM operation.
- the ML configuration message is exchanged via a sidelink.
- the sidelink is established via L1/L2/L3 signaling.
- ML use cases being one of:
- all related model information and ML configurations are initially set to be provided to the at least first relay UE and/or the target UE and/or the multiple target UEs when the base station offloads the LCM operation;
- a second aspect of this invention discloses an apparatus for offloading a ML LCM operation in a wireless communication system, the apparatus comprising:
- a wireless transceiver, and
- a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement the above-described steps, the apparatus being designed for use in a base station (gNB) or a user equipment (UE).
- a third aspect of this invention discloses an apparatus of offloading a ML LCM operation in a wireless communication system, the apparatus comprising:
- a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement the above-described steps, the apparatus being designed for use in a user equipment (UE).
- a fourth aspect of this invention discloses a base station (gNB) comprising an above-described apparatus.
- a fifth aspect of this invention discloses a User Equipment comprising an above-described apparatus.
- a sixth aspect of this invention discloses a wireless communication system, comprising at least an above-described base station (gNB) and at least an above-described user equipment (UE).
- the network side determines whether to offload the configured LCM operation to relay UE(s) based on the information from the relay UE(s), where the configured LCM operation to be performed by the NW can be offloaded to relay UE(s).
- the network side configures a set of threshold values/ranges to compare with relay ML capabilities to support the requested offloaded LCM operations where configuration information about the threshold can be sent through system information and/or dedicated RRC messages.
- the relay ML capability is measured by the candidate relay UEs to compare with the threshold values/ranges. Alternatively, when the relay ML capability of a candidate relay UE is sent to the network side, the network side can compare the reported relay ML capability with the threshold values/ranges.
- among the candidate relay UEs, one or more relay UEs can be selected to serve the offloaded LCM operation if the relay ML capability is above a threshold value or within a threshold range related to the associated LCM operation.
- distributed LCM operation offloading is performed across multiple relay UEs, as separate offloaded LCM operations can be assigned by the network side based on each relay's ML capabilities.
- a pre-configured mapping relationship information can be used. For example, a finite set of indexed offloadable LCM operations are configured and the matching reference ML capabilities are formed where offloadable LCM operation indicates ML task execution such as data collection, model training/inferencing/monitoring, etc. and reference ML capability contains the threshold values or ranges indicating any combinations of parameters reflecting device ML applicable conditions (e.g., compute power, memory, etc.) and environmental ML applicable conditions (e.g., site, wireless links, etc.).
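As a sketch of the selection step described above, the network side (or a relay UE itself) could compare each candidate's reported capability against the pre-configured threshold and keep the qualifying relays; the scoring scheme, identifiers, and ranking rule here are assumptions for illustration, not part of the disclosure.

```python
def select_relays(candidates, threshold, max_relays=None):
    """Select candidate relay UEs whose reported ML capability score is
    at or above the pre-configured threshold for the offloaded LCM
    operation. `candidates` maps relay UE id -> capability score.
    Returns eligible relays, best-scoring first, optionally capped."""
    eligible = [ue for ue, score in candidates.items() if score >= threshold]
    eligible.sort(key=lambda ue: candidates[ue], reverse=True)
    return eligible if max_relays is None else eligible[:max_relays]
```

With `max_relays=1` this models selecting a single serving relay; without the cap it models selecting a set of relays for a distributed offloaded LCM operation.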
- a group of target UEs to be connected with a relay UE are re-configured to have a ML task execution assigned by the relay UE (e.g., through RRC re-configuration) while the initial ML configuration for target UEs can be received during in-coverage status.
- a relay UE may choose to update/re-configure LCM operations and send updated configuration to target UEs as relay UE is capable of measuring ML capability to distribute LCM operations to its associated target UEs.
- Relay ML and target ML (re-)configuration information for the distributed LCM operation is exchanged through sidelink (e.g., L1/L2/L3 signaling).
- ML models can be applied depending on different ML use cases with different LCM operations.
- network side offloads LCM operations to relay UE(s)
- initially all related model information and ML configuration are set to be provided to relay UE(s) and/or target UE(s).
- when the relay UE takes over the offloaded LCM operation, including model management, the initial information about models and ML configurations from the network side can be re-configured between relay UEs and target UEs if necessary.
- Figure 1 shows an exemplary table of a mapping relationship information about offloadable LCM operation and reference ML capabilities.
- a finite set of indexed offloadable LCM operations are configured and the matching reference ML capabilities are formed where offloadable LCM operation indicates ML task execution such as data collection, model training/inferencing/monitoring, etc.
- reference ML capability contains threshold values or ranges indicating any combinations of parameters reflecting device ML applicable conditions (e.g., compute power, memory, etc.) and environmental ML applicable conditions (e.g., site, wireless links, etc.).
- Figure 2 shows an exemplary flow chart of a network sided behavior for LCM operation offloading.
- the relay ML capability information needs to be measured at the network side or the relay UE side, depending on the implementation scenario.
- Figure 3 shows an exemplary flow chart of a relay UE sided behavior for LCM operation offloading.
- a relay UE measures the relay ML capability and compares it with the pre-configured threshold information.
- the indication message about the measurement is sent to the network side so that it can be confirmed that the relay UE can perform the offloaded LCM operation.
- the network side determines a relay ML re-configuration so that all relay UEs can be selected to execute the offloaded LCM operation, or a single relay UE can be selected, or multiple relay UEs can be selected for the distributed offloaded LCM operation.
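The distributed variant, in which separate offloaded LCM operations are assigned to different relay UEs according to their capabilities, could be sketched as a simple greedy assignment; the required-capability values and tie-breaking by load are illustrative choices, not part of the disclosure.

```python
def distribute_operations(operations, relay_capabilities):
    """Greedily assign each offloaded LCM operation (with a required
    capability level) to the least-loaded relay UE that can support it.
    Returns a mapping operation -> relay id (None when no relay
    qualifies, i.e. the operation stays on the network side)."""
    load = {relay: 0 for relay in relay_capabilities}
    assignment = {}
    # Place the most demanding operations first.
    for op, required in sorted(operations.items(),
                               key=lambda kv: kv[1], reverse=True):
        eligible = [r for r, cap in relay_capabilities.items()
                    if cap >= required]
        if not eligible:
            assignment[op] = None
            continue
        chosen = min(eligible, key=lambda r: load[r])
        assignment[op] = chosen
        load[chosen] += 1
    return assignment
```

In this sketch a high-capability relay absorbs the training task while a weaker relay still contributes by taking data collection, matching the idea of assigning separate offloaded LCM operations per relay ML capability.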
- Figure 4 shows an exemplary flow chart of a target UE sided behavior for LCM operation offloading.
- one or more target UEs can be connected to the relay UE so as to perform the LCM operation and the selected target UEs receive the indication message from the relay UE about activating target UE ML execution.
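On the target UE side, the activation indication from the relay UE can be modeled as a tiny state machine; the state and message names below are hypothetical and only illustrate that ML execution starts on the relay's indication and stops on deactivation or re-configuration.

```python
def target_ue_handle(state, message):
    """Minimal target-UE state machine: ML execution is activated only
    after the relay UE's activation indication, and returns to the
    configured state on deactivation or re-configuration. Unknown
    messages leave the state unchanged."""
    transitions = {
        ("configured", "activate_ml"): "executing",
        ("executing", "deactivate_ml"): "configured",
        ("executing", "reconfigure"): "configured",
    }
    return transitions.get((state, message), state)
```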
Abstract
Methods of pre-configuring AI/ML operations when multi-connectivity links are enabled in a wireless mobile communication system including base stations (e.g., gNB) and mobile stations (e.g., UE) are presented. If an AI/ML model is applied to a radio access network, model performance such as inferencing and/or training is dependent on different model execution environments between the network side and the UE side. Therefore, by offloading model operation into a relay UE from the network side, the potential performance impact due to out-of-coverage location can be reduced with model performance enhancement.
Description
TITLE
Method of offloading a ML LCM operation in a wireless network
TECHNICAL FIELD
The present disclosure relates to AI/ML operation pre-configuration, where techniques for re-configuring and signaling the specific information to enhance relay-device based model performance are presented.
BACKGROUND
In 3GPP (Third Generation Partnership Project), one of the selected study items in the approved Release 18 package is AI/ML (artificial intelligence/machine learning), as described in the related document (RP-213599) addressed in the 3GPP TSG (Technical Specification Group) RAN (Radio Access Network) meeting #94e. The official title of the AI/ML study item is “Study on AI/ML for NR Air Interface”, and currently, RAN WG1 (Working Group 1) and WG2 are actively working on a specification. The goal of this study item is to identify a common AI/ML framework and areas of obtaining gains using AI/ML based techniques with use cases.
According to 3GPP, the main objective of this study item is to study an AI/ML framework for the air interface with target use cases by considering performance, complexity, and potential specification impacts. In particular, AI/ML models, terminology and descriptions to identify common and specific characteristics for a framework will be one of the key work scopes. Regarding AI/ML frameworks, various aspects are under consideration for investigation, and one of the key items is the lifecycle management (LCM) of AI/ML models, where multiple stages are included as mandatory, such as model training, model deployment, model inference, model monitoring, model updating, etc.
Earlier, in 3GPP TR 37.817 for Release 17, titled “Study on enhancement for Data Collection for NR and EN-DC”, UE (user equipment) mobility was also considered as one of the AI/ML use cases, and one of the scenarios for model training/inference is that both functions are located within a RAN node. Subsequently, in Release 18, the new work item “Artificial Intelligence (AI)/Machine Learning (ML) for NG-RAN” was initiated to specify data collection enhancements and signaling support within existing NG-RAN interfaces and architectures, where mobility optimization is included as one of the target use cases.
For the above active standardization works, UE ML conditions to support RAN-based AI/ML models can be considered very significant for both the gNB and the UE to meet any desired model operations (e.g., model training, inference, selection, switching, update, monitoring, etc.). Currently, there is no specification defined for signaling methods or gNB-UE behaviors about the distribution of split LCM operations over a sidelink relay link when RAN-based AI/ML model operation proceeds. Therefore, it is necessary to investigate any specification impact by considering model operation through a sidelink relay link. Any mechanism of additional signaling methods and/or gNB-UE behaviors also needs to be addressed to support relay-based model operation between a gNB and a UE, so that any potential impact of UE ML conditions on model operation in RAN is minimized while maintaining service continuity.
On the other hand, in 3GPP the terminologies of the working list contain a set of high-level descriptions about AI/ML model training, inference, validation, testing, UE-side model, network-side model, one-sided model, two-sided model, etc. UE-sided models and network-sided models indicate that AI/ML models are located for operation on the UE side and the network side, respectively. In a similar context, one-sided and two-sided models indicate that an AI/ML model is located on one side or on two sides, respectively.
All signaling aspects to support the above items are currently not specified yet as definitions of terminologies are still under discussion for further modifications. Any potential standards impact with new or enhanced mechanisms of supporting AI/ML models with the above working list items is one key area for investigation in the AI/ML study item.
US 2021 203 565 A1 discloses methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training and using machine learning models to classify network traffic as IoT traffic or non-IoT traffic and managing the traffic based on the classification. In some implementations, machine learning parameters of a local machine learning model trained by an edge device are received from each of at least a subset of a set of edge devices. The machine learning parameters received from an edge device are parameters of the local machine learning model trained by the edge device based on local network traffic processed by the edge device, and serve to classify the network traffic as Internet of Things (IoT) traffic or non-IoT traffic. A global machine learning model is generated, using the machine learning parameters, to classify network traffic processed by edge devices as IoT traffic or non-IoT traffic.
US 2022 261 697 A1 discloses systems and methods for federated machine learning. A central system receives satellite analytics artifacts from a plurality of satellite site systems and generates a central machine learning model based on the satellite analytics artifacts. A plurality of federated machine learning epochs are executed. At each epoch, the central system transmits the central machine learning model to the plurality of satellite site systems, and then receives in return, from each satellite site system, a respective set of satellite values for a set of weights of the model, wherein the satellite values are generated by the respective satellite site system based on a respective local dataset of the satellite site system. At each epoch, the central system then generates an updated version of the central machine learning model based on the satellite values received from the satellite site systems.
The disclosure of US 2023 037 893 A1 provides a method for generating a real-time radio coverage map in a wireless network by a network apparatus. The method includes: receiving real-time geospatial information from one or more geographical sources in the wireless network; determining handover information of at least one user equipment (UE) in the wireless network from a plurality of base stations based on the real-time geospatial information; and generating the real-time radio coverage map based on the handover information of the at least one UE and the real-time geospatial information.
WO 2022 015 008 A1 discloses a method for determining a target cell for handover of a UE. The method includes monitoring, by a mobility management platform, multiple network characteristics associated with multiple UEs and determining, by the mobility management platform, a correlation between the multiple UEs based on the multiple network characteristics associated with the multiple UEs and location information of the multiple UEs. The method includes receiving, by the mobility management platform, location information from a UE of the multiple UEs and determining, by the mobility management platform, a measurement report corresponding to the location information received from the UE based on the correlation. And the method includes determining, by the mobility management platform, the target cell for the handover of the UE based on the location information received from the UE and the measurement report.
WO 2022 205 023 A1 discloses systems, methods, and apparatus on wireless network architecture and air interface. In some embodiments, sensing agents communicate with user equipments (UEs) or nodes using one of multiple sensing modes through non-sensing-based or sensing-based links, and/or artificial intelligence (AI) agents communicate with UEs or nodes using one of multiple AI modes through non-AI-based or AI-based links. AI and sensing may work independently or together. For example, a sensing service request may be sent by an AI block to a sensing block to obtain sensing data from the sensing block, and the AI block may generate a configuration based on the sensing data. Various other features, related to example interfaces, channels, and other aspects of AI-enabled and/or sensing-enabled communications, for example, are also disclosed.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosed invention will be further discussed in the following based on preferred embodiments presented in the attached drawings. However, the disclosed invention may be embodied in many different forms and should not be construed as limited to
said preferred embodiments. Rather, said preferred embodiments are provided for thoroughness and completeness, and fully convey the scope of the invention to the skilled person. The following detailed description refers to the attached drawings, in which:
Figure 1 is an exemplary table of a mapping relationship about offloadable LCM operations and reference ML capabilities;
Figure 2 is an exemplary flow chart of a network sided behavior for LCM operation offloading;
Figure 3 is an exemplary flow chart of a relay UE sided behavior for LCM operation offloading;
Figure 4 is an exemplary flow chart of a target UE sided behavior for LCM operation offloading; and
Figure 5 is an exemplary flow chart of a target UE sided behavior for LCM operation offloading during in-coverage location.
DETAILED DESCRIPTION
The detailed description set forth below, with reference to the annexed drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In particular, although terminology from 3GPP 5G NR may be used in this disclosure to exemplify embodiments herein, this should not be seen as limiting the scope of the invention.
Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Other embodiments, however, are contained within the scope of the subject matter disclosed herein; the disclosed subject matter should not be construed as limited to only the embodiments set forth herein. Rather, these embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art.
Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein may be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments may apply to any other embodiments, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the following description.
In some embodiments, a more general term “network node” may be used and may correspond to any type of radio network node or any network node, which communicates with a UE (directly or via another node) and/or with another network node. Examples of network nodes are NodeB, MeNB, ENB, a network node belonging to MCG or SCG, base station (BS), multi-standard radio (MSR) radio node such as MSR BS, eNodeB, gNodeB, network controller, radio network controller (RNC), base station controller (BSC), relay, donor node controlling relay, base transceiver station (BTS), access point (AP), transmission points, transmission nodes, RRU, RRH, nodes in distributed antenna system (DAS), core network node (e.g. Mobile Switching Center (MSC), Mobility Management Entity (MME), etc.), Operations & Maintenance (O&M), Operations Support System (OSS), Self Optimized Network (SON), positioning node (e.g. Evolved- Serving Mobile Location
Centre (E-SMLC)), Minimization of Drive Tests (MDT), test equipment (physical node or software), another UE, etc.
In some embodiments, the non-limiting term user equipment (UE) or wireless device may be used and may refer to any type of wireless device communicating with a network node and/or with another UE in a cellular or mobile communication system. Examples of UE are target device, device to device (D2D) UE, machine type UE or UE capable of machine to machine (M2M) communication, PDA, PAD, Tablet, mobile terminals, smart phone, laptop embedded equipment (LEE), laptop mounted equipment (LME), USB dongles, UE category M1, UE category M2, ProSe UE, V2V UE, V2X UE, etc.
Additionally, terminologies such as base station/gNodeB and UE should be considered non-limiting and in particular do not imply a certain hierarchical relation between the two; in general, "gNodeB" could be considered as device 1 and "UE" could be considered as device 2, and these two devices communicate with each other over some radio channel. In the following, the transmitter or receiver could be either a gNodeB (gNB) or a UE.
As will be appreciated by one skilled in the art, aspects of the embodiments may be embodied as a system, apparatus, method, or computer program product.
Accordingly, embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects.
For example, the disclosed embodiments may be implemented as a hardware circuit comprising custom very-large-scale integration (“VLSI”) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. The disclosed embodiments may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like. As another example, the disclosed embodiments may include one or more physical or logical
blocks of executable code which may, for instance, be organized as an object, procedure, or function.
Furthermore, embodiments may take the form of a computer program product embodied in one or more computer readable storage devices storing machine readable code, computer readable code, and/or program code, referred to hereafter as code. The storage devices may be tangible, non-transitory, and/or non-transmission. The storage devices may not embody signals. In a certain embodiment, the storage devices only employ signals for accessing code.
Any combination of one or more computer readable medium may be utilized. The computer readable medium may be a computer readable storage medium. The computer readable storage medium may be a storage device storing the code. The storage device may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, holographic, micromechanical, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
More specific examples (a non-exhaustive list) of the storage device would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random-access memory (“RAM”), a read-only memory (“ROM”), an erasable programmable read-only memory (“EPROM” or Flash memory), a portable compact disc read-only memory (“CD-ROM”), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Code for carrying out operations for embodiments may be any number of lines and may be written in any combination of one or more programming languages including an object-oriented programming language such as Python, Ruby, Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language, or the like, and/or machine languages such as assembly languages. The code may execute entirely on the user’s computer, partly
on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user’s computer through any type of network, including a local area network (“LAN”), wireless LAN (“WLAN”), or a wide area network (“WAN”), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider (“ISP”)).
Furthermore, the described features, structures, or characteristics of the embodiments may be combined in any suitable manner. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments. One skilled in the relevant art will recognize, however, that embodiments may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth.
In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of an embodiment. Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in one embodiment”, “in an embodiment”, and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment, but mean “one or more but not all embodiments” unless expressly specified otherwise. The terms “including”, “comprising”, “having”, and variations thereof mean “including but not limited to”, unless expressly specified otherwise. An enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms “a”, “an”, and “the” also refer to “one or more” unless expressly specified otherwise.
Aspects of the embodiments are described below with reference to schematic flowchart diagrams and/or schematic block diagrams of methods, apparatuses,
systems, and computer program products according to embodiments. It will be understood that each block of the schematic flowchart diagrams and/or schematic block diagrams, and combinations of blocks in the schematic flowchart diagrams and/or schematic block diagrams, can be implemented by code. This code may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which are executed via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams.
The code may also be stored in a storage device that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the storage device produce an article of manufacture including instructions which implement the function/act specified in the flowchart diagrams and/or block diagrams.
The code may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer implemented process such that the code which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams.
The flowchart diagrams and/or block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, systems, methods, and program products according to various embodiments. In this regard, each block in the flowchart diagrams and/or block diagrams may represent a module, segment, or portion of code, which includes one or more executable instructions of the code for implementing the specified logical function(s).
It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two
blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more blocks, or portions thereof, of the illustrated figures.
Although various arrow types and line types may be employed in the flowchart and/or block diagrams, they are understood not to limit the scope of the corresponding embodiments. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the depicted embodiment. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted embodiment. It will also be noted that each block of the block diagrams and/or flowchart diagrams, and combinations of blocks in the block diagrams and/or flowchart diagrams, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and code.
The description of elements in each figure may refer to elements of proceeding figures. Like numbers refer to like elements in all figures, including alternate embodiments of like elements.
The following explanation will provide the detailed description of the mechanism about pre-configuring AI/ML-based model before a handover occurrence in a wireless mobile communication system including base stations (e.g., gNB) and mobile stations (e.g., UE).
An AI/ML lifecycle can be split into several stages such as data collection/pre-processing, model training, model testing/validation, model deployment/update, model monitoring, model switching/selection, etc., where each stage is equally important to achieve target performance with any specific model(s). In applying an AI/ML model to any use case or application, one of the challenging issues is managing the lifecycle of the AI/ML model. This is mainly because data/model drift occurs during model deployment/inference, which results in performance degradation of the AI/ML model. Fundamentally, the statistical properties of the dataset change after the model is deployed, and model inference capability is also impacted when unseen data is used as input.
In a similar aspect, the statistical property of dataset and the relationship between input and output for the trained model can be changed with drift occurrence. Then model adaptation is required to support operations such as model switching, re-training, fallback, etc. When AI/ML model enabled wireless communication network is deployed, it is then important to consider how to handle adaptation of AI/ML model under operations such as model training, inference, monitoring, updating, etc.
Based on specific network-UE ML collaboration in deployment scenarios (e.g., UE mobility case), ML applicable conditions for LCM (lifecycle management) operations can be significantly changed with different use cases and environmental properties. AI/ML based techniques are currently applied to many different applications and 3GPP also started to work on its technical investigation to apply to multiple use cases based on the observed potential gains.
In this context, model performance such as inferencing and/or training is dependent on different model execution environments with varying configuration parameters.
To handle this issue, it is highly important to track model performance and re-configure the model during collaboration between UE and gNB, corresponding to the different environments between the UE and different gNBs. When an AI/ML model enabled wireless communication network is deployed, it is important to consider how to handle AI/ML model activation and re-configuration for wireless devices under operations such as model training, inference, updating, etc. When the UE is in an out-of-coverage location, execution of the configured ML model may fail.
A first aspect of this invention discloses a method of offloading a ML LCM operation in a wireless network. The wireless network at least comprises a base station (gNB), a target UE being in an out-of-coverage location, and a set of relay UEs containing relay UEs not being in an out-of-coverage location. The method comprises the steps:
• Generate ML capability levels which define requirements for supporting the LCM operation,
• Receive the ML capability levels,
• For each relay UE, measure ML capabilities based on the ML capability levels,
• For each relay UE, compare the measured ML capabilities against the ML capability levels,
• For each relay UE, send an indication message based on the respective comparisons of the ML capabilities, indicating whether the relay UE is capable of executing the LCM operation,
• Select at least a first relay UE from the relay UEs, the at least first relay UE being capable of executing the LCM operation,
• Configure the LCM operation via a system information or a dedicated RRC message to the at least first relay UE, and
• Execute the LCM operation by the at least first relay UE.
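The steps above can be sketched as follows. This is a minimal illustrative sketch only: the class and parameter names (`CapabilityLevel`, `min_compute`, `min_memory_mb`, etc.) are hypothetical, and the actual system information / RRC signaling is abstracted away.

```python
from dataclasses import dataclass

@dataclass
class CapabilityLevel:
    """A generated ML capability level defining requirements
    for supporting an LCM operation (illustrative parameters)."""
    operation: str          # e.g. "model_training"
    min_compute: float      # required compute power (arbitrary units)
    min_memory_mb: int      # required free memory

@dataclass
class RelayUE:
    ue_id: int
    compute: float
    memory_mb: int

    def measure_and_compare(self, level: CapabilityLevel) -> bool:
        """Relay-UE-side step: measure own ML capabilities and
        compare them against the received ML capability level."""
        return self.compute >= level.min_compute and self.memory_mb >= level.min_memory_mb

def offload_lcm_operation(level: CapabilityLevel, relays: list[RelayUE]) -> list[int]:
    """Network-side steps: collect indication messages from each relay UE
    and select the relay UEs capable of executing the LCM operation.
    Configuring the LCM operation via system information or a dedicated
    RRC message would follow for the selected relay UEs."""
    indications = {r.ue_id: r.measure_and_compare(level) for r in relays}
    return [ue_id for ue_id, capable in indications.items() if capable]
```
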
Advantageously, the ML capability levels are a pre-configured mapping relationship between LCM operations and respective threshold values or ranges, the ML capability levels being transmitted via system information or a dedicated RRC message.
Advantageously, the ML capabilities are measured by the base station or the at least first relay UE.
Advantageously, the measured ML capabilities are compared against the ML capability levels by the base station or the at least first relay UE.
Advantageously, the offloading of the LCM operation is performed across multiple of the relay UEs.
Advantageously, the pre-configured mapping relationship comprises:
• A finite set of indexed offloadable LCM operations, the offloadable LCM operations being one of: data collection, model training, model inferencing, or model monitoring; and/or
• Matching reference ML capabilities, the matching reference ML capabilities being one of: threshold values or ranges indicating any combinations of parameters reflecting device ML applicable conditions, and/or environmental ML applicable conditions.
Advantageously, multiple target UEs are connected with the at least first relay UE to receive a re-configured ML configuration message via a dedicated RRC re-configuration message.
Advantageously, the at least first relay UE sends an updated ML configuration message to the target UE or multiple target UEs to re-configure the LCM operation.
Advantageously, the ML configuration message is exchanged via a sidelink.
Advantageously, the sidelink is established via L1/L2/L3 signaling.
Advantageously, a number of different ML models can be applied for the LCM operation depending on different ML use cases, the ML use cases being one of:
• All related model information and ML configurations are initially set to be provided to the at least first relay UE and/or the target UE and/or the multiple target UEs when the base station offloads the LCM operation;
• Initial ML model information and ML configurations are provided by the base station while a re-configuration takes place between relay UEs and target UEs once the relay UEs begin the offloading of the LCM operation including model management.
A second aspect of this invention discloses an apparatus of offloading a ML LCM operation in a wireless communication system, the apparatus comprising:
• a wireless transceiver, and
• a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement above-described steps and the apparatus is designed for the use in a base station (gNB) or a user equipment (UE).
A third aspect of this invention discloses an apparatus of offloading a ML LCM operation in a wireless communication system, the apparatus comprising:
• a wireless transceiver,
• a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement above-described steps and the apparatus is designed for the use in user equipment (UE).
A fourth aspect of this invention discloses a base station (gNB) comprising an above-described apparatus.
A fifth aspect of this invention discloses a User Equipment comprising an above-described apparatus.
A sixth aspect of this invention discloses a wireless communication system, comprising at least an above-described base station (gNB) and at least an above-described user equipment (UE).
In this method, the network side (e.g., gNB) determines whether to offload the configured LCM operation to relay UE(s) based on the information from the relay UE(s), where the configured LCM operation to be performed by the NW can be offloaded to relay UE(s). The network side configures a set of threshold values/ranges to compare with relay ML capabilities to support the requested offloaded LCM operations, where configuration information about the thresholds can be sent through system information and/or dedicated RRC messages.
When the pre-configured LCM information to be offloaded is sent to candidate relay UEs, the relay ML capability is measured by the candidate relay UEs for comparison with the threshold values/ranges. Alternatively, when the relay ML capability of a candidate relay UE is sent to the network side, the network side can measure/compare the reported relay ML capability with the threshold values/ranges. Among the candidate relay UEs, one or more relay UEs can be selected to serve the offloaded LCM operation if the relay ML capability is above a threshold value or within a threshold range related to the associated LCM operation. When more than one relay UE is selected, distributed LCM operation offloading is performed across multiple relay UEs, as separate offloaded LCM operations can be assigned by the network side based on each relay UE's ML capabilities.
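One way the network side might assign separate offloaded LCM operations across multiple selected relay UEs is a simple greedy assignment, sketched below. This is an assumption for illustration, not a specified 3GPP procedure; the function name and data shapes are hypothetical.

```python
def distribute_lcm_operations(operations: list[str],
                              capable: dict[str, list[int]]) -> dict[str, int]:
    """Greedy sketch of distributed LCM operation offloading:
    assign each offloadable LCM operation to the first relay UE that
    reported itself capable of it and is not yet serving another
    offloaded operation."""
    assignment: dict[str, int] = {}
    used: set[int] = set()
    for op in operations:
        for ue_id in capable.get(op, []):
            if ue_id not in used:
                assignment[op] = ue_id
                used.add(ue_id)
                break  # operation assigned; move to the next one
    return assignment
```
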
Regarding the set of threshold values/ranges to be compared with relay ML capabilities, a pre-configured mapping relationship information can be used. For example, a finite set of indexed offloadable LCM operations are configured and the matching reference ML capabilities are formed where offloadable LCM operation indicates ML task execution such as data collection, model training/inferencing/monitoring, etc. and reference ML capability contains the threshold values or ranges indicating any combinations of parameters reflecting device ML applicable conditions (e.g., compute power, memory, etc.) and environmental ML applicable conditions (e.g., site, wireless links, etc.).
After all relay UEs are determined in a selection process, a group of target UEs to be connected with a relay UE are re-configured to have a ML task execution assigned by the relay UE (e.g., through RRC re-configuration), while the initial ML configuration for target UEs can be received during in-coverage status. When target UEs are in an out-of-coverage location, a relay UE may choose to update/re-configure LCM operations and send the updated configuration to the target UEs, as the relay UE is capable of measuring ML capability to distribute LCM operations to its associated target UEs. Relay ML and target ML (re-)configuration information for the distributed LCM operation is exchanged through sidelink (e.g., L1/L2/L3 signaling). For the distributed LCM operation, a number of different ML models can be applied depending on different ML use cases with different LCM operations. When the network side offloads LCM operations to relay UE(s), initially all related model information and ML configuration are set to be provided to relay UE(s) and/or target UE(s).
When the relay UE takes over the offloaded LCM operation including model management, initial information about models and ML configurations from the network side can be re-configured between relay UEs and target UEs if necessary.
Figure 1 shows an exemplary table of mapping relationship information about offloadable LCM operations and reference ML capabilities. In this example, a finite set of indexed offloadable LCM operations are configured and the matching reference ML capabilities are formed, where an offloadable LCM operation indicates ML task execution such as data collection, model training/inferencing/monitoring, etc., and a reference ML capability contains threshold values or ranges indicating any combinations of parameters reflecting device ML applicable conditions (e.g., compute power, memory, etc.) and environmental ML applicable conditions (e.g., site, wireless links, etc.).
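A mapping relationship of the kind shown in Figure 1 could be represented as an indexed table, sketched below. All indices, operation names, parameters, and threshold values here are illustrative assumptions, not values from the disclosure; a range of (low, None) means "at least low, no upper bound".

```python
# Sketch of a pre-configured mapping relationship between indexed
# offloadable LCM operations and reference ML capabilities (cf. Figure 1).
OFFLOADABLE_LCM_MAP = {
    0: {"operation": "data_collection",
        "reference_capability": {"memory_mb": (128, None)}},
    1: {"operation": "model_training",
        "reference_capability": {"compute_gflops": (5.0, None), "memory_mb": (1024, None)}},
    2: {"operation": "model_inferencing",
        "reference_capability": {"compute_gflops": (0.5, None), "memory_mb": (256, None)}},
    3: {"operation": "model_monitoring",
        "reference_capability": {"memory_mb": (64, None)}},
}

def meets_reference(measured: dict, index: int) -> bool:
    """Check whether measured relay ML capabilities fall within the
    threshold values/ranges of the indexed offloadable LCM operation."""
    ref = OFFLOADABLE_LCM_MAP[index]["reference_capability"]
    for param, (low, high) in ref.items():
        value = measured.get(param)
        if value is None:
            return False  # required parameter was not measured/reported
        if low is not None and value < low:
            return False
        if high is not None and value > high:
            return False
    return True
```
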
Figure 2 shows an exemplary flow chart of a network sided behavior for LCM operation offloading. In this example, the network side (e.g., gNB) determines the ML configuration including pre-configured mapping relationship information about offloading LCM operations to relay UE devices. To identify relay UE(s) that can perform the offloaded LCM operation, the relay ML capability information needs to be measured at the network side or the relay UE side depending on the implementation scenario.
Figure 3 shows an exemplary flow chart of a relay UE sided behavior for LCM operation offloading. In this example, a relay UE measures the relay ML capability and compares it with the pre-configured threshold information. The indication message about the measurement is sent to the network side so that it can be confirmed that the relay UE can perform the offloaded LCM operation. After receiving this indication message, the network side determines a relay ML re-configuration so that all relay UE devices can be selected to execute the offloaded LCM operation, a single relay UE can be selected, or multiple relay UEs can be selected for the distributed offloaded LCM operation.
Figure 4 shows an exemplary flow chart of a target UE sided behavior for LCM operation offloading. In this example, one or more target UEs can be connected to the relay UE so as to perform the LCM operation and the selected target UEs receive the indication message from the relay UE about activating target UE ML execution.
Figure 5 shows an exemplary flow chart of a target UE sided behavior for LCM operation offloading during in-coverage location. In this example, the target UE receives a ML configuration directly from the network side while it is still in an in-coverage location. After it moves to an out-of-coverage location, the ML re-configuration is performed with support of the relay UE so that the target UE can also activate its own ML operation if so indicated.
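The target UE behavior of Figure 5 can be sketched as a small state machine. Class and method names are hypothetical illustrations; the sidelink transport of the re-configuration is abstracted away.

```python
class TargetUE:
    """Sketch of Figure 5: the initial ML configuration is received
    directly from the network while in coverage; after leaving coverage,
    re-configuration and activation arrive via the relay UE."""

    def __init__(self):
        self.in_coverage = True
        self.ml_config = None
        self.ml_active = False

    def receive_config_from_network(self, config: dict) -> None:
        # Direct configuration is only possible in an in-coverage location.
        if self.in_coverage:
            self.ml_config = config

    def leave_coverage(self) -> None:
        self.in_coverage = False

    def receive_reconfig_from_relay(self, config: dict, activate: bool) -> None:
        # In an out-of-coverage location, re-configuration is relayed;
        # the target UE activates its own ML operation if indicated.
        if not self.in_coverage:
            self.ml_config = config
            self.ml_active = activate
```
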
Abbreviations
BWP Bandwidth part
CBG Code block group
CLI Cross Link Interference
CP Cyclic prefix
CQI Channel quality indicator
CPU CSI processing unit
CRB Common resource block
CRC Cyclic redundancy check
CRI CSI-RS Resource Indicator
CSI Channel state information
CSI-RS Channel state information reference signal
CSI-RSRP CSI reference signal received power
CSI-RSRQ CSI reference signal received quality
CSI-SINR CSI signal-to-noise and interference ratio
CW Codeword
DCI Downlink control information
DL Downlink
DM-RS Demodulation reference signals
DRX Discontinuous Reception
EPRE Energy per resource element
IAB-MT Integrated Access and Backhaul - Mobile Terminal
L1-RSRP Layer 1 reference signal received power
LI Layer Indicator
MCS Modulation and coding scheme
PDCCH Physical downlink control channel
PDSCH Physical downlink shared channel
PSS Primary Synchronisation signal
PUCCH Physical uplink control channel
QCL Quasi co-location
PMI Precoding Matrix Indicator
PRB Physical resource block
PRG Precoding resource block group
PRS Positioning reference signal
PT-RS Phase-tracking reference signal
RB Resource block
RBG Resource block group
RI Rank Indicator
RIV Resource indicator value
RP Resource Pool
RS Reference signal
SCI Sidelink control information
SL CR Sidelink Channel Occupancy Ratio
SL CBR Sidelink Channel Busy Ratio
SLIV Start and length indicator value
SR Scheduling Request
SRS Sounding reference signal
SS Synchronisation signal
SSS Secondary Synchronisation signal
SS-RSRP SS reference signal received power
SS-RSRQ SS reference signal received quality
SS-SINR SS signal-to-noise and interference ratio
TB Transport Block
TCI Transmission Configuration Indicator
TDM Time division multiplexing
UE User equipment
UL Uplink
Claims
1. Method of offloading a ML LCM operation in a wireless network, the wireless network at least comprising a base station (gNB), a target UE being in an out-of-coverage location, and a set of relay UEs containing relay UEs not being in an out-of-coverage location, the method comprising the steps:
• Generate ML capability levels which define requirements for supporting the LCM operation,
• Receive the ML capability levels,
• For each relay UE, measure ML capabilities based on the ML capability levels,
• For each relay UE, compare the measured ML capabilities against the ML capability levels,
• For each relay UE, send an indication message based on the respective comparisons of the ML capabilities, indicating whether the relay UE is capable of executing the LCM operation,
• Select at least a first relay UE from the relay UEs, the at least first relay UE being capable of executing the LCM operation,
• Configure the LCM operation for the at least first relay UE via system information or a dedicated RRC message, and
• Execute the LCM operation by the at least first relay UE.
2. Method according to claim 1, characterized in that the ML capability levels are a pre-configured mapping relationship between LCM operations and respective threshold values or ranges, the ML capability levels being transmitted via system information or a dedicated RRC message.
3. Method according to claim 1 or 2, characterized in that the ML capabilities are measured by the base station or by the at least first relay UE.
4. Method according to one of the previous claims, characterized in that the measured ML capabilities are compared against the ML capability levels by the base station or the at least first relay UE.
5. Method according to one of the previous claims, characterized in that the offloading of the LCM operation is performed across multiple of the relay UEs.
6. Method according to claim 2, characterized in that the pre-configured mapping relationship comprises:
• A finite set of indexed offloadable LCM operations, the offloadable LCM operations being one of: data collection, model training, model inferencing, or model monitoring; and/or
• Matching reference ML capabilities, the matching reference ML capabilities being threshold values or ranges indicating any combination of parameters reflecting device ML applicable conditions and/or environmental ML applicable conditions.
7. Method according to one of the previous claims, characterized in that multiple target UEs are connected to the at least first relay UE to receive a re-configured ML configuration message via a dedicated RRC re-configuration message.
8. Method according to one of the previous claims, characterized in that the at least first relay UE sends an updated ML configuration message to the target UE or multiple target UEs to re-configure the LCM operation.
9. Method according to claim 7 or 8, characterized in that the ML configuration message is exchanged via a sidelink.
10. Method according to claim 9, characterized in that the sidelink is established via L1/L2/L3 signaling.
11. Method according to one of the previous claims, characterized in that a number of different ML models can be applied for the LCM operation depending on different ML use cases, the ML use cases being one of:
• All related model information and ML configurations are initially set to be provided to the at least first relay UE and/or the target UE and/or the multiple target UEs when the base station offloads the LCM operation;
• Initial ML model information and ML configurations are provided by the base station, while a re-configuration takes place between the relay UEs and the target UEs once the relay UEs begin the offloading of the LCM operation, including model management.
12. An apparatus for offloading an ML LCM operation in a wireless communication system, the apparatus comprising:
• a wireless transceiver, and
• a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement the steps of claims 1 to 11, wherein the apparatus is designed for use in a base station (gNB) or a user equipment (UE).
13. An apparatus for offloading an ML LCM operation in a wireless communication system, the apparatus comprising:
• a wireless transceiver,
• a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement the steps of claims 1 to 11, wherein the apparatus is designed for use in a user equipment (UE).
14. A base station (gNB) comprising an apparatus according to claim 12.
15. A User Equipment comprising an apparatus according to claim 13.
16. A wireless communication system, comprising at least a base station (gNB) according to claim 14 and at least a user equipment (UE) according to claim 15.
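The selection logic of claims 1, 2, 5 and 6 can be illustrated with a short sketch. This is a hypothetical, non-normative example: the claims define no concrete data formats or signaling encodings, so all field names, thresholds, and helper functions below (e.g. `CapabilityLevel`, `measure_and_compare`, `select_relays`) are assumptions introduced purely for illustration.

```python
# Illustrative sketch of the claimed relay-selection flow.
# All names and thresholds are hypothetical; the claims specify no data formats.
from dataclasses import dataclass

# Claim 6: a finite set of indexed offloadable LCM operations.
LCM_OPERATIONS = ("data_collection", "model_training",
                  "model_inference", "model_monitoring")

@dataclass
class CapabilityLevel:
    """Claim 2: pre-configured mapping of an LCM operation to
    reference threshold values (device ML applicable conditions)."""
    operation: str
    min_memory_mb: int         # hypothetical device-side threshold
    min_compute_gflops: float  # hypothetical device-side threshold

@dataclass
class RelayUE:
    ue_id: str
    memory_mb: int
    compute_gflops: float

    def measure_and_compare(self, level: CapabilityLevel) -> bool:
        """Claim 1, measurement/comparison steps: check the measured
        ML capabilities against the received capability level."""
        return (self.memory_mb >= level.min_memory_mb
                and self.compute_gflops >= level.min_compute_gflops)

def select_relays(relays, level, max_relays=1):
    """Claim 1 selection step (and claim 5 for max_relays > 1):
    pick relay UE(s) whose indication reports the operation as supported."""
    capable = [ue for ue in relays if ue.measure_and_compare(level)]
    return capable[:max_relays]

if __name__ == "__main__":
    level = CapabilityLevel("model_inference",
                            min_memory_mb=512, min_compute_gflops=2.0)
    relays = [RelayUE("ue-a", 256, 1.0), RelayUE("ue-b", 1024, 4.0)]
    print([ue.ue_id for ue in select_relays(relays, level)])
```

In this sketch only `ue-b` satisfies both thresholds and would then be configured with the LCM operation via system information or a dedicated RRC message, per the configuration step of claim 1.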
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DE102023210633 | 2023-10-27 | ||
| DE102023210633.7 | 2023-10-27 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025087719A1 (en) | 2025-05-01 |
Family
ID=93119699
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2024/078905 (WO2025087719A1, pending) | Method of offloading a ml lcm operation in a wireless network | 2023-10-27 | 2024-10-14 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025087719A1 (en) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20210203565A1 (en) | 2019-12-31 | 2021-07-01 | Hughes Network Systems, Llc | Managing internet of things network traffic using federated machine learning |
| WO2022015008A1 (en) | 2020-07-13 | 2022-01-20 | Samsung Electronics Co., Ltd. | Method and system for determining target cell for handover of ue |
| US20220261697A1 (en) | 2021-02-15 | 2022-08-18 | Devron Corporation | Federated learning platform and methods for using same |
| WO2022205023A1 (en) | 2021-03-31 | 2022-10-06 | Huawei Technologies Co., Ltd. | Systems, methods, and apparatus on wireless network architecture and air interface |
| US20220329648A1 (en) * | 2019-09-25 | 2022-10-13 | Idac Holdings, Inc. | Transparent relocation of mec application instances between 5g devices and mec hosts |
| US20230037893A1 (en) | 2021-08-02 | 2023-02-09 | Samsung Electronics Co., Ltd. | Method and network apparatus for generating real-time radio coverage map in wireless network |
Non-Patent Citations (3)
| Title |
|---|
| "3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface (Release 18)", no. V1.0.0, 4 September 2023 (2023-09-04), pages 1 - 148, XP052512007, Retrieved from the Internet <URL:https://ftp.3gpp.org/Specs/archive/38_series/38.843/38843-100.zip RP-231766 TR 38.843 v1.0.0.docx> [retrieved on 20230904] * |
| HUAWEI ET AL: "Discussion on general aspects of AI/ML framework", vol. RAN WG1, no. Toulouse, France; 20220822 - 20220826, 12 August 2022 (2022-08-12), XP052273819, Retrieved from the Internet <URL:https://ftp.3gpp.org/tsg_ran/WG1_RL1/TSGR1_110/Docs/R1-2205889.zip R1-2205889.docx> [retrieved on 20220812] * |
| RUDRAKSH SHRIVASTAVA ET AL: "Discussion on AI/ML Capability Reporting and Model LCM", vol. RAN WG2, no. Toulouse, FR; 20230821 - 20230825, 10 August 2023 (2023-08-10), XP052443195, Retrieved from the Internet <URL:https://www.3gpp.org/ftp/TSG_RAN/WG2_RL2/TSGR2_123/Docs/R2-2307484.zip R2-2307484.docx> [retrieved on 20230810] * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 24790468 Country of ref document: EP Kind code of ref document: A1 |