WO2025027055A1 - Method for a handover operation for a two-sided ai/ml model - Google Patents
- Publication number
- WO2025027055A1 (PCT/EP2024/071635)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- node
- handover
- sided
- wireless network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W36/00—Hand-off or reselection arrangements
- H04W36/08—Reselecting an access point
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W36/00—Hand-off or reselection arrangements
- H04W36/0005—Control or signalling for completing the hand-off
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W36/00—Hand-off or reselection arrangements
- H04W36/0005—Control or signalling for completing the hand-off
- H04W36/0083—Determination of parameters used for hand-off, e.g. generation or modification of neighbour cell lists
- H04W36/00837—Determination of triggering parameters for hand-off
Definitions
- the present disclosure relates to AI/ML operation pre-configuration, where techniques for re-configuring and signaling the specific information to reduce model performance degradation due to UE mobility are presented.
- AI/ML artificial intelligence/machine learning
- RP-213599 3GPP TSG RAN (Technical Specification Group Radio Access Network) meeting #94e.
- the official title of the AI/ML study item is “Study on AI/ML for NR Air Interface”, and currently RAN WG1 and WG2 are actively working on a specification.
- the goal of this study item is to identify a common AI/ML framework and areas of obtaining gains using AI/ML based techniques with use cases.
- the main objective of this study item is to study AI/ML frameworks for air-interfaces with target use cases by considering performance, complexity, and potential specification impacts.
- AI/ML models, terminology, and descriptions to identify common and specific characteristics for a framework will be one key work scope.
- various aspects are under consideration for investigation and one key item is about the lifecycle management of AI/ML models where multiple stages are included as mandatory for model training, model deployment, model inference, model monitoring, model updating, etc.
- UE mobility was also considered as one of the AI/ML use cases and one of the scenarios for model training/inference is that both functions are located within a RAN node.
- AI Artificial Intelligence
- ML Machine Learning
- UE mobility to support a RAN-based AI/ML model can be considered very significant for both gNBs and UEs to meet any desired model operations (e.g., model training / inference / selection / switching / update / monitoring, etc.) when the UE moves around.
- any desired model operations, e.g., model training / inference / selection / switching / update / monitoring, etc.
- currently, there is no specification defined for signaling methods or gNB-UE behaviors about UE mobility when a RAN-based AI/ML model operation proceeds. Therefore, it is necessary to investigate any specification impact by considering model operation during UE mobility. Any mechanism of additional signaling methods and/or gNB-UE behaviors also needs to be addressed to support mobility-based model operation between gNB and UE so that any potential impact of UE mobility on model operation in RAN is minimized while preserving service continuity.
- the terminologies of the working list contain a set of high-level descriptions about AI/ML model training, inference, validation, testing, UE-side model, network-side model, one-sided model, two-sided model, etc.
- a UE-sided model and a network-sided model indicate that the AI/ML model is located for operation on the UE side and the network side, respectively.
- a one-sided and a two-sided model indicate that the AI/ML model is located on one side and on both sides, respectively.
- WO 2022 034 259 A1 discloses a network apparatus that is caused to receive as part of a handover procedure for handover of a terminal to the network apparatus, metadata about at least one machine learning model accessible for execution and/or training by the terminal, determining whether or not the terminal should execute and/or train the at least one machine learning model after the terminal is handed over to the network apparatus; and signal the result of the determining to the terminal.
- WO 2022 058 020 A1 discloses measures for evaluation and control of predictive machine learning models in mobile networks. Such measures exemplarily comprise receiving information on a predictive model related to a radio resource management function, obtaining behavior information on an intended behavior of said predicted model, obtaining difference determination information on difference determination with respect to a predictive model prediction and said intended behavior, measuring a network condition, determining a prediction result based on said network condition and said information on said predictive model, determining a behavior result based on said network condition and said behavior information, and evaluating validity of said predictive model based on said prediction result, said behavior result, and said difference determination information.
- WO 2022 199 824 A1 discloses a computer implemented method for federated machine learning (FL) in a wireless communication system, the method comprising establishing a first wireless access radio link between a first access node and a wireless device, initiating an FL process involving the first access node and the wireless device, transmitting FL information from the first access node to the wireless device, where the FL information comprises data indicative of the FL process, establishing a second wireless access radio link between a second access node and the wireless device, where the second access node is communicatively coupled to the first access node, exchanging at least part of the FL information between the wireless device and the second access node, and resuming the FL process involving the first access node and the wireless device by communication via the second access node over the second wireless access radio link.
- FL federated machine learning
- WO 2022 258 196 A1 discloses an apparatus comprising means for receiving a machine learning model for predicting handover parameters; receiving radio access network service related information; and determining, using the machine learning model, handover parameters for a service and a cell pair comprising a serving cell and a target cell, the service being supported by a network, the machine learning model being provided with one or more input parameters based on the radio access network service related information.
- WO 2021 123 285 A1 discloses a method of transmitting or receiving data by a communications device in a wireless communications network, the method comprising: establishing a connection for transmitting or receiving the data in a first cell of the wireless communications network, determining a value of one or more input parameters, using the value of the one or more input parameters as inputs to a model trained using machine learning, determining, based on an output of the model, that the communications device should perform a handover to establish a connection in a second cell, and responsive to determining that the communications device should establish a connection in the second cell, transmitting a handover message to request the establishment of a connection in a second cell.
- WO 2021 259 492 A1 discloses a method in a first node of a communications network for training a machine learning model comprises receiving a first message comprising instructions for training the machine learning model using a distributed learning process. The method then comprises responsive to receiving the first message, acting as an aggregator in the distributed learning process for a subset of other nodes selected by the first node from a plurality of nodes that have an established radio channel allocation with the first node, by causing the subset of other nodes to perform training on local copies of the machine learning model and aggregating the results of the training by the subset of other nodes.
- Figure 1 is an exemplary table of ML operation modes
- Figure 2 is an exemplary block diagram of ML model switching mode
- Figure 3 is an exemplary block diagram of temporary suspend/resume mode
- Figure 4 is an exemplary block diagram of ML model de-activation mode
- Figure 5 is an exemplary block diagram of handover de-activation mode
- Figure 6 is an exemplary signaling flow of applying ML operation mode to handover
- Figure 7 is an exemplary signaling flow of applying temporary suspend/resume mode
- Figure 8 is an exemplary signaling flow of applying handover de-activation mode.
- Figure 9 is an exemplary signaling flow of applying ML model switching mode.
- a more general term “network node” may be used and may correspond to any type of radio network node or any network node, which communicates with a UE (directly or via another node) and/or with another network node.
- network nodes are NodeB, MeNB, ENB, a network node belonging to MCG or SCG, base station (BS), multi-standard radio (MSR) radio node such as MSR BS, eNodeB, gNodeB, network controller, radio network controller (RNC), base station controller (BSC), relay, donor node controlling relay, base transceiver station (BTS), access point (AP), transmission points, transmission nodes, RRU, RRH, nodes in distributed antenna system (DAS), core network node (e.g.
- the non-limiting term user equipment (UE) or wireless device may be used and may refer to any type of wireless device communicating with a network node and/or with another UE in a cellular or mobile communication system.
- Examples of UE are target device, device to device (D2D) UE, machine type UE or UE capable of machine to machine (M2M) communication, PDA, PAD, Tablet, mobile terminals, smart phone, laptop embedded equipment (LEE), laptop mounted equipment (LME), USB dongles, UE category M1, UE category M2, ProSe UE, V2V UE, V2X UE, etc.
- D2D device to device
- M2M machine to machine
- terminologies such as base station/gNodeB and UE should be considered non-limiting and in particular do not imply a certain hierarchical relation between the two; in general, “gNodeB” could be considered as device 1 and “UE” could be considered as device 2 and these two devices communicate with each other over some radio channel. And in the following the transmitter or receiver could be either gNodeB (gNB), or UE.
- gNB gNodeB
- aspects of the embodiments may be embodied as a system, apparatus, method, or computer program product.
- embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects.
- the disclosed embodiments may be implemented as a hardware circuit comprising custom very-large-scale integration (“VLSI”) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components.
- VLSI very-large-scale integration
- the disclosed embodiments may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like.
- the disclosed embodiments may include one or more physical or logical blocks of executable code which may, for instance, be organized as an object, procedure, or function.
- embodiments may take the form of a computer program product embodied in one or more computer readable storage devices storing machine readable code, computer readable code, and/or program code, referred hereafter as code.
- the storage devices may be tangible, non- transitory, and/or non-transmission.
- the storage devices may not embody signals. In a certain embodiment, the storage devices only employ signals for accessing code.
- the computer readable medium may be a computer readable storage medium.
- the computer readable storage medium may be a storage device storing the code.
- the storage device may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, holographic, micromechanical, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- More specific examples (a non-exhaustive list) of the storage device would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random-access memory (“RAM”), a read-only memory (“ROM”), an erasable programmable read-only memory (“EPROM” or Flash memory), a portable compact disc read-only memory (“CD-ROM”), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- Code for carrying out operations for embodiments may be any number of lines and may be written in any combination of one or more programming languages including an object-oriented programming language such as Python, Ruby, Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language, or the like, and/or machine languages such as assembly languages.
- the code may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user’s computer through any type of network, including a local area network (“LAN”), wireless LAN (“WLAN”), or a wide area network (“WAN”), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider (“ISP”)).
- LAN local area network
- WLAN wireless LAN
- WAN wide area network
- ISP Internet Service Provider
- the code may also be stored in a storage device that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the storage device produce an article of manufacture including instructions which implement the function/act specified in the flowchart diagrams and/or block diagrams.
- the code may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer implemented process such that the code which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams.
- ML applicable conditions for LCM operations can change significantly with different mobility ranges over time, degrading any activated LCM operations. For example, when model activation/switching operations are executed for UE model(s) due to ML applicable condition changes, significant delay (due to model selection/configuration/pre-loading, etc.) might occur during the process when a new model is initially activated at the UE side or an alternative model at the UE side is switched on to replace the already active model. AI/ML based techniques are currently applied to many different applications, and 3GPP has also started technical investigation of multiple use cases based on the observed potential gains.
- a method for a handover operation for a two-sided AI/ML model in a wireless network comprises a step of operating a first AI/ML model of the two-sided AI/ML model in a first node of the wireless network and operating a second AI/ML model of the two-sided AI/ML model in a second node of the wireless network. Further, the method comprises a step of receiving, at the second node, an AI/ML model status report from the first node.
- the handover actions are executed by the first and/or second and/or third node.
- the status report includes measurement and/or mobility information.
- AI/ML operation modes are pre-configured in an AI/ML operation mode list based on applications or implementation use cases.
- the AI/ML operation mode or the pre-configured AI/ML operation mode list is transmitted via system information or a dedicated RRC message.
- a handover command is signaled via L1 or L2 signaling to the first node, the handover command informing the first node about the determined AI/ML operation mode and associated assistance information.
- the pre-configured AI/ML operation mode list is indexed and transmitted to group-based user equipments (UEs) via groupcast signaling.
- UEs user equipments
- the handover actions comprise at least one of the following:
- an apparatus for a handover operation for a two-sided AI/ML model in a wireless network comprising a wireless transceiver, a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement before-described steps.
- a user equipment for a handover operation for a two-sided AI/ML model in a wireless network comprising a before-described apparatus.
- a base station for a handover operation for a two-sided AI/ML model in a wireless network comprising a before-described apparatus.
- a wireless communication system for a handover operation for a two-sided AI/ML model in a wireless network comprising the before-described base station (gNB) and the before-described user equipment (UE), the base station comprising a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement before-described steps, the user equipment (UE) comprising a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement before-described steps.
- a UE adjusts an already activated ML operation by applying a pre-configured ML operation mode among a candidate mode list based on status update information about the UE mobility and candidate neighbor cells related to ML capability.
- the candidate mode list includes 1) ML model switching, 2) temporary suspend/resume of ML model, 3) de-activation of ML model (fallback mode), 4) de-activation of handover.
- the pre-configured ML operation mode can be sent to the UE through a handover command signaling in advance and the indicated ML operation mode is then applied before the handover is triggered to be executed.
- Figure 1 shows an exemplary table of ML operation modes.
- This table describes four different modes of the pre-configured ML operations, and additional ML operation modes can be configured in advance based on applications or implementation use cases.
- the available pre-configured ML operation mode information can be signaled to the UE through system information or a dedicated RRC message.
- groupcast signaling can be used to provide indexed information of available pre-configured ML operation modes.
- mode index #1 is ML model switching and two-sided model operation can be switched to one-sided model operation before handover is executed if applicable, as an example. After the handover is complete, one-sided model operation can be switched back to two-sided model with the new gNB connection.
- the already activated ML model on the UE side can be switched to an alternative model that can be independent of the source gNB during handover procedure.
- the specific model information (e.g., model ID)
- the ML operation mode (in this case, mode index #1)
- an indication message can contain both the model to switch before handover and the model to re-switch after the handover.
- the UE can also have the option of autonomously selecting the alternative model for switching based on the indicated ML operation mode.
- mode index #2 which is a temporary suspend/resume mode
- the already activated ML model is suspended when handover is triggered to be executed and the same model operation is resumed after the handover is completed.
- no model switching is necessary and model operation itself is only suspended, with no additional model information exchange signaling, until the handover is complete, as the loaded model on the device is still valid to resume its operation.
- in an ML model de-activation mode, any activated model on the device exchanging model information with the source gNB is de-activated when the handover is triggered to be executed, and this mode can equivalently be considered as a fallback switching to a non-ML model.
- a new ML model is configured to be set up with the new gNB.
- in a handover de-activation mode, the handover execution is de-activated so that the already activated ML model is allowed to operate continuously. Or, the handover execution is delayed until the ongoing model operation can be completed, and the handover is then activated for execution. In this case, a pre-configured time period can be applied to delay the handover for model operation completion.
- Skipping the handover procedure is implementation-specific based on ML model operation applications and specific cell sites.
- Figure 2 shows an exemplary block diagram of a ML model switching mode.
- the UE receives an indication message about the ML operation mode (e.g., through handover command) and the UE reports a confirmation message about the ML operation mode status.
- the handover is then executed to establish a new connection with the target gNB and model re-switching is performed after handover.
- the target/candidate model(s) to be switched before/after handover can be indicated in advance to determine the model switching configuration.
- Figure 3 shows an exemplary block diagram of a temporary suspend/resume mode.
- the already activated ML model is suspended when the handover is triggered to be executed and the same model operation is resumed after the handover is complete.
- no model switching is necessary and model operation itself is only suspended, with no additional model information exchange signaling, until the handover is complete, as the loaded model on the device is still valid to resume its operation.
- a pre-configured time duration for how long the ML model is suspended is set. If the ML model cannot be resumed within this time duration, then a new ML model configuration is set up with the target gNB after handover.
- Figure 4 shows an exemplary block diagram of a ML model de-activation mode.
- any activated model on the device exchanging model information with the source gNB is de-activated when the handover is triggered to be executed, and this mode can equivalently be considered as a fallback switching to a non-ML model.
- a new ML model is configured to be set up with the new gNB.
- Figure 5 shows an exemplary block diagram of a handover de-activation mode.
- the handover execution is de-activated so that the already activated ML model is allowed to operate continuously. Or, the handover execution is delayed until the ongoing model operation can be completed, and the handover procedure is then re-activated for execution. In this case, the pre-configured time period can be applied to delay the handover for model operation completion.
- the handover itself is skipped when the already activated ML model can be operated independently without connection with the target gNB and model operation needs to continue for completion with priority. Skipping the handover procedure is implementation-specific based on ML model operation applications and specific cell site.
- Figure 6 shows an exemplary signaling flow of applying a ML operation mode to a handover operation.
- the source gNB determines an ML operation mode for the UE among a candidate list, and the indication information about the determined ML operation mode and the associated assistance information are sent to the UE through a handover command or L1/L2 signaling. Before the triggered handover is executed, the indicated ML operation mode is then activated.
- Figure 7 shows an exemplary signaling flow of applying a temporary suspend/resume mode.
- when the handover is triggered to be executed, the already activated model is suspended without further exchange of ML model related signaling with the source gNB.
- ML model operation is resumed after the handover is complete.
- a pre-configured time duration for how long the ML model is suspended can be set. If the ML model cannot be resumed within this time duration, then a new ML model configuration is set up with the target gNB after handover.
- Figure 8 shows an exemplary signaling flow of applying a handover de-activation mode.
- Handover execution is de-activated so that the already activated ML model is allowed to operate continuously. Or, the handover execution is delayed until the ongoing model operation can be completed, and the handover procedure is then re-activated for execution. In this case, the pre-configured time period can be applied to delay the handover for model operation completion.
- Figure 9 shows an exemplary signaling flow of applying a ML model switching mode. This example shows that a two-sided model is used between the source gNB and the UE, and one-sided models are switched on to replace the two-sided model when the handover is executed. Target/candidate model(s) to switch before/after handover can be indicated in advance to determine the model switching configuration.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
The disclosure describes methods of pre-configuring AI/ML operations before a handover occurs in a wireless mobile communication system including base stations (e.g., gNB) and mobile stations (e.g., UE). If AI/ML models are applied to radio access networks, model performance such as inferencing and/or training is dependent on different model execution environments with varying configuration parameters. Therefore, by re-configuring any model in operation with a source gNB in advance, the performance impact due to UE mobility can be minimized.
Description
TITLE
Method for a handover operation for a two-sided AI/ML model
TECHNICAL FIELD
The present disclosure relates to AI/ML operation pre-configuration, where techniques for re-configuring and signaling the specific information to reduce model performance degradation due to UE mobility are presented.
BACKGROUND
In 3GPP (3rd Generation Partnership Project), one of the selected study items as the approved Release 18 package is AI/ML (artificial intelligence/machine learning) as described in the related document (RP-213599) addressed in 3GPP TSG RAN (Technical Specification Group Radio Access Network) meeting #94e. The official title of the AI/ML study item is “Study on AI/ML for NR Air Interface”, and currently RAN WG1 and WG2 are actively working on a specification. The goal of this study item is to identify a common AI/ML framework and areas of obtaining gains using AI/ML based techniques with use cases.
According to 3GPP, the main objective of this study item is to study AI/ML frameworks for air-interfaces with target use cases by considering performance, complexity, and potential specification impacts. In particular, AI/ML models, terminology, and descriptions to identify common and specific characteristics for a framework will be one key work scope. Regarding AI/ML frameworks, various aspects are under consideration for investigation and one key item is about the lifecycle management of AI/ML models where multiple stages are included as mandatory for model training, model deployment, model inference, model monitoring, model updating, etc.
Earlier, in 3GPP TR 37.817 for Release 17, titled “Study on enhancement for Data Collection for NR and EN-DC”, UE mobility was also considered as one of the AI/ML use cases, and one of the scenarios for model training/inference is that both
functions are located within a RAN node. Subsequently, in Release 18, the new work item “Artificial Intelligence (AI)/Machine Learning (ML) for NG-RAN” was initiated to specify data collection enhancements and signaling support within existing NG-RAN interfaces and architectures.
For the above active standardization works, UE mobility to support a RAN-based AI/ML model can be considered very significant for both gNBs and UEs to meet any desired model operations (e.g., model training / inference / selection / switching / update / monitoring, etc.) when the UE moves around. Currently, there is no specification defined for signaling methods or gNB-UE behaviors about UE mobility when a RAN-based AI/ML model operation proceeds. Therefore, it is necessary to investigate any specification impact by considering model operation during UE mobility. Any mechanism of additional signaling methods and/or gNB-UE behaviors also needs to be addressed to support mobility-based model operation between gNB and UE so that any potential impact of UE mobility on model operation in RAN is minimized while preserving service continuity.
On the other hand, in 3GPP the terminologies of the working list contain a set of high-level descriptions about AI/ML model training, inference, validation, testing, UE-side model, network-side model, one-sided model, two-sided model, etc. A UE-sided model and a network-sided model indicate that the AI/ML model is located for operation on the UE side and the network side, respectively. In a similar context, a one-sided and a two-sided model indicate that the AI/ML model is located on one side and on both sides, respectively.
All signaling aspects to support the above items are not yet specified, as definitions of terminologies are still under discussion for further modifications.
Any potential standards impact with new or enhanced mechanisms of supporting AI/ML models with the above working list items is a key area for investigation in the AI/ML study item.
WO 2022 034 259 A1 discloses a network apparatus that is caused to receive as part of a handover procedure for handover of a terminal to the network apparatus, metadata about at least one machine learning model accessible for execution and/or training by the terminal, determining whether or not the terminal should execute and/or train the at least one machine learning model after the terminal is handed over to the network apparatus; and signal the result of the determining to the terminal.
WO 2022 058 020 A1 discloses measures for evaluation and control of predictive machine learning models in mobile networks. Such measures exemplarily comprise receiving information on a predictive model related to a radio resource management function, obtaining behavior information on an intended behavior of said predicted model, obtaining difference determination information on difference determination with respect to a predictive model prediction and said intended behavior, measuring a network condition, determining a prediction result based on said network condition and said information on said predictive model, determining a behavior result based on said network condition and said behavior information, and evaluating validity of said predictive model based on said prediction result, said behavior result, and said difference determination information.
WO 2022 199 824 A1 discloses a computer implemented method for federated machine learning (FL) in a wireless communication system, the method comprising establishing a first wireless access radio link between a first access node and a wireless device, initiating an FL process involving the first access node and the wireless device, transmitting FL information from the first access node to the wireless device, where the FL information comprises data indicative of the FL process, establishing a second wireless access radio link between a second access node and the wireless device, where the second access node is communicatively coupled to the first access node, exchanging at least part of the FL information between the wireless device and the second access node, and resuming the FL process involving the first access node and the wireless device by communication via the second access node over the second wireless access radio link.
WO 2022 258 196 A1 discloses an apparatus comprising means for receiving a machine learning model for predicting handover parameters; receiving radio access network service related information; and determining, using the machine learning model, handover parameters for a service and a cell pair comprising a serving cell and a target cell, the service being supported by a network, the machine learning model being provided with one or more input parameters based on the radio access network service related information.
WO 2021 123 285 A1 discloses a method of transmitting or receiving data by a communications device in a wireless communications network, the method comprising: establishing a connection for transmitting or receiving the data in a first cell of the wireless communications network, determining a value of one or more input parameters, using the value of the one or more input parameters as inputs to a model trained using machine learning, determining, based on an output of the model, that the communications device should perform a handover to establish a connection in a second cell, and responsive to determining that the communications device should establish a connection in the second cell, transmitting a handover message to request the establishment of a connection in a second cell.
WO 2021 259 492 A1 discloses a method in a first node of a communications network for training a machine learning model comprises receiving a first message comprising instructions for training the machine learning model using a distributed learning process. The method then comprises responsive to receiving the first message, acting as an aggregator in the distributed learning process for a subset of other nodes selected by the first node from a plurality of nodes that have an established radio channel allocation with the first node, by causing the subset of other nodes to perform training on local copies of the machine learning model and aggregating the results of the training by the subset of other nodes.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosed invention will be further discussed in the following based on preferred embodiments presented in the attached drawings. However, the disclosed invention
may be embodied in many different forms and should not be construed as limited to said preferred embodiments. Rather, said preferred embodiments are provided for thoroughness and completeness, and fully convey the scope of the invention to the skilled person. The following detailed description refers to the attached drawings, in which:
Figure 1 is an exemplary table of ML operation modes;
Figure 2 is an exemplary block diagram of ML model switching mode;
Figure 3 is an exemplary block diagram of temporary suspend/resume mode;
Figure 4 is an exemplary block diagram of ML model de-activation mode;
Figure 5 is an exemplary block diagram of handover de-activation mode;
Figure 6 is an exemplary signaling flow of applying ML operation mode to handover;
Figure 7 is an exemplary signaling flow of applying temporary suspend/resume mode;
Figure 8 is an exemplary signaling flow of applying handover de-activation mode; and
Figure 9 is an exemplary signaling flow of applying ML model switching mode.
DETAILED DESCRIPTION
Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Other embodiments, however, are contained within the scope of the subject matter disclosed herein; the disclosed subject matter should not be construed as limited to only the embodiments set forth
herein; rather, these embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art.
Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein may be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments may apply to any other embodiments, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the following description.
In some embodiments, a more general term “network node” may be used and may correspond to any type of radio network node or any network node, which communicates with a UE (directly or via another node) and/or with another network node. Examples of network nodes are NodeB, MeNB, ENB, a network node belonging to MCG or SCG, base station (BS), multi-standard radio (MSR) radio node such as MSR BS, eNodeB, gNodeB, network controller, radio network controller (RNC), base station controller (BSC), relay, donor node controlling relay, base transceiver station (BTS), access point (AP), transmission points, transmission nodes, RRU, RRH, nodes in distributed antenna system (DAS), core network node (e.g. Mobile Switching Center (MSC), Mobility Management Entity (MME), etc), Operations & Maintenance (O&M), Operations Support System (OSS), Self Optimized Network (SON), positioning node (e.g. Evolved- Serving Mobile Location Centre (E-SMLC)), Minimization of Drive Tests (MDT), test equipment (physical node or software), another UE, etc.
In some embodiments, the non-limiting term user equipment (UE) or wireless device may be used and may refer to any type of wireless device communicating with a network node and/or with another UE in a cellular or mobile communication system. Examples of UE are target device, device to device (D2D) UE, machine type UE or UE capable of machine to machine (M2M) communication, PDA, PAD, Tablet, mobile terminals, smart phone, laptop embedded equipment (LEE), laptop mounted equipment (LME), USB dongles, UE category M1, UE category M2, ProSe UE, V2V UE, V2X UE, etc.
Additionally, terminologies such as base station/gNodeB and UE should be considered non-limiting and in particular do not imply a certain hierarchical relation between the two; in general, “gNodeB” could be considered as device 1 and “UE” could be considered as device 2 and these two devices communicate with each other over some radio channel. And in the following the transmitter or receiver could be either gNodeB (gNB), or UE.
As will be appreciated by one skilled in the art, aspects of the embodiments may be embodied as a system, apparatus, method, or computer program product.
Accordingly, embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects.
For example, the disclosed embodiments may be implemented as a hardware circuit comprising custom very-large-scale integration (“VLSI”) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. The disclosed embodiments may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like. As another example, the disclosed embodiments may include one or more physical or logical blocks of executable code which may, for instance, be organized as an object, procedure, or function.
Furthermore, embodiments may take the form of a computer program product embodied in one or more computer readable storage devices storing machine readable code, computer readable code, and/or program code, referred hereafter as code. The storage devices may be tangible, non- transitory, and/or non-transmission. The storage devices may not embody signals. In a certain embodiment, the storage devices only employ signals for accessing code.
Any combination of one or more computer readable medium may be utilized. The computer readable medium may be a computer readable storage medium. The computer readable storage medium may be a storage device storing the code. The storage device may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, holographic, micromechanical, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
More specific examples (a non-exhaustive list) of the storage device would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random-access memory (“RAM”), a read-only memory (“ROM”), an erasable programmable read-only memory (“EPROM” or Flash memory), a portable compact disc read-only memory (“CD-ROM”), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Code for carrying out operations for embodiments may be any number of lines and may be written in any combination of one or more programming languages including an object-oriented programming language such as Python, Ruby, Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language, or the like, and/or machine languages such as assembly languages. The code may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user’s
computer through any type of network, including a local area network (“LAN”), wireless LAN (“WLAN”), or a wide area network (“WAN”), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider (“ISP”)).
Furthermore, the described features, structures, or characteristics of the embodiments may be combined in any suitable manner. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments. One skilled in the relevant art will recognize, however, that embodiments may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of an embodiment. Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in one embodiment”, “in an embodiment”, and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment, but mean “one or more but not all embodiments” unless expressly specified otherwise. The terms “including”, “comprising”, “having”, and variations thereof mean “including but not limited to”, unless expressly specified otherwise. An enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms “a”, “an”, and “the” also refer to “one or more” unless expressly specified otherwise.
Aspects of the embodiments are described below with reference to schematic flowchart diagrams and/or schematic block diagrams of methods, apparatuses, systems, and program products according to embodiments. It will be understood that each block of the schematic flowchart diagrams and/or schematic block diagrams, and combinations of blocks in the schematic flowchart diagrams and/or schematic block diagrams, can be implemented by code. This code may be
provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams.
The code may also be stored in a storage device that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the storage device produce an article of manufacture including instructions which implement the function/act specified in the flowchart diagrams and/or block diagrams.
The code may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer implemented process such that the code which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams.
The flowchart diagrams and/or block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, systems, methods, and program products according to various embodiments. In this regard, each block in the flowchart diagrams and/or block diagrams may represent a module, segment, or portion of code, which includes one or more executable instructions of the code for implementing the specified logical function(s).
It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Other steps and methods may be conceived that are
equivalent in function, logic, or effect to one or more blocks, or portions thereof, of the illustrated Figures.
Although various arrow types and line types may be employed in the flowchart and/or block diagrams, they are understood not to limit the scope of the corresponding embodiments. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the depicted embodiment. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted embodiment. It will also be noted that each block of the block diagrams and/or flowchart diagrams, and combinations of blocks in the block diagrams and/or flowchart diagrams, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and code.
The description of elements in each figure may refer to elements of preceding figures. Like numbers refer to like elements in all figures, including alternate embodiments of like elements.
The detailed description set forth below, with reference to annexed drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. Although terminology from 3GPP 5G NR may be used in this disclosure to exemplify embodiments herein, this should not be seen as limiting the scope of the invention.
The following explanation will provide the detailed description of the mechanism about pre-configuring an AI/ML-based model before a handover occurrence in a wireless mobile communication system including base stations (e.g., gNB) and mobile stations (e.g., UE). An AI/ML lifecycle can be split into several stages such as data collection/pre-processing, model training, model testing/validation, model
deployment/update, model monitoring, model switching/selection etc., where each stage is equally important to achieve a target performance with any specific model(s).
In applying an AI/ML model for any use case or application, one of the challenging issues is to manage the lifecycle of the AI/ML model. This is mainly because a data/model drift occurs during model deployment/inference, which results in performance degradation of the AI/ML model. Model drift occurs when the dataset statistically changes after the model is deployed and when model inference capability is impacted due to unseen data as input. In a similar aspect, the statistical property of a dataset and the relationship between input and output for the trained model can be changed with drift occurrence. Then, model adaptation is required to support operations such as model switching, re-training, fallback, etc. When an AI/ML model enabled wireless communication network is deployed, it is then important to consider how to handle an adaptation of the AI/ML model under operations such as model training, inference, monitoring, updating, etc.
Based on specific network-UE ML collaboration in deployment scenarios (e.g., the UE mobility case), ML applicable conditions for LCM operations can change significantly with different mobility ranges over time, degrading any activated LCM operations. For example, when model activation/switching operations are executed for UE model(s) due to ML applicable condition changes, significant delay (due to model selection/configuration/pre-loading, etc.) might occur during the process when a new model is initially activated at the UE side or an alternative model at the UE side is switched on to replace the already active model. AI/ML based techniques are currently applied to many different applications, and 3GPP has also started technical investigation of multiple use cases based on the observed potential gains.
In a similar aspect, the statistical properties of datasets and the relationship between input and output for trained models can be changed with drift occurrence. In this context, UE mobility is one key issue for model performance maintenance as
model performance such as inferencing and/or training is dependent on different model execution environments with varying configuration parameters.
To handle this issue, collaboration between a UE and a gNB is highly important to track model performance and re-configure the model corresponding to different environments as the UE moves around across different gNBs. Any AI/ML model needs model monitoring after deployment because model performance cannot be maintained continuously due to model drift. Update feedback is then provided to re-train/update the model or select an alternative model. Therefore, AI/ML data/model drift handling is highly important by tracking model performance such as predictability, accuracy, etc.
When an AI/ML model enabled wireless communication network is deployed, it is then important to consider how to handle AI/ML model in activation with re-configuration for wireless devices under operations such as model training, inference, updating, etc. In other words, by re-configuring any model in operation with the source gNB in advance, the performance impact due to UE mobility can be minimized when the target gNB connection is made.
According to a first aspect of the invention, a method for a handover operation for a two-sided AI/ML model in a wireless network comprises a step of operating a first AI/ML model of the two-sided AI/ML model in a first node of the wireless network and operating a second AI/ML model of the two-sided AI/ML model in a second node of the wireless network. Further, the method comprises a step of receiving, at the second node, an AI/ML model status report from the first node. Further, the method comprises a step of receiving, at the second node, a handover request from a third node of the wireless network, the handover request indicating a future transition in communication from between the first node and the second node to between the first node and the third node. Further, the method comprises a step of configuring, upon receiving the handover request, at the second node, based on the AI/ML model status report, an AI/ML operation mode defining handover actions. Further, the method comprises a step of receiving, at the first node, the AI/ML operation mode. Further, the method comprises a step of activating the at least one
AI/ML operation mode, thereby executing the handover actions. Further, the method comprises a step of executing the handover operation, thereby transitioning into communication from between the first node and the second node to between the first node and the third node.
Alternatively, the method comprises steps of:
• Pre-configuring a candidate list of AI/ML operation modes,
• Configuring any number of AI/ML operation modes in advance based on applications or implementation use cases,
• Sending the pre-configured list of ML operation modes through system information or a dedicated RRC message,
• Sending an indication information about the determined AI/ML operation modes and associated assistance information through a handover command or L1/L2 signaling,
• Sending indexed information of available pre-configured AI/ML operation modes to group-based UEs to apply the AI/ML operation modes through groupcast signaling, and/or
• Activating the indicated AI/ML operation modes before the handover is executed.
Advantageously, the handover actions are executed by the first and/or second and/or third node.
Advantageously, the first node is a first base station (gNB), the second node is a user equipment (UE), and the third node is a second base station (gNB).
Advantageously, the status report includes measurement and/or mobility information.
Advantageously, a multitude of AI/ML operation modes are pre-configured in an AI/ML operation mode list based on applications or implementation use cases.
Advantageously, the AI/ML operation mode or the pre-configured AI/ML operation mode list is transmitted via system information or a dedicated RRC message.
Advantageously, a handover command is signaled via L1 or L2 signaling to the first node, the handover command informing the first node about the determined AI/ML operation mode and associated assistance information.
Advantageously, the pre-configured AI/ML operation mode list is indexed and transmitted to group-based user equipments (UEs) via groupcast signaling.
Advantageously, the handover actions comprise at least one of the following (a configuration sketch follows this list):
• Switching from two-sided model operation to one-sided model operation at the first and/or second node before the handover operation, and switching back to two-sided model operation after the handover operation;
• Switching the first AI/ML model to an alternative AI/ML model;
• Temporarily suspending the first AI/ML model before the handover operation, and resuming the first AI/ML model after the handover operation;
• Configuring a time duration defining how long the two-sided AI/ML model is allowed to continue operation;
• Deactivating the first AI/ML model before the handover operation, and activating the first AI/ML model or an alternative AI/ML model after the handover operation;
• Deactivating the handover operation;
• Delaying the handover operation until the two-sided AI/ML model operation is completed;
• Configuring a time duration defining a delay of the handover operation.
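The listed handover actions could, purely as an illustration, be collected in a single configuration container. The following sketch uses hypothetical field names; it is not a specified information element.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HandoverActionConfig:
    """Hypothetical container for the handover actions listed above."""
    switch_to_one_sided: bool = False            # switch back after handover
    alternative_model_id: Optional[str] = None   # model to switch to, if any
    suspend_model: bool = False                  # resume after handover
    deactivate_model: bool = False               # reactivate or replace later
    deactivate_handover: bool = False
    model_run_duration_s: Optional[float] = None # two-sided operation budget
    handover_delay_s: Optional[float] = None     # pre-configured delay

# Example: suspend the model and delay the handover by two seconds.
print(HandoverActionConfig(suspend_model=True, handover_delay_s=2.0))
```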
According to a second aspect of the invention, an apparatus for a handover operation for a two-sided AI/ML model in a wireless network comprises a wireless transceiver and a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement the before-described steps.
According to a third aspect of the invention, a user equipment (UE) for a handover operation for a two-sided AI/ML model in a wireless network comprises a before-described apparatus.
According to a fourth aspect of the invention, a base station (gNB) for a handover operation for a two-sided AI/ML model in a wireless network comprises a before-described apparatus.
According to a fifth aspect of the invention, a wireless communication system for a handover operation for a two-sided AI/ML model in a wireless network comprises the before-described base station (gNB) and the before-described user equipment (UE), each of which comprises a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement the before-described steps.
In this method, a UE adjusts an already activated ML operation by applying a pre-configured ML operation mode from a candidate mode list, based on status update information about the UE mobility and about candidate neighbor cells in relation to ML capability. For example, the candidate mode list includes 1) ML model switching, 2) temporary suspend/resume of the ML model, 3) de-activation of the ML model (fallback mode), and 4) de-activation of the handover. When entering a handover preparation phase, the pre-configured ML operation mode can be sent to the UE in advance through handover command signaling, and the indicated ML operation mode is then applied before the handover is triggered to be executed.
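A minimal UE-side sketch of applying an indicated mode follows; the handover-command field ml_operation_mode_index and the fallback behavior are assumptions made for this illustration.

```python
CANDIDATE_MODES = {1: "MODEL_SWITCHING", 2: "SUSPEND_RESUME",
                   3: "MODEL_DEACTIVATION", 4: "HANDOVER_DEACTIVATION"}

def apply_operation_mode(handover_command: dict) -> str:
    """Apply the pre-configured mode indicated in the handover command."""
    index = handover_command.get("ml_operation_mode_index")  # assumed field
    # Fall back to de-activation (non-ML operation) on an unknown index.
    mode = CANDIDATE_MODES.get(index, "MODEL_DEACTIVATION")
    # The indicated mode is applied before the handover is executed.
    return mode

print(apply_operation_mode({"ml_operation_mode_index": 2}))  # SUSPEND_RESUME
```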
Figure 1 shows an exemplary table of ML operation modes. This table describes four different modes of the pre-configured ML operations; additional ML operation modes can be configured in advance based on applications or implementation use cases. The available pre-configured ML operation mode information can be signaled to the UE through system information or a dedicated RRC message. When considering group-based UEs to apply ML operation modes, groupcast signaling can be used to provide indexed information about the available pre-configured ML operation modes. Mode index #1 is ML model switching: as an example, two-sided model operation can be switched to one-sided model operation before the handover is executed, if applicable. After the handover is complete, one-sided model operation can be switched back to two-sided model operation with the new gNB connection. As another example, the already activated ML model on the UE side can be switched to an alternative model that can operate independently of the source gNB during the handover procedure. The specific model information (e.g., model ID) for switching can be indicated together with the ML operation mode (in this case, mode index #1), and an indication message can contain both the model to switch to before the handover and the model to re-switch to after the handover. The UE can also have the option of autonomously selecting the alternative model for switching based on the indicated ML operation mode.
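A sketch of the indication message for mode index #1 follows, assuming hypothetical field names for the model to switch to before the handover and the model to re-switch to afterwards; the autonomous-selection branch is an illustrative placeholder policy.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ModelSwitchIndication:
    """Indication for mode index #1; field names are assumptions."""
    mode_index: int = 1
    model_before_ho: Optional[str] = None  # model ID to switch to before HO
    model_after_ho: Optional[str] = None   # model ID to re-switch to after HO

def select_model(ind: ModelSwitchIndication, local_models: List[str]) -> str:
    """If no model is indicated, the UE selects an alternative autonomously."""
    if ind.model_before_ho is not None:
        return ind.model_before_ho
    return local_models[0]   # placeholder autonomous-selection policy

ind = ModelSwitchIndication(model_before_ho="model-A", model_after_ho="model-B")
print(select_model(ind, ["local-model-1"]))   # -> model-A
```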
For mode index #2, which is a temporary suspend/resume mode, the already activated ML model is suspended when the handover is triggered to be executed, and the same model operation is resumed after the handover is completed. In this mode, no model switching is necessary; the model operation itself is only suspended, with no additional model-information exchange signaling until the handover is complete, and the model loaded on the device remains valid to resume its operation.
For mode index #3, an ML model de-activation mode, any activated model on the device that exchanges model information with the source gNB is de-activated when the handover is triggered to be executed; this mode can equivalently be considered a fallback to switching to non-ML operation. In this case, a new ML model is configured to be set up with the new gNB.
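A minimal sketch of mode index #3, assuming a hypothetical model record and configuration request; the actual setup procedure with the new gNB is not modeled here.

```python
def deactivate_on_handover(active_model: dict, target_gnb: str) -> dict:
    """Mode index #3: deactivate the source-linked model (fallback to
    non-ML operation) and request a new model setup with the new gNB."""
    active_model["state"] = "deactivated"
    return {"setup_new_model_with": target_gnb}

model = {"id": "two-sided-csi", "state": "active"}   # illustrative model record
print(deactivate_on_handover(model, "target-gnb-17"))
# {'setup_new_model_with': 'target-gnb-17'}
```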
For mode index #4, a handover de-activation mode, the handover execution is de-activated so that the already activated ML model is allowed to operate continuously. Alternatively, the handover execution is delayed until the ongoing model operation can be completed, and the handover is then activated for execution. In this case, a pre-configured time period can be applied to delay the handover until the model operation completes. Another applicable option in this mode is that the handover itself is skipped when the already activated ML model can operate independently, without a connection to the target gNB, and the model operation needs to continue to completion with priority. Skipping the handover procedure is implementation-specific, based on ML model operation applications and specific cell sites.
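The decision logic of mode index #4 can be sketched as follows; the predicate names and the returned action strings are assumptions for illustration, and a real implementation would tie them to actual handover triggers and timers.

```python
def handle_handover_deactivation(model_busy: bool,
                                 model_independent_of_target: bool,
                                 max_delay_s: float) -> str:
    """Illustrative decision sketch for mode index #4 (assumed policy)."""
    if model_busy and model_independent_of_target:
        # The model can run without the target gNB and must finish with
        # priority, so skipping the handover is an implementation option.
        return "skip_handover"
    if model_busy:
        # Delay execution up to the pre-configured period, then hand over.
        return f"delay_handover_up_to_{max_delay_s}s_then_execute"
    return "execute_handover_now"

print(handle_handover_deactivation(True, False, 1.5))
# delay_handover_up_to_1.5s_then_execute
```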
Figure 2 shows an exemplary block diagram of the ML model switching mode. In this example, the UE receives an indication message about the ML operation mode (e.g., through a handover command) and reports a confirmation message about the ML operation mode status. The handover is then executed to establish a new connection with the target gNB, and model re-switching is performed after the handover. In this scenario, the target/candidate model(s) to be switched to before/after the handover can be indicated in advance to determine the model switching configuration.
Figure 3 shows an exemplary block diagram of the temporary suspend/resume mode. In this example, the already activated ML model is suspended when the handover is triggered to be executed, and the same model operation is resumed after the handover is complete. In this mode, no model switching is necessary; the model operation itself is only suspended, with no additional model-information exchange signaling until the handover is complete, and the model loaded on the device remains valid to resume its operation. Optionally, a pre-configured time duration defining how long the ML model may remain suspended can be set. If the ML model cannot be resumed within this time duration, a new ML model configuration is set up with the target gNB after the handover.
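A sketch of the suspend/resume behavior with the optional timeout follows, assuming a polling loop for brevity where a real implementation would be event-driven; all names are illustrative.

```python
import time

class FakeHandover:
    """Test stub standing in for the real handover procedure."""
    def __init__(self, completes_after_s: float):
        self._done_at = time.monotonic() + completes_after_s
    def is_complete(self) -> bool:
        return time.monotonic() >= self._done_at

def suspend_resume(handover, suspend_timeout_s: float) -> str:
    """Suspend the loaded model until handover completes or a timeout fires."""
    deadline = time.monotonic() + suspend_timeout_s
    # Model is suspended; no model-information exchange until HO completes.
    while time.monotonic() < deadline:
        if handover.is_complete():
            return "resume_loaded_model"      # same on-device model is valid
        time.sleep(0.01)
    # Timeout: a new model configuration is set up with the target gNB.
    return "configure_new_model_with_target_gnb"

print(suspend_resume(FakeHandover(0.05), suspend_timeout_s=1.0))
# resume_loaded_model
```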
Figure 4 shows an exemplary block diagram of the ML model de-activation mode. In this example, any activated model on the device that exchanges model information with the source gNB is de-activated when the handover is triggered to be executed; this mode can equivalently be considered a fallback to switching to non-ML operation. In this case, a new ML model is configured to be set up with the new gNB.
Figure 5 shows an exemplary block diagram of the handover de-activation mode. In this example, the handover execution is de-activated so that the already activated ML model is allowed to operate continuously. Alternatively, the handover execution is delayed until the ongoing model operation can be completed, and the handover procedure is then re-activated for execution. In this case, the pre-configured time period can be applied to delay the handover until the model operation completes. Another applicable option for this mode is that the handover itself is skipped when the already activated ML model can operate independently, without a connection to the target gNB, and the model operation needs to continue to completion with priority. Skipping the handover procedure is implementation-specific, based on ML model operation applications and specific cell sites.
Figure 6 shows an exemplary signaling flow of applying an ML operation mode to a handover operation. The source gNB determines an ML operation mode for the UE from a candidate list, and the indication information about the determined ML operation mode together with the associated assistance information is sent to the UE through a handover command or L1/L2 signaling. Before the triggered handover is executed, the indicated ML operation mode is activated.
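A source-gNB-side sketch of this flow, with an assumed selection policy and assumed field names for the indication and assistance information:

```python
def determine_mode_for_ue(ue_status: dict, candidate_list: list) -> dict:
    """Select a mode for the UE and build the indication (assumed fields).

    The returned dictionary stands in for the content carried by a
    handover command or L1/L2 signaling.
    """
    # Assumed policy: if the UE-side model can run without the source
    # connection, prefer model switching; otherwise suspend/resume.
    mode = ("MODEL_SWITCHING" if ue_status.get("model_portable")
            else "SUSPEND_RESUME")
    assert mode in candidate_list
    return {"ml_operation_mode": mode,
            "assistance_info": {"alternative_model_id": ue_status.get("best_model")}}

candidates = ["MODEL_SWITCHING", "SUSPEND_RESUME",
              "MODEL_DEACTIVATION", "HANDOVER_DEACTIVATION"]
print(determine_mode_for_ue({"model_portable": True, "best_model": "m-42"},
                            candidates))
```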
Figure 7 shows an exemplary signaling flow of applying the temporary suspend/resume mode. When the handover is triggered to be executed, the already activated model is suspended without further exchange of ML-model-related signaling with the source gNB. ML model operation is resumed after the handover is complete. Optionally, a pre-configured time duration defining how long the ML model may remain suspended can be set. If the ML model cannot be resumed within this time duration, a new ML model configuration is set up with the target gNB after the handover.
Figure 8 shows an exemplary signaling flow of applying the handover de-activation mode. Handover execution is de-activated so that the already activated ML model is allowed to operate continuously. Alternatively, the handover execution is delayed until the ongoing model operation can be completed, and the handover procedure is then re-activated for execution. In this case, the pre-configured time period can be applied to delay the handover until the model operation completes.
Figure 9 shows an exemplary signaling flow of applying the ML model switching mode. This example shows that a two-sided model is used between the source gNB and the UE, and that one-sided models are switched in to replace the two-sided model when the handover is executed. The target/candidate model(s) to switch to before/after the handover can be indicated in advance to determine the model switching configuration.
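A minimal sketch of the two-sided-to-one-sided replacement around handover execution; class and method names are assumptions made for this illustration.

```python
class TwoSidedSession:
    """Illustrative session state for the Figure 9 behavior."""
    def __init__(self):
        self.mode = "two_sided"       # UE part + gNB part operating jointly

    def on_handover_execute(self):
        # Replace the two-sided pair with a standalone one-sided model so
        # operation survives the loss of the source-gNB model part.
        self.mode = "one_sided"

    def on_handover_complete(self, target_supports_two_sided: bool):
        # Re-switch to two-sided operation with the new gNB, if applicable.
        if target_supports_two_sided:
            self.mode = "two_sided"

session = TwoSidedSession()
session.on_handover_execute()
print(session.mode)                   # one_sided
session.on_handover_complete(True)
print(session.mode)                   # two_sided
```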
Claims
1. A method for a handover operation for a two-sided AI/ML model in a wireless network, the method comprising the steps of:
• Operating a first AI/ML model of the two-sided AI/ML model in a first node of the wireless network and operating a second AI/ML model of the two-sided AI/ML model in a second node of the wireless network,
• Receiving, at the second node, an AI/ML model status report from the first node,
• Receiving, at the second node, a handover request from a third node of the wireless network, the handover request indicating a future transition in communication from between the first node and the second node to between the first node and the third node,
• Configuring, upon receiving the handover request, at the second node, based on the AI/ML model status report, an AI/ML operation mode defining handover actions,
• Receiving, at the first node, the AI/ML operation mode,
• Activating the AI/ML operation mode, thereby executing the handover actions, and
• Executing the handover operation, thereby transitioning communication from between the first node and the second node to between the first node and the third node.
2. The method according to claim 1, characterized in that the handover actions are executed by the first and/or second and/or third node.
3. The method according to claim 1 or 2, characterized in that the first node is a first base station (gNB), the second node is a user equipment (UE), and the third node is a second base station (gNB).
4. The method according to any of the previous claims, characterized in that the status report includes measurement and/or mobility information.
5. The method according to any of the previous claims, characterized in that a multitude of AI/ML operation modes are pre-configured in an AI/ML operation mode list based on applications or implementation use cases.
6. The method according to any of the previous claims, characterized in that the AI/ML operation mode or the pre-configured AI/ML operation mode list is transmitted via system information or a dedicated RRC message.
7. The method according to any of the previous claims, characterized in that a handover command is signaled via L1 or L2 signaling to the first node, the handover command informing the first node about the determined AI/ML operation mode and associated assistance information.
8. The method according to claim 5, characterized in that the pre-configured AI/ML operation mode list is indexed and transmitted to group-based user equipments (UEs) via groupcast signaling.
9. The method according to any of the previous claims, characterized in that the handover actions comprise at least one of the following:
• Switching from two-sided model operation to one-sided model operation at the first and/or second node before the handover operation, and switching back to two-sided model operation after the handover operation;
• Switching the first AI/ML model to an alternative AI/ML model;
• Temporarily suspending the first AI/ML model before the handover operation, and resuming the first AI/ML model after the handover operation;
• Configuring a time duration defining how long the two-sided AI/ML model is allowed to continue operation;
• Deactivating the first AI/ML model before the handover operation, and activating the first AI/ML model or an alternative AI/ML model after the handover operation;
• Deactivating the handover operation;
• Delaying the handover operation until the two-sided AI/ML model operation is completed;
• Configuring a time duration defining a delay of the handover operation.
10. An apparatus for a handover operation for a two-sided AI/ML model in a wireless network, comprising a wireless transceiver and a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement the steps of claims 1 to 9.
11. A user equipment (UE) for a handover operation for a two-sided AI/ML model in a wireless network, comprising an apparatus according to claim 10.
12. A base station (gNB) for a handover operation for a two-sided AI/ML model in a wireless network comprising an apparatus according to claim 10.
13. A wireless communication system for a handover operation for a two-sided AI/ML model in a wireless network, comprising the base station (gNB) according to claim 12 and the user equipment (UE) according to claim 11, the base station comprising a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement the steps of claims 1 to 9, and the user equipment (UE) comprising a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement the steps of claims 1 to 9.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DE102023207424 | 2023-08-02 | | |
| DE102023207424.9 | 2023-08-02 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025027055A1 (en) | 2025-02-06 |
Family
ID=92295613
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2024/071635 (WO2025027055A1, pending) | Method for a handover operation for a two-sided AI/ML model | 2023-08-02 | 2024-07-31 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025027055A1 (en) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2021123285A1 (en) | 2019-12-20 | 2021-06-24 | Sony Corporation | Communications device, infrastructure equipment and methods for performing handover using a model based on machine learning |
| WO2021259492A1 (en) | 2020-06-26 | 2021-12-30 | Telefonaktiebolaget Lm Ericsson (Publ) | Training a machine learning model |
| WO2022034259A1 (en) | 2020-08-11 | 2022-02-17 | Nokia Technologies Oy | Communication system for machine learning metadata |
| WO2022058020A1 (en) | 2020-09-18 | 2022-03-24 | Nokia Technologies Oy | Evaluation and control of predictive machine learning models in mobile networks |
| WO2022199824A1 (en) | 2021-03-25 | 2022-09-29 | Telefonaktiebolaget Lm Ericsson (Publ) | Methods for improved federated machine learning in wireless networks |
| WO2022258196A1 (en) | 2021-06-11 | 2022-12-15 | Nokia Technologies Oy | Determine handover parameters by machine learning |
Non-Patent Citations (1)
| Title |
|---|
| SAKIRA HASSAN ET AL: "AIML methods", 3GPP TSG RAN WG2, meeting #121 (Athens, GR, 27 February - 3 March 2023), R2-2300398, 16 February 2023, XP052245045, retrieved from the Internet: <URL:https://www.3gpp.org/ftp/TSG_RAN/WG2_RL2/TSGR2_121/Docs/R2-2300398.zip> [retrieved on 2023-02-16] * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24754221; Country of ref document: EP; Kind code of ref document: A1 |