
WO2025087720A1 - Method of a network-assisted indirect ML LCM operation - Google Patents


Info

Publication number
WO2025087720A1
WO2025087720A1 (PCT/EP2024/078908)
Authority
WO
WIPO (PCT)
Prior art keywords
target
relay
lcm
condition
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/EP2024/078908
Other languages
French (fr)
Inventor
Hojin Kim
Andreas Andrae
Rikin SHAH
Reuben GEORGE STEPHEN
Current Assignee
Aumovio Germany GmbH
Original Assignee
Continental Automotive Technologies GmbH
Priority date
Filing date
Publication date
Application filed by Continental Automotive Technologies GmbH filed Critical Continental Automotive Technologies GmbH
Publication of WO2025087720A1 publication Critical patent/WO2025087720A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/02Arrangements for optimising operational condition

Definitions

  • the present disclosure relates to AI/ML operation pre-configuration, where techniques for re-configuring and signaling the specific information to improve the efficient signaling of sidelink-based models are presented.
  • AI/ML artificial intelligence/machine learning
  • RP-213599 3GPP TSG RAN (Technical Specification Group Radio Access Network) meeting #94e.
  • the official title of the AI/ML study item is “Study on AI/ML for NR Air Interface”, and currently RAN WG1 and WG2 are actively working on a specification.
  • the goal of this study item is to identify a common AI/ML framework and areas of obtaining gains using AI/ML based techniques with use cases and LCM (lifecycle management).
  • the main objective of this study item is to study AI/ML frameworks for air-interface with target use cases by considering performance, complexity, and potential specification impact.
  • AI/ML model terminology and description to identify common and specific characteristics for the framework will be one of the key work scopes.
  • various aspects are under consideration for investigation and one of the key items is about lifecycle management of AI/ML models where multiple stages are included as mandatory for model training, model deployment, model inference, model monitoring, model updating etc.
  • UE mobility was also considered as one of the AI/ML use cases and one of the scenarios for model training/inference is that both functions are located within a RAN node.
  • AI Artificial Intelligence
  • ML Machine Learning
  • UE ML conditions to support RAN-based AI/ML models can be considered very significant for both gNBs and UEs to meet any desired model operations (e.g., model training/inference/selection/switching/update/monitoring, etc.).
  • model training/inference/selection/switching/update/monitoring etc.
  • there is no specification defined for signaling methods or gNB-UE behaviors about the distribution of split LCMs over sidelink relay links when a RAN-based AI/ML model operation proceeds. Therefore, it is necessary to investigate any specification impact by considering model operations through sidelink relay links. Mechanisms of additional signaling methods and/or gNB-UE behaviors also need to be addressed to support relay-based model operation between gNBs and UEs, so that any potential impact of UE ML conditions on model operation in RAN is minimized while maintaining service continuity.
  • the terminologies of the working list contain a set of high-level descriptions about AI/ML model training, inference, validation, testing, UE-side model, network-side model, one-sided model, two-sided model, etc.
  • UE-sided model and network-sided model indicate that an AI/ML model is located for operation on the UE side and the network side, respectively.
  • one-sided and two-sided models indicate that an AI/ML model is located on one side and on two sides, respectively.
  • US 2021 203 565 A1 discloses methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training and using machine learning models to classify network traffic as IoT traffic or non-IoT traffic and managing the traffic based on the classification.
  • machine learning parameters of a local machine learning model trained by an edge device are received from each of at least a subset of a set of edge devices.
  • the machine learning parameters received from an edge device are parameters of the local machine learning model trained by the edge device based on local network traffic processed by the edge device and to classify the network traffic as Internet of Things (IoT) traffic or non-IoT traffic.
  • a global machine learning model is generated, using the machine learning parameters, to classify network traffic processed by edge devices as IoT traffic or non-IoT traffic.
  • US 2022 261 697 A1 discloses systems and methods for federated machine learning.
  • a central system receives satellite analytics artifacts from a plurality of satellite site systems and generates a central machine learning model based on the satellite analytics artifacts.
  • a plurality of federated machine learning epochs are executed.
  • the central system transmits the central machine learning model to the plurality of satellite site systems and then receives in return, from each satellite site system, a respective set of satellite values for a set of weights of the model, wherein the satellite values are generated by the respective satellite site system based on a respective local dataset of the satellite site system.
  • the central system then generates an updated version of the central machine learning model based on the satellite values received from the satellite site systems.
  • US 2023 037 893 A1 provides a method for generating a real-time radio coverage map in a wireless network by a network apparatus.
  • the method includes: receiving real-time geospatial information from one or more geographical sources in the wireless network; determining handover information of at least one user equipment (UE) in the wireless network from a plurality of base stations based on the real-time geospatial information; and generating the real-time radio coverage map based on the handover information of at least one UE and the real-time geospatial information.
  • UE user equipment
  • WO 2022 015 008 A1 provides a method for determining a target cell for handover of a UE.
  • the method includes monitoring, by a mobility management platform, multiple network characteristics associated with multiple UEs and determining, by the mobility management platform, a correlation between the multiple UEs based on the multiple network characteristics associated with the multiple UEs and location information of the multiple UEs.
  • the method includes receiving, by the mobility management platform, location information from a UE of the multiple UEs and determining, by the mobility management platform, a measurement report corresponding to the location information received from the UE based on the correlation.
  • the method includes determining, by the mobility management platform, the target cell for the handover of the UE based on the location information received from the UE and the measurement report.
  • sensing agents communicate with user equipments (UEs) or nodes using one of multiple sensing modes through non-sensing-based or sensing-based links.
  • artificial intelligence (Al) agents communicate with UEs or nodes using one of multiple Al modes through non-AI-based or Al-based links.
  • Al and sensing may work independently or together.
  • a sensing service request may be sent by an Al block to a sensing block to obtain sensing data from the sensing block, and the Al block may generate a configuration based on the sensing data.
  • Various other features, related to example interfaces, channels, and other aspects of Al-enabled and/or sensing-enabled communications, for example, are also disclosed.
  • Figure 1 is an exemplary block diagram of an indirect LCM operation using a relay UE and target UEs.
  • Figure 2 is an exemplary block diagram of an information exchange about ML LCM/model operation between UE devices.
  • Figure 3 is an exemplary flow chart of a network sided behavior for LCM operation modes.
  • Figure 4 is an exemplary flow chart of a target UE behavior for LCM operation modes.
  • Figure 5 is an exemplary flow chart of a relay UE behavior for LCM operation modes.
  • Figure 6 is an exemplary signaling flow of a UE-sided decision of switching between direct LCM and indirect LCM operation mode.
  • Figure 7 is an exemplary signaling flow of a network-sided decision of switching between direct LCM and indirect LCM operation mode.
  • Figure 8 is an exemplary signaling flow of a sidelink ML condition based relay UE selection.
  • Figure 9 is an exemplary block diagram of categories of ML conditions.
  • a more general term “network node” may be used and may correspond to any type of radio network node or any network node, which communicates with a UE (directly or via another node) and/or with another network node.
  • network nodes are NodeB, MeNB, ENB, a network node belonging to MCG or SCG, base station (BS), multi-standard radio (MSR) radio node such as MSR BS, eNodeB, gNodeB, network controller, radio network controller (RNC), base station controller (BSC), relay, donor node controlling relay, base transceiver station (BTS), access point (AP), transmission points, transmission nodes, RRU, RRH, nodes in distributed antenna system (DAS), core network node (e.g., MSC, MME), etc.
  • MSC Mobile Switching Center
  • MME Mobility Management Entity
  • O&M Operations & Maintenance
  • OSS Operations Support System
  • SON Self Optimized Network
  • positioning node, e.g., Evolved Serving Mobile Location Centre (E-SMLC)
  • E-SMLC Evolved- Serving Mobile Location Centre
  • MDT Minimization of Drive Tests
  • test equipment (physical node or software)
  • another UE, etc.
  • the non-limiting term user equipment (UE) or wireless device may be used and may refer to any type of wireless device communicating with a network node and/or with another UE in a cellular or mobile communication system.
  • examples of a UE are target device, device-to-device (D2D) UE, machine type UE or UE capable of machine to machine (M2M) communication, PDA, PAD, Tablet, mobile terminals, smart phone, laptop embedded equipment (LEE), laptop mounted equipment (LME), USB dongles, UE category M1, UE category M2, ProSe UE, V2V UE, V2X UE, etc.
  • terminologies such as base station/gNodeB and UE should be considered non-limiting and in particular do not imply a certain hierarchical relation between the two; in general, “gNodeB” could be considered as device 1 and “UE” could be considered as device 2 and these two devices communicate with each other over some radio channel. And in the following the transmitter or receiver could be either gNodeB (gNB), or UE.
  • gNB gNodeB
  • aspects of the embodiments may be embodied as a system, apparatus, method, or computer program product.
  • embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects.
  • the disclosed embodiments may be implemented as a hardware circuit comprising custom very-large-scale integration (“VLSI”) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components.
  • VLSI very-large-scale integration
  • the disclosed embodiments may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like.
  • the disclosed embodiments may include one or more physical or logical blocks of executable code which may, for instance, be organized as an object, procedure, or function.
  • embodiments may take the form of a computer program product embodied in one or more computer readable storage devices storing machine readable code, computer readable code, and/or program code, referred hereafter as code.
  • the storage devices may be tangible, non-transitory, and/or non-transmission.
  • the storage devices may not embody signals. In a certain embodiment, the storage devices only employ signals for accessing code.
  • the computer readable medium may be a computer readable storage medium.
  • the computer readable storage medium may be a storage device storing the code.
  • the storage device may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, holographic, micromechanical, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • More specific examples (a non-exhaustive list) of the storage device would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random-access memory (“RAM”), a read-only memory (“ROM”), an erasable programmable read-only memory (“EPROM” or Flash memory), a portable compact disc read-only memory (“CD-ROM”), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Code for carrying out operations for embodiments may be any number of lines and may be written in any combination of one or more programming languages including an object-oriented programming language such as Python, Ruby, Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language, or the like, and/or machine languages such as assembly languages.
  • the code may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user’s computer through any type of network, including a local area network (“LAN”), wireless LAN (“WLAN”), or a wide area network (“WAN”), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider (“ISP”)).
  • LAN local area network
  • WLAN wireless LAN
  • WAN wide area network
  • ISP Internet Service Provider
  • the code may also be stored in a storage device that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the storage device produce an article of manufacture including instructions which implement the function/act specified in the flowchart diagrams and/or block diagrams.
  • the code may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer implemented process such that the code which executes on the computer or other programmable apparatus provides processes for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams.
  • each block in the flowchart diagrams and/or block diagrams may represent a module, segment, or portion of code, which includes one or more executable instructions of the code for implementing the specified logical function(s).
  • An AI/ML lifecycle can be split into several stages such as data collection/pre-processing, model training, model testing/validation, model deployment/update, model monitoring, model switching/selection etc., where each stage is equally important to achieve target performance with any specific model(s).
  • one of the challenging issues is to manage the lifecycle of the AI/ML model. This is mainly because data/model drift occurs during model deployment/inference, which results in performance degradation of the AI/ML model. Fundamentally, statistical changes in the dataset occur after the model is deployed, and model inference capability is also impacted when unseen data is used as input.
  • the statistical property of dataset and the relationship between input and output for the trained model can be changed with drift occurrence.
  • model adaptation is required to support operations such as model switching, re-training, fallback, etc.
  • AI/ML model enabled wireless communication network it is then important to consider how to handle adaptation of AI/ML model under operations such as model training, inference, monitoring, updating, etc.
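The drift-driven adaptation described above can be illustrated with a small sketch. This is a hypothetical example, not taken from the disclosure: the drift metric (population stability index), the threshold value, and the action names are all illustrative assumptions.

```python
# Hypothetical sketch: detecting data drift on collected input samples to
# trigger model adaptation (e.g., switching, re-training, fallback).
# The PSI metric and the 0.2 threshold are illustrative assumptions.
import math

def psi(expected, actual, bins=10):
    """Population stability index between two 1-D samples."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        n = len(xs)
        return [max(c / n, 1e-6) for c in counts]  # avoid log(0)
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def adaptation_action(train_sample, live_sample, threshold=0.2):
    """Return an LCM adaptation action based on observed drift."""
    return "re-train" if psi(train_sample, live_sample) > threshold else "keep"
```

With identical training and live distributions the PSI is zero and the model is kept; a clearly shifted live distribution yields a large PSI and triggers re-training.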
  • ML applicable conditions for LCM (lifecycle management) operations can be significantly changed with different use cases and environmental properties.
  • AI/ML based techniques are currently applied to many different applications and 3GPP also started to work on its technical investigation to apply to multiple use cases based on the observed potential gains.
  • model performance such as inferencing and/or training is dependent on different model execution environments with varying configuration parameters.
  • a first aspect of this invention discloses a method of a network-assisted indirect ML LCM operation in a wireless network, the wireless network at least comprising a target UE in a coverage location of a base station and at least one relay UE, the method comprising the steps:
  • the wireless network comprises multiple target UEs for which LCM operations are indirectly executed on the at least first relay UE.
  • the base station generates the decision indication message based on a target UE ML condition update message which is based on an on-device measurement of the target UE ML condition.
  • the target UE generates the decision indication message based on an on-device measurement of the target UE ML condition.
  • a periodicity message is sent to the target UE and/or the at least one relay UE, the periodicity message defining times to perform the target UE ML condition measurement and/or the relay UE ML capability estimation.
  • At least one additional UE device jointly performs the LCM operation with the target UE or the multiple target UEs.
  • multiple relay UEs are selected to jointly execute the LCM operation if the at least first relay UE cannot support the LCM operation alone, which is determined based on a sidelink ML condition update message from the at least one relay UE comprising the estimated relay UE ML capability.
  • the criteria correspond to threshold values which are configured from an assessment of target UE ML conditions to execute any given specific LCM operation.
  • the collaboration indication message indicates: • An indirect LCM operation if the measured target UE ML condition is lower than a corresponding threshold value;
  • the target UE measures the target UE ML condition in a non-periodic approach and sends a target UE ML condition update message to the base station, which triggers the step of generating the decision indication message.
  • the generation of the decision indication message is based on one of: network traffic congestion or network level energy saving.
  • the target UE ML condition is based on elements related to support the LCM, the elements being at least one of:
  • Target UE compute processing power, memory size, battery power, Tx/Rx configuration/setting info
  • the at least first relay UE is configured to provide ML condition updates to the base station based on a relay ML configuration information periodically or non-periodically.
  • a second aspect of this invention discloses an apparatus for sidelink positioning in a wireless communication system, the apparatus comprising: a wireless transceiver, a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement above-described steps and the apparatus is designed for the use in a base station (gNB).
  • gNB base station
  • a third aspect of this invention discloses an apparatus for sidelink positioning in a wireless communication system, the apparatus comprising: a wireless transceiver, a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement above-described steps and the apparatus is designed for the use in user equipment (UE).
  • UE user equipment
  • a fourth aspect of this invention discloses a base station (gNB) comprising an above-described apparatus.
  • a fifth aspect of this invention discloses a User Equipment comprising an above-described apparatus.
  • a sixth aspect of this invention discloses a wireless communication system, comprising at least an above-described base station (gNB) and at least an above-described user equipment (UE).
  • gNB base station
  • UE user equipment
  • ML collaboration types are defined such as direct LCM operation and indirect LCM operation.
  • in direct LCM operation, any specific LCM is executed between the network side and the target UE(s) directly, without additional UE device involvement such as relay and/or neighboring UEs.
  • in indirect LCM operation, additional UE device(s) can join to perform part of the configured LCM operation together with the target UE(s) based on model management by the network side, where an additional UE device can also be indicated as a relay UE.
  • the network side e.g., gNB
  • the network side generates the ML configuration information for the overall LCM operation to be executed between the network side and the target/relay UE(s).
  • indirect LCM operation can be triggered for activation based on pre-configured criteria such as a threshold-based measurement.
  • the network side configures the specific threshold to be sent to a single UE or a group of UEs (e.g., through system information and/or dedicated RRC message) so that the threshold value is configured from an assessment of the target UE with the associated ML conditions (e.g., representing capability of supporting the configured LCM/model activation) to execute any given specific LCM operations (e.g., training/inferencing, etc.).
  • Another example of deciding indirect LCM operation can be based on a network side implementation such as network traffic congestion or network level energy saving where indirect LCM can bring improvement.
  • the network side can also configure periodicity values for on-device measurements of the ML condition change status to compare with the configured threshold, where the target UE periodically measures and compares its current ML condition value with the configured threshold. For example, if the measured current value is lower than the configured threshold, the UE informs the network (e.g., through L1/L2/L3 signaling) via an indication message to execute indirect LCM operation. Conversely, if the measured current value is higher than or equal to the configured threshold, the UE reports to the network side so that direct LCM operation can proceed. In a non-periodic approach, threshold-based measurements at the target UE can also be performed when any ML condition change is detected.
  • a sidelink ML condition update is introduced to indicate whether ML conditions of the relay UE for LCM/model support capability can satisfy the relay ML configuration sent by the network after the network determines indirect LCM operation.
  • the network can decide whether indirect LCM can be formed with candidate relay UE(s) to achieve the configured target performance.
  • a final selection of relay UE(s) is made by the network. For example, two or more relay UEs can be selected if any single candidate relay UE cannot support the configured relay ML functionality.
  • model information and/or on-device LCM operation related information can be exchanged among target UEs and/or relay UEs, so that the exchanged information about the configured and/or activated LCM/model operation can serve as reference information for each UE device.
  • a decision of switching between direct LCM and indirect LCM can be based on the following threshold measurement signaling process. Firstly, the network side determines the difference between the configured threshold and the measurement indicated in the received indication message about the on-device ML condition from the UE, so that either direct or indirect LCM operation is enabled. Secondly, the UE autonomously determines the difference between the configured threshold and the measurement so as to identify whether the on-device ML condition is above or below the configured threshold.
  • multiple target UEs and relay UEs can be combined to support the overall LCM operation configured by the network with a different number of target UEs and/or relay UEs based on a sidelink channel quality and an on-device ML condition level of the target/relay UEs.
  • Candidate relay UE devices can be configured to provide their ML condition updates to the network side based on a relay ML configuration information periodically or non-periodically.
  • the ML condition indicates any elements related to support the configured model for specific LCM operations to represent capability of supporting the configured LCM/model.
  • a ML condition can be divided into two categories such as device ML condition and non-device ML condition.
  • Device ML conditions contain HW-/SW-specific ML capabilities, and non-device ML conditions contain environmental ML conditions and LCM/model ML conditions.
  • a device HW-specific ML capability indicates HW elements to support ML operation such as ML compute processing power, memory size, battery power, Tx/Rx configuration/setting info, etc.
  • a device SW-specific ML capability indicates SW elements to support ML operation such as ML-related framework/library, etc.
  • Environmental ML conditions indicate device geographical location, wireless connection links information, device mobility, neighboring network/device ML information, etc.
  • the LCM/model ML condition indicates ML-related configuration capability (e.g., supported model list, open-format/proprietary model formats, model structure supportability, LCM operability, etc.) to support the ML operation (not belonging to device HW/SW capability or environmental ML condition). Based on different ML condition configurations, any specific combinations of the above ML condition categories can be used for use-case-specific ML operation for network-device communication or device-device communication.
  • Figure 1 shows an exemplary block diagram of an indirect LCM operation using relay UE and target UEs.
  • one or more target UE devices are connected with a relay UE device.
  • the configured LCM operation is executed on each device so that the output information of a sidelink-based LCM operation across relay and target UEs is sent to the network side.
  • Figure 2 shows an exemplary block diagram of an information exchange about ML LCM/model operation between UE devices.
  • the exchanged information can be used to perform on-device LCM operations on each side as reference data or parameters, where any exchangeable information scope can be pre-configured in advance by the network side and/or the device side.
  • Figure 3 shows an exemplary flow chart of a network sided behavior for LCM operation modes.
  • the default LCM operation mode is a direct LCM by communicating between the network side and the target UE without an additional relay UE.
  • the UE report about enabling indirect LCM is monitored so that indirect LCM can be activated based on an indication message (e.g., through L1/L2/L3 signaling).
  • Figure 4 shows an exemplary flow chart of a target UE behavior for LCM operation modes.
  • the ML condition is measured to be compared with the configured threshold so that an indication message can be sent to the network for indirect LCM activation. This measurement can be performed periodically or non-periodically. Also, the indication message can be sent to candidate or configured relay UEs.
  • Figure 5 shows an exemplary flow chart of a relay UE behavior for LCM operation modes.
  • the request of enabling the relay UE to support indirect LCM operation is monitored so that a relay ML configuration is received to join indirect LCM operation with target UE(s).
  • Figure 6 shows an exemplary signaling flow of a UE-sided decision of switching between direct LCM and indirect LCM operation mode.
  • the UE autonomously determines the difference between configured thresholds and the measurements so as to identify whether the on-device ML condition is above or below the configured threshold.
  • the indication message of a measurement decision is sent to the network side so that the network side can finally confirm a ML collaboration type between direct LCM and indirect LCM operation mode.
  • Figure 7 shows an exemplary signaling flow of a network-sided decision of switching between direct LCM and indirect LCM operation mode.
  • the network side determines the difference between the configured thresholds and the measurements indicated in the received indication message about the on-device ML condition from the UE, so that either direct or indirect LCM operation is enabled.
  • Figure 8 shows an exemplary signaling flow of a sidelink ML condition based relay UE selection.
  • a sidelink ML condition update is sent to the network side based on the relay ML configuration information so that the relay UE can be selected to support an indirect LCM operation mode.
  • FIG. 9 shows an exemplary block diagram of categories of ML conditions.
  • ML conditions can be divided into two categories such as device ML condition and non-device ML condition.
  • Device ML conditions contain HW-/SW-specific ML capabilities
  • non-device ML conditions contain environmental ML conditions and LCM/model ML conditions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Methods of pre-configuring AI/ML operations when multi-connectivity links are enabled in a wireless mobile communication system including base stations (e.g., gNB) and mobile stations (e.g., UE) are presented. If an AI/ML model is applied to a radio access network, model performance such as inferencing and/or training is dependent on different model execution environments with varying device capabilities. Therefore, by re-configuring any model in operation with multiple devices including relay links, the potential performance impact due to dynamic changes of applicable model conditions for LCM model operations can be reduced with model performance enhancements.

Description

TITLE
Method of a network-assisted indirect ML LCM operation
TECHNICAL FIELD
The present disclosure relates to AI/ML operation pre-configuration, where techniques for re-configuring and signaling the specific information to improve the efficient signaling of sidelink-based models are presented.
BACKGROUND
In 3GPP (3rd Generation Partnership Project), one of the selected study items as of the approved Release 18 package is AI/ML (artificial intelligence/machine learning) as described in the related document (RP-213599) addressed in 3GPP TSG RAN (Technical Specification Group Radio Access Network) meeting #94e. The official title of the AI/ML study item is “Study on AI/ML for NR Air Interface”, and currently RAN WG1 and WG2 are actively working on a specification. The goal of this study item is to identify a common AI/ML framework and areas of obtaining gains using AI/ML based techniques with use cases and LCM (lifecycle management).
According to 3GPP, the main objective of this study item is to study AI/ML frameworks for air-interface with target use cases by considering performance, complexity, and potential specification impact. In particular, AI/ML model terminology and description to identify common and specific characteristics for the framework will be one of the key work scopes. Regarding AI/ML frameworks, various aspects are under consideration for investigation and one of the key items is about lifecycle management of AI/ML models where multiple stages are included as mandatory for model training, model deployment, model inference, model monitoring, model updating etc.
Earlier, in 3GPP TR 37.817 for Release 17, titled “Study on enhancement for Data Collection for NR and EN-DC”, UE mobility was also considered as one of the AI/ML use cases, and one of the scenarios for model training/inference is that both functions are located within a RAN node. Subsequently, in Release 18, the new work item “Artificial Intelligence (AI)/Machine Learning (ML) for NG-RAN” was initiated to specify data collection enhancements and signaling support within existing NG-RAN interfaces and architectures, where mobility optimization is included as one of the target use cases.
For the above active standardization works, UE ML conditions to support RAN-based AI/ML models can be considered very significant for both gNBs and UEs to meet any desired model operations (e.g., model training/inference/selection/switching/update/monitoring, etc.). Currently, there is no specification defined for signaling methods or gNB-UE behaviors about distribution of split LCMs with sidelink relay links when a RAN-based AI/ML model operation proceeds. Therefore, it is necessary to investigate any specification impact by considering model operations through sidelink relay links. Any mechanism of additional signaling methods and/or gNB-UE behaviors also need to be addressed to support relay-based model operation between gNBs and UEs so that any potential impact of UE ML conditions on model operation in RAN should be minimized with service continuity.
On the other hand, in 3GPP the terminologies of the working list contain a set of high-level descriptions about AI/ML model training, inference, validation, testing, UE-side model, network-side model, one-sided model, two-sided model, etc. UE-side model and network-side model indicate that an AI/ML model is located for operation on the UE side and the network side, respectively. In a similar context, one-sided and two-sided model indicate that an AI/ML model is located on one side and on two sides, respectively.
All signaling aspects to support the above items are currently not specified yet as definitions of terminologies are still under discussion for further modifications.
Any potential standard impact with new or enhanced mechanisms of supporting AI/ML models with the above working list items is one of the key areas for investigation in the AI/ML study item.
US 2021 203 565 A1 discloses methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training and using machine learning models to classify network traffic as IoT traffic or non-IoT traffic and managing the traffic based on the classification. In some implementations, machine learning parameters of a local machine learning model trained by an edge device are received from each of at least a subset of a set of edge devices. The machine learning parameters received from an edge device are parameters of the local machine learning model trained by the edge device based on local network traffic processed by the edge device to classify the network traffic as Internet of Things (IoT) traffic or non-IoT traffic. A global machine learning model is generated, using the machine learning parameters, to classify network traffic processed by edge devices as IoT traffic or non-IoT traffic.
US 2022 261 697 A1 discloses systems and methods for federated machine learning. A central system receives satellite analytics artifacts from a plurality of satellite site systems and generates a central machine learning model based on the satellite analytics artifacts. A plurality of federated machine learning epochs are executed. At each epoch, the central system transmitting the central machine learning model to the plurality of satellite site systems, and then receives in return, from each satellite site system, a respective set of satellite values for a set of weights of the model, wherein the satellite values are generated by the respective satellite site system based on a respective local dataset of the satellite site system. At each epoch, the central system then generates an updated version of the central machine learning model based on the satellite values received from the satellite site systems.
US 2023 037 893 A1 provides a method for generating a real-time radio coverage map in a wireless network by a network apparatus. The method includes: receiving real-time geospatial information from one or more geographical sources in the wireless network; determining handover information of at least one user equipment (UE) in the wireless network from a plurality of base stations based on the real-time geospatial information; and generating the real-time radio coverage map based on the handover information of at least one UE and the real-time geospatial information.
WO 2022 015 008 A1 provides a method for determining a target cell for handover of a UE. The method includes monitoring, by a mobility management platform, multiple network characteristics associated with multiple UEs and determining, by the mobility management platform, a correlation between the multiple UEs based on the multiple network characteristics associated with the multiple UEs and location information of the multiple UEs. The method includes receiving, by the mobility management platform, location information from a UE of the multiple UEs and determining, by the mobility management platform, a measurement report corresponding to the location information received from the UE based on the correlation. And the method includes determining, by the mobility management platform, the target cell for the handover of the UE based on the location information received from the UE and the measurement report.
WO 2022 205 023 A1 discloses systems, methods, and apparatus on wireless network architecture and air interface. In some embodiments, sensing agents communicate with user equipments (UEs) or nodes using one of multiple sensing modes through non-sensing-based or sensing-based links, and/or artificial intelligence (AI) agents communicate with UEs or nodes using one of multiple AI modes through non-AI-based or AI-based links. AI and sensing may work independently or together. For example, a sensing service request may be sent by an AI block to a sensing block to obtain sensing data from the sensing block, and the AI block may generate a configuration based on the sensing data. Various other features, related to example interfaces, channels, and other aspects of AI-enabled and/or sensing-enabled communications, for example, are also disclosed.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosed invention will be further discussed in the following based on preferred embodiments presented in the attached drawings. However, the disclosed invention may be embodied in many different forms and should not be construed as limited to said preferred embodiments. Rather, said preferred embodiments are provided for thoroughness and completeness, and fully convey the scope of the invention to the skilled person. The following detailed description refers to the attached drawings, in which:
Figure 1 is an exemplary block diagram of an indirect LCM operation using a relay UE and target UEs.
Figure 2 is an exemplary block diagram of an information exchange about ML LCM/model operation between UE devices.
Figure 3 is an exemplary flow chart of a network sided behavior for LCM operation modes.
Figure 4 is an exemplary flow chart of a target UE behavior for LCM operation modes.
Figure 5 is an exemplary flow chart of a relay UE behavior for LCM operation modes.
Figure 6 is an exemplary signaling flow of a UE-sided decision of switching between direct LCM and indirect LCM operation mode.
Figure 7 is an exemplary signaling flow of a UE-sided decision of switching between direct LCM and indirect LCM operation mode.
Figure 8 is an exemplary signaling flow of a sidelink ML condition based relay UE selection.
Figure 9 is an exemplary block diagram of categories of ML conditions.
DETAILED DESCRIPTION
The detailed description set forth below, with reference to the annexed drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In particular, although terminology from 3GPP 5G NR may be used in this disclosure to exemplify embodiments herein, this should not be seen as limiting the scope of the invention.
Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Other embodiments, however, are contained within the scope of the subject matter disclosed herein; the disclosed subject matter should not be construed as limited to only the embodiments set forth herein. Rather, these embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art.
Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein may be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments may apply to any other embodiments, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the following description.
In some embodiments, a more general term “network node” may be used and may correspond to any type of radio network node or any network node, which communicates with a UE (directly or via another node) and/or with another network node. Examples of network nodes are NodeB, MeNB, ENB, a network node belonging to MCG or SCG, base station (BS), multi-standard radio (MSR) radio node such as MSR BS, eNodeB, gNodeB, network controller, radio network controller (RNC), base station controller (BSC), relay, donor node controlling relay, base transceiver station (BTS), access point (AP), transmission points, transmission nodes, RRU, RRH, nodes in distributed antenna system (DAS), core network node (e.g. Mobile Switching Center (MSC), Mobility Management Entity (MME), etc.), Operations & Maintenance (O&M), Operations Support System (OSS), Self Optimized Network (SON), positioning node (e.g. Evolved- Serving Mobile Location Centre (E-SMLC)), Minimization of Drive Tests (MDT), test equipment (physical node or software), another UE, etc.
In some embodiments, the non-limiting term user equipment (UE) or wireless device may be used and may refer to any type of wireless device communicating with a network node and/or with another UE in a cellular or mobile communication system. Examples of UE are target device, device to device (D2D) UE, machine type UE or UE capable of machine to machine (M2M) communication, PDA, PAD, Tablet, mobile terminals, smart phone, laptop embedded equipped (LEE), laptop mounted equipment (LME), USB dongles, UE category Ml, UE category M2, ProSe UE, V2V UE, V2X UE, etc.
Additionally, terminologies such as base station/gNodeB and UE should be considered non-limiting and in particular do not imply a certain hierarchical relation between the two; in general, “gNodeB” could be considered as device 1 and “UE” could be considered as device 2 and these two devices communicate with each other over some radio channel. And in the following the transmitter or receiver could be either gNodeB (gNB), or UE.
As will be appreciated by one skilled in the art, aspects of the embodiments may be embodied as a system, apparatus, method, or computer program product.
Accordingly, embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects. For example, the disclosed embodiments may be implemented as a hardware circuit comprising custom very-large-scale integration (“VLSI”) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. The disclosed embodiments may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like. As another example, the disclosed embodiments may include one or more physical or logical blocks of executable code which may, for instance, be organized as an object, procedure, or function.
Furthermore, embodiments may take the form of a computer program product embodied in one or more computer readable storage devices storing machine readable code, computer readable code, and/or program code, referred hereafter as code. The storage devices may be tangible, non- transitory, and/or non-transmission. The storage devices may not embody signals. In a certain embodiment, the storage devices only employ signals for accessing code.
Any combination of one or more computer readable medium may be utilized. The computer readable medium may be a computer readable storage medium. The computer readable storage medium may be a storage device storing the code. The storage device may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, holographic, micromechanical, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
More specific examples (a non-exhaustive list) of the storage device would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random-access memory (“RAM”), a read-only memory (“ROM”), an erasable programmable read-only memory (“EPROM” or Flash memory), a portable compact disc read-only memory (“CD-ROM”), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. Code for carrying out operations for embodiments may be any number of lines and may be written in any combination of one or more programming languages including an object-oriented programming language such as Python, Ruby, Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language, or the like, and/or machine languages such as assembly languages. The code may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user’s computer through any type of network, including a local area network (“LAN”), wireless LAN (“WLAN”), or a wide area network (“WAN”), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider (“ISP”)).
Furthermore, the described features, structures, or characteristics of the embodiments may be combined in any suitable manner. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments. One skilled in the relevant art will recognize, however, that embodiments may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth.
In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of an embodiment. Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in one embodiment”, “in an embodiment”, and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment, but mean “one or more but not all embodiments” unless expressly specified otherwise. The terms “including”, “comprising”, “having”, and variations thereof mean “including but not limited to”, unless expressly specified otherwise. An enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms “a”, “an”, and “the” also refer to “one or more” unless expressly specified otherwise.
Aspects of the embodiments are described below with reference to schematic flowchart diagrams and/or schematic block diagrams of methods, apparatuses, systems, and computer program products according to embodiments. It will be understood that each block of the schematic flowchart diagrams and/or schematic block diagrams, and combinations of blocks in the schematic flowchart diagrams and/or schematic block diagrams, can be implemented by code. This code may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which are executed via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams.
The code may also be stored in a storage device that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the storage device produce an article of manufacture including instructions which implement the function/act specified in the flowchart diagrams and/or block diagrams.
The code may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer implemented process such that the code which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams.
The flowchart diagrams and/or block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, systems, methods, and program products according to various embodiments. In this regard, each block in the flowchart diagrams and/or block diagrams may represent a module, segment, or portion of code, which includes one or more executable instructions of the code for implementing the specified logical function(s).
It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more blocks, or portions thereof, of the illustrated figures.
Although various arrow types and line types may be employed in the flowchart and/or block diagrams, they are understood not to limit the scope of the corresponding embodiments. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the depicted embodiment. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted embodiment. It will also be noted that each block of the block diagrams and/or flowchart diagrams, and combinations of blocks in the block diagrams and/or flowchart diagrams, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and code.
The description of elements in each figure may refer to elements of proceeding figures. Like numbers refer to like elements in all figures, including alternate embodiments of like elements.
The following explanation provides a detailed description of the mechanism for pre-configuring an AI/ML-based model before a handover occurrence in a wireless mobile communication system including base stations (e.g., gNB) and mobile stations (e.g., UE). An AI/ML lifecycle can be split into several stages, such as data collection/pre-processing, model training, model testing/validation, model deployment/update, model monitoring, model switching/selection, etc., where each stage is equally important to achieve target performance with any specific model(s). In applying an AI/ML model to any use case or application, one of the challenging issues is to manage the lifecycle of the AI/ML model. This is mainly because data/model drift occurs during model deployment/inference, which results in performance degradation of the AI/ML model. Fundamentally, statistical changes of the dataset occur after the model is deployed, and the model inference capability is also impacted when unseen data is used as input.
In a similar aspect, the statistical properties of the dataset and the relationship between input and output for the trained model can change when drift occurs. Model adaptation is then required to support operations such as model switching, re-training, fallback, etc. When an AI/ML-model-enabled wireless communication network is deployed, it is then important to consider how to handle adaptation of the AI/ML model under operations such as model training, inference, monitoring, updating, etc.
Based on specific network-UE ML collaboration in deployment scenarios (e.g., UE mobility case), ML applicable conditions for LCM (lifecycle management) operations can be significantly changed with different use cases and environmental properties. AI/ML based techniques are currently applied to many different applications and 3GPP also started to work on its technical investigation to apply to multiple use cases based on the observed potential gains.
In a similar aspect, the statistical properties of the dataset and the relationship between input and output for the trained model can change when drift occurs. In this context, model performance such as inferencing and/or training is dependent on different model execution environments with varying configuration parameters.
To handle this issue, it is highly important to track model performance during a collaboration between UEs and gNBs and to re-configure a model corresponding to different environments. When an AI/ML-model-enabled wireless communication network is deployed, it is then important to consider how to handle an AI/ML model in activation, with re-configuration for wireless devices, under operations such as model training, inference, updating, etc. When a UE undergoes ML condition changes, where ML conditions denote any elements required to support the configured model for specific LCM operations, the configured ML model can be impacted, resulting in a limited capability of LCM operations and/or failure of model execution. Therefore, if any unexpected ML condition changes occur, the configured LCM operation with ML model activation between the NW and the target UE is not possible or may even fail.
A first aspect of this invention discloses a method of a network-assisted indirect ML LCM operation in a wireless network, the wireless network at least comprising a target UE in a coverage location of a base station and at least one relay UE, the method comprising the steps:
• Initiate direct execution of a two-sided AI/ML model between the base station and the target UE, during which a LCM operation is executed on the target UE and model management is executed on the base station,
• Configure criteria for successful execution of the LCM operation to achieve a target performance,
• Measure a target UE ML condition,
• Compare the target UE ML condition with the criteria,
• Generate a decision indication message which indicates whether the target UE ML condition fulfills the criteria and achieves the target performance,
• Estimate a relay UE ML capability for each of the at least one relay UE,
• Compare each relay UE ML capability with the criteria,
  • Select, if the decision indication message indicates that the target UE ML condition does not fulfill the criteria, at least a first relay UE from the at least one relay UE, the comparison of the relay UE ML capability of the at least first relay UE with the criteria indicating that the at least first relay UE can execute the LCM operation and achieve the target performance,
  • Send a collaboration indication message to the target UE, indicating that the LCM operation will be executed on the at least first relay UE,
  • Stop executing the LCM operation on the target UE,
• Send a ML configuration to the at least first relay UE via a system information or a dedicated RRC message, configuring the LCM operation, and
• Execute the LCM operation by the at least first relay UE, thereby activating the indirect LCM operation.
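The network-side decision logic of the steps above can be sketched as follows. This is a minimal illustration, not part of any 3GPP specification; all type names, the numeric capability metric, and the fallback policy of jointly selecting two relays are assumptions made purely for the example.

```python
from dataclasses import dataclass

@dataclass
class MLCriteria:
    min_condition: float  # threshold for successful execution of the LCM operation

@dataclass
class RelayUE:
    ue_id: str
    ml_capability: float  # estimated relay UE ML capability

def select_lcm_mode(target_condition: float,
                    criteria: MLCriteria,
                    relays: list[RelayUE]) -> tuple[str, list[RelayUE]]:
    """Return ('direct', []) or ('indirect', selected_relays)."""
    if target_condition >= criteria.min_condition:
        # The target UE ML condition fulfills the criteria: keep direct LCM.
        return "direct", []
    # Target UE cannot meet the criteria: pick relay(s) whose capability can.
    capable = [r for r in relays if r.ml_capability >= criteria.min_condition]
    if capable:
        return "indirect", capable[:1]  # a single capable relay is sufficient
    # No single relay suffices; jointly select several (simplified policy).
    return "indirect", sorted(relays, key=lambda r: -r.ml_capability)[:2]
```

A real implementation would additionally send the collaboration indication message to the target UE and the ML configuration to the selected relay(s), which are abstracted away here.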
Advantageously, the wireless network comprises multiple target UEs for which LCM operations are indirectly executed on the at least first relay UE.
Advantageously, the base station generates the decision indication message based on a target UE ML condition update message which is based on an on-device measurement of the target UE ML condition.
Advantageously, the target UE generates the decision indication message based on an on-device measurement of the target UE ML condition.
Advantageously, a periodicity message is sent to the target UE and/or the at least one relay UE, the periodicity message defining times to perform the target UE ML condition measurement and/or the relay UE ML capability estimation.
Advantageously, at least one additional UE device jointly performs the LCM operation with the target UE or the multiple target UEs.
Advantageously, multiple relay UEs are selected to jointly execute the LCM operation if the at least first relay UE cannot support the LCM operation alone, which is determined based on a sidelink ML condition update message from the at least one relay UE comprising the estimated relay UE ML capability.
Advantageously, the criteria correspond to threshold values which are configured from an assessment of target UE ML conditions to execute any given specific LCM operation.
Advantageously, the collaboration indication message indicates:
  • An indirect LCM operation if the measured target UE ML condition is lower than a corresponding threshold value;
  • A continuation or proceeding of a direct LCM operation if the measured target UE ML condition is higher than or equal to the corresponding threshold value.
Advantageously, the target UE measures the target UE ML condition in a non-periodic approach and sends a target UE ML condition update message to the base station, which triggers the step of generating the decision indication message.
Advantageously, the generation of the decision indication message is based on one of: network traffic congestion or network-level energy saving.
Advantageously, the target UE ML condition is based on elements related to support the LCM, the elements being at least one of:
• Device-specific HW-/SW ML capabilities;
• Non-device-specific environmental ML conditions and LCM conditions;
• Target UE compute processing power, memory size, battery power, Tx/Rx configuration/setting info;
• Availability of framework/library required to execute the LCM operation;
  • Geographical location, wireless connection link information, device mobility, neighboring network/device ML information, sidelink channel quality;
• Supported model list, open/proprietary model formats, model structure supportability, LCM operability.
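The elements listed above can be grouped into the two categories of Figure 9, i.e., device and non-device ML conditions. The following sketch is only an illustrative data structure; the field names and types are assumptions and do not correspond to any specified information element.

```python
from dataclasses import dataclass

@dataclass
class DeviceMLCondition:
    """HW-/SW-specific ML capabilities of the device."""
    compute_power: float        # e.g., normalized processing power
    memory_mb: int
    battery_pct: float
    framework_available: bool   # framework/library required for the LCM operation

@dataclass
class NonDeviceMLCondition:
    """Environmental and LCM/model ML conditions."""
    location: tuple[float, float]   # geographical location
    sidelink_quality_db: float      # sidelink channel quality
    supported_models: list[str]     # supported model list

@dataclass
class TargetUEMLCondition:
    device: DeviceMLCondition
    non_device: NonDeviceMLCondition
```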
Advantageously, the at least first relay UE is configured to provide ML condition updates to the base station based on a relay ML configuration information periodically or non-periodically.
A second aspect of this invention discloses an apparatus for sidelink positioning in a wireless communication system, the apparatus comprising: a wireless transceiver and a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement the above-described steps, and the apparatus is designed for use in a base station (gNB).
A third aspect of this invention discloses an apparatus for sidelink positioning in a wireless communication system, the apparatus comprising: a wireless transceiver and a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement the above-described steps, and the apparatus is designed for use in a user equipment (UE).
A fourth aspect of this invention discloses a base station (gNB) comprising an above-described apparatus.
A fifth aspect of this invention discloses a User Equipment comprising an above-described apparatus.
A sixth aspect of this invention discloses a wireless communication system, comprising at least an above-described base station (gNB) and at least an above-described user equipment (UE).
In this method, ML collaboration types are defined, such as direct LCM operation and indirect LCM operation. For direct LCM operation, any specific LCM is executed between the network side and the target UE(s) directly, without the involvement of additional UE devices such as relay and/or neighboring UEs. For indirect LCM operation, additional UE device(s) can join to perform part of the configured LCM operation together with the target UE(s), based on model management by the network side, where an additional UE device can also be indicated as a relay UE. The network side (e.g., gNB) generates the ML configuration information for the overall LCM operation to be executed between the network side and the target/relay UE(s).
Among the ML collaboration types, indirect LCM operation can be triggered for activation based on pre-configured criteria such as a threshold-based measurement. For example, the network side configures a specific threshold to be sent to a single UE or a group of UEs (e.g., through system information and/or a dedicated RRC message), where the threshold value is configured from an assessment of the target UE with the associated ML conditions (e.g., representing the capability of supporting the configured LCM/model activation) to execute any given specific LCM operation (e.g., training, inferencing, etc.). Alternatively, the decision for indirect LCM operation can be based on a network-side implementation aspect, such as network traffic congestion or network-level energy saving, where indirect LCM can bring improvement.
The network side can also configure periodicity values for on-device measurements of the ML condition change status, to be compared with the configured threshold: the target UE periodically measures its current ML condition value and compares it with the configured threshold. For example, if the measured current value is lower than the configured threshold, the UE informs the network (e.g., through L1/L2/L3 signaling) with an indication message to execute indirect LCM operation. Conversely, if the measured current value is higher than or equal to the configured threshold, the UE reports to the network side so that direct LCM operation can proceed. In a non-periodic approach, threshold-based measurements at the target UE can also be performed whenever an ML condition change is detected.
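The threshold comparison described above can be sketched as follows. The function name, the numeric scale of the ML condition, and the enum are illustrative assumptions for this sketch, not part of the disclosure.

```python
from enum import Enum

class LcmMode(Enum):
    DIRECT = "direct"
    INDIRECT = "indirect"

def evaluate_ml_condition(measured_value: float, threshold: float) -> LcmMode:
    """Compare an on-device ML condition measurement with the
    network-configured threshold: below the threshold, the UE requests
    indirect LCM operation; at or above it, direct LCM proceeds."""
    if measured_value < threshold:
        return LcmMode.INDIRECT
    return LcmMode.DIRECT

# Hypothetical example: a condition measured at 0.4 against a threshold of 0.6
# would lead the UE to indicate indirect LCM operation.
assert evaluate_ml_condition(0.4, 0.6) is LcmMode.INDIRECT
assert evaluate_ml_condition(0.7, 0.6) is LcmMode.DIRECT
```

In the periodic variant, this comparison would run at the configured periodicity; in the non-periodic variant, it would run whenever an ML condition change is detected.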
In addition, a sidelink ML condition update is introduced to indicate whether the ML conditions of a relay UE, in terms of LCM/model support capability, can satisfy the relay ML configuration sent by the network after the network determines indirect LCM operation. Based on the sidelink ML condition updates reported by the candidate relay UE(s), the network can decide whether indirect LCM can be formed with the candidate relay UE(s) to achieve the configured target performance. Depending on the reported sidelink ML condition updates, a final selection of relay UE(s) is made by the network. For example, two or more relay UEs can be selected if no single candidate relay UE can support the configured relay ML functionality.
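As an illustration of the relay selection described above, the following sketch greedily combines candidate relay UEs until the configured relay ML functionality is covered. The additive capability score, the identifiers, and the greedy strategy are hypothetical simplifications; an actual network would evaluate the full relay ML configuration against each reported sidelink ML condition update.

```python
def select_relay_ues(candidates: dict[str, float],
                     required_capability: float) -> list[str]:
    """Select relay UEs whose reported capability scores jointly meet
    the required level. A single UE is chosen if it suffices; otherwise
    two or more candidates are combined, strongest first."""
    ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    selected: list[str] = []
    total = 0.0
    for ue_id, capability in ranked:
        selected.append(ue_id)
        total += capability
        if total >= required_capability:
            return selected
    return []  # no combination of candidates can achieve the target

# Hypothetical scores: no single UE reaches 1.0, so two are combined.
print(select_relay_ues({"ue1": 0.5, "ue2": 0.8, "ue3": 0.3}, 1.0))
# → ['ue2', 'ue1']
```

An empty result would correspond to the case where indirect LCM cannot be formed with the available candidates.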
In addition, model information and/or on-device LCM operation related information can be exchanged among target UEs and/or relay UEs, so that the exchanged information about the configured and/or activated LCM/model operation can serve as reference information for each UE device. Furthermore, the decision to switch between direct LCM and indirect LCM can be based on the following threshold measurement signaling process. In a first option, the network side determines the difference between the configured threshold and the measurement indicated by the UE, using the received indication message about the on-device ML condition measurement, so that either direct or indirect LCM operation is enabled. In a second option, the UE autonomously determines the difference between the configured threshold and the measurement, so as to identify whether the on-device ML condition is above or below the configured threshold.
When indirect LCM operation is enabled, multiple target UEs and relay UEs can be combined to support the overall LCM operation configured by the network, with different numbers of target UEs and/or relay UEs, based on the sidelink channel quality and the on-device ML condition level of the target/relay UEs. Candidate relay UE devices can be configured to provide their ML condition updates to the network side, periodically or non-periodically, based on relay ML configuration information.
The ML condition indicates any element related to supporting the configured model for specific LCM operations, representing the capability of supporting the configured LCM/model. Specifically, an ML condition can be divided into two categories: device ML conditions and non-device ML conditions. Device ML conditions contain HW-/SW-specific ML capabilities, and non-device ML conditions contain environmental ML conditions and LCM/model ML conditions. A device HW-specific ML capability indicates HW elements to support ML operation, such as ML compute processing power, memory size, battery power, Tx/Rx configuration/setting information, etc. A device SW-specific ML capability indicates SW elements to support ML operation, such as ML-related frameworks/libraries, etc. Environmental ML conditions indicate the device geographical location, wireless connection link information, device mobility, neighboring network/device ML information, etc. The LCM/model ML condition indicates the ML-related configuration capability (e.g., supported model list, open-format/proprietary model formats, model structure supportability, LCM operability, etc.) to support the ML operation (not belonging to the device HW/SW capability or environmental ML condition). Based on different ML condition configurations, any specific combination of the above ML condition categories can be used for use-case-specific ML operation in network-device or device-device communication.
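The two ML condition categories above can be represented as a simple data structure. The field names and types here are illustrative assumptions chosen to mirror the examples in the text (compute power, memory, battery, frameworks, location, mobility, supported models, model formats), not a normative encoding.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class DeviceMlCondition:
    # HW-specific ML capability elements
    compute_tops: float          # ML compute processing power
    memory_mb: int               # memory size
    battery_pct: float           # battery power level
    # SW-specific ML capability elements
    frameworks: list[str] = field(default_factory=list)

@dataclass
class NonDeviceMlCondition:
    # Environmental ML condition elements
    location: tuple[float, float] | None = None
    mobility_kmh: float = 0.0
    # LCM/model ML condition elements
    supported_models: list[str] = field(default_factory=list)
    model_formats: list[str] = field(default_factory=list)

@dataclass
class MlCondition:
    device: DeviceMlCondition
    non_device: NonDeviceMlCondition

# Hypothetical example of a reported ML condition for one UE.
example = MlCondition(
    device=DeviceMlCondition(compute_tops=2.0, memory_mb=512,
                             battery_pct=80.0, frameworks=["tflite"]),
    non_device=NonDeviceMlCondition(supported_models=["beam-pred-v1"],
                                    model_formats=["open"]),
)
```

Different use cases could then populate only the combination of categories they need, as noted above.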
Figure 1 shows an exemplary block diagram of an indirect LCM operation using relay UE and target UEs. In this example, one or more target UE devices are connected with a relay UE device. The configured LCM operation is executed on each device so that the output information of a sidelink-based LCM operation across relay and target UEs is sent to the network side.
Figure 2 shows an exemplary block diagram of an information exchange about ML LCM/model operation between UE devices. In this example, the exchanged information can be used to perform on-device LCM operations on each side as reference data or parameters, where the scope of exchangeable information can be pre-configured by the network side and/or the device side.
Figure 3 shows an exemplary flow chart of a network-sided behavior for LCM operation modes. In this example, the default LCM operation mode is direct LCM, communicating between the network side and the target UE without an additional relay UE. The network monitors UE reports about enabling indirect LCM, so that indirect LCM can be activated based on an indication message (e.g., through L1/L2/L3 signaling).
Figure 4 shows an exemplary flow chart of a target UE behavior for LCM operation modes. In this example, the ML condition is measured and compared with the configured threshold, so that an indication message can be sent to the network for indirect LCM activation. This measurement can be performed periodically or non-periodically. The indication message can also be sent to candidate relay UEs if so configured.
Figure 5 shows an exemplary flow chart of a relay UE behavior for LCM operation modes. In this example, the request to enable the relay UE to support indirect LCM operation is monitored, and a relay ML configuration is then received to join the indirect LCM operation with the target UE(s).
Figure 6 shows an exemplary signaling flow of a UE-sided decision of switching between direct LCM and indirect LCM operation mode. In this example, the UE autonomously determines the difference between the configured thresholds and the measurements so as to identify whether the on-device ML condition is above or below the configured threshold. The indication message of the measurement decision is sent to the network side so that the network side can finally confirm the ML collaboration type between direct LCM and indirect LCM operation mode.
Figure 7 shows an exemplary signaling flow of a network-sided decision of switching between direct LCM and indirect LCM operation mode. In this example, the network side determines the difference between the configured thresholds and the measurements indicated by the UE, using the received indication message about the on-device ML condition measurement from the UE, so that either direct or indirect LCM operation is enabled.
Figure 8 shows an exemplary signaling flow of a sidelink ML condition based relay UE selection. In this example, a sidelink ML condition update is sent to the network side based on the relay ML configuration information so that the relay UE can be selected to support an indirect LCM operation mode.
Figure 9 shows an exemplary block diagram of the categories of ML conditions. In this example, ML conditions can be divided into two categories: device ML conditions and non-device ML conditions. Device ML conditions contain HW-/SW-specific ML capabilities, and non-device ML conditions contain environmental ML conditions and LCM/model ML conditions.
Abbreviations
BWP Bandwidth part
CBG Code block group
CLI Cross Link Interference
CP Cyclic prefix
CQI Channel quality indicator
CPU CSI processing unit
CRB Common resource block
CRC Cyclic redundancy check
CRI CSI-RS Resource Indicator
CSI Channel state information
CSI-RS Channel state information reference signal
CSI-RSRP CSI reference signal received power
CSI-RSRQ CSI reference signal received quality
CSI-SINR CSI signal-to-noise and interference ratio
CW Codeword
DCI Downlink control information
DL Downlink
DM-RS Demodulation reference signals
DRX Discontinuous Reception
EPRE Energy per resource element
IAB-MT Integrated Access and Backhaul - Mobile Terminal
L1-RSRP Layer 1 reference signal received power
LI Layer Indicator
MCS Modulation and coding scheme
PDCCH Physical downlink control channel
PDSCH Physical downlink shared channel
PSS Primary Synchronisation signal
PUCCH Physical uplink control channel
QCL Quasi co-location
PMI Precoding Matrix Indicator
PRB Physical resource block
PRG Precoding resource block group
PRS Positioning reference signal
PT-RS Phase-tracking reference signal
RB Resource block
RBG Resource block group
RI Rank Indicator
RIV Resource indicator value
RP Resource Pool
RS Reference signal
SCI Sidelink control information
SL CR Sidelink Channel Occupancy Ratio
SL CBR Sidelink Channel Busy Ratio
SLIV Start and length indicator value
SR Scheduling Request
SRS Sounding reference signal
SS Synchronisation signal
SSS Secondary Synchronisation signal
SS-RSRP SS reference signal received power
SS-RSRQ SS reference signal received quality
SS-SINR SS signal-to-noise and interference ratio
TB Transport Block
TCI Transmission Configuration Indicator
TDM Time division multiplexing
UE User equipment
UL Uplink

Claims

1. A method of a network-assisted indirect ML LCM operation in a wireless network, the wireless network at least comprising a target UE in a coverage location of a base station and at least one relay UE, the method comprising the steps:
• Initiate direct execution of a two-sided AI/ML model between the base station and the target UE, during which a LCM operation is executed on the target UE and model management is executed on the base station,
• Configure criteria for successful execution of the LCM operation to achieve a target performance,
• Measure a target UE ML condition,
• Compare the target UE ML condition with the criteria,
• Generate a decision indication message which indicates whether the target UE ML condition fulfills the criteria and achieves the target performance,
• Estimate a relay UE ML capability for each of the at least one relay UE,
• Compare each relay UE ML capability with the criteria,
• Select, if the decision indication message indicates that the target UE ML condition does not fulfill the criteria, at least a first relay UE from the at least one relay UE, the comparison of the relay UE ML capability of the at least first relay UE with the criteria indicating that the at least first relay UE can execute the LCM operation and achieve the target performance,
• Send a collaboration indication message to the target UE, indicating that the LCM operation will be executed on the at least first relay UE,
• Stop executing the LCM operation on the target UE,
• Send a ML configuration to the at least first relay UE via a system information or a dedicated RRC message, configuring the LCM operation, and
• Execute the LCM operation by the at least first relay UE, thereby activating the indirect LCM operation.
2. Method according to claim 1, characterized in that the wireless network comprises multiple target UEs for which LCM operations are indirectly executed on the at least first relay UE.
3. Method according to claim 1 or 2, characterized in that the base station generates the decision indication message based on a target UE ML condition update message which is based on an on-device measurement of the target UE ML condition.
4. Method according to claim 1 or 2, characterized in that the target UE generates the decision indication message based on an on-device measurement of the target UE ML condition.
5. Method according to claim 3 or 4, characterized in that a periodicity message is sent to the target UE and/or the at least one relay UE, the periodicity message defining times to perform the target UE ML condition measurement and/or the relay UE ML capability estimation.
6. Method according to one of the previous claims, characterized in that at least one additional UE device jointly performs the LCM operation with the target UE or the multiple target UEs.
7. Method according to one of the previous claims, characterized in that multiple relay UEs are selected to jointly execute the LCM operation if the at least first relay UE cannot support the LCM operation alone, which is determined based on a sidelink ML condition update message from the at least one relay UE comprising the estimated relay UE ML capability.
8. Method according to one of the previous claims, characterized in that the criteria correspond to threshold values which are configured from an assessment of target UE ML conditions to execute any given specific LCM operation.
9. Method according to claim 8, characterized in that the collaboration indication message indicates:
• An indirect LCM operation if the measured target UE ML condition is lower than a corresponding threshold value;
• A continuation or proceeding of a direct LCM operation if the measured target UE ML condition is higher than or equal to the corresponding threshold value;
10. Method according to one of the previous claims, characterized in that the target UE measures the target UE ML condition in a non-periodic approach and sends a target UE ML condition update message to the base station, which triggers the step of generating the decision indication message.
11. Method according to one of the previous claims, characterized in that the generation of the decision indication message is based on one of: network traffic congestion or network level energy saving.
12. Method according to one of the previous claims, characterized in that the target UE ML condition is based on elements related to support the LCM, the elements being at least one of:
• Device-specific HW-/SW ML capabilities;
• Non-device-specific environmental ML conditions and LCM conditions;
• Target UE compute processing power, memory size, battery power, Tx/Rx configuration/setting info;
• Availability of framework/library required to execute the LCM operation;
• Geographical location, wireless connection link information, device mobility, neighboring network/device ML information, sidelink channel quality;
• Supported model list, open/proprietary model formats, model structure supportability, LCM operability.
13. Method according to one of the previous claims, characterized in that the at least first relay UE is configured to provide ML condition updates to the base station based on a relay ML configuration information periodically or non-periodically.
14. Apparatus for sidelink positioning in a wireless communication system, the apparatus comprising: a wireless transceiver and a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement the steps of claims 1 to 13, wherein the apparatus is designed for use in a base station (gNB).
15. Apparatus for sidelink positioning in a wireless communication system, the apparatus comprising: a wireless transceiver and a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement the steps of claims 1 to 13, wherein the apparatus is designed for use in a user equipment (UE).
16. Base station (gNB) comprising an apparatus according to claim 14.
17. User Equipment comprising an apparatus according to claim 15.
18. Wireless communication system, comprising at least a base station (gNB) according to claim 16 and at least a user equipment (UE) according to claim 17.
PCT/EP2024/078908 2023-10-27 2024-10-14 Method of a network-assisted indirect ml lcm operation Pending WO2025087720A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102023210632.9 2023-10-27
DE102023210632 2023-10-27

Publications (1)

Publication Number Publication Date
WO2025087720A1 true WO2025087720A1 (en) 2025-05-01

Family

ID=93119785

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2024/078908 Pending WO2025087720A1 (en) 2023-10-27 2024-10-14 Method of a network-assisted indirect ml lcm operation

Country Status (1)

Country Link
WO (1) WO2025087720A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210203565A1 (en) 2019-12-31 2021-07-01 Hughes Network Systems, Llc Managing internet of things network traffic using federated machine learning
WO2022015008A1 (en) 2020-07-13 2022-01-20 Samsung Electronics Co., Ltd. Method and system for determining target cell for handover of ue
US20220261697A1 (en) 2021-02-15 2022-08-18 Devron Corporation Federated learning platform and methods for using same
WO2022205023A1 (en) 2021-03-31 2022-10-06 Huawei Technologies Co., Ltd. Systems, methods, and apparatus on wireless network architecture and air interface
US20230037893A1 (en) 2021-08-02 2023-02-09 Samsung Electronics Co., Ltd. Method and network apparatus for generating real-time radio coverage map in wireless network

Non-Patent Citations (4)

Title
"3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR air interface (Release 18)", no. V1.0.0, 4 September 2023 (2023-09-04), pages 1 - 148, XP052512007, Retrieved from the Internet <URL:https://ftp.3gpp.org/Specs/archive/38_series/38.843/38843-100.zip RP-231766 TR 38.843 v1.0.0.docx> [retrieved on 20230904] *
"Study on enhancement for Data Collection for NR and EN-DC", 3GPP TR 37.817
HUAWEI ET AL: "Discussion on general aspects of AI/ML framework", vol. RAN WG1, no. Toulouse, France; 20220822 - 20220826, 12 August 2022 (2022-08-12), XP052273819, Retrieved from the Internet <URL:https://ftp.3gpp.org/tsg_ran/WG1_RL1/TSGR1_110/Docs/R1-2205889.zip R1-2205889.docx> [retrieved on 20220812] *
RUDRAKSH SHRIVASTAVA ET AL: "Discussion on AI/ML Capability Reporting and Model LCM", vol. RAN WG2, no. Toulouse, FR; 20230821 - 20230825, 10 August 2023 (2023-08-10), XP052443195, Retrieved from the Internet <URL:https://www.3gpp.org/ftp/TSG_RAN/WG2_RL2/TSGR2_123/Docs/R2-2307484.zip R2-2307484.docx> [retrieved on 20230810] *

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24790470

Country of ref document: EP

Kind code of ref document: A1