
WO2024170535A1 - Method of model identification signaling - Google Patents


Info

Publication number
WO2024170535A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
models
mapping
network
gnb
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/EP2024/053564
Other languages
French (fr)
Inventor
Hojin Kim
Rikin SHAH
Reuben GEORGE STEPHEN
David GONZALEZ GONZALEZ
Andreas Andrae
Shravan Kumar KALYANKAR
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aumovio Germany GmbH
Original Assignee
Continental Automotive Technologies GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Continental Automotive Technologies GmbH filed Critical Continental Automotive Technologies GmbH
Priority to CN202480012823.8A priority Critical patent/CN120642395A/en
Publication of WO2024170535A1 publication Critical patent/WO2024170535A1/en
Anticipated expiration legal-status Critical
Pending legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/02Arrangements for optimising operational condition

Definitions

  • The present disclosure relates to AI/ML-based model identification, where techniques for pre-configuring and signaling specific information about model selection, using an association between models and index values, are presented.
  • AI/ML artificial intelligence/machine learning
  • The AI/ML study item was approved as RP-213599 at 3GPP TSG (Technical Specification Group) RAN (Radio Access Network) meeting #94e.
  • The official title of the AI/ML study item is “Study on AI/ML for NR Air Interface”, and currently RAN WG1 (Working Group 1) and WG2 are actively working on the specification.
  • The goal of this study item is to identify a common AI/ML framework and areas where gains can be obtained using AI/ML-based techniques, with associated use cases.
  • The main objective of this study item is to study an AI/ML framework for the air interface with target use cases, considering performance, complexity, and potential specification impact.
  • AI/ML model terminology and description to identify common and specific characteristics for the framework will be one of the key items in the work scope.
  • Within the AI/ML framework, various aspects are under consideration for investigation; one of the key items is lifecycle management of the AI/ML model, which includes multiple mandatory stages such as model training, model deployment, model inference, model monitoring, model updating, etc.
  • UE (user equipment) mobility was also considered as one of the AI/ML use cases, and one scenario for model training/inference is that both functions are located within the RAN node.
  • AI Artificial Intelligence
  • ML Machine Learning
  • Model identification to support RAN-based AI/ML models is considered very significant for both the network and the UE to meet any desired model operations (e.g., model training, inference, selection, switching, update, monitoring, etc.).
  • BS/gNB base station
  • US 2022400373 describes the method of determining neural network functions and configuring models for performing wireless communications management procedures.
  • US 2022108214 explains an ML model management method for a network data analytics function device, and US 2022337487 shows that a network entity determines at least one model parameter of a model for digitally analyzing input data depending on the at least one model parameter of the model, the network entity being configured to receive a model request.
  • WO 2023277780 describes a method of downloading a compiled machine-code version of an ML model to a wireless communication device.
  • WO 2022258149 provides a way of training a model in a server device based on training data in a user device, and
  • WO 2022228666 describes influencing the training of an ML model based on a training policy provided by an actor node.
  • WO 2022161624 describes the method of receiving a request for retrieving or executing an ML model or a combination of ML models. The solution to the aforementioned problem is given by the embodiments of this disclosure.
  • A method of model identification signaling is presented, where a pre-configured association between models and index values is built to generate a number of different mapping tables using model IDs based on a finite set of ML models for a specific use case/application, wherein at both sides, network and UE, a set of the pre-configured ML models is available for each specific use case or functionality with a set of different feature sets (e.g., a network model set for multiple models located in the network and a UE model set for multiple models located in the UE).
  • The UE receives the indication about the association between models and index values from the network (e.g., gNB), and the information about the indicated association is sent through either a system information message or a dedicated RRC message.
  • The UE further receives the index (e.g., an indication of a model ID) from the gNB through a dedicated signaling message (L1/L2) and then simply checks the index indicated in the dedicated signaling message so that the corresponding model(s) can be applied to operation accordingly.
  • The determined index information with model ID(s) can be provided to a group of UEs that use the same model and/or to each individual UE through dedicated signaling.
  • Model ID mapping pairs can be one-to-one and/or one-to-many mappings, where the association of models and index values is represented by a model mapping table with a model ID list. Index information can be indicated by specific bits based on the pre-configured size of the table.
  • The size of the model mapping pairs with model IDs can vary flexibly, and index values can be pre-configured together with a different number of bits over multiple model mapping tables.
  • The selected model ID(s) on one side can be paired with any combination of multiple model IDs that are available on the other side for a model mapping pair.
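The mapping concept above (corresponding to the exemplary tables of Figures 1 and 2) can be sketched as a lookup keyed by a 2-bit index, where each entry pairs a network model ID with one or several UE model IDs. This is an illustrative sketch only: the IDs and table contents are hypothetical, not taken from the disclosure.

```python
# Hypothetical model mapping tables keyed by a 2-bit index (values 0..3).
# One-to-one: each index pairs one network model ID with one UE model ID.
ONE_TO_ONE = {
    0b00: ("NW-M1", "UE-M1"),
    0b01: ("NW-M2", "UE-M2"),
    0b10: ("NW-M3", "UE-M3"),
    0b11: ("NW-M4", "UE-M4"),
}

# One-to-many: a network model ID may be paired with several UE model IDs.
ONE_TO_MANY = {
    0b00: ("NW-M1", ["UE-M1"]),
    0b01: ("NW-M1", ["UE-M2", "UE-M3"]),
    0b10: ("NW-M2", ["UE-M1", "UE-M4"]),
    0b11: ("NW-M2", ["UE-M2", "UE-M3", "UE-M4"]),
}

def resolve(table, index, n_bits=2):
    """Return the model mapping pair for an index signaled with n_bits."""
    if not 0 <= index < (1 << n_bits):
        raise ValueError(f"index {index} does not fit in {n_bits} bits")
    return table[index]
```

The pre-configured table size determines how many bits the signaled index needs; a larger table would simply use a wider `n_bits`.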
  • Network-UE model mapping pairs indicate the matching relationship between network model set and UE model set and are differently configured for each model function ID as each model function ID can have multiple pairs of network-UE model IDs.
  • Multiple mapping tables can also be pre-configured to support various model functionalities and/or applications as well as different vendors and designed to be optimized for model mapping pairs that consist of the combinations of network model IDs and UE model IDs.
  • Model operation can then be set up in the RAN between network and UE based on the matched model IDs from the mapping table, which is generated and updated with configurable content using historical data from the network or a server with a model repository. A different number of mapping tables can be generated for varying model execution environments.
  • The method is characterized in that multiple functional blocks are engaged as inputs to generate the model mapping table, such as the network/UE model set, model attribute data, and model functionality, and the model mapping table can be updated by communicating with the model matching repository.
  • the exemplary steps are as follows.
  • The service type/application and environmental conditions for model operation are identified to determine the model functionality and the associated feature set with attribute data.
  • One or more candidate models associated with each feature subset are listed. The model performances of the candidate models are evaluated based on the given metrics so as to generate an association of the selected optimal models with model IDs.
  • the matched model pairs can then be generated as output.
  • The selected index information is sent to the UE so that the UE can also identify the matched model(s) to be paired for model operation.
  • Multiple indexes can also be chosen for transmission so that multiple combinations of model IDs can be used if necessary.
  • In the model mapping repository, a number of different model IDs are updated with varying versions. Both the network model ID set and the UE model ID set are updated with the latest versions in a periodic, semi-static, or non-periodic way. When there are multiple UEs with the same model ID set, those UEs can be grouped together for model operation in communication with the same gNB.
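The mapping-generation steps above (identify functionality, list candidates, evaluate, pair the best models) can be sketched as follows. All candidate IDs, scores, and the scalar metric are hypothetical placeholders for whatever evaluation criteria the network applies.

```python
# Sketch of the mapping-generation steps above (all data hypothetical).
# `candidates` maps a model functionality to (model_id, score) lists for
# the network and UE sides; the best-scoring model per side is paired.
def generate_mapping_table(candidates):
    table = {}
    for func_id, sides in candidates.items():
        best_nw = max(sides["network"], key=lambda m: m[1])
        best_ue = max(sides["ue"], key=lambda m: m[1])
        table[func_id] = (best_nw[0], best_ue[0])  # matched model pair
    return table

candidates = {
    "beam_mgmt": {
        "network": [("NW-M1", 0.81), ("NW-M2", 0.93)],
        "ue": [("UE-M1", 0.88), ("UE-M2", 0.75)],
    },
}
```

In a fuller implementation the output table would then be versioned against the model mapping repository so that updated model ID sets replace stale pairs.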
  • The methods are characterized in that, based on the index of the specific model ID(s) transmitted by the gNB, the UE receives the index information so as to configure the necessary model operation setup with the indicated model ID(s) for activation; sidelink UEs can use their own model ID set and collaborate with other UEs or the associated gNB for collaborative model operation using the model mapping pairs.
  • The methods are characterized in that, based on the pre-configured association information between models and index values using mapping tables, firstly the requested model ID(s) is searched. In some cases, where there is no available model ID to select, the mapping table needs to be updated to include the requested model ID(s) by communicating with the model matching repository; in other cases, the closest matching model(s) can be used for selection, and the range of model ID pairs is quantized based on the pre-configured criteria.
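The search-with-fallback behavior above can be sketched as follows. The "closest matching" criterion is deliberately simplified to a distance over a hypothetical scalar feature value per model ID; the disclosure leaves the actual pre-configured criteria open.

```python
# Sketch of the model ID search step above: look up the requested ID in
# the mapping table; if absent, fall back to the closest matching entry.
# The numeric "feature distance" between model IDs is a hypothetical
# stand-in for the pre-configured matching criteria.
def select_model(mapping, requested_id, features):
    if requested_id in mapping:
        return mapping[requested_id], False   # exact match, no fallback
    closest = min(
        mapping,
        key=lambda mid: abs(features[mid] - features[requested_id]),
    )
    return mapping[closest], True             # closest-match fallback used

mapping = {"UE-M1": "NW-M1", "UE-M3": "NW-M3"}   # UE ID -> paired NW ID
features = {"UE-M1": 1.0, "UE-M2": 1.2, "UE-M3": 3.0}
```

When the fallback flag is set, a real system would additionally trigger the repository update described above so that the requested ID becomes available later.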
  • The method of identifying available model ID list information is based on the pre-configured model mapping from UE signaling about ML capability information, wherein, based on the pre-configured mapping table, the gNB determines the selection of model ID pair(s), and the UE is then indicated the determined model ID(s) to be paired via gNB signaling.
  • A UE model status report is sent to confirm the selected model ID(s) for use. In case the UE selects a different model ID(s) than requested by the gNB, the selected UE model ID(s) should be reported to the gNB.
  • The methods are characterized in that, based on the pre-configured mapping information, the UE determines the selection of model ID pair(s) and then sends ML capability information and model ID information to the gNB, so that the gNB configures the indicated model ID(s) and sends a re-configuration information update. In some cases, the gNB can also select other model ID(s) to be used in the UE, and the recommended model ID information can be sent to the UE when the reconfiguration information update is signaled.
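The two selection flows above (gNB-initiated and UE-initiated, corresponding to the signaling flows of Figures 7 and 8) can be sketched as simple exchanges over a shared table. Message contents and field names here are hypothetical illustrations, not the actual signaling format.

```python
# Sketch of the two selection flows above (message fields hypothetical).
TABLE = {0: ("NW-M1", "UE-M1"), 1: ("NW-M2", "UE-M2")}

def gnb_initiated(table, ue_capability):
    """gNB picks a pair whose UE model the UE reports as supported,
    then signals the corresponding index to the UE."""
    for idx, (nw_id, ue_id) in table.items():
        if ue_id in ue_capability:
            return {"from": "gNB", "index": idx, "ue_model": ue_id}
    return None  # no compatible pair: table update would be needed

def ue_initiated(table, preferred_ue_model):
    """UE proposes a model ID; gNB looks up the pair and answers with a
    reconfiguration update confirming (or rejecting) the selection."""
    for idx, (nw_id, ue_id) in table.items():
        if ue_id == preferred_ue_model:
            return {"from": "UE", "index": idx, "reconfigured": True}
    return {"from": "UE", "index": None, "reconfigured": False}
```

In the UE-initiated case the gNB may still override the proposal with a recommended model ID, as noted above; that branch is omitted here for brevity.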
  • The model set can comprise multiple layers, such as a superset model, subset models, and individual models, where one or more individual models represent each subset model, and a set of subset models is contained in the superset model.
  • Model ID values are assigned to each level of models: superset, subset, and individual models.
  • The superset-based ML model structure is applied to both the network side and the UE side.
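The layered model set above (corresponding to the structure of Figure 9) can be sketched as a nested record with IDs at all three levels. The ID scheme shown is a hypothetical illustration.

```python
# Sketch of the layered model set above (IDs hypothetical): a superset
# contains subset models, each represented by one or more individual models.
SUPERSET = {
    "id": "S-1",                            # superset model ID
    "subsets": {
        "S-1.A": ["S-1.A.1", "S-1.A.2"],    # subset ID -> individual IDs
        "S-1.B": ["S-1.B.1"],
    },
}

def all_model_ids(superset):
    """Flatten the three ID levels: superset, subsets, individuals."""
    ids = [superset["id"]]
    for subset_id, individuals in superset["subsets"].items():
        ids.append(subset_id)
        ids.extend(individuals)
    return ids
```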
  • Model attribute data contains information about model-related {configuration, metadata, parameter set}.
  • Model functionality contains information about list of model functions based on the specified applications.
  • two or more model IDs can be selected for collaborative model operation (non-standalone type) and a single model ID can be used for independent model operation (standalone type).
  • The method of categorizing two types of model IDs is characterized in that, for Type-1, it is a common model ID with standard-compatible models, where model IDs for network and UE are registered in the same repository of the model ID list, and/or the model IDs have the minimum required feature set and baseline capability based on the given model attributes and functionality.
  • For Type-2, it is a dedicated model ID with standard/proprietary models, where model IDs for network and UE are registered in separate repositories of independent superset model ID lists, and/or the model IDs have an enhanced feature set and specific capability with higher complexity over the common model, based on the given model attributes and functionality.
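The two ID categories above reduce to a registry-membership check: Type-1 IDs live in a shared network/UE repository, while Type-2 IDs live in separate per-side repositories. The repository contents below are hypothetical.

```python
# Sketch of the two model-ID categories above (repository data hypothetical).
COMMON_REPO = {"C-1", "C-2"}             # Type-1: shared network/UE registry
NW_REPO, UE_REPO = {"N-1"}, {"U-1"}      # Type-2: separate registries

def model_type(model_id):
    """Classify a model ID by which repository it is registered in."""
    if model_id in COMMON_REPO:
        return "Type-1"                  # common, standard-compatible
    if model_id in NW_REPO or model_id in UE_REPO:
        return "Type-2"                  # dedicated, standard/proprietary
    return "unknown"
```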
  • The present disclosure relates to an apparatus for advanced ML signaling for the RAN of a wireless communication system, the apparatus comprising a wireless transceiver and a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement steps according to the aforementioned aspects.
  • The present disclosure relates to a User Equipment comprising an aforementioned apparatus.
  • The present disclosure relates to a gNB comprising an aforementioned apparatus.
  • The present disclosure relates to a wireless communication system for ML condition pairing, wherein the wireless communication system comprises the aforementioned user equipment and gNB, whereby the user equipment and the gNB each comprise a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement steps of the aforementioned methods.
  • Figure 1 is an exemplary table of model ID mapping pairs for one-to-one mapping with a 2-bit index.
  • Figure 2 is an exemplary table of model ID mapping pairs for one-to-many mapping with a 2-bit index.
  • Figure 3 is a block diagram of model mapping generation system.
  • Figure 4 is a flow chart of gNB behavior to send model ID.
  • Figure 5 is a flow chart of UE behavior to receive model ID.
  • Figure 6 is a flow chart of model ID search and selection procedure.
  • Figure 7 is a signaling flow of model ID selection by gNB.
  • Figure 8 is a signaling flow of model ID selection by UE.
  • Figure 9 is a block diagram of model set structure.
  • a more general term “network node” may be used and may correspond to any type of radio network node or any network node, which communicates with a UE (directly or via another node) and/or with another network node.
  • Examples of network nodes are NodeB, MeNB, ENB, a network node belonging to MCG or SCG, base station (BS), multi-standard radio (MSR) radio node such as MSR BS, eNodeB, gNodeB, network controller, radio network controller (RNC), base station controller (BSC), relay, donor node controlling relay, base transceiver station (BTS), access point (AP), transmission points, transmission nodes, RRU, RRH, nodes in distributed antenna system (DAS), core network node (e.g. MSC, MME), etc.
  • MSC Mobile Switching Center
  • MME Mobility Management Entity
  • O&M Operations & Maintenance
  • OSS Operations Support System
  • SON Self Optimized Network
  • positioning node, e.g. Evolved Serving Mobile Location Centre (E-SMLC)
  • E-SMLC Evolved Serving Mobile Location Centre
  • MDT Minimization of Drive Tests
  • test equipment physical node or software
  • the non-limiting term user equipment (UE) or wireless device may be used and may refer to any type of wireless device communicating with a network node and/or with another UE in a cellular or mobile communication system.
  • Examples of UE are target device, device-to-device (D2D) UE, machine-type UE or UE capable of machine-to-machine (M2M) communication, PDA, PAD, tablet, mobile terminals, smart phone, laptop embedded equipment (LEE), laptop mounted equipment (LME), USB dongles, UE category M1, UE category M2, ProSe UE, V2V UE, V2X UE, etc.
  • Terminologies such as base station/gNodeB and UE should be considered non-limiting and in particular do not imply a certain hierarchical relation between the two; in general, “gNodeB” could be considered as device 1 and “UE” as device 2, and these two devices communicate with each other over some radio channel. In the following, the transmitter or receiver could be either a gNodeB (gNB) or a UE.
  • gNB gNodeB
  • aspects of the embodiments may be embodied as a system, apparatus, method, or program product. Accordingly, embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects.
  • The disclosed embodiments may be implemented as a hardware circuit comprising custom very-large-scale integration (“VLSI”) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components.
  • VLSI very-large-scale integration
  • the disclosed embodiments may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like.
  • the disclosed embodiments may include one or more physical or logical blocks of executable code which may, for instance, be organized as an object, procedure, or function.
  • embodiments may take the form of a program product embodied in one or more computer readable storage devices storing machine readable code, computer readable code, and/or program code, referred hereafter as code.
  • The storage devices may be tangible, non-transitory, and/or non-transmission.
  • The storage devices may not embody signals. In a certain embodiment, the storage devices only employ signals for accessing code.
  • the computer readable medium may be a computer readable storage medium.
  • the computer readable storage medium may be a storage device storing the code.
  • the storage device may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, holographic, micromechanical, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • More specific examples (a non-exhaustive list) of the storage device would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random-access memory (“RAM”), a read-only memory (“ROM”), an erasable programmable read-only memory (“EPROM” or Flash memory), a portable compact disc read-only memory (“CD-ROM”), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Code for carrying out operations for embodiments may be any number of lines and may be written in any combination of one or more programming languages, including an object-oriented programming language such as Python, Ruby, Java, Smalltalk, C++, or the like, conventional procedural programming languages such as the “C” programming language, and/or machine languages such as assembly languages.
  • the code may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user’s computer through any type of network, including a local area network (“LAN”), wireless LAN (“WLAN”), or a wide area network (“WAN”), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider (“ISP”)).
  • LAN local area network
  • WLAN wireless LAN
  • WAN wide area network
  • ISP Internet Service Provider
  • the code may also be stored in a storage device that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the storage device produce an article of manufacture including instructions which implement the function/act specified in the flowchart diagrams and/or block diagrams.
  • The code may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer-implemented process, such that the code which executes on the computer or other programmable apparatus provides processes for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams.
  • each block in the flowchart diagrams and/or block diagrams may represent a module, segment, or portion of code, which includes one or more executable instructions of the code for implementing the specified logical function(s).
  • The disclosure is related to a wireless communication system, which may be for example a 5G NR wireless communication system. More specifically, it represents a RAN of the wireless communication system, which is used to exchange data with UEs via radio signals. For example, the RAN may send data to the UEs (downlink, DL), for instance data received from a core network (CN). The RAN may also receive data from the UEs (uplink, UL), which data may be forwarded to the CN.
  • DL downlink
  • CN core network
  • UL uplink
  • the RAN comprises one base station, BS.
  • the RAN may comprise more than one BS to increase the coverage of the wireless communication system.
  • Each of these BSs may be referred to as NB, eNodeB (or eNB), gNodeB (or gNB, in the case of a 5G NR wireless communication system), an access point or the like, depending on the wireless communication standard(s) implemented.
  • the UEs are located in a coverage of the BS.
  • the coverage of the BS corresponds for example to the area in which UEs can decode a PDCCH transmitted by the BS.
  • An example of a wireless device suitable for implementing any method, discussed in the present disclosure, performed at a UE corresponds to an apparatus that provides wireless connectivity with the RAN of the wireless communication system, and that can be used to exchange data with said RAN.
  • a wireless device may be included in a UE.
  • the UE may for instance be a cellular phone, a wireless modem, a wireless communication device, a handheld device, a laptop computer, or the like.
  • The UE may also be an Internet of Things (IoT) equipment, like a wireless camera, a smart sensor, a smart meter, smart glasses, a vehicle (manned or unmanned), a global positioning system device, etc., or any other equipment that may run applications that need to exchange data with remote recipients via the wireless device.
  • IoT Internet of Things
  • the wireless device comprises one or more processors and one or more memories.
  • the one or more processors may include for instance a central processing unit (CPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.
  • the one or more memories may include any type of computer readable volatile and non-volatile memories (magnetic hard disk, solid-state disk, optical disk, electronic memory, etc.).
  • The one or more memories may store a computer program product, in the form of a set of program-code instructions to be executed by the one or more processors to implement all or part of the steps of a method for exchanging data, performed at a UE’s side, according to any one of the embodiments disclosed herein.
  • the wireless device can comprise also a main radio, MR, unit.
  • the MR unit corresponds to a main wireless communication unit of the wireless device, used for exchanging data with BSs of the RAN using radio signals.
  • the MR unit may implement one or more wireless communication protocols, and may for instance be a 3G, 4G, 5G, NR, WiFi, WiMax, etc. transceiver or the like.
  • the MR unit corresponds to a 5G NR wireless communication unit.
  • a more general term “network node” may be used and may correspond to any type of radio network node or any network node, which communicates with a UE (directly or via another node) and/or with another network node.
  • network nodes are NodeB, MeNB, ENB, a network node belonging to MCG or SCG, base station (BS), multi-standard radio (MSR) radio node such as MSR BS, eNodeB, gNodeB, network controller, radio network controller (RNC), base station controller (BSC), relay, donor node controlling relay, base transceiver station (BTS), access point (AP), transmission points, transmission nodes, RRU, RRH, nodes in distributed antenna system (DAS), core network node (e.g.
  • MSC Mobile Switching Center
  • MME Mobility Management Entity
  • O&M Operations & Maintenance
  • OSS Operations Support System
  • SON Self Optimized Network
  • positioning node e.g. Evolved- Serving Mobile Location Centre (E-SMLC)
  • E-SMLC Evolved- Serving Mobile Location Centre
  • MDT Minimization of Drive Tests
  • test equipment physical node or software
  • the non-limiting term user equipment (UE) or wireless device may be used and may refer to any type of wireless device communicating with a network node and/or with another UE in a cellular or mobile communication system.
  • UE are target device, device to device (D2D) UE, machine type UE or UE capable of machine to machine (M2M) communication, PDA, PAD, Tablet, mobile terminals, smart phone, laptop embedded equipped (LEE), laptop mounted equipment (LME), USB dongles, UE category Ml, UE category M2, ProSe UE, V2V UE, V2X UE, etc.
  • terminologies such as base station/gNodeB and UE should be considered non-limiting and do in particular not imply a certain hierarchical relation between the two; in general, “gNodeB” could be considered as device 1 and “UE” could be considered as device 2 and these two devices communicate with each other over some radio channel. And in the following the transmitter or receiver could be either gNodeB (gNB), or UE.
  • gNB gNodeB
  • embodiments may be embodied as a system, apparatus, method, or program product. Accordingly, embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects.
  • the disclosed embodiments may be implemented as a hardware circuit comprising custom very-large-scale integration (“VLSI”) circuits or gate arrays, off- the-shelf semiconductors such as logic chips, transistors, or other discrete components.
  • VLSI very-large-scale integration
  • the disclosed embodiments may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like.
  • the disclosed embodiments may include one or more physical or logical blocks of executable code which may, for instance, be organized as an object, procedure, or function.
  • embodiments may take the form of a program product embodied in one or more computer readable storage devices storing machine readable code, computer readable code, and/or program code, referred hereafter as code.
  • the storage devices may be tangible, non- transitory, and/or non-transmission.
  • the storage devices may not embody signals. In a certain embodiment, the storage devices only employ signals for accessing code
  • the computer readable medium may be a computer readable storage medium.
  • the computer readable storage medium may be a storage device storing the code.
  • the storage device may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, holographic, micromechanical, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a storage device More specific examples (a non-exhaustive list) of the storage device would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random-access memory (“RAM”), a read-only memory (“ROM”), an erasable programmable read-only memory (“EPROM” or Flash memory), a portable compact disc readonly memory (“CD-ROM”), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Code for carrying out operations for embodiments may be any number of lines and may be written in any combination of one or more programming languages including an object- oriented programming language such as Python, Ruby, Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language, or the like, and/or machine languages such as assembly languages.
  • the code may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user’s computer through any type of network, including a local area network (“LAN”), wireless LAN (“WLAN”), or a wide area network (“WAN”), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider (“ISP”)).
  • LAN local area network
  • WLAN wireless LAN
  • WAN wide area network
  • ISP Internet Service Provider
  • the code may also be stored in a storage device that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the storage device produce an article of manufacture including instructions which implement the function/act specified in the flowchart diagrams and/or block diagrams.
  • the code may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer implemented process such that the code which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams.
  • each block in the flowchart diagrams and/or block diagrams may represent a module, segment, or portion of code, which includes one or more executable instructions of the code for implementing the specified logical function(s).
  • the disclosure is related to wireless communication system, which may be for example a 5G NR wireless communication system. More specifically, it represents a RAN of the wireless communication system, which is used exchange data with UEs via radio signals. For example, the RAN may send data to the UEs (downlink, DL), for instance data received from a core network (CN). The RAN may also receive data from the UEs (uplink, UL), which data may be forwarded to the CN.
  • DL downlink
  • CN core network
  • uplink, UL uplink
  • the RAN comprises one base station, BS.
  • the RAN may comprise more than one BS to increase the coverage of the wireless communication system.
  • Each of these BSs may be referred to as NB, eNodeB (or eNB), gNodeB (or gNB, in the case of a 5G NR wireless communication system), an access point or the like, depending on the wireless communication standard(s) implemented.
  • the UEs are located in a coverage of the BS.
  • the coverage of the BS corresponds for example to the area in which UEs can decode a PDCCH transmitted by the BS.
  • An example of a wireless device suitable for implementing any method, discussed in the present disclosure, performed at a UE corresponds to an apparatus that provides wireless connectivity with the RAN of the wireless communication system, and that can be used to exchange data with said RAN.
  • a wireless device may be included in a UE.
  • the UE may for instance be a cellular phone, a wireless modem, a wireless communication device, a handheld device, a laptop computer, or the like.
  • the UE may also be an Internet of Things (IoT) equipment, like a wireless camera, a smart sensor, a smart meter, smart glasses, a vehicle (manned or unmanned), a global positioning system device, etc., or any other equipment that may run applications that need to exchange data with remote recipients, via the wireless device.
  • the wireless device comprises one or more processors and one or more memories.
  • the one or more processors may include for instance a central processing unit (CPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.
  • the one or more memories may include any type of computer readable volatile and non-volatile memories (magnetic hard disk, solid-state disk, optical disk, electronic memory, etc.).
  • the one or more memories may store a computer program product, in the form of a set of program-code instructions to be executed by the one or more processors to implement all or part of the steps of a method for exchanging data, performed at a UE’s side, according to any one of the embodiments disclosed herein.
  • the wireless device can comprise also a main radio, MR, unit.
  • the MR unit corresponds to a main wireless communication unit of the wireless device, used for exchanging data with BSs of the RAN using radio signals.
  • the MR unit may implement one or more wireless communication protocols, and may for instance be a 3G, 4G, 5G, NR, WiFi, WiMax, etc. transceiver or the like.
  • the MR unit corresponds to a 5G NR wireless communication unit.
  • AI/ML based techniques are currently applied to many different applications, and 3GPP has also started technical investigations into applying them to multiple use cases based on the observed potential gains.
  • the AI/ML lifecycle can be split into several stages such as data collection/pre-processing, model training, model testing/validation, model deployment/update, model monitoring, etc., where each stage is equally important to achieve target performance with any specific model(s).
  • one of the challenging issues is managing the lifecycle of an AI/ML model, mainly because data/model drift occurs during model deployment/inference and results in performance degradation of the AI/ML model.
  • model selection is one of key issues for model performance maintenance as model performance such as inferencing and/or training is dependent on different model execution environment with varying configuration parameters.
  • collaboration between UE and gNB is highly important to track model performance and re-configure model corresponding to different environments.
  • an AI/ML model needs model monitoring after deployment because model performance cannot be maintained continuously due to drift; update feedback is then provided to re-train/update the model or to select an alternative model.
  • when an AI/ML model enabled wireless communication network is deployed, it is important to consider how to handle AI/ML models in activation with re-configuration for wireless devices under operations such as model training, inference, updating, etc. Therefore, there is a need for a specification of signaling methods and gNB-UE behaviors when a set of multiple specific AI/ML models is supported for RAN-based model operation, and a new mechanism for gNB-UE behaviors and procedures is necessary to avoid any performance impact on model operation using multiple specific AI/ML models.
  • association between models and index values is built to generate a different number of mapping tables using model IDs.
  • a set of the pre-configured ML models is available for each specific use case or functionality with a set of different feature sets (e.g., a network model set for multiple models located in the network and a UE model set for multiple models located in the UE).
  • the UE receives the indication about the association between models and index values from the network (e.g., gNB), and the information about the indicated association is sent through either a system information message or a dedicated RRC (radio resource control) message.
  • UE further receives the index (e.g., indication of model ID) from gNB through dedicated signaling message (L1/L2) and then UE simply checks index indicated in dedicated signaling message so that the corresponding model(s) can be applied to operation accordingly.
  • the determined index information with model ID(s) can be provided to a group of UEs that use the same model and/or to each individual UE through dedicated signaling.
  • Figure 1 shows an exemplary table of model ID mapping pairs when 2-bit indication of one-to-one mapping is used.
  • association of models and index values is represented by model mapping table with model ID list, and index information can be indicated by specific bits based on the pre-configured size of table.
  • the number of bits is 2 and model mapping pairs can be one-to-one between network and UE sides.
  • the size of model mapping pairs with model IDs can vary flexibly and index values can be pre-configured together with different number of bits with multiple model mapping tables.
  • Figure 2 shows an exemplary table of model ID mapping pairs when 2-bit indication of one-to-many mapping is used. Similar to Figure 1, the same number of bits is used to represent the association of models and index values, but this table is based on one-to-many mapping between network and UE sides. Specifically, the selected model ID(s) on one side can be paired with any combination of multiple model IDs that are available on the other side for a model mapping pair.
  • Network-UE model mapping pairs indicate the matching relationship between network model set and UE model set and are differently configured for each model function ID as each model function ID can have multiple pairs of network-UE model IDs.
  • Multiple mapping tables can also be pre-configured to support various model functionalities and/or applications as well as different vendors and designed to be optimized for model mapping pairs that consist of the combinations of network model IDs and UE model IDs.
  • Model operation can then be set up in the RAN between network and UE based on the matched model IDs from the mapping table, which is generated and updated with configurable content using historical data from the network or a server with a model repository. A different number of mapping tables can be generated for varying model execution environments.
  • Figure 3 shows a block diagram of the model mapping generation system.
  • multiple functional blocks are engaged to be input to model mapping such as network/UE model set, model attribute data and model functionality.
  • Model mapping table can be updated by communicating with model matching repository.
  • the exemplary steps are as follows. Firstly, the service type/application and environmental conditions for model operation are identified to determine the model functionality and the associated feature set with attribute data. Then, one or more candidate models associated with each feature subset are listed. Model performances of the candidate models based on the given metrics are evaluated so as to generate the association of the selected optimal models with model IDs. The matched model pairs can then be generated as output.
  • Figure 4 shows a flow chart of gNB behavior to send model ID.
  • the index of specific model ID(s) is selected and this information is then sent to UE so that UE can also identify the matched model(s) to be paired for model operation.
  • multiple indexes can be also chosen for transmission so that multiple combinations of model IDs can be possibly used if necessary.
  • for example, model training and inferencing operations will require different model ID pairs for activation depending on the model execution environment.
  • in the model mapping repository, a number of different model IDs are updated with varying versions. Therefore, both the network model ID set and the UE model ID set are updated with the latest versions in a periodic, semi-static, or non-periodic way.
  • when there are multiple UEs with the same model ID set, those UEs can be grouped together for model operation in communication with the same gNB.
  • Figure 5 shows a flow chart of UE behavior to receive model ID. Based on the index of specific model ID(s) transmitted by gNB, UE then receives this information so as to configure the necessary model operation setup with the indicated model ID(s) for activation.
  • sidelink UEs can use their own model ID set and collaborate with other UEs or the associated gNB for collaborative model operation using the model mapping pairs.
  • Figure 6 shows a flow chart of the model ID search and selection procedure. Based on the pre-configured association information between models and index values using mapping tables, firstly the requested model ID(s) is searched. In case there is no available model ID to select, the mapping table needs to be updated to include the requested model ID(s) by communicating with the model matching repository. Alternatively, the closest matching model(s) can also be used for selection, and the range of model ID pairs is quantized based on the pre-configured criteria.
  • Figure 7 shows a signaling flow of model ID selection by gNB.
  • Available model ID list information (that is based on the pre-configured model mapping) is identified from UE signaling about ML capability information.
  • gNB determines selection of model ID pair(s) and UE is then indicated about the determined model ID(s) to be paired from gNB signaling.
  • the UE model status report is sent to confirm the selected model ID(s) for use. In case the UE selects a different model ID(s) not requested by the gNB, the selected UE model ID(s) should be reported to the gNB.
  • Figure 8 shows a signaling flow of model ID selection by UE.
  • UE determines selection of model ID pair(s) and UE then sends ML capability information and model ID information to gNB so that gNB configures the indicated model ID(s) and sends re-configuration information update.
  • the recommended model ID information can also be sent to UE when re-configuration information update is signaled.
  • Model set can comprise multiple layers such as superset model, subset models, and individual models.
  • One or more number of individual models represent each subset model, and a set of subset models are contained in superset model.
  • Model ID values are assigned to each level of models for superset, subset and individual models.
  • Superset based ML model structure is applied to both network side and UE side.
  • Model attribute data contains information about the model related {configuration, metadata, parameter set}.
  • Model functionality contains information about list of model functions based on the specified applications.
  • two or more model IDs can be selected for collaborative model operation (non-standalone type) and a single model ID can be used for independent model operation (standalone type).
  • there can be two types of model IDs, namely Type-1 and Type-2 model IDs.
  • Type-1: a common model ID and standard-compatible models, where model IDs for network and UE are registered in the same repository of the model ID list, and/or model IDs have the minimum required feature set and baseline capability based on the given model attributes and functionality.
  • Type-2: a dedicated model ID and standard/proprietary models, where model IDs for network and UE are registered in a separate repository of an independent superset model ID list, and/or model IDs have an enhanced feature set and specific capability with higher complexity over the common model, based on the given model attributes and functionality.
  • the proposed scheme shows enhancements for signaling overhead and latency.
  • signaling overhead aspect: by using model ID based mapping model pairs (which are pre-configured for best matching and shared on both sides), only an index value is needed for signaling about model selection or switching.
  • configuration related signaling gets simpler, as the proposed model ID structure fields contain the information that can be represented through model ID indication via mapping. Otherwise, when a model transfer occurs, the whole model information (e.g., model parameters, model metadata, config info, etc.) might be signaled fully or partially.
  • latency aspect: when model operations such as switching/selection are requested, the gNB and/or UE can look up the mapping model repository so that the best matching model(s) can be searched in advance. Therefore, execution latency is expected to be reduced.
  • the latency reduction level might depend on different model operation use cases such as model switching, selection, transfer, update, etc.
  • when multiple UEs share the same model ID set, those UEs can be grouped together for model operation, and further improvement in signaling overhead/latency performance is expected.
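The signaling-overhead point above can be made concrete with a small sketch (illustrative only, not part of the disclosure): the index width grows only logarithmically with the pre-configured mapping-table size, whereas a model transfer would carry the full parameter/metadata payload.

```python
import math

def index_bits(table_size: int) -> int:
    """Bits needed to signal an index into a pre-configured mapping table."""
    return max(1, math.ceil(math.log2(table_size)))

# A 4-entry table (as in the 2-bit examples of Figures 1 and 2) needs
# only 2 bits per model selection/switching indication.
```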

Abstract

The present disclosure describes methods of using a pre-configured AI/ML (artificial intelligence/machine learning) based model list in a wireless mobile communication system including a base station (e.g., gNB) and a mobile station (e.g., UE). When an AI/ML model is applied to the radio access network, model performance such as inferencing and/or training depends on different model execution environments with varying models. Therefore, model operation can be set up between the network and the UE by selecting the relevant model(s) from the model list.

Description

TITLE
Method of model identification signaling
TECHNICAL FIELD
The present disclosure relates to AI/ML based model identification, where techniques for pre-configuring and signaling the specific information about model selection using association between models and index values are presented.
BACKGROUND
In 3GPP (Third Generation Partnership Project), one of the selected study items in the approved Release 18 package is AI/ML (artificial intelligence/machine learning), as described in the related document (RP-213599) addressed in 3GPP TSG (Technical Specification Group) RAN (Radio Access Network) meeting #94e. The official title of the AI/ML study item is “Study on AI/ML for NR Air Interface”, and currently RAN WG1 (Working Group 1) and WG2 are actively working on the specification. The goal of this study item is to identify a common AI/ML framework and areas of obtaining gains using AI/ML based techniques with use cases. According to 3GPP, the main objective of this study item is to study the AI/ML framework for the air interface with target use cases by considering performance, complexity, and potential specification impact. In particular, the AI/ML model, terminology and description to identify common and specific characteristics for the framework will be a key part of the work scope. Regarding the AI/ML framework, various aspects are under consideration for investigation, and one of the key items is lifecycle management of the AI/ML model, where multiple stages are included as mandatory for model training, model deployment, model inference, model monitoring, model updating, etc. Earlier, in 3GPP TR 37.817 for Release 17, titled “Study on enhancement for Data Collection for NR and EN-DC”, UE (user equipment) mobility was also considered as one of the AI/ML use cases, and one of the scenarios for model training/inference is that both functions are located within the RAN node. Subsequently, in Release 18 the new work item “Artificial Intelligence (AI)/Machine Learning (ML) for NG-RAN” was initiated to specify data collection enhancements and signaling support within existing NG-RAN interfaces and architecture.
For the above active standardization works, model identification to support RAN-based AI/ML model is considered very significant for both network and UE to meet any desired model operations (e.g., model training, inference, selection, switching, update, monitoring, etc.).
Currently, there is no specification defined for signaling methods and network-UE behaviors when a set of multiple specific AI/ML models is supported for RAN-based model operation. Therefore, a new mechanism for base station (BS/gNB)-UE behaviors and procedures is provided to avoid any performance impact on model operation using multiple specific AI/ML models. It is necessary to investigate any specification impact by considering model operation when a finite set of models is used.
US 2022400373 describes the method of determining neural network functions and configuring models for performing wireless communications management procedures.
US 2022108214 explains ML model management method for network data analytics function device, and US 2022337487 shows that a network entity determines at least one model parameter of a model for digitally analyzing input data depending on the at least one model parameter of a model, the network entity being configured to receive a model request.
WO 2023277780 contains a method of downloading of a compiled machine code version of a ML model to a wireless communication device. WO 2022258149 provides a way for training a model in a server device based on training data in a user device, and WO 2022228666 shows about influencing training of a ML model based on a training policy provided by an actor node.
WO 2022161624 describes a method of receiving a request for retrieving or executing a ML model or a combination of ML models. A solution to the above-mentioned problem is given by the embodiments of this disclosure.
A method of model identification signaling, where a pre-configured association between models and index values is built to generate a different number of mapping tables using model IDs based on a finite set of ML models for a specific use case/application, wherein at both the network and UE sides, a set of pre-configured ML models is available for each specific use case or functionality with a set of different feature sets (e.g., a network model set for multiple models located in the network and a UE model set for multiple models located in the UE). For model operation, the UE receives the indication about the association between models and index values from the network (e.g., gNB), and the information about the indicated association is sent through either a system information message or a dedicated RRC message. The UE further receives the index (e.g., indication of model ID) from the gNB through a dedicated signaling message (L1/L2) and then simply checks the index indicated in the dedicated signaling message so that the corresponding model(s) can be applied to operation accordingly. Based on one or multiple mapping tables reflecting the association between models and index values, the determined index information with model ID(s) can be provided to a group of UEs that use the same model and/or to each individual UE through dedicated signaling.
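The UE-side reception behavior described above can be sketched as follows. The table contents and model IDs are purely illustrative assumptions; in practice the association would arrive via system information or dedicated RRC signaling, and the index via an L1/L2 message.

```python
# Hypothetical pre-configured association (2-bit index -> model ID pair),
# mirroring a one-to-one mapping as in Figure 1; all IDs are invented.
MAPPING_TABLE = {
    0b00: ("NW-M1", "UE-M1"),
    0b01: ("NW-M2", "UE-M2"),
    0b10: ("NW-M3", "UE-M3"),
    0b11: ("NW-M4", "UE-M4"),
}

def apply_indicated_model(index: int) -> str:
    """UE side: check the index carried in the dedicated signaling
    message and return the UE model ID to activate for model operation."""
    _network_model_id, ue_model_id = MAPPING_TABLE[index]
    return ue_model_id
```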
In some embodiments of the method according to the first aspect, the method is characterized by, that model ID mapping pairs can be one-to-one and/or one-to-many mapping, where the association of models and index values is represented by model mapping table with model ID list. Index information can be indicated by specific bits based on the pre-configured size of table. In implementation, the size of model mapping pairs with model IDs can vary flexibly and index values can be pre-configured together with different number of bits with multiple model mapping tables. The selected model ID(s) in one side can be paired with any combination of multiple model IDs that are available on the other side for model mapping pair. Network-UE model mapping pairs indicate the matching relationship between network model set and UE model set and are differently configured for each model function ID as each model function ID can have multiple pairs of network-UE model IDs. Multiple mapping tables can also be pre-configured to support various model functionalities and/or applications as well as different vendors and designed to be optimized for model mapping pairs that consist of the combinations of network model IDs and UE model IDs.
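The one-to-many case can be sketched along these lines (the pairs below are invented for illustration): a single index points to a network model ID paired with a combination of UE model IDs, and the index bit-width follows from the pre-configured table size.

```python
# Hypothetical one-to-many mapping table for one model function ID
# (cf. Figure 2): one network model ID paired with a combination of
# UE model IDs per index.
ONE_TO_MANY = {
    0b00: ("NW-M1", ["UE-M1"]),
    0b01: ("NW-M1", ["UE-M2", "UE-M3"]),
    0b10: ("NW-M2", ["UE-M1", "UE-M4"]),
    0b11: ("NW-M3", ["UE-M2", "UE-M3", "UE-M4"]),
}

def ue_models_for_index(index: int) -> list[str]:
    """Return the combination of UE model IDs paired under the index."""
    _nw_model_id, ue_model_ids = ONE_TO_MANY[index]
    return ue_model_ids
```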
Model operation can then be set up in the RAN between network and UE based on the matched model IDs from the mapping table, which is generated and updated with configurable content using historical data from the network or a server with a model repository. A different number of mapping tables can be generated for varying model execution environments.
In some embodiments of the method according to the first aspect, the method is characterized by, that multiple functional blocks to generate model mapping table are engaged to be input to model mapping such as network/UE model set, model attribute data and model functionality as model mapping table can be updated by communicating with model matching repository. Specifically, the exemplary steps are as follows. The service type/application and environmental conditions for model operation is identified to determine model functionality and the associated feature set with attribute data. After then, one or more candidate models associated with each feature subsets are listed. Model performances of candidate models based on the given metrics are evaluated so as to generate association of the selected optimal models with model IDs. The matched model pairs can then be generated as output.
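The generation steps above (identify functionality, list candidates, evaluate against the given metrics, output matched pairs) might be sketched as follows; the candidate lists and the scoring metric are assumptions for illustration only.

```python
def generate_model_pairs(candidates, evaluate):
    """For each model function ID, evaluate the candidate
    (network model ID, UE model ID) pairs with the given metric and
    keep the best-scoring pair as the mapping output."""
    return {function_id: max(pairs, key=evaluate)
            for function_id, pairs in candidates.items()}

# Illustrative candidate pairs per model function and a dummy metric.
candidates = {
    "func-1": [("NW-M1", "UE-M1"), ("NW-M2", "UE-M3")],
    "func-2": [("NW-M3", "UE-M2")],
}
scores = {("NW-M1", "UE-M1"): 0.7, ("NW-M2", "UE-M3"): 0.9,
          ("NW-M3", "UE-M2"): 0.8}
matched_pairs = generate_model_pairs(candidates, scores.__getitem__)
```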
In some embodiments of the method of selecting the index of specific model ID(s) based on the pre-configured model mapping information related to the association between models and index values, the selected index information is sent to the UE so that the UE can also identify the matched model(s) to be paired for model operation, where, when selecting the index of model ID(s) from the model mapping, multiple indexes can also be chosen for transmission so that multiple combinations of model IDs can be used if necessary. For different model operation requests based on lifecycle management, various combinations or pairs of model IDs are supported using model mapping tables. In the model mapping repository, a number of different model IDs are updated with varying versions. Both the network model ID set and the UE model ID set are updated with the latest versions in a periodic, semi-static, or non-periodic way. When there are multiple UEs with the same model ID set, those UEs can be grouped together for model operation in communication with the same gNB.
In some embodiments the methods are characterized by that, based on the index of specific model ID(s) transmitted by the gNB, the UE receives the index information so as to configure the necessary model operation setup with the indicated model ID(s) for activation; sidelink UEs can use their own model ID set and collaborate with other UEs or the associated gNB for collaborative model operation using the model mapping pairs.
In some embodiments the methods are characterized by that, based on the pre-configured association information between models and index values using mapping tables, firstly the requested model ID(s) is searched, where in some cases there is no available model ID to select and the mapping table needs to be updated to include the requested model ID(s) by communicating with the model matching repository, and in some cases the closest matching model(s) can also be used for selection, the range of model ID pairs being quantized based on the pre-configured criteria.
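One possible reading of this search-and-fallback procedure is sketched below; the similarity criterion is an invented stand-in for the pre-configured matching criteria (the other alternative in the text is updating the table from the repository).

```python
def select_model_index(requested: str, table: dict, similarity):
    """Search the mapping table (index -> model ID) for the requested
    model ID; if absent, fall back to the closest matching entry
    according to the given similarity criterion."""
    for index, model_id in table.items():
        if model_id == requested:
            return index
    return max(table, key=lambda idx: similarity(requested, table[idx]))

def prefix_similarity(a: str, b: str) -> int:
    """Toy criterion: length of the common prefix of two model IDs."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n
```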
In some embodiments the method of identifying available model ID list information based on the pre-configured model mapping from UE signaling about ML capability information, wherein, based on the pre-configured mapping table, gNB determines selection of model ID pair(s) and UE is then indicated about the determined model ID(s) to be paired from gNB signaling. UE model status report is sent to confirm the selected model ID(s) for use. In case that UE selects a different model ID(s) not requested by gNB, it should be reported to gNB for the selected UE model ID(s).
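A toy rendering of this gNB-driven flow (capability report, pair selection, and model status confirmation), with invented IDs and a deliberately simplified selection rule:

```python
def gnb_driven_selection(table, ue_capability, ue_override=None):
    """The gNB picks the first mapping pair whose UE model ID appears in
    the UE's reported ML capability; the UE's model status report then
    either confirms that ID or reports a different selection of its own
    (signaled back to the gNB as the confirmation flag)."""
    for index, (_nw_id, ue_id) in table.items():
        if ue_id in ue_capability:
            reported = ue_id if ue_override is None else ue_override
            return index, reported, reported == ue_id
    raise LookupError("no mapping pair matches the UE capability")
```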
In some embodiments the methods are characterized by, that based on the preconfigured mapping information, UE determines selection of model ID pair(s) and UE then sends ML capability information and model ID information to gNB so that gNB configures the indicated model ID(s) and sends re-configuration information update, where in some cases, gNB can also select other model ID(s) to be used in UE and the recommended model ID information can also be sent to UE when reconfiguration information update is signaled.
In some embodiments the method of forming a model set structure is characterized by that the model set can comprise multiple layers such as a superset model, subset models, and individual models, where one or more individual models represent each subset model, and a set of subset models is contained in the superset model. Model ID values are assigned to each level of models for superset, subset and individual models. The superset based ML model structure is applied to both the network side and the UE side. Model attribute data contains information about the model related {configuration, metadata, parameter set}. Model functionality contains information about the list of model functions based on the specified applications. Depending on the applied model operation with the specific models, two or more model IDs can be selected for collaborative model operation (non-standalone type) and a single model ID can be used for independent model operation (standalone type).
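The superset/subset/individual layering might be represented along these lines (the IDs are illustrative, and the flat dictionary is an assumed simplification of the layered structure):

```python
from dataclasses import dataclass, field

@dataclass
class ModelSet:
    """Superset model holding subset models, each holding individual
    models; a model ID value is assigned at every level."""
    superset_id: str
    subsets: dict = field(default_factory=dict)  # subset ID -> individual model IDs

    def individual_ids(self) -> list:
        return [mid for ids in self.subsets.values() for mid in ids]

# The same structure would apply on both the network and the UE side.
network_set = ModelSet("SS-NW-1", {"SUB-A": ["M-1", "M-2"], "SUB-B": ["M-3"]})
```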
In some embodiments the method of categorizing two types of model IDs, namely Type-1 and Type-2 model IDs, is characterized by that, for Type-1, it is a common model ID and standard-compatible models, where model IDs for network and UE are registered in the same repository of the model ID list, and/or model IDs have the minimum required feature set and baseline capability based on the given model attributes and functionality. For Type-2, it is a dedicated model ID and standard/proprietary models, where model IDs for network and UE are registered in a separate repository of an independent superset model ID list, and/or model IDs have an enhanced feature set and specific capability with higher complexity over the common model, based on the given model attributes and functionality.
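In a deliberately reduced sketch, the Type-1/Type-2 distinction can be collapsed to two properties of a model ID (registration in a shared repository, and baseline rather than enhanced capability); real categorization would of course rest on the full attribute and functionality data.

```python
def model_id_type(shared_repository: bool, baseline_capability: bool) -> str:
    """Type-1: common, standard-compatible IDs registered in a shared
    repository with baseline capability; anything else is treated as
    Type-2 (dedicated/proprietary, enhanced) in this simplification."""
    return "Type-1" if shared_repository and baseline_capability else "Type-2"
```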
According to another aspect, the present disclosure relates to an apparatus for advanced ML signaling for a RAN of a wireless communication system, the apparatus comprising a wireless transceiver and a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement steps according to the aforementioned aspects. According to another aspect, the present disclosure relates to a User Equipment comprising an aforementioned apparatus.
According to another aspect, the present disclosure relates to a gNB comprising an aforementioned apparatus.
According to another aspect, the present disclosure relates to a wireless communication system for ML condition pairing, wherein the wireless communication system comprises an aforementioned user equipment and gNB, whereby the user equipment and the gNB each comprise a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement steps of the aforementioned methods.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is an exemplary table of model ID mapping pairs for one-to-one mapping with 2-bit.
Figure 2 is an exemplary table of model ID mapping pairs for one-to-many mapping with 2-bit.
Figure 3 is a block diagram of model mapping generation system.
Figure 4 is a flow chart of gNB behavior to send model ID.
Figure 5 is a flow chart of UE behavior to receive model ID.
Figure 6 is a flow chart of model ID search and selection procedure.
Figure 7 is a signaling flow of model ID selection by gNB.
Figure 8 is a signaling flow of model ID selection by UE.
Figure 9 is a block diagram of model set structure.
DETAILED DESCRIPTION
The detailed description set forth below, with reference to annexed drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In particular, although terminology from 3GPP 5G NR may be used in this disclosure to exemplify embodiments herein, this should not be seen as limiting the scope of the invention.
Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Other embodiments, however, are contained within the scope of the subject matter disclosed herein; the disclosed subject matter should not be construed as limited to only the embodiments set forth herein. Rather, these embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art.
Generally, all terms used herein are to be interpreted according to their ordinary meaning in the relevant technical field, unless a different meaning is clearly given and/or is implied from the context in which it is used. All references to a/an/the element, apparatus, component, means, step, etc. are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any methods disclosed herein do not have to be performed in the exact order disclosed, unless a step is explicitly described as following or preceding another step and/or where it is implicit that a step must follow or precede another step. Any feature of any of the embodiments disclosed herein may be applied to any other embodiment, wherever appropriate. Likewise, any advantage of any of the embodiments may apply to any other embodiments, and vice versa. Other objectives, features and advantages of the enclosed embodiments will be apparent from the following description.
In some embodiments, a more general term “network node” may be used and may correspond to any type of radio network node or any network node, which communicates with a UE (directly or via another node) and/or with another network node. Examples of network nodes are NodeB, MeNB, ENB, a network node belonging to MCG or SCG, base station (BS), multi-standard radio (MSR) radio node such as MSR BS, eNodeB, gNodeB, network controller, radio network controller (RNC), base station controller (BSC), relay, donor node controlling relay, base transceiver station (BTS), access point (AP), transmission points, transmission nodes, RRU, RRH, nodes in distributed antenna system (DAS), core network node (e.g. Mobile Switching Center (MSC), Mobility Management Entity (MME), etc), Operations & Maintenance (O&M), Operations Support System (OSS), Self Optimized Network (SON), positioning node (e.g. Evolved- Serving Mobile Location Centre (E-SMLC)), Minimization of Drive Tests (MDT), test equipment (physical node or software), etc.
In some embodiments, the non-limiting term user equipment (UE) or wireless device may be used and may refer to any type of wireless device communicating with a network node and/or with another UE in a cellular or mobile communication system. Examples of UE are target device, device to device (D2D) UE, machine type UE or UE capable of machine to machine (M2M) communication, PDA, PAD, Tablet, mobile terminals, smart phone, laptop embedded equipped (LEE), laptop mounted equipment (LME), USB dongles, UE category Ml, UE category M2, ProSe UE, V2V UE, V2X UE, etc.
Additionally, terminologies such as base station/gNodeB and UE should be considered non-limiting and do in particular not imply a certain hierarchical relation between the two; in general, “gNodeB” could be considered as device 1 and “UE” could be considered as device 2 and these two devices communicate with each other over some radio channel. And in the following the transmitter or receiver could be either gNodeB (gNB), or UE. As will be appreciated by one skilled in the art, aspects of the embodiments may be embodied as a system, apparatus, method, or program product. Accordingly, embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects.
For example, the disclosed embodiments may be implemented as a hardware circuit comprising custom very-large-scale integration (“VLSI”) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. The disclosed embodiments may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like. As another example, the disclosed embodiments may include one or more physical or logical blocks of executable code which may, for instance, be organized as an object, procedure, or function.
Furthermore, embodiments may take the form of a program product embodied in one or more computer readable storage devices storing machine readable code, computer readable code, and/or program code, referred to hereafter as code. The storage devices may be tangible, non-transitory, and/or non-transmission. The storage devices may not embody signals. In a certain embodiment, the storage devices only employ signals for accessing code.
Any combination of one or more computer readable medium may be utilized. The computer readable medium may be a computer readable storage medium. The computer readable storage medium may be a storage device storing the code. The storage device may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, holographic, micromechanical, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
More specific examples (a non-exhaustive list) of the storage device would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random-access memory (“RAM”), a read-only memory (“ROM”), an erasable programmable read-only memory (“EPROM” or Flash memory), a portable compact disc read-only memory (“CD-ROM”), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Code for carrying out operations for embodiments may be any number of lines and may be written in any combination of one or more programming languages including an object-oriented programming language such as Python, Ruby, Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language, or the like, and/or machine languages such as assembly languages. The code may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user’s computer through any type of network, including a local area network (“LAN”), wireless LAN (“WLAN”), or a wide area network (“WAN”), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider (“ISP”)).
Furthermore, the described features, structures, or characteristics of the embodiments may be combined in any suitable manner. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments. One skilled in the relevant art will recognize, however, that embodiments may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of an embodiment. Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment, but mean “one or more but not all embodiments” unless expressly specified otherwise. The terms “including,” “comprising,” “having,” and variations thereof mean “including but not limited to,” unless expressly specified otherwise. An enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms “a,” “an,” and “the” also refer to “one or more” unless expressly specified otherwise.
Aspects of the embodiments are described below with reference to schematic flowchart diagrams and/or schematic block diagrams of methods, apparatuses, systems, and program products according to embodiments. It will be understood that each block of the schematic flowchart diagrams and/or schematic block diagrams, and combinations of blocks in the schematic flowchart diagrams and/or schematic block diagrams, can be implemented by code. This code may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams.
The code may also be stored in a storage device that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the storage device produce an article of manufacture including instructions which implement the function/act specified in the flowchart diagrams and/or block diagrams.
The code may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer implemented process such that the code which executes on the computer or other programmable apparatus provides processes for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams.
The flowchart diagrams and/or block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, systems, methods, and program products according to various embodiments. In this regard, each block in the flowchart diagrams and/or block diagrams may represent a module, segment, or portion of code, which includes one or more executable instructions of the code for implementing the specified logical function(s).
It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more blocks, or portions thereof, of the illustrated Figures.
Although various arrow types and line types may be employed in the flowchart and/or block diagrams, they are understood not to limit the scope of the corresponding embodiments. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the depicted embodiment. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted embodiment. It will also be noted that each block of the block diagrams and/or flowchart diagrams, and combinations of blocks in the block diagrams and/or flowchart diagrams, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and code.
The description of elements in each figure may refer to elements of preceding figures. Like numbers refer to like elements in all figures, including alternate embodiments of like elements. The detailed description set forth below, with reference to the figures, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. For instance, although 3GPP terminology, from e.g., 5G NR, may be used in this disclosure to exemplify embodiments herein, this should not be seen as limiting the scope of the present disclosure.
The disclosure relates to a wireless communication system, which may be, for example, a 5G NR wireless communication system. More specifically, it concerns a RAN of the wireless communication system, which is used to exchange data with UEs via radio signals. For example, the RAN may send data to the UEs (downlink, DL), for instance data received from a core network (CN). The RAN may also receive data from the UEs (uplink, UL), which data may be forwarded to the CN.
In the examples illustrated, the RAN comprises one base station, BS. Of course, the RAN may comprise more than one BS to increase the coverage of the wireless communication system. Each of these BSs may be referred to as NB, eNodeB (or eNB), gNodeB (or gNB, in the case of a 5G NR wireless communication system), an access point or the like, depending on the wireless communication standard(s) implemented.
The UEs are located in a coverage of the BS. The coverage of the BS corresponds for example to the area in which UEs can decode a PDCCH transmitted by the BS.
An example of a wireless device suitable for implementing any method, discussed in the present disclosure, performed at a UE corresponds to an apparatus that provides wireless connectivity with the RAN of the wireless communication system, and that can be used to exchange data with said RAN. Such a wireless device may be included in a UE. The UE may for instance be a cellular phone, a wireless modem, a wireless communication device, a handheld device, a laptop computer, or the like. The UE may also be an Internet of Things (IoT) equipment, like a wireless camera, a smart sensor, a smart meter, smart glasses, a vehicle (manned or unmanned), a global positioning system device, etc., or any other equipment that may run applications that need to exchange data with remote recipients, via the wireless device.
The wireless device comprises one or more processors and one or more memories. The one or more processors may include for instance a central processing unit (CPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc. The one or more memories may include any type of computer readable volatile and non-volatile memories (magnetic hard disk, solid-state disk, optical disk, electronic memory, etc.). The one or more memories may store a computer program product, in the form of a set of program-code instructions to be executed by the one or more processors to implement all or part of the steps of a method for exchanging data, performed at a UE’s side, according to any one of the embodiments disclosed herein.
The wireless device can comprise also a main radio, MR, unit. The MR unit corresponds to a main wireless communication unit of the wireless device, used for exchanging data with BSs of the RAN using radio signals. The MR unit may implement one or more wireless communication protocols, and may for instance be a 3G, 4G, 5G, NR, WiFi, WiMax, etc. transceiver or the like. In preferred embodiments, the MR unit corresponds to a 5G NR wireless communication unit.
Some of the embodiments contemplated herein will now be described more fully with reference to the accompanying drawings. Other embodiments, however, are contained within the scope of the subject matter disclosed herein; the disclosed subject matter should not be construed as limited to only the embodiments set forth herein. Rather, these embodiments are provided by way of example to convey the scope of the subject matter to those skilled in the art.
The following explanation provides a detailed description of the mechanism for pre-configuring and signaling specific information about model selection using an association between models and index values. AI/ML-based techniques are currently applied to many different applications, and 3GPP has also started technical investigations into applying them to multiple use cases based on observed potential gains. The AI/ML lifecycle can be split into several stages, such as data collection/pre-processing, model training, model testing/validation, model deployment/update, and model monitoring, where each stage is equally important to achieve target performance with any specific model(s). In applying an AI/ML model to any use case or application, one of the challenging issues is managing the lifecycle of the AI/ML model. This is mainly because data/model drift occurs during model deployment/inference, resulting in performance degradation of the AI/ML model. Fundamentally, statistical changes in the dataset occur after the model is deployed, and the model's inference capability is also impacted by unseen data as input. Similarly, the statistical properties of the dataset and the relationship between input and output for the trained model can change when drift occurs. In this context, model selection is one of the key issues for maintaining model performance, as model performance in inference and/or training depends on the model execution environment with its varying configuration parameters. To handle this issue, collaboration between UE and gNB is highly important to track model performance and re-configure the model corresponding to different environments. An AI/ML model needs monitoring after deployment because model performance cannot be maintained continuously due to drift; update feedback is then provided to re-train/update the model or to select an alternative model.
When an AI/ML-model-enabled wireless communication network is deployed, it is important to consider how to handle AI/ML model activation and re-configuration for wireless devices under operations such as model training, inference, and updating. Therefore, signaling methods and gNB-UE behaviors need to be specified for the case where a set of multiple specific AI/ML models is supported for RAN-based model operation, and a new mechanism for gNB-UE behaviors and procedures is necessary to avoid any performance impact on model operation using multiple specific AI/ML models.
Based on a finite set of ML models for a specific use case/application, an association between models and index values is built to generate a different number of mapping tables using model IDs. On both the network and UE sides, a set of pre-configured ML models is available for each specific use case or functionality, with a set of different feature sets (e.g., a network model set for multiple models located in the network and a UE model set for multiple models located in the UE). For model operation, the UE receives an indication of the association between models and index values from the network (e.g., gNB); this indication is sent through either a system information message or a dedicated RRC (radio resource control) message. The UE further receives the index (e.g., an indication of a model ID) from the gNB through a dedicated signaling message (L1/L2) and then simply checks the index indicated in the dedicated signaling message so that the corresponding model(s) can be applied to operation accordingly.
Based on one or multiple mapping tables reflecting the association between models and index values, where the associated information about a model (e.g., model attribute data, model functionality, etc.) can be extracted through the model ID, the determined index information with model ID(s) can be provided through dedicated signaling to a group of UEs that use the same model and/or to each individual UE.
Figure 1 shows an exemplary table of model ID mapping pairs when a 2-bit indication of one-to-one mapping is used. As described in this table, the association of models and index values is represented by a model mapping table with a model ID list, and index information can be indicated by specific bits based on the pre-configured size of the table. In this example, the number of bits is 2 and model mapping pairs are one-to-one between the network and UE sides. In implementation, the size of the model mapping pairs with model IDs can vary flexibly, and index values can be pre-configured together with a different number of bits across multiple model mapping tables.
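The one-to-one mapping of Figure 1 can be sketched as a simple lookup table. The following is a minimal Python illustration, assuming invented model ID names and the 2-bit index size described above (neither is taken from any specification):

```python
# Hypothetical pre-configured one-to-one mapping table (Figure 1):
# a 2-bit index selects a (network model ID, UE model ID) pair.
# All model IDs are illustrative placeholders.
INDEX_BITS = 2

MAPPING_TABLE = {
    0b00: ("NW-Model-A", "UE-Model-A"),
    0b01: ("NW-Model-B", "UE-Model-B"),
    0b10: ("NW-Model-C", "UE-Model-C"),
    0b11: ("NW-Model-D", "UE-Model-D"),
}

def lookup_pair(index):
    """Return the (network, UE) model ID pair for a received index."""
    if not 0 <= index < 2 ** INDEX_BITS:
        raise ValueError(f"index must fit in {INDEX_BITS} bits")
    return MAPPING_TABLE[index]
```

Because the table is pre-configured and shared on both sides, only the 2-bit index ever needs to be signaled to select a pair.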
Figure 2 shows an exemplary table of model ID mapping pairs when a 2-bit indication of one-to-many mapping is used. Similar to Figure 1, the same number of bits is used to represent the association of models and index values, but this table is based on one-to-many mapping between the network and UE sides. Specifically, the selected model ID(s) on one side can be paired with any combination of multiple model IDs that are available on the other side to form a model mapping pair.
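A one-to-many variant of the table (Figure 2) can be sketched the same way, with one index mapping a single network model ID to a set of candidate UE model IDs. The names below are invented placeholders:

```python
# Hypothetical one-to-many mapping table (Figure 2): one 2-bit index maps
# a network model ID to several candidate UE model IDs on the other side.
ONE_TO_MANY_TABLE = {
    0b00: ("NW-Model-A", {"UE-Model-A1", "UE-Model-A2"}),
    0b01: ("NW-Model-B", {"UE-Model-B1"}),
    0b10: ("NW-Model-C", {"UE-Model-C1", "UE-Model-C2", "UE-Model-C3"}),
    0b11: ("NW-Model-D", {"UE-Model-D1", "UE-Model-D2"}),
}

def ue_candidates(index):
    """UE model IDs that may be paired with the indicated network model."""
    _, candidates = ONE_TO_MANY_TABLE[index]
    return candidates
```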
Network-UE model mapping pairs indicate the matching relationship between the network model set and the UE model set and are configured differently for each model function ID, as each model function ID can have multiple pairs of network-UE model IDs. Multiple mapping tables can also be pre-configured to support various model functionalities and/or applications as well as different vendors, and designed to be optimized for model mapping pairs that consist of combinations of network model IDs and UE model IDs. Model operation can then be set up in the RAN between network and UE based on the matched model IDs from the mapping table, which is generated and updated with configurable content using historical data from the network or a server with a model repository. A different number of mapping tables can be generated for varying model execution environments.
Figure 3 shows a block diagram of the model mapping generation system. To form the model mapping table, multiple functional blocks serve as input to model mapping, such as the network/UE model set, model attribute data, and model functionality. The model mapping table can be updated by communicating with the model matching repository. Specifically, the exemplary steps are as follows. First, the service type/application and environmental conditions for model operation are identified to determine the model functionality and the associated feature set with attribute data. Then, one or more candidate models associated with each feature subset are listed. The performance of the candidate models is evaluated based on the given metrics so as to generate the association of the selected optimal models with model IDs. The matched model pairs can then be generated as output.
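The generation steps above amount to scoring candidate pairs and emitting the best match. A minimal sketch follows; the candidate IDs, scores, and evaluation metric are all invented for illustration:

```python
# Hypothetical sketch of the mapping-generation step of Figure 3:
# list candidate (network, UE) model pairs for a functionality, evaluate
# each against a given metric, and output the best-scoring pair.
def generate_mapping_pair(candidates, evaluate):
    """candidates: list of (nw_id, ue_id) pairs;
    evaluate: pair -> score, higher is better."""
    return max(candidates, key=evaluate)

# Invented candidate pairs and performance scores for one model function.
candidates = [("NW-1", "UE-1"), ("NW-1", "UE-2"), ("NW-2", "UE-3")]
scores = {("NW-1", "UE-1"): 0.71, ("NW-1", "UE-2"): 0.88, ("NW-2", "UE-3"): 0.64}

pair = generate_mapping_pair(candidates, scores.get)
```

The selected pair would then be assigned an index in the mapping table and stored in the model matching repository.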
Figure 4 shows a flow chart of the gNB behavior for sending a model ID. Based on the pre-configured model mapping information related to the association between models and index values, the index of the specific model ID(s) is selected, and this information is then sent to the UE so that the UE can also identify the matched model(s) to be paired for model operation. When selecting the index of model ID(s) from the model mapping, multiple indexes can also be chosen for transmission so that multiple combinations of model IDs can be used if necessary.
In addition, transmission of multiple indexes can reduce signaling overhead for consecutive model operation sequences of lifecycle management and/or model switching with the configured pattern. In other words, for different model operation requests based on lifecycle management, various combinations or pairs of model IDs can be supported using model mapping tables. For example, model training and inference operations will require different model ID pairs for activation depending on the model execution environment. In the model mapping repository, a number of different model IDs are updated with varying versions. Therefore, both the network model ID set and the UE model ID set are updated with the latest versions in a periodic, semi-static, or non-periodic way. When there are multiple UEs with the same model ID set, those UEs can be grouped together for model operation in communication with the same gNB.
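Sending several indexes at once can be sketched as packing multiple 2-bit indexes into a single indication. The bit layout below is an assumption for illustration, not taken from any specification:

```python
# Hypothetical packing of multiple 2-bit mapping indexes into one L1/L2
# indication, so a sequence of model operations is signaled at once.
INDEX_BITS = 2

def pack_indexes(indexes):
    """Concatenate 2-bit indexes; first index ends up in the most
    significant bits of the packed value."""
    value = 0
    for idx in indexes:
        value = (value << INDEX_BITS) | (idx & (2 ** INDEX_BITS - 1))
    return value

def unpack_indexes(value, count):
    """Recover `count` indexes from a packed value, in original order."""
    mask = 2 ** INDEX_BITS - 1
    out = [(value >> (INDEX_BITS * i)) & mask for i in range(count)]
    return list(reversed(out))
```

With this layout, a pattern of three model switches costs 6 bits instead of three separate indications.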
Figure 5 shows a flow chart of the UE behavior for receiving a model ID. Based on the index of the specific model ID(s) transmitted by the gNB, the UE receives this information and configures the necessary model operation setup with the indicated model ID(s) for activation. In addition, sidelink UEs can use their own model ID set and collaborate with other UEs or the associated gNB for collaborative model operation using the model mapping pairs.
Figure 6 shows a flow chart of the model ID search and selection procedure. Based on the pre-configured association information between models and index values using mapping tables, the requested model ID(s) is first searched. In case there is no available model ID to select, the mapping table needs to be updated to include the requested model ID(s) by communicating with the model matching repository. Alternatively, the closest matching model(s) can be used for selection, and the range of model ID pairs is quantized based on the pre-configured criteria.
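The search-with-fallback step can be sketched as follows. The model IDs and the "closeness" measure (a version-number gap) are invented for illustration; the actual matching criteria would be pre-configured as described above:

```python
# Hypothetical sketch of the Figure 6 procedure: look up the requested
# model ID; if absent, fall back to the closest matching entry.
def select_model(requested, table):
    """table maps model IDs (e.g., 'Model-v3') to an integer version."""
    if requested in table:
        return requested
    # Closest-match fallback: smallest version gap to the requested model.
    req_version = int(requested.rsplit("-v", 1)[1])
    return min(table, key=lambda mid: abs(table[mid] - req_version))

# Invented repository contents.
table = {"Model-v1": 1, "Model-v3": 3, "Model-v7": 7}
```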
Figure 7 shows a signaling flow of model ID selection by the gNB. Available model ID list information (based on the pre-configured model mapping) is identified from UE signaling about ML capability information. Based on the pre-configured mapping table, the gNB determines the selection of model ID pair(s), and the UE is then informed of the determined model ID(s) to be paired via gNB signaling. A UE model status report is sent to confirm the selected model ID(s) for use. In case the UE selects different model ID(s) not requested by the gNB, the selected UE model ID(s) should be reported to the gNB.
Figure 8 shows a signaling flow of model ID selection by the UE. Based on the pre-configured mapping information, the UE determines the selection of model ID pair(s) and then sends ML capability information and model ID information to the gNB so that the gNB configures the indicated model ID(s) and sends a re-configuration information update. In case the gNB selects other model ID(s) to be used by the UE, the recommended model ID information can also be sent to the UE when the re-configuration information update is signaled.
Figure 9 shows a block diagram of the model set structure. A model set can comprise multiple layers, such as a superset model, subset models, and individual models. One or more individual models represent each subset model, and a set of subset models is contained in the superset model. Model ID values are assigned to each level of models: superset, subset, and individual. The superset-based ML model structure is applied to both the network side and the UE side. Model attribute data contains information about the model-related {configuration, metadata, parameter set}. Model functionality contains information about the list of model functions based on the specified applications. Depending on the applied model operation with the specific models, two or more model IDs can be selected for collaborative model operation (non-standalone type), and a single model ID can be used for independent model operation (standalone type).
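The layered structure of Figure 9 can be sketched with three nested record types. The dotted ID scheme ("S1", "S1.A", "S1.A.1") is an invented illustration of assigning an ID value at each level:

```python
# Hypothetical sketch of the superset/subset/individual model hierarchy
# of Figure 9, with a model ID assigned at every level.
from dataclasses import dataclass, field

@dataclass
class IndividualModel:
    model_id: str                       # e.g., "S1.A.1"

@dataclass
class SubsetModel:
    model_id: str                       # e.g., "S1.A"
    individuals: list = field(default_factory=list)

@dataclass
class SupersetModel:
    model_id: str                       # e.g., "S1"
    subsets: list = field(default_factory=list)

superset = SupersetModel("S1", [
    SubsetModel("S1.A", [IndividualModel("S1.A.1"), IndividualModel("S1.A.2")]),
    SubsetModel("S1.B", [IndividualModel("S1.B.1")]),
])
```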
In addition, there can be two types of model IDs: Type-1 and Type-2. Type-1 is a common model ID for standard-compatible models, where model IDs for the network and UE are registered in the same repository of the model ID list, and/or model IDs have the minimum required feature set and baseline capability based on the given model attributes and functionality. Type-2 is a dedicated model ID for standard/proprietary models, where model IDs for the network and UE are registered in separate repositories of independent superset model ID lists, and/or model IDs have an enhanced feature set and specific capability with higher complexity than the common model, based on the given model attributes and functionality.
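The Type-1/Type-2 distinction reduces to which repository a model ID is registered in. A minimal sketch, with invented repository contents:

```python
# Hypothetical classification of model IDs: Type-1 (common) IDs live in a
# shared repository; Type-2 (dedicated) IDs live in separate per-side
# repositories. All entries are illustrative placeholders.
COMMON_REPOSITORY = {"Common-1", "Common-2"}
NETWORK_REPOSITORY = {"NW-Dedicated-1"}
UE_REPOSITORY = {"UE-Dedicated-1"}

def model_id_type(model_id):
    if model_id in COMMON_REPOSITORY:
        return "Type-1"
    if model_id in NETWORK_REPOSITORY | UE_REPOSITORY:
        return "Type-2"
    raise KeyError(f"unregistered model ID: {model_id}")
```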
Compared with using an unconstrained ML model set without model-ID-based mapping, the proposed scheme shows enhancements in signaling overhead and latency. Regarding signaling overhead, by using model-ID-based mapping pairs (which are pre-configured for best matching and shared on both sides), only an index value needs to be signaled for model selection or switching. In addition, configuration-related signaling becomes simpler, as the proposed model ID structure fields contain information that can be represented through the model ID indication via mapping. Otherwise, when model transfer occurs, the whole model information (e.g., model parameters, model metadata, configuration info, etc.) might have to be signaled fully or partially. Regarding latency, when a model operation such as switching or selection is executed, the gNB and/or UE can look up the mapping model repository for the requested operations so that the best matching model(s) can be found in advance. Therefore, execution latency is expected to be reduced, although the latency reduction level might depend on the model operation use case, such as model switching, selection, transfer, or update. In addition, when there are multiple UEs with the same model ID set, those UEs can be grouped together for model operation, and further improvement in signaling overhead/latency performance is expected.
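The overhead argument can be made concrete with a back-of-the-envelope comparison between signaling a pre-configured index and transferring full model information. All sizes below are invented for illustration:

```python
# Hypothetical overhead comparison: a 2-bit index per model operation
# versus signaling full model information (parameters, metadata, config).
INDEX_BITS = 2
FULL_MODEL_BITS = 8 * 1024 * 100     # e.g., ~100 kB per model transfer

def overhead_ratio(num_operations):
    """Index-based signaling bits as a fraction of full-transfer bits."""
    return (num_operations * INDEX_BITS) / (num_operations * FULL_MODEL_BITS)
```

Under these assumed sizes the per-operation ratio is constant and tiny, which is the intuition behind the claimed overhead reduction; real gains depend on the actual model sizes and signaling formats.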

Claims

1. A method of model identification signaling, comprising pre-configuring an association between models and index values, built to generate a different number of mapping tables using model IDs based on a finite set of ML models for a specific use case/application, wherein
• On both the network and UE sides, a set of pre-configured ML models is available for each specific use case or functionality with a set of different feature sets (e.g., a network model set for multiple models located in the network and a UE model set for multiple models located in the UE).
• For model operation, the UE receives an indication of the association between models and index values from the network (e.g., gNB), and this indication is sent through either a system information message or a dedicated RRC message.
• The UE further receives the index (e.g., an indication of a model ID) from the gNB through a dedicated signaling message (L1/L2) and then simply checks the index indicated in the dedicated signaling message so that the corresponding model(s) can be applied to operation accordingly.
• Based on one or multiple mapping tables reflecting the association between models and index values, the determined index information with model ID(s) can be provided through dedicated signaling to a group of UEs that use the same model and/or to each individual UE.
2. The method according to claim 1, wherein model ID mapping pairs can be one-to-one and/or one-to-many mappings, where
• Association of models and index values is represented by model mapping table with model ID list.
• Index information can be indicated by specific bits based on the pre-configured size of the table. In implementation, the size of model mapping pairs with model IDs can vary flexibly and index values can be pre-configured together with different numbers of bits across multiple model mapping tables.
• The selected model ID(s) on one side can be paired with any combination of multiple model IDs that are available on the other side for a model mapping pair.
• Network-UE model mapping pairs indicate the matching relationship between network model set and UE model set and are differently configured for each model function ID as each model function ID can have multiple pairs of network-UE model IDs.
• Multiple mapping tables can also be pre-configured to support various model functionalities and/or applications as well as different vendors and designed to be optimized for model mapping pairs that consist of the combinations of network model IDs and UE model IDs.
• Model operation can then be setup in RAN between network and UE based on the matched model IDs from mapping table that is generated and updated with the configurable content using historical data from network or server with model repository. Different number of multiple mapping tables can be generated for varying model execution environment.
3. The method according to any previous claim, wherein multiple functional blocks, such as the network/UE model set, model attribute data, and model functionality, serve as input to generate the model mapping table, and the model mapping table can be updated by communicating with the model matching repository. Specifically, the exemplary steps are as follows.
• The service type/application and environmental conditions for model operation are identified to determine the model functionality and the associated feature set with attribute data.
• Then, one or more candidate models associated with each feature subset are listed.
• The performance of the candidate models is evaluated based on the given metrics so as to generate the association of the selected optimal models with model IDs. The matched model pairs can then be generated as output.
4. A method of selecting the index of specific model ID(s) based on the preconfigured model mapping information related to association between models and index values, where the selected index information is sent to UE so that UE can also identify the matched model(s) to be paired for model operation, where
• When selecting the index of model ID(s) from the model mapping, multiple indexes can also be chosen for transmission so that multiple combinations of model IDs can be used if necessary.
• For different model operation requests based on lifecycle management, various combinations or pairs of model IDs are supported using model mapping tables.
• In model mapping repository, a number of different model IDs are updated with varying versions.
• Both network model ID set and UE model ID set are updated with the latest versions in periodic, semi-static, or non-periodic way.
• When there are multiple UEs with the same model ID set, those UEs can be grouped together for model operation in communication with the same gNB.
5. The method according to any previous claim, wherein, based on the index of the specific model ID(s) transmitted by the gNB, the UE receives the index information so as to configure the necessary model operation setup with the indicated model ID(s) for activation.
• For example, sidelink UEs can use their own model ID set and collaborate with other UEs or the associated gNB for collaborative model operation using the model mapping pairs.
6. The method according to any previous claim, wherein, based on the pre-configured association information between models and index values using mapping tables, the requested model ID(s) is first searched, where
• In some cases, when there is no available model ID to select, the mapping table needs to be updated to include the requested model ID(s) by communicating with the model matching repository. In other cases, the closest matching model(s) can be used for selection and the range of model ID pairs is quantized based on the pre-configured criteria.
7. A method of identifying available model ID list information based on the pre-configured model mapping from UE signaling about ML capability information, wherein
• Based on the pre-configured mapping table, the gNB determines the selection of model ID pair(s), and the UE is then informed of the determined model ID(s) to be paired via gNB signaling.
• UE model status report is sent to confirm the selected model ID(s) for use.
• In case the UE selects different model ID(s) not requested by the gNB, the selected UE model ID(s) should be reported to the gNB.
8. The method according to any previous claim, wherein, based on the pre-configured mapping information, the UE determines the selection of model ID pair(s) and then sends ML capability information and model ID information to the gNB so that the gNB configures the indicated model ID(s) and sends a re-configuration information update, where
• In some cases, gNB can also select other model ID(s) to be used in UE and the recommended model ID information can also be sent to UE when re-configuration information update is signaled.
9. A method of forming model set structure, where model set can comprise multiple layers such as superset model, subset models, and individual models, where
• One or more individual models represent each subset model, and a set of subset models is contained in the superset model.
• Model ID values are assigned to each level of models for superset, subset and individual models.
• Superset based ML model structure is applied to both network side and UE side.
• Model attribute data contains information about the model-related {configuration, metadata, parameter set}.
• Model functionality contains information about the list of model functions based on the specified applications.
• Depending on the applied model operation with the specific models, two or more model IDs can be selected for collaborative model operation (non- standalone type) and a single model ID can be used for independent model operation (standalone type).
10. A method of categorizing model IDs into two types, Type-1 and Type-2, where
• Type-1 is a common model ID for standard-compatible models, where model IDs for the network and UE are registered in the same repository of the model ID list, and/or model IDs have the minimum required feature set and baseline capability based on the given model attributes and functionality.
• Type-2 is a dedicated model ID for standard/proprietary models, where model IDs for the network and UE are registered in separate repositories of independent superset model ID lists, and/or model IDs have an enhanced feature set and specific capability with higher complexity than the common model, based on the given model attributes and functionality.
11. Apparatus for advanced ML signaling for a RAN in a wireless communication system, the apparatus comprising a wireless transceiver and a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement the steps of claims 1 to 10.
12. User Equipment comprising an apparatus according to claim 11.
13. gNB comprising an apparatus according to claim 11.
14. Wireless communication system for model identification signaling, wherein the wireless communication system comprises user equipment according to claim 12 and a gNB according to claim 13, whereby the user equipment and the gNB each comprise a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement the steps of claims 1 to 10.
PCT/EP2024/053564 2023-02-16 2024-02-13 Method of model identification signaling Pending WO2024170535A1 (en)

Priority Applications (1)

- CN202480012823.8A (publication CN120642395A) — Priority date: 2023-02-16; Filing date: 2024-02-13; Title: Model identification signaling method

Applications Claiming Priority (2)

- DE102023201347.9 — Priority date: 2023-02-16
- DE102023201347 — Priority date: 2023-02-16

Publications (1)

- WO2024170535A1 (en) — Publication date: 2024-08-22

Family ID: 89942614

Family Applications (1)

- PCT/EP2024/053564 — Priority date: 2023-02-16; Filing date: 2024-02-13; Title: Method of model identification signaling

Country Status (2)

- CN (1): CN120642395A (en)
- WO (1): WO2024170535A1 (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220108214A1 (en) 2020-08-13 2022-04-07 Electronics And Telecommunications Research Institute Management method of machine learning model for network data analytics function device
WO2022087930A1 (en) * 2020-10-28 2022-05-05 华为技术有限公司 Model configuration method and apparatus
US20220150727A1 (en) * 2020-11-11 2022-05-12 Qualcomm Incorporated Machine learning model sharing between wireless nodes
WO2022161624A1 (en) 2021-01-29 2022-08-04 Telefonaktiebolaget Lm Ericsson (Publ) Candidate machine learning model identification and selection
US20220337487A1 (en) 2020-01-03 2022-10-20 Huawei Technologies Co., Ltd. Network entity for determining a model for digitally analyzing input data
WO2022228666A1 (en) 2021-04-28 2022-11-03 Telefonaktiebolaget Lm Ericsson (Publ) Signaling of training policies
WO2022236807A1 (en) * 2021-05-14 2022-11-17 Qualcomm Incorporated Model status monitoring, reporting, and fallback in machine learning applications
US20220400373A1 (en) 2021-06-15 2022-12-15 Qualcomm Incorporated Machine learning model configuration in wireless networks
WO2022258149A1 (en) 2021-06-08 2022-12-15 Huawei Technologies Co., Ltd. User device, server device, method and system for privacy preserving model training
WO2022265549A1 (en) * 2021-06-18 2022-12-22 Telefonaktiebolaget Lm Ericsson (Publ) Method and arrangements for supporting value prediction by a wireless device served by a wireless communication network
US20230006913A1 (en) * 2021-06-28 2023-01-05 Samsung Electronics Co., Ltd. Method and apparatus for channel environment classification
WO2023277780A1 (en) 2021-07-01 2023-01-05 Telefonaktiebolaget Lm Ericsson (Publ) Enabling downloadable ai
WO2023010302A1 (en) * 2021-08-04 2023-02-09 Qualcomm Incorporated Machine learning group switching


Also Published As

Publication number Publication date
CN120642395A (en) 2025-09-12


Legal Events

- 121 — EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document: 24705415; Country: EP; Kind code: A1)
- WWE — WIPO information: entry into national phase (Ref document: 202480012823.8; Country: CN)
- WWP — WIPO information: published in national office (Ref document: 202480012823.8; Country: CN)
- WWE — WIPO information: entry into national phase (Ref document: 2024705415; Country: EP)
- NENP — Non-entry into the national phase (Ref country code: DE)