
WO2025095847A1 - Model pairing for two-sided AI/ML models - Google Patents

Model pairing for two-sided AI/ML models

Info

Publication number
WO2025095847A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
pairing
sided
node
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/SE2024/050939
Other languages
English (en)
Inventor
Jingya Li
Ilmiawan SHUBHI
Yufei Blankenship
Henrik RYDÉN
Reem KARAKI
Emil RINGH
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Telefonaktiebolaget LM Ericsson AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget LM Ericsson AB
Publication of WO2025095847A1

Classifications

    • G - PHYSICS
        • G06 - COMPUTING OR CALCULATING; COUNTING
            • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 20/00 - Machine learning
                • G06N 3/00 - Computing arrangements based on biological models
                    • G06N 3/02 - Neural networks
                        • G06N 3/04 - Architecture, e.g. interconnection topology
                            • G06N 3/045 - Combinations of networks
                                • G06N 3/0455 - Auto-encoder networks; Encoder-decoder networks
                        • G06N 3/08 - Learning methods
    • H - ELECTRICITY
        • H04 - ELECTRIC COMMUNICATION TECHNIQUE
            • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
                • H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
                    • H04L 41/16 - using machine learning or artificial intelligence
                    • H04L 41/40 - using virtualisation of network functions or resources, e.g. SDN or NFV entities
                • H04L 43/00 - Arrangements for monitoring or testing data switching networks
                    • H04L 43/08 - Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters

Definitions

  • The present disclosure relates generally to communication systems and, more specifically, to methods and systems for pairing two-sided Artificial Intelligence (AI)/Machine Learning (ML) models, including for AI/ML model lifecycle management (LCM).
  • Example use cases include: using autoencoders for Channel State Information (CSI) compression to reduce feedback overhead and improve channel prediction accuracy; using deep neural networks to classify Line-of-Sight (LOS) and Non-LOS (NLOS) conditions to enhance positioning accuracy; using reinforcement learning for beam selection at the network (NW) side and/or the User Equipment (UE) side to reduce signaling overhead and beam alignment latency; and using deep reinforcement learning to learn an optimal precoding policy for complex Multiple Input Multiple Output (MIMO) precoding problems.
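  • As an illustrative sketch of the autoencoder use case above, the following Python/PyTorch example splits a two-sided model into a UE-side encoder (the first-part model) and a network-side decoder (the second-part model); because the two parts may be trained and delivered separately, ensuring a compatible encoder/decoder combination is what the pairing mechanisms described below address. The architecture, layer sizes, and names are assumptions made for illustration, not taken from the disclosure.

        # Minimal sketch (assumed architecture) of a two-sided autoencoder for CSI
        # compression: the encoder (first-part model) runs at the UE, and the
        # decoder (second-part model) runs at the network side.
        import torch
        import torch.nn as nn

        CSI_DIM = 256      # assumed size of the flattened channel estimate
        LATENT_DIM = 32    # assumed size of the compressed CSI feedback

        class UeSideEncoder(nn.Module):
            """First-part model: compresses measured CSI into a feedback vector."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(CSI_DIM, 128), nn.ReLU(),
                    nn.Linear(128, LATENT_DIM),
                )

            def forward(self, csi):
                return self.net(csi)

        class NwSideDecoder(nn.Module):
            """Second-part model: reconstructs CSI from the reported feedback."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                    nn.Linear(128, CSI_DIM),
                )

            def forward(self, latent):
                return self.net(latent)

        if __name__ == "__main__":
            encoder, decoder = UeSideEncoder(), NwSideDecoder()
            csi = torch.randn(1, CSI_DIM)      # stand-in for a measured channel
            feedback = encoder(csi)            # reported over the air (quantization omitted)
            reconstructed = decoder(feedback)  # recovered at the network side
            print(nn.functional.mse_loss(reconstructed, csi).item())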
  • a UE comprising processing circuitry is configured to perform a method that comprises sending, to a network node, pairing information for a two-sided AI/ML model-based capability.
  • the method may further comprise receiving, from the network node, configuration information that indicates whether a first-part model can be selected or used at the UE for the two-sided AI/ML model-based capability.
  • the method may further comprise selecting or deselecting the first-part model based on the received configuration information.
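  • To make this exchange concrete, the following hypothetical Python sketch models the pairing information a UE might report and the configuration it might receive back; the message and field names (PairingInfo, pairing_ids, allowed, and so on) are illustrative assumptions, not signaling defined by the disclosure or by 3GPP.

        # Hypothetical sketch of the UE-side pairing exchange described above.
        # All names are illustrative assumptions, not standardized signaling.
        from dataclasses import dataclass, field

        @dataclass
        class PairingInfo:
            """Sent UE -> network: first-part models the UE supports for a capability."""
            capability: str                                # e.g. "csi-compression"
            pairing_ids: list[int] = field(default_factory=list)

        @dataclass
        class PairingConfiguration:
            """Sent network -> UE: whether a first-part model may be selected or used."""
            pairing_id: int
            allowed: bool

        def apply_configuration(active: set[int], cfg: PairingConfiguration) -> set[int]:
            """UE selects or deselects the first-part model per the configuration."""
            if cfg.allowed:
                active.add(cfg.pairing_id)
            else:
                active.discard(cfg.pairing_id)
            return active

        if __name__ == "__main__":
            report = PairingInfo("csi-compression", pairing_ids=[1, 2, 7])
            print(report)                                  # sent to the network node
            cfg = PairingConfiguration(pairing_id=2, allowed=True)
            print(apply_configuration(set(), cfg))         # {2}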
  • a UE comprising processing circuitry is configured to perform a method that comprises receiving, from a network node, pairing information for at least one first-part model or second-part model of a two-sided AI/ML model-based capability, where first-part models of the two-sided AI/ML model-based capability are supported at the UE.
  • a network node comprising processing circuitry is configured to perform a method that comprises receiving, from a user equipment (UE), pairing information for first-part models performed on the UE or the network node.
  • a network node comprising processing circuitry is configured to perform a method that comprises sending, to a user equipment (UE), pairing information for a two-sided AI/ML model-based capability.
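  • Viewed together, the network-side role amounts to matching the pairing identifiers reported by the UE against the second-part models supported at the network, and then configuring the UE accordingly. The short sketch below illustrates only that matching step; the names and structures are illustrative assumptions rather than the disclosed procedure.

        # Hypothetical sketch of network-side pairing: intersect the pairing IDs
        # reported by the UE with those of the locally supported second-part models.
        def select_pairings(ue_pairing_ids: list[int],
                            nw_supported_ids: set[int]) -> list[int]:
            """Return pairing IDs usable by both sides, keeping the UE's order."""
            return [pid for pid in ue_pairing_ids if pid in nw_supported_ids]

        if __name__ == "__main__":
            ue_ids = [3, 1, 7]        # first-part models supported at the UE
            nw_ids = {1, 2, 7}        # second-part models supported at the network
            print(select_pairings(ue_ids, nw_ids))   # prints [1, 7]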
  • Figure 1 illustrates an example of a communication system in accordance with some embodiments.
  • Figure 2 illustrates an exemplary user equipment in accordance with some embodiments.
  • Figure 3 illustrates an exemplary network node in accordance with some embodiments.
  • Figure 4 is a block diagram of an exemplary host, which may be an embodiment of the host of Figure 1, in accordance with various aspects described herein.
  • Figure 5 is a block diagram illustrating an exemplary virtualization environment in which functions implemented by some embodiments may be virtualized.
  • Figure 6 illustrates a communication diagram of an exemplary host communicating via a network node with a UE over a partially wireless connection in accordance with some embodiments.
  • Figure 7 illustrates an AI/ML model lifecycle management process in accordance with some embodiments.
  • Figure 8 illustrates a functional framework for AI/ML model Lifecycle Management with different Network-User Equipment (NW-UE) collaboration levels in physical layer use cases in accordance with some embodiments.
  • Figure 9 illustrates an autoencoder-based two-sided AI/ML model used for CSI reporting in accordance with some embodiments.
  • Figure 10 illustrates a diagram showing an example of a two-sided AI/ML model in accordance with some embodiments.
  • Figure 16 illustrates a flowchart showing another method performed by a UE for artificial intelligence / machine learning (AI/ML) model pairing in accordance with some embodiments.
  • The inventive subject matter is considered to include all possible combinations of the disclosed elements. As such, if one embodiment comprises elements A, B, and C, and another embodiment comprises elements B and D, then the inventive subject matter is also considered to include the other remaining combinations of A, B, C, or D, even if not explicitly discussed herein.
  • The transitional term “comprising” means to have as parts or members, or to be those parts or members. As used herein, the transitional term “comprising” is inclusive or open-ended and does not exclude additional, unrecited elements or method steps.
  • the communication system 100 includes a telecommunication network 102 that includes an access network 104, such as a radio access network (RAN), and a core network 106, which includes one or more core network nodes 108.
  • the access network 104 includes one or more access network nodes, such as network nodes 110a and 110b (one or more of which may be generally referred to as network nodes 110), or any other similar 3rd Generation Partnership Project (3GPP) access nodes or non-3GPP access points.
  • a network node is not necessarily limited to an implementation in which a radio portion and a baseband portion are supplied and integrated by a single vendor.
  • the telecommunication network 102 includes one or more Open-RAN (ORAN) network nodes.
  • An ORAN network node is a node in the telecommunication network 102 that supports an ORAN specification (e.g., a specification published by the O-RAN Alliance, or any similar organization) and may operate alone or together with other nodes to implement one or more functionalities of any node in the telecommunication network 102, including one or more network nodes 110 and/or core network nodes 108.
  • Examples of an ORAN network node include an open radio unit (O-RU), an open distributed unit (O-DU), an open central unit (O-CU), including an O-CU control plane (O-CU-CP) or an O-CU user plane (O-CU-UP), a RAN intelligent controller (near-real time or non-real time) hosting software or software plug-ins, such as a near-real time control application (e.g., xApp) or a non-real time control application (e.g., rApp), or any combination thereof (the adjective “open” designating support of an ORAN specification).
  • the network node may support a specification by, for example, supporting an interface defined by the ORAN specification, such as an A1, F1, W1, E1, E2, X2, or Xn interface, an open fronthaul user plane interface, or an open fronthaul management plane interface.
  • an ORAN access node may be a logical node in a physical node.
  • an ORAN network node may be implemented in a virtualization environment (described further below) in which one or more network functions are virtualized.
  • the virtualization environment may include an O-Cloud computing platform orchestrated by a Service Management and Orchestration Framework via an O-2 interface defined by the O-RAN Alliance or comparable technologies.
  • the network nodes 110 facilitate direct or indirect connection of user equipment (UE), such as by connecting UEs 112a, 112b, 112c, and 112d (one or more of which may be generally referred to as UEs 112) to the core network 106 over one or more wireless connections.
  • Example wireless communications over a wireless connection include transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information without the use of wires, cables, or other material conductors.
  • the communication system 100 may include any number of wired or wireless networks, network nodes, UEs, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections.
  • the communication system 100 may include and/or interface with any type of communication, telecommunication, data, cellular, radio network, and/or other similar type of system.
  • the UEs 112 may be any of a wide variety of communication devices, including wireless devices arranged, configured, and/or operable to communicate wirelessly with the network nodes 110 and other communication devices.
  • the network nodes 110 are arranged, capable, configured, and/or operable to communicate directly or indirectly with the UEs 112 and/or with other network nodes or equipment in the telecommunication network 102 to enable and/or provide network access, such as wireless network access, and/or to perform other functions, such as administration in the telecommunication network 102.
  • the core network 106 connects the network nodes 110 to one or more hosts, such as host 116. These connections may be direct or indirect via one or more intermediary networks or devices. In other examples, network nodes may be directly coupled to hosts.
  • the core network 106 includes one or more core network nodes (e.g., core network node 108) that are structured with hardware and software components. Features of these components may be substantially similar to those described with respect to the UEs, network nodes, and/or hosts, such that the descriptions thereof are generally applicable to the corresponding components of the core network node 108.
  • Example core network nodes include functions of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), and/or a User Plane Function (UPF).
  • the host 116 may be under the ownership or control of a service provider other than an operator or provider of the access network 104 and/or the telecommunication network 102, and may be operated by the service provider or on behalf of the service provider.
  • the host 116 may host a variety of applications to provide one or more services. Examples of such applications include live and pre-recorded audio/video content, data collection services such as retrieving and compiling data on various ambient conditions detected by a plurality of UEs, analytics functionality, social media, functions for controlling or otherwise interacting with remote devices, functions for an alarm and surveillance center, or any other such function performed by a server.
  • the communication system 100 of Figure 1 enables connectivity between the UEs, network nodes, and hosts.
  • the communication system may be configured to operate according to predefined rules or procedures, such as specific standards that include, but are not limited to: Global System for Mobile Communications (GSM); Universal Mobile Telecommunications System (UMTS); Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, 5G standards, or any applicable future generation standard (e.g., 6G); wireless local area network (WLAN) standards, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards (WiFi); and/or any other appropriate wireless communication standard, such as Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave, Near Field Communication (NFC), ZigBee, LiFi, and/or any low-power wide-area network (LPWAN) standards such as LoRa and Sigfox.
  • the telecommunication network 102 is a cellular network that implements 3GPP standardized features. Accordingly, the telecommunication network 102 may support network slicing to provide different logical networks to different devices that are connected to the telecommunication network 102. For example, the telecommunication network 102 may provide Ultra Reliable Low Latency Communication (URLLC) services to some UEs, while providing Enhanced Mobile Broadband (eMBB) services to other UEs, and/or Massive Machine Type Communication (mMTC)/Massive IoT services to yet further UEs.
  • the UEs 112 are configured to transmit and/or receive information without direct human interaction.
  • a UE may be designed to transmit information to the access network 104 on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the access network 104.
  • a UE may be configured for operating in single- or multi-RAT or multi-standard mode.
  • a UE may operate with any one or combination of Wi-Fi, NR (New Radio) and LTE, e.g. being configured for multi-radio dual connectivity (MR-DC), such as E-UTRAN (Evolved-UMTS Terrestrial Radio Access Network) New Radio - Dual Connectivity (EN-DC).
  • the hub 114 communicates with the access network 104 to facilitate indirect communication between one or more UEs (e.g., UE 112c and/or 112d) and network nodes (e.g., network node 110b).
  • the hub 114 may be a controller, router, content source and analytics, or any of the other communication devices described herein regarding UEs.
  • the hub 114 may be a broadband router enabling access to the core network 106 for the UEs.
  • the hub 114 may be a controller that sends commands or instructions to one or more actuators in the UEs.
  • the hub 114 may be a data collector that acts as temporary storage for UE data and, in some embodiments, may perform analysis or other processing of the data.
  • the hub 114 may be a content source. For example, for a UE that is a VR headset, display, loudspeaker or other media delivery device, the hub 114 may retrieve VR assets, video, audio, or other media or data related to sensory information via a network node, which the hub 114 then provides to the UE either directly, after performing local processing, and/or after adding additional local content.
  • the hub 114 acts as a proxy server or orchestrator for the UEs, in particular if one or more of the UEs are low energy IoT devices.
  • the hub 114 may have a constant/persistent or intermittent connection to the network node 110b.
  • the hub 114 may also allow for a different communication scheme and/or schedule between the hub 114 and UEs (e.g., UE 112c and/or 112d), and between the hub 114 and the core network 106.
  • the hub 114 is connected to the core network 106 and/or one or more UEs via a wired connection.
  • the hub 114 may be configured to connect to an M2M service provider over the access network 104 and/or to another UE over a direct connection.
  • UEs may establish a wireless connection with the network nodes 110 while still connected via the hub 114 via a wired or wireless connection.
  • the hub 114 may be a dedicated hub - that is, a hub whose primary function is to route communications to/from the UEs from/to the network node 110b.
  • the hub 114 may be a non-dedicated hub - that is, a device which is capable of operating to route communications between the UEs and network node 110b, but which is additionally capable of operating as a communication start and/or end point for certain data channels.
  • FIG. 2 shows a UE 200 in accordance with some embodiments.
  • a UE refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other UEs.
  • Examples of a UE include, but are not limited to, a smart phone, mobile phone, cell phone, voice over IP (VoIP) phone, wireless local loop phone, desktop computer, personal digital assistant (PDA), wireless cameras, gaming console or device, music storage device, playback appliance, wearable terminal device, wireless endpoint, mobile station, tablet, laptop, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart device, wireless customer-premise equipment (CPE), vehicle, vehicle-mounted or vehicle embedded/integrated wireless device, etc.
  • Examples also include UEs identified by the 3rd Generation Partnership Project (3GPP), including a narrow band internet of things (NB-IoT) UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE.
  • a UE may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, Dedicated Short-Range Communication (DSRC), vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), or vehicle-to-everything (V2X).
  • a UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device.
  • a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller).
  • a UE may represent a device that is not intended for sale to, or operation by, an end user but which may be associated with or operated for the benefit of a user (e.g., a smart power meter).
  • the UE 200 includes processing circuitry 202 that is operatively coupled via a bus 204 to an input/output interface 206, a power source 208, a memory 210, a communication interface 212, and/or any other component, or any combination thereof.
  • Certain UEs may utilize all or a subset of the components shown in Figure 2. The level of integration between the components may vary from one UE to another UE. Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc.
  • the processing circuitry 202 is configured to process instructions and data and may be configured to implement any sequential state machine operative to execute instructions stored as machine-readable computer programs in the memory 210.
  • the processing circuitry 202 may be implemented as one or more hardware-implemented state machines (e.g., in discrete logic, field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), etc.); programmable logic together with appropriate firmware; one or more stored computer programs, general-purpose processors, such as a microprocessor or digital signal processor (DSP), together with appropriate software; or any combination of the above.
  • the processing circuitry 202 may include multiple central processing units (CPUs).
  • the input/output interface 206 may be configured to provide an interface or interfaces to an input device, output device, or one or more input and/or output devices.
  • a sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, a biometric sensor, etc., or any combination thereof.
  • An output device may use the same type of interface port as an input device. For example, a Universal Serial Bus (USB) port may be used to provide an input device and an output device.
  • the power source 208 is structured as a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic device, or power cell, may be used.
  • the power source 208 may further include power circuitry for delivering power from the power source 208 itself, and/or an external power source, to the various parts of the UE 200 via input circuitry or an interface such as an electrical power cable. Delivering power may be, for example, for charging of the power source 208.
  • Power circuitry may perform any formatting, converting, or other modification to the power from the power source 208 to make the power suitable for the respective components of the UE 200 to which power is supplied.
  • the memory 210 may be or be configured to include memory such as random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, hard disks, removable cartridges, flash drives, and so forth.
  • the memory 210 includes one or more application programs 214, such as an operating system, web browser application, a widget, gadget engine, or other application, and corresponding data 216.
  • the memory 210 may store, for use by the UE 200, any of a variety of various operating systems or combinations of operating systems.
  • the memory 210 may be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as a tamper resistant module in the form of a universal integrated circuit card (UICC) including one or more subscriber identity modules (SIMs), such as a USIM and/or ISIM, other memory, or any combination thereof.
  • the UICC may for example be an embedded UICC (eUICC), integrated UICC (iUICC) or a removable UICC commonly known as ‘SIM card.’
  • the memory 210 may allow the UE 200 to access instructions, application programs and the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data.
  • An article of manufacture, such as one utilizing a communication system may be tangibly embodied as or in the memory 210, which may be or comprise a device-readable storage medium.
  • the processing circuitry 202 may be configured to communicate with an access network or other network using the communication interface 212.
  • the communication interface 212 may comprise one or more communication subsystems and may include or be communicatively coupled to an antenna 222.
  • the communication interface 212 may include one or more transceivers used to communicate, such as by communicating with one or more remote transceivers of another device capable of wireless communication (e.g., another UE or a network node in an access network).
  • Each transceiver may include a transmitter 218 and/or a receiver 220 appropriate to provide network communications (e.g., optical, electrical, frequency allocations, and so forth).
  • the transmitter 218 and receiver 220 may be coupled to one or more antennas (e.g., antenna 222) and may share circuit components, software or firmware, or alternatively be implemented separately.
  • communication functions of the communication interface 212 may include cellular communication, Wi-Fi communication, LPWAN communication, data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof.
  • GPS global positioning system
  • Communications may be implemented in accordance with one or more communication protocols and/or standards, such as IEEE 802.11, Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), GSM, LTE, New Radio (NR), UMTS, WiMax, Ethernet, transmission control protocol/internet protocol (TCP/IP), synchronous optical networking (SONET), Asynchronous Transfer Mode (ATM), QUIC, Hypertext Transfer Protocol (HTTP), and so forth.
  • a UE may provide an output of data captured by its sensors, through its communication interface 212, via a wireless connection to a network node.
  • Data captured by sensors of a UE can be communicated through a wireless connection to a network node via another UE.
  • the output may be periodic (e.g., once every 15 minutes if it reports the sensed temperature), random (e.g., to even out the load from reporting from several sensors), in response to a triggering event (e.g., when moisture is detected an alert is sent), in response to a request (e.g., a user initiated request), or a continuous stream (e.g., a live video feed of a patient).
  • a UE comprises an actuator, a motor, or a switch, related to a communication interface configured to receive wireless input from a network node via a wireless connection.
  • the states of the actuator, the motor, or the switch may change.
  • the UE may comprise, for example, a motor that adjusts the control surfaces or rotors of a drone in flight according to the received input, or a robotic arm that performs a medical procedure according to the received input.
  • a UE may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another UE and/or a network node.
  • the UE may in this case be an M2M device, which may in a 3GPP context be referred to as an MTC device.
  • the UE may implement the 3GPP NB-IoT standard.
  • a UE may represent a vehicle, such as a car, a bus, a truck, a ship, or an airplane, or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.
  • a first UE might be or be integrated in a drone and provide the drone's speed information (obtained through a speed sensor) to a second UE that is a remote controller operating the drone.
  • the first UE may adjust the throttle on the drone (e.g. by controlling an actuator) to increase or decrease the drone’s speed.
  • the first and/or the second UE can also include more than one of the functionalities described above.
  • a UE might comprise the sensor and the actuator, and handle communication of data for both the speed sensor and the actuators.
  • FIG. 3 shows a network node 300 in accordance with some embodiments.
  • network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment, in a telecommunication network.
  • network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)), O-RAN nodes or components of an O-RAN node (e.g., O-RU, O-DU, O-CU).
  • Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and so, depending on the provided amount of coverage, may be referred to as femto base stations, pico base stations, micro base stations, or macro base stations.
  • a base station may be a relay node or a relay donor node controlling a relay.
  • a network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units, distributed units (e.g., in an O-RAN access node) and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio.
  • Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).
  • network nodes include multiple transmission point (multi-TRP) 5G access nodes, multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), Operation and Maintenance (O&M) nodes, Operations Support System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g., Evolved Serving Mobile Location Centers (E-SMLCs)), and/or Minimization of Drive Tests (MDTs).
  • the network node 300 includes a processing circuitry 302, a memory 304, a communication interface 306, and a power source 308.
  • the network node 300 may be composed of multiple physically separate components (e.g., a NodeB component and an RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components.
  • in certain scenarios in which the network node 300 comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components may be shared among several network nodes.
  • a single RNC may control multiple NodeBs.
  • each unique NodeB and RNC pair may in some instances be considered a single separate network node.
  • the network node 300 may be configured to support multiple radio access technologies (RATs).
  • some components may be duplicated (e.g., separate memory 304 for different RATs) and some components may be reused (e.g., a same antenna 310 may be shared by different RATs).
  • the network node 300 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 300, for example GSM, WCDMA, LTE, NR, WiFi, Zigbee, Z-wave, LoRaWAN, Radio Frequency Identification (RFID) or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 300.
  • the processing circuitry 302 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software, and/or encoded logic operable, either alone or in conjunction with other network node 300 components, such as the memory 304, to provide network node 300 functionality.
  • the processing circuitry 302 includes a system on a chip (SOC). In some embodiments, the processing circuitry 302 includes one or more of radio frequency (RF) transceiver circuitry 312 and baseband processing circuitry 314. In some embodiments, the radio frequency (RF) transceiver circuitry 312 and the baseband processing circuitry 314 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 312 and baseband processing circuitry 314 may be on the same chip or set of chips, boards, or units.
  • the memory 304 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 302.
  • the memory 304 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 302 and utilized by the network node 300.
  • the memory 304 may be used to store any calculations made by the processing circuitry 302 and/or any data received via the communication interface 306.
  • the processing circuitry 302 and memory 304 are integrated.
  • the communication interface 306 is used in wired or wireless communication of signaling and/or data between a network node, access network, and/or UE. As illustrated, the communication interface 306 comprises port(s)/terminal(s) 316 to send and receive data, for example to and from a network over a wired connection.
  • the communication interface 306 also includes radio front-end circuitry 318 that may be coupled to, or in certain embodiments a part of, the antenna 310. Radio front-end circuitry 318 comprises filters 320 and amplifiers 322. The radio front-end circuitry 318 may be connected to an antenna 310 and processing circuitry 302. The radio front-end circuitry may be configured to condition signals communicated between antenna 310 and processing circuitry 302.
  • the radio front-end circuitry 318 may receive digital data that is to be sent out to other network nodes or UEs via a wireless connection.
  • the radio front-end circuitry 318 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 320 and/or amplifiers 322.
  • the radio signal may then be transmitted via the antenna 310.
  • the antenna 310 may collect radio signals which are then converted into digital data by the radio front-end circuitry 318.
  • the digital data may be passed to the processing circuitry 302.
  • the communication interface may comprise different components and/or different combinations of components.
  • the network node 300 may not include separate radio front-end circuitry 318; instead, the processing circuitry 302 includes radio front-end circuitry and is connected to the antenna 310.
  • all or some of the RF transceiver circuitry 312 is part of the communication interface 306.
  • the communication interface 306 includes one or more ports or terminals 316, the radio front-end circuitry 318, and the RF transceiver circuitry 312, as part of a radio unit (not shown), and the communication interface 306 communicates with the baseband processing circuitry 314, which is part of a digital unit (not shown).
  • the antenna 310 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals.
  • the antenna 310 may be coupled to the radio front-end circuitry 318 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly.
  • the antenna 310 is separate from the network node 300 and connectable to the network node 300 through an interface or port.
  • the antenna 310, communication interface 306, and/or the processing circuitry 302 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by the network node. Any information, data and/or signals may be received from a UE, another network node and/or any other network equipment. Similarly, the antenna 310, the communication interface 306, and/or the processing circuitry 302 may be configured to perform any transmitting operations described herein as being performed by the network node. Any information, data and/or signals may be transmitted to a UE, another network node and/or any other network equipment.
  • the power source 308 provides power to the various components of network node 300 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component).
  • the power source 308 may further comprise, or be coupled to, power management circuitry to supply the components of the network node 300 with power for performing the functionality described herein.
  • the network node 300 may be connectable to an external power source (e.g., the power grid, an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry of the power source 308.
  • the power source 308 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry. The battery may provide backup power should the external power source fail.
  • Embodiments of the network node 300 may include additional components beyond those shown in Figure 3 for providing certain aspects of the network node’s functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein.
  • the network node 300 may include user interface equipment to allow input of information into the network node 300 and to allow output of information from the network node 300. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for the network node 300.
  • FIG 4 is a block diagram of a host 400, which may be an embodiment of the host 116 of Figure 1, in accordance with various aspects described herein.
  • the host 400 may be or comprise various combinations of hardware and/or software, including a standalone server, a blade server, a cloud-implemented server, a distributed server, a virtual machine, a container, or processing resources in a server farm.
  • the host 400 may provide one or more services to one or more UEs.
  • the host 400 includes processing circuitry 402 that is operatively coupled via a bus 404 to an input/output interface 406, a network interface 408, a power source 410, and a memory 412.
  • Other components may be included in other embodiments. Features of these components may be substantially similar to those described with respect to the devices of previous figures, such as Figures 2 and 3, such that the descriptions thereof are generally applicable to the corresponding components of host 400.
  • the memory 412 may include one or more computer programs including one or more host application programs 414 and data 416, which may include user data, e.g., data generated by a UE for the host 400 or data generated by the host 400 for a UE.
  • Embodiments of the host 400 may utilize only a subset or all of the components shown.
  • the host application programs 414 may be implemented in a container-based architecture and may provide support for video codecs (e.g., Versatile Video Coding (VVC), High Efficiency Video Coding (HEVC), Advanced Video Coding (AVC), MPEG, VP9) and audio codecs (e.g., FLAC, Advanced Audio Coding (AAC), MPEG, G.711), including transcoding for multiple different classes, types, or implementations of UEs (e.g., handsets, desktop computers, wearable display systems, heads-up display systems).
  • the host application programs 414 may also provide for user authentication and licensing checks and may periodically report health, routes, and content availability to a central node, such as a device in or on the edge of a core network. Accordingly, the host 400 may select and/or indicate a different host for over-the-top services for a UE.
  • the host application programs 414 may support various protocols, such as the HTTP Live Streaming (HLS) protocol, Real-Time Messaging Protocol (RTMP), Real-Time Streaming Protocol (RTSP), Dynamic Adaptive Streaming over HTTP (MPEG-DASH), etc.
  • FIG. 5 is a block diagram illustrating a virtualization environment 500 in which functions implemented by some embodiments may be virtualized.
  • virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources.
  • virtualization can be applied to any device described herein, or components thereof, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components.
  • Some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines (VMs) implemented in one or more virtual environments 500 hosted by one or more of hardware nodes, such as a hardware computing device that operates as a network node, UE, core network node, or host.
  • the virtualization environment 500 includes components defined by the O-RAN Alliance, such as an O-Cloud environment orchestrated by a Service Management and Orchestration Framework via an O-2 interface.
  • Applications 502 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) are run in the virtualization environment 500 to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.
  • Hardware 504 includes processing circuitry, memory that stores software and/or instructions executable by hardware processing circuitry, and/or other hardware devices as described herein, such as a network interface, input/output interface, and so forth.
  • Software may be executed by the processing circuitry to instantiate one or more virtualization layers 506 (also referred to as hypervisors or virtual machine monitors (VMMs)), provide VMs 508a and 508b (one or more of which may be generally referred to as VMs 508), and/or perform any of the functions, features and/or benefits described in relation with some embodiments described herein.
  • the virtualization layer 506 may present a virtual operating platform that appears like networking hardware to the VMs 508.
  • the VMs 508 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer 506. Different embodiments of the instance of a virtual appliance 502 may be implemented on one or more of VMs 508, and the implementations may be made in different ways. Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.
  • a VM 508 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine.
  • Each of the VMs 508, and that part of the hardware 504 that executes that VM, be it hardware dedicated to that VM and/or hardware shared by that VM with others of the VMs, forms a separate virtual network element.
  • a virtual network function is responsible for handling specific network functions that run in one or more VMs 508 on top of the hardware 504 and corresponds to the application 502.
  • Hardware 504 may be implemented in a standalone network node with generic or specific components. Hardware 504 may implement some functions via virtualization. Alternatively, hardware 504 may be part of a larger cluster of hardware (e.g. such as in a data center or CPE) where many hardware nodes work together and are managed via management and orchestration 510, which, among others, oversees lifecycle management of applications 502.
  • hardware 504 is coupled to one or more radio units that each include one or more transmitters and one or more receivers that may be coupled to one or more antennas. Radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station.
  • some signaling can be provided with the use of a control system 512 which may alternatively be used for communication between hardware nodes and radio units.
  • Figure 6 shows a communication diagram of a host 602 communicating via a network node 604 with a UE 606 over a partially wireless connection in accordance with some embodiments.
  • the UE 606 includes hardware and software, which is stored in or accessible by UE 606 and executable by the UE’s processing circuitry.
  • the software includes a client application, such as a web browser or operator-specific “app” that may be operable to provide a service to a human or non-human user via UE 606 with the support of the host 602.
  • an executing host application may communicate with the executing client application via the OTT connection 650 terminating at the UE 606 and host 602.
  • the UE's client application may receive request data from the host's host application and provide user data in response to the request data.
  • the OTT connection 650 may transfer both the request data and the user data.
  • the UE's client application may interact with the user to generate the user data that it provides to the host application through the OTT connection 650.
  • the OTT connection 650 may extend via a connection 660 between the host 602 and the network node 604 and via a wireless connection 670 between the network node 604 and the UE 606 to provide the connection between the host 602 and the UE 606.
  • the connection 660 and wireless connection 670, over which the OTT connection 650 may be provided, have been drawn abstractly to illustrate the communication between the host 602 and the UE 606 via the network node 604, without explicit reference to any intermediary devices and the precise routing of messages via these devices.
  • the host 602 provides user data, which may be performed by executing a host application.
  • the user data is associated with a particular human user interacting with the UE 606.
  • the user data is associated with a UE 606 that shares data with the host 602 without explicit human interaction.
  • the host 602 initiates a transmission carrying the user data towards the UE 606.
  • the host 602 may initiate the transmission responsive to a request transmitted by the UE 606.
  • the request may be caused by human interaction with the UE 606 or by operation of the client application executing on the UE 606.
  • the transmission may pass via the network node 604, in accordance with the teachings of the embodiments described throughout this disclosure. Accordingly, in step 612, the network node 604 transmits to the UE 606 the user data that was carried in the transmission that the host 602 initiated, in accordance with the teachings of the embodiments described throughout this disclosure. In step 614, the UE 606 receives the user data carried in the transmission, which may be performed by a client application executed on the UE 606 associated with the host application executed by the host 602.
  • the UE 606 executes a client application which provides user data to the host 602.
  • the user data may be provided in reaction or response to the data received from the host 602.
  • the UE 606 may provide user data, which may be performed by executing the client application.
  • the client application may further consider user input received from the user via an input/output interface of the UE 606. Regardless of the specific manner in which the user data was provided, the UE 606 initiates, in step 618, transmission of the user data towards the host 602 via the network node 604.
  • the network node 604 receives user data from the UE 606 and initiates transmission of the received user data towards the host 602.
  • the host 602 receives the user data carried in the transmission initiated by the UE 606.
  • One or more of the various embodiments improve the performance of OTT services provided to the UE 606 using the OTT connection 650, in which the wireless connection 670 forms the last segment. More precisely, the teachings of these embodiments may improve, e.g., the data rate, latency, and power consumption, and thereby provide benefits such as reduced user waiting time, relaxed restrictions on file size, improved content resolution, better responsiveness, extended battery lifetime, and more efficient transmission of the target CSI.
  • factory status information may be collected and analyzed by the host 602.
  • the host 602 may process audio and video data which may have been retrieved from a UE for use in creating maps.
  • the host 602 may collect and analyze real-time data to assist in controlling vehicle congestion (e.g., controlling traffic lights).
  • the host 602 may store surveillance video uploaded by a UE.
  • the host 602 may store or control access to media content such as video, audio, VR or AR which it can broadcast, multicast or unicast to UEs.
  • the host 602 may be used for energy pricing, remote control of non-time critical electrical load to balance power generation needs, location services, presentation services (such as compiling diagrams etc. from data collected from remote devices), or any other function of collecting, retrieving, storing, analyzing and/or transmitting data.
  • a measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve.
  • the measurement procedure and/or the network functionality for reconfiguring the OTT connection may be implemented in software and hardware of the host 602 and/or UE 606.
  • sensors (not shown) may be deployed in or in association with other devices through which the OTT connection 650 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software may compute or estimate the monitored quantities.
  • the reconfiguring of the OTT connection 650 may include message format, retransmission settings, preferred routing etc.; the reconfiguring need not directly alter the operation of the network node 604. Such procedures and functionalities may be known and practiced in the art.
  • measurements may involve proprietary UE signaling that facilitates measurements of throughput, propagation times, latency and the like, by the host 602.
  • the measurements may be implemented in that software causes messages to be transmitted, in particular empty or ‘dummy’ messages, using the OTT connection 650 while monitoring propagation times, errors, etc.
  • FIG. 7 illustrates an AI/ML model lifecycle management (LCM) process 700 in accordance with some embodiments.
  • Developing an AI/ML model involves multiple steps. The actual training of the AI/ML model is just one step in the training pipeline.
  • a part of AI development is the lifecycle management of the AI/ML model.
  • the LCM process includes a training pipeline 710, an inference pipeline 730, and their interactions.
  • the model's LCM process 700 is performed by an AI model development system.
  • the LCM process 700 comprises a training or re-training pipeline, an inference pipeline, a model deployment process that transforms the trained or re-trained AI/ML model into the inference pipeline, and a drift detection process that detects drifts in model operations.
  • a training or re-training pipeline 710 may include the following functional blocks: data ingestion 712, data pre-processing 714, model training 716, model evaluation 718, and model registration 720.
  • In the data ingestion functional block 712, the AI model development system gathers raw data or training data from a data storage. After this functional block, there may also be another functional block that controls the validity of the gathered data.
  • In the data pre-processing functional block 714, the AI model development system performs feature engineering on the gathered data. For example, data normalization, or a required data transformation, may be applied to the gathered data.
  • In the model training functional block 716, the actual model training is performed by the AI model development system.
  • In the model evaluation functional block 718, the AI model development system evaluates performance against specific model baselines through benchmarking. The iterative steps of model training and model evaluation continue until an acceptable level of performance is achieved.
  • In the model registration functional block 720, the AI model development system registers the AI model. The registration may also include any corresponding AI metadata, which provides information on how the AI model was developed.
  • AI model evaluation performance results may also be registered. For example, at a deployment stage 725, the trained (or re-trained) AI model may become part of the inference pipeline.
  • An inference pipeline 730 may include the following functional blocks: data ingestion 732, data pre-processing 734, model operational 736, and data and model monitoring 738.
  • In the data ingestion functional block 732, the AI model development system gathers raw data or inference data from a data storage.
  • The data pre-processing functional block 734 is similar to the data pre-processing functional block 714 in the training pipeline.
  • In the model operational functional block 736, the AI model development system uses the trained and deployed models in operational mode.
  • In the data and model monitoring functional block 738, the AI model development system validates the inference data to assess whether its distribution aligns well with the training data.
  • The model output is monitored for detecting any performance or operational drifts. For example, at a drift detection stage 740, the outputs may be used to inform about any drifts in the model operations.
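  • As an illustration of how the training pipeline 710 and inference pipeline 730 described above might fit together in code, the following minimal Python sketch maps each functional block to a plain function. The function names, the synthetic data, the baseline value, and the drift heuristic are all illustrative assumptions rather than a definitive implementation.

```python
import numpy as np

MODEL_REGISTRY = {}   # stands in for the model registration block 720

def ingest(n=256, rng=None):
    # data ingestion (712/732): here, synthetic (x, y) pairs
    rng = rng or np.random.default_rng(0)
    x = rng.normal(size=(n, 4))
    y = x @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=n)
    return x, y

def preprocess(x, stats=None):
    # data pre-processing (714/734): normalization as feature engineering
    if stats is None:
        stats = (x.mean(axis=0), x.std(axis=0) + 1e-9)
    mu, sd = stats
    return (x - mu) / sd, stats

def train(x, y):
    # model training (716): a least-squares fit stands in for AI/ML training
    w, *_ = np.linalg.lstsq(x, y, rcond=None)
    return w

def evaluate(w, x, y, baseline_mse=0.1):
    # model evaluation (718): benchmark against an (illustrative) baseline
    mse = float(np.mean((x @ w - y) ** 2))
    return mse, mse <= baseline_mse

def training_pipeline():
    x, y = ingest()
    xn, stats = preprocess(x)
    w = train(xn, y)
    mse, ok = evaluate(w, xn, y)
    if ok:  # model registration (720), with metadata on how it was developed
        MODEL_REGISTRY["demo"] = {"weights": w, "stats": stats, "mse": mse}
    return ok

def inference_pipeline():
    entry = MODEL_REGISTRY["demo"]               # deployment stage 725
    x, y = ingest(rng=np.random.default_rng(1))  # fresh inference data
    xn, _ = preprocess(x, entry["stats"])
    pred = xn @ entry["weights"]                 # model operational (736)
    # data and model monitoring (738): crude drift check on output error
    drift = float(np.mean((pred - y) ** 2)) > 2 * entry["mse"]
    return pred, drift

if __name__ == "__main__":
    assert training_pipeline()
    _, drift_detected = inference_pipeline()
    print("drift detected:", drift_detected)
```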
  • FIG. 8 illustrates a functional framework for AI/ML model Lifecycle Management (LCM) 800 with different Network-User Equipment (NW-UE) collaboration levels in physical layer use cases in accordance with some embodiments.
  • an AI/ML model Lifecycle Management 800 system with different Network-User Equipment (NW-UE) collaboration levels in physical layer use cases may include the following functional blocks: data collection 810, model training operations 820, model deployment operations 830, model inference operations 840, and model monitoring operations 850.
  • model LCM 800 may comprise a data collection configuration for data collection 810, e.g., to gather raw data or training data from a data storage, and to deploy configured collected data for model training operations 820, model inference operations 840, and model monitoring operations 850.
  • Model LCM 800 may further comprise a model training configuration for model training operations 820.
  • the model training operations 820 may comprise using collected data to perform one or more model training operations to train the AI/ML model.
  • Model LCM 800 may further comprise model deployment/update and/or transfer configurations for model deployment operations 830.
  • the model deployment operations 830 may include operations for deploying the trained AI/ML models in operational mode.
  • Model LCM 800 may further comprise model selection, activation, deactivation, model switching and fallback operations for configuring model inference operations 840.
  • the model inference operations 840 may include generating inference data using the deployed AI/ML model.
  • Model LCM 800 may further comprise a model monitoring configuration for model monitoring operations 850.
  • model monitoring operations 850 may include validating the inference data to assess if its distribution aligns well with the training data.
  • model output may be monitored for detecting any performance or operational drifts.
  • model monitoring outputs may be used to inform the model LCM 800 about any drifts in the model operations.
  • AI/ML models for the NR air interface can be classified into two types: one-sided AI/ML models and two-sided AI/ML models.
  • a one-sided AI/ML model can either be a UE-sided model, where the inference is performed entirely at the UE, or a NW-sided model, where the inference is performed entirely at the NW, e.g., by a network node.
  • the two-sided AI/ML model involves paired models where joint inference is performed across the UE and the NW.
  • the initial part of the inference is executed by the UE, and the subsequent part is completed by a network node, e.g., nodes including, but not limited to, a base station in NR, referred to as a next generation Node B (“gNodeB” or “gNB”).
  • the process may also be reversed, with the initial part of the inference performed by the gNB and the subsequent part completed by the UE.
  • Figure 9 illustrates an autoencoder-based two-sided AI/ML model used for CSI reporting in accordance with some embodiments.
  • The encoder 901, which is on the UE side of the two-sided autoencoder model, is operated on the UE to compress the estimated wireless channel.
  • The output of the encoder, which is the compressed wireless channel information estimate, is reported from the UE to a network node such as a gNB.
  • the network then uses decoder 902, which is on the NW side of the two-sided autoencoder model, to reconstruct the estimated wireless channel information.
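  • A minimal sketch of the split inference of Figure 9 is given below, using a linear (PCA-like) autoencoder purely to keep the example small; the dimensions, names, and training method are illustrative assumptions. The UE-side encoder compresses the channel estimate into a low-dimensional report, and the NW-side decoder reconstructs it.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(1000, 32)) @ rng.normal(size=(32, 32)) * 0.2  # correlated channel estimates
mu = H.mean(0)

# "joint training": a shared principal subspace plays the role of the
# jointly trained encoder/decoder pair
_, _, Vt = np.linalg.svd(H - mu, full_matrices=False)
W_enc = Vt[:8].T   # UE-side encoder weights: 32 channel coeffs -> 8 latents
W_dec = Vt[:8]     # NW-side decoder weights: 8 latents -> 32 coeffs

def ue_encode(h):
    # runs at the UE (encoder 901): compress the channel estimate
    return (h - mu) @ W_enc

def nw_decode(z):
    # runs at the gNB (decoder 902): reconstruct the channel estimate
    return z @ W_dec + mu

h = H[0]
z = ue_encode(h)          # reported over the air (in practice quantized)
h_hat = nw_decode(z)
print("32 -> 8 compression, NMSE:",
      float(np.sum((h - h_hat) ** 2) / np.sum(h ** 2)))
```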
  • the first level of collaboration is where no collaboration exists between network nodes and UEs.
  • a proprietary ML model operating within the existing standard air-interface is applied at one end of the communication chain, such as the UE side.
  • the model lifecycle management tasks such as model selection and training, model monitoring, model retraining, and model update, are performed at this node without inter-node assistance, such as assistance information provided by the network node.
  • the second level of collaboration is where limited collaboration exists between network nodes and UEs for one-sided models. In this scenario, an ML model operates at one end of the communication chain, such as the UE side.
  • When performing its model life cycle management tasks, the node receives limited assistance from one or more nodes at the other end of the communication chain, such as a gNB.
  • the limited assistance may include, for example, training and/or retraining of the AI/ML model, model update, model monitoring, and model selection, fallback, and/or switching, etc.
  • the third level of collaboration is where there are joint ML operations between network nodes and UEs for two-sided models.
  • the AI/ML model is divided into two parts, with one part located on the NW side, and the other part on the UE side. Consequently, the AI/ML model requires joint inference between the NW and UE.
  • the AI/ML model life cycle management involves both ends of the communication chain.
  • the model training process for one side can be transparent to the other side (e.g., the NW needs no information on how the UE side trains a UE model), and the responsibility for model LCM is clearly on the side that implements the functionality for making model inferences.
  • NW-UE collaboration refers to configuring transmissions and reports to assist in data collection for model training, inference, and monitoring.
  • the model training process may require sharing of data and/or model information and/or training related information, from one side to the other side since the input and output of a two-sided model can reside, e.g., within different vendors’ domain.
  • Type 1: Joint training of the two-sided model at a single side/entity, e.g., the UE side or the NW side.
  • For example, a two-sided model (UE-part model and NW-part model) may be trained at a single side, e.g., the NW side (e.g., by a NW vendor), and the UE-part of the trained model (e.g., an encoder for the AE-based CSI compression use case) is then transferred from the NW side to the UE side.
  • Type 2: Joint training of the two-sided model at the NW side and UE side, respectively. Joint training can be done simultaneously at the network and UE sides or be performed in a sequential way.
  • the UE-part model (trained at the UE side) and the NW-part model (trained at the NW side) are jointly trained in the same loop through exchanging forward propagation values and backward propagation values between the NW and the UE.
  • Alternatively, one side (UE side or NW side) may train its model first, after which the other side trains its part of the model by using an Application Programming Interface (API).
  • the NW side trains its model first (thus also obtaining what is sometimes known as a nominal encoder, but that is not used at the UE), and then the UE side can train its encoder by using an API.
  • The API would accept, e.g., a CSI report and a target CSI, both of which are derived by the UE side based on the data (note that the CSI report is generated, at least partially, by the UE encoder under training and may thus not be an efficient CSI report at each step in the training).
  • The API would return gradients of the decoder and a loss function with respect to the variables in the CSI report, thus allowing the UE to train an encoder that is matched to the decoder, as the sketch below illustrates.
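  • The following Python sketch illustrates this API-based training under simple assumptions (linear encoder/decoder, a mean-squared-error loss, and illustrative function names): the NW-side API returns only the loss and the gradient with respect to the CSI report, so the UE can train a matched encoder without ever seeing the decoder weights.

```python
import numpy as np

rng = np.random.default_rng(0)
W_dec = rng.normal(size=(4, 16)) * 0.3  # NW-side decoder, trained first and fixed

def nw_training_api(csi_report, target_csi):
    # NW-side training API: returns the loss and dLoss/d(csi_report);
    # the decoder weights themselves never cross the vendor boundary
    err = csi_report @ W_dec - target_csi
    loss = float(np.mean(err ** 2))
    grad_z = 2.0 * err @ W_dec.T / err.size
    return loss, grad_z

# UE side: train a linear encoder (16 -> 4) against the fixed decoder
W_enc = rng.normal(size=(16, 4)) * 0.1
X = rng.normal(size=(512, 16))           # data held by the UE side
loss0, _ = nw_training_api(X @ W_enc, X)
for _ in range(200):
    z = X @ W_enc                        # CSI reports under training
    loss, grad_z = nw_training_api(z, X) # only loss/gradients are exchanged
    W_enc -= 0.5 * (X.T @ grad_z)        # chain rule: dL/dW_enc = X^T dL/dz
print("loss before/after UE-side training:", round(loss0, 3), round(loss, 3))
```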
  • Type 3: Sequential training starting with UE side training, or sequential training starting with NW side training, where the UE-part model and the NW-part model are trained by the UE side and the NW side, respectively.
  • For example, the NW can first train the UE-part and NW-part models jointly using training data (e.g., target CSI samples), and then share a dataset comprising a UE-part model output (e.g., latent space variables) associated with the ground-truth/labels (e.g., target CSI) for the UE side to train its UE-part model (e.g., an encoder), as sketched below.
  • the NW can share a dataset comprising gradients of the NW-part model (e.g., the gradients of the decoder) together with loss function value indicating the discrepancy of the NW-part model output (e.g., the decoder output) and the ground-truth/labels (e.g., target CSI) with respect to the UE-part model output (e.g., latent space variables), based on which the UE side trains its UE-part model (e.g., an encoder).
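  • A minimal sketch of the first alternative above (NW-first joint training followed by sharing of (target CSI, latent) pairs) is given below, assuming linear models throughout; all names and dimensions are illustrative.

```python
import numpy as np
rng = np.random.default_rng(0)

# --- NW side: joint training of both parts (PCA-style linear autoencoder) ---
H = rng.normal(size=(1000, 4)) @ rng.normal(size=(4, 16))  # target CSI samples
mu = H.mean(0)
_, _, Vt = np.linalg.svd(H - mu, full_matrices=False)
nw_encoder, nw_decoder = Vt[:4].T, Vt[:4]   # jointly trained encoder/decoder

# --- dataset shared from the NW side to the UE side ---
# (UE-part model output, i.e., latent variables, paired with target CSI)
shared_targets = H[:500]
shared_latents = (shared_targets - mu) @ nw_encoder

# --- UE side: fit its own UE-part encoder by supervised regression ---
W_ue, *_ = np.linalg.lstsq(shared_targets, shared_latents, rcond=None)

# the resulting UE encoder pairs with the NW decoder at inference time
z = H[500:] @ W_ue                # UE-side inference (CSI report content)
h_hat = z @ nw_decoder + mu       # NW-side reconstruction
print("NMSE:", float(np.sum((H[500:] - h_hat) ** 2) / np.sum(H[500:] ** 2)))
```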
  • UE-part models and NW-part models can be trained to work together, including in scenarios that require inter-vendor collaboration.
  • UE-part models and NW-part models can be trained to work together based on the following options/sub-options:
  • Option 1: A fully standardized reference model (structure + parameters).
  • Option 2: Standardized dataset.
  • Option 3: A standardized reference model structure + parameter exchange between the NW side and the UE side.
  • Option 3a: Parameters received at the UE or UE side go through offline engineering at the UE side (e.g., UE-side OTT server), e.g., potential re-training, re-development of a different model, and/or offline testing.
  • Option 3a-1: Model/parameters exchanged from the NW side to the UE side are the CSI generation part.
  • Option 3a-2: Model/parameters exchanged from the NW side to the UE side are the CSI reconstruction part.
  • Option 3a-3: Model/parameters exchanged from the NW side to the UE side are both the CSI generation part and the CSI reconstruction part.
  • Option 3b: Parameters received at the UE are directly used for inference at the UE without offline engineering, potentially with on-device operations.
  • Option 4: Standardized data/dataset format + dataset exchange between the NW side and the UE side.
  • Option 4-1: Dataset exchanged from the NW side to the UE side comprises target CSI and CSI feedback.
  • Option 4-2: Dataset exchanged from the NW side to the UE side comprises CSI feedback and reconstructed target CSI.
  • Option 4-3: Dataset exchanged from the NW side to the UE side comprises target CSI, CSI feedback, and reconstructed target CSI.
  • Option 5: Standardized model format + reference model exchange between the NW side and the UE side.
  • Option 5a: Model received at the UE or UE side goes through offline engineering at the UE side (e.g., UE-side OTT server), e.g., potential re-training, re-development of a different model, and/or offline testing.
  • Option 5a-1: Model/parameters exchanged from the NW side to the UE side are the CSI generation part.
  • Option 5a-2: Model/parameters exchanged from the NW side to the UE side are the CSI reconstruction part.
  • Option 5a-3: Model/parameters exchanged from the NW side to the UE side are both the CSI generation part and the CSI reconstruction part.
  • Option 5b: Model received at the UE is directly used for inference at the UE without offline engineering, potentially with on-device operations.
  • the network can indicate activation/deactivation/fallback/switching of AI/ML functionality via 3GPP signaling (e.g., Radio Resource Control (RRC), Medium Access Control-Control Element (MAC-CE), and/or Downlink Control Information (DCI)).
  • AI/ML models may not be identifiable at the network, and the UE may perform model-level LCM. Whether and how much awareness/interaction the NW side should have about model-level LCM can vary in such scenarios.
  • An AI/ML-enabled feature is a feature where AI/ML may be used.
  • a UE may have one or more AI/ML models (i.e., multiple AI/ML models) for a given functionality.
  • functionality refers to an AI/ML-enabled Feature/feature group enabled by configuration(s), where configurations are supported based on conditions indicated by the UE capability.
  • a functionality-based LCM operates based on at least one configuration of an AI/ML-enabled feature/feature group or specific configurations of an AI/ML-enabled feature/feature group.
  • functionality-based LCM and model-ID based LCM can include mechanisms for the UE to report on applicable functionalities among configured/identified functionalities, where the applicable functionalities may be a subset of all configured/identified functionalities. For example, the UE may report on updates to applicable functionalities/models.
  • model-ID-based LCM models can be identified at the network, and the network and/or UE may activate, deactivate, select, and/or switch individual AI/ML models based on the model ID.
  • model-ID-based LCM can operate based on identified models, where a model may be associated with specific configurations and/or conditions associated with UE capabilities related to an AI/ML-enabled feature/feature group and/or additional conditions (e.g., scenarios, sites, and/or datasets) as determined or identified between the UE side and NW side of the AI/ML model.
  • an AI/ML model identified by a model ID may be logical, and how the AI/ML model maps to physical AI/ML model(s) may be up to implementation.
  • a “logical AI/ML model” may refer herein to a model that is identified and assigned a model ID.
  • a “physical AI/ML model” may refer herein to an implementation of a logical AI/ML model.
  • functionality-based LCM and model-ID based LCM can include mechanisms for the UE to report updates on applicable UE-part/UE-side model(s), where the applicable models may be a subset of all identified models.
  • Type A: The model is identified to the NW (if applicable) and the UE (if applicable) without over-the-air signaling.
  • the model may be assigned with a model ID during the model identification, which may be referred to and/or used in over-the-air signaling after model identification.
  • Type B: The model is identified via over-the-air signaling.
  • Type B1: The model is identified via over-the-air signaling.
  • Model identification is initiated by the UE, and the NW assists with the remaining steps (if any) of the model identification.
  • the model may be assigned a model ID during the model identification.
  • Type B2: Model identification is initiated by the NW, and the UE responds (if applicable) for the remaining steps (if any) of the model identification.
  • the model may be assigned a model ID during the model identification.
  • the UE may indicate supported AI/ML model IDs for a given AI/ML-enabled feature/feature group in a UE capability report. It should be noted that communicating the model identification via a UE capability report is not precluded for the Type Bl and Type B2 scenarios above. Further, the model ID may or may not be globally unique, and different types of model IDs may be created for a single model for various LCM purposes.
  • the model ID, if needed, can be used in a functionality defined in functionality-based LCM for LCM operations.
  • additional conditions can refer to aspects that are assumed for the training of the model but are not a part of the UE capability for the AI/ML-enabled feature/feature group. Such additional conditions are not necessarily specified.
  • Additional conditions can be divided into two categories: NW-side additional conditions and UE-side additional conditions.
  • NW-side additional conditions For inference for UE-side/UE-part models, to ensure consistency between training and inference regarding NW-side additional conditions (if identified), the following options can be taken as potential approaches (when feasible and necessary): Model identification to achieve alignment on the NW-side additional condition between NW side and UE side; Model training at the NW and transfer to the UE, where the model has been trained under the additional condition; information and/or an indication regarding the NW-side additional condition is provided to UE; consistency assisted by monitoring, by the UE and/or the NW, the performance of UE-side/UE-part candidate models/functionalities to select a model/functionality.
  • the AI/ML model is split into two parts, one at the NW side and one at the UE side. These parts may reflect, e.g., CSI compression and decompression (encoding and decoding respectively).
  • the parts need to be trained together for the decoder to understand the variables passed from the encoder to the decoder.
  • the encoder and decoder typically come from different vendors and hence there are many different pairings possible between different NW and UE vendors.
  • a UE may need to support multiple UE-part models to pair with different NW-part models that are jointly trained with different NW vendors.
  • multiple pairs of UE-part and NW-part models can be trained for different NW/UE-side additional conditions, e.g., NW/UE configurations, scenarios, and/or releases.
  • One model pairing problem for two-sided AI/ML model-based use cases involves identifying and configuring the correct AI/ML model to use on either of the UE or NW-sides before a model inference can be initiated (e.g., before CSI reporting using the AI/ML model can begin).
  • Model identification may be required, e.g., when a UE connects to a cell, for different model LCM stages, e.g., to pair UE-part and NW-part models for a two-sided AI/ML model-based capability (i.e., feature/functionality) during model inference, to identify a potential two-sided model malfunction during performance monitoring, and/or to configure model activation, deactivation, switching, and/or fallback on either the UE side or NW side when a potential model malfunction or performance degradation is detected.
  • a UE-part model that is compatible with a NW-part model used at a network node can be selected by monitoring two-sided model performance when different UE-part candidate models are used.
  • the two-sided model performance monitoring may be performed at the UE side and/or the NW side.
  • this method could be challenging to scale as the number of stored models at the UE side or NW side becomes larger.
  • a randomly or pseudo-randomly generated key (e.g., a hash function) comprising a string of letters and/or numbers (e.g., binary numbers) may identify a pair of two-sided models that are intended, trained, and/or designed to operate together, e.g., to provide performance benefits.
  • the key may be generated during training between NW and UE side vendors and may be used during operation with deployed UEs and network nodes, e.g., by signaling between UE and NW side to identify supported models (e.g., UE capabilities), to activate/deactivate paired models, and/or for monitoring purposes.
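  • A minimal sketch of this key-based pairing idea follows; the hashed fields (vendor names, dataset ID, training session) are illustrative assumptions, and any stable key agreed between the vendors at training time would serve the same purpose.

```python
import hashlib

def make_pairing_key(nw_vendor: str, ue_vendor: str, dataset_id: str,
                     training_session: str) -> str:
    # both vendors compute/receive the same key at training time
    material = "|".join([nw_vendor, ue_vendor, dataset_id, training_session])
    return hashlib.sha256(material.encode()).hexdigest()[:16]

key_at_training = make_pairing_key("nwA", "ue1", "ds-2024-03", "sess-7")

# later, a deployed UE reports its supported keys and the NW checks for a match
ue_supported_keys = {key_at_training,
                     make_pairing_key("nwB", "ue1", "ds-x", "s1")}
nw_key_in_use = make_pairing_key("nwA", "ue1", "ds-2024-03", "sess-7")
print("paired models available:", nw_key_in_use in ue_supported_keys)
```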
  • this method does not address how to perform model pairing for different UE-side additional conditions (e.g., when there are updates/changes to the UE-part models deployed at the UE) and/or NW-side additional conditions (e.g., when there are updates/changes to NW configurations, scenarios).
  • Certain embodiments may provide one or more of the following technical advantages.
  • the embodiments herein enable model pairing between two sides of the communication chain for two-sided AI/ML based capabilities (features, functionalities, feature-groups, etc.) e.g., when the first node connects to the second node, or when there are updates and/or changes to the first-part models at the first node, so that the selected model pairs at both sides are compatible for AI/ML model inference operations.
  • the embodiments herein relate to a two-sided AI/ML model pairing solution that can be used for supporting different model life cycle management stages, e.g., to pair the UE-part and NW-part models for a two-sided AI/ML model-based capability (feature/functionality) during model inference, to identify a potential two-sided model malfunction during performance monitoring, and/or to configure model activation, deactivation, switching, and/or fallback on either side when detecting a potential model malfunction or performance degradation.
  • Figures 10-13 illustrate a channel state information (CSI) compression use case as an example of various embodiments.
  • Figures 10-13 illustrate example cases of AI/ML model pairing identifier (ID) design for a CSI-compression use case.
  • the pairing ID is designed based on network (NW)-part models, e.g., based on one or more of a NW training dataset, a NW vendor ID, a NW part model structure, etc.
  • similar methodology can be used for designing a pairing ID based on UE-part models, e.g., based on a UE’s training dataset, a UE vendor ID, a UE-part model structure, etc.
  • FIG. 10 illustrates a diagram showing an example of a two-sided AI/ML model in accordance with some embodiments.
  • Figure 10 shows Model pairing Case 1 where a single NW-part model (decoder) is deployed at each gNB from the same NW vendor for all scenarios/configurations.
  • each NW vendor, gNB vendor A 1002 and gNB vendor B 1004, deploys a single NW-part model (Decoder A 1006 and Decoder B 1008, respectively) for an AI CSI compression feature for each scenario/configuration.
  • Each UE deploys different UE-part models (encoders) for communicating with different gNBs from different NW vendors.
  • Specific UE/chipset vendors may train different UE-part models with the given NW-part model for different UE/chipset releases/types (e.g., Device type X 1014 and Device type Y 1016 for UE vendor 1 1010; UE vendor 2 device 1015 for UE vendor 2 1012) and/or for different scenarios/configurations (e.g., Encoder 1_X1:A 1018 for scenario/configuration 1 and Encoder 1_X2:A 1020 for scenario/configuration 2; Encoder 1_X1:B 1019 for scenario/configuration 1 and Encoder 1_X2:B 1021 for scenario/configuration 2).
  • the pairing ID is designed based on the following principles:
  • Different NW-part models e.g., Decoder A 1006 and Decoder B 1008 that are trained by using different training datasets (e.g., Dataset A 1030 and Dataset B 1040) are assigned with different logical pairing IDs (e.g., Pairing ID A 1022 and Pairing ID B 1024).
  • Different UE-part models trained with the same NW-part model are assigned with the same logical pairing ID as the NW-part model.
  • Different UE-part models, Encoder 1_X1:A 1018, Encoder 1_X2:A 1020, Encoder 1_Y1:A 1026, and Encoder 2:A 1028 trained with the same NW-part model, Decoder A 1006, are assigned with the same logical pairing ID as the NW-part model, Pairing ID A 1022.
  • Similarly, different UE-part models (Encoder 1_X1:B 1019, Encoder 1_X2:B 1021, Encoder 1_Y1:B 1017, and Encoder 2:B 1029) trained with the same NW-part model, Decoder B 1008, are assigned with the same logical pairing ID as the NW-part model, Pairing ID B 1024.
  • a pairing ID shall be designed to enable the differentiation between different pairs of two-sided models that are trained by using different training datasets.
  • the pairing ID can be designed in a variety of ways, e.g., based on a global training dataset ID, a global network vendor ID, a global NW-part model ID, etc.
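  • The following sketch illustrates one such design choice, deriving a logical pairing ID from a NW vendor ID and a training dataset ID (an illustrative combination among the options listed above) and assigning the same ID to every UE-part model trained against the corresponding decoder, as in Figure 10.

```python
from dataclasses import dataclass, field

@dataclass
class PairingRegistry:
    _ids: dict = field(default_factory=dict)

    def pairing_id_for_decoder(self, nw_vendor: str, dataset_id: str) -> str:
        # principle 1: different training datasets -> different logical IDs
        key = (nw_vendor, dataset_id)
        if key not in self._ids:
            self._ids[key] = "PID-%s-%s" % (nw_vendor, dataset_id)
        return self._ids[key]

reg = PairingRegistry()
pid_a = reg.pairing_id_for_decoder("vendorA", "datasetA")   # Pairing ID A
pid_b = reg.pairing_id_for_decoder("vendorB", "datasetB")   # Pairing ID B

# principle 2: UE-part models trained with Decoder A all inherit Pairing ID A
ue_models = {
    "Encoder 1_X1:A": pid_a, "Encoder 1_X2:A": pid_a,
    "Encoder 1_Y1:A": pid_a, "Encoder 2:A": pid_a,
    "Encoder 1_X1:B": pid_b,
}
print(ue_models["Encoder 1_X1:A"] == pid_a)   # True: a compatible pair
```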
  • a model pairing process can be designed by utilizing pairing information to enable the UE to select a CSI generation model(s) that is compatible with a CSI reconstruction model used by the gNB.
  • the first node is a UE
  • the second node is a gNB.
  • the first-part model is a UE-part model (e.g., an encoder or a CSI generation model)
  • the second-part model is a NW-part model (e.g., a decoder or a CSI reconstruction model).
  • In step 1, when connecting to a gNB, a UE reports pairing information (e.g., all its supported pairing IDs), optionally together with other parameters (e.g., supported frequency range, carrier frequency, range of bandwidth, range of the number of antenna ports, range of the number of layers, etc.), to the gNB for the AI CSI compression feature via UE capability reporting.
  • In step 2, the gNB determines the CSI generation model(s) to be used at the UE. If the gNB identifies a pairing ID that matches the pairing ID of its CSI reconstruction model in use, the gNB indicates the selected pairing ID to the UE. Otherwise, no CSI generation model is selected for the UE, and the gNB may, for example, configure the UE to use a fallback/legacy CSI report format.
  • In step 3, if the UE receives an indication of the selected pairing ID from the gNB, the UE selects a CSI generation model associated with the indicated pairing ID for model inference operations. Otherwise, the UE may use a fallback/legacy CSI report format based on the NW configuration. This three-step flow is sketched below.
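  • A minimal sketch of the three-step flow above, with illustrative names and a legacy fallback, might look as follows.

```python
# Sketch of the three-step pairing procedure (Case 1). The gNB matches the
# UE-reported pairing IDs against the pairing ID of the CSI reconstruction
# model in use; the UE then applies the matching encoder or falls back to
# a legacy CSI report format. All names are illustrative assumptions.
from typing import Optional

def gnb_select(ue_reported_pids: set, gnb_pid_in_use: str) -> Optional[str]:
    # step 2: the gNB looks for a pairing ID matching its decoder
    return gnb_pid_in_use if gnb_pid_in_use in ue_reported_pids else None

def ue_apply(selected_pid: Optional[str], ue_models: dict) -> str:
    # step 3: the UE picks the encoder tied to the indicated pairing ID,
    # otherwise it uses the fallback/legacy CSI report format
    return ue_models[selected_pid] if selected_pid else "legacy-CSI-report"

ue_models = {"PID-A": "Encoder 1_X1:A", "PID-B": "Encoder 1_X1:B"}
selected = gnb_select(set(ue_models), gnb_pid_in_use="PID-A")  # steps 1-2
print(ue_apply(selected, ue_models))   # -> Encoder 1_X1:A
```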
  • Figure 11 illustrates a diagram showing another example of a two-sided AI/ML model in accordance with some embodiments.
  • Figure 11 shows Model pairing Case 2 where multiple NW-part models (decoders) from the same NW vendor are trained for different scenarios/configurations, and one gNB hosts a single NW-part model only.
  • a NW vendor designs multiple NW-part models (decoders) for an AI-CSI compression feature for different scenarios/configurations.
  • Different gNBs from the same NW vendor may deploy a different NW-part model, but one gNB hosts a single NW-part model only (e.g., based on the deployment scenario).
  • the pairing ID is designed based on the following principles:
  • Different NW-part models (e.g., Decoder A1 1102, Decoder A2 1104, and Decoder B1 1106) trained by using different training datasets (e.g., Dataset A1 1108, Dataset A2 1110, and Dataset B1 1112) are assigned with different logical pairing IDs (e.g., Pairing ID A1 1114, Pairing ID A2 1116, and Pairing ID B1 1118).
  • Different UE-part models (e.g., Encoder 1_X1:B1 1130, Encoder 1_Y1:B1 1132, and Encoder 2:B1 1134) trained with the same NW-part model are assigned with the same logical pairing ID as for the NW-part model (e.g., Pairing ID B1 1118).
  • a pairing ID shall be designed to enable the differentiation between different pairs of two-sided models that are trained by using different training datasets.
  • the pairing ID of Case 2 can be designed in many ways, e.g., based on a global training dataset ID, a global NW vendor ID and a local scenario/configuration ID and/or a global NW-part model ID.
  • the first node is a gNB (i.e., a network node) and the second node is a UE.
  • the second-part model is a NW-part model (also called a decoder or a CSI reconstruction model) and the first-part model is a UE-part model (also called an encoder or a CSI generation model).
  • In step 1, the gNB sends pairing information (e.g., the supported pairing IDs at the gNB) together with other information (if needed) to the UE for the AI CSI compression feature.
  • the gNB may send the pairing information to an individual UE (e.g., via UE-specific signaling), or a group of UEs (e.g., via multicast or broadcast signaling).
  • In step 2, the UE receives the pairing information (e.g., pairing ID(s)) and other information (if available) from the gNB.
  • The UE compares the received pairing ID(s) to the pairing ID(s) for its stored CSI generation model(s) and selects the CSI generation model(s) that match the received pairing ID(s). If the UE cannot identify a CSI generation model that matches any of the received pairing ID(s), then the UE does not select any CSI generation model. If one or more CSI generation model(s) are selected by the UE, then the UE may signal to the gNB that the pairing of the two-sided model for the AI CSI compression feature is successful, and the two-sided model can be activated.
  • The UE may not need to signal to the gNB the pairing ID selected by the UE. However, if the pairing information contains two or more pairing IDs (i.e., two or more logical models are available for the CSI reconstruction model), the UE may select a preferred pairing ID and signal the preferred selection to the gNB. If no CSI generation model(s) are selected by the UE, then the UE may signal to the gNB that the pairing of the two-sided model for the AI CSI compression feature has failed, and the two-sided model cannot be activated.
  • In step 3, based on the selection results, the UE signals to the gNB whether the two-sided model can be activated; this flow is sketched below.
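  • The UE-side selection and response logic of this Case 2 flow might be sketched as follows; the preference rule (first match) and all names are illustrative assumptions.

```python
def ue_pairing_response(received_pids, stored_encoders):
    # Case 2: the gNB advertises the pairing ID(s) of its hosted decoder(s)
    # and the UE answers whether the two-sided model can be activated
    matches = [pid for pid in received_pids if pid in stored_encoders]
    if not matches:
        # no compatible CSI generation model: pairing has failed
        return {"activate": False}
    # the preferred pairing ID only needs to be signalled when two or
    # more pairing IDs were advertised by the gNB
    preferred = matches[0] if len(received_pids) > 1 else None
    return {"activate": True, "preferred_pid": preferred}

stored = {"PID-B1": "Encoder 1_X1:B1", "PID-A2": "Encoder 1_X2:A2"}
print(ue_pairing_response(["PID-B1"], stored))           # single-ID case
print(ue_pairing_response(["PID-A2", "PID-C"], stored))  # multi-ID case
```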
  • FIG. 12 illustrates a diagram showing another example of a two-sided AI/ML model 1200 in accordance with some embodiments.
  • Figure 12 shows model pairing Case 3(a) where multiple NW-part models (decoders) from the same NW vendor are trained for different scenarios/configurations/AI design choices, and one gNB hosts a single NW-part model only.
  • In Case 3(a), a NW vendor designs multiple NW-part models (decoders) for an AI-CSI compression feature for different scenarios/configurations.
  • a NW vendor may design multiple NW-part models (decoders) using different Al model design choices (e.g., different Al model architectures) for the same scenarios/configuration/dataset. Similar to Case 2, different gNBs deployed by the same NW vendor may implement a different NW-part model, but each gNB hosts one NW-part model.
  • the pairing ID is designed based on the following principles:
  • Different NW-part models (e.g., Decoder A1_1 1202, Decoder A2_1 1207, and Decoder B1_1 1209) trained by using different training datasets (e.g., Dataset A1 1206, Dataset A2 1212, and Dataset B1 1213) are assigned with different logical pairing IDs (e.g., Pairing ID A1_1 1208, Pairing ID A1_2 1210, Pairing ID A2_1 1214, and Pairing ID B1_1 1215).
  • Different UE-part models (Encoder 1_X2:A1_2 1222, Encoder 1_Y1:A1_2 1224, and Encoder 2:A1_2 1226) trained with the same NW-part model (Decoder A1_2 1204) are assigned with the same logical pairing ID as for the NW-part model (e.g., Pairing ID A1_2 1210).
  • Similarly, UE-part models (Encoder 1_X2:A2_1 1228 and Encoder 2:A2_1 1230) trained with the same NW-part model (Decoder A2_1 1207) are assigned with the same logical pairing ID as for the NW-part model (e.g., Pairing ID A2_1 1214).
  • FIG. 13 illustrates a diagram showing another example of a two-sided AI/ML model 1300 in accordance with some embodiments.
  • Figure 13 shows model pairing Case 3(b) where multiple NW-part models (decoders) from the same NW vendor are trained for different scenarios/configurations/AI design choices, and each gNB hosts multiple NW-part models, where each NW-part model is assigned a separate Pairing ID.
  • the pairing ID is designed based on the following principles:
  • Different NW-part models (e.g., Decoder A1_1 1302, Decoder A2_1 1307, and Decoder B1_1 1309) trained by using different training datasets (e.g., Dataset A1 1306, Dataset A2 1312, and Dataset B1 1313) are assigned with different logical pairing IDs (e.g., Pairing ID A1_1 1308, Pairing ID A1_2 1310, Pairing ID A2_1 1314, and Pairing ID B1_1 1315).
  • Different UE-part models (Encoder 1_X2:A1_2 1322, Encoder 1_Y1:A1_2 1324, and Encoder 2:A1_2 1326) trained with the same NW-part model (Decoder A1_2 1304) are assigned with the same logical pairing ID as for the NW-part model (e.g., Pairing ID A1_2 1310).
  • UE-part models (Encoder 1_X2:A2_1 1328 and Encoder 2:A2_1 1330) trained with the same NW-part model (Decoder A2_1 1307) are assigned with the same logical pairing ID as for the NW-part model (e.g., Pairing ID A2_1 1314).
  • UE-part models (Encoder 1_X1:B1_1 1332, Encoder 1_Y1:B1_1 1334, and Encoder 2:B1_1 1336) trained with the same NW-part model (Decoder B1_1 1309) are assigned with the same logical pairing ID as for the NW-part model (e.g., Pairing ID B1_1 1315).
  • a pairing ID shall be designed to enable the differentiation between different pairs of two-sided models that are trained using different training datasets and/or using different AI/ML model design choices (e.g., different model structures).
  • the pairing ID for Case 3 (described in relation to Figures 12-13) cannot be designed just based on dataset ID, since different AI/ML model design choices (e.g., a CNN or transformer-based model structure) may be selected for training different decoders at the NW side.
  • the pairing ID design shall take model design choices into account, e.g., it can be designed based on a global training dataset ID together with a local NW-part model backbone/structure ID, a global NW-part model ID, a global NW vendor ID, a local scenario/configuration ID, and/or a local NW-part model ID.
  • the aspects of ensuring compatibility between UE encoders and NW decoders can be considered as additional conditions for AI/ML model design. Specifically, for consistency from training to inference, the UE and NW should use compatible encoders/decoders.
  • The NW-side additional conditions may include, e.g., scenarios, configurations, NW-part model design choices, etc.
  • One approach to address the NW-side additional conditions is by assigning different pairing IDs for different NW-side additional conditions, e.g., where the pairing IDs are signalled from the NW to UE. This approach would correspond to the approach where the NW-side additional condition is provided to the UE.
  • a pairing ID can indicate the NW-side additional condition-related information (e.g., the scenarios, configurations, and/or AI-ML model design choices of NW-part model), and the UE can select a CSI generation model to use for inference.
  • the NW may not be aware of the updates/changes to the encoder(s) deployed/used at the UE. Methods to solve this issue are discussed below.
  • the UE-side additional conditions may include one or more of the following:
  • One or more new logical CSI generation models associated to one or more newly supported Pairing IDs are added for the UE, where the new CSI-generation model(s) are trained with a specific CSI reconstruction model.
  • For example, Device type Y of UE vendor 1 has added a new CSI generation model called Encoder 1_Y2:A2_1 with the Pairing ID A2_1.
  • One or more deactivated logical CSI generation models start functioning again (e.g., based on UE-side model performance monitoring).
  • For example, the Encoder 1_X1:A1_1 of Device type X of UE vendor 1 starts working again with the Decoder A1_1, corresponding to the Pairing ID A1_1.
  • the CSI generation models mentioned in the above bullets are logical models, where each logical model is associated with a logical Pairing ID.
  • the UE can indicate the additional condition information in the form of valid/invalid/newly added Pairing ID(s) to the NW via additional condition reporting, if needed.
  • the gNB can select a pairing ID that is compatible with the CSI reconstruction model in operation and signal such selection results to the UE.
  • Figure 14 illustrates a flowchart showing a method 1400 performed by a UE for artificial intelligence / machine learning (AI/ML) model pairing in accordance with some embodiments.
  • Figure 14 relates to a pairing information-based method to enable a first node (e.g., UE) to select a first-part model (e.g., UE-part model) that is compatible with a second-part model (e.g., NW-part model) used by a second node (e.g., a network (NW) node) for a two-sided AI/ML based capability (e.g., an AI/ML based feature, functionality, and/or feature group operation).
  • While the NW/second node may deploy multiple second-part models for the two-sided AI/ML based capability, during model inference operation only one second-part model of the two-sided AI/ML based capability is used at the NW/second node.
  • the method comprises sending, to a network node, pairing information for two-sided AI/ML model-based capability.
  • For example, the first node (e.g., the UE) sends the pairing information to the second node (e.g., the network node).
  • the pairing information contains at least one pairing ID, and each pairing ID is associated with one or more physical first-part models supported at the first node for the two-sided AI/ML based feature/functionality.
  • a pairing ID identifies a viable two-sided model, which is uniquely characterized by one or more of the following: (a) the training data set used for training the one or more physical first-part model(s) associated to this pairing ID, (b) the AI/ML architecture/structure choices for a second-part model (e.g., CNN or transformer) that the one or more physical first-part models are jointly trained with, (c) the set of preferred model parameters of a second-part model.
  • a pairing ID can be designed based on a global/local second-part model structure ID, and/or a global/local training dataset ID, and/or a global/local second-node vendor/type ID, and/or a global/local scenario ID, and/or first-part model input/second-part model output types, and/or pre-processing/post-processing, and/or a global/local second-node configuration ID.
  • the pairing information may also contain additional model design choices related parameters.
  • additional model design choice parameters include the supported candidate latent space sizes or candidate latent space size ranges, the supported candidate compression ratios or range of compression ratios, the supported candidate first-part model output quantization related parameters, the supported candidate payload sizes or supported candidate payload size ranges.
  • The payload refers to the bits generated based on the quantized first-part model output and shall be reported from the first node to the second node (e.g., the AI-based CSI report generated based on the encoder output for the CSI-compression use case).
  • These additional model design choice related parameters can be included per pairing ID (i.e., each pairing ID is associated with a set of candidate values for this additional information). If a model design parameter is fixed for a pairing ID, e.g., the parameter is optimized during the two-sided model training phase, then there is no need to report this parameter as an additional model design choice parameter in the first pairing information. One possible structure for such pairing information is sketched below.
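  • The following sketch shows one purely illustrative encoding of such pairing information; the field names are assumptions, not a standardized format.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PairingEntry:
    pairing_id: str
    # per-pairing-ID candidate values; None means the design choice was
    # fixed during joint training and need not be reported
    latent_sizes: Optional[list] = None
    compression_ratios: Optional[list] = None
    quantization_bits: Optional[list] = None
    payload_sizes: Optional[list] = None

@dataclass
class PairingInformation:
    entries: list = field(default_factory=list)

report = PairingInformation(entries=[
    PairingEntry("PID-A", latent_sizes=[8, 16], quantization_bits=[2, 4]),
    PairingEntry("PID-B"),   # all design choices fixed at training time
])
print([e.pairing_id for e in report.entries])
```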
  • the first node may also report the other information to the second node to facilitate the initial first-part model selection or the associated two-sided AI/ML based functionality configuration/activation.
  • An exemplary case is when the first node is the UE and the second node is a NW-side node (e.g., gNB), where the UE connects to a cell and the gNB does not have the UE’s capability information related to this two-sided AI/ML based feature.
  • the other information may include the supported carrier frequency, the supported number of ports, the supported number of ranks, the supported bandwidth, etc. for this feature.
  • the pairing information and the other UE capability parameters related to this two-sided AI/ML based feature/functionality/feature-group can be reported together in the corresponding UE capability reporting message.
  • the first node may indicate the updates/changes to the second node.
  • the first node only needs to report the pairing information to the second node, as the second node already has the other information related to the AI/ML based capability (feature/functionality) obtained from the first node’s capability reporting (e.g., during UE capability reporting procedure) or obtained from another network node (e.g., from the source network node during handover procedures).
  • the pairing information can be reported using dynamic UE reporting signaling (e.g., in the form of the UEAssistanceInformation RRC IE, a MAC CE, or UCI).
  • the second node may request the first node to report the status of the first-part models (e.g., the currently supported first-part models for this AI/ML based feature) to decide whether further model LCM decision shall be taken or not.
  • the first node can just report the pairing information to the second node in dynamic UE reporting signaling (e.g., in the form of the UEAssistanceInformation RRC IE, a MAC CE, or UCI).
  • the method performed by the UE may comprise receiving, from the network node, assistance information for two-sided AI/ML model-based capability at block 1510.
  • For example, the second node (e.g., the network node) may send the assistance information to the first node (e.g., the UE).
  • assistance information can be provided to the first node using dedicated signaling, or by broadcasting the information to multiple first nodes.
  • the method may further comprise comparing received assistance information with stored first-part models.
  • the first node may receive assistance information from the second node, and compare the received assistance information with its stored first-part models. Based on the comparison, the first node may pre-select a subset of the stored first-part models for the two-sided AI/ML based feature, functionality, and/or feature- group, and prepare the pairing information accordingly.
  • the method may further comprise pre-selecting a subset of stored first-part models for two-sided AI/ML model -based capability.
  • the method may further comprise generating the pairing information based on subset of stored first-part models.
  • the method further comprises receiving, from the network node, configuration information that indicates whether a first-part model can be selected or used at UE for two-sided AI/ML model-based capability at block 1420.
  • the second node may receive the pairing information and other information (if available) from the first node (the UE in the above example).
  • the second node may compare the received pairing information of the first-part models to its one or more second-part models for the two-sided AI/ML based feature/functionality/feature-group and select one or more first-part models that are compatible with its one or more second-part models.
  • the second node may not select (or may deselect) a first-part model.
  • the second node may configure the first node/UE with certain reference channel data to be compressed via one or more first-part candidate models.
  • the first node/UE may then be configured to process the reference data comprising, for example, a channel matrix to be used as input to the one or more first-part models.
  • the second node may generate the channel matrix in a manner similar to how reference signals are generated in the frequency domain.
  • the second node may indicate to the first node, e.g., via configuration information, how to generate the reference channel matrix.
  • the reference channel matrix may be based on the Sounding Reference Signal (SRS) symbol generation and resource mapping in time-instance N for antenna port number M.
  • the second/network node may also signal the raw channel information to the first node/UE, e.g., described with a time-frequency-antenna grid.
  • the second node may signal the reference channel matrix using the Type-I/Type-II format, typically used for UE channel feedback.
  • the UE may report an output from one or more first-part candidate models.
  • the second node may then determine if one or more of the second-part models can reconstruct the outputs from the one or more first-part candidate models accurately. For example, if the AI/ML model loss is within a certain threshold range, the second node/network node may select a first-part model, as in the sketch below.
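  • The following sketch illustrates this reference-data check under linear-model assumptions: a candidate encoder is kept only if the network's decoder reconstructs the reference channel within an (illustrative) loss threshold.

```python
import numpy as np
rng = np.random.default_rng(0)

mix = rng.normal(size=(8, 32))
train_H = rng.normal(size=(200, 8)) @ mix  # channels used for joint training
ref_H = rng.normal(size=(8,)) @ mix        # reference channel matrix

_, _, Vt = np.linalg.svd(train_H, full_matrices=False)
enc_good, dec = Vt[:8].T, Vt[:8]           # encoder/decoder derived together
enc_bad = rng.normal(size=(32, 8)) * 0.1   # unrelated candidate encoder

def nmse(h, h_hat):
    return float(np.sum((h - h_hat) ** 2) / np.sum(h ** 2))

threshold = 0.1                            # illustrative loss threshold
for name, enc in [("candidate-1", enc_good), ("candidate-2", enc_bad)]:
    z = ref_H @ enc                        # output reported by the UE
    loss = nmse(ref_H, z @ dec)            # reconstructed at the network
    print(name, "loss=%.3f" % loss,
          "-> select" if loss < threshold else "-> reject")
```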
  • the configuration/indication sent from the second node may include at least part of the pairing information associated with the selected first-part model(s) that can be used at the first node (e.g., the selected pairing ID and/or the associated additional model design choices related parameters).
  • the configuration/indication sent from the second node can be part of the configuration parameters used for configuring the first node to report the first-part model output to the second node, or it can be signaled to the first node via separate signaling (e.g., UE- specific RRC signaling, MAC CE, DCI, etc.).
  • the configuration/indication sent from the second node may be a fallback configuration for the AI/ML model, e.g., a configuration for a non-AI/ML based capability (feature/functionality).
  • the second node sends to the first node (and the first node receives) a configuration/indication, which either indicates the one or more selected first-part models that can be used at the first node, or that no first-part models shall be used at the first node (i.e., there are no compatible first-part models that can be used).
  • For example, the absence of an explicit indication from the second node related to the selected first-part model(s) may be interpreted by the first node as a 'NULL' indication, where the 'NULL' indication is a predefined indication that no first-part model may be used by the first node.
  • the first node may deactivate the AI/ML based capability for the two-sided AI/ML model.
  • the first node shall also report the first-part model output to the second node in model inference operation. Note that the first-part model output may also be processed before being reported to the second node (e.g., quantized).
  • the method further comprises selecting or deselecting a first-part model based on received configuration information. For example, based on the received configuration/indication from the second node, the first node may select a first-part model for the two-sided AI/ML model to generate an inference. Further, the first node may report the selected first-part model to the second node, e.g., by providing a pairing ID that can be signaled to the second node via a separate signaling (e.g., an RRC message, MAC CE, UCI, etc.). Alternatively, the first node may not select, or may deselect, a first-part model for the two- sided AI/ML based capability.
  • the first node may deactivate the AI/ML based capability for the two-sided AI/ML model in the absence of a compatible first-part model.
  • Figure 16 illustrates a flowchart showing another method 1600 performed by a UE for artificial intelligence / machine learning (AI/ML) model pairing in accordance with some embodiments.
  • the pairing information-based method allows a second node (e.g., a network node) to signal to a first node (e.g., a UE) sufficient information, so that the first node can select a first-part model (e.g., UE-part model) which is compatible with the second-part model (e.g., NW-part model) at the second node.
  • the method comprises receiving, from a network node, pairing information for at least one second-part model of a two-sided AI/ML model-based capability.
  • the NW/second node may send to the UE/first node (the UE may receive from the network node) pairing information together with other information (if needed) for the two-sided AI/ML based capability (e.g., a feature/functionality/feature-group).
  • the NW/second node may send the pairing information to an individual UE/first node (e.g., via UE-specific signaling), or a group of UE/first nodes (e.g., via multicast or broadcast signaling).
  • the pairing information may contain at least one pairing ID associated with one or more physical second-part models supported at the NW/second node for the two-sided AI/ML based capability.
  • the pairing information may contain at least one pairing ID associated with one or more first-part models that the second-part models supported at the NW/second node can pair with.
  • the pairing information may present to the UE/first node a promise of supported first-part models based on the second-part models present at the NW/second node.
  • the pairing information may indicate (implicitly or explicitly) other information, including:
  • the validity parameters of the second-part model(s), e.g., the time duration for which the second-part model(s) is valid; the physical or logical area in which the second-part model(s) is valid; the category of UE/first nodes (e.g., low-vs-medium-vs-high cost UE, low-vs-medium-vs-high power UE, low-vs-medium-vs-high complexity UE, etc.) for which the second-part model(s) is valid;
  • the method further comprises comparing received pairing information for at least one first-part model or second-part model to the first-part models of the two-sided AI/ML model-based capability, and selecting or deselecting a first-part model based on the received pairing information at block 1630.
  • the UE/first node receives the pairing information and other information (if available) from the NW/second node.
  • the UE/first node is operative to compare the received pairing information of at least one second-part model to its first-part models for the two-sided AI/ML based capability and to select one or more first-part models that are compatible with the at least one second-part model. If the UE/first node cannot identify a first-part model that is compatible with the second-part model(s) associated with the pairing information, then the UE/first node may not select (or may deselect) a first-part model.
  • the UE/first node may signal to the NW/second node that the pairing of two-sided model is successful, and the two-sided model can be activated.
  • If the pairing information contains a single pairing ID (i.e., a single logical model is available for the second-part model), the UE/first node may not need to signal to the NW/second node the pairing ID selected by the UE/first node.
  • If the pairing information contains two or more pairing IDs (i.e., two or more logical models are available for the second-part model), the UE/first node may select a preferred pairing ID and signal the preferred selection to the NW/second node.
  • the UE/first node may signal to the NW/second node that the pairing of two-sided model has failed, and the two-sided model cannot be activated.
  • the UE/first node may signal to the NW/second node whether the two-sided model can be activated. For example, absent an explicit signal (e.g., an indication from the UE/first node is not received by the NW/second node before a timer expires), the lack of a response from the UE/first node may be interpreted by the NW/second node as 'NULL' indication, where the 'NULL' indication may be a predefined interpretation that no first-part model has been selected by the UE/first node and the pairing of two-sided model has failed.
  • the UE/first node and/or the NW/second node may send to the other node time delay information for activation of the AI/ML based capability.
  • the UE/first node may signal an earliest expected time Tstart,1 that the two-sided model can be activated from the perspective of the UE/first node, i.e., the UE/first node will be ready to run the first-part model at or after Tstart,1.
  • the NW/second node may signal an earliest expected time Tstart,2 that the two-sided model can be activated from the perspective of the NW/second node, i.e., the NW/second node will be ready to run the second-part model at or after Tstart,2. Therefore, the first and second nodes may indicate that the AI/ML based capability will be activated and usable after the expected times (Tstart,1, Tstart,2).
  • each of the UE/first nodes may share the same time delay value for activation, or each of the UE/first nodes may have its own time delay value (e.g., a lower category UE may need a longer delay to activate, while a higher category UE may require a shorter delay to activate), as in the sketch below.
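  • Under these assumptions, the earliest usable activation time is simply the later of the two signaled times, as the trivial sketch below shows (names and units are illustrative).

```python
def activation_time(t_start_1: float, t_start_2: float) -> float:
    # UE/first node ready at t_start_1, NW/second node ready at t_start_2;
    # the two-sided capability is usable only once both sides are ready
    return max(t_start_1, t_start_2)

now = 0.0
# e.g., a lower-category UE may need a longer preparation delay
print(activation_time(now + 2.0, now + 0.5))   # activate at t = 2.0
```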
  • the NW/second node may send configuration parameters to the UE/first node, so that the two-sided model is enabled to perform a model inference operation.
  • the UE/first node may report the first-part model output to the NW/second node.
  • the configuration parameters may include timing information, for example: the starting time and periodicity with which the UE/first node performs measurements to provide input to the model inference; and the starting time and periodicity with which the UE/first node reports the first-part model output (see the configuration sketch below).
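A hypothetical container for such configuration parameters, together with the expansion of a (starting time, periodicity) pair into concrete occasions; the field names and time units are illustrative, not specified signaling.

```python
from dataclasses import dataclass


@dataclass
class InferenceConfig:
    """Hypothetical container for the timing-related configuration parameters."""
    measurement_start: int   # first occasion for input measurements (e.g., slot index)
    measurement_period: int  # periodicity of the measurements
    report_start: int        # first occasion to report first-part model output
    report_period: int       # periodicity of the output reports


def occasions(start: int, period: int, horizon: int) -> list[int]:
    """Expand a (starting time, periodicity) pair into concrete occasions."""
    return list(range(start, horizon, period))


cfg = InferenceConfig(measurement_start=10, measurement_period=20,
                      report_start=14, report_period=20)
print(occasions(cfg.measurement_start, cfg.measurement_period, 100))
# -> [10, 30, 50, 70, 90]: measurement occasions feeding the model inference
```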
  • FIG. 17 illustrates a flowchart showing a method 1700 performed by a network node for artificial intelligence / machine learning (AI/ML) model pairing in accordance with some embodiments.
  • the method comprises receiving, from a user equipment (UE), pairing information for first-part models performed on the UE or the network node.
  • the UE/first node may send to the NW/second node (the NW/second node may receive from the UE/first node) pairing information together with other information (if needed) for the two-sided AI/ML based capability (e.g., a feature/functionality/feature-group).
  • the UE/first node may send the pairing information to a network generally, or to a group of NW/second nodes (e.g., via multicast or broadcast signaling).
  • the pairing information may contain at least one pairing ID associated with one or more physical first-part models supported at the UE/first node for the two-sided AI/ML based capability.
  • the pairing information may contain at least one pairing ID associated with one or more second-part models that the first-part models supported at the UE/first node can pair with.
  • the pairing information may thereby indicate to the NW/second node which second-part models can be supported, based on the first-part models present at the UE/first node.
  • the pairing information may indicate (implicitly or explicitly) other information, including: (a) the training data set used for training the one or more physical first-part model(s) associated with this pairing ID;
  • (b) the validity parameters of the first-part model(s), e.g., the time duration for which the first-part model(s) are valid; the physical or logical area in which the first-part model(s) are valid; the category of NW/second nodes for which the first-part model(s) are valid;
  • (c) the AI/ML architecture of a first-part model, e.g., CNN, transformer, etc. (a hypothetical container for these fields is sketched after this list).
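A hypothetical sketch of a pairing-information record carrying the fields (a)-(c) above; the field names and types are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ValidityParams:
    valid_duration: Optional[str] = None  # time duration the model(s) are valid
    valid_area: Optional[str] = None      # physical or logical validity area
    nw_node_categories: list[str] = field(default_factory=list)


@dataclass
class PairingInfo:
    pairing_id: str
    training_dataset_id: Optional[str] = None  # (a) dataset used for training
    validity: Optional[ValidityParams] = None  # (b) validity parameters
    architecture: Optional[str] = None         # (c) e.g., "CNN", "transformer"
```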
  • the method further comprises comparing the received pairing information for the first-part models to second-part models performed at the network node for a two-sided AI/ML model-based capability.
  • the method further comprises selecting a first-part model performed at the UE that is compatible with a second-part model performed at the network node.
  • the method further comprises sending, to the UE, configuration information that indicates at least one of the selected first-part model, or that no first-part model shall be used at the UE.
  • the NW/second node receives the pairing information and other information (if available) from the UE/first node.
  • the NW/second node is operative to compare the received pairing information of the first-part models to its second-part models for the two-sided AI/ML based capability and selects one or more first-part models performed at the UE/first node that are compatible with the at least one second-part model at the NW/second node. If the NW/second node cannot identify a first-part model that is compatible with its second-part model(s) associated with the pairing information, then the NW/second node may not select (or may deselect) a first-part model to be performed at the UE/first node.
  • the NW/second node may signal to the UE/first node configuration information and/or an indication that the pairing of the two-sided model is successful and that the two-sided model can be activated.
  • if the pairing information contains a single pairing ID (i.e., a single logical model is available for the first-part model), the NW/second node may not need to signal to the UE/first node the pairing ID selected by the NW/second node.
  • if the pairing information contains two or more pairing IDs, the NW/second node may select a preferred pairing ID and signal the preferred selection to the UE/first node.
  • if no first-part model is selected by the NW/second node, then the NW/second node may signal to the UE/first node that the pairing of the two-sided model has failed and that the two-sided model cannot be activated.
  • the NW/second node may signal to the UE/first node whether the two-sided model can be activated. For example, absent an explicit signal (e.g., an indication from the NW/second node is not received by the UE/first node before a timer expires), the lack of a response from the NW/second node may be interpreted by the UE/first node as a 'NULL' indication, where the 'NULL' indication may be a predefined interpretation that no first-part model has been selected by the NW/second node and the pairing of the two-sided model has failed. A sketch of the corresponding NW-side selection follows below.
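A minimal sketch of this NW-side mirror of the pairing step, assuming pairing compatibility reduces to pairing-ID set intersection; the message fields and the preference rule are illustrative assumptions, not specified signaling.

```python
def nw_select_and_configure(ue_pairing_ids: set[str],
                            nw_pairing_ids: set[str]) -> dict:
    """Intersect the pairing IDs reported by the UE with those of the locally
    deployed second-part models, then build the configuration message."""
    common = ue_pairing_ids & nw_pairing_ids
    if not common:
        # Pairing failed: indicate that no first-part model shall be used.
        return {"pairing_result": "failure", "selected_pairing_id": None}
    selected = sorted(common)[0]  # stand-in for an NW-internal preference rule
    msg: dict = {"pairing_result": "success"}
    if len(ue_pairing_ids) > 1:
        # Only needed when the UE reported more than one candidate pairing ID.
        msg["selected_pairing_id"] = selected
    return msg
```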
  • the NW/second node may send to the UE/first node configuration information including time delay information for activation of the AI/ML based capability. For example, the NW/second node may signal an earliest expected time T_start,2 that the two-sided model can be activated from the perspective of the NW/second node, i.e., the NW/second node will be ready to run the second-part model at or after T_start,2.
  • the UE/first node may signal an earliest expected time T_start,1 that the two-sided model can be activated from the perspective of the UE/first node after receiving the configuration information, i.e., the UE/first node will be ready to run the first-part model at or after T_start,1. Therefore, the first and second nodes may indicate that the AI/ML based capability will be activated and usable after the expected times (T_start,1, T_start,2).
  • all UE/first nodes may share the same time delay value for activation, or each UE/first node may have its own time delay value (e.g., a lower category UE may need a longer delay to activate, while a higher category UE may require a shorter delay).
  • the NW/second node may further send configuration parameters to the UE/first node, so that the two-sided model is enabled to perform a model inference operation.
  • the UE/first node may report first-part model output to the NW/second node.
  • the configuration parameters may include timing information, for example: the starting time and periodicity with which the UE/first node performs measurements to provide input to the model inference; and the starting time and periodicity with which the UE/first node reports the first-part model output.
  • a method performed by a user equipment (UE) for artificial intelligence / machine learning (AI/ML) model pairing comprising: sending, to a network node, pairing information for a two-sided AI/ML model-based capability.
  • UE user equipment
  • AI/ML artificial intelligence / machine learning
  • pairing information includes at least one pairing identifier (ID), wherein the at least one pairing ID is associated with one or more first-part models supported at the UE for the two-sided AI/ML model-based capability.
  • ID pairing identifier
  • the at least one pairing ID indicates at least one of the following: an associated training dataset used for training the one or more first-part models; or one or more AI/ML architecture or structural choices for a second-part model that the one or more first-part models are jointly trained with.
  • the at least one pairing ID is based on one or more of the following: a global second-part model structure ID; a global training dataset ID; a global network node vendor/type ID; a global network-node vendor/type ID and a local second-part model structure ID; a global network-node vendor/type ID and a local scenario ID; a global network-node vendor/type ID and a local network-node configuration ID; a global training dataset ID and a local second-part model structure ID; a global network-node vendor/type ID, a local scenario ID, and a local second-part model structure ID; a global first-part model structure ID; a global UE vendor/type ID; a global UE vendor/type ID and a local first-part model structure ID; a global UE vendor/type ID and a local scenario ID; a global UE vendor/type ID and a local network-node configuration ID; a
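One illustrative way to realize such composite pairing IDs is to concatenate global and local components; the (scope, value) encoding and the separator below are assumptions for illustration, not a specified format.

```python
def compose_pairing_id(*components: tuple[str, str]) -> str:
    """Build a composite pairing ID from (scope, value) components, e.g.,
    a global NW-vendor/type ID combined with a local scenario ID and a
    local second-part model structure ID."""
    return "/".join(f"{scope}:{value}" for scope, value in components)


pid = compose_pairing_id(("global-nw-vendor", "vendorA"),
                         ("local-scenario", "urban-macro"),
                         ("local-structure", "s3"))
print(pid)  # global-nw-vendor:vendorA/local-scenario:urban-macro/local-structure:s3
```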
  • the pairing information includes one or more of the following items of model design choice-related information: a supported candidate latent space size or latent space size range; one or more supported candidate first-part model output quantization-related parameters; or a supported candidate payload size or payload size range.
  • any of (1)-(14) further comprising: receiving, from the network node, assistance information for the two-sided AI/ML model-based capability; comparing the received assistance information with stored first-part models; based on the comparison, pre-selecting a subset of the stored first-part models for the two-sided AI/ML model-based capability; and generating the pairing information based on the subset of the stored first-part models (a sketch of this pre-selection is given below).
  • the pairing information may include one or more of the following items of first-part model performance-related information: a typical bitrate or Uplink Control Information (UCI) overhead with a first-part model; or training information associated with a first-part model.
  • UCI Uplink Control Information
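A minimal sketch of the pre-selection step mentioned above, assuming the assistance information reduces to a scenario tag that stored models are matched against; all names are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class StoredModel:
    model_id: str
    pairing_ids: set[str]
    scenario: str  # scenario the model was trained for


def preselect(assistance_scenario: str,
              stored: list[StoredModel]) -> list[StoredModel]:
    """Pre-select the subset of stored first-part models matching the
    NW-provided assistance information (reduced here to a scenario tag)."""
    return [m for m in stored if m.scenario == assistance_scenario]


def pairing_info_from(models: list[StoredModel]) -> set[str]:
    """Generate the pairing information (union of pairing IDs) from the
    pre-selected subset."""
    ids: set[str] = set()
    for m in models:
        ids |= m.pairing_ids
    return ids
```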
  • a method performed by a user equipment (UE) for artificial intelligence / machine learning (AI/ML) model pairing comprising: receiving, from a network node, pairing information for at least one first-part model or second-part model of a two-sided AI/ML model-based capability, wherein first-part models of the two-sided AI/ML model-based capability are supported at the UE.
  • UE user equipment
  • AI/ML artificial intelligence / machine learning
  • a method performed by a network node for AI/ML model pairing comprising: receiving, from a user equipment (UE), pairing information for first-part models performed on the UE or the network node.
  • UE user equipment
  • a method performed by a network node for AI/ML model pairing comprising: sending, to a user equipment (UE), pairing information for a two-sided AI/ML model-based capability.
  • UE user equipment
  • the at least one pairing ID indicates at least one of the following: a training data set used for training the one or more first-part models associated with the at least one pairing ID; validity parameters of the one or more first-part models; or one or more AI/ML architecture or structural choices for a second-part model that the one or more first-part models are jointly trained with.
  • a user equipment for pairing information comprising: processing circuitry configured to perform the step of sending, to a network node, pairing information for a two-sided AI/ML model-based capability; and power supply circuitry configured to supply power to the processing circuitry.
  • a network node for pairing information comprising: processing circuitry configured to perform the step of: receiving, from a user equipment (UE), pairing information for first-part models performed on the UE or the network node; and power supply circuitry configured to supply power to the processing circuitry.
  • UE user equipment
  • a user equipment (UE) for pairing information comprising: an antenna configured to send and receive wireless signals; radio front-end circuitry connected to the antenna and to processing circuitry, and configured to condition signals communicated between the antenna and the processing circuitry; the processing circuitry being configured to perform the step of: sending, to a network node, pairing information for a two-sided AI/ML model-based capability; an input interface connected to the processing circuitry and configured to allow input of information into the UE to be processed by the processing circuitry; an output interface connected to the processing circuitry and configured to output information from the UE that has been processed by the processing circuitry; and a battery connected to the processing circuitry and configured to supply power to the UE.
  • While computing devices described herein may include the illustrated combination of hardware components, other embodiments may comprise computing devices with different combinations of components. It is to be understood that these computing devices may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Determining, calculating, obtaining or similar operations described herein may be performed by processing circuitry, which may process information by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
  • computing devices may comprise multiple different physical components that make up a single illustrated component, and functionality may be partitioned between separate components.
  • a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface.
  • non-computationally intensive functions of any of such components may be implemented in software or firmware and computationally intensive functions may be implemented in hardware.
  • some or all of the described functionality may be provided by processing circuitry executing instructions stored in memory, which in certain embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium.
  • some or all of the functionality may be provided by the processing circuitry without executing instructions stored on a separate or discrete device-readable storage medium, such as in a hard-wired manner.
  • the processing circuitry can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry alone or to other components of the computing device, but are enjoyed by the computing device as a whole, and/or by end users and a wireless network generally.


Abstract

Methods, systems, and apparatuses for artificial intelligence/machine learning (AI/ML) model pairing are disclosed herein. A method performed by a user equipment (UE) for AI/ML pairing comprises sending, to a network node, pairing information for a two-sided AI/ML model-based capability. The method may further comprise receiving, from the network node, configuration information that indicates whether a first-part model can be selected or used at the UE for the two-sided AI/ML model-based capability. The method may further comprise selecting or deselecting the first-part model based on the received configuration information.
PCT/SE2024/050939 2023-11-03 2024-11-01 Model pairing for two-sided AI/ML models Pending WO2025095847A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363547307P 2023-11-03 2023-11-03
US63/547,307 2023-11-03

Publications (1)

Publication Number Publication Date
WO2025095847A1 true WO2025095847A1 (fr) 2025-05-08

Family

ID=95582122

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2024/050939 2023-11-03 2024-11-01 Model pairing for two-sided AI/ML models

Country Status (1)

Country Link
WO (1) WO2025095847A1 (fr)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220353155A1 (en) * 2019-11-22 2022-11-03 Huawei Technologies Co., Ltd. Personalized tailored air interface
WO2023209673A1 (fr) * 2022-04-28 2023-11-02 Telefonaktiebolaget Lm Ericsson (Publ) Modèle de repli par apprentissage automatique pour dispositif sans fil
WO2023211572A1 (fr) * 2022-04-28 2023-11-02 Rakuten Mobile, Inc. Procédé et système de gestion de modèle d'intelligence artificielle/apprentissage automatique (ai/ml)
WO2024061522A1 (fr) * 2022-09-22 2024-03-28 Nokia Technologies Oy Mises à jour de modèle initiées par un ue pour un modèle ai/ml à deux côtés
WO2024096710A1 (fr) * 2022-11-04 2024-05-10 Samsung Electronics Co., Ltd. Entraînement fl à multiples fonctionnalités de modèle d'un modèle d'apprentissage ia/ml pour de multiples fonctionnalités de modèle

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HENRIK RYDEN, ERICSSON: "Discussion on general aspects of AI/ML framework", 3GPP DRAFT; R1-2309184; TYPE DISCUSSION; FS_NR_AIML_AIR, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), vol. RAN WG1, 29 September 2023 (2023-09-29), FR, XP052526904 *
INTEL CORPORATION: "Discussion of AI/ML framework", 3GPP DRAFT; R1-2206577, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), vol. RAN WG1, 12 August 2022 (2022-08-12), FR, XP052274509 *

Similar Documents

Publication Publication Date Title
US20250330373A1 ML model support and model ID handling by UE and network
US20250219898A1 User equipment report of machine learning model performance
US20250220471A1 (en) Network assisted error detection for artificial intelligence on air interface
WO2023191682A1 Management of artificial intelligence/machine learning models between wireless radio nodes
EP4352658A1 Selection of global machine learning models for collaborative machine learning in a communication network
WO2024242612A1 Configuration and testing of a UE reporting AI/ML model performance monitoring results
US20250280304A1 (en) Machine Learning for Radio Access Network Optimization
WO2025046273A1 Method for collaborative ML model training based on relay-assisted federated learning with over-the-air computation and data privacy
WO2023211343A1 Machine learning model feature set reporting
WO2025038021A1 Performance monitoring of a two-sided artificial intelligence/machine learning model at the user equipment side
US20250293942A1 (en) Machine learning fallback model for wireless device
WO2024210809A1 Dynamic applicability report updates for AI/ML models
WO2023187678A1 Network-assisted user equipment machine learning model handling
WO2024193831A1 Performing closed-loop prediction based on the behavior of a network in response to a control policy
WO2025095847A1 Model pairing for two-sided AI/ML models
US20250234219A1 (en) Network assisted user equipment machine learning model handling
US20250227764A1 (en) Handling of random access partitions and priorities
WO2024241222A1 Method and systems for user equipment capability reporting and machine-learning-based channel state information configuration
WO2024096805A1 Communication based on network configuration identifier sharing
WO2025183613A1 Methods, apparatus and computer-readable media relating to the sharing of data sets over a communication network
WO2024214075A1 ID-based lifecycle management of a one-sided model
WO2025172940A1 User-equipment-initiated beam reporting based on beam prediction
WO2025165273A1 Wireless device, network node and methods for performance monitoring of multiple CSI prediction schemes
WO2025233909A1 Network configuration identifier for machine learning models
WO2025183598A1 Radio network node, user equipment and methods performed therein

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24886466

Country of ref document: EP

Kind code of ref document: A1