
WO2025038021A1 - Performance monitoring of a two-sided artificial intelligence/machine learning model at the user equipment side - Google Patents

Info

Publication number
WO2025038021A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
sided
kpi
network node
csi
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/SE2024/050724
Other languages
English (en)
Inventor
Jingya Li
Emil Ringh
Ilmiawan Shubhi
Mattias Frenne
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Telefonaktiebolaget LM Ericsson AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget LM Ericsson AB filed Critical Telefonaktiebolaget LM Ericsson AB
Publication of WO2025038021A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 5/00 Arrangements affording multiple use of the transmission path
    • H04L 5/0091 Signalling for the administration of the divided path, e.g. signalling of configuration information
    • H04L 5/0094 Indication of how sub-channels of the path are allocated
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 5/00 Arrangements affording multiple use of the transmission path
    • H04L 5/003 Arrangements for allocating sub-channels of the transmission path
    • H04L 5/0048 Allocation of pilot signals, i.e. of signals known to the receiver
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/02 Arrangements for optimising operational condition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B 7/00 Radio transmission systems, i.e. using radiation field
    • H04B 7/02 Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas
    • H04B 7/04 Diversity systems; Multi-antenna systems, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B 7/0413 MIMO systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/16 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence

Definitions

  • The present disclosure relates generally to communication systems and, more specifically, to methods and systems for performance monitoring of a two-sided artificial intelligence / machine learning (AI/ML) model at the user equipment (UE) side.
  • BACKGROUND
  • Artificial Intelligence (AI) and Machine Learning (ML) have been investigated, both in academia and industry, as promising tools to optimize the design of air interfaces in wireless communication networks.
  • Example use cases include using autoencoders for Channel State Information (CSI) compression to reduce the feedback overhead and improve channel prediction accuracy, using deep neural networks for classifying Line-of-Sight (LOS) and Non-LOS (NLOS) conditions to enhance the positioning accuracy, using reinforcement learning for beam selection at the network side and/or the User Equipment (UE) side to reduce the signaling overhead and beam alignment latency, and using deep reinforcement learning to learn an optimal precoding policy for complex Multiple Input Multiple Output (MIMO) precoding problems.
  • The deployment of two-sided AI/ML models in the NR air interface can improve channel performance by performing joint inference across the UE and the network (NW): the UE executes the initial part of the inference, and the network node completes the subsequent part. Effective monitoring of these AI/ML models is important to ensure correct functioning of the communication channel.
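The split inference above can be pictured with a toy numerical sketch. Everything here is a hypothetical stand-in, not the disclosed model: the linear "weights" replace trained neural networks, and the dimensions are illustrative only.

```python
import math
import random

random.seed(0)
N_ANT, CODE_DIM = 32, 8  # hypothetical antenna-port and feedback dimensions

# Stand-in linear weights; a real two-sided model uses trained neural networks.
W_enc = [[random.gauss(0, 0.1) for _ in range(N_ANT)] for _ in range(CODE_DIM)]
W_dec = [[random.gauss(0, 0.1) for _ in range(CODE_DIM)] for _ in range(N_ANT)]

def matvec(W, x):
    # Plain matrix-vector product over Python lists.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def ue_encode(csi):
    # Initial part of the inference, executed at the UE: compress the CSI.
    return [math.tanh(v) for v in matvec(W_enc, csi)]

def nw_decode(feedback):
    # Subsequent part, executed at the network node: reconstruct the CSI.
    return matvec(W_dec, feedback)

csi = [random.gauss(0, 1) for _ in range(N_ANT)]  # measured channel vector
feedback = ue_encode(csi)       # only CODE_DIM values cross the air interface
csi_hat = nw_decode(feedback)   # reconstruction at the network side
```

The point of the split is visible in the shapes: only the CODE_DIM-sized feedback vector is reported over the air, while the full N_ANT-sized reconstruction is recovered at the network node.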
  • A method performed by a user equipment comprises measuring at least one reference signal resource indicated by a reference signal configuration associated with a two-sided AI/ML model.
  • The two-sided AI/ML model comprises a UE part operated by the UE and a network part operated by a network node.
  • The method further comprises performing model inference, based on the measuring of the at least one reference signal resource, using the UE part of the two-sided AI/ML model.
  • The method further comprises estimating at least one intermediate Key Performance Indicator (KPI) of the two-sided AI/ML model.
  • The method further comprises reporting the estimated at least one intermediate KPI to the network node.
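One intermediate KPI commonly discussed for CSI compression is the squared generalized cosine similarity (SGCS) between the target CSI and its reconstruction. The sketch below shows how such a KPI could be computed and mapped to a coarse range indicator for low-overhead reporting; the threshold set is a hypothetical example, not values from this disclosure.

```python
import math

def sgcs(h, h_hat):
    """Squared generalized cosine similarity between target CSI h and
    reconstruction h_hat (real-valued vectors for simplicity)."""
    dot = sum(a * b for a, b in zip(h, h_hat))
    n1 = math.sqrt(sum(a * a for a in h))
    n2 = math.sqrt(sum(b * b for b in h_hat))
    return (dot / (n1 * n2)) ** 2

def kpi_range_indicator(kpi, thresholds=(0.5, 0.7, 0.9)):
    """Map the estimated KPI onto a coarse range index; the UE would report
    this indicator (or the KPI itself) to the network node."""
    return sum(kpi >= t for t in thresholds)
```

Reporting a small range index instead of the raw KPI value is one way to keep the monitoring feedback overhead low, at the cost of reporting granularity.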
  • A method performed by a network node comprises collecting target channel state information (CSI) samples from a UE.
  • The method further comprises training a network part of a two-sided AI/ML model using the collected CSI samples and a nominal model at the network node.
  • The method further comprises generating at least one dataset for training a UE part of the two-sided AI/ML model and a one-sided AI/ML model for estimating an intermediate KPI range indicator of the two-sided AI/ML model.
  • The method further comprises providing the at least one dataset to the UE.
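The dataset-generation step at the network node can be sketched as follows. The nominal encoder here is a hypothetical linear stand-in and the function names are illustrative; the idea shown is only that each collected CSI sample is paired with the latent output the nominal model produces for it, so that the resulting dataset can be provided to the UE for training its own part.

```python
import math
import random

random.seed(1)
N_ANT, CODE_DIM = 32, 8

# Hypothetical nominal encoder held at the network node.
W_nom = [[random.gauss(0, 0.1) for _ in range(N_ANT)] for _ in range(CODE_DIM)]

def nominal_encode(csi):
    # Latent output of the nominal model for one CSI sample.
    return [math.tanh(sum(w * x for w, x in zip(row, csi))) for row in W_nom]

def collect_csi_samples(n):
    # Placeholder for target CSI samples collected from the UE.
    return [[random.gauss(0, 1) for _ in range(N_ANT)] for _ in range(n)]

def build_ue_training_dataset(samples):
    # Pair each CSI sample with the nominal encoder's latent output; this
    # dataset is then provided to the UE for training its part of the model.
    return [(csi, nominal_encode(csi)) for csi in samples]

dataset = build_ue_training_dataset(collect_csi_samples(100))
```

Training the UE part against such input/latent pairs keeps it compatible with the network part without exposing the network-side model itself.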
  • Figure 1 illustrates an example of a communication system in accordance with some embodiments.
  • Figure 2 illustrates an exemplary user equipment in accordance with some embodiments.
  • Figure 3 illustrates an exemplary network node in accordance with some embodiments.
  • Figure 4 is a block diagram of an exemplary host, which may be an embodiment of the host of Figure 1, in accordance with various aspects described herein.
  • Figure 5 is a block diagram illustrating an exemplary virtualization environment in which functions implemented by some embodiments may be virtualized.
  • Figure 6 illustrates a communication diagram of an exemplary host communicating via a network node with a UE over a partially wireless connection in accordance with some embodiments.
  • Figure 7 illustrates an AI/ML model lifecycle management process in accordance with some embodiments.
  • Figure 8 illustrates a functional framework for AI/ML model Lifecycle Management with different Network-User Equipment (NW-UE) collaboration levels in physical layer use cases in accordance with some embodiments.
  • Figure 9 illustrates an autoencoder-based two-sided AI/ML model used for CSI reporting in accordance with some embodiments.
  • Figure 10 illustrates a flowchart showing a method for performance monitoring of a two-sided AI/ML model at the UE side in accordance with some embodiments.
  • Figure 11 illustrates a flowchart showing a method for performance monitoring using a one-sided AI/ML model at the NW side in accordance with some embodiments.
  • Figure 12 illustrates a flowchart showing another method for performance monitoring at the NW side in accordance with some embodiments.
  • DETAILED DESCRIPTION
  • [0021] To provide a more thorough understanding of the present invention, the following description sets forth numerous specific details, such as specific configurations, parameters, examples, and the like.
  • “Coupled to” is intended to include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements). Therefore, the terms “coupled to” and “coupled with” are used synonymously. Within the context of a networked environment where two or more components or devices are able to exchange data, the terms “coupled to” and “coupled with” are also used to mean “communicatively coupled with”, possibly via one or more intermediary devices.
  • FIG. 1 shows an example of a communication system 100 in accordance with some embodiments.
  • the communication system 100 includes a telecommunication network 102 that includes an access network 104, such as a radio access network (RAN), and a core network 106, which includes one or more core network nodes 108.
  • the access network 104 includes one or more access network nodes, such as network nodes 110a and 110b (one or more of which may be generally referred to as network nodes 110), or any other similar 3rd Generation Partnership Project (3GPP) access nodes or non-3GPP access points.
  • a network node is not necessarily limited to an implementation in which a radio portion and a baseband portion are supplied and integrated by a single vendor.
  • network nodes include disaggregated implementations or portions thereof.
  • the telecommunication network 102 includes one or more Open-RAN (ORAN) network nodes.
  • An ORAN network node is a node in the telecommunication network 102 that supports an ORAN specification (e.g., a specification published by the O-RAN Alliance, or any similar organization) and may operate alone or together with other nodes to implement one or more functionalities of any node in the telecommunication network 102, including one or more network nodes 110 and/or core network nodes 108.
  • Examples of an ORAN network node include an open radio unit (O-RU), an open distributed unit (O-DU), an open central unit (O-CU), including an O-CU control plane (O-CU-CP) or an O-CU user plane (O-CU-UP), a RAN intelligent controller (near-real time or non-real time) hosting software or software plug-ins, such as a near-real time control application (e.g., xApp) or a non-real time control application (e.g., rApp), or any combination thereof (the adjective “open” designating support of an ORAN specification).
  • the network node may support a specification by, for example, supporting an interface defined by the ORAN specification, such as an A1, F1, W1, E1, E2, X2, Xn interface, an open fronthaul user plane interface, or an open fronthaul management plane interface.
  • an ORAN access node may be a logical node in a physical node.
  • an ORAN network node may be implemented in a virtualization environment (described further below) in which one or more network functions are virtualized.
  • the virtualization environment may include an O-Cloud computing platform orchestrated by a Service Management and Orchestration Framework via an O2 interface defined by the O-RAN Alliance or comparable technologies.
  • the network nodes 110 facilitate direct or indirect connection of user equipment (UE), such as by connecting UEs 112a, 112b, 112c, and 112d (one or more of which may be generally referred to as UEs 112) to the core network 106 over one or more wireless connections.
  • Example wireless communications over a wireless connection include transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information without the use of wires, cables, or other material conductors.
  • the communication system 100 may include any number of wired or wireless networks, network nodes, UEs, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections.
  • the communication system 100 may include and/or interface with any type of communication, telecommunication, data, cellular, radio network, and/or other similar type of system.
  • the UEs 112 may be any of a wide variety of communication devices, including wireless devices arranged, configured, and/or operable to communicate wirelessly with the network nodes 110 and other communication devices.
  • the network nodes 110 are arranged, capable, configured, and/or operable to communicate directly or indirectly with the UEs 112 and/or with other network nodes or equipment in the telecommunication network 102 to enable and/or provide network access, such as wireless network access, and/or to perform other functions, such as administration in the telecommunication network 102.
  • the core network 106 connects the network nodes 110 to one or more hosts, such as host 116. These connections may be direct or indirect via one or more intermediary networks or devices. In other examples, network nodes may be directly coupled to hosts.
  • the core network 106 includes one or more core network nodes (e.g., core network node 108) that are structured with hardware and software components.
  • Example core network nodes include functions of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), and/or a User Plane Function (UPF).
  • the host 116 may be under the ownership or control of a service provider other than an operator or provider of the access network 104 and/or the telecommunication network 102, and may be operated by the service provider or on behalf of the service provider.
  • the host 116 may host a variety of applications to provide one or more services. Examples of such applications include live and pre-recorded audio/video content, data collection services such as retrieving and compiling data on various ambient conditions detected by a plurality of UEs, analytics functionality, social media, functions for controlling or otherwise interacting with remote devices, functions for an alarm and surveillance center, or any other such function performed by a server.
  • the communication system 100 of Figure 1 enables connectivity between the UEs, network nodes, and hosts.
  • the communication system may be configured to operate according to predefined rules or procedures, such as specific standards that include, but are not limited to: Global System for Mobile Communications (GSM); Universal Mobile Telecommunications System (UMTS); Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, 5G standards, or any applicable future generation standard (e.g., 6G); wireless local area network (WLAN) standards, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards (WiFi); and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave, Near Field Communication (NFC), ZigBee, LiFi, and/or any low-power wide-area network (LPWAN) standards such as LoRa and Sigfox.
  • the telecommunication network 102 is a cellular network that implements 3GPP standardized features. Accordingly, the telecommunications network 102 may support network slicing to provide different logical networks to different devices that are connected to the telecommunication network 102. For example, the telecommunications network 102 may provide Ultra Reliable Low Latency Communication (URLLC) services to some UEs, while providing Enhanced Mobile Broadband (eMBB) services to other UEs, and/or Massive Machine Type Communication (mMTC)/Massive IoT services to yet further UEs.
  • the UEs 112 are configured to transmit and/or receive information without direct human interaction.
  • a UE may be designed to transmit information to the access network 104 on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the access network 104.
  • a UE may be configured for operating in single- or multi-RAT or multi-standard mode.
  • a UE may operate with any one or combination of Wi-Fi, NR (New Radio) and LTE, e.g. being configured for multi-radio dual connectivity (MR-DC), such as E-UTRAN (Evolved-UMTS Terrestrial Radio Access Network) New Radio – Dual Connectivity (EN-DC).
  • the hub 114 communicates with the access network 104 to facilitate indirect communication between one or more UEs (e.g., UE 112c and/or 112d) and network nodes (e.g., network node 110b).
  • the hub 114 may be a controller, router, content source and analytics, or any of the other communication devices described herein regarding UEs.
  • the hub 114 may be a broadband router enabling access to the core network 106 for the UEs.
  • the hub 114 may be a controller that sends commands or instructions to one or more actuators in the UEs.
  • Commands or instructions may be received from the UEs, network nodes 110, or by executable code, script, process, or other instructions in the hub 114.
  • the hub 114 may be a data collector that acts as temporary storage for UE data and, in some embodiments, may perform analysis or other processing of the data.
  • the hub 114 may be a content source. For example, for a UE that is a VR headset, display, loudspeaker or other media delivery device, the hub 114 may retrieve VR assets, video, audio, or other media or data related to sensory information via a network node, which the hub 114 then provides to the UE either directly, after performing local processing, and/or after adding additional local content.
  • the hub 114 acts as a proxy server or orchestrator for the UEs, in particular if one or more of the UEs are low energy IoT devices.
  • the hub 114 may have a constant/persistent or intermittent connection to the network node 110b.
  • the hub 114 may also allow for a different communication scheme and/or schedule between the hub 114 and UEs (e.g., UE 112c and/or 112d), and between the hub 114 and the core network 106.
  • the hub 114 is connected to the core network 106 and/or one or more UEs via a wired connection.
  • the hub 114 may be configured to connect to an M2M service provider over the access network 104 and/or to another UE over a direct connection.
  • UEs may establish a wireless connection with the network nodes 110 while still connected via the hub 114 via a wired or wireless connection.
  • the hub 114 may be a dedicated hub – that is, a hub whose primary function is to route communications to/from the UEs from/to the network node 110b.
  • the hub 114 may be a non-dedicated hub – that is, a device which is capable of operating to route communications between the UEs and network node 110b, but which is additionally capable of operating as a communication start and/or end point for certain data channels.
  • FIG. 2 shows a UE 200 in accordance with some embodiments.
  • a UE refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other UEs.
  • Examples of a UE include, but are not limited to, a smart phone, mobile phone, cell phone, voice over IP (VoIP) phone, wireless local loop phone, desktop computer, personal digital assistant (PDA), wireless cameras, gaming console or device, music storage device, playback appliance, wearable terminal device, wireless endpoint, mobile station, tablet, laptop, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart device, wireless customer-premise equipment (CPE), vehicle, vehicle-mounted or vehicle embedded/integrated wireless device, etc.
  • a UE may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, Dedicated Short-Range Communication (DSRC), vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), or vehicle- to-everything (V2X).
  • a UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device.
  • a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller).
  • a UE may represent a device that is not intended for sale to, or operation by, an end user but which may be associated with or operated for the benefit of a user (e.g., a smart power meter).
  • the UE 200 includes processing circuitry 202 that is operatively coupled via a bus 204 to an input/output interface 206, a power source 208, a memory 210, a communication interface 212, and/or any other component, or any combination thereof.
  • Certain UEs may utilize all or a subset of the components shown in Figure 2.
  • the level of integration between the components may vary from one UE to another UE. Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc.
  • the processing circuitry 202 is configured to process instructions and data and may be configured to implement any sequential state machine operative to execute instructions stored as machine-readable computer programs in the memory 210.
  • the processing circuitry 202 may be implemented as one or more hardware-implemented state machines (e.g., in discrete logic, field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), etc.); programmable logic together with appropriate firmware; one or more stored computer programs, general-purpose processors, such as a microprocessor or digital signal processor (DSP), together with appropriate software; or any combination of the above.
  • the processing circuitry 202 may include multiple central processing units (CPUs).
  • the input/output interface 206 may be configured to provide an interface or interfaces to an input device, output device, or one or more input and/or output devices.
  • Examples of an output device include a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof.
  • An input device may allow a user to capture information into the UE 200.
  • Examples of an input device include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like.
  • the presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user.
  • a sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, a biometric sensor, etc., or any combination thereof.
  • An output device may use the same type of interface port as an input device.
  • a Universal Serial Bus (USB) port may be used to provide an input device and an output device.
  • the power source 208 is structured as a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic device, or power cell, may be used.
  • the power source 208 may further include power circuitry for delivering power from the power source 208 itself, and/or an external power source, to the various parts of the UE 200 via input circuitry or an interface such as an electrical power cable. Delivering power may be, for example, for charging of the power source 208.
  • Power circuitry may perform any formatting, converting, or other modification to the power from the power source 208 to make the power suitable for the respective components of the UE 200 to which power is supplied.
  • the memory 210 may be or be configured to include memory such as random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, hard disks, removable cartridges, flash drives, and so forth.
  • the memory 210 includes one or more application programs 214, such as an operating system, web browser application, a widget, gadget engine, or other application, and corresponding data 216.
  • the memory 210 may store, for use by the UE 200, any of a variety of various operating systems or combinations of operating systems.
  • the memory 210 may be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as tamper resistant module in the form of a universal integrated circuit card (UICC) including one or more subscriber identity modules (SIMs), such as a USIM and/or ISIM, other memory, or any combination thereof.
  • the UICC may for example be an embedded UICC (eUICC), integrated UICC (iUICC) or a removable UICC commonly known as ‘SIM card.’
  • the memory 210 may allow the UE 200 to access instructions, application programs and the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data.
  • An article of manufacture, such as one utilizing a communication system may be tangibly embodied as or in the memory 210, which may be or comprise a device-readable storage medium.
  • the processing circuitry 202 may be configured to communicate with an access network or other network using the communication interface 212.
  • the communication interface 212 may comprise one or more communication subsystems and may include or be communicatively coupled to an antenna 222.
  • the communication interface 212 may include one or more transceivers used to communicate, such as by communicating with one or more remote transceivers of another device capable of wireless communication (e.g., another UE or a network node in an access network).
  • Each transceiver may include a transmitter 218 and/or a receiver 220 appropriate to provide network communications (e.g., optical, electrical, frequency allocations, and so forth).
  • the transmitter 218 and receiver 220 may be coupled to one or more antennas (e.g., antenna 222) and may share circuit components, software or firmware, or alternatively be implemented separately.
  • communication functions of the communication interface 212 may include cellular communication, Wi-Fi communication, LPWAN communication, data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof.
  • Communications may be implemented according to one or more communication protocols and/or standards, such as IEEE 802.11, Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), GSM, LTE, New Radio (NR), UMTS, WiMax, Ethernet, transmission control protocol/internet protocol (TCP/IP), synchronous optical networking (SONET), Asynchronous Transfer Mode (ATM), QUIC, Hypertext Transfer Protocol (HTTP), and so forth.
  • a UE may provide an output of data captured by its sensors, through its communication interface 212, via a wireless connection to a network node. Data captured by sensors of a UE can be communicated through a wireless connection to a network node via another UE.
  • a UE comprises an actuator, a motor, or a switch, related to a communication interface configured to receive wireless input from a network node via a wireless connection. In response to the received wireless input the states of the actuator, the motor, or the switch may change.
  • the UE may comprise a motor that adjusts the control surfaces or rotors of a drone in flight according to the received input or to a robotic arm performing a medical procedure according to the received input.
  • a UE when in the form of an Internet of Things (IoT) device, may be a device for use in one or more application domains, these domains comprising, but not limited to, city wearable technology, extended industrial application and healthcare.
  • Non-limiting examples of such an IoT device are a device which is or which is embedded in: a connected refrigerator or freezer, a TV, a connected lighting device, an electricity meter, a robot vacuum cleaner, a voice controlled smart speaker, a home security camera, a motion detector, a thermostat, a smoke detector, a door/window sensor, a flood/moisture sensor, an electrical door lock, a connected doorbell, an air conditioning system like a heat pump, an autonomous vehicle, a surveillance system, a weather monitoring device, a vehicle parking monitoring device, an electric vehicle charging station, a smart watch, a fitness tracker, a head-mounted display for Augmented Reality (AR) or Virtual Reality (VR), a wearable for tactile augmentation or sensory enhancement, a water sprinkler, an animal- or item-tracking device, a sensor for monitoring a plant or animal, an industrial robot, an Unmanned Aerial Vehicle (UAV), and any kind of medical device, like a heart rate monitor or a remote controlled surgical robot.
  • a UE in the form of an IoT device comprises circuitry and/or software depending on the intended application of the IoT device, in addition to other components as described in relation to the UE 200 shown in Figure 2.
  • a UE may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another UE and/or a network node.
  • the UE may in this case be an M2M device, which may in a 3GPP context be referred to as an MTC device.
  • the UE may implement the 3GPP NB-IoT standard.
  • a UE may represent a vehicle, such as a car, a bus, a truck, a ship and an airplane, or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.
  • a first UE might be or be integrated in a drone and provide the drone’s speed information (obtained through a speed sensor) to a second UE that is a remote controller operating the drone.
  • the first UE may adjust the throttle on the drone (e.g. by controlling an actuator) to increase or decrease the drone’s speed.
  • the first and/or the second UE can also include more than one of the functionalities described above.
  • a UE might comprise the sensor and the actuator, and handle communication of data for both the speed sensor and the actuators.
  • Figure 3 shows a network node 300 in accordance with some embodiments.
  • network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment, in a telecommunication network.
  • network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)), O-RAN nodes or components of an O-RAN node (e.g., O-RU, O-DU, O-CU).
  • Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and so, depending on the provided amount of coverage, may be referred to as femto base stations, pico base stations, micro base stations, or macro base stations.
  • a base station may be a relay node or a relay donor node controlling a relay.
  • a network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units, distributed units (e.g., in an O-RAN access node) and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs).
  • Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio.
  • Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).
  • network nodes include multiple transmission point (multi-TRP) 5G access nodes, multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), Operation and Maintenance (O&M) nodes, Operations Support System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g., Evolved Serving Mobile Location Centers (E-SMLCs)), and/or Minimization of Drive Tests (MDTs).
  • the network node 300 includes a processing circuitry 302, a memory 304, a communication interface 306, and a power source 308.
  • the network node 300 may be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components.
  • in scenarios in which the network node 300 comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components may be shared among several network nodes.
  • a single RNC may control multiple NodeBs.
  • each unique NodeB and RNC pair may in some instances be considered a single separate network node.
  • the network node 300 may be configured to support multiple radio access technologies (RATs).
  • some components may be duplicated (e.g., separate memory 304 for different RATs) and some components may be reused (e.g., a same antenna 310 may be shared by different RATs).
  • the network node 300 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 300, for example GSM, WCDMA, LTE, NR, WiFi, Zigbee, Z-wave, LoRaWAN, Radio Frequency Identification (RFID) or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 300.
  • the processing circuitry 302 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable, either alone or in conjunction with other network node 300 components, such as the memory 304, to provide network node 300 functionality.
  • the processing circuitry 302 includes a system on a chip (SOC).
  • the processing circuitry 302 includes one or more of radio frequency (RF) transceiver circuitry 312 and baseband processing circuitry 314.
  • the radio frequency (RF) transceiver circuitry 312 and the baseband processing circuitry 314 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 312 and baseband processing circuitry 314 may be on the same chip or set of chips, boards, or units.
  • the memory 304 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 302.
  • the memory 304 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 302 and utilized by the network node 300.
  • the memory 304 may be used to store any calculations made by the processing circuitry 302 and/or any data received via the communication interface 306.
  • the processing circuitry 302 and the memory 304 are integrated.
  • the communication interface 306 is used in wired or wireless communication of signaling and/or data between a network node, access network, and/or UE.
  • the communication interface 306 comprises port(s)/terminal(s) 316 to send and receive data, for example to and from a network over a wired connection.
  • the communication interface 306 also includes radio front-end circuitry 318 that may be coupled to, or in certain embodiments a part of, the antenna 310.
  • Radio front-end circuitry 318 comprises filters 320 and amplifiers 322.
  • the radio front-end circuitry 318 may be connected to an antenna 310 and processing circuitry 302.
  • the radio front-end circuitry may be configured to condition signals communicated between antenna 310 and processing circuitry 302.
  • the radio front-end circuitry 318 may receive digital data that is to be sent out to other network nodes or UEs via a wireless connection.
  • the radio front-end circuitry 318 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 320 and/or amplifiers 322. The radio signal may then be transmitted via the antenna 310. Similarly, when receiving data, the antenna 310 may collect radio signals which are then converted into digital data by the radio front-end circuitry 318. The digital data may be passed to the processing circuitry 302.
  • in other embodiments, the communication interface may comprise different components and/or different combinations of components.
  • in certain alternative embodiments, the network node 300 does not include separate radio front-end circuitry 318; instead, the processing circuitry 302 includes radio front-end circuitry and is connected to the antenna 310.
  • the RF transceiver circuitry 312 is part of the communication interface 306.
  • the communication interface 306 includes one or more ports or terminals 316, the radio front-end circuitry 318, and the RF transceiver circuitry 312, as part of a radio unit (not shown), and the communication interface 306 communicates with the baseband processing circuitry 314, which is part of a digital unit (not shown).
  • the antenna 310 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals.
  • the antenna 310 may be coupled to the radio front-end circuitry 318 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly.
  • the antenna 310 is separate from the network node 300 and connectable to the network node 300 through an interface or port.
  • the antenna 310, communication interface 306, and/or the processing circuitry 302 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by the network node. Any information, data and/or signals may be received from a UE, another network node and/or any other network equipment. Similarly, the antenna 310, the communication interface 306, and/or the processing circuitry 302 may be configured to perform any transmitting operations described herein as being performed by the network node. Any information, data and/or signals may be transmitted to a UE, another network node and/or any other network equipment.
  • the power source 308 provides power to the various components of network node 300 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component).
  • the power source 308 may further comprise, or be coupled to, power management circuitry to supply the components of the network node 300 with power for performing the functionality described herein.
  • the network node 300 may be connectable to an external power source (e.g., the power grid, an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry of the power source 308.
  • the power source 308 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry.
  • Embodiments of the network node 300 may include additional components beyond those shown in Figure 3 for providing certain aspects of the network node’s functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein.
  • the network node 300 may include user interface equipment to allow input of information into the network node 300 and to allow output of information from the network node 300. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for the network node 300.
  • Figure 4 is a block diagram of a host 400, which may be an embodiment of the host 116 of Figure 1, in accordance with various aspects described herein.
  • the host 400 may be or comprise various combinations of hardware and/or software, including a standalone server, a blade server, a cloud-implemented server, a distributed server, a virtual machine, a container, or processing resources in a server farm.
  • the host 400 may provide one or more services to one or more UEs.
  • the host 400 includes processing circuitry 402 that is operatively coupled via a bus 404 to an input/output interface 406, a network interface 408, a power source 410, and a memory 412.
  • Other components may be included in other embodiments. Features of these components may be substantially similar to those described with respect to the devices of previous figures, such as Figures 2 and 3, such that the descriptions thereof are generally applicable to the corresponding components of host 400.
  • the memory 412 may include one or more computer programs including one or more host application programs 414 and data 416, which may include user data, e.g., data generated by a UE for the host 400 or data generated by the host 400 for a UE.
  • Embodiments of the host 400 may utilize only a subset or all of the components shown.
  • the host application programs 414 may be implemented in a container-based architecture and may provide support for video codecs (e.g., Versatile Video Coding (VVC), High Efficiency Video Coding (HEVC), Advanced Video Coding (AVC), MPEG, VP9) and audio codecs (e.g., FLAC, Advanced Audio Coding (AAC), MPEG, G.711), including transcoding for multiple different classes, types, or implementations of UEs (e.g., handsets, desktop computers, wearable display systems, heads-up display systems).
  • the host application programs 414 may also provide for user authentication and licensing checks and may periodically report health, routes, and content availability to a central node, such as a device in or on the edge of a core network.
  • FIG. 5 is a block diagram illustrating a virtualization environment 500 in which functions implemented by some embodiments may be virtualized.
  • virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources.
  • virtualization can be applied to any device described herein, or components thereof, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components.
  • Some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines (VMs) implemented in one or more virtual environments 500 hosted by one or more of hardware nodes, such as a hardware computing device that operates as a network node, UE, core network node, or host.
  • in embodiments in which the virtual node does not require radio connectivity (e.g., a core network node or host), the node may be entirely virtualized.
  • the virtualization environment 500 includes components defined by the O-RAN Alliance, such as an O-Cloud environment orchestrated by a Service Management and Orchestration Framework via an O-2 interface.
  • Applications 502 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) are run in the virtualization environment 500 to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.
  • Hardware 504 includes processing circuitry, memory that stores software and/or instructions executable by hardware processing circuitry, and/or other hardware devices as described herein, such as a network interface, input/output interface, and so forth.
  • Software may be executed by the processing circuitry to instantiate one or more virtualization layers 506 (also referred to as hypervisors or virtual machine monitors (VMMs)), provide VMs 508a and 508b (one or more of which may be generally referred to as VMs 508), and/or perform any of the functions, features and/or benefits described in relation with some embodiments described herein.
  • the virtualization layer 506 may present a virtual operating platform that appears like networking hardware to the VMs 508.
  • the VMs 508 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer 506.
  • a virtual appliance 502 may be implemented on one or more of VMs 508, and the implementations may be made in different ways.
  • Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV).
  • NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.
  • a VM 508 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine.
  • Each of the VMs 508, together with the part of the hardware 504 that executes that VM, forms a separate virtual network element. Still in the context of NFV, a virtual network function is responsible for handling specific network functions that run in one or more VMs 508 on top of the hardware 504, and corresponds to the application 502.
  • Hardware 504 may be implemented in a standalone network node with generic or specific components. Hardware 504 may implement some functions via virtualization. Alternatively, hardware 504 may be part of a larger cluster of hardware (e.g.
  • hardware 504 is coupled to one or more radio units that each include one or more transmitters and one or more receivers that may be coupled to one or more antennas. Radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station. In some embodiments, some signaling can be provided with the use of a control system 512 which may alternatively be used for communication between hardware nodes and radio units.
  • Figure 6 shows a communication diagram of a host 602 communicating via a network node 604 with a UE 606 over a partially wireless connection in accordance with some embodiments.
  • the UE 606 may correspond to a UE 112a of Figure 1 and/or UE 200 of Figure 2, the network node 604 to network node 110a of Figure 1 and/or network node 300 of Figure 3, and the host 602 to host 116 of Figure 1 and/or host 400 of Figure 4.
  • embodiments of host 602 include hardware, such as a communication interface, processing circuitry, and memory.
  • the host 602 also includes software, which is stored in or accessible by the host 602 and executable by the processing circuitry.
  • the software includes a host application that may be operable to provide a service to a remote user, such as the UE 606 connecting via an over-the-top (OTT) connection 650 extending between the UE 606 and host 602.
  • a host application may provide user data which is transmitted using the OTT connection 650.
  • the network node 604 includes hardware enabling it to communicate with the host 602 and UE 606.
  • the connection 660 may be direct or pass through a core network (like core network 106 of Figure 1) and/or one or more other intermediate networks, such as one or more public, private, or hosted networks.
  • an intermediate network may be a backbone network or the Internet.
  • the UE 606 includes hardware and software, which is stored in or accessible by UE 606 and executable by the UE’s processing circuitry.
  • the software includes a client application, such as a web browser or operator-specific “app” that may be operable to provide a service to a human or non-human user via UE 606 with the support of the host 602.
  • an executing host application may communicate with the executing client application via the OTT connection 650 terminating at the UE 606 and host 602.
  • the UE's client application may receive request data from the host's host application and provide user data in response to the request data.
  • the OTT connection 650 may transfer both the request data and the user data.
  • the UE's client application may interact with the user to generate the user data that it provides to the host application through the OTT connection 650.
  • the OTT connection 650 may extend via a connection 660 between the host 602 and the network node 604 and via a wireless connection 670 between the network node 604 and the UE 606 to provide the connection between the host 602 and the UE 606.
  • the connection 660 and wireless connection 670, over which the OTT connection 650 may be provided, have been drawn abstractly to illustrate the communication between the host 602 and the UE 606 via the network node 604, without explicit reference to any intermediary devices and the precise routing of messages via these devices.
  • the host 602 provides user data, which may be performed by executing a host application.
  • the user data is associated with a particular human user interacting with the UE 606.
  • the user data is associated with a UE 606 that shares data with the host 602 without explicit human interaction.
  • the host 602 initiates a transmission carrying the user data towards the UE 606.
  • the host 602 may initiate the transmission responsive to a request transmitted by the UE 606.
  • the request may be caused by human interaction with the UE 606 or by operation of the client application executing on the UE 606.
  • the transmission may pass via the network node 604, in accordance with the teachings of the embodiments described throughout this disclosure.
  • in step 612, the network node 604 transmits to the UE 606 the user data that was carried in the transmission that the host 602 initiated.
  • in step 614, the UE 606 receives the user data carried in the transmission, which may be performed by a client application executed on the UE 606 associated with the host application executed by the host 602.
  • in some examples, the UE 606 executes a client application which provides user data to the host 602. The user data may be provided in reaction or response to the data received from the host 602.
  • the UE 606 may provide user data, which may be performed by executing the client application.
  • the client application may further consider user input received from the user via an input/output interface of the UE 606.
  • the UE 606 initiates, in step 618, transmission of the user data towards the host 602 via the network node 604.
  • the network node 604 receives user data from the UE 606 and initiates transmission of the received user data towards the host 602.
  • the host 602 receives the user data carried in the transmission initiated by the UE 606.
  • One or more of the various embodiments improve the performance of OTT services provided to the UE 606 using the OTT connection 650, in which the wireless connection 670 forms the last segment. More precisely, the teachings of these embodiments may improve, e.g., the data rate, latency, and/or power consumption, and thereby provide benefits such as reduced user waiting time, relaxed restrictions on file size, improved content resolution, better responsiveness, extended battery lifetime, and a more efficient transmission of the target-CSI.
  • in an example scenario, factory status information may be collected and analyzed by the host 602. As another example, the host 602 may process audio and video data which may have been retrieved from a UE for use in creating maps.
  • the host 602 may collect and analyze real-time data to assist in controlling vehicle congestion (e.g., controlling traffic lights).
  • the host 602 may store surveillance video uploaded by a UE.
  • the host 602 may store or control access to media content such as video, audio, VR or AR which it can broadcast, multicast or unicast to UEs.
  • the host 602 may be used for energy pricing, remote control of non-time critical electrical load to balance power generation needs, location services, presentation services (such as compiling diagrams etc. from data collected from remote devices), or any other function of collecting, retrieving, storing, analyzing and/or transmitting data.
  • a measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve.
  • the measurement procedure and/or the network functionality for reconfiguring the OTT connection may be implemented in software and hardware of the host 602 and/or UE 606.
  • sensors (not shown) may be deployed in or in association with other devices through which the OTT connection 650 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software may compute or estimate the monitored quantities.
  • the reconfiguring of the OTT connection 650 may include changes to message format, retransmission settings, preferred routing, etc.; the reconfiguring need not directly alter the operation of the network node 604. Such procedures and functionalities may be known and practiced in the art.
  • measurements may involve proprietary UE signaling that facilitates measurements of throughput, propagation times, latency and the like, by the host 602.
  • the measurements may be implemented by software that causes messages, in particular empty or ‘dummy’ messages, to be transmitted using the OTT connection 650 while monitoring propagation times, errors, etc.
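  • The dummy-message measurement above can be sketched as follows. This is an illustrative Python sketch only: the echo() stand-in for the remote endpoint is hypothetical, and a real implementation would transmit the probe messages over the OTT connection 650, so the measured times would include actual propagation delay.

```python
import time

def echo(message):
    """Hypothetical stand-in for the remote end of the OTT connection,
    which simply returns the received dummy message."""
    return message

def measure_propagation(n_probes=5):
    """Send empty 'dummy' messages and record round-trip times while
    also checking for errors on the path."""
    rtts = []
    for _ in range(n_probes):
        start = time.perf_counter()
        reply = echo(b"")                      # empty probe message
        rtts.append(time.perf_counter() - start)
        assert reply == b""                    # detect errors on the path
    return rtts

samples = measure_propagation()
```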
  • a functional framework for AI/ML model lifecycle management (LCM) is described next in detail.
  • Figure 7 illustrates an AI/ML model lifecycle management (LCM) process 700 in accordance with some embodiments.
  • the LCM process includes a training pipeline 710, an inference pipeline 730, and their interactions.
  • the model’s LCM process 700 is performed by an AI model development system.
  • the LCM process 700 comprises a training or re-training pipeline, an inference pipeline, a model deployment process that deploys the trained or re-trained AI/ML model into the inference pipeline, and a drift detection process that detects drift in model operation.
  • a training or re-training pipeline 710 may include the following functional blocks: data ingestion 712, data pre-processing 714, model training 716, model evaluation 718, and model registration 720.
  • In the data ingestion functional block, the AI model development system gathers raw data or training data from a data storage. After this functional block, there may also be another functional block that controls the validity of the gathered data.
  • In the data pre-processing functional block, the AI model development system performs feature engineering on the gathered data. For example, data normalization or a required data transformation may be applied to the gathered data.
  • In the model training functional block, the actual model training is performed by the AI model development system.
  • An inference pipeline 730 may include the following functional blocks: data ingestion 732, data pre-processing 734, model operational 736, and data and model monitoring 738.
  • In the data ingestion functional block, the AI model development system gathers raw data or inference data from a data storage.
  • the data pre-processing functional block is similar to the data pre-processing functional block in the training pipeline.
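  • The training pipeline blocks 712-720 described above can be sketched as plain functions chained together. This is an illustrative Python sketch only: the function names, the trivial mean-predictor "model", and the in-memory registry are all hypothetical assumptions, not part of any 3GPP or O-RAN interface.

```python
import statistics

MODEL_REGISTRY = {}

def ingest(data_storage):
    """Data ingestion 712: gather raw training data from a data storage,
    dropping invalid (None) entries as a simple validity control."""
    return [x for x in data_storage if x is not None]

def preprocess(samples):
    """Data pre-processing 714: normalize to zero mean, unit variance."""
    mean = statistics.mean(samples)
    std = statistics.pstdev(samples) or 1.0
    return [(x - mean) / std for x in samples]

def train(features):
    """Model training 716: here just fit a trivial mean predictor."""
    return {"prediction": statistics.mean(features)}

def evaluate(model, features):
    """Model evaluation 718: mean squared error of the trained model."""
    return statistics.mean((x - model["prediction"]) ** 2 for x in features)

def register(model, score, name="csi_model"):
    """Model registration 720: store the model and its score."""
    MODEL_REGISTRY[name] = (model, score)

# Run the training pipeline end to end on toy data.
raw = [1.0, 2.0, None, 3.0, 4.0]
features = preprocess(ingest(raw))
model = train(features)
register(model, evaluate(model, features))
```

The inference pipeline 730 would reuse ingest() and preprocess() on inference data, apply the registered model in the model operational block, and feed the resulting errors into data and model monitoring.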
  • FIG. 8 illustrates a functional framework for AI/ML model Lifecycle Management with different Network-User Equipment (NW-UE) collaboration levels in physical layer use cases in accordance with some embodiments.
  • NW-UE collaboration levels for one- and two-sided AI/ML models are described next in detail.
  • AI/ML models for the NR air interface can be classified into two types: one-sided AI/ML models and two-sided AI/ML models.
  • a one-sided AI/ML model can either be a UE-sided model, where the inference is performed entirely at the UE, or a NW-sided model, where the inference is performed entirely at the NW by a network node.
  • the two-sided AI/ML model involves paired models where joint inference is performed across the UE and the NW.
  • the initial part of the inference is executed by the UE, and the subsequent part is completed by a base station in NR, referred to as a next generation Node B (“gNodeB” or “gNB”).
  • the process may also be reversed, with the initial part of the inference performed by the gNB and the subsequent part completed by the UE.
  • Figure 9 illustrates an autoencoder-based two-sided AI/ML model used for CSI reporting in accordance with some embodiments.
  • encoder 901, which is on the UE-side of the two-sided autoencoder model, is operated on the UE to compress the estimated wireless channel.
  • the output of the encoder, which is the compressed estimate of the wireless channel, is reported from the UE to a network node such as a gNB.
  • the network uses decoder 902, which is on the NW-side of the two-sided autoencoder model, to reconstruct the estimated wireless channel information.
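The split inference of the two-sided autoencoder model can be illustrated with a minimal sketch. The dimensions, the random linear encoder/decoder, and the quantization step are all assumptions for illustration; a deployed system would use trained neural networks on each side.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 32 virtual Tx-ports compressed into an 8-entry latent report.
N_PORTS, LATENT = 32, 8

# Stand-ins for the trained UE-part (encoder 901) and NW-part (decoder 902); random
# linear maps are assumptions that merely show the split of the inference.
W_enc = rng.standard_normal((LATENT, N_PORTS))
W_dec = np.linalg.pinv(W_enc)  # the decoder approximately inverts the encoder

def ue_encode(csi):
    """UE side: compress the estimated channel and coarsely quantize it for reporting."""
    return np.round(W_enc @ csi, 2)

def nw_decode(report):
    """NW side: reconstruct the channel estimate from the reported compressed CSI."""
    return W_dec @ report

csi = rng.standard_normal(N_PORTS)          # estimated wireless channel (real-valued for brevity)
reconstructed = nw_decode(ue_encode(csi))   # joint inference across UE and NW
```

The key point is the split: `ue_encode` runs on the UE and only its quantized output crosses the air interface, while `nw_decode` runs at the gNB.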
  • the first level of collaboration is where no collaboration exists between network nodes and UEs.
  • a proprietary ML model operating within the existing standard air-interface is applied at one end of the communication chain, such as the UE side.
  • the model lifecycle management tasks such as model selection and training, model monitoring, model retraining, and model update, are performed at this node without inter-node assistance, such as assistance information provided by the network node.
  • the second level of collaboration is where limited collaboration exists between network nodes and UEs for one-sided models.
  • an ML model operates at one end of the communication chain, such as the UE side.
  • When performing its model life cycle management tasks, the node receives limited assistance from one or more nodes at the other end of the communication chain, such as a gNB.
  • the limited assistance may include, for example, training and/or retraining of the AI/ML model, model update, model monitoring, and model selection, fallback, and/or switching, etc.
  • the third level of collaboration is where there are joint ML operations between network nodes and UEs for two-sided models. In this scenario, the AI/ML model is divided into two parts, with one part located on the NW side, and the other part on the UE side. Consequently, the AI/ML model requires joint inference between the NW and UE. At this level of collaboration, the AI/ML model life cycle management involves both ends of the communication chain.
  • a UE runs an encoder model as part of generating a CSI report, while the network node runs a decoder model to interpret the generated CSI report.
  • An actual model is trained with a specific purpose for deployment in CSI compression and decompression. This includes an encoder in the UE, or a decoder in the network node used for CSI reporting.
  • a nominal model is a model that is trained for purposes other than the deployment for CSI reporting.
  • One purpose for training a nominal model is for technical training.
  • the first monitoring method, monitoring based on intermediate Key Performance Indicators (KPIs) such as inference accuracy, requires collecting new ground-truth data similar or identical to the training data. Although this method yields high accuracy, the cost is high because of the potentially large measurement and reporting overhead.
  • the second monitoring method is monitoring based on data distribution of input and output data. This method does not require additional signaling overhead. However, this method is less accurate than monitoring based on inference accuracy because the ground-truth is not retrieved.
  • the third monitoring method is monitoring based on system performance. Similar to the second method, the third method does not require any additional signaling overhead. However, if the system performance being monitored is bad, it can be challenging to determine whether the root cause for the bad performance is due to an inaccurate model, or due to other malfunctioning procedures or hardware.
  • the fourth monitoring method is monitoring based on data distribution. In contrast to other methods, this method can identify potential problems in the model by detecting that the dataset observed during inference has different statistics than the dataset used during training.
  • the data distribution based KPIs may include, e.g., low false alarm rate, low missed detection rate, or low latency.
  • Examples of model output accuracy-based performance monitoring results may include intermediate KPI per monitoring data sample, intermediate KPI statistics associated with a monitoring dataset, the percentage of monitoring data samples within a monitoring dataset for which the intermediate KPI fulfills a certain condition, or a flag indicating whether the model is functioning correctly.
  • Examples of data drift-based performance monitoring results may include monitoring data statistics, the difference between the monitoring data statistics and the data statistics obtained in the model training stage, or a flag indicating whether data drift is detected.
  • the monitoring method can be selected based on UE service requirements. For example, a UE with mobile broadband (MBB) may start with a low-cost solution, such as a system performance based solution.
  • Let W be a complex-valued tensor of dimensions (Number of virtual Tx-ports) × (Number of subbands) × (Number of multiple-input multiple-output (MIMO) layers), representing a complete precoder tensor for a number of MIMO layers.
  • For MIMO layer l, let the precoder matrix W_l have dimensions of (Number of virtual Tx-ports) × (Number of subbands).
  • V-variables are used to denote ground truth CSI, e.g., v_l being a channel Tx-covariance eigenvector corresponding to the lth largest eigenvalue.
  • P-variables are used to denote AI/ML reconstructed approximations of said ground truth CSI, e.g., p_l.
  • The Mean-Square Error (MSE) can be described as ‖v_l − p_l‖², and the normalized mean square error (NMSE) as ‖v_l − p_l‖² / ‖v_l‖². The norms can be any suitable norms, e.g., the spectral 2-norm, the Frobenius norm, the 1-norm, the infinity norm, or even the pseudo norm called 0-norm.
  • the MSE and NMSE for subband precoders can be summed over all subbands to get a KPI for the whole precoder matrix, for a specific layer l. With a KPI per precoder matrix, a certain weighting can be applied between layers to compute a single KPI for an entire precoder tensor.
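As a concrete illustration of these intermediate KPIs, the following sketch computes SGCS and NMSE for a ground-truth eigenvector v and a reconstruction p, using the 2-norm; the example vectors are arbitrary.

```python
import numpy as np

def sgcs(v, p):
    """Squared generalized cosine similarity between ground truth v and reconstruction p."""
    return float(np.abs(np.vdot(v, p)) ** 2 / (np.vdot(v, v).real * np.vdot(p, p).real))

def nmse(v, p):
    """Normalized mean-square error using the 2-norm (one of the suitable norms)."""
    return float(np.linalg.norm(v - p) ** 2 / np.linalg.norm(v) ** 2)

v = np.array([1 + 1j, 0.5 - 0.2j, -0.3 + 0.8j])   # ground-truth eigenvector (V-variable)
p = 0.9 * v                                        # reconstruction (P-variable), aligned direction
# sgcs(v, p) -> 1.0 (scale-invariant); nmse(v, p) -> 0.01
```

Because p is a scaled copy of v, the SGCS equals 1 (the metric is scale-invariant) while the NMSE is 0.01.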
  • Loss Function Ranges and Normalization [0121] Different loss functions and/or KPIs can take different values. However, in many cases these are normalized to be within the range from 0 to 1, or from -1 to 1. Different cosine similarity measures fall in this category. Moreover, for any bounded loss function or KPI, it is possible to derive another KPI that is normalized.
  • For a KPI f bounded on an interval [a, b], a normalized KPI can be defined as g(x) = (f(x) − a) / (b − a).
  • In this case, g(x) takes values between 0 and 1.
  • Alternatively, it is possible to use a sigmoid function to obtain a bounded range of 0 to 1, or -1 to 1.
  • One example is the hyperbolic tangent: g(x) = tanh(f(x)). For f(x) having range [−∞, ∞], g(x) has the range [-1,1].
  • Another example is the (Gauss) error function: g(x) = erf(f(x)), where erf(z) = (2/√π) ∫₀ᶻ e^(−t²) dt. For f(x) having range [−∞, ∞], g(x) has the range [-1,1].
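The normalizations above can be sketched as follows; `minmax_normalize` implements g(x) = (f(x) − a)/(b − a), and the two squashing functions handle unbounded KPIs. The composition at the end is only one possible choice.

```python
import math

def minmax_normalize(x, a, b):
    """Map a KPI value bounded on [a, b] to [0, 1]: g(x) = (x - a) / (b - a)."""
    return (x - a) / (b - a)

def tanh_squash(x):
    """Map an unbounded KPI value into [-1, 1] with the hyperbolic tangent."""
    return math.tanh(x)

def erf_squash(x):
    """Map an unbounded KPI value into [-1, 1] with the (Gauss) error function."""
    return math.erf(x)

# An unbounded value (e.g., an NMSE) can first be squashed to [-1, 1],
# then shifted into [0, 1] if a 0-to-1 range is desired:
bounded = minmax_normalize(tanh_squash(2.5), -1.0, 1.0)
```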
  • Method 1 The first method is UE-side monitoring based on the output of CSI reconstruction model.
  • Method 1 requires the NW to transmit or provide the output of its CSI reconstruction model (decoder) to the UE, which introduces a signaling overhead in downlink (DL).
  • the NW needs to provide the loss function used for training at the NW-side to the UE, so that the UE can use it to derive the intermediate KPI. Otherwise, there may be a mismatch on how the loss is calculated. For example, the NW and the UE may have different loss functions to assess the performance.
  • the first method requires signaling of multiple samples of the NW decoder output to the UE within a time window. To reduce the signaling overhead, with the first method the intermediate-KPI-based model monitoring on the UE-side can only be performed either periodically with a large periodicity or be event triggered. [0131] Lastly, a concern with Method 1 is that the input and output relation of the decoder and the loss function of the decoder will be exposed.
  • Method 2 is the UE-side monitoring based on the output of a proxy model at the UE-side, where the proxy model is a proxy CSI reconstruction part. Method 2 does not require the NW to transmit or provide the reconstructed or output CSI to the UE for monitoring the two-sided CSI compression model. This is because the UE can obtain proxy reconstructed CSIs using its proxy model, e.g., a nominal or reference decoder, and thereby derive a proxy intermediate KPI by comparing the proxy reconstructed CSI with the associated target CSI.
  • Method 2 has several drawbacks.
  • the proxy model at the UE will not be an accurate representation of the actual decoder on the NW-side. Therefore, the proxy intermediate KPI values may not reflect the actual intermediate KPI values for the two-sided model, which may undermine the purpose of performance monitoring of the two-sided model. In addition, an additional model LCM is required for training, deploying, monitoring, or testing the proxy CSI reconstruction model at the UE, resulting in additional complexity.
  • monitoring the performance of the proxy CSI reconstruction model requires the NW to transmit multiple samples of the NW decoder output to the UE within a time window. This introduces DL signaling overhead and results in the proprietary aspects of the NW-part of the two-sided model (decoder) being disclosed.
  • Method 3 The third method is UE-side monitoring based on the output of a proxy model at the UE-side, where the proxy model directly outputs SGCS values. Method 3 does not require NW to transmit or provide the reconstructed or output CSI to the UE for monitoring the two-sided CSI compression model. This is because the UE can use the proxy model to directly estimate an SGCS for the two-sided CSI-compression model. [0136] Method 3 has several drawbacks. Firstly, the reliability and accuracy of Method 3 depend heavily on the performance of the proxy model, for example, how accurately the SGCS value can be estimated or predicted.
  • introducing a proxy model adds additional model LCM complexity for training, deploying, monitoring, or testing the proxy model.
  • the third method requires the NW to either send the reconstructed CSI (NW-side model output) or intermediate KPI values (e.g., SGCS) to the UE for the UE-side to train the proxy model.
  • NW needs to send the reconstructed CSI or intermediate KPI values to the UE for monitoring the performance of the proxy model.
  • new methods are needed to support the performance monitoring of a two-sided CSI compression model.
  • a new method is provided for UE to estimate or predict an intermediate KPI range indicator or index, and to report the same to a NW.
  • the intermediate KPI range indicator or index indicates the performance of a two-sided AI/ML model, and is estimated or predicted by using a one-sided AI/ML model at the UE which is different from the primary model.
  • the intermediate KPI, such as SGCS, NMSE, or loss-value, that is associated with the two-sided AI/ML model is divided into several value ranges, where different ranges or indexes are associated with different model LCM operations for the two-sided AI/ML model.
  • the operations may include, e.g., triggering model deactivation, model activation or model monitoring procedure of the two-sided AI/ML model.
  • the intermediate KPI metric or format and its corresponding ranges can be configured by the NW or defined by the standard specification for the associated two-sided AI/ML model based feature. Different novel methods for defining KPI ranges are described herein.
  • the one-sided AI/ML model for estimating or predicting the intermediate KPI range indicator or index is trained by using ground-truth labels provided by the NW, e.g., intermediate KPI range indicators or indexes associated with the performance of the two-sided AI/ML model. The performance of the one-sided AI/ML model is monitored at either the NW side or at the UE side.
  • a UE is configured by a NW to monitor and report the performance of a two-sided AI/ML model (the primary model).
  • the UE performs measurements on at least one reference signal resource indicated by a reference signal configuration from the NW.
  • a further configuration from the NW associates the reference signal to inference using the two-sided AI/ML model and associated CSI reporting.
  • the UE also performs model inference based on the measurements using the UE-part of the two-sided model.
  • the UE estimates or predicts at least one intermediate KPI range indicator or index of the two-sided model based on at least the output of the UE-part of the two-sided model.
  • the UE stores at least one estimated or predicted intermediate KPI range indicator or index related information at the UE, and/or reports at least one estimated or predicted intermediate KPI range indicator or index related information to the NW.
  • the format or metric of the at least one intermediate KPI is defined and configured by the NW or defined by the standard specification for the associated two-sided AI/ML model based feature.
  • the ranges of the at least one intermediate KPI are defined and configured by the NW or defined by the standard specification for the associated two-sided AI/ML model based feature.
  • the at least one intermediate KPI range indicator or index related information triggers a corresponding model LCM operation for the two-sided model at either the UE-side or the NW-side or both sides.
  • the estimation or prediction of the at least one intermediate KPI range indicator or index of the two-sided primary model is performed by using a secondary one-sided AI/ML model at the UE.
  • the one-sided AI/ML model is trained by intermediate KPI range indicator/index ground-truth labels generated at the NW.
  • the at least one intermediate KPI indicator is reported together with the corresponding inference output of UE-part of the two-sided model (e.g., encoder output CSI) from the UE to the NW.
  • the at least one intermediate KPI indicator can be used as a type of assistance information for the NW to understand the quality of the corresponding inference output from the UE-part of the two-sided model.
  • the at least one intermediate KPI indicator is reported together with the corresponding two-sided AI/ML model ground-truth label (e.g., target CSI for the CSI compression use case) and the corresponding output of UE-part of the two-sided model (e.g., encoder output CSI) from the UE to the NW. This is to enable the NW monitoring of the intermediate KPI indicator estimation or prediction performance.
  • the UE stores at least one intermediate KPI range indicator or index related information and receives at least a ground-truth label associated with the at least one intermediate KPI range indicator or index from the NW.
  • the UE also monitors the intermediate KPI estimation or prediction performance by comparing the at least one ground-truth label with the stored at least one intermediate KPI range indicator or index related information.
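The per-report flow of this embodiment can be sketched as follows. The range edges, the heuristic standing in for the secondary one-sided model, and the encoder-output dimensions are all hypothetical; in practice the one-sided model is trained on NW-provided ground-truth labels.

```python
import numpy as np

# Hypothetical SGCS range edges configured by the NW (2 bits -> 4 range indices).
RANGE_EDGES = (0.5, 0.723, 0.88)

def kpi_range_index(sgcs_estimate):
    """Map an estimated SGCS value to the NW-configured range indicator/index."""
    return sum(sgcs_estimate >= edge for edge in RANGE_EDGES)

def one_sided_model(encoder_output):
    """Stand-in for the UE's secondary one-sided model. A trained classifier would
    predict the range index from the encoder output; this heuristic is an assumption."""
    return float(np.clip(1.0 - np.var(encoder_output), 0.0, 1.0))

rng = np.random.default_rng(1)
encoder_output = 0.1 * rng.standard_normal(16)    # inference output of the UE-part
indicator = kpi_range_index(one_sided_model(encoder_output))
# The UE stores this indicator and/or reports it to the NW alongside the CSI report.
```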
  • Certain embodiments may provide one or more of the following technical advantages. Some embodiments disclosed herein estimate intermediate KPI range indicators using classification methods. This is easier than Method 3's estimating the exact value of the intermediate KPI, which uses regression methods.
  • an AI/ML model trained for estimating intermediate KPI value range indicators can achieve better accuracy, compared to an AI/ML model trained for estimating the exact value of the intermediate KPI.
  • An intermediate KPI range indicator is sufficient for assisting two-sided model LCM operations, including fall back and activation processes.
  • a range indicator can be represented by a single or multiple bits.
  • if the one-sided AI/ML model trained for estimating the intermediate KPI value range indicator is monitored at the UE side, the NW is required to transmit ground-truth labels to the UE.
  • the signaling from NW to UE in some disclosed embodiments is in the format of indicators. This requires less payload size as compared to Method 3, which requires NW to transmit either decoder output or SGCS values to the UE. This implies less DL signaling overhead.
  • the small payload size also makes it possible to transmit the payload over Downlink Control Information (DCI), resulting in reduced latency as compared to Radio Resource Control (RRC) signaling.
  • the one-sided AI/ML model is trained and monitored based on Intermediate KPI range indicator labels defined and provided by the NW.
  • the disclosed embodiments enable the NW to configure or define an intermediate KPI associated with the two-sided model by considering MU-MIMO (multi-user MIMO) performance and network specific implementations, but without disclosing the NW-part model information such as loss function, loss value, NW-part model output, etc., to the UE.
  • the monitoring metrics used at both UE and NW sides are consistent since they are defined by the NW side and signaled to the UE for training or monitoring the one-sided AI/ML model. This enables the alignment between NW and UE sides on the two-sided model performance, and consequently, enables aligned decisions on model LCM operations for the two-sided model.
  • a UE is configured to monitor the performance of a two-sided AI/ML model by a network node (NW).
  • the UE performs measurements on at least one reference signal resource indicated by a reference signal configuration associated with the two-sided AI/ML model.
  • the UE also performs model inference based on the measurements using the UE-part of the two-sided model.
  • the UE also estimates or predicts at least one intermediate KPI range indicator or index of the two-sided model based on at least the output of the UE-part of the two-sided model.
  • the UE stores and/or reports the estimated or predicted at least one intermediate KPI range indicator or index related information to the network node.
  • a different AI/ML model, i.e., a one-sided AI/ML model, can be used to estimate or predict the intermediate KPI range indicator or index at the UE.
  • the model training and model monitoring aspects of the one-sided AI/ML model are discussed in detail in the sections below titled “Training of a one-sided model to estimate/predict the intermediate KPI range indicator,” and “Monitoring the performance of the one-sided model.”
  • Definition of Intermediate KPI of a Two-Sided Model and Range Indication [0161]
  • the format or metric of the at least one intermediate KPI is defined and configured by the NW or defined by the standard specification for the associated two-sided AI/ML model-based feature.
  • an intermediate KPI of a two-sided model is defined as a function of a ground-truth label (e.g., ground truth CSI for the CSI-compression use case) and an output of the two-sided model (e.g., the reconstructed CSI generated by the NW-part decoder model).
  • the function for deriving an intermediate KPI can be based on cosine similarity, such as GCS, SGCS, ECS, or SECS, or based on mean-square errors, such as MSE or NMSE.
  • an intermediate KPI of a two-sided model is defined based on the loss functions used for training the two-sided model.
  • the intermediate KPI can be a decoder reconstruction error metric defined by a loss function used for training the two-sided model at the NW-side. This loss function is unknown to the UE-side.
  • Ways to Define Intermediate KPI Ranges [0165]
  • the ranges of the at least one intermediate KPI are defined and configured by the NW or defined by the standard specification for the associated two-sided AI/ML model based feature.
  • the at least one intermediate KPI range indicator or index related information triggers a corresponding model LCM operation for the two-sided model at the UE-side, the NW-side, or both sides.
  • Embodiments for Choosing Ranges for Bounded KPIs [0169] Let d be the number of bits needed for communicating the range indicator or index. Then it is possible to communicate 2^d different range indicators or indices. There are several different embodiments for defining the ranges for an intermediate KPI.
  • One embodiment defines the intermediate KPI ranges through uniform discretization.
  • In this embodiment, the interval is discretized into 2^d equally sized intervals or ranges.
  • the ranges for the at least one intermediate KPI are defined through uniform discretization based on the number of bits configured for signaling the range indicator from the UE to the NW.
  • Another embodiment defines uneven ranges from the interpretation of KPI. For some KPIs the values have interpretations or relations to legacy algorithms or schemes associated with the two-sided AI/ML model-based feature.
  • the ranges for the at least one intermediate KPI are defined based on at least one performance metric (e.g., mean SGCS) of at least one legacy algorithm or scheme.
  • Such algorithm may include, for example, eType II CSI reporting with a given set of configuration parameters associated with the two-sided AI/ML model-based feature or functionality.
  • the ranges are configured by the NW based on the scenarios or conditions under which the two-sided model is operated and signaled from the NW to the UE.
  • the mean SGCS achieved by legacy CSI reporting formats over some representative dataset can be examined.
  • the mean values for one implementation of eType II CSI reporting, for MIMO layer 1, in a dense Urban scenario are listed as follows: • Parameter combination 1: Mean SGCS 0.723.
  • One embodiment uses the method of “cutting lower values”.
  • a value α is chosen, the interval [0, α] is set to one range, and the interval [α, 1] is discretized into 2^d − 1 uniform intervals or ranges.
  • the value α can be chosen, e.g., by comparing SGCS values for legacy algorithm(s). For example, the value α can be set to 0.723, which was found to be the mean value in one implementation of eType II CSI reporting, Parameter combination 1, for MIMO layer 1, in a dense Urban scenario.
  • Another embodiment uses the method of “cutting higher values”.
  • a value β is chosen, the interval [β, 1] is set to one range, and the interval [0, β] is discretized into 2^d − 1 uniform intervals or ranges.
  • the value β can be chosen, e.g., by comparing SGCS values for legacy algorithm(s). For example, the value β can be set to 0.880, which was found to be the mean value in one implementation of eType II CSI reporting, Parameter combination 8, for MIMO layer 1, in a dense Urban scenario.
  • Yet another embodiment uses the method of "cutting both lower and higher values." This is a combination of the two embodiments above. In this embodiment, two values α and β are chosen, such that α < β.
  • the interval [0, α] is set to one range and the interval [β, 1] is set to one range. Then, the interval [α, β] is discretized into 2^d − 2 uniform intervals or ranges.
  • the values α and β can be chosen, e.g., by comparing SGCS values for legacy algorithm(s). For example, the value α can be set to 0.723 and the value β to 0.880, which were found to be the mean values for one implementation of Rel-16 eType II CSI reporting, Parameter combinations 1 and 8 respectively, for MIMO layer 1 in a dense Urban scenario. [0182]
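The range-definition methods above can be sketched as edge constructions over [0, 1]; the α value 0.723 is taken from the eType II example above, while the mapping helper at the end is an illustration.

```python
import numpy as np

def uniform_edges(d):
    """Uniform discretization: 2**d equally sized ranges on [0, 1]."""
    return np.linspace(0.0, 1.0, 2 ** d + 1)

def cut_lower_edges(d, alpha):
    """Cutting lower values: [0, alpha] is one range; [alpha, 1] gets 2**d - 1 uniform ranges."""
    return np.concatenate(([0.0], np.linspace(alpha, 1.0, 2 ** d)))

def cut_higher_edges(d, beta):
    """Cutting higher values: [beta, 1] is one range; [0, beta] gets 2**d - 1 uniform ranges."""
    return np.concatenate((np.linspace(0.0, beta, 2 ** d), [1.0]))

def cut_both_edges(d, alpha, beta):
    """Cutting both: [0, alpha] and [beta, 1] are single ranges; [alpha, beta] gets 2**d - 2."""
    return np.concatenate(([0.0], np.linspace(alpha, beta, 2 ** d - 1), [1.0]))

def range_index(kpi, edges):
    """Map a KPI value to its range indicator/index given the range edges."""
    return int(np.searchsorted(edges, kpi, side="right")) - 1

# With d = 2 bits and alpha = 0.723 (mean SGCS of a legacy eType II configuration):
edges = cut_lower_edges(2, 0.723)   # 5 edges delimiting 4 ranges
idx = range_index(0.85, edges)      # -> 2 (0.85 falls in the third range)
```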
  • the above description contains examples of computing mean SGCS for legacy algorithms over some representative dataset.
  • Statistical quantities other than mean values can also be computed, e.g., any number of percentiles, such as the 5-percentile, 50-percentile, 75-percentile, or 95-percentile, etc. Different points for defining the ranges may also use both the mean and multiple percentile values for the same legacy algorithm (and parameter combination). For example, the mean and 75-percentile for parameter combinations 1 and 5 can be used, instead of the mean for 1, 3, 5, and 8, as described in the example above. [0183] For a bounded KPI taking values on the interval [a, b] for some bounded real numbers a and b such that a < b, there are two main approaches.
  • Adaptations to KPIs where 0 (or another lower bound a) corresponds to a perfect reconstruction: it should be appreciated by a person of ordinary skill in the art that, for such KPIs, larger values indicate worse reconstruction.
  • SGCS is used as the example of an intermediate KPI.
  • the methodology is applicable to other intermediate KPI formats with bounded values, e.g., loss value generated by the loss function used for training the two-sided AI/ML model.
  • Embodiments for Choosing Ranges for Unbounded KPIs [0187] In one embodiment for unbounded KPIs, such as MSE and NMSE, a transformation is used with a bounded output, e.g., as described previously.
  • the transformed KPI can be treated with any embodiments for bounded KPIs listed above.
  • approaches or embodiments corresponding to the three hybrid embodiments described above may be used.
  • One embodiment is for functions taking values on the interval [−∞, b] for some bounded real value b.
  • In this embodiment, a value c is chosen, the interval [−∞, c] is set to one range, and the interval [c, b] is discretized into 2^d − 1 intervals or ranges with any of the methods described in the embodiments for bounded KPIs listed above.
  • the value c can be chosen, e.g., by comparing KPI values for legacy algorithm(s) and/or another interpretation of the KPI.
  • Another embodiment is for functions taking values on the interval [a, ∞] for some bounded real value a.
  • In this embodiment, a value c is chosen, the interval [c, ∞] is set to one range, and the interval [a, c] is discretized into 2^d − 1 intervals or ranges with any of the methods described in the embodiments for bounded KPIs listed above.
  • the value c can be chosen, e.g., by comparing KPI values for legacy algorithm(s) and/or another interpretation of the KPI.
  • For example, the NMSE can take values in the interval [0, ∞].
  • Yet another embodiment is for functions taking values on the interval [−∞, ∞]. This is a combination of the two embodiments above. In this embodiment, two values c₁ and c₂ are chosen, such that c₁ < c₂.
  • the interval [−∞, c₁] is set to one range and the interval [c₂, ∞] is set to one range.
  • Then, the interval [c₁, c₂] is discretized into 2^d − 2 intervals or ranges with any of the methods described in the embodiments for bounded KPIs listed above.
  • the values c₁ and c₂ can be chosen, e.g., by comparing KPI values for legacy algorithm(s) and/or other interpretations of the KPI.
  • Embodiments for Differentiating MIMO Layers [0194] If the intermediate KPI is computed per MIMO layer, then the ranges can be chosen equally for all layers or individually for each layer. In one embodiment, the ranges are equal for all MIMO layers. [0195] In another embodiment, the ranges of the at least one intermediate KPI are defined per MIMO layer.
  • Table 1 is an example of the effect of subband averaging, where the Tx-covariance is averaged over 4 frequency units and eigenvectors of that average are computed and used as subband precoders.
  • Table 1 (mean SGCS per MIMO layer): Layer 1: 0.955; Layer 2: 0.919; Layer 3: 0.854; Layer 4: 0.795. [0196]
  • the SGCS is compared with the eigenvector, which is computed from the per- frequency-unit Tx-covariance matrix. It can be seen that the SGCS is decreasing for higher layers, indicating faster fluctuation over frequency and less compressibility. This is discussed in more detail in Reference 6.
  • In some embodiments, a bit interpretation may be defined.
  • the same bit size is used for defining the intermediate KPI ranges for different MIMO layers.
  • In this case, ν · d bits may be used to report the model monitoring, where ν is the number of MIMO layers for which the KPI is reported by the UE.
  • different bit sizes are used for defining intermediate KPI ranges for different MIMO layers. For example, the number of ranges (or number of bits) used for Layer 3 and Layer 4 may be less than the number of ranges used for Layer 1 and Layer 2. In some scenario, this is because Layer 3 and Layer 4 may have lower achievable KPI, and/or are used less often compared to Layer 1 and Layer 2.
  • the total number of bits used for the reporting is Σ_{l=1}^{ν} d_l, where d_l is the number of bits used for layer l for performance monitoring.
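The two bit-allocation options reduce to a simple sum over layers; the layer counts and bit sizes below are illustrative.

```python
def total_monitoring_bits(bits_per_layer):
    """Total report size when layer l uses bits_per_layer[l-1] bits for its KPI range."""
    return sum(bits_per_layer)

# Equal bit size: d = 2 bits for each of nu = 4 reported layers gives nu * d = 8 bits.
equal = total_monitoring_bits([2, 2, 2, 2])     # -> 8

# Layer-specific sizes: fewer ranges (bits) for Layers 3 and 4, which have
# lower achievable KPI and/or are used less often than Layers 1 and 2.
uneven = total_monitoring_bits([2, 2, 1, 1])    # -> 6
```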
  • Embodiments for Choosing ranges Based On Different Ground Truth CSI [0200] Multiple formats may be defined for reporting ground truth. There are also concepts for how to configure the reporting format if a UE supports more than one. This is disclosed in more detail in Reference 1. The different ground truth formats provide different accuracy when used for monitoring. In one embodiment, the ranges depend on the ground truth format used by the UE when training the encoder and/or proxy model.
  • the number of ranges depend on the ground truth format used by the UE when training the encoder and/or proxy model. For example, one bit can be used if the UE trains its model using a low-quality ground truth format, e.g., only classifying as “Acceptable” and “Not acceptable”. On the contrary, two bits can be used if the UE trains its models using a high-quality ground truth format. Consequently, the KPIs computed and used for training are more accurate estimations of an ideal KPI thought of as comparing the output precoder with the theoretically optimal precoder.
  • the range may be defined differently depending on the payload size of the CSI report that is being monitored.
  • the range of intermediate KPI is defined based on the payload size of the UE report generated based on the inference output of the UE-part of the two-sided model.
  • the payload size of the AI-CSI report generated by the UE-part encoder model is ~60 bits
  • 0 and 1 may be interpreted as the KPI being lower than and higher than a first threshold, respectively.
  • the payload size is ~110 bits
  • 0 and 1 may be interpreted as the KPI being lower than and higher than a second threshold, respectively.
  • the first and the second thresholds may be, for example, a KPI equal or similar (e.g., with a certain value added) to the KPI for the legacy eType II CSI format with ParComb 1 and the legacy eType II CSI format with ParComb 3, respectively, as the legacy eType II CSI formats with ParComb 1 and ParComb 3 have payload sizes of ~60 bits and ~110 bits, respectively.
  • the ranges for the ~60-bit model may for example be [0, 0.723] and [0.723, 1]
  • for the ~110-bit model, the ranges may for example be [0, 0.802] and [0.802, 1].
  • the examples mostly cover monitoring for one CSI payload size.
  • the monitored CSI payload size may be configured by the NW or indicated by the UE. This, however, should not limit the possibility of monitoring multiple payload sizes simultaneously.
  • the total number of bits used in the monitoring then, is the sum of the number of bits used for monitoring for each payload size.
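A sketch of the payload-size-dependent one-bit interpretation, using the example thresholds above (0.723 for the ~60-bit report and 0.802 for the ~110-bit report); the dictionary layout and the simultaneous-monitoring report format are assumptions.

```python
# Hypothetical mapping from the monitored CSI payload size (in bits) to the SGCS
# threshold that the one-bit indicator compares against (values from the example above).
PAYLOAD_THRESHOLDS = {60: 0.723, 110: 0.802}

def one_bit_indicator(sgcs_value, payload_bits):
    """Return 1 if the KPI is higher than the payload-size-specific threshold, else 0."""
    return int(sgcs_value >= PAYLOAD_THRESHOLDS[payload_bits])

# Monitoring both payload sizes simultaneously costs the sum of the per-size bits:
report = [one_bit_indicator(0.78, 60), one_bit_indicator(0.78, 110)]  # -> [1, 0]
```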
  • Training of a One-sided Model to Estimate or Predict the Intermediate KPI range Indicator [0208] In one embodiment, the estimation/prediction of the at least one intermediate KPI range indicator/index of the two-sided model is performed by using a one-sided AI/ML model at the UE, where the one-sided AI/ML model is trained by intermediate KPI range indicator/index ground-truth labels generated at the NW.
  • the one-sided model for estimating or predicting the at least one intermediate KPI range indicator can be trained while training the two-sided model or after the two-sided model has been trained and deployed at the UE.
  • a method is provided for training a one-sided model to be deployed at a UE for estimating an intermediate KPI indicator of a two-sided CSI-compression model.
  • the one-sided model is trained while training the two-sided CSI-compression model.
  • the method comprises Step 1, where the NW collects target CSI samples from the UE side and uses the target CSI samples to train a NW-part of the CSI compression model (the actual decoder) and a nominal UE-part of the CSI compression model (the nominal encoder).
  • the method further comprises Step 2, where the NW creates at least one dataset for training the UE-part of the two-sided CSI compression model (the actual encoder) and the one-sided model for estimating or predicting the intermediate KPI range indicator of the two-sided CSI-compression model.
  • the dataset contains at least the target CSI samples, the corresponding output CSI samples generated from the nominal encoder, and the corresponding intermediate KPI range indicator (e.g., SGCS or loss indicator/index) samples.
  • the intermediate KPI range indicator samples are collected at the NW-side by first deriving the intermediate KPI values using the reconstructed CSI samples generated from the actual decoder and the target CSI samples according to the intermediate KPI definition (e.g., intermediate KPI defined in terms of SGCS or NMSE), and then mapping the intermediate KPI value samples into the range indicators based on the range definition, e.g., one of the range definitions described previously.
  • the intermediate KPI is defined as the loss value generated by the loss function used for training the actual decoder and nominal encoder at the NW-side.
  • the intermediate KPI range indicator samples are collected at the NW-side by mapping the loss values into the range indicators based on the range definition, e.g., one of the range definitions described previously.
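The KPI derivation and range mapping described above can be sketched as follows, assuming the SGCS definition of the intermediate KPI (squared generalized cosine similarity between target and reconstructed CSI); the function names and the two-range split are illustrative assumptions.

```python
import numpy as np

def sgcs(target_csi, reconstructed_csi):
    """Squared generalized cosine similarity between a target CSI vector
    (e.g., an eigenvector per subband) and its reconstruction; 1 = perfect."""
    h = np.asarray(target_csi).ravel()
    h_hat = np.asarray(reconstructed_csi).ravel()
    num = np.abs(np.vdot(h, h_hat)) ** 2          # vdot conjugates the first argument
    den = (np.linalg.norm(h) ** 2) * (np.linalg.norm(h_hat) ** 2)
    return num / den

def sgcs_to_range_indicator(value, threshold):
    """Map the intermediate KPI value into a range indicator per the range
    definition (here: two ranges split at `threshold`)."""
    return 0 if value < threshold else 1
```

Note that SGCS is scale-invariant, so a reconstruction that is correct up to a complex scaling still yields an SGCS of 1.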
  • the method further comprises Step 3, where the UE-side obtains the at least one dataset created in Step 2 from the NW-side and uses it for training the actual encoder and the one-sided model for estimating/predicting the intermediate KPI of the two-sided CSI-compression model.
  • the UE-side uses the target CSI samples to generate model inputs and uses the corresponding nominal encoder output CSI samples as labels to train an actual encoder (the UE-part of the two-sided model).
  • the UE-side may also use the target CSI samples to generate model inputs and feed the model inputs to the trained actual encoder to create the corresponding actual encoder output samples.
  • the UE-side uses the actual encoder output samples as model input and the associated intermediate KPI indicator samples as labels to train the one-sided model.
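The last training step above (actual encoder outputs as model input, intermediate KPI range indicators as labels) can be sketched with a minimal classifier; logistic regression is used here purely as an illustrative stand-in for the one-sided model, and all names are assumptions.

```python
import numpy as np

def train_one_sided_model(encoder_outputs, kpi_indicators,
                          lr=0.5, epochs=200, seed=0):
    """Train a minimal one-sided model (logistic regression) that predicts
    a binary intermediate KPI range indicator from encoder output samples.
    encoder_outputs: (n_samples, n_features); kpi_indicators: values in {0, 1}."""
    rng = np.random.default_rng(seed)
    X = np.asarray(encoder_outputs, dtype=float)
    y = np.asarray(kpi_indicators, dtype=float)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
        grad = p - y                            # gradient of cross-entropy loss
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict_indicator(w, b, encoder_output):
    """Predicted intermediate KPI range indicator for one encoder output."""
    p = 1.0 / (1.0 + np.exp(-(np.asarray(encoder_output) @ w + b)))
    return int(p >= 0.5)
```

In practice the one-sided model could be any architecture; only the input/label pairing (encoder output, NW-provided indicator label) is dictated by the procedure above.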
  • a method is provided for training a one-sided model to be deployed at a UE for estimating an intermediate KPI indicator of a two-sided CSI-compression model.
  • the one-sided model is trained after the two-sided CSI-compression model has been trained and deployed at the UE(s).
  • the method comprises Step 1, where a set of UE(s) capable of performing AI- based CSI-compression perform(s) CSI-RS measurements based on the measurement configuration received from the NW, and report(s) the CSI-RS measurements together with the associated encoder output CSI samples to the NW, based on the reporting configuration received from the NW.
  • the method further comprises Step 2, where the NW side feeds the collected encoder output CSI samples as model input to the decoder (NW part of the two-sided model) and generates the corresponding reconstructed CSI samples (decoder output samples).
  • the NW derives the intermediate KPI range indicators using the generated reconstructed CSI samples and the received CSI-RS measurement samples (as target CSI samples) based on the definition of the intermediate KPI and the KPI range definition.
  • the method further comprises Step 3, where the UE side obtains a data set containing at least the target CSI samples and the corresponding intermediate KPI indicator samples from the NW-side.
  • the method further comprises Step 4, where the UE-side uses the target CSI samples to generate model inputs and feeds the model inputs to the actual encoder to create the corresponding actual encoder output samples.
  • the UE-side uses the actual encoder output samples as model input and the associated intermediate KPI indicator samples as labels to train the one-sided model.
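Step 2 of this method (NW-side label derivation) can be sketched end to end as follows, assuming SGCS as the intermediate KPI and a two-range split; `decoder` stands for the NW part of the two-sided model and may be any callable, and all names are illustrative.

```python
import numpy as np

def derive_indicator_labels(target_samples, encoder_outputs, decoder, threshold):
    """NW-side label derivation: run the NW-part decoder on the collected
    encoder outputs, compute the intermediate KPI (here SGCS, an illustrative
    choice) against the target CSI, and map each value to a range indicator."""
    labels = []
    for target, enc_out in zip(target_samples, encoder_outputs):
        reconstructed = decoder(enc_out)
        h = np.asarray(target).ravel()
        h_hat = np.asarray(reconstructed).ravel()
        value = np.abs(np.vdot(h, h_hat)) ** 2 / (
            np.linalg.norm(h) ** 2 * np.linalg.norm(h_hat) ** 2)
        labels.append(0 if value < threshold else 1)
    return labels
```

The resulting labels, paired with the target CSI samples, form the dataset delivered to the UE side in Step 3.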
  • Step 1 the UE saves some proprietary measurement data and labels it with a measurement ID.
  • the measurement ID is also sent to the NW together with the CSI-RS measurement and encoder output CSI.
  • Step 3 the UE obtains a dataset containing measurement IDs and corresponding intermediate KPI indicator samples from the NW side.
  • Step 4 the UE side uses the stored proprietary data as input and the associated intermediate KPI indicator samples (associated through the measurement ID) to train the one-sided model.
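The measurement-ID association above amounts to a join between UE-stored data and NW-provided labels; a minimal sketch, with illustrative names, follows.

```python
def build_training_set(stored_measurements, nw_labels):
    """Join UE-stored proprietary data with NW-provided labels by measurement ID.
    stored_measurements: dict measurement_id -> proprietary input sample.
    nw_labels: dict measurement_id -> intermediate KPI range indicator.
    Only IDs present on both sides yield a training pair."""
    common_ids = sorted(stored_measurements.keys() & nw_labels.keys())
    inputs = [stored_measurements[mid] for mid in common_ids]
    labels = [nw_labels[mid] for mid in common_ids]
    return inputs, labels
```

Because only the measurement ID crosses the air interface alongside the CSI report, the proprietary input data itself never has to be disclosed to the NW.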
  • Monitoring the Performance of the One-sided Model [0223] Monitoring the Performance of the One-sided Model at the NW Side
  • a method is provided for monitoring at the NW side for the performance of the one-sided model at the UE.
  • the method comprises Step 1, where the UE performs channel measurement on one or more CSI-RS resources based on the CSI-RS measurement configuration received from the NW.
  • the UE uses the CSI-RS measurement(s) to generate model inputs and feeds the model inputs to the UE-part of the two-sided model (encoder) to create the corresponding encoder output sample(s).
  • the UE feeds the encoder output sample(s) as model inputs to the one-sided model to generate the associated intermediate KPI indicator(s).
  • the UE report(s) the CSI-RS measurement(s) (as target CSI sample(s)), the associated encoder output CSI sample(s), and the associated intermediate KPI indicator(s) to the NW, based on the reporting configuration received from the NW.
  • the method further comprises Step 2, where the NW feeds the collected encoder output CSI sample(s) as model input to the decoder (NW part of the two-sided model) and generates the corresponding reconstructed CSI sample(s) (decoder output sample(s)).
  • the NW derives the intermediate KPI range indicator labels using the generated reconstructed CSI samples and the received CSI-RS measurement samples (as target CSI samples) based on the definition of the intermediate KPI and the KPI range definition.
  • the NW calculates the performance metrics of the one-sided model by comparing the intermediate KPI range indicator samples received from the UE with the intermediate KPI range indicator labels derived at the NW.
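The NW-side comparison above can be sketched as follows; accuracy and a confusion count are illustrative choices of performance metric, not mandated by the procedure.

```python
from collections import Counter

def one_sided_model_accuracy(ue_reported, nw_derived):
    """Fraction of samples where the UE-reported intermediate KPI range
    indicator matches the label the NW derived from the actual decoder
    output and the target CSI."""
    if len(ue_reported) != len(nw_derived):
        raise ValueError("sample count mismatch")
    matches = sum(r == d for r, d in zip(ue_reported, nw_derived))
    return matches / len(ue_reported)

def confusion_counts(ue_reported, nw_derived):
    """Counts of (reported, derived) pairs; e.g., (1, 0) counts the cases
    where the UE over-estimated the KPI range."""
    return Counter(zip(ue_reported, nw_derived))
```

The confusion counts let the NW distinguish over-estimation from under-estimation of the intermediate KPI range, which may warrant different actions (e.g., model retraining vs. fallback).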
  • Step 1 the at least one intermediate KPI indicator is reported together with the corresponding two-sided AI/ML model ground-truth label (e.g., target CSI for the CSI compression use case) and the corresponding output of the UE-part of the two-sided model (e.g., encoder output CSI) from the UE to the NW, to enable the NW to monitor the intermediate KPI indicator estimation/prediction performance.
  • the at least one intermediate KPI indicator, the corresponding output of the UE-part of the two-sided model, and the corresponding two-sided AI/ML model ground-truth label are carried in the same channel (e.g., as UCI on PUCCH or as UCI on PUSCH) or the same message (as a data collection related RRC message).
  • the at least one intermediate KPI range indicator and the corresponding output of the UE-part of the two-sided model are carried in a first channel/message, and the corresponding two-sided AI/ML model ground-truth label is carried in a second channel/message, which is different from the first channel or message.
  • the at least one intermediate KPI indicator is carried in a different channel or message than the one(s) used for carrying the corresponding output of the UE-part of the two-sided model and the corresponding two-sided AI/ML model ground-truth label. Similarly, if multiple intermediate KPI range indicators are carried in a single channel/message, the association between the samples carried on different channels/messages is either explicitly indicated by the UE or implicitly indicated based on pre-defined rule(s).
  • Monitoring the Performance of the One-sided Model at the UE Side [0232] In one embodiment, a method is provided for monitoring at the UE side for the performance of the one-sided model at the UE.
  • the method comprises Step 1, where the UE performs channel measurement on one or more CSI-RS resources based on the CSI-RS measurement configuration received from the NW.
  • the UE reports the CSI-RS measurement(s) (i.e., target CSI sample(s)) together with the associated encoder output CSI sample(s) to the NW, based on the reporting configuration received from the NW.
  • the UE stores the intermediate KPI range indicator(s) or/and the statistics of the intermediate KPI range indicator(s).
  • the method further comprises Step 2, where the NW feeds the collected encoder output CSI sample(s) as model input to the decoder (NW part of the two-sided model) and generates the corresponding reconstructed CSI sample(s) (decoder output sample(s)).
  • the NW derives the intermediate KPI range indicator labels using the generated reconstructed CSI samples and the received CSI-RS measurement samples (as target CSI samples) based on the definition of the intermediate KPI and the KPI range definition.
  • the method further comprises Step 3, where the NW transmits the intermediate KPI range indicator label(s) or/and the label(s) for the statistics of the intermediate KPI range indicator(s) to the UE.
  • the method further comprises Step 4, where the UE calculates the performance metrics of the one-sided model by comparing the stored intermediate KPI range indicator samples with the intermediate KPI range indicator labels received from the NW, or/and by comparing the stored statistics of the intermediate KPI range indicator(s) with the statistics labels received from the NW.
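The statistics-based variant of the UE-side comparison above can be sketched as follows; the empirical indicator distribution and the total-variation distance are illustrative choices of statistic and mismatch measure.

```python
from collections import Counter

def indicator_statistics(indicators):
    """Empirical distribution of a set of intermediate KPI range indicators."""
    counts = Counter(indicators)
    total = len(indicators)
    return {k: v / total for k, v in counts.items()}

def statistics_mismatch(ue_stats, nw_stats):
    """Total-variation distance between the UE-stored indicator statistics
    and the statistics labels received from the NW; 0 means identical."""
    keys = set(ue_stats) | set(nw_stats)
    return 0.5 * sum(abs(ue_stats.get(k, 0.0) - nw_stats.get(k, 0.0)) for k in keys)
```

Comparing aggregate statistics rather than per-sample labels reduces the DL signaling needed in Step 3 at the cost of coarser monitoring resolution.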
  • the UE stores at least one intermediate KPI range indicator/index related information and receives at least one ground-truth label associated with the at least one intermediate KPI range indicator or index from the NW, and monitors the intermediate KPI estimation/prediction performance by comparing the at least one ground-truth label with the stored at least one intermediate KPI range indicator/index related information.
  • the ground-truth label contains at least an intermediate KPI indicator or index label, or at least one statistic associated with a set of intermediate KPI indicators or indexes.
  • the ground-truth label is derived by the NW based on at least an output sample of the UE-part of the two-sided model (e.g., encoder output for the CSI compression use case) and at least one corresponding label of the two-sided model (e.g., target CSI for the CSI compression use case) received from the UE.
  • the at least one ground-truth label associated with the at least one intermediate KPI range indicator/index is carried in DL signaling, e.g., DCI or RRC signaling.
  • Figure 10 illustrates a flowchart showing method 1000 for performance monitoring of a two-sided AI/ML model at the UE side in accordance with some embodiments.
  • the UE measures at least one reference signal resource indicated by a reference signal configuration associated with a two-sided artificial intelligence or machine learning (AI/ML) model.
  • the two-sided AI/ML model comprises a UE part operated by the UE and a network part operated by a network node.
  • the UE performs model inference based on the measuring of the at least one reference signal resource using the UE part of the two-sided AI/ML model.
  • the UE estimates at least one intermediate Key Performance Indicator (KPI) of the two-sided AI/ML model based on at least an output of the UE part of the two-sided AI/ML model.
  • Figure 11 illustrates a flowchart showing method 1100 for performance monitoring using a one-sided AI/ML model at the NW side in accordance with some embodiments.
  • the network node collects target channel state information (CSI) samples from a UE.
  • the network node trains a network-part of a two-sided AI/ML model using the collected CSI samples and a nominal model at the network node.
  • In block 1130 of method 1100, the network node generates at least one dataset for training a UE-part of the two-sided AI/ML model and a one-sided AI/ML model for estimating an intermediate KPI range indicator of the two-sided AI/ML model.
  • the dataset may contain the target CSI samples and the corresponding intermediate KPI range indicator samples, which can be collected at the NW-side by deriving the intermediate KPI values and then mapping the intermediate KPI value samples into the range indicators.
  • the network node provides the at least one dataset to the UE.
  • Figure 12 illustrates a flowchart showing another method for performance monitoring at the NW side in accordance with some embodiments.
  • the network node receives CSI samples from a user equipment (UE).
  • the network node generates reconstructed CSI samples by providing the received CSI samples from the UE to a network-part of a two-sided AI/ML model.
  • the network node derives an intermediate KPI range indicator of the two-sided AI/ML model based at least on the reconstructed CSI samples.
  • the network node provides to the UE a dataset comprising target CSI samples and corresponding intermediate KPI indicator samples.
  • computing devices described herein may include the illustrated combination of hardware components, while other embodiments may comprise computing devices with different combinations of components. It is to be understood that these computing devices may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein. Determining, calculating, obtaining or similar operations described herein may be performed by processing circuitry, which may process information by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
  • computing devices may comprise multiple different physical components that make up a single illustrated component, and functionality may be partitioned between separate components.
  • a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface.
  • non-computationally intensive functions of any of such components may be implemented in software or firmware and computationally intensive functions may be implemented in hardware.
  • processing circuitry executing instructions stored in memory, which in certain embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium.
  • some or all of the functionality may be provided by the processing circuitry without executing instructions stored on a separate or discrete device-readable storage medium, such as in a hard-wired manner.
  • the processing circuitry can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry alone or to other components of the computing device, but are enjoyed by the computing device as a whole, and/or by end users and a wireless network generally.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A method implemented by a user equipment (UE) is provided, the method comprising measuring at least one reference signal resource indicated by a reference signal configuration associated with a two-sided AI/ML model. The two-sided AI/ML model comprises a UE part operated by the UE and a network part operated by a network node. The method also comprises performing model inference based on the measuring of the at least one reference signal resource using the UE part of the two-sided AI/ML model. The method further comprises estimating at least one intermediate KPI of the two-sided AI/ML model. The method additionally comprises reporting the at least one estimated intermediate KPI to the network node.
PCT/SE2024/050724 2023-08-11 2024-08-09 Surveillance de performance d'un modèle d'intelligence artificielle/apprentissage automatique à deux côtés au côté équipement utilisateur Pending WO2025038021A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363532322P 2023-08-11 2023-08-11
US63/532,322 2023-08-11

Publications (1)

Publication Number Publication Date
WO2025038021A1 true WO2025038021A1 (fr) 2025-02-20

Family

ID=94633002

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2024/050724 Pending WO2025038021A1 (fr) 2023-08-11 2024-08-09 Surveillance de performance d'un modèle d'intelligence artificielle/apprentissage automatique à deux côtés au côté équipement utilisateur

Country Status (1)

Country Link
WO (1) WO2025038021A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119848478A (zh) * 2025-03-21 2025-04-18 南京纳恩自动化科技有限公司 一种基于机器学习的电力设备状态评估方法及系统

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022220642A1 (fr) * 2021-04-16 2022-10-20 Samsung Electronics Co., Ltd. Procédé et appareil de prise en charge de techniques d'apprentissage automatique ou d'intelligence artificielle pour la rétroaction de csi dans des systèmes mimo fdd
CN116074813A (zh) * 2021-10-29 2023-05-05 中国电信股份有限公司 无线通信方法及相关设备
US20240098533A1 (en) * 2022-09-15 2024-03-21 Samsung Electronics Co., Ltd. Ai/ml model monitoring operations for nr air interface

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NVIDIA: "AI and ML for CSI feedback enhancement", 3GPP DRAFT; R1-2305161, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. 3GPP RAN 1, no. Incheon, Korea; 20230522 - 20230526, 21 May 2023 (2023-05-21), Mobile Competence Centre ; 650, route des Lucioles ; F-06921 Sophia-Antipolis Cedex ; France, XP052394109 *

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24854539

Country of ref document: EP

Kind code of ref document: A1