
WO2024207182A1 - Training dataset mixing for user-equipment-based model training in predictive beam management - Google Patents

Training dataset mixing for user-equipment-based model training in predictive beam management

Info

Publication number
WO2024207182A1
Authority
WO
WIPO (PCT)
Prior art keywords
dataset
training
model
inference
mixture ratio
Prior art date
Legal status
Pending
Application number
PCT/CN2023/086138
Other languages
English (en)
Inventor
Qiaoyu Li
Mahmoud Taherzadeh Boroujeni
Tao Luo
Current Assignee
Qualcomm Inc
Original Assignee
Qualcomm Inc
Priority date
Filing date
Publication date
Application filed by Qualcomm Inc filed Critical Qualcomm Inc
Priority to PCT/CN2023/086138
Publication of WO2024207182A1
Legal status: Pending (current)


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/02Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/06Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
    • H04B7/0686Hybrid systems, i.e. switching and simultaneous transmission
    • H04B7/0695Hybrid systems, i.e. switching and simultaneous transmission using beam selection
    • H04B7/06952Selecting one or more beams from a plurality of beams, e.g. beam training, management or sweeping
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Definitions

  • the present disclosure relates generally to communication systems, and more particularly, to machine learning for predictive beam management.
  • Wireless communication systems are widely deployed to provide various telecommunication services such as telephony, video, data, messaging, and broadcasts.
  • Typical wireless communication systems may employ multiple-access technologies capable of supporting communication with multiple users by sharing available system resources. Examples of such multiple-access technologies include code division multiple access (CDMA) systems, time division multiple access (TDMA) systems, frequency division multiple access (FDMA) systems, orthogonal frequency division multiple access (OFDMA) systems, single-carrier frequency division multiple access (SC-FDMA) systems, and time division synchronous code division multiple access (TD-SCDMA) systems.
  • 5G New Radio is part of a continuous mobile broadband evolution promulgated by Third Generation Partnership Project (3GPP) to meet new requirements associated with latency, reliability, security, scalability (e.g., with Internet of Things (IoT) ) , and other requirements.
  • 5G NR includes services associated with enhanced mobile broadband (eMBB) , massive machine type communications (mMTC) , and ultra-reliable low latency communications (URLLC) .
  • Some aspects of 5G NR may be based on the 4G Long Term Evolution (LTE) standard.
  • a method, a computer-readable medium, and an apparatus at a user equipment are provided.
  • the apparatus may include memory and at least one processor coupled to the memory.
  • the at least one processor, based at least in part on information stored in the memory, may be configured to obtain a first reference dataset associated with a first set of characteristics and a second reference dataset associated with a second set of characteristics, and to perform at least one of model training or model inference based on a mixture ratio of the first reference dataset and the second reference dataset.
  • in another aspect, a method, a computer-readable medium, and an apparatus at a network entity are provided; the apparatus may include memory and at least one processor coupled to the memory.
  • the at least one processor, based at least in part on information stored in the memory, may be configured to provide a first request to perform model training based on a first mixture ratio of a first training dataset and a second training dataset, and to provide a second request to perform model inference based on a second mixture ratio of a first inference dataset and a second inference dataset (a simplified sketch of such dataset mixing follows below).
  • the one or more aspects may include the features hereinafter fully described and particularly pointed out in the claims.
  • the following description and the drawings set forth in detail certain illustrative features of the one or more aspects. These features are indicative, however, of but a few of the various ways in which the principles of various aspects may be employed.
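  • As a minimal illustration of the mixture-ratio concept, the following hypothetical Python sketch shows one way a dataset mixture could be formed from two reference datasets according to a mixture ratio. The function name, dataset contents, and the choice of mixed-set size are illustrative assumptions, not part of the disclosure.

```python
import random

def mix_datasets(dataset_a, dataset_b, ratio_a, seed=0):
    """Build a mixed dataset in which a fraction `ratio_a` of samples is
    drawn from dataset_a and the remainder from dataset_b (hypothetical)."""
    rng = random.Random(seed)
    total = min(len(dataset_a), len(dataset_b))  # mixed-set size: an assumption
    n_a = round(total * ratio_a)                 # samples taken from dataset A
    mixed = rng.sample(dataset_a, n_a) + rng.sample(dataset_b, total - n_a)
    rng.shuffle(mixed)
    return mixed

# Example: dataset A collected under one set of radio characteristics,
# dataset B under another; a 0.7 ratio yields a 70%/30% mixture.
dataset_a = [("characteristics_a", i) for i in range(100)]
dataset_b = [("characteristics_b", i) for i in range(100)]
training_mixture = mix_datasets(dataset_a, dataset_b, ratio_a=0.7)
```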
  • FIG. 1 is a diagram illustrating an example of a wireless communications system and an access network.
  • FIG. 2A is a diagram illustrating an example of a first frame, in accordance with various aspects of the present disclosure.
  • FIG. 2B is a diagram illustrating an example of downlink (DL) channels within a subframe, in accordance with various aspects of the present disclosure.
  • FIG. 2C is a diagram illustrating an example of a second frame, in accordance with various aspects of the present disclosure.
  • FIG. 2D is a diagram illustrating an example of uplink (UL) channels within a subframe, in accordance with various aspects of the present disclosure.
  • FIG. 3 is a diagram illustrating an example of a base station and user equipment (UE) in an access network.
  • UE user equipment
  • FIG. 4 is a diagram illustrating an artificial intelligence/machine learning algorithm for wireless communication, illustrating various aspects of model training, model inference, model feedback, and model update.
  • FIG. 5 is a diagram illustrating different scenarios for the identification of a dataset mixture for model training and model inference in accordance with various aspects of the present disclosure.
  • FIG. 6 is a call flow diagram illustrating a method of wireless communication, in accordance with various aspects of the present disclosure.
  • FIG. 7 is a flowchart illustrating methods of wireless communication in accordance with various aspects of the present disclosure.
  • FIG. 8 is a flowchart illustrating methods of wireless communication in accordance with various aspects of the present disclosure.
  • FIG. 9 is a flowchart illustrating methods of wireless communication in accordance with various aspects of the present disclosure.
  • FIG. 10 is a flowchart illustrating methods of wireless communication in accordance with various aspects of the present disclosure.
  • FIG. 11 is a diagram illustrating an example of a hardware implementation for an example apparatus and/or network entity.
  • FIG. 12 is a diagram illustrating an example of a hardware implementation for an example network entity.
  • FIG. 13 is a diagram illustrating an example of a hardware implementation for an example network entity.
  • a UE may receive a request, from a network entity, to perform model training based on a mixture ratio of a first training dataset and a second training dataset.
  • the UE may generate a dataset mixture based on the mixture ratio of the first training dataset and the second training dataset, where each training dataset is associated with particular characteristics, for example, associated with a radio environment in which the UE and the network entity are located.
  • the UE may train at least one machine learning model based on the dataset mixture.
  • the UE may generate multiple dataset mixtures based on one or more mixture ratios of different training datasets (each being associated with different characteristics) to generate a plurality of machine learning models, where each of the machine learning models is trained based on a particular mixture ratio of at least two training datasets.
  • the UE may store each of the trained machine learning models in a memory of the UE.
  • the UE may also receive, from the network entity, a request to perform model inference based on a mixture ratio of a first inference dataset and a second inference dataset. For instance, the UE may determine a dataset mixture based on the mixture ratio of the first inference dataset and the second inference dataset. The UE may then determine which of the plurality of machine learning models maintained thereby best matches the generated dataset mixture (see the selection sketch below).
  • the UE utilizes the determined machine learning model for model inference.
  • the determined machine learning model may be configured to output a prediction as to which transmit beam and/or receive beam is to be utilized at the UE for transmitting and/or receiving a signal, respectively.
  • the UE may select a machine learning model from the plurality of machine learning models that is tailored to the environment in which the UE is located.
  • Such a machine learning model may more accurately predict an optimal transmit beam for transmitting signals and/or an optimal receive beam for receiving signals.
  • the aspects of the subject matter described in this disclosure may improve the signal-to-noise ratio of received signals, eliminate undesirable interference sources, and focus transmitted signals to desired locations.
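  • To make the model-selection step concrete, the following hypothetical sketch keeps one trained model per training mixture ratio and, when a model-inference request arrives, picks the stored model whose training ratio is nearest the requested inference ratio. The nearest-ratio rule and all names are illustrative assumptions; the disclosure does not prescribe a matching criterion.

```python
# Hypothetical store of models, keyed by the mixture ratio (fraction of
# dataset A) used when each model was trained.
trained_models = {
    0.25: "model_trained_at_0.25",
    0.50: "model_trained_at_0.50",
    0.75: "model_trained_at_0.75",
}

def select_model(inference_ratio: float):
    """Return the stored model whose training mixture ratio is closest
    to the mixture ratio requested for model inference (an assumption)."""
    best_ratio = min(trained_models, key=lambda r: abs(r - inference_ratio))
    return trained_models[best_ratio]

model = select_model(0.6)  # -> "model_trained_at_0.50"
```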
  • processors include microprocessors, microcontrollers, graphics processing units (GPUs) , central processing units (CPUs) , application processors, digital signal processors (DSPs) , reduced instruction set computing (RISC) processors, systems on a chip (SoC) , baseband processors, field programmable gate arrays (FPGAs) , programmable logic devices (PLDs) , state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure.
  • processors in the processing system may execute software.
  • Software, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise, shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software components, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, or any combination thereof.
  • the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium.
  • Computer-readable media includes computer storage media. Storage media may be any available media that can be accessed by a computer.
  • such computer-readable media can include a random-access memory (RAM) , a read-only memory (ROM) , an electrically erasable programmable ROM (EEPROM) , optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the types of computer-readable media, or any other medium that can be used to store computer executable code in the form of instructions or data structures that can be accessed by a computer.
  • While aspects, implementations, and/or use cases are described in this application by illustration to some examples, additional or different aspects, implementations, and/or use cases may come about in many different arrangements and scenarios. Aspects, implementations, and/or use cases described herein may be implemented across many differing platform types, devices, systems, shapes, sizes, and packaging arrangements. For example, aspects, implementations, and/or use cases may come about via integrated chip implementations and other non-module-component based devices (e.g., end-user devices, vehicles, communication devices, computing devices, industrial equipment, retail/purchasing devices, medical devices, artificial intelligence (AI)-enabled devices, etc.).
  • Deployment of communication systems may be arranged in multiple manners with various components or constituent parts.
  • a network node, a network entity, a mobility element of a network, a radio access network (RAN) node, a core network node, a network element, or a network equipment, such as a base station (BS) , or one or more units (or one or more components) performing base station functionality may be implemented in an aggregated or disaggregated architecture.
  • A BS, such as a Node B (NB), evolved NB (eNB), NR BS, 5G NB, access point (AP), transmission reception point (TRP), or a cell, may be implemented as an aggregated base station (also known as a standalone or monolithic BS) or as a disaggregated base station.
  • An aggregated base station may be configured to utilize a radio protocol stack that is physically or logically integrated within a single RAN node.
  • a disaggregated base station may be configured to utilize a protocol stack that is physically or logically distributed among two or more units (such as one or more central or centralized units (CUs) , one or more distributed units (DUs) , or one or more radio units (RUs) ) .
  • a CU may be implemented within a RAN node, and one or more DUs may be co-located with the CU, or alternatively, may be geographically or virtually distributed throughout one or multiple other RAN nodes.
  • the DUs may be implemented to communicate with one or more RUs.
  • Each of the CU, DU and RU can be implemented as virtual units, i.e., a virtual central unit (VCU) , a virtual distributed unit (VDU) , or a virtual radio unit (VRU) .
  • Base station operation or network design may consider aggregation characteristics of base station functionality.
  • disaggregated base stations may be utilized in an integrated access backhaul (IAB) network, an open radio access network (O-RAN (such as the network configuration sponsored by the O-RAN Alliance) ) , or a virtualized radio access network (vRAN, also known as a cloud radio access network (C-RAN) ) .
  • Disaggregation may include distributing functionality across two or more units at various physical locations, as well as distributing functionality for at least one unit virtually, which can enable flexibility in network design.
  • the various units of the disaggregated base station, or disaggregated RAN architecture can be configured for wired or wireless communication with at least one other unit.
  • FIG. 1 is a diagram 100 illustrating an example of a wireless communications system and an access network.
  • the illustrated wireless communications system includes a disaggregated base station architecture.
  • the disaggregated base station architecture may include one or more CUs 110 that can communicate directly with a core network 120 via a backhaul link, or indirectly with the core network 120 through one or more disaggregated base station units (such as a Near-Real Time (Near-RT) RAN Intelligent Controller (RIC) 125 via an E2 link, or a Non-Real Time (Non-RT) RIC 115 associated with a Service Management and Orchestration (SMO) Framework 105, or both) .
  • a CU 110 may communicate with one or more DUs 130 via respective midhaul links, such as an F1 interface.
  • the DUs 130 may communicate with one or more RUs 140 via respective fronthaul links.
  • the RUs 140 may communicate with respective UEs 104 via one or more radio frequency (RF) access links.
  • the UE 104 may be simultaneously served by multiple RUs 140.
  • Each of the units may include one or more interfaces or be coupled to one or more interfaces configured to receive or to transmit signals, data, or information (collectively, signals) via a wired or wireless transmission medium.
  • Each of the units, or an associated processor or controller providing instructions to the communication interfaces of the units can be configured to communicate with one or more of the other units via the transmission medium.
  • the units can include a wired interface configured to receive or to transmit signals over a wired transmission medium to one or more of the other units.
  • the units can include a wireless interface, which may include a receiver, a transmitter, or a transceiver (such as an RF transceiver) , configured to receive or to transmit signals, or both, over a wireless transmission medium to one or more of the other units.
  • the CU 110 may host one or more higher layer control functions.
  • control functions can include radio resource control (RRC) , packet data convergence protocol (PDCP) , service data adaptation protocol (SDAP) , or the like.
  • Each control function can be implemented with an interface configured to communicate signals with other control functions hosted by the CU 110.
  • the CU 110 may be configured to handle user plane functionality (i.e., Central Unit –User Plane (CU-UP) ) , control plane functionality (i.e., Central Unit –Control Plane (CU-CP) ) , or a combination thereof.
  • the CU 110 can be logically split into one or more CU-UP units and one or more CU-CP units.
  • the CU-UP unit can communicate bidirectionally with the CU-CP unit via an interface, such as an E1 interface when implemented in an O-RAN configuration.
  • the CU 110 can be implemented to communicate with the DU 130, as necessary, for network control and signaling.
  • the DU 130 may correspond to a logical unit that includes one or more base station functions to control the operation of one or more RUs 140.
  • the DU 130 may host one or more of a radio link control (RLC) layer, a medium access control (MAC) layer, and one or more high physical (PHY) layers (such as modules for forward error correction (FEC) encoding and decoding, scrambling, modulation, demodulation, or the like) depending, at least in part, on a functional split, such as those defined by 3GPP.
  • the DU 130 may further host one or more low PHY layers.
  • Each layer (or module) can be implemented with an interface configured to communicate signals with other layers (and modules) hosted by the DU 130, or with the control functions hosted by the CU 110.
  • Lower-layer functionality can be implemented by one or more RUs 140.
  • an RU 140 controlled by a DU 130, may correspond to a logical node that hosts RF processing functions, or low-PHY layer functions (such as performing fast Fourier transform (FFT) , inverse FFT (iFFT) , digital beamforming, physical random access channel (PRACH) extraction and filtering, or the like) , or both, based at least in part on the functional split, such as a lower layer functional split.
  • the RU (s) 140 can be implemented to handle over the air (OTA) communication with one or more UEs 104.
  • real-time and non-real-time aspects of control and user plane communication with the RU (s) 140 can be controlled by the corresponding DU 130.
  • this configuration can enable the DU (s) 130 and the CU 110 to be implemented in a cloud-based RAN architecture, such as a vRAN architecture.
  • the SMO Framework 105 may be configured to support RAN deployment and provisioning of non-virtualized and virtualized network elements.
  • the SMO Framework 105 may be configured to support the deployment of dedicated physical resources for RAN coverage requirements that may be managed via an operations and maintenance interface (such as an O1 interface) .
  • the SMO Framework 105 may be configured to interact with a cloud computing platform (such as an open cloud (O-Cloud) 190) to perform network element life cycle management (such as to instantiate virtualized network elements) via a cloud computing platform interface (such as an O2 interface) .
  • Such virtualized network elements can include, but are not limited to, CUs 110, DUs 130, RUs 140 and Near-RT RICs 125.
  • the SMO Framework 105 can communicate with a hardware aspect of a 4G RAN, such as an open eNB (O-eNB) 111, via an O1 interface. Additionally, in some implementations, the SMO Framework 105 can communicate directly with one or more RUs 140 via an O1 interface.
  • the SMO Framework 105 also may include a Non-RT RIC 115 configured to support functionality of the SMO Framework 105.
  • the Non-RT RIC 115 may be configured to include a logical function that enables non-real-time control and optimization of RAN elements and resources, artificial intelligence (AI) /machine learning (ML) (AI/ML) workflows including model training and updates, or policy-based guidance of applications/features in the Near-RT RIC 125.
  • the Non-RT RIC 115 may be coupled to or communicate with (such as via an A1 interface) the Near-RT RIC 125.
  • the Near-RT RIC 125 may be configured to include a logical function that enables near-real-time control and optimization of RAN elements and resources via data collection and actions over an interface (such as via an E2 interface) connecting one or more CUs 110, one or more DUs 130, or both, as well as an O-eNB, with the Near-RT RIC 125.
  • the Non-RT RIC 115 may receive parameters or external enrichment information from external servers. Such information may be utilized by the Near-RT RIC 125 and may be received at the SMO Framework 105 or the Non-RT RIC 115 from non-network data sources or from network functions. In some examples, the Non-RT RIC 115 or the Near-RT RIC 125 may be configured to tune RAN behavior or performance. For example, the Non-RT RIC 115 may monitor long-term trends and patterns for performance and employ AI/ML models to perform corrective actions through the SMO Framework 105 (such as reconfiguration via O1) or via creation of RAN management policies (such as A1 policies) .
  • a base station 102 may include one or more of the CU 110, the DU 130, and the RU 140 (each component indicated with dotted lines to signify that each component may or may not be included in the base station 102) .
  • the base station 102 provides an access point to the core network 120 for a UE 104.
  • the base station 102 may include macrocells (high power cellular base station) and/or small cells (low power cellular base station) .
  • the small cells include femtocells, picocells, and microcells.
  • a network that includes both small cell and macrocells may be known as a heterogeneous network.
  • a heterogeneous network may also include Home Evolved Node Bs (eNBs) (HeNBs) , which may provide service to a restricted group known as a closed subscriber group (CSG) .
  • the communication links between the RUs 140 and the UEs 104 may include uplink (UL) (also referred to as reverse link) transmissions from a UE 104 to an RU 140 and/or downlink (DL) (also referred to as forward link) transmissions from an RU 140 to a UE 104.
  • the communication links may use multiple-input and multiple-output (MIMO) antenna technology, including spatial multiplexing, beamforming, and/or transmit diversity.
  • the communication links may be through one or more carriers.
  • the base station 102 /UEs 104 may use spectrum up to Y MHz (e.g., 5, 10, 15, 20, 100, 400, etc. MHz) bandwidth per carrier allocated in a carrier aggregation of up to a total of Yx MHz (x component carriers) used for transmission in each direction.
  • the carriers may or may not be adjacent to each other. Allocation of carriers may be asymmetric with respect to DL and UL (e.g., more or fewer carriers may be allocated for DL than for UL) .
  • the component carriers may include a primary component carrier and one or more secondary component carriers.
  • a primary component carrier may be referred to as a primary cell (PCell) and a secondary component carrier may be referred to as a secondary cell (SCell) .
  • a device-to-device (D2D) communication link 158 may use the DL/UL wireless wide area network (WWAN) spectrum.
  • the D2D communication link 158 may use one or more sidelink channels, such as a physical sidelink broadcast channel (PSBCH) , a physical sidelink discovery channel (PSDCH) , a physical sidelink shared channel (PSSCH) , and a physical sidelink control channel (PSCCH) .
  • D2D communication may be through a variety of wireless D2D communications systems, such as for example, Bluetooth TM (Bluetooth is a trademark of the Bluetooth Special Interest Group (SIG) ) , Wi-Fi TM (Wi-Fi is a trademark of the Wi-Fi Alliance) based on the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standard, LTE, or NR.
  • the wireless communications system may further include a Wi-Fi AP 150 in communication with UEs 104 (also referred to as Wi-Fi stations (STAs) ) via communication link 154, e.g., in a 5 GHz unlicensed frequency spectrum or the like.
  • the UEs 104 /AP 150 may perform a clear channel assessment (CCA) prior to communicating in order to determine whether the channel is available.
  • 5G NR operates in frequency range designations FR1 (410 MHz – 7.125 GHz) and FR2 (24.25 GHz – 52.6 GHz). Although a portion of FR1 is greater than 6 GHz, FR1 is often referred to (interchangeably) as a “sub-6 GHz” band in various documents and articles.
  • FR2 is often referred to (interchangeably) as a “millimeter wave” band in documents and articles, despite being different from the extremely high frequency (EHF) band (30 GHz – 300 GHz), which is identified by the International Telecommunications Union (ITU) as a “millimeter wave” band.
  • The frequencies between FR1 and FR2 are often referred to as mid-band frequencies and are designated FR3 (7.125 GHz – 24.25 GHz).
  • Frequency bands falling within FR3 may inherit FR1 characteristics and/or FR2 characteristics, and thus may effectively extend features of FR1 and/or FR2 into mid-band frequencies.
  • Higher frequency bands are currently being explored to extend 5G NR operation beyond 52.6 GHz, including frequency range designations FR2-2 (52.6 GHz – 71 GHz), FR4 (71 GHz – 114.25 GHz), and FR5 (114.25 GHz – 300 GHz).
  • The term “sub-6 GHz,” if used herein, may broadly represent frequencies that may be less than 6 GHz, may be within FR1, or may include mid-band frequencies.
  • The term “millimeter wave” or the like, if used herein, may broadly represent frequencies that may include mid-band frequencies, may be within FR2, FR4, FR2-2, and/or FR5, or may be within the EHF band.
  • the base station 102 and the UE 104 may each include a plurality of antennas, such as antenna elements, antenna panels, and/or antenna arrays to facilitate beamforming.
  • the base station 102 may transmit a beamformed signal 182 to the UE 104 in one or more transmit directions.
  • the UE 104 may receive the beamformed signal from the base station 102 in one or more receive directions.
  • the UE 104 may also transmit a beamformed signal 184 to the base station 102 in one or more transmit directions.
  • the base station 102 may receive the beamformed signal from the UE 104 in one or more receive directions.
  • the base station 102 /UE 104 may perform beam training to determine the best receive and transmit directions for each of the base station 102 /UE 104.
  • the transmit and receive directions for the base station 102 may or may not be the same.
  • the transmit and receive directions for the UE 104 may or may not be the same.
  • the base station 102 may include and/or be referred to as a gNB, Node B, eNB, an access point, a base transceiver station, a radio base station, a radio transceiver, a transceiver function, a basic service set (BSS) , an extended service set (ESS) , a TRP, network node, network entity, network equipment, or some other suitable terminology.
  • the base station 102 can be implemented as an integrated access and backhaul (IAB) node, a relay node, a sidelink node, an aggregated (monolithic) base station with a baseband unit (BBU) (including a CU and a DU) and an RU, or as a disaggregated base station including one or more of a CU, a DU, and/or an RU.
  • the set of base stations which may include disaggregated base stations and/or aggregated base stations, may be referred to as next generation (NG) RAN (NG-RAN) .
  • the core network 120 may include an Access and Mobility Management Function (AMF) 161, a Session Management Function (SMF) 162, a User Plane Function (UPF) 163, a Unified Data Management (UDM) 164, one or more location servers 168, and other functional entities.
  • the AMF 161 is the control node that processes the signaling between the UEs 104 and the core network 120.
  • the AMF 161 supports registration management, connection management, mobility management, and other functions.
  • the SMF 162 supports session management and other functions.
  • the UPF 163 supports packet routing, packet forwarding, and other functions.
  • the UDM 164 supports the generation of authentication and key agreement (AKA) credentials, user identification handling, access authorization, and subscription management.
  • the one or more location servers 168 are illustrated as including a Gateway Mobile Location Center (GMLC) 165 and a Location Management Function (LMF) 166.
  • the one or more location servers 168 may include one or more location/positioning servers, which may include one or more of the GMLC 165, the LMF 166, a position determination entity (PDE) , a serving mobile location center (SMLC) , a mobile positioning center (MPC) , or the like.
  • the GMLC 165 and the LMF 166 support UE location services.
  • the GMLC 165 provides an interface for clients/applications (e.g., emergency services) for accessing UE positioning information.
  • the LMF 166 receives measurements and assistance information from the NG-RAN and the UE 104 via the AMF 161 to compute the position of the UE 104.
  • the NG-RAN may utilize one or more positioning methods in order to determine the position of the UE 104.
  • Positioning the UE 104 may involve signal measurements, a position estimate, and an optional velocity computation based on the measurements.
  • the signal measurements may be made by the UE 104 and/or the base station 102 serving the UE 104.
  • the signals measured may be based on one or more of a satellite positioning system (SPS) 170 (e.g., one or more of a Global Navigation Satellite System (GNSS) , global position system (GPS) , non-terrestrial network (NTN) , or other satellite position/location system) , LTE signals, wireless local area network (WLAN) signals, Bluetooth signals, a terrestrial beacon system (TBS) , sensor-based information (e.g., barometric pressure sensor, motion sensor) , NR enhanced cell ID (NR E-CID) methods, NR signals (e.g., multi-round trip time (Multi-RTT) , DL angle-of-departure (DL-AoD) , DL time difference of arrival (DL-TDOA) , UL time difference of arrival (UL-TDOA) , and UL angle-of-arrival (UL-AoA) positioning) , and/or other systems/signals/sensors.
  • Examples of UEs 104 include a cellular phone, a smart phone, a session initiation protocol (SIP) phone, a laptop, a personal digital assistant (PDA) , a satellite radio, a global positioning system, a multimedia device, a video device, a digital audio player (e.g., MP3 player) , a camera, a game console, a tablet, a smart device, a wearable device, a vehicle, an electric meter, a gas pump, a large or small kitchen appliance, a healthcare device, an implant, a sensor/actuator, a display, or any other similar functioning device.
  • Some of the UEs 104 may be referred to as IoT devices (e.g., parking meter, gas pump, toaster, vehicles, heart monitor, etc. ) .
  • the UE 104 may also be referred to as a station, a mobile station, a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a wireless communications device, a remote device, a mobile subscriber station, an access terminal, a mobile terminal, a wireless terminal, a remote terminal, a handset, a user agent, a mobile client, a client, or some other suitable terminology.
  • the term UE may also apply to one or more companion devices such as in a device constellation arrangement. One or more of these devices may collectively access the network and/or individually access the network.
  • the UE 104 may have a model training/inference component 198 that may be configured to obtain a first reference dataset associated with a first set of characteristics and a second reference dataset associated with a second set of characteristics, and to perform at least one of model training or model inference based on a mixture ratio of the first reference dataset and the second reference dataset.
  • the base station 102 may have a model training/inference component 199 that may be configured to provide a first request to perform model training based on a first mixture ratio of a first training dataset and a second training dataset, and to provide a second request to perform model inference based on a second mixture ratio of a first inference dataset and a second inference dataset.
  • FIG. 2A is a diagram 200 illustrating an example of a first subframe within a 5G NR frame structure.
  • FIG. 2B is a diagram 230 illustrating an example of DL channels within a 5G NR subframe.
  • FIG. 2C is a diagram 250 illustrating an example of a second subframe within a 5G NR frame structure.
  • FIG. 2D is a diagram 280 illustrating an example of UL channels within a 5G NR subframe.
  • the 5G NR frame structure may be frequency division duplexed (FDD) in which for a particular set of subcarriers (carrier system bandwidth) , subframes within the set of subcarriers are dedicated for either DL or UL, or may be time division duplexed (TDD) in which for a particular set of subcarriers (carrier system bandwidth) , subframes within the set of subcarriers are dedicated for both DL and UL.
  • the 5G NR frame structure is assumed to be TDD, with subframe 4 being configured with slot format 28 (with mostly DL), where D is DL, U is UL, and F is flexible for use between DL/UL, and subframe 3 being configured with slot format 1 (with all UL). While subframes 3, 4 are shown with slot formats 1, 28, respectively, any particular subframe may be configured with any of the various available slot formats 0-61. Slot formats 0 and 1 are all-DL and all-UL, respectively. Other slot formats 2-61 include a mix of DL, UL, and flexible symbols.
  • UEs are configured with the slot format (dynamically through DL control information (DCI) , or semi-statically/statically through radio resource control (RRC) signaling) through a received slot format indicator (SFI) .
  • FIGs. 2A-2D illustrate a frame structure, and the aspects of the present disclosure may be applicable to other wireless communication technologies, which may have a different frame structure and/or different channels.
  • a frame (10 ms) may be divided into 10 equally sized subframes (1 ms) .
  • Each subframe may include one or more time slots.
  • Subframes may also include mini-slots, which may include 7, 4, or 2 symbols.
  • Each slot may include 14 or 12 symbols, depending on whether the cyclic prefix (CP) is normal or extended.
  • the symbols on DL may be CP orthogonal frequency division multiplexing (OFDM) (CP-OFDM) symbols.
  • the symbols on UL may be CP-OFDM symbols (for high throughput scenarios) or discrete Fourier transform (DFT) spread OFDM (DFT-s-OFDM) symbols (for power limited scenarios; limited to a single stream transmission) .
  • the number of slots within a subframe is based on the CP and the numerology.
  • the numerology defines the subcarrier spacing (SCS) (see Table 1) .
  • the symbol length/duration may scale with 1/SCS.
  • For example, numerology 2 allows for 4 slots per subframe. Accordingly, for normal CP and numerology μ, there are 14 symbols/slot and 2^μ slots/subframe.
  • the symbol length/duration is inversely related to the subcarrier spacing.
  • For example, for numerology 2, the slot duration is 0.25 ms, the subcarrier spacing is 60 kHz, and the symbol duration is approximately 16.67 μs (see the worked example below).
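  • The following short Python check (illustrative only) reproduces the numerology relations above: subcarrier spacing 15 kHz · 2^μ, 2^μ slots per 1 ms subframe, and a useful symbol duration of 1/SCS.

```python
def numerology(mu: int):
    """Return (SCS in kHz, slots/subframe, slot duration in ms,
    useful symbol duration in us) for 5G NR numerology mu."""
    scs_khz = 15 * (2 ** mu)            # subcarrier spacing = 15 kHz * 2^mu
    slots_per_subframe = 2 ** mu        # a subframe is 1 ms
    slot_ms = 1.0 / slots_per_subframe  # slot duration
    symbol_us = 1000.0 / scs_khz        # useful symbol duration = 1/SCS (CP excluded)
    return scs_khz, slots_per_subframe, slot_ms, symbol_us

print(numerology(2))  # (60, 4, 0.25, 16.666...) -- matches the values above
```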
  • A carrier may include one or more bandwidth parts (BWPs) that are frequency division multiplexed.
  • Each BWP may have a particular numerology and CP (normal or extended) .
  • a resource grid may be used to represent the frame structure.
  • Each time slot includes a resource block (RB) (also referred to as a physical RB (PRB)) that extends across 12 consecutive subcarriers.
  • the resource grid is divided into multiple resource elements (REs) . The number of bits carried by each RE depends on the modulation scheme.
  • the RS may include demodulation RS (DM-RS) (indicated as R for one particular configuration, but other DM-RS configurations are possible) and channel state information reference signals (CSI-RS) for channel estimation at the UE.
  • the RS may also include beam measurement RS (BRS) , beam refinement RS (BRRS) , and phase tracking RS (PT-RS) .
  • FIG. 2B illustrates an example of various DL channels within a subframe of a frame.
  • the physical downlink control channel (PDCCH) carries DCI within one or more control channel elements (CCEs) (e.g., 1, 2, 4, 8, or 16 CCEs) , each CCE including six RE groups (REGs) , each REG including 12 consecutive REs in an OFDM symbol of an RB.
  • a PDCCH within one BWP may be referred to as a control resource set (CORESET) .
  • a UE is configured to monitor PDCCH candidates in a PDCCH search space (e.g., common search space, UE-specific search space) during PDCCH monitoring occasions on the CORESET, where the PDCCH candidates have different DCI formats and different aggregation levels. Additional BWPs may be located at greater and/or lower frequencies across the channel bandwidth.
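  • As a quick check of the PDCCH sizing described above (each CCE contains six REGs, each REG 12 REs), the following illustrative snippet computes the number of resource elements per PDCCH candidate at each aggregation level.

```python
# Each CCE = 6 REGs; each REG = 12 REs (per the description above).
REGS_PER_CCE = 6
RES_PER_REG = 12

for aggregation_level in (1, 2, 4, 8, 16):
    res = aggregation_level * REGS_PER_CCE * RES_PER_REG
    print(f"AL{aggregation_level}: {res} REs")  # e.g., AL1: 72 REs, AL16: 1152 REs
```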
  • a primary synchronization signal (PSS) may be within symbol 2 of particular subframes of a frame. The PSS is used by a UE 104 to determine subframe/symbol timing and a physical layer identity.
  • a secondary synchronization signal (SSS) may be within symbol 4 of particular subframes of a frame. The SSS is used by a UE to determine a physical layer cell identity group number and radio frame timing.
  • the UE can determine a physical cell identifier (PCI) . Based on the PCI, the UE can determine the locations of the DM-RS.
  • the physical broadcast channel (PBCH), which carries a master information block (MIB), may be logically grouped with the PSS and SSS to form a synchronization signal (SS)/PBCH block (also referred to as SS block (SSB)).
  • the MIB provides a number of RBs in the system bandwidth and a system frame number (SFN) .
  • the physical downlink shared channel (PDSCH) carries user data, broadcast system information not transmitted through the PBCH such as system information blocks (SIBs) , and paging messages.
  • some of the REs carry DM-RS (indicated as R for one particular configuration, but other DM-RS configurations are possible) for channel estimation at the base station.
  • the UE may transmit DM-RS for the physical uplink control channel (PUCCH) and DM-RS for the physical uplink shared channel (PUSCH) .
  • the PUSCH DM-RS may be transmitted in the first one or two symbols of the PUSCH.
  • the PUCCH DM-RS may be transmitted in different configurations depending on whether short or long PUCCHs are transmitted and depending on the particular PUCCH format used.
  • the UE may transmit sounding reference signals (SRS) .
  • the SRS may be transmitted in the last symbol of a subframe.
  • the SRS may have a comb structure, and a UE may transmit SRS on one of the combs.
  • the SRS may be used by a base station for channel quality estimation to enable frequency-dependent scheduling on the UL.
  • FIG. 2D illustrates an example of various UL channels within a subframe of a frame.
  • the PUCCH may be located as indicated in one configuration.
  • the PUCCH carries uplink control information (UCI) , such as scheduling requests, a channel quality indicator (CQI) , a precoding matrix indicator (PMI) , a rank indicator (RI) , and hybrid automatic repeat request (HARQ) acknowledgment (ACK) (HARQ-ACK) feedback (i.e., one or more HARQ ACK bits indicating one or more ACK and/or negative ACK (NACK) ) .
  • the PUSCH carries data, and may additionally be used to carry a buffer status report (BSR) , a power headroom report (PHR) , and/or UCI.
  • FIG. 3 is a block diagram of a base station 310 in communication with a UE 350 in an access network.
  • In the DL, Internet protocol (IP) packets may be provided to the controller/processor 375.
  • the controller/processor 375 implements layer 3 and layer 2 functionality.
  • Layer 3 includes a radio resource control (RRC) layer
  • layer 2 includes a service data adaptation protocol (SDAP) layer, a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, and a medium access control (MAC) layer.
  • the controller/processor 375 provides RRC layer functionality associated with broadcasting of system information (e.g., MIB, SIBs), RRC connection control (e.g., RRC connection paging, RRC connection establishment, RRC connection modification, and RRC connection release), inter radio access technology (RAT) mobility, and measurement configuration for UE measurement reporting; PDCP layer functionality associated with header compression/decompression, security (ciphering, deciphering, integrity protection, integrity verification), and handover support functions; RLC layer functionality associated with the transfer of upper layer packet data units (PDUs), error correction through ARQ, concatenation, segmentation, and reassembly of RLC service data units (SDUs), re-segmentation of RLC data PDUs, and reordering of RLC data PDUs; and MAC layer functionality associated with mapping between logical channels and transport channels, multiplexing of MAC SDUs onto transport blocks (TBs), demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction through HARQ, priority handling, and logical channel prioritization.
  • the transmit (TX) processor 316 and the receive (RX) processor 370 implement layer 1 functionality associated with various signal processing functions.
  • Layer 1 which includes a physical (PHY) layer, may include error detection on the transport channels, forward error correction (FEC) coding/decoding of the transport channels, interleaving, rate matching, mapping onto physical channels, modulation/demodulation of physical channels, and MIMO antenna processing.
  • the TX processor 316 handles mapping to signal constellations based on various modulation schemes (e.g., binary phase-shift keying (BPSK) , quadrature phase-shift keying (QPSK) , M-phase-shift keying (M-PSK) , M-quadrature amplitude modulation (M-QAM) ) .
  • the coded and modulated symbols may then be split into parallel streams.
  • Each stream may then be mapped to an OFDM subcarrier, multiplexed with a reference signal (e.g., pilot) in the time and/or frequency domain, and then combined together using an Inverse Fast Fourier Transform (IFFT) to produce a physical channel carrying a time domain OFDM symbol stream.
  • the OFDM stream is spatially precoded to produce multiple spatial streams.
  • Channel estimates from a channel estimator 374 may be used to determine the coding and modulation scheme, as well as for spatial processing.
  • the channel estimate may be derived from a reference signal and/or channel condition feedback transmitted by the UE 350.
  • Each spatial stream may then be provided to a different antenna 320 via a separate transmitter 318Tx.
  • Each transmitter 318Tx may modulate a radio frequency (RF) carrier with a respective spatial stream for transmission.
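  • The following minimal NumPy sketch (illustrative only, with pilots, precoding, and the cyclic prefix omitted) shows the core of the transmit step described above: modulated symbols are mapped to subcarriers and combined with an inverse FFT, which the receiver-side FFT undoes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subcarriers = 64  # illustrative FFT size

# QPSK symbols, one per subcarrier (reference-signal multiplexing omitted).
bits = rng.integers(0, 2, size=(n_subcarriers, 2))
qpsk = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

time_domain = np.fft.ifft(qpsk)      # one time-domain OFDM symbol
recovered = np.fft.fft(time_domain)  # receiver FFT recovers the subcarriers
assert np.allclose(recovered, qpsk)  # exact round trip (no channel, no CP)
```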
  • each receiver 354Rx receives a signal through its respective antenna 352.
  • Each receiver 354Rx recovers information modulated onto an RF carrier and provides the information to the receive (RX) processor 356.
  • the TX processor 368 and the RX processor 356 implement layer 1 functionality associated with various signal processing functions.
  • the RX processor 356 may perform spatial processing on the information to recover any spatial streams destined for the UE 350. If multiple spatial streams are destined for the UE 350, they may be combined by the RX processor 356 into a single OFDM symbol stream.
  • the RX processor 356 then converts the OFDM symbol stream from the time-domain to the frequency domain using a Fast Fourier Transform (FFT) .
  • the frequency domain signal includes a separate OFDM symbol stream for each subcarrier of the OFDM signal.
  • the symbols on each subcarrier, and the reference signal are recovered and demodulated by determining the most likely signal constellation points transmitted by the base station 310. These soft decisions may be based on channel estimates computed by the channel estimator 358.
  • the soft decisions are then decoded and deinterleaved to recover the data and control signals that were originally transmitted by the base station 310 on the physical channel.
  • the data and control signals are then provided to the controller/processor 359, which implements layer 3 and layer 2 functionality.
  • the controller/processor 359 can be associated with a memory 360 that stores program codes and data.
  • the memory 360 may be referred to as a computer-readable medium.
  • the controller/processor 359 provides demultiplexing between transport and logical channels, packet reassembly, deciphering, header decompression, and control signal processing to recover IP packets.
  • the controller/processor 359 is also responsible for error detection using an ACK and/or NACK protocol to support HARQ operations.
  • the controller/processor 359 provides RRC layer functionality associated with system information (e.g., MIB, SIBs) acquisition, RRC connections, and measurement reporting; PDCP layer functionality associated with header compression /decompression, and security (ciphering, deciphering, integrity protection, integrity verification) ; RLC layer functionality associated with the transfer of upper layer PDUs, error correction through ARQ, concatenation, segmentation, and reassembly of RLC SDUs, re-segmentation of RLC data PDUs, and reordering of RLC data PDUs; and MAC layer functionality associated with mapping between logical channels and transport channels, multiplexing of MAC SDUs onto TBs, demultiplexing of MAC SDUs from TBs, scheduling information reporting, error correction through HARQ, priority handling, and logical channel prioritization.
  • Channel estimates derived by a channel estimator 358 from a reference signal or feedback transmitted by the base station 310 may be used by the TX processor 368 to select the appropriate coding and modulation schemes, and to facilitate spatial processing.
  • the spatial streams generated by the TX processor 368 may be provided to different antenna 352 via separate transmitters 354Tx. Each transmitter 354Tx may modulate an RF carrier with a respective spatial stream for transmission.
  • the UL transmission is processed at the base station 310 in a manner similar to that described in connection with the receiver function at the UE 350.
  • Each receiver 318Rx receives a signal through its respective antenna 320.
  • Each receiver 318Rx recovers information modulated onto an RF carrier and provides the information to a RX processor 370.
  • the controller/processor 375 can be associated with a memory 376 that stores program codes and data.
  • the memory 376 may be referred to as a computer-readable medium.
  • the controller/processor 375 provides demultiplexing between transport and logical channels, packet reassembly, deciphering, header decompression, and control signal processing to recover IP packets.
  • the controller/processor 375 is also responsible for error detection using an ACK and/or NACK protocol to support HARQ operations.
  • At least one of the TX processor 368, the RX processor 356, and the controller/processor 359 may be configured to perform aspects in connection with the model training/inference component 198 of FIG. 1.
  • At least one of the TX processor 316, the RX processor 370, and the controller/processor 375 may be configured to perform aspects in connection with the model training/inference component 199 of FIG. 1.
  • the UE and the network may perform various aspects of beam management in order to select a beam for transmission and reception.
  • beam management may be performed using a tracking reference signal (TRS) , e.g., for a UE in an RRC inactive or RRC idle state.
  • a UE may use an SSB, e.g., with a wide beam sweeping procedure to identify a beam to use for initial access.
  • For contention-based random access (CBRA), a UE may use a random access occasion (RO) and a preamble that corresponds to the selected SSB/beam.
  • the UE and/or network may perform various aspects of beam management, e.g., including P1, P2, and P3 procedures using SSB or CSI-RS measurements; U1, U2, and U3 procedures using SRS transmissions and measurements; and layer 1 (L1) reference signal received power (RSRP) reporting.
  • the network may configure one or more transmission configuration indicator (TCI) state configurations for the UE, and may indicate a TCI state for the UE from the configured set of TCI states.
  • the UE may provide L1-signal-to-interference-plus-noise ratio (SINR) reporting, which may reduce overhead and latency and allow for CC group beam updates or faster UL beam updates.
  • the UE may communicate with the network using unified TCI states, L1/layer 2 (L2) centric mobility (which may also be referred to as L1/L2-triggered mobility (LTM)), dynamic TCI updates, and/or uplink multi-panel selection, maximum permissible exposure (MPE) mitigation, further beam management latency reduction, etc.
  • Beam management may be employed for particular scenarios, such as high speed (e.g., high speed train (HST) ) , single frequency network (SFN) , multiple transmission reception points (mTRP) , among other examples.
  • a UE may perform a beam failure detection (BFD) process and may perform a beam failure recovery (BFR) process.
  • the BFD or BFR may be for a primary cell (PCell) or a primary secondary cell (PSCell) .
  • BFD may be based on a BFD reference signal (BFD-RS) and a PDCCH block error rate (BLER) .
  • the BFR may be based on a contention free random access (CFRA) .
  • the BFD and BFR may include a link recovery request via a scheduling request (SR) , or a MAC-CE based BFR for the SCell. If the BFR is unsuccessful, the UE may identify a radio link failure.
  • Some wireless communication may include the use of AI or ML at the network and/or at the UE.
  • AI/ML may be used for beam management at a UE and/or a network, including for performing beam predictions in a time domain and/or spatial domain.
  • the use of an AI/ML model may reduce latency or overhead and may improve the accuracy of beam selection.
  • Models may be provided that support various levels of network and UE collaboration and to support various use cases.
  • the use of an AI/ML model may include various aspects such as model training, model deployment, model inference, model monitoring, and model updating.
  • BM-Case1: spatial-domain downlink beam prediction for a first set of beams (Set A) may be based on measurement results of a second set of beams (Set B).
  • BM-Case2: temporal downlink beam prediction for Set A may be based on the historic measurement results of Set B.
  • the beams in Set A and Set B may be in the same frequency range.
  • Set B may be a subset of Set A.
  • Set A and Set B may be different (e.g., Set A includes narrow beams and Set B includes wide beams) .
  • Set A may be for downlink beam prediction and Set B may be for downlink beam measurement.
  • for BM-Case1, L1 signaling may be utilized to report various information of the AI/ML model inference to the network. Such information may include the beam(s) that are based on the output of the AI/ML model inference, the predicted L1-RSRP corresponding to the beam(s), etc.
  • for BM-Case2, L1 signaling may be utilized to report various information of the AI/ML model inference to the network. Such information may include the beam(s) of N future time instance(s) that are based on the output of the AI/ML model, where N is any positive integer, the predicted L1-RSRP corresponding to the beam(s), information about the timestamp corresponding to the reported beam(s), etc.
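  • purely as an illustrative sketch of how such a report could be assembled (hypothetical Python; the structure and field names below are assumptions, not a standardized L1 payload), consider:

      from dataclasses import dataclass

      @dataclass
      class PredictedBeamReport:
          # Hypothetical report fields; not a standardized payload format.
          beam_ids: list           # beam(s) based on the AI/ML model output
          predicted_l1_rsrp: list  # predicted L1-RSRP per beam, in dBm
          timestamps: list         # future time instance(s) the predictions apply to

      # Example: a report for N = 2 future time instances.
      report = PredictedBeamReport(
          beam_ids=[7, 12],
          predicted_l1_rsrp=[-82.5, -85.0],
          timestamps=[10, 20],
      )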
  • UE-side model monitoring may be utilized at the UE for both BM-Case1 and BM-Case2.
  • the UE may monitor the performance metric (s) and make decision (s) pertaining to model selection, activation, deactivation, switching, or fallback operation (s) .
  • the network may monitor the performance metric (s) and make decision (s) pertaining to model selection, activation, deactivation, switching, or fallback operation (s) .
  • the UE may monitor the performance metric (s) and the network may make decision (s) pertaining to model selection, activation, deactivation, switching, or fallback operation (s) , or vice versa.
  • beam measurements and reporting may also be performed for model monitoring and/or the UE may report the measurement results of more than four beams in one reporting instance.
  • FIG. 4 is a diagram 400 illustrating an AI/ML algorithm for wireless communication and illustrating various aspects of model training, model inference, model feedback, and model update.
  • the AI/ML algorithm may include various functions including a data collection function 402, a model training function 404, a model inference function 406, and an actor function 408.
  • Various aspects described in connection with FIG. 4 may be performed by one or more entities in a wireless communication system.
  • the data collection, model training, model inference, and action based on the model inference may occur at a UE.
  • the data collection, model training, model inference, and action based on the model inference may occur at the network.
  • the data collection may occur at the UE and may be provided to the network, which performs the model training and/or model inference.
  • the output may be used at the network or may be provided to a UE, which may perform an action based on the output.
  • the data collection may be performed at the network and may be provided to a UE, which may perform the model training and/or model inference.
  • the UE may use the output to perform an action or may provide the output to the network.
  • the data collection function 402 may be a function that provides input data to the model training function 404 and the model inference function 406.
  • the data collection function 402 may include any form of data preparation, and it may not be specific to the implementation of the AI/ML algorithm (e.g., data pre-processing and cleaning, formatting, and transformation) .
  • Examples of input data may include, but are not limited to, measurements, such as RSRP measurements, channel measurements, or other uplink/downlink transmissions, from entities including UEs or network nodes, feedback from the actor function 408 (e.g., which may be a UE or network node) , output from another AI/ML model, etc.
  • the data collection function 402 may include training data, which refers to the data to be sent as the input for the model training function 404, and inference data, which refers to the data to be sent as the input for the model inference function 406.
  • the model training function 404 may be a function that performs the ML model training, validation, and testing, which may generate model performance metrics as part of the model testing procedure.
  • the model training function 404 may also be responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on the training data delivered or received from the data collection function 402.
  • the model training function 404 may deploy or update a trained, validated, and tested AI/ML model to the model inference function 406, and receive a model performance feedback from the model inference function 406.
  • the model inference function 406 may be a function that provides an AI/ML model inference output (e.g., predictions or decisions) .
  • the model inference function 406 may also perform data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on the inference data delivered from the data collection function 402.
  • the output of the model inference function 406 may include the inference output of the AI/ML model produced by the model inference function 406.
  • the details of the inference output may be use case specific.
  • the output may include a beam prediction for beam management.
  • the prediction may be for the network or may be for the UE.
  • the actor function 408 may be a component of the base station or of a core network. In other aspects, the actor function 408 may be a UE in communication with a wireless network.
  • the model performance feedback may refer to information derived from the model inference function 406 that may be suitable for the improvement of the AI/ML model trained in the model training function 404.
  • the feedback from the actor function 408 or other network entities may be used by the model inference function 406 to create the model performance feedback.
  • the actor function 408 may be a function that receives the output from the model inference function 406 and triggers or performs corresponding actions.
  • the actor function 408 may trigger actions directed to network entities including the other network entities or itself.
  • the actor function 408 may also provide feedback information that the model training function 404 or the model inference function 406 may use to derive training or inference data or performance feedback. The feedback may be transmitted back to the data collection function 402.
  • the network and/or a UE may use machine-learning algorithms, deep-learning algorithms, neural networks, reinforcement learning, regression, boosting, or advanced signal processing methods for aspects of wireless communication including the various functionalities such as beam management, CSF, or positioning, among other examples.
  • the network and/or a UE may train one or more neural networks to learn the dependence of measured qualities on individual parameters.
  • machine learning models or neural networks that may be included in the network entity include artificial neural networks (ANN) ; decision tree learning; convolutional neural networks (CNNs) ; deep learning architectures in which an output of a first layer of neurons becomes an input to a second layer of neurons, and so forth; support vector machines (SVM) , e.g., including a separating hyperplane (e.g., decision boundary) that categorizes data; regression analysis; Bayesian networks; genetic algorithms; deep convolutional networks (DCNs) configured with additional pooling and normalization layers; and deep belief networks (DBNs) .
  • a machine learning model, such as an artificial neural network (ANN), may include an interconnected group of artificial neurons (e.g., neuron models), and the connections of the neuron models may be modeled as weights.
  • Machine learning models may provide predictive modeling, adaptive control, and other applications through training via a dataset.
  • the model may be adaptive based on external or internal information that is processed by the machine learning model.
  • Machine learning may provide non-linear statistical data modeling or decision making and may model complex relationships between input data and output information.
  • a machine learning model may include multiple layers and/or operations that may be formed by the concatenation of one or more of the referenced operations. Examples of operations that may be involved include extraction of various features of data, convolution operations, fully connected operations that may be activated or deactivated, compression, decompression, quantization, flattening, etc.
  • a “layer” of a machine learning model may be used to denote an operation on input data. For example, a convolution layer, a fully connected layer, and/or the like may be used to refer to associated operations on data that is input into a layer.
  • a convolution AxB operation refers to an operation that converts a number of input features A into a number of output features B.
  • Kernel size may refer to a number of adjacent coefficients that are combined in a dimension.
  • weight may be used to denote one or more coefficients used in the operations in the layers for combining various rows and/or columns of input data. For example, a fully connected layer operation may have an output y that is determined based at least in part on a sum of a product of input matrix x and weights A (which may be a matrix) and bias values B (which may be a matrix) .
  • weights may be used herein to generically refer to both weights and bias values. Weights and biases are examples of parameters of a trained machine learning model. Different layers of a machine learning model may be trained separately.
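  • as a minimal illustration of the fully connected operation described above (hypothetical Python, not part of the disclosure), the output y may be computed from the input x, the weight matrix A, and the bias values B as follows:

      import numpy as np

      def fully_connected(x, A, B):
          # Output y is the product of weights A and input x, plus bias values B.
          return A @ x + B

      x = np.array([1.0, 2.0, 3.0])  # input
      A = np.full((2, 3), 0.5)       # weight matrix: 2 outputs, 3 inputs
      B = np.array([0.1, -0.1])      # bias values
      y = fully_connected(x, A, B)   # -> array([3.1, 2.9])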
  • Machine learning models may include a variety of connectivity patterns, e.g., any feed-forward networks, hierarchical layers, recurrent architectures, feedback connections, etc.
  • the connections between layers of a neural network may be fully connected or locally connected.
  • a neuron in a first layer may communicate its output to each neuron in a second layer, and each neuron in the second layer may receive input from every neuron in the first layer.
  • a neuron in a first layer may be connected to a limited number of neurons in the second layer.
  • a convolutional network may be locally connected and configured with shared connection strengths associated with the inputs for each neuron in the second layer.
  • a locally connected layer of a network may be configured such that each neuron in a layer has the same, or similar, connectivity pattern, but with different connection strengths.
  • a machine learning model or neural network may be trained.
  • a machine learning model may be trained based on supervised learning.
  • the machine learning model may be presented with input that the model uses to compute to produce an output.
  • the actual output may be compared to a target output, and the difference may be used to adjust parameters (such as weights and biases) of the machine learning model in order to provide an output closer to the target output.
  • the output may be incorrect or less accurate, and an error, or difference, may be calculated between the actual output and the target output.
  • the weights of the machine learning model may then be adjusted so that the output is more closely aligned with the target.
  • a learning algorithm may compute a gradient vector for the weights.
  • the gradient may indicate an amount that an error would increase or decrease if the weight were adjusted slightly.
  • the gradient may correspond directly to the value of a weight connecting an activated neuron in the penultimate layer and a neuron in the output layer.
  • the gradient may depend on the value of the weights and on the computed error gradients of the higher layers.
  • the weights may then be adjusted so as to reduce the error or to move the output closer to the target. This manner of adjusting the weights may be referred to as back propagation through the neural network. The process may continue until an achievable error rate stops decreasing or until the error rate has reached a target level.
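  • a toy numeric sketch of this supervised weight adjustment, assuming a single linear layer and a mean-squared error (illustrative only, not the disclosed training procedure):

      import numpy as np

      # Toy supervised training: adjust the weights so the output moves
      # closer to the target.
      rng = np.random.default_rng(0)
      x = rng.normal(size=(100, 3))        # training inputs
      w_true = np.array([1.0, -2.0, 0.5])  # unknown relationship to learn
      t = x @ w_true                       # target outputs

      w = np.zeros(3)                      # initial weights
      lr = 0.1                             # learning rate
      for _ in range(200):
          y = x @ w                        # actual output
          err = y - t                      # difference from the target
          grad = x.T @ err / len(x)        # gradient vector for the weights
          w -= lr * grad                   # adjust weights to reduce the error
      # w now closely approximates w_true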
  • training the machine learning models may involve computational complexity and substantial processing.
  • An output of one node is connected as the input to another node. Connections between nodes may be referred to as edges, and weights may be applied to the connections/edges to adjust the output from one node that is applied as input to another node.
  • Nodes may apply thresholds in order to determine whether, or when, to provide output to a connected node.
  • the output of each node may be calculated as a non-linear function of a sum of the inputs to the node.
  • the neural network may include any number of nodes and any type of connections between nodes.
  • the neural network may include one or more hidden nodes. Nodes may be aggregated into layers, and different layers of the neural network may perform different kinds of transformations on the input.
  • a signal may travel from an input at a first layer through the multiple layers of the neural network to an output at the last layer of the neural network and may traverse layers multiple times.
  • one solution is to utilize a dataset mixture and a ratio-defined target test dataset.
  • real-world datasets may be categorized into different sub-groups, where the data in each sub-group share identical characteristics.
  • Such a categorization may be performed either at the network-side (e.g., at a network node or at a component of the core network) , where the data may be downloaded by the UE from the network, or at the UE side, where the data may be used by the UE to locally train its own models.
  • a particular number of sub-groups may be configured or defined (e.g., defined in a wireless standard, pre-determined, pre-defined, configured by the network for the UE, pre-configured in advance of being indicated, etc. ) , where a real-world test dataset may be identified based on a mixture of certain such sub-groups with a certain mixture ratio. This may result in a better model training/inference trade-off between efficiency and performance.
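  • one possible sketch of such a ratio-defined mixture, assuming the sub-groups are already available as in-memory sample lists (the helper below is hypothetical, not part of the disclosed aspects):

      import random

      def mix_subgroups(subgroup_a, subgroup_b, ratio=(1, 2), size=1000, seed=0):
          # Build a dataset mixture by drawing from two sub-groups so that,
          # on average, one sample comes from sub-group A for every two
          # samples from sub-group B (for ratio=(1, 2)).
          rng = random.Random(seed)
          p_a = ratio[0] / (ratio[0] + ratio[1])
          return [
              rng.choice(subgroup_a) if rng.random() < p_a else rng.choice(subgroup_b)
              for _ in range(size)
          ]

      # Example with stand-in samples for two sub-groups:
      mixture = mix_subgroups(["a"] * 10, ["b"] * 10, ratio=(1, 2), size=9)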
  • the network may request the UE to train a model matching with such a mixture ratio.
  • the network may request the UE to choose a model based on a certain mixture ratio (to match with the real-world test dataset at the current region) , where the UE may have prepared (e.g., trained) a number of models accordingly, and the one that best matches with the request may be used for OTA inference.
  • signaling between the network and the UE may facilitate such a dataset mixture, considering UE-side model training or inference.
  • such aspects may be utilized for beam prediction.
  • the UE may identify a training dataset mixture for predictive beam management.
  • the UE may identify a ratio between two datasets upon training/inference of an AI/ML model, where the two datasets include different characteristics.
  • the identification of the dataset mixture may be further based on downloading such two respective datasets from a network entity.
  • the model training 404 and/or model inference 406 may occur at the UE based on the data collection 402 received at the UE from the network.
  • the two respective datasets may be defined.
  • the ratio for mixing the two datasets for model training may be defined, or alternatively, indicated by the network entity.
  • a set of various ratio combinations for mixing multiple datasets may be defined, e.g., in a wireless standard, and known by the UE and the network.
  • the network may configure, or otherwise indicate for, the UE to use one of the defined ratio combinations of the multiple datasets for model training and/or model inference.
  • the network may configure, or indicate, a set of different ratio combinations for combining multiple datasets. The network may then indicate for the UE to use one of the configured ratio combinations of the multiple datasets for model training and/or model inference.
  • the identification of dataset mixture may be further based on collecting such two respective datasets at the UE-side locally, where the ratio for mixing the two datasets for model training or model inference may be indicated by a network entity.
  • the data collection 402 in FIG. 4 may be performed at the UE, and the UE may use the training data for the model training 404 and/or the model inference 406.
  • the aspects described herein may be extended to cases considering more than two datasets.
  • the different characteristics included in the different datasets may be captured by a similarity coefficient between a certain data set and another test dataset targeting model inference. That is, the similarity coefficient may indicate a level of similarity between a dataset and a target dataset.
  • the characteristics of the test dataset may be defined, indicated by a network entity, or identified by the UE.
  • the first dataset 504 may include data that is based on transmit beams with 3 decibels (dBs) and a field of view (FoV) of 10 degrees, and the second dataset 506 may include data that is based on transmit beams with 3 dBs and an FoV of 30 degrees.
  • the dataset mixture 502 may include data that is based on transmit beams with 3 dBs and an FoV of 25 degrees.
  • the similarity coefficient may be within the range of 0% to 100% with respect to the target test dataset.
  • the first dataset 504 may be associated with a similarity coefficient being a first percentage (e.g., 70%), and the second dataset 506 may be associated with a similarity coefficient being a second percentage (e.g., 50%).
  • the mixture ratio may be a quantitative relation between two amounts (e.g., 1:2).
  • the training datasets may be downloaded from a network entity, and the characteristics of the training datasets and the target test dataset may be indicated by the network entity.
  • the similarity coefficients of the training sets with respect to the target test dataset may be determined (e.g., calculated) by the UE.
  • the first training dataset (e.g., the first dataset 504) may be calculated by the UE to have a similarity coefficient corresponding to a first percentage (e.g., 70%) with respect to the target dataset, and the second training dataset (e.g., the second dataset 506) may be calculated by the UE to have a similarity coefficient corresponding to a second percentage (e.g., 50%) with respect to the target dataset.
  • the mixture ratio may be a quantitative relation between two amounts (e.g., 1:2).
  • in other aspects, the mixture ratio may be another quantitative relation (e.g., 2:3).
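  • a minimal sketch of how a UE might record the similarity coefficients and apply an indicated mixture ratio to a training budget (the exact dependence of the ratio on the coefficients is left open here; everything below is an assumption):

      from math import floor

      # Similarity coefficients (percent) of each training dataset with
      # respect to the target test dataset, e.g., as calculated by the UE.
      # The indicated mixture ratio may depend on these coefficients.
      similarity = {"dataset_504": 70.0, "dataset_506": 50.0}

      # The mixture ratio may be indicated by the network entity or defined;
      # here it is simply taken as given (e.g., 1:2).
      indicated_ratio = (1, 2)

      def samples_per_dataset(ratio, total):
          # Split a training budget of `total` samples according to the ratio.
          a, b = ratio
          n_a = floor(total * a / (a + b))
          return n_a, total - n_a

      n_504, n_506 = samples_per_dataset(indicated_ratio, 3000)  # -> (1000, 2000)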
  • the UE may be requested by the network to perform model training based on the dataset mixture.
  • the UE, responsive to receiving the request, may perform the model training, where the UE may train at least one machine learning model based on the mixture ratio of the first dataset 504 and the second dataset 506. That is, the UE may determine a dataset mixture (e.g., the dataset mixture 502) based on the mixture ratio and train at least one machine learning model utilizing the dataset mixture.
  • when the mixture ratio is based on a first percentage associated with the first dataset 504 and a second percentage associated with the second dataset 506, data from the first dataset 504 may be provided to the machine learning model during training based on the first percentage, and data from the second dataset 506 may be provided to the machine learning model during training based on the second percentage.
  • for example, if the first percentage is 70% and the second percentage is 30%, then 70% of the data provided during training comes from the first dataset 504 and 30% of the data provided during training comes from the second dataset 506.
  • the foregoing techniques generate a dataset mixture 502 utilized for training that is based on the first and second percentages.
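  • a sketch of the percentage-based mixing described above, assuming in-memory datasets and the 70%/30% example (the helper name is hypothetical):

      import random

      def build_training_mixture(ds_a, ds_b, pct_a=0.7, size=1000, seed=0):
          # pct_a of the training samples are drawn from ds_a (e.g., 70%),
          # and the remainder from ds_b (e.g., 30%).
          rng = random.Random(seed)
          n_a = int(size * pct_a)
          mixture = rng.choices(ds_a, k=n_a) + rng.choices(ds_b, k=size - n_a)
          rng.shuffle(mixture)  # interleave so training batches see both datasets
          return mixture

      dataset_502 = build_training_mixture(["504"] * 5, ["506"] * 5, pct_a=0.7, size=10)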
  • the identification of a dataset mixture 502 may be based on collecting two respective datasets (e.g., a first dataset 504 and a second dataset 506) at the UE side locally.
  • the ratio for mixing the two datasets for model training and model inference may be indicated by a network entity or may be defined.
  • a network entity may provide an indication (e.g., a suggestion) to train a particular AI/ML model.
  • the ratio of mixing the two different datasets may be dependent on their respective similarity coefficients associated with the target test dataset.
  • the target test dataset’s characteristics may be separately indicated by the network entity.
  • a UE may retrieve the target training dataset’s characteristics (e.g., beam shapes) via network node RRC configurations.
  • the similarity coefficients between the UE’s local training datasets and the target training dataset may be determined (e.g., calculated) by the UE (e.g., a first similarity coefficient between the first dataset 504 and the target dataset may be a first percentage (e.g., 70%) , and a second similarity coefficient between the second dataset 506 and the target dataset may be a second percentage (e.g., 50%) ) .
  • the network entity may indicate to the UE that the ratio for mixing the datasets during training may be a quantitative relation between two amounts (e.g., 1:2, where the similarity coefficients are 70% and 50%, respectively).
  • the mixture ratio may be defined (e.g., the defined mixture ratio may be 1:2, where the similarity coefficients are 70% and 50%, respectively).
  • the network entity may indicate (e.g., suggest) that, for a particular AI/ML-based beam prediction task, the model inference may be based on using a model that was trained via a mixture of two datasets with a particular mixture ratio (e.g., 1:2). That is, the network entity may indicate the target test dataset based on the mixture ratio of characteristics.
  • the first inference dataset may have a similarity coefficient associated with a first percentage (e.g., 70%) with respect to the target test dataset, and a second inference dataset may have a similarity coefficient associated with a second percentage (e.g., 50%) with respect to the target test dataset.
  • the first inference dataset may have a first level of similarity to the target test dataset (e.g., the first inference dataset may be 70% similar to the target test dataset), and the second inference dataset may have a second level of similarity to the target test dataset (e.g., the second inference dataset may be 50% similar to the target test dataset).
  • the characteristics of the target test dataset may be defined or signaled by a network entity (e.g., via network node RRC configurations) .
  • the indication from the network entity that indicates that model inference is to be based on using a model that was trained via a mixture of two datasets with a particular mixture ratio may be provided to a UE via RRC, medium access control (MAC) , or DCI signaling, which may be associated with a particular CSI report carrying predictive beam characteristics.
  • the UE may select a model from a plurality of candidate machine learning models trained at the UE. The UE may select the model based on the indications (e.g., the requested dataset mixture) provided by the network.
  • the candidate models may be transparent to the network and may be trained by the UE locally, prior to receiving a request from the network to perform model inference. Each of the candidate models may be based on different combinations of the datasets maintained at the UE.
  • the selected model may be used to perform the requested inference task (e.g., to predict at least one of a transmit beam or a receive beam from a plurality of beams for the transmission or reception of a signal) .
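  • one way a UE might keep its locally trained candidate models keyed by training mixture ratio and select the best match for a requested inference (the registry layout and matching rule below are assumptions):

      # Candidate models trained locally at the UE, keyed by the mixture
      # ratio of the training datasets each was trained on. The string
      # values stand in for actual trained model objects.
      candidate_models = {
          (1, 1): "model_1to1",
          (1, 2): "model_1to2",
          (2, 3): "model_2to3",
      }

      def select_model(requested_ratio, registry):
          # Pick the candidate whose normalized first-dataset fraction is
          # closest to the requested one (a simple matching rule, assumed here).
          req = requested_ratio[0] / sum(requested_ratio)
          best = min(registry, key=lambda r: abs(r[0] / sum(r) - req))
          return registry[best]

      model = select_model((1, 2), candidate_models)  # -> "model_1to2"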
  • the characteristics may include general characteristics and characteristics pertaining to beam prediction AI/ML models.
  • general characteristics include, but are not limited to, operation scenarios or an environment in which the UE is located (e.g., a dense urban environment, an indoor environment, a rural environment, etc.), channel profiles (e.g., the range of delay spread or Doppler of a channel via which the UE communicates), one or more parameters of the network node/cell (e.g., the size/type/orientation of the network node antenna arrays, the number of beams in the codebook, the transmit power of the network node), the capabilities of the UE, etc.
  • characteristics pertaining to beam prediction AI/ML models include, but are not limited to, a distribution (e.g., a maximum, a minimum, a mean, a variance, a standard deviation, a probability density function (PDF), a cumulative distribution function (CDF)) of one or more measured/reported signal metrics (e.g., L1-RSRPs, L1-SINRs, etc.), a reliability or accuracy level of the measurements to be used as AI/ML model inputs or to be used as prediction targets, UE mobility characteristics (e.g., moving direction/speed, rotation direction/speed, orientation), transmit beam shapes (e.g., ranges of beam pointing directions (or directions that a beam may point) or beam widths of measurement resources or prediction targets), and receive beam shapes (e.g., ranges of beam pointing directions (or directions that a beam may point) or beam widths to measure the signal metrics (e.g., L1-RSRPs/L1-SINRs)), etc.
  • FIG. 6 is a call flow diagram 600 illustrating a method of wireless communication in accordance with various aspects of the present disclosure.
  • the diagram 600 includes a network entity 602 and a UE 604.
  • the UE 604 may be an example of the UE 104 or the UE 350.
  • the network entity 602 may be an example of the base station 102, the base station 310, or a component of the core network 120. Although aspects are described for the network entity 602, the aspects may be performed by the network entity 602 in aggregation and/or by one or more components of the network entity 602 (e.g., such as a CU 110, a DU 130, and/or an RU 140) .
  • the network entity 602 may provide a request to perform model training to the UE 604.
  • the UE 604 may obtain a first reference dataset associated with a first set of characteristics and a second reference dataset associated with a second set of characteristics.
  • the first reference dataset and the second reference dataset may be referred to as a first training dataset and a second training dataset, respectively.
  • the UE 604 may obtain the first and second training datasets prior to receiving the request at 606. In other aspects, the UE 604 may obtain the first and second training datasets responsive to receiving the request at 606.
  • the UE 604 may obtain the first and second training datasets from the network entity 602, e.g., as shown at 605.
  • the network entity 602 may provide the first and second training datasets to the UE 604 via the request received at 606, or alternatively, via another communication provided before or after the request at 606.
  • the first and second training datasets may be collected by the UE and stored at the UE 604.
  • the training datasets may be based on measurements and/or other wireless communication data collected at the UE.
  • the UE 604 may collect the first and second training datasets from a memory of the UE 604.
  • the UE may perform model training based on a mixture ratio of the obtained reference datasets. For example, responsive to receiving the request at 606, the UE 604 may perform the model training, where at least one machine learning model is trained based on the mixture ratio of the first training dataset and the second training dataset. That is, the UE 604 may generate a dataset mixture based on a mixture ratio of the first training dataset and the second training dataset. The UE 604 may train at least one machine learning model based on the dataset mixture. Over time, the UE 604 may generate multiple dataset mixtures based on one or more mixture ratios of different training datasets to generate a plurality of machine learning models, where each of the machine learning models is trained based on a particular mixture ratio of at least two training datasets. The UE 604 may store each of the trained machine learning models in a memory of the UE 604.
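  • a sketch of how a UE might, over time, train and store models for several mixture ratios (the training routine is a stand-in; nothing below is mandated by the disclosed aspects):

      def train_model(dataset_mixture):
          # Stand-in training routine; a real implementation would fit an
          # AI/ML beam prediction model on the dataset mixture.
          return {"trained_on": len(dataset_mixture)}

      model_store = {}

      def train_and_store(ratio, ds_a, ds_b, store, size=1000):
          # Generate a dataset mixture for the given ratio, train a model
          # on it, and keep the trained model keyed by that mixture ratio.
          n_a = size * ratio[0] // sum(ratio)
          mixture = ds_a[:n_a] + ds_b[:size - n_a]
          store[ratio] = train_model(mixture)

      # Over time, the UE may accumulate models for several mixture ratios.
      for r in [(1, 1), (1, 2), (2, 3)]:
          train_and_store(r, list(range(2000)), list(range(2000)), model_store)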
  • the mixture ratio used for model training may be defined.
  • an indication of the mixture ratio may be stored at the UE.
  • the network entity 602 may provide an indication of the mixture ratio to the UE 604.
  • the indication of the mixture ratio may be provided via the request provided at 606, or alternatively, via another communication provided before or after the request at 606.
  • the mixture ratio used for model training may be based on a first similarity coefficient that indicates a first level of similarity between the first training dataset and a target dataset (e.g., the dataset mixture) and a second similarity coefficient that indicates a second level of similarity between the second training dataset and the target dataset.
  • An indication of the target dataset (e.g., the characteristics thereof) may be provided by the network entity 602 to the UE 604, e.g., via the request provided at 606, or alternatively, may be defined.
  • the network entity 602 may indicate, at 606, for the UE 604 to use one of a set of defined ratios.
  • the network entity 602 may configure, or otherwise indicate, a set of mixture ratios to the UE 604 at 603. Then, the request, at 606, may indicate one of the previously-configured mixture ratios.
  • the mixture ratio used for model training may be based on a first percentage associated with the first training dataset and a second percentage associated with the second training dataset.
  • the UE 604 may provide the first training dataset to the at least one machine learning model based on the first percentage and may provide the second training dataset to the at least one machine learning model based on the second percentage.
  • for example, if the first percentage is 70% and the second percentage is 30%, then 70% of the data provided during training comes from the first training dataset and 30% of the data provided during training comes from the second training dataset.
  • the foregoing techniques generate a dataset mixture utilized for training a machine learning model that is based on the first and second percentages.
  • the network entity 602 may provide a request, to the UE 604, to perform model inference based on a mixture ratio of at least two reference datasets.
  • the network entity 602 may request that the model inference is performed to predict at least one of a transmit beam for transmitting a signal or a receive beam for receiving a signal.
  • the reference datasets may be referred to as inference datasets.
  • the UE 604 may determine a particular machine learning model from at least one machine learning model trained at the UE 604.
  • the UE 604 may determine the particular machine learning model based on the mixture ratio.
  • the network entity 602 may indicate to the UE 604 that the model inference is to be based on a machine learning model that was trained based on a particular mixture ratio of two training datasets.
  • the network entity 602 may request the UE 604 to perform model inference based on a particular dataset mixture that is based on a first inference dataset and a second inference dataset that are maintained at the UE 604.
  • the UE 604 may generate a dataset mixture based on two inference datasets maintained at the UE 604.
  • the UE 604 may utilize the mixture ratio for inference to generate the dataset mixture.
  • the UE 604 may then determine which of the plurality of machine learning models maintained thereby best matches the generated dataset mixture.
  • the determined machine learning model may be utilized for model inference.
  • the network entity 602 may provide an indication of the mixture ratio for inference to the UE 604.
  • the indication of the mixture ratio may be provided via the request provided at 612, or alternatively, via another communication provided before or after the request at 612.
  • the mixture ratio for inference may be the same as, or different from, the mixture ratio utilized to perform model training at 610.
  • the mixture ratio used for model inference may be based on a third similarity coefficient that indicates a third level of similarity between the first inference dataset and a target dataset and a fourth similarity coefficient that indicates a fourth level of similarity between the second inference dataset and the target dataset.
  • An indication of the target dataset (e.g., the characteristics thereof) may be provided by the network entity 602 to the UE 604, e.g., via the request provided at 606, or alternatively, may be defined.
  • the UE 604 may perform the model inference based on the particular machine learning model to predict a beam. For example, the UE 604 may provide one or more characteristics (e.g., the characteristics of the target dataset) to the particular machine learning model. The machine learning model may output a prediction as to which transmit beam and/or receive beam is to be utilized at the UE 604 for transmitting and/or receiving a signal, respectively.
  • the UE 604 may provide an indication of the beam predicted by the machine learning model to the network entity 602.
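  • a stand-in sketch of this inference step (the model below is a mock; a real AI/ML model would produce the beam prediction from the provided characteristics and recent measurements):

      def predict_beam(model, characteristics, num_beams=32):
          # Mock inference: map the characteristics of the target dataset
          # to a beam index and a predicted L1-RSRP.
          beam_id = hash(str(sorted(characteristics.items()))) % num_beams
          return beam_id, -80.0  # (predicted beam index, predicted L1-RSRP in dBm)

      beam_id, rsrp = predict_beam("model_1to2", {"fov_deg": 25})
      # The UE may then indicate the predicted beam to the network entity.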
  • characteristics associated with any of the training datasets, inference datasets, and/or the target datasets include, but are not limited to, operation scenarios or an environment in which the UE 604 is located (e.g., a dense urban environment, an indoor environment, a rural environment, etc.), channel profiles (e.g., the range of delay spread or Doppler of a channel via which the UE 604 communicates), one or more parameters of the network node/cell (e.g., the size/type/orientation of the network node antenna arrays, the number of beams in the codebook, the transmit power of the network node), the capabilities of the UE 604, etc.
  • characteristics pertaining to beam prediction AI/ML models include, but are not limited to, a distribution (e.g., a maximum, a minimum, a mean, a variance, a standard deviation, a probability density function (PDF), a cumulative distribution function (CDF)) of one or more measured/reported signal metrics (e.g., L1-RSRPs, L1-SINRs, etc.), a reliability or accuracy level of the measurements to be used as AI/ML model inputs or to be used as prediction targets, mobility characteristics of the UE 604 (e.g., moving direction/speed, rotation direction/speed, orientation), transmit beam shapes (e.g., ranges of beam pointing directions (or directions that a beam may point) or beam widths of measurement resources or prediction targets), and receive beam shapes (e.g., ranges of beam pointing directions (or directions that a beam may point) or beam widths to measure the signal metrics (e.g., L1-RSRPs/L1-SINRs)), etc.
  • FIG. 7 is a flowchart 700 illustrating methods of wireless communication at a UE in accordance with various aspects of the present disclosure.
  • the UE may be the UE 104, the UE 350, the UE 604, or the apparatus 1104 in the hardware implementation of FIG. 11.
  • the UE may obtain a first reference dataset associated with a first set of characteristics and a second reference dataset associated with a second set of characteristics.
  • the UE 604, at 608, may obtain a first reference dataset associated with a first set of characteristics and a second reference dataset associated with a second set of characteristics.
  • 702 may be performed by the model training/inference component 198.
  • the first reference dataset may include a first training dataset and the second reference dataset may include a second training dataset
  • the UE may obtain the first training dataset and the second training dataset by receiving, from a network entity, the first training dataset and the second training dataset.
  • the UE 604 may receive the first training dataset and the second training dataset from the network entity 602, for example, at 606.
  • the UE 604 may obtain the first training dataset and the second training dataset from the request received at 606.
  • the first reference dataset may include a first training dataset and the second reference dataset may include a second training dataset
  • the UE may obtain the first training dataset and the second training dataset by collecting, at the UE, the first training dataset and the second training dataset from a memory of the UE.
  • the UE 604 may collect the first training dataset and the second training dataset from a memory of the UE 604.
  • the first set of characteristics and the second set of characteristics may include at least one of an environment in which the UE is located, at least one profile of a channel via which the UE communicates, at least one parameter of a network entity communicatively coupled to the UE, a location of the UE with respect to the network entity, a distribution of one or more signal metrics utilized for at least one of the model training or the model inference, a level of accuracy of the one or more signal metrics, at least one mobility characteristic of the UE, a first shape of at least one transmit beam associated with the UE, or a second shape of at least one receive beam associated with the UE.
  • the first set of characteristics and the second set of characteristics associated with the first training dataset and the second training dataset respectively obtained at 608 may include at least one of an environment in which the UE 604 is located, at least one profile of a channel via which the UE 604 communicates, at least one parameter of a network entity communicatively coupled to the UE 604, a location of the UE 604 with respect to the network entity, a distribution of one or more signal metrics utilized for at least one of the model training or the model inference, a level of accuracy of the one or more signal metrics, at least one mobility characteristic of the UE 604, a first shape of at least one transmit beam associated with the UE 604, or a second shape of at least one receive beam associated with the UE 604.
  • the UE may perform at least one of model training or model inference based on a mixture ratio of the first reference dataset and the second reference dataset.
  • the UE 604 may perform at least one of a model training (at 610) based on a mixture of the first reference dataset and the second reference dataset or model inference (at 616) based on the mixture of the first reference dataset and the second reference dataset.
  • 704 may be performed by the model training/inference component 198.
  • the mixture ratio is defined.
  • the mixture ratio utilized for model training at 610 and/or for model inference at 616 may be defined.
  • the UE may receive an indication of the mixture ratio from a network entity.
  • the UE 604 may receive an indication of the mixture ratio for model training from the network entity 602, for example, at 606 and/or may receive the mixture ratio for model inference from the network entity 602, for example, at 612.
  • the first reference dataset may be a first training dataset and the second reference dataset may be a second training dataset
  • the UE may perform at least one of the model training or the model inference by receiving a request to perform the model training, and responsive to receiving the request, performing the model training, where at least one machine learning model is trained based on the mixture ratio of the first training dataset and the second training dataset.
  • the UE 604 may receive a request to perform the model training.
  • the UE 604 may perform the model training, where at least one machine learning model is trained at the UE 604 based on the mixture ratio of the first training dataset and the second training dataset.
  • the mixture ratio may be based on a first similarity coefficient that indicates a first level of similarity between the first reference dataset and a target dataset and a second similarity coefficient that indicates a second level of similarity between the second reference dataset and the target dataset.
  • the mixture ratio utilized for model training at 610 or model inference at 616 may be based on a first similarity coefficient that indicates a first level of similarity between the first reference dataset and a target dataset and a second similarity coefficient that indicates a second level of similarity between the second reference dataset and the target dataset.
  • the mixture ratio may be based on a first percentage associated with the first training dataset and a second percentage associated with the second training dataset
  • performing the model training may include providing the first training dataset to the at least one machine learning model based on the first percentage and providing the second training dataset to the at least one machine learning model based on the second percentage.
  • the UE 604 may provide the first training dataset to the at least one machine learning model based on the first percentage and provide the second training dataset to the at least one machine learning model based on the second percentage.
  • the first reference dataset may be a first inference dataset and the second reference dataset is a second inference dataset
  • the UE may perform at least one of the model training or the model inference by receiving a request to perform the model inference, responsive to receiving the request, determining a particular machine learning model from at least one machine learning model trained at the UE based on the mixture ratio, and performing the model inference based on the particular machine learning model, where the particular machine learning model is configured to predict a beam from a plurality of beams for at least one of a transmission or reception of a signal.
  • the UE 604 may receive a request to perform the model inference.
  • the UE 604 may determine a particular machine learning model from at least one machine learning model trained at the UE 604 based on the mixture ratio.
  • the UE 604 may perform the model inference based on the particular machine learning model, where the particular machine learning model is configured to predict a beam from a plurality of beams for at least one of a transmission or reception of a signal.
  • FIG. 8 is a flowchart 800 illustrating methods of wireless communication at a UE in accordance with various aspects of the present disclosure.
  • the UE may be the UE 104, the UE 350, the UE 604, or the apparatus 1104 in the hardware implementation of FIG. 11.
  • the UE may obtain a first reference dataset associated with a first set of characteristics and a second reference dataset associated with a second set of characteristics.
  • the UE 604, at 608, may obtain a first reference dataset associated with a first set of characteristics and a second reference dataset associated with a second set of characteristics.
  • 802 may be performed by the model training/inference component 198.
  • the first reference dataset may include a first training dataset and the second reference dataset may include a second training dataset
  • the UE may obtain the first training dataset and the second training dataset by receiving, from a network entity, the first training dataset and the second training dataset.
  • the UE 604 may receive the first training dataset and the second training dataset from the network entity 602, for example, at 606.
  • the UE 604 may obtain the first training dataset and the second training dataset from the request received at 606.
  • 804 may be performed by the model training/inference component 198.
  • the first reference dataset may include a first training dataset and the second reference dataset may include a second training dataset
  • the UE may obtain the first training dataset and the second training dataset by collecting, at the UE, the first training dataset and the second training dataset from a memory of the UE.
  • the UE 604 may collect the first training dataset and the second training dataset from a memory of the UE 604.
  • 806 may be performed by the model training/inference component 198.
  • the first set of characteristics and the second set of characteristics may include at least one of an environment in which the UE is located, at least one profile of a channel via which the UE communicates, at least one parameter of a network entity communicatively coupled to the UE, a location of the UE with respect to the network entity, a distribution of one or more signal metrics utilized for at least one of the model training or the model inference, a level of accuracy of the one or more signal metrics, at least one mobility characteristic of the UE, a first shape of at least one transmit beam associated with the UE, or a second shape of at least one receive beam associated with the UE.
  • the first set of characteristics and the second set of characteristics associated with the first training dataset and the second training dataset respectively obtained at 608 may include at least one of an environment in which the UE 604 is located, at least one profile of a channel via which the UE 604 communicates, at least one parameter of a network entity communicatively coupled to the UE 604, a location of the UE 604 with respect to the network entity, a distribution of one or more signal metrics utilized for at least one of the model training or the model inference, a level of accuracy of the one or more signal metrics, at least one mobility characteristic of the UE 604, a first shape of at least one transmit beam associated with the UE 604, or a second shape of at least one receive beam associated with the UE 604.
  • the UE may receive an indication of a mixture ratio from a network entity.
  • the UE 604 may receive an indication of the mixture ratio for model training from the network entity 602, for example, at 606 and/or may receive the mixture ratio for model inference from the network entity 602, for example, at 612.
  • 808 may be performed by the model training/inference component 198.
  • the mixture ratio is defined.
  • the mixture ratio utilized for model training at 610 and/or for model inference at 616 may be defined.
  • the UE may not receive the mixture ratio from the network entity 602.
  • the UE may perform at least one of model training or model inference based on the mixture ratio of the first reference dataset and the second reference dataset.
  • the UE 604 may perform at least one of a model training (at 610) based on a mixture of the first reference dataset and the second reference dataset or model inference (at 616) based on the mixture of the first reference dataset and the second reference dataset.
  • 810 may be performed by the model training/inference component 198.
  • the first reference dataset may be a first training dataset and the second reference dataset may be a second training dataset
  • the UE may perform at least one of the model training or the model inference by receiving a request to perform the model training.
  • the UE 604 may receive a request to perform the model training.
  • 812 may be performed by the model training/inference component 198.
  • the UE may perform the model training, where at least one machine learning model is trained based on the mixture ratio of the first training dataset and the second training dataset. For example, referring to FIG. 6, at 610, responsive to receiving the request at 606, the UE 604 may perform the model training, where at least one machine learning model is trained at the UE 604 based on the mixture ratio of the first training dataset and the second training dataset. In an aspect, 814 may be performed by the model training/inference component 198.
  • the mixture ratio may be based on a first percentage associated with the first training dataset and a second percentage associated with the second training dataset, and the UE may perform the model training by providing the first training dataset to the at least one machine learning model based on the first percentage.
  • the UE 604 may provide the first training dataset to the at least one machine learning model based on the first percentage.
  • 816 may be performed by the model training/inference component 198.
  • the UE may provide the second training dataset to the at least one machine learning model based on the second percentage.
  • the UE 604 may provide the second training dataset to the at least one machine learning model based on the second percentage.
  • 818 may be performed by the model training/inference component 198.
  • the mixture ratio may be based on a first similarity coefficient that indicates a first level of similarity between the first reference dataset and a target dataset and a second similarity coefficient that indicates a second level of similarity between the second reference dataset and the target dataset.
  • the mixture ratio utilized for model training at 610 or model inference at 616 may be based on a first similarity coefficient that indicates a first level of similarity between the first reference dataset and a target dataset and a second similarity coefficient that indicates a second level of similarity between the second reference dataset and the target dataset.
  • the first reference dataset may be a first inference dataset
  • the second reference dataset is a second inference dataset
  • the UE may perform at least one of the model training or the model inference by receiving a request to perform the model inference.
  • the UE 604 may receive a request to perform the model inference.
  • 820 may be performed by the model training/inference component 198.
  • the UE may, responsive to receiving the request, determine a particular machine learning model from at least one machine learning model trained at the UE based on the mixture ratio. For example, referring to FIG. 6, at 614, the UE 604, responsive to receiving the request at 612, may determine a particular machine learning model from at least one machine learning model trained at the UE 604 based on the mixture ratio. In an aspect, 822 may be performed by the model training/inference component 198.
  • the UE may perform the model inference based on the particular machine learning model, where the particular machine learning model is configured to predict a beam from a plurality of beams for at least one of a transmission or reception of a signal.
  • the UE 604 may perform the model inference based on the particular machine learning model, where the particular machine learning model is configured to predict a beam from a plurality of beams for at least one of a transmission or reception of a signal.
  • 824 may be performed by the model training/inference component 198.
  • FIG. 9 is a flowchart 900 illustrating methods of wireless communication at a network entity in accordance with various aspects of the present disclosure.
  • the network entity may be the base station 102, the core network 120, the base station 310, the network entity 602, the network entity 1202 in the hardware implementation of FIG. 12, or the network entity 1360 in the hardware implementation of FIG. 13.
  • the network entity may provide a first request to perform model training based on a first mixture ratio of a first training dataset and a second training dataset.
  • the network entity 602 may provide a first request, to the UE 604, to perform model training based on a first mixture ratio of a first training dataset and a second training dataset.
  • 902 may be performed by the model training/inference component 199.
  • the network entity may provide, to the UE, the first training dataset and the second training dataset.
  • the network entity 602 may provide, to the UE 604, the first training dataset and the second training dataset, for example, at 606.
  • the network entity may provide, to the UE, a first set of characteristics associated with the first training dataset and a second set of characteristics associated with the second training dataset.
  • the network entity 602 may provide, to the UE 604, a first set of characteristics associated with the first training dataset and a second set of characteristics associated with the second training dataset, for example, at 606.
  • the first set of characteristics and the second set of characteristics may include at least one of an environment in which the UE is located, at least one profile of a channel via which the UE communicates, at least one parameter of the network entity, a location of the UE with respect to the network entity, a distribution of one or more signal metrics utilized for at least one of the model training or the model inference, a level of accuracy of the one or more signal metrics, at least one mobility characteristic of the UE, a first shape of at least one transmit beam associated with the UE, or a second shape of at least one receive beam associated with the UE.
  • an environment in which the UE is located at least one profile of a channel via which the UE communicates
  • at least one parameter of the network entity a location of the UE with respect to the network entity
  • a distribution of one or more signal metrics utilized for at least one of the model training or the model inference a level of accuracy of the one or more signal metrics
  • at least one mobility characteristic of the UE a first shape of at least one transmit beam associated
  • the first set of characteristics and the second set of characteristics associated with the first training dataset and the second training dataset respectively obtained at 608 may include at least one of an environment in which the UE 604 is located, at least one profile of a channel via which the UE 604 communicates, at least one parameter of the network entity 602, a location of the UE 604 with respect to the network entity, a distribution of one or more signal metrics utilized for at least one of the model training or the model inference, a level of accuracy of the one or more signal metrics, at least one mobility characteristic of the UE 604, a first shape of at least one transmit beam associated with the UE 604, or a second shape of at least one receive beam associated with the UE 604.
  • the component 198 may be configured to obtain a first reference dataset associated with a first set of characteristics and a second reference dataset associated with a second set of characteristics, and to perform at least one of model training or model inference based on a mixture ratio of the first reference dataset and the second reference dataset.
  • the component 198 may be configured to perform any of the aspects described in connection with the flowcharts in FIGs. 7 and 8, the machine learning algorithm functions in FIGs. 4 or 6, and/or the aspects performed by the UE 604 in the communication flow in FIG. 6.
  • the component 198 may be within the cellular baseband processor 1124, the application processor 1106, or both the cellular baseband processor 1124 and the application processor 1106.
  • the component 198 may be one or more hardware components specifically configured to carry out the stated processes/algorithm, implemented by one or more processors configured to perform the stated processes/algorithm, stored within a computer-readable medium for implementation by one or more processors, or some combination thereof.
  • the apparatus 1104 may include a variety of components configured for various functions.
  • the apparatus 1104, and in particular the cellular baseband processor 1124 and/or the application processor 1106, may include means for obtaining a first reference dataset associated with a first set of characteristics and a second reference dataset associated with a second set of characteristics, and means for performing at least one of model training or model inference based on a mixture ratio of the first reference dataset and the second reference dataset.
  • the apparatus may further include means for performing any of the aspects described in connection with the flowcharts in FIGs. 7 and 8, the machine learning algorithm functions in FIGs. 4 or 6, and/or the aspects performed by the UE 604 in the communication flow in FIG. 6.
  • the means may be the component 198 of the apparatus 1104 configured to perform the functions recited by the means.
  • the apparatus 1104 may include the TX processor 368, the RX processor 356, and the controller/processor 359.
  • the means may be the TX processor 368, the RX processor 356, and/or the controller/processor 359 configured to perform the functions recited by the means.
  • FIG. 12 is a diagram 1200 illustrating an example of a hardware implementation for a network entity 1202.
  • the network entity 1202 may be a BS, a component of a BS, or may implement BS functionality.
  • the network entity 1202 may include at least one of a CU 1210, a DU 1230, or an RU 1240.
  • the network entity 1202 may include the CU 1210; both the CU 1210 and the DU 1230; each of the CU 1210, the DU 1230, and the RU 1240; the DU 1230; both the DU 1230 and the RU 1240; or the RU 1240.
  • the CU 1210 may include a CU processor 1212.
  • the CU processor 1212 may include on-chip memory 1212'. In some aspects, the CU 1210 may further include additional memory modules 1214 and a communications interface 1218. The CU 1210 communicates with the DU 1230 through a midhaul link, such as an F1 interface.
  • the DU 1230 may include a DU processor 1232.
  • the DU processor 1232 may include on-chip memory 1232'. In some aspects, the DU 1230 may further include additional memory modules 1234 and a communications interface 1238.
  • the DU 1230 communicates with the RU 1240 through a fronthaul link.
  • the RU 1240 may include an RU processor 1242.
  • the RU processor 1242 may include on-chip memory 1242'.
  • the RU 1240 may further include additional memory modules 1244, one or more transceivers 1246, antennas 1280, and a communications interface 1248.
  • the RU 1240 communicates with the UE 104.
  • the on-chip memory 1212', 1232', 1242' and the additional memory modules 1214, 1234, 1244 may each be considered a computer-readable medium/memory.
  • Each computer-readable medium/memory may be non-transitory.
  • Each of the processors 1212, 1232, 1242 is responsible for general processing, including the execution of software stored on the computer-readable medium/memory.
  • the software, when executed by the corresponding processor(s), causes the processor(s) to perform the various functions described supra.
  • the computer-readable medium/memory may also be used for storing data that is manipulated by the processor(s) when executing software.
  • the component 199 may be configured to provide a first request to perform model training based on a first mixture ratio of a first training dataset and a second training dataset, and to provide a second request to perform model inference based on a second mixture ratio of a first inference dataset and a second inference dataset.
  • the component 199 may be configured to perform any of the aspects described in connection with the flowcharts in FIGs. 9 and 10 and/or the aspects performed by the network entity 602 in the communication flow in FIG. 6.
  • the component 199 may be within one or more processors of one or more of the CU 1210, DU 1230, and the RU 1240.
  • the component 199 may be one or more hardware components specifically configured to carry out the stated processes/algorithm, implemented by one or more processors configured to perform the stated processes/algorithm, stored within a computer-readable medium for implementation by one or more processors, or some combination thereof.
  • the network entity 1202 may include a variety of components configured for various functions. In one configuration, the network entity 1202 may include means for providing a first request to perform model training based on a first mixture ratio of a first training dataset and a second training dataset, and means for providing a second request to perform model inference based on a second mixture ratio of a first inference dataset and a second inference dataset.
  • the means may be the component 199 of the network entity 1202 configured to perform the functions recited by the means.
  • the network entity 1202 may include the TX processor 316, the RX processor 370, and the controller/processor 375.
  • the means may be the TX processor 316, the RX processor 370, and/or the controller/processor 375 configured to perform the functions recited by the means.
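  • To make the two requests provided by the component 199 concrete, a hypothetical sketch of their payloads is shown below; the field names and ratio values are assumptions made for illustration, as no signaling format for this feature is specified here:

    # Hypothetical request payloads from the network entity to the UE.
    # Neither the field names nor the values come from any specification.
    training_request = {
        "purpose": "model_training",
        "mixture_ratio": {"training_dataset_1": 0.7, "training_dataset_2": 0.3},
    }
    inference_request = {
        "purpose": "model_inference",
        "mixture_ratio": {"inference_dataset_1": 0.4, "inference_dataset_2": 0.6},
    }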
  • FIG. 13 is a diagram 1300 illustrating an example of a hardware implementation for a network entity 1360.
  • the network entity 1360 may be within the core network 120.
  • the network entity 1360 may include a network processor 1312.
  • the network processor 1312 may include on-chip memory 1312'.
  • the network entity 1360 may further include additional memory modules 1314.
  • the network entity 1360 communicates via the network interface 1380 directly (e.g., backhaul link) or indirectly (e.g., through a RIC) with the CU 1302.
  • the on-chip memory 1312' and the additional memory modules 1314 may each be considered a computer-readable medium/memory. Each computer-readable medium/memory may be non-transitory.
  • the processor 1312 is responsible for general processing, including the execution of software stored on the computer-readable medium/memory.
  • the software, when executed by the corresponding processor(s), causes the processor(s) to perform the various functions described supra.
  • the computer-readable medium/memory may also be used for storing data that is manipulated by the processor(s) when executing software.
  • the component 199 may be configured to provide a first request to perform model training based on a first mixture ratio of a first training dataset and a second training dataset, and to provide a second request to perform model inference based on a second mixture ratio of a first inference dataset and a second inference dataset.
  • the component 199 may be configured to perform any of the aspects described in connection with the flowcharts in FIGs. 9 and 10 and/or the aspects performed by the network entity 602 in the communication flow in FIG. 6.
  • the component 199 may be within the processor 1312.
  • the component 199 may be one or more hardware components specifically configured to carry out the stated processes/algorithm, implemented by one or more processors configured to perform the stated processes/algorithm, stored within a computer-readable medium for implementation by one or more processors, or some combination thereof.
  • the network entity 1360 may include a variety of components configured for various functions. In one configuration, the network entity 1360 may include means for providing a first request to perform model training based on a first mixture ratio of a first training dataset and a second training dataset, and means for providing a second request to perform model inference based on a second mixture ratio of a first inference dataset and a second inference dataset.
  • the means may be the component 199 of the network entity 1360 configured to perform the functions recited by the means.
  • a UE may receive a request, from a network entity, to perform model training based on a mixture ratio of a first training dataset and a second training dataset.
  • the UE may generate a dataset mixture based on the mixture ratio of the first training dataset and the second training dataset, where each training dataset is associated with particular characteristics, for example, the radio environment in which the UE and the network entity are located.
  • the UE may train at least one machine learning model based on the dataset mixture.
  • the UE may generate multiple dataset mixtures based on one or more mixture ratios of different training datasets (each being associated with different characteristics) to generate a plurality of machine learning models, where each of the machine learning models is trained based on a particular mixture ratio of at least two training datasets.
  • the UE may store each of the trained machine learning models in a memory of the UE.
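  • A minimal Python sketch of this mixing-and-training flow is given below; the synthetic datasets, the logistic-regression beam predictor, and the example ratios are all assumptions made for illustration, not the disclosed implementation:

    # Sketch only: synthetic data, a simple classifier, and example ratios
    # stand in for the UE's actual datasets, model, and configured ratios.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_dataset(n, n_beams=8, shift=0.0):
        # Hypothetical dataset: per-beam RSRP-like features; the label is
        # the index of the strongest beam.
        X = rng.normal(loc=shift, scale=1.0, size=(n, n_beams))
        return X, X.argmax(axis=1)

    def mix_datasets(ds_a, ds_b, ratio_a):
        # Draw a ratio_a fraction of the mixture from ds_a and the
        # remaining (1 - ratio_a) fraction from ds_b, then shuffle.
        (Xa, ya), (Xb, yb) = ds_a, ds_b
        n = min(len(ya), len(yb))
        n_a = int(round(ratio_a * n))
        ia = rng.choice(len(ya), n_a, replace=False)
        ib = rng.choice(len(yb), n - n_a, replace=False)
        X = np.vstack([Xa[ia], Xb[ib]])
        y = np.concatenate([ya[ia], yb[ib]])
        perm = rng.permutation(len(y))
        return X[perm], y[perm]

    ds_a = make_dataset(1000, shift=0.0)  # e.g., a first radio environment
    ds_b = make_dataset(1000, shift=2.0)  # e.g., a second radio environment

    # One trained model per mixture ratio, mirroring the plurality of
    # machine learning models stored in the UE's memory.
    models = {}
    for ratio in (0.3, 0.5, 0.7):  # hypothetical mixture ratios
        X, y = mix_datasets(ds_a, ds_b, ratio)
        models[ratio] = LogisticRegression(max_iter=500).fit(X, y)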
  • the UE may also receive, from the network entity, a request to perform model inference based on a mixture ratio of a first inference dataset and a second inference dataset. For instance, the UE may determine a dataset mixture based on the mixture ratio of the first inference dataset and the second inference dataset. The UE may then determine which of the plurality of machine learning models maintained thereby best matches the generated dataset mixture.
  • the UE utilizes the determined machine learning model for model inference.
  • the determined machine learning model may be configured to output a prediction as to which transmit beam and/or receive beam is to be utilized at the UE for transmitting and/or receiving a signal, respectively.
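  • Continuing the training sketch above, one simple (assumed) matching rule is to select the stored model whose training mixture ratio is numerically closest to the requested inference mixture ratio, then use it to predict the beam index:

    # Continues the sketch above; nearest-ratio matching is an assumption,
    # chosen as one straightforward way to "best match" a requested mixture.
    def select_model(models, requested_ratio):
        closest = min(models, key=lambda r: abs(r - requested_ratio))
        return models[closest]

    model = select_model(models, requested_ratio=0.6)  # network-requested ratio
    measurement = rng.normal(size=(1, 8))              # one new per-beam measurement
    predicted_beam = int(model.predict(measurement)[0])
    print(f"predicted best beam index: {predicted_beam}")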
  • the UE may select a machine learning model from the plurality of machine learning models that is tailored to the environment in which the UE is located.
  • Such a machine learning model may more accurately predict an optimal transmit beam for transmitting signals and/or an optimal receive beam for receiving signals.
  • the aspects of the subject matter described in this disclosure may improve the signal-to-noise ratio of received signals, eliminate undesirable interference sources, and focus transmitted signals to desired locations.
  • Combinations such as “at least one of A, B, or C,” “one or more of A, B, or C,” “at least one of A, B, and C,” “one or more of A, B, and C,” and “A, B, C, or any combination thereof” include any combination of A, B, and/or C, and may include multiples of A, multiples of B, or multiples of C.
  • combinations such as “at least one of A, B, or C,” “one or more of A, B, or C,” “at least one of A, B, and C,” “one or more of A, B, and C,” and “A, B, C, or any combination thereof” may be A only, B only, C only, A and B, A and C, B and C, or A and B and C, where any such combinations may contain one or more member or members of A, B, or C.
  • A set should be interpreted as a set of one or more elements. Accordingly, a set of X includes one or more X.
  • where a first apparatus receives data from or transmits data to a second apparatus, the data may be received/transmitted directly between the first and second apparatuses, or indirectly between the first and second apparatuses through a set of intervening apparatuses.
  • a device configured to “output” data, such as a transmission, signal, or message, may transmit the data, for example with a transceiver, or may send the data to a device that transmits the data.
  • a device configured to “obtain” data, such as a transmission, signal, or message, may receive the data, for example with a transceiver, or may obtain the data from a device that receives the data.
  • Information stored in a memory includes instructions and/or data.
  • the phrase “based on” shall not be construed as a reference to a closed set of information, one or more conditions, one or more factors, or the like.
  • the phrase “based on A” (where “A” may be information, a condition, a factor, or the like) shall be construed as “based at least on A” unless specifically recited differently.
  • Aspect 1 is a method of wireless communication at a UE, including: obtaining a first reference dataset associated with a first set of characteristics and a second reference dataset associated with a second set of characteristics; and performing at least one of model training or model inference based on a mixture ratio of the first reference dataset and the second reference dataset.
  • Aspect 2 is the method of aspect 1, where the mixture ratio is based on a first similarity coefficient that indicates a first level of similarity between the first reference dataset and a target dataset and a second similarity coefficient that indicates a second level of similarity between the second reference dataset and the target dataset.
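  • As a hedged illustration of Aspect 2 (the disclosure does not fix a similarity measure), the sketch below uses histogram overlap between feature distributions as a stand-in similarity coefficient and normalizes the two coefficients into a mixture ratio; the measure and the synthetic data are assumptions:

    # Histogram overlap as a stand-in similarity measure (assumption).
    import numpy as np

    def similarity_coefficient(ref, target, bins=20):
        # Overlap of normalized histograms: 1.0 for identical
        # distributions, approaching 0.0 for disjoint ones.
        lo = min(ref.min(), target.min())
        hi = max(ref.max(), target.max())
        h_ref, _ = np.histogram(ref, bins=bins, range=(lo, hi), density=True)
        h_tgt, _ = np.histogram(target, bins=bins, range=(lo, hi), density=True)
        return float(np.minimum(h_ref, h_tgt).sum() * (hi - lo) / bins)

    rng = np.random.default_rng(1)
    ref_a = rng.normal(0.0, 1.0, 5000)   # first reference dataset (one feature)
    ref_b = rng.normal(2.0, 1.0, 5000)   # second reference dataset
    target = rng.normal(0.5, 1.0, 5000)  # target dataset

    s_a = similarity_coefficient(ref_a, target)  # first similarity coefficient
    s_b = similarity_coefficient(ref_b, target)  # second similarity coefficient
    ratio_a = s_a / (s_a + s_b)                  # mixture fraction for dataset A
    print(f"s_a={s_a:.2f}, s_b={s_b:.2f}, mixture ratio for A: {ratio_a:.2f}")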
  • Aspect 3 is the method of any of aspects 1 and 2, where the first reference dataset is a first training dataset and the second reference dataset is a second training dataset, and where performing at least one of the model training or the model inference includes: receiving a request to perform the model training; and responsive to receiving the request, performing the model training, where at least one machine learning model is trained based on the mixture ratio of the first training dataset and the second training dataset.
  • Aspect 4 is the method of aspect 3, where the mixture ratio is based on a first percentage associated with the first training dataset and a second percentage associated with the second training dataset, and where performing the model training includes providing the first training dataset to the at least one machine learning model based on the first percentage; and providing the second training dataset to the at least one machine learning model based on the second percentage.
  • Aspect 5 is the method of aspect 1, where the first reference dataset is a first inference dataset and the second reference dataset is a second inference dataset, and where performing at least one of the model training or the model inference includes: receiving a request to perform the model inference; responsive to receiving the request, determining a particular machine learning model from at least one machine learning model trained at the UE based on the mixture ratio; and performing the model inference based on the particular machine learning model, where the particular machine learning model is configured to predict a beam from a plurality of beams for at least one of a transmission or reception of a signal.
  • Aspect 6 is the method of any of aspects 1 to 5, further including: receiving an indication of the mixture ratio from a network entity.
  • Aspect 7 is the method of any of aspects 1 to 5, where the mixture ratio is predefined.
  • Aspect 8 is the method of any of aspects 1 to 4, 6, and 7, where the first reference dataset includes a first training dataset and the second reference dataset includes a second training dataset, and where obtaining the first training dataset and the second training dataset includes: receiving, from a network entity, the first training dataset and the second training dataset.
  • Aspect 9 is the method of any of aspects 1 to 4, 6, and 7, where the first reference dataset includes a first training dataset and the second reference dataset includes a second training dataset, and where obtaining the first training dataset and the second training dataset includes: collecting, at the UE, the first training dataset and the second training dataset from a memory of the UE.
  • Aspect 10 is the method of any of aspects 1 to 9, where the first set of characteristics and the second set of characteristics include at least one of: an environment in which the UE is located; at least one profile of a channel via which the UE communicates; at least one parameter of a network entity communicatively coupled to the UE; a location of the UE with respect to the network entity; a distribution of one or more signal metrics utilized for at least one of the model training or the model inference; a level of accuracy of the one or more signal metrics; at least one mobility characteristic of the UE; a first shape of at least one transmit beam associated with the UE; or a second shape of at least one receive beam associated with the UE.
  • Aspect 11 is a method of wireless communication at a network entity, including: providing a first request to perform model training based on a first mixture ratio of a first training dataset and a second training dataset; and providing a second request to perform model inference based on a second mixture ratio of a first inference dataset and a second inference dataset.
  • Aspect 12 is the method of aspect 11, further including: providing, to a UE, at least one of the first mixture ratio or the second mixture ratio.
  • Aspect 13 is the method of any of aspects 11 and 12, further including: providing, to a UE, the first training dataset and the second training dataset.
  • Aspect 14 is the method of any of aspects 11 to 13, further including: receiving, from a UE, an indication of at least one of a transmit beam or a receive beam determined based on the second request.
  • Aspect 15 is the method of any of aspects 11 to 14, further including: providing, to a UE, a first set of characteristics associated with the first training dataset and a second set of characteristics associated with the second training dataset.
  • Aspect 16 is the method of aspect 15, where the first set of characteristics and the second set of characteristics include at least one of: an environment in which the UE is located; at least one profile of a channel via which the UE communicates; at least one parameter of the network entity; a location of the UE with respect to the network entity; a distribution of one or more signal metrics utilized for at least one of the model training or the model inference; a level of accuracy of the one or more signal metrics; at least one mobility characteristic of the UE; a first shape of at least one transmit beam associated with the UE; or a second shape of at least one receive beam associated with the UE.
  • Aspect 17 is the method of any of aspects 11 to 16, where the first mixture ratio is based on a first similarity coefficient that indicates a first level of similarity between the first training dataset and a first target dataset and a second similarity coefficient that indicates a second level of similarity between the second training dataset and the first target dataset, and where the second mixture ratio is based on a third similarity coefficient that indicates a third level of similarity between the first inference dataset and a second target dataset and a fourth similarity coefficient that indicates a fourth level of similarity between the second inference dataset and the second target dataset.
  • Aspect 18 is an apparatus for wireless communication at a UE.
  • the apparatus includes memory; and at least one processor coupled to the memory, the memory storing instructions executable by the at least one processor to cause the apparatus to implement any of aspects 1 to 10.
  • Aspect 19 is the apparatus of aspect 18, further including at least one of a transceiver or an antenna coupled to the at least one processor.
  • Aspect 20 is an apparatus for wireless communication at a network entity.
  • the apparatus includes memory; and at least one processor coupled to the memory, the memory storing instructions executable by the at least one processor to cause the apparatus to implement any of aspects 11 to 17.
  • Aspect 21 is the apparatus of aspect 20, further including at least one of a transceiver or an antenna coupled to the at least one processor.
  • Aspect 22 is an apparatus for wireless communication including means for implementing any of aspects 1 to 10.
  • Aspect 23 is an apparatus for wireless communication including means for implementing any of aspects 11 to 17.
  • Aspect 24 is a computer-readable medium (e.g., a non-transitory computer-readable medium) storing computer executable code, where the code when executed by a processor causes the processor to implement any of aspects 1 to 10.
  • Aspect 25 is a computer-readable medium (e.g., a non-transitory computer-readable medium) storing computer executable code, where the code when executed by a processor causes the processor to implement any of aspects 11 to 17.

Abstract

According to one aspect, a user equipment (UE) may obtain a first reference dataset associated with a first set of characteristics and a second reference dataset associated with a second set of characteristics. The UE may perform model training and/or model inference based on a mixture ratio of the first reference dataset and the second reference dataset. According to another aspect, a network entity may provide a first request to perform model training based on a first mixture ratio of a first training dataset and a second training dataset. The network entity may also provide a second request to perform model inference based on a second mixture ratio of a first inference dataset and a second inference dataset.