
WO2024179075A1 - Model pairing for AI/ML-based CSI compression - Google Patents


Info

Publication number
WO2024179075A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
pmi
base station
csi
transceiver
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2023/134153
Other languages
English (en)
Inventor
Jianfeng Wang
Bingchao LIU
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to PCT/CN2023/134153 priority Critical patent/WO2024179075A1/fr
Publication of WO2024179075A1 publication Critical patent/WO2024179075A1/fr
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/02Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/06Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
    • H04B7/0613Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission
    • H04B7/0615Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal
    • H04B7/0619Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal using feedback from receiving side
    • H04B7/0636Feedback format
    • H04B7/0639Using selective indices, e.g. of a codebook, e.g. pre-distortion matrix index [PMI] or for beam selection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/02Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/0413MIMO systems
    • H04B7/0456Selection of precoding matrices or codebooks, e.g. using matrices antenna weighting

Definitions

  • the present disclosure relates to wireless communications, and more specifically to user equipments (UEs) , base stations, processors, and methods for model pairing for artificial intelligence or machine learning (AI/ML) -based channel state information (CSI) compression.
  • a wireless communications system may include one or multiple network communication devices, such as base stations, which may be otherwise known as an eNodeB (eNB) , a next-generation NodeB (gNB) , or other suitable terminology.
  • Each network communication device, such as a base station, may support wireless communications for one or multiple user communication devices, which may be otherwise known as user equipment (UE) , or other suitable terminology.
  • the wireless communications system may support wireless communications with one or multiple user communication devices by utilizing resources of the wireless communications system (e.g., time resources (e.g., symbols, slots, subframes, frames, or the like) or frequency resources (e.g., subcarriers, carriers) ) .
  • the wireless communications system may support wireless communications across various radio access technologies including third generation (3G) radio access technology, fourth generation (4G) radio access technology, fifth generation (5G) radio access technology, among other suitable radio access technologies beyond 5G (e.g., sixth generation (6G) ) .
  • CSI compression using a two-sided model (such as artificial intelligence (AI) /machine learning (ML) models) has been introduced.
  • CSI compression using a two-sided model may provide performance gains in most use cases.
  • the present disclosure relates to methods, apparatuses, and systems that support model pairing for AI/ML-based CSI compression.
  • a user equipment may comprise: a processor; and a transceiver coupled to the processor, wherein the processor is configured to: receive, via the transceiver and from a base station, a configuration on a first codebook; and transmit, via the transceiver and to the base station, a first precoding matrix indicator (PMI) which is obtained based on the first codebook and a first channel state information (CSI) estimated by the UE, and at least one second PMI which is obtained by at least one first artificial intelligence or machine learning (AI/ML) model.
  • the at least one first AI/ML model may be associated with CSI feedback compression.
  • transmitting the first PMI and the at least one second PMI may comprise: transmitting, via the transceiver and to the base station, a request for determining whether a first AI/ML model among the at least one first AI/ML model can be paired with a second AI/ML model of the base station; and transmitting, via the transceiver and to the base station, the first PMI and a second PMI associated with the first AI/ML model among the at least one first AI/ML model.
  • the processor may be further configured to: receive, via the transceiver and from the base station, a pairing result of the second AI/ML model and the first AI/ML model.
  • transmitting the first PMI and the at least one second PMI may comprise: receiving, via the transceiver and from the base station, a request associated with determining whether a second AI/ML model of the base station can be paired with an activated first AI/ML model among the at least one first AI/ML model, and transmitting, via the transceiver and to the base station, the first PMI and a second PMI associated with the activated first AI/ML model in response to the request.
  • the processor may be further configured to: receive, via the transceiver and from the base station, an indication to deactivate the activated first AI/ML model in the case that the second AI/ML model is determined to be not paired with the activated first AI/ML model by the base station.
  • the at least one first AI/ML model may include a plurality of first AI/ML models and the at least one second PMI may include a plurality of second PMIs associated with the plurality of first AI/ML models.
  • transmitting the first PMI and the at least one second PMI may comprise: transmitting, via the transceiver and to the base station, a request for selecting a first AI/ML model from among the plurality of first AI/ML models; and transmitting, via the transceiver and to the base station, the first PMI and the plurality of second PMIs.
  • the processor may be further configured to: receive, via the transceiver and from the base station, an indication on a selecting result, in the case that a first AI/ML model is selected from among the plurality of first AI/ML models by the base station, the indication indicates a model identifier (ID) of the selected first AI/ML model, and in the case that no first AI/ML model is selected from among the plurality of first AI/ML models by the base station, the indication indicates pairing failure.
  • the processor may be further configured to: transmit, via the transceiver and to the base station, an indication to indicate a number of the plurality of first AI/ML models, and receive, via the transceiver and from the base station, CSI report configuration and CSI reference signal (RS) configuration determined based on the indication of the plurality of first AI/ML models.
  • the configuration on the first codebook may be based on at least one AI/ML model or functionality at the UE, wherein the functionality may be associated with one or more AI/ML models.
  • the configuration on the first codebook may include scaling factors for parameters associated with a second codebook indicated in a codebook configuration transmitted from the base station.
  • the configuration on the first codebook may be transmitted as functionality/model-related information during a functionality or model identification procedure between the UE and the base station, or the configuration on the first codebook may be transmitted via radio resource control (RRC) signaling.
  • the parameters may include at least one of: a number of beams; a phase quantization size; oversampling numbers; a number of beam amplitude scaling factors for both wideband and subband; or a number of beam combining coefficients or phases among beams, polarizations and layers.
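  As an illustrative sketch of deriving first-codebook parameters by scaling the configured second-codebook parameters (the parameter names, the dictionary representation, and the integer rounding are assumptions, not from the disclosure):

```python
def derive_first_codebook(second_codebook, scaling_factors):
    """Scale the parameters of the configured second codebook to obtain
    the first-codebook parameters. Parameters without a scaling factor
    are kept unchanged."""
    derived = {}
    for name, value in second_codebook.items():
        factor = scaling_factors.get(name, 1)  # default: no scaling
        derived[name] = int(value * factor)    # assume integer-valued parameters
    return derived

# Hypothetical example: halve the number of beams, keep other parameters.
second = {"num_beams": 4, "phase_quant_size": 16, "oversampling": 4}
first = derive_first_codebook(second, {"num_beams": 0.5})
```

  In this sketch the scaling factors would be signalled as part of the first-codebook configuration, as described above.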
  • transmitting the first PMI and the at least one second PMI may comprise: transmitting the first PMI and the at least one second PMI together via one of uplink control information (UCI) , media access control (MAC) control element (CE) or radio resource control (RRC) .
  • CSI or eigen values (EVs) of channel matrices constructed based on the first codebook may be inputs of the at least one first AI/ML model.
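  As a sketch of how such eigen-decomposition-based model inputs might be derived per frequency-domain (FD) unit (the array shapes, function name, and the choice of the dominant component are illustrative assumptions):

```python
import numpy as np

def model_input_per_fd_unit(H):
    """Derive eigen-decomposition-based inputs to the UE-part model.

    H: estimated channel per FD unit, shape (N_fd, N_rx, N_tx).
    Returns (eigvals, eigvecs): dominant eigenvalue and eigenvector of
    the per-unit covariance H^H H, shapes (N_fd,) and (N_fd, N_tx).
    """
    H = np.asarray(H)
    cov = np.einsum("nij,nik->njk", H.conj(), H)  # H^H H per FD unit
    w, v = np.linalg.eigh(cov)                    # eigenvalues in ascending order
    return w[:, -1], v[:, :, -1]                  # dominant component per unit
```

  The resulting eigenvalues/eigenvectors per FD unit would then be fed to the first AI/ML model (encoder) in place of the raw CSI.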
  • a base station may comprise: a processor; and a transceiver coupled to the processor, wherein the processor is configured to: transmit, via the transceiver and to a user equipment (UE) , a configuration on a first codebook; receive, via the transceiver and from the UE, a first precoding matrix indicator (PMI) which is obtained based on the first codebook and a first channel state information (CSI) estimated by the UE, and at least one second PMI which is obtained by at least one first artificial intelligence or machine learning (AI/ML) model; and determine whether a second AI/ML model of the base station can be paired with the at least one first AI/ML model, based on a third CSI obtained based on the first codebook and the received first PMI and at least one fourth CSI obtained based on the second AI/ML model and the received at least one second PMI.
  • the second AI/ML model may be associated with CSI feedback decompression.
  • receiving the first PMI and the at least one second PMI may comprise: receiving, via the transceiver and from the UE, a request for determining whether a first AI/ML model among the at least one first AI/ML model can be paired with the second AI/ML model; and receiving, via the transceiver and from the UE, the first PMI and a second PMI associated with the first AI/ML model among the at least one first AI/ML model.
  • the processor may be further configured to: compare the third CSI and a fourth CSI obtained based on the second AI/ML model and the received second PMI; and transmit, via the transceiver and to the UE, a pairing result of the second AI/ML model and the first AI/ML model determined based on the comparing result.
  • regarding the pairing result, in the case that a mean squared error (MSE) between the third CSI and the fourth CSI in a predetermined period is larger than a threshold, the pairing result may indicate that the second AI/ML model is not paired with the first AI/ML model; and in the case that the MSE is not larger than the threshold, the pairing result may indicate that the second AI/ML model is paired with the first AI/ML model, and a model ID or a functionality ID associated with the second AI/ML model is indicated in the pairing result.
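  The MSE threshold test above can be sketched as follows (the function name, dictionary return value, and a single scalar MSE over the whole predetermined period are assumptions for illustration):

```python
import numpy as np

def pairing_check(third_csi, fourth_csi, threshold):
    """Compare the codebook-reconstructed CSI (third CSI) with the CSI
    reconstructed by the second AI/ML model (fourth CSI), accumulated
    over a predetermined period, and decide the pairing result."""
    mse = float(np.mean((np.asarray(third_csi) - np.asarray(fourth_csi)) ** 2))
    # MSE above the threshold -> the two models are considered not paired.
    return {"paired": mse <= threshold, "mse": mse}
```

  On a pairing success, the base station would additionally report the model ID or functionality ID associated with the second AI/ML model, as described above.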
  • receiving the first PMI and the at least one second PMI may comprise: transmitting, via the transceiver and to the UE, a request associated with determining whether the second AI/ML model can be paired with an activated first AI/ML model among the at least one first AI/ML model, and receiving, via the transceiver and from the UE, the first PMI and a second PMI associated with the activated first AI/ML model.
  • the processor may be further configured to: transmit, via the transceiver and to the UE, an indication to deactivate the activated first AI/ML model in the case that the second AI/ML model is determined to be not paired with the activated first AI/ML model by the base station.
  • the processor may be further configured to: calculate a mean squared error (MSE) between the third CSI and a fourth CSI in a predetermined period, wherein the fourth CSI is obtained based on the second AI/ML model and the second PMI associated with the activated first AI/ML model, in the case that the calculated MSE is larger than a threshold, the second AI/ML model is determined to be not paired with the activated first AI/ML model.
  • the at least one second PMI may include a plurality of second PMIs associated with a plurality of first AI/ML models, and wherein receiving the first PMI and the at least one second PMI may comprise: receiving, via the transceiver and from the UE, a request for selecting a first AI/ML model from among the plurality of first AI/ML models; and receiving, via the transceiver and from the UE, the first PMI and the plurality of second PMIs.
  • the processor may be further configured to: select a first AI/ML model from among the plurality of first AI/ML models based on the third CSI and a plurality of fourth CSIs obtained based on the second AI/ML model and the plurality of second PMIs, and transmit, via the transceiver and to the UE, an indication on a selecting result, wherein in the case that a first AI/ML model is selected from among the plurality of first AI/ML models, the indication may indicate a model identifier (ID) of the selected first AI/ML model, and in the case that no first AI/ML model is selected, the indication may indicate pairing failure.
  • selecting a first AI/ML model from among the plurality of first AI/ML models may comprise: selecting a first AI/ML model from the plurality of first AI/ML models based on mean squared errors (MSEs) between the third CSI and each of the plurality of fourth CSIs in a predetermined period.
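  A minimal sketch of this selection rule (the dictionary-based bookkeeping, the argmin tie-breaking, and returning `None` to signal pairing failure are assumptions):

```python
import numpy as np

def select_first_model(third_csi, fourth_csis_by_model, threshold):
    """Pick the first AI/ML model whose reconstructed CSI (fourth CSI) is
    closest, in MSE over the predetermined period, to the codebook-based
    reference (third CSI). Returns the model ID, or None on pairing failure."""
    ref = np.asarray(third_csi)
    mses = {mid: float(np.mean((ref - np.asarray(csi)) ** 2))
            for mid, csi in fourth_csis_by_model.items()}
    best = min(mses, key=mses.get)
    # Even the best candidate must satisfy the threshold to be paired.
    return best if mses[best] <= threshold else None
```

  The returned model ID corresponds to the indication of the selecting result described above; `None` corresponds to indicating pairing failure.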
  • the processor may be further configured to: receive, via the transceiver and from the UE, an indication on the number of the plurality of first AI/ML models, and transmit, via the transceiver and to the UE, CSI report configuration and CSI reference signal (RS) configuration determined based on the indication of the plurality of first AI/ML models.
  • the configuration on the first codebook may be based on at least one AI/ML model or functionality at UE, wherein the functionality is associated with one or more AI/ML models.
  • the configuration on the first codebook may include scaling factors for parameters associated with a second codebook indicated in a codebook configuration transmitted from the base station.
  • the configuration on the first codebook may be transmitted as functionality/model-related information during a functionality or model identification procedure between the UE and the base station, or the configuration on the first codebook may be transmitted via radio resource control (RRC) signaling.
  • the parameters may include at least one of: a number of beams; a number of phase quantization size; oversampling numbers; a number of beam amplitude scaling factors for both wideband and subband; or a number of beam combining coefficients or phases among beams, polarizations and layers.
  • the processor may be configured to receive the first PMI and the at least one second PMI together via one of uplink control information (UCI) , media access control (MAC) control element (CE) or radio resource control (RRC) .
  • CSI or eigen values (EVs) of channel matrices constructed based on the first codebook may be inputs of the at least one first AI/ML model.
  • a processor for wireless communication may comprise: at least one memory; and a controller coupled with the at least one memory and configured to cause the processor to: receive, via a transceiver and from a base station, a configuration on a first codebook; and transmit, via the transceiver and to the base station, a first precoding matrix indicator (PMI) which is obtained based on the first codebook and a first channel state information (CSI) estimated by the UE, and at least one second PMI which is obtained by at least one first artificial intelligence or machine learning (AI/ML) model.
  • a method performed by a user equipment may comprise: receiving, via a transceiver and from a base station, a configuration on a first codebook; and transmitting, via the transceiver and to the base station, a first precoding matrix indicator (PMI) which is obtained based on the first codebook and a first channel state information (CSI) estimated by the UE, and at least one second PMI which is obtained by at least one first artificial intelligence or machine learning (AI/ML) model.
  • a processor for wireless communication may comprise: at least one memory; and a controller coupled with the at least one memory and configured to cause the processor to: transmit, via a transceiver and to a user equipment (UE) , a configuration on a first codebook; receive, via the transceiver and from the UE, a first precoding matrix indicator (PMI) which is obtained based on the first codebook and a first channel state information (CSI) estimated by the UE, and at least one second PMI which is obtained by at least one first artificial intelligence or machine learning (AI/ML) model; and determine whether a second AI/ML model of the base station can be paired with the at least one first AI/ML model, based on a third CSI obtained based on the first codebook and the received first PMI and at least one fourth CSI obtained based on the second AI/ML model and the received at least one second PMI.
  • a method performed by a base station may comprise: transmitting, via a transceiver and to a user equipment (UE) , a configuration on a first codebook; receiving, via the transceiver and from the UE, a first precoding matrix indicator (PMI) which is obtained based on the first codebook and a first channel state information (CSI) estimated by the UE, and at least one second PMI which is obtained by at least one first artificial intelligence or machine learning (AI/ML) model; and determining whether a second AI/ML model of the base station is paired with the at least one first AI/ML model, based on a third CSI obtained based on the first codebook and the received first PMI and at least one fourth CSI obtained based on the second AI/ML model and the received at least one second PMI.
  • FIG. 1 illustrates an example of a wireless communications system that supports model pairing for AI/ML-based CSI compression in accordance with aspects of the present disclosure.
  • FIG. 2 illustrates an example of a basic AI/ML model used in CSI compression associated with aspects of the present disclosure.
  • FIG. 3 illustrates an example of a general training procedure of the two-sided model for CSI compression associated with aspects of the present disclosure.
  • FIGS. 4 to 6 illustrate examples of two-sided model training associated with aspects of the present disclosure.
  • FIG. 7 illustrates an example of a signalling procedure for model pairing for AI/ML-based CSI compression in accordance with aspects of the present disclosure.
  • FIG. 8 illustrates an example conceptual diagram of a dataset alignment indication and model pairing procedure in accordance with aspects of the present disclosure.
  • FIG. 9 illustrates an example procedure for model pairing between a base station and a UE with an aligned dataset over the air interface in accordance with aspects of the present disclosure.
  • FIG. 10 illustrates a schematic diagram for checking whether a model on one side can be paired with a model on the other side in accordance with aspects of the present disclosure.
  • FIG. 11 illustrates a schematic diagram for selecting a model in a set of models on one side to pair with a target model on the other side in accordance with aspects of the present disclosure.
  • FIGS. 12 and 13 illustrate examples of devices that support model pairing for AI/ML-based CSI compression in accordance with aspects of the present disclosure.
  • FIGS. 14 and 15 illustrate examples of processors that support model pairing for AI/ML-based CSI compression in accordance with aspects of the present disclosure.
  • FIGS. 16 and 17 illustrate flowcharts of methods that support model pairing for AI/ML-based CSI compression in accordance with aspects of the present disclosure.
  • references in the present disclosure to “one embodiment, ” “an example embodiment, ” “an embodiment, ” “some embodiments, ” and the like indicate that the embodiment (s) described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment (s) . Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • first and second or the like may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could also be termed as a second element, and similarly, a second element could also be termed as a first element, without departing from the scope of embodiments. As used herein, the term “and/or” includes any and all combinations of one or more of the listed terms.
  • the term “communication network” refers to a network following any suitable communication standards, such as, 5G new radio (NR) , long term evolution (LTE) , LTE-advanced (LTE-A) , wideband code division multiple access (WCDMA) , high-speed packet access (HSPA) , narrow band internet of things (NB-IoT) , and so on.
  • the communications between a terminal device and a network device in the communication network may be performed according to any suitable generation communication protocols, including but not limited to, the first generation (1G) , the second generation (2G) , 2.5G, 2.75G, the third generation (3G) , the fourth generation (4G) , 4.5G, the fifth generation (5G) communication protocols, and/or any other protocols either currently known or to be developed in the future.
  • Embodiments of the present disclosure may be applied in various communication systems. Given the rapid development in communications, there will also be future type communication technologies and systems in which the present disclosure may be embodied. It should not be seen as limiting the scope of the present disclosure to only the aforementioned systems.
  • the term “network device” generally refers to a node in a communication network via which a terminal device can access the communication network and receive services therefrom.
  • the network device may refer to a base station (BS) or an access point (AP) , for example, a node B (NodeB or NB) , a radio access network (RAN) node, an evolved NodeB (eNodeB or eNB) , a NR NB (also referred to as a gNB) , a remote radio unit (RRU) , a radio header (RH) , an infrastructure device for a V2X (vehicle-to-everything) communication, a transmission and reception point (TRP) , a reception point (RP) , a remote radio head (RRH) , a relay, an integrated access and backhaul (IAB) node, a low power node such as a femto BS, a pico BS, and so forth, depending on the applied terminology and technology.
  • terminal device generally refers to any end device that may be capable of wireless communications.
  • a terminal device may also be referred to as a communication device, a user equipment (UE) , an end user device, a subscriber station (SS) , an unmanned aerial vehicle (UAV) , a portable subscriber station, a mobile station (MS) , or an access terminal (AT) .
  • the terminal device may include, but is not limited to, a mobile phone, a cellular phone, a smart phone, a voice over IP (VoIP) phone, a wireless local loop phone, a tablet, a wearable terminal device, a personal digital assistant (PDA) , a portable computer, a desktop computer, an image capture terminal device such as a digital camera, a gaming terminal device, a music storage and playback appliance, a vehicle-mounted wireless terminal device, a wireless endpoint, a mobile station, laptop-embedded equipment (LEE) , laptop-mounted equipment (LME) , a USB dongle, a smart device, wireless customer-premises equipment (CPE) , an internet of things (IoT) device, a watch or other wearable, a head-mounted display (HMD) , a vehicle, a drone, a medical device (for example, a remote surgery device) , or an industrial device (for example, a robot and/or other wireless devices operating in an industrial and/or an automated processing chain) .
  • AI/ML is used to learn and perform certain tasks via training neural networks with vast amounts of data, and has been successfully applied in the computer vision (CV) and natural language processing (NLP) fields.
  • AI/ML-based methods may achieve better performance than traditional methods if well trained.
  • Type II CSI was introduced as part of 3GPP release 15, together with Type I CSI. Some important extensions and enhancements to Type II CSI, termed as eType II, were introduced in release 16. The higher spatial granularity of PMI feedback comes at the cost of significantly higher signaling overhead.
  • while a PMI report for Type I CSI may consist of at most a few tens of bits, a PMI report for Type II CSI may consist of several hundred bits.
  • W 1 is reported as a wideband precoder with a block-diagonal structure W 1 = diag (B, B) , where the two diagonal blocks correspond to the two polarizations and the columns of B correspond to the L selected beams.
  • the matrix W 2 provides an amplitude value (for partly wideband and partly subband reporting) and a phase value (for subband reporting) .
  • NR release 16 introduced an enhanced Type II CSI with the same basic principle as that in release-15, i.e., the reporting of a set of beams on a wideband basis together with the reporting of a set of combining coefficients on a more narrowband basis.
  • the reported beams are linearly combined by means of the combining coefficients to provide a set of precoder vectors, one for each layer.
  • An important feature of the release-16 enhanced Type II CSI is therefore to utilize correlations in the frequency domain to reduce the reporting overhead.
  • the release-16 Type II CSI allows for a factor-of-two improvement in the frequency-domain granularity of the PMI reporting.
  • the release-16 Type II CSI provides the transmitter side with a recommended precoder per frequency-domain (FD) unit, compared to one precoder per subband for the release-15 Type II CSI.
  • the reported precoder vectors for all FD units may, for the release-16 Type II CSI, be expressed as [w 0 (k) , w 1 (k) , …, w N-1 (k) ] = W 1 W 2, k W f ^H, where W f is an N × M matrix whose M columns are frequency-domain (delay) basis vectors.
  • N is the number of FD units to be reported, and w n (k) is the precoder vector of the n th FD unit for a given layer k.
  • W 1 is the same as explained above for the release- 15 Type II CSI, where the columns of B correspond to L selected beams. Furthermore, W 1 is the same for all FD units (wideband reporting) and also the same for all layers.
  • the matrix W 2, k of size 2L × M maps from the delay domain to the beam domain, similar to the matrix for the release-15 Type II CSI based on the matrices W 2 for all the subbands. Furthermore, only a fraction β of the total of 2L × M elements of W 2, k are assumed to be non-zero and thus need to be reported, where β is a configurable parameter.
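  The precoder structure described above can be sketched numerically. The small dimensions, random coefficients, DFT frequency-domain basis for W f, and the random mask standing in for the fraction β of non-zero coefficients are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
P, L, M, N = 4, 2, 3, 6   # ports per polarization, beams L, FD basis size M, FD units N

# W1: wideband, block-diagonal over the two polarizations; columns of B are beams.
B = rng.standard_normal((P, L)) + 1j * rng.standard_normal((P, L))
W1 = np.block([[B, np.zeros((P, L))], [np.zeros((P, L)), B]])

# W2,k: 2L x M combining coefficients; only a fraction of entries are non-zero.
W2k = rng.standard_normal((2 * L, M)) + 1j * rng.standard_normal((2 * L, M))
W2k *= rng.random((2 * L, M)) < 0.5

# Wf: N x M matrix of DFT frequency-domain basis vectors.
Wf = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(M)) / N) / np.sqrt(N)

# Columns of W are the precoder vectors, one per FD unit, for a given layer k.
W = W1 @ W2k @ Wf.conj().T
```

  Reporting only M delay-domain columns (and only the non-zero entries of W2,k) rather than one full coefficient matrix per subband is what produces the overhead reduction discussed next.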
  • the overhead reduction with the release-16 enhanced Type II CSI is due to two things: 1) the exploitation of frequency-domain correlations, such that only M delay-domain coefficients per beam, rather than one coefficient per subband, need to be reported; and 2) the reporting of only the fraction β of the combining coefficients that are non-zero.
  • AI/ML-based CSI compression was proposed as one of use cases to study AI/ML for air interface enhancement in Rel-18.
  • FIG. 2 illustrates an example of a basic AI/ML model used in CSI compression.
  • a two-sided AI/ML model is deployed to compress CSI/PMI.
  • at the UE side, the measured CSI (e.g., the estimated full CSI, or its dominant eigenvectors obtained via eigen value decomposition (EVD) ) is input to an AI model (i.e., the UE-part model) , for example, a convolutional neural network (CNN) model, which also may be referred to as an encoder, which compresses and quantizes the CSI into bits.
  • the bits are transmitted on physical uplink shared channel (PUSCH) /physical uplink control channel (PUCCH) as part of a CSI report (i.e., compressed and quantized CSI) to the gNB.
  • at the gNB side, the received bits (i.e., the compressed and quantized CSI) are input to another AI model deployed at the gNB side (i.e., the NW-part model) , for example, a CNN model, which also may be referred to as a decoder, to reconstruct the CSI.
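  As a toy end-to-end sketch of the compression pipeline in FIG. 2, where simple linear maps and uniform quantization stand in for the trained CNN encoder/decoder (all dimensions, the quantization step, and the pseudo-inverse decoder are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
D, K = 32, 8                       # full-CSI dimension, compressed dimension

E = rng.standard_normal((K, D)) / np.sqrt(D)   # UE-part "encoder" (toy linear map)
Dec = np.linalg.pinv(E)                        # NW-part "decoder" (pseudo-inverse)

csi = rng.standard_normal(D)       # measured/estimated full CSI at the UE
code = E @ csi                     # compress at the UE (UE-part model)
report = np.round(code * 8) / 8    # quantize; these bits form the CSI report
csi_hat = Dec @ report             # reconstruct at the gNB (NW-part model)
mse = float(np.mean((csi - csi_hat) ** 2))
```

  A real system would train the two parts jointly, as described in the training procedures below, rather than use fixed linear maps.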
  • FIG. 3 illustrates an example of a general training procedure of the two-sided model for CSI compression associated with aspects of the present disclosure.
  • As shown in FIG. 3, the UE-part model and the NW-part model may be trained on a shared dataset of CSI/eigen values (EVs), with a loss function, e.g., mean squared error (MSE), computed between the original CSI and the recovered CSI to update both models.
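The training loop of FIG. 3 can be sketched, under the simplifying assumption of linear encoder/decoder stand-ins fitted by gradient descent on the MSE loss (a real system would train neural networks):

```python
import numpy as np

# Toy joint-training loop: minimize MSE between input CSI and recovered CSI.
rng = np.random.default_rng(2)
X = rng.standard_normal((1000, 16))              # shared dataset of CSI samples
E = rng.standard_normal((16, 4)) * 0.1           # encoder weights (UE-part stand-in)
D = rng.standard_normal((4, 16)) * 0.1           # decoder weights (NW-part stand-in)

lr = 0.01
for _ in range(500):
    Z = X @ E                                    # forward: compress
    X_hat = Z @ D                                # forward: reconstruct
    err = X_hat - X                              # reconstruction error
    gD = Z.T @ err / len(X)                      # gradient of MSE w.r.t. decoder
    gE = X.T @ (err @ D.T) / len(X)              # gradient of MSE w.r.t. encoder
    D -= lr * gD
    E -= lr * gE

mse = float(np.mean((X @ E @ D - X) ** 2))       # final reconstruction MSE
```

After the loop, `mse` is below the initial reconstruction error, illustrating how the shared loss couples the two sides during training.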
  • FIGS. 4 to 6 illustrate examples of two-sided model training associated with aspects of the present disclosure.
  • FIG. 4 illustrates Type 1 training scheme for the two-sided model, i.e. joint training at one side.
  • The two models may be trained at one side, either the UE side or the NW side, and one of the trained models may be transferred to the other side once training is ready.
  • The two models are trained (402) at the NW side, and after the two trained models are ready (404) for deployment/inference, the trained model for the encoder, i.e., the UE-part model, is transferred (406) to and deployed (410) at the UE side, while the other trained model, for the decoder, is deployed (408) at the NW side.
  • FIG. 5 illustrates Type 2 training scheme for the two-sided model, i.e. joint training at both sides.
  • the two models are trained at both sides, including UE side and NW side.
  • The data during training, i.e., the values of forward propagation (FP) and backward propagation (BP), is exchanged between the UE side and the NW side.
  • the two models are trained (502, 504) at the UE side and the NW side, respectively, and after the two trained models are training ready (506, 508) for deployment/inference, the trained models are deployed (510, 512) at the UE side and the NW side, respectively.
  • FIG. 6 illustrates Type 3 training scheme for the two-sided model, i.e. sequential training.
  • In the Type 3 training scheme, sequential training is adopted, which means the two models are first trained at one side, either the UE side or the NW side.
  • Then, the generated data (i.e., the compressed CSI) and the trained model may be transferred to the other side, followed by training on the other side with the transferred data and the aligned dataset.
  • For example, the two models are first trained (602) at the NW side, and after the two trained models are ready (604), the trained model for the NW may be deployed (608) at the NW.
  • The other trained model, for the UE, as well as the compressed CSI, are transferred to the UE side, followed by training (606) on the UE side with the transferred data and the aligned dataset. After the model for the UE is ready at the UE, it may be deployed (610) at the UE.
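Under simplifying assumptions (linear model stand-ins, least-squares fitting), the sequential idea of Type 3 can be sketched as: the NW trains first and shares only the aligned dataset plus the generated compressed CSI, and the UE then fits its own encoder to those targets without either side exchanging model internals.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 16))               # aligned dataset shared by both sides

# Step 1 (NW side): assume a trained NW-side encoder produced these latents.
E_nw = rng.standard_normal((16, 4))
Z = X @ E_nw                                     # "compressed CSI" transferred to the UE

# Step 2 (UE side): fit a UE-part encoder to reproduce Z via least squares,
# using only the aligned dataset and the transferred compressed CSI.
E_ue, *_ = np.linalg.lstsq(X, Z, rcond=None)

mismatch = float(np.max(np.abs(X @ E_ue - Z)))   # how well the UE encoder matches
```

In this linear toy case the fit is exact; with real neural models the UE side would instead run its own training loop against the transferred targets.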
  • With the Type 3 training scheme, the two models may potentially be optimized for the software/hardware platform on which they are deployed. It is only necessary to transfer the data generated from the aligned dataset, which incurs less overhead than Type 2 and is friendly to diverse platforms.
  • If a model on the NW side needs to support multiple models on UEs, it is better to use one model, or a limited number of models, to pair with the models in the UEs.
  • Thus, the Type 3 training scheme could be much better than the other two types (i.e., the Type 1 training scheme and the Type 2 training scheme) if training is needed.
  • There may be potential specification impact on alignment of the quantization/dequantization method and the feedback message size between the network and the UE, including: (1) for a vector quantization scheme, the format and size of the VQ codebook, and the size and segmentation method of the CSI generation model output; (2) for a scalar quantization scheme, uniform and non-uniform quantization with format, e.g., quantization granularity, including the distribution of bits assigned to each float; and (3) quantization alignment using a 3GPP-aware mechanism.
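The scalar-quantization alignment point can be illustrated with a uniform quantizer/dequantizer; the bit width and value range here are example assumptions, and the point is that both sides must agree on them for the feedback to be decodable.

```python
import numpy as np

def quantize(x, n_bits, lo=-1.0, hi=1.0):
    """Uniform scalar quantization of model-output floats into integer levels."""
    levels = 2 ** n_bits - 1
    q = np.round((np.clip(x, lo, hi) - lo) / (hi - lo) * levels)
    return q.astype(int)                          # integers carried in the CSI report

def dequantize(q, n_bits, lo=-1.0, hi=1.0):
    """Inverse mapping at the receiver; must use the same n_bits/lo/hi."""
    levels = 2 ** n_bits - 1
    return q / levels * (hi - lo) + lo

x = np.array([-0.8, -0.1, 0.3, 0.9])
q = quantize(x, n_bits=4)
x_hat = dequantize(q, n_bits=4)
# Reconstruction error is bounded by half a quantization step.
assert np.max(np.abs(x - x_hat)) <= (2.0 / 15) / 2 + 1e-9
```

If the receiver dequantized with a different granularity or range, the recovered values would be systematically wrong, which is why the alignment above is a specification matter.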
  • The CSI may be represented as a precoding matrix (e.g., the precoding matrix in the spatial-frequency domain or the precoding matrix represented using angular-delay domain projection) or as an explicit channel matrix (i.e., the full Tx × Rx MIMO channel).
  • For UE side data collection, there may be potential specification impact on: (1) enhancement of CSI-RS configuration to enable higher accuracy measurement; (2) assistance information for UE data collection for categorizing the data in forms of ID for the purpose of differentiating characteristics of data due to specific configuration, scenarios, site, etc. (the provision of assistance information needs to consider the feasibility of disclosing proprietary information to the other side); and (3) signaling for triggering the data collection.
  • For NW side data collection, there may be potential specification impact on: (1) enhancement of sounding reference signal (SRS) and/or CSI-RS measurement and/or CSI reporting to enable higher accuracy measurement; (2) contents of the ground-truth CSI, including data sample type (e.g., precoding matrix, channel matrix, etc.), data sample format including scalar quantization and/or codebook-based quantization (e.g., eType II like), and assistance information (e.g., time stamps, and/or cell ID, assistance information for network data collection for categorizing the data in forms of ID for the purpose of differentiating characteristics of data due to specific configuration, scenarios, site, etc., and data quality indicator); (3) latency requirement for data collection; (4) signaling for triggering the data collection; (5) ground-truth CSI report for NW side data collection for model performance monitoring, including scalar quantization for ground-truth CSI, codebook-based quantization for ground-truth CSI, radio resource control (RRC) signaling and/or L1 signaling procedure to enable fast identification of AI/ML model performance, and aperiodic/semi-persistent or periodic ground-truth CSI report; and (6) ground-truth CSI format for model training, including scalar or codebook-based quantization for ground-truth CSI.
  • For CSI configuration and report, there may be potential specification impact on the following aspects: (1) NW configuration to determine the CSI payload size, e.g., possible CSI payload size, possible rank restriction and/or other related configuration; (2) how the UE determines/reports the actual CSI payload size and/or other CSI related information within constraints configured by the network; and (3) relevant uplink control information (UCI) format, considering the legacy CSI reporting principle with CSI Part 1 and Part 2 as a starting point, where Part 1 has a network-configured fixed size and the Part 2 size is dynamic, determined by information in Part 1.
  • For the codebook subset restriction, e.g., with input-CSI-NW/output-CSI-UE considered in the angular-delay domain, beam restriction may be based on the legacy SD basis vector-based input CSI in the angular domain.
  • There may also be potential specification impact on: (1) CSI-RS configurations (not including CSI-RS pattern design enhancements); (2) CSI configuration for the network to indicate CSI reporting related information, e.g., gNB indication to the UE of one or more of information indicating CSI payload size, information indicating quantization method/granularity, rank restriction, and other payload related aspects; (3) CSI reporting configurations for UE determination/reporting of the actual CSI payload size, where the UE reports related information as configured by the NW; (4) CSI report UCI mapping/priority/omission; and (5) CSI processing procedures, e.g., involving the CSI processing unit.
  • The first issue is that the dataset for model training on either side is very large and proprietary, and it is not always permitted to transfer it to the other side due to privacy issues.
  • The dataset for training plays a key role in any AI/ML-based approach.
  • Aspects of the dataset on either side could be proprietary, such as the quantization level and the data format. Though the dataset could be transferred to the other side, this approach consumes far more radio resources and is not acceptable for privacy.
  • The second issue is that the performance of the two-sided model for CSI compression is sensitive to the model pairing. According to the evaluation results, the performance is highly related to the paired models. If the models on the two sides are mis-matched, the performance may be degraded seriously. There should be specification impacts on the model pairing procedures of the two-sided model.
  • The third issue is how to monitor the performance of a two-sided model for the AI/ML-based CSI compression. Similar to other AI/ML-based approaches, it is necessary to monitor the performance of the two-sided model, which differs from the single-sided model in how metrics are aligned to detect degradation.
  • the present disclosure proposes a solution to support model pairing for AI/ML-based CSI compression.
  • the UE and the base station may share a high resolution codebook.
  • A PMI may be obtained based on the CSI estimated by the UE and, together with PMI(s) generated via model(s) at the UE, transmitted to the base station, to assess whether a model at the UE side is paired with a model at the base station side.
  • In this way, the two models for AI/ML-based CSI compression may be better paired over the air interface, while the performance and further monitoring for the AI/ML-based CSI compression are also guaranteed.
  • FIG. 1 illustrates an example of a wireless communications system 100 that supports CSI compression in accordance with aspects of the present disclosure.
  • the wireless communications system 100 may include one or more network entities 102 (also referred to as network equipment (NE) ) , one or more UEs 104, a core network 106, and a packet data network 108.
  • the wireless communications system 100 may support various radio access technologies.
  • the wireless communications system 100 may be a 4G network, such as an LTE network or an LTE-advanced (LTE-A) network.
  • the wireless communications system 100 may be a 5G network, such as an NR network.
  • the wireless communications system 100 may be a combination of a 4G network and a 5G network, or other suitable radio access technology including institute of electrical and electronics engineers (IEEE) 802.11 (Wi-Fi) , IEEE 802.16 (WiMAX) , IEEE 802.20.
  • The wireless communications system 100 may support radio access technologies beyond 5G. Additionally, the wireless communications system 100 may support technologies, such as time division multiple access (TDMA), frequency division multiple access (FDMA), or code division multiple access (CDMA), etc.
  • the one or more network entities 102 may be dispersed throughout a geographic region to form the wireless communications system 100.
  • One or more of the network entities 102 described herein may be or include or may be referred to as a network node, a base station, a network element, a radio access network (RAN) , a base transceiver station, an access point, a NodeB, an eNodeB (eNB) , a next-generation NodeB (gNB) , or other suitable terminology.
  • a network entity 102 and a UE 104 may communicate via a communication link 110, which may be a wireless or wired connection.
  • a network entity 102 and a UE 104 may perform wireless communication (e.g., receive signaling, transmit signaling) over a Uu interface.
  • a network entity 102 may provide a geographic coverage area 112 for which the network entity 102 may support services (e.g., voice, video, packet data, messaging, broadcast, etc. ) for one or more UEs 104 within the geographic coverage area 112.
  • a network entity 102 and a UE 104 may support wireless communication of signals related to services (e.g., voice, video, packet data, messaging, broadcast, etc. ) according to one or multiple radio access technologies.
  • a network entity 102 may be moveable, for example, a satellite associated with a non-terrestrial network.
  • different geographic coverage areas 112 associated with the same or different radio access technologies may overlap, but the different geographic coverage areas 112 may be associated with different network entities 102.
  • Information and signals described herein may be represented using any of a variety of different technologies and techniques.
  • data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
  • the one or more UEs 104 may be dispersed throughout a geographic region of the wireless communications system 100.
  • a UE 104 may include or may be referred to as a mobile device, a wireless device, a remote device, a remote unit, a handheld device, or a subscriber device, or some other suitable terminology.
  • the UE 104 may be referred to as a unit, a station, a terminal, or a client, among other examples.
  • the UE 104 may be referred to as an internet-of-things (IoT) device, an internet-of-everything (IoE) device, or machine-type communication (MTC) device, among other examples.
  • a UE 104 may be stationary in the wireless communications system 100.
  • a UE 104 may be mobile in the wireless communications system 100.
  • the one or more UEs 104 may be devices in different forms or having different capabilities. Some examples of UEs 104 are illustrated in FIG. 1.
  • a UE 104 may be capable of communicating with various types of devices, such as the network entities 102, other UEs 104, or network equipment (e.g., the core network 106, the packet data network 108, a relay device, an integrated access and backhaul (IAB) node, or another network equipment) , as shown in FIG. 1.
  • a UE 104 may support communication with other network entities 102 or UEs 104, which may act as relays in the wireless communications system 100.
  • a UE 104 may also be able to support wireless communication directly with other UEs 104 over a communication link 114.
  • a UE 104 may support wireless communication directly with another UE 104 over a device-to-device (D2D) communication link.
  • the communication link 114 may be referred to as a sidelink.
  • a UE 104 may support wireless communication directly with another UE 104 over a PC5 interface.
  • a network entity 102 may support communications with the core network 106, or with another network entity 102, or both.
  • a network entity 102 may interface with the core network 106 through one or more backhaul links 116 (e.g., via an S1, N2, N3, or another network interface) .
  • the network entities 102 may communicate with each other over the backhaul links 116 (e.g., via an X2, Xn, or another network interface) .
  • the network entities 102 may communicate with each other directly (e.g., between the network entities 102) .
  • the network entities 102 may communicate with each other or indirectly (e.g., via the core network 106) .
  • one or more network entities 102 may include subcomponents, such as an access network entity, which may be an example of an access node controller (ANC) .
  • An ANC may communicate with the one or more UEs 104 through one or more other access network transmission entities, which may be referred to as radio heads, smart radio heads, or transmission-reception points (TRPs) .
  • a network entity 102 may be configured in a disaggregated architecture, which may be configured to utilize a protocol stack physically or logically distributed among two or more network entities 102, such as an integrated access backhaul (IAB) network, an open radio access network (O-RAN) (e.g., a network configuration sponsored by the O-RAN Alliance) , or a virtualized RAN (vRAN) (e.g., a cloud RAN (C-RAN) ) .
  • a network entity 102 may include one or more of a CU, a DU, a radio unit (RU) , a RAN intelligent controller (RIC) (e.g., a near-real time RIC (Near-RT RIC) , a non-real time RIC (Non-RT RIC) ) , a service management and orchestration (SMO) system, or any combination thereof.
  • An RU may also be referred to as a radio head, a smart radio head, a remote radio head (RRH) , a remote radio unit (RRU) , or a transmission reception point (TRP) .
  • One or more components of the network entities 102 in a disaggregated RAN architecture may be co-located, or one or more components of the network entities 102 may be located in distributed locations (e.g., separate physical locations) .
  • one or more network entities 102 of a disaggregated RAN architecture may be implemented as virtual units (e.g., a virtual CU (VCU) , a virtual DU (VDU) , a virtual RU (VRU) ) .
  • Split of functionality between a CU, a DU, and an RU may be flexible and may support different functionalities depending upon which functions (e.g., network layer functions, protocol layer functions, baseband functions, radio frequency functions, and any combinations thereof) are performed at a CU, a DU, or an RU.
  • a functional split of a protocol stack may be employed between a CU and a DU such that the CU may support one or more layers of the protocol stack and the DU may support one or more different layers of the protocol stack.
  • the CU may host upper protocol layer (e.g., a layer 3 (L3) , a layer 2 (L2) ) functionality and signaling (e.g., radio resource control (RRC) , service data adaption protocol (SDAP) , packet data convergence protocol (PDCP) ) .
  • the CU may be connected to one or more DUs or RUs, and the one or more DUs or RUs may host lower protocol layers, such as a layer 1 (L1) (e.g., physical (PHY) layer) or an L2 (e.g., radio link control (RLC) layer, medium access control (MAC) layer) functionality and signaling, and may each be at least partially controlled by the CU.
  • a functional split of the protocol stack may be employed between a DU and an RU such that the DU may support one or more layers of the protocol stack and the RU may support one or more different layers of the protocol stack.
  • the DU may support one or multiple different cells (e.g., via one or more RUs) .
  • a functional split between a CU and a DU, or between a DU and an RU may be within a protocol layer (e.g., some functions for a protocol layer may be performed by one of a CU, a DU, or an RU, while other functions of the protocol layer are performed by a different one of the CU, the DU, or the RU) .
  • a CU may be functionally split further into CU control plane (CU-CP) and CU user plane (CU-UP) functions.
  • a CU may be connected to one or more DUs via a midhaul communication link (e.g., F1, F1-c, F1-u)
  • a DU may be connected to one or more RUs via a fronthaul communication link (e.g., open fronthaul (FH) interface)
  • a midhaul communication link or a fronthaul communication link may be implemented in accordance with an interface (e.g., a channel) between layers of a protocol stack supported by respective network entities 102 that are in communication via such communication links.
  • the core network 106 may support user authentication, access authorization, tracking, connectivity, and other access, routing, or mobility functions.
  • the core network 106 may be an evolved packet core (EPC) , or a 5G core (5GC) , which may include a control plane entity that manages access and mobility (e.g., a mobility management entity (MME) , an access and mobility management functions (AMF) ) and a user plane entity that routes packets or interconnects to external networks (e.g., a serving gateway (S-GW) , a packet data network (PDN) gateway (P-GW) , or a user plane function (UPF) ) .
  • control plane entity may manage non-access stratum (NAS) functions, such as mobility, authentication, and bearer management (e.g., data bearers, signal bearers, etc. ) for the one or more UEs 104 served by the one or more network entities 102 associated with the core network 106.
  • the core network 106 may communicate with the packet data network 108 over one or more backhaul links 116 (e.g., via an S1, N2, N3, or another network interface) .
  • the packet data network 108 may include an application server 118.
  • one or more UEs 104 may communicate with the application server 118.
  • a UE 104 may establish a session (e.g., a protocol data unit (PDU) session, or the like) with the core network 106 via a network entity 102.
  • the core network 106 may route traffic (e.g., control information, data, and the like) between the UE 104 and the application server 118 using the established session (e.g., the established PDU session) .
  • the PDU session may be an example of a logical connection between the UE 104 and the core network 106 (e.g., one or more network functions of the core network 106) .
  • the network entities 102 and the UEs 104 may use resources of the wireless communications system 100 (e.g., time resources (e.g., symbols, slots, subframes, frames, or the like) or frequency resources (e.g., subcarriers, carriers) ) to perform various operations (e.g., wireless communications) .
  • the network entities 102 and the UEs 104 may support different resource structures.
  • the network entities 102 and the UEs 104 may support different frame structures.
  • the network entities 102 and the UEs 104 may support a single frame structure.
  • the network entities 102 and the UEs 104 may support various frame structures (i.e., multiple frame structures) .
  • the network entities 102 and the UEs 104 may support various frame structures based on one or more numerologies.
  • One or more numerologies may be supported in the wireless communications system 100, and a numerology may include a subcarrier spacing and a cyclic prefix.
  • a time interval of a resource may be organized according to frames (also referred to as radio frames) .
  • Each frame may have a duration, for example, a 10 millisecond (ms) duration.
  • each frame may include multiple subframes.
  • each frame may include 10 subframes, and each subframe may have a duration, for example, a 1 ms duration.
  • each frame may have the same duration.
  • each subframe of a frame may have the same duration.
  • a time interval of a resource may be organized according to slots.
  • a subframe may include a number (e.g., quantity) of slots.
  • the number of slots in each subframe may also depend on the one or more numerologies supported in the wireless communications system 100.
  • Each slot may include a number (e.g., quantity) of symbols (e.g., OFDM symbols) .
  • the number (e.g., quantity) of slots for a subframe may depend on a numerology.
  • For a normal cyclic prefix, a slot may include 14 symbols.
  • For an extended cyclic prefix (e.g., applicable for 60 kHz subcarrier spacing) , a slot may include 12 symbols.
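The slot arithmetic described above can be sketched as follows, assuming the NR-style scaling in which a numerology μ yields 2^μ slots per 1 ms subframe (subcarrier spacing 15 kHz × 2^μ):

```python
def slots_per_subframe(mu: int) -> int:
    """A 1 ms subframe contains 2**mu slots under numerology mu."""
    return 2 ** mu

def symbols_per_slot(extended_cp: bool) -> int:
    """14 symbols with normal cyclic prefix, 12 with extended cyclic prefix."""
    return 12 if extended_cp else 14

# Example: 30 kHz subcarrier spacing (mu = 1) gives 2 slots per subframe,
# hence 20 slots per 10 ms frame, each with 14 symbols under normal CP.
assert slots_per_subframe(1) == 2
assert 10 * slots_per_subframe(1) == 20
assert symbols_per_slot(extended_cp=False) == 14
assert symbols_per_slot(extended_cp=True) == 12
```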
  • an electromagnetic (EM) spectrum may be split, based on frequency or wavelength, into various classes, frequency bands, frequency channels, etc.
  • the wireless communications system 100 may support one or multiple operating frequency bands, such as frequency range designations FR1 (410 MHz –7.125 GHz) , FR2 (24.25 GHz –52.6 GHz) , FR3 (7.125 GHz –24.25 GHz) , FR4 (52.6 GHz –114.25 GHz) , FR4a or FR4-1 (52.6 GHz –71 GHz) , and FR5 (114.25 GHz –300 GHz) .
  • the network entities 102 and the UEs 104 may perform wireless communications over one or more of the operating frequency bands.
  • FR1 may be used by the network entities 102 and the UEs 104, among other equipment or devices for cellular communications traffic (e.g., control information, data) .
  • FR2 may be used by the network entities 102 and the UEs 104, among other equipment or devices for short-range, high data rate capabilities.
  • FR1 may be associated with one or multiple numerologies (e.g., at least three numerologies) .
  • FR2 may be associated with one or multiple numerologies (e.g., at least 2 numerologies) .
  • FIG. 7 illustrates an example of signalling procedure 700 for model pairing for AI/ML-based CSI compression in accordance with aspects of the present disclosure.
  • the signalling procedure 700 involves a base station 102 and a UE 104.
  • the base station 102 may also be referred to as NW 103, gNB 102 or the like.
  • the base station 102 may transmit a configuration on a first codebook to the UE 104.
  • The configuration on the first codebook is received by the UE 104, so that the base station 102 and the UE 104 may share the same first codebook and perform a model matching procedure based on the first codebook.
  • In some implementations, the first codebook may be a high-resolution codebook, and the configuration on the first codebook may be a high-resolution codebook indication, which includes scaling factors for parameters associated with a second codebook indicated in a codebook configuration transmitted from the base station 102.
  • The second codebook may be a currently used codebook at the UE 104 and the base station 102, e.g., the current Type II CSI codebook.
  • the parameters associated with the second codebook may include at least one of a number of beams, a number of phase quantization size, oversampling numbers, a number of beam amplitude scaling factors for both wideband and subband, or a number of beam combining coefficients or phases among beams, polarizations and layers.
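As an illustration of the scaling-factor idea, a hypothetical derivation of the first (high-resolution) codebook parameters from the second codebook parameters is sketched below; the parameter names and values are invented for the example, not taken from the specification.

```python
# Hypothetical second-codebook (e.g., Type II) parameters and the scaling
# factors carried in the high-resolution codebook indication.
type2_params = {"num_beams": 4, "phase_quant_size": 16, "oversampling": 4}
scaling = {"num_beams": 2, "phase_quant_size": 4, "oversampling": 1}

# First-codebook parameters: each second-codebook parameter scaled up,
# yielding a finer-grained (higher-resolution) codebook shared by both sides.
high_res_params = {k: type2_params[k] * scaling[k] for k in type2_params}
assert high_res_params == {"num_beams": 8, "phase_quant_size": 64, "oversampling": 4}
```

Because only scaling factors are signaled, both sides can derive the same high-resolution codebook from the already-configured second codebook without transferring a new codebook.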
  • The configuration on the first codebook may be based on at least one AI/ML model or functionality at the UE 104, where the functionality may be associated with one or more AI/ML models.
  • the configuration on the first codebook may be functionality or model specific.
  • the configuration on the first codebook may be transmitted as functionality/model-related information during a functionality or model identification procedure between the UE 104 and the base station 102, or the configuration on the first codebook may be transmitted via radio resource control (RRC) signaling, which will be explained in detail in conjunction with FIG. 8 and FIG. 9.
  • the UE 104 may transmit a first PMI which is obtained based on the first codebook and a CSI estimated by the UE, and at least one second PMI which is obtained by at least one first AI/ML model of the UE 104.
  • the at least one first AI/ML model may be associated with CSI feedback compression.
  • the UE 104 may transmit the first PMI and the at least one second PMI together via one of uplink control information (UCI) , media access control (MAC) control element (CE) or radio resource control (RRC) .
  • the base station 102 may receive the first PMI and the at least one second PMI, and determine, at step 706, whether a second AI/ML model of the base station 102 can be paired with the at least one first AI/ML model, based on the received first PMI and the received at least one second PMI, for example, based on a third CSI obtained based on the first codebook and the received first PMI and at least one fourth CSI obtained based on the second AI/ML model and the received at least one second PMI.
  • the model pairing procedure may be achieved.
  • the model pairing procedure may be used to check whether an AI/ML model at one of the UE side and the base station side can be paired with an AI/ML model at another one of the UE side and the base station side.
  • the model pairing procedure may be initiated by the UE 104 to check whether a first AI/ML model among the at least one first AI/ML model of UE 104 can be paired with a second AI/ML model of the base station 102.
  • step 704 may include more operations or actions.
  • For example, the UE 104 may transmit, to the base station 102, a request for determining whether a first AI/ML model among the at least one first AI/ML model of the UE 104 can be paired with a second AI/ML model of the base station 102, and transmit the first PMI and a second PMI associated with the first AI/ML model to the base station 102.
  • the request for determining whether the first AI/ML model can be paired with the second AI/ML model may be regarded as a request to trigger the model pairing procedure.
  • The base station 102 may receive the request transmitted from the UE 104, and receive the first PMI and the second PMI; the received first PMI and the received second PMI may be used by the base station 102 at step 706 to determine whether the first AI/ML model is paired with the second AI/ML model.
  • The base station 102 may calculate a third CSI based on the first codebook and the received first PMI, and calculate a fourth CSI based on the second AI/ML model of the base station 102 and the received second PMI, and then compare the third CSI and the fourth CSI to determine whether the first AI/ML model is paired with the second AI/ML model.
  • the base station 102 may transmit a pairing result, e.g., pairing failure or pairing success, of the second AI/ML model and the first AI/ML model to the UE 104 determined based on the above comparing.
  • the pairing result may indicate that the second AI/ML model is not paired with the first AI/ML model.
  • The pairing result may indicate that the second AI/ML model is paired with the first AI/ML model, and a model ID or a functionality ID associated with the second AI/ML model may be indicated in the pairing result.
  • the base station 102 may configure CSI report configuration and/or CSI reference signal (RS) configuration for model pairing, and send the same to the UE 104.
  • the UE 104 may estimate the CSI from the configured CSI-RS based on the CSI-RS configuration, and obtain the first PMI based on the first codebook and the estimated CSI.
  • In some implementations, the UE 104 may obtain the second PMI by using a corresponding first AI/ML model, where the CSI, or the eigen values (EVs) of channel matrices constructed from the estimated CSI based on the first codebook, are inputs to the first AI/ML model.
  • the model pairing procedure may be initiated by the base station 102 to check whether the second AI/ML model of the base station 102 can be paired with an activated first AI/ML model of the UE 104.
  • step 704 may include more operations or actions.
  • the base station 102 may transmit, to the UE 104, a request associated with determining whether the second AI/ML model can be paired with an activated first AI/ML model among the at least one first AI/ML model.
  • the UE 104 may receive the request, and transmit, to the base station 102, the first PMI and a second PMI associated with the activated first AI/ML model in response to the request.
• the base station 102 may receive, from the UE 104, the first PMI and the second PMI; the received first PMI and the received second PMI may be used by the base station 102 at step 706 to determine whether the activated first AI/ML model is paired with the second AI/ML model.
  • the base station 102 may calculate a third CSI based on the first codebook and the received first PMI, and calculate a fourth CSI based on the second AI/ML model and the received second PMI, and then compare the third CSI and the fourth CSI to determine whether the activated first AI/ML model is paired with the second AI/ML model. For example, the base station 102 may calculate a MSE between the third CSI and the fourth CSI in a predetermined period, and in the case that the calculated MSE is larger than a threshold, determine that the second AI/ML model is not paired with the activated first AI/ML model.
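The MSE check described above can be sketched as follows. This is an illustrative sketch only: the function name, the representation of CSI as flat sequences of complex samples collected over the predetermined period, and the threshold value in the usage example are assumptions, not part of the disclosure.

```python
def check_model_pairing(csi_codebook, csi_model, threshold):
    """Pairing check between a CSI reconstructed from the high-resolution
    codebook PMI (third CSI) and a CSI recovered by the NW-part AI/ML
    model (fourth CSI), accumulated over a predetermined period.

    Returns (paired, mse): pairing fails when the MSE exceeds the threshold.
    """
    if len(csi_codebook) != len(csi_model) or not csi_codebook:
        raise ValueError("the two CSI sample sequences must have equal, non-zero length")
    # Mean squared error between the two CSI reconstructions.
    mse = sum(abs(a - b) ** 2 for a, b in zip(csi_codebook, csi_model)) / len(csi_codebook)
    return mse <= threshold, mse
```

For example, `check_model_pairing([1 + 0j, 0.5 + 0.5j], [1.01 + 0j, 0.51 + 0.5j], 0.1)` would report a successful pairing, while a model whose recovered CSI deviates strongly from the reference would fail the same threshold.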
  • the base station 102 may transmit an indication to deactivate the activated first AI/ML model in the case that the second AI/ML model is determined to be not paired with the activated first AI/ML model. With this indication, the UE 104 may deactivate the activated first AI/ML model.
• the UE 104 may determine another activated first AI/ML model that can be paired with the second AI/ML model if there is more than one first AI/ML model available at the UE 104, or the UE 104 may adopt a legacy CSI compression scheme if there is only one first AI/ML model (i.e., the deactivated first AI/ML model) at the UE 104 or there is no first AI/ML model that can be paired with the second AI/ML model.
  • the base station 102 may not transmit any indication or information to the UE 104.
  • the UE 104 may determine that the second AI/ML model is paired with the activated first AI/ML model by default.
  • the base station 102 may configure CSI report configuration and/or CSI RS configuration for model pairing, and send the same to the UE 104.
• the UE 104 may estimate the CSI from the configured CSI-RS based on the CSI-RS configuration, and then obtain the first PMI based on the first codebook and the estimated CSI.
  • the UE 104 may obtain the second PMI by using a corresponding first AI/ML model, and the CSI or EVs of channel matrices constructed based on the first codebook from the estimated CSI are inputs of the first AI/ML model.
  • the model pairing procedure may be further used to select a model in a set of models on one side to pair with a target model on the other side.
  • the model pairing procedure may be initiated by the UE 104.
  • step 704 may include more operations or actions.
  • the UE 104 may transmit a request for selecting a first AI/ML model from among the plurality of first AI/ML models to the base station 102, and transmit the first PMI and a plurality of second PMIs associated with the plurality of first AI/ML models to the base station 102.
• the base station 102 may receive the request transmitted from the UE 104, and receive the first PMI and the plurality of second PMIs; the received first PMI and the received plurality of second PMIs may be used to determine which one of the plurality of first AI/ML models can be paired with the second AI/ML model. For example, at step 706, the base station 102 may select a first AI/ML model from among the plurality of first AI/ML models based on the third CSI and a plurality of fourth CSIs obtained based on the second AI/ML model and the plurality of second PMIs.
  • the base station 102 may select a first AI/ML model from the plurality of first AI/ML models based on mean squared errors (MSEs) between the third CSI and each of the plurality of fourth CSIs in a predetermined period, e.g., a first AI/ML model with the minimum MSE, which is also not larger than the threshold as described above, may be selected by the base station 102.
  • the base station 102 may transmit an indication on a selecting result to the UE 104.
  • the indication may indicate a model ID of the selected first AI/ML model, while in the case that no first AI/ML model is selected, the indication may indicate pairing failure.
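The minimum-MSE selection described above can be sketched as follows; the function name and the return convention (candidate index, or None on pairing failure) are illustrative assumptions.

```python
def select_paired_model(csi_ref, candidate_csis, threshold):
    """Select, among the fourth CSIs recovered from the candidate models'
    second PMIs, the one with the minimum MSE to the reference (third)
    CSI, provided that MSE is not larger than the threshold.

    Returns (index, mse) of the selected candidate, or (None, mse) when
    even the best candidate exceeds the threshold (pairing failure).
    """
    def mse(csi):
        return sum(abs(a - b) ** 2 for a, b in zip(csi_ref, csi)) / len(csi_ref)

    errors = [mse(c) for c in candidate_csis]
    # Index of the candidate with the smallest MSE.
    best = min(range(len(errors)), key=errors.__getitem__)
    return (best if errors[best] <= threshold else None), errors[best]
```

When no candidate passes the threshold, the returned None corresponds to the pairing-failure indication sent to the UE.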
  • the UE 104 may further transmit an indication to indicate a number of the plurality of first AI/ML models to the base station 102.
  • the base station 102 may receive this indication and configure CSI report configuration and/or CSI RS configuration for model pairing further considering this indication, and send the CSI report configuration and/or CSI RS configuration to the UE 104.
• the UE 104 may estimate the CSI from the configured CSI-RS based on the CSI-RS configuration, and obtain the first PMI based on the first codebook and the estimated CSI.
  • the UE 104 may obtain each second PMI by using a corresponding first AI/ML model, and the CSI or EVs of channel matrices constructed based on the first codebook from the estimated CSI are inputs of the first AI/ML model.
• the model pairing procedure may be triggered periodically, or with other types of request (s) , or with request (s) from a device other than the UE 104 and the base station 102.
• FIG. 8 illustrates an example conceptual diagram of the dataset alignment indication and model pairing procedure in accordance with aspects of the present disclosure.
• UE may correspond to the UE 104 described above.
• NW may correspond to the base station 102 described above.
• a configuration on a high-resolution codebook, i.e., the configuration on the first codebook or the high-resolution codebook indication as described above, may be shared (step 802) between the NW and the UE via a high-resolution codebook indication, and is used to re-construct the datasets on both sides.
  • the re-constructed datasets on the NW side and the UE side may be generated via extending current used codebook, e.g., current Type II CSI codebook, from at least one of the following aspects: (1) to increase the number of beams, L>4; (2) to increase oversampling numbers (O 1 , O 2 ) ; (3) to increase the number of beam amplitude scaling factors for both wideband and subband; and (4) to increase the number of beam combining coefficients (phases) among beams, polarizations and layers.
• scaling factors may be applied on the above corresponding numbers to increase the resolution of the codebook, which means that the NR Rel-15 Type II codebook with RI ∈ {1, 2, 3, 4} may be extended as the default codebook. In this way, complicated re-design of the codebook and of the PMI value calculation assumptions may be avoided.
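The scaling-based extension above can be sketched as follows. The parameter names mirror NR CodebookConfig fields (numberOfBeams, phaseAlphabetSize) and the oversampling numbers (O1, O2), but the dictionary layout and the scaling defaults are illustrative assumptions, not a normative configuration.

```python
def extend_codebook_config(base, beam_scale=2, phase_scale=2, oversampling_scale=1):
    """Extend a legacy Type II codebook configuration by applying scaling
    factors, rather than re-designing the codebook, covering aspects
    (1)-(4): more beams, larger oversampling, subband amplitude
    quantization, and more combining-coefficient phases.
    """
    return {
        # (1) increase the number of beams L
        "numberOfBeams": base["numberOfBeams"] * beam_scale,
        # (4) increase the number of beam combining coefficients (phases)
        "phaseAlphabetSize": base["phaseAlphabetSize"] * phase_scale,
        # (2) increase the oversampling numbers (O1, O2)
        "O1": base["O1"] * oversampling_scale,
        "O2": base["O2"] * oversampling_scale,
        # (3) more amplitude scaling factors: enable subband amplitude quantization
        "subbandAmplitude": True,
    }
```

Keeping the extension as a set of multipliers over the legacy configuration is what lets both sides re-construct identical high-resolution datasets from a compact indication.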
• the configuration on the high-resolution codebook (which may also be referred to as the codebook configuration for short below) may indicate some other ways for obtaining the high-resolution codebook; there is no limit on this point in the present disclosure.
  • the codebook configuration may be functionality or model specific.
  • the datasets on both sides may be re-constructed.
• the dataset at the NW side may be re-constructed in advance of sharing the high-resolution codebook indication with the UE.
  • the codebook configuration at the NW side may be indicated directly during AI/ML functionality/model identification (which will be explained later) as a kind of functionality/model-related information.
• the codebook configuration at the NW side may be sent from the NW to the UE as a kind of higher layer parameters (different sets of parameters may be associated with different codebook configuration IDs) , when the model pairing procedure is triggered by either the NW or the UE.
• PMIs corresponding to the configured high-resolution codebook may be generated and reported by the UE to the NW for the model pairing procedure (step 804) .
  • a dedicated CSI-RS configuration and CSI report configuration may be configured for model pairing such that the UE could report the PMIs corresponding to high-resolution codebook.
• the compressed values, i.e., the output PMIs from the UE-part model, e.g., a set of PMI AI , may be generated and reported to the NW together with the PMIs from the high-resolution codebook, e.g., a set of PMI hrcb .
  • the inputs of the UE-part model may be CSI/EVs of “channel matrices” constructed based on the reported PMI hrcb and the configured high-resolution codebook.
• PMI AI and PMI hrcb are paired and reported together, e.g., in the same UCI report, via L1 (UCI) , L2 (MAC CE) or L3 (RRC) .
  • the CSI/EVs recovered from the NW-part model and high-resolution codebook may be compared.
  • the inputs of the NW-part model are the received compressed values, i.e., PMI AI from UE-part model, and the reference CSI/EVs are obtained by “channel matrices” constructed based on the received PMI hrcb and the configured high-resolution codebook.
• the comparison between the two sets of CSI/EVs may be derived by calculating the MSE between the outputs of the NW-part model and the reference CSI/EVs obtained according to the received set of PMI hrcb in a given or predetermined period. For example, if the calculated MSE is larger than a threshold, the model pairing is determined to fail, followed by an indication indicating such failure to the UE. Otherwise, a successful pairing may be indicated together with a functionality ID or a model ID of the NW-part model in an indication to the UE.
  • the NW may select a model with minimum MSE as a paired NW-part model for the requesting UE-part model, if the MSE is not larger than the threshold. The selecting result together with a functionality ID or a model ID of the selected model may be indicated to the UE.
• FIG. 9 illustrates an example procedure 900 for model pairing between a base station 102 (which may also be referred to as NW 102 or gNB 102 below) and a UE 104 with aligned datasets over the air interface in accordance with aspects of the present disclosure.
  • the procedure 900 may start after a functionality/model identification between the base station 102 and the UE 104, which indicates related features to support AI/ML-based CSI compression with two-sided model.
• the functionality/model identification related information on the AI/ML functionality/model is shared and aligned between the base station side and the UE side, and includes at least the following information: (1) features or feature groups, indicated by the UE 104, to support AI/ML-based CSI compression, e.g., the supported number of ports, number of subbands, ranks and quantization level of the output of the UE-part model in UE capability reporting; and (2) application conditions of the AI/ML functionality/models, e.g., applicable signal-to-noise ratio (SNR) , UE speed, and the like.
• The above information may be included in UE capability reporting, UE assistance information requesting and/or other new dedicated RRC signaling.
• the base station 102 may indicate a high-resolution codebook configuration (which may also be referred to as the configuration on the high-resolution codebook) for the UE 104 to re-construct the dataset.
• the configuration on the high-resolution codebook to re-construct the dataset may be indicated by the base station 102 to the UE 104 based on the identified functionality/model at the UE side, and it may be based on the current codebook used by the UE 104 and the base station 102 and on the PMI value calculation assumptions.
• the configuration on the high-resolution codebook may indicate at least one of the following: (1) scaling the number of beams, e.g., two or three times the current configuration on numberOfBeams; (2) scaling the phase alphabet size, e.g., two or four times the current configuration on phaseAlphabetSize; (3) enabling subband amplitude quantization; and (4) introducing additional oversampling factors (O 1 , O 2 ) and the corresponding beam group offset configuration.
  • the configuration on the high-resolution codebook may include more or other different content used for extending the codebook.
• the amplitude values of wideband and subband may also be scaled by two via one bit added for each indication, which may also be indicated via RRC signaling to enable one more bit of quantization.
• the scaling factors above may be indicated via RRC signaling, and may be included in the current CodebookConfig information element or in a new RRC IE, e.g., HRSCodebookConfigForModelPairing.
• for model pairing, there are at least two cases: (1) checking whether a model on one side can be paired with a model on the other side, and (2) selecting a model from a set of models on one side to pair with a model on the other side.
  • FIG. 10 illustrates a schematic diagram for checking whether a model on one side can be paired with a model on the other side in accordance with aspects of the present disclosure.
  • FIG. 11 illustrates a schematic diagram for selecting a model in a set of models on one side to pair with a target model on the other side in accordance with aspects of the present disclosure.
• the model pairing procedure may be initiated by different sides. For example, if a model in the UE 104 is newly deployed, updated or needs to be monitored, the UE 104 may initiate the model pairing procedure with a request to assess this model (step 902b) , which may be regarded as case (1a) . Furthermore, if a model in the NW 102 is newly deployed, updated or needs to be monitored, the NW 102 may initiate the model pairing procedure with a request to assess this model (step 902a) , which may be regarded as case (1b) , and in this case, a model ID or a functionality ID may be indicated, e.g. via the request, to the UE 104 such that the UE 104 may use an appropriate model for CSI generation.
  • the NW 102 may send configured CSI-RS for CSI measurement for the model pairing (step 903) , and the UE 104 may estimate CSI based on the configured CSI-RS (step 903) .
  • the PMI hrcb may be derived together with re-generated (i.e., quantized) CSI (step 904) .
• the former one may be reported to the NW 102, and the latter one may be fed into the UE-part model as input to generate PMI AI .
• the CSI including {PMI hrcb , PMI AI } may be reported to the NW 102 (step 905) , as illustrated in FIG. 10.
• once receiving {PMI hrcb , PMI AI } , the CSI from the high-resolution codebook and the recovered CSI from the NW-part AI model are derived for comparison (step 906) .
• an MSE as below may be used as an example for the comparison.
• in the case that the calculated MSE is not larger than the threshold, an indication may be sent to the UE 104 to indicate the successful pairing of the requesting model of the UE 104 (step 907) . Otherwise, an indication is sent to the UE 104 to indicate pairing failure (step 907) . Moreover, a model ID may be assigned for further usage of this requesting model.
  • the model pairing procedure may be initiated by different sides.
  • the UE 104 may want to select a model from a set of models to pair with a specific model or functionality on the NW side (which may be regarded as case (2a) ) , and thus the model pairing procedure for this case may be initiated by UE 104.
  • the NW 102 may want to select a model from a set of models to pair with a specific model on the UE side or a specific identified functionality (which may be regarded as case (2b) ) , and thus the model pairing procedure for this case may be initiated by the NW 102.
  • Case (2a) may introduce new signaling design as illustrated in FIG. 11.
  • the UE 104 initiates the model pairing procedure for model selection, which may include the size of a set of models, i.e., number of candidate models on the UE side, to align the UCI report format/size (step 902b) .
• the size of the set of models may be included in the request for triggering the model pairing procedure sent at step 902b; alternatively, at step 902b, an indication to indicate the size of the set of models may be sent separately from the request for triggering the model pairing procedure.
  • the base station 102 may configure the CSI-RS and CSI reports to indicate the number of PMIs from candidate models (corresponding to the size of the set of models) (step 903) . Additionally, different CSI-RS resources may be configured to be associated with different models at UE side.
• the UE 104 may estimate CSI based on the configured CSI-RS. Based on the high-resolution codebook, the PMI hrcb may be derived together with the re-generated (i.e., quantized) CSI (step 904) . The former one may be reported to the NW 102, and the latter one may be fed into the models on the UE 104 as input to generate the output of the corresponding model, PMI AI, k . Then, CSI including {PMI hrcb , PMI AI, 1 , PMI AI, 2 , ..., PMI AI, k } may be reported to the NW 102 (step 905) , as illustrated in FIG. 11.
• a CSI from the high-resolution codebook and recovered CSIs from the NW-part AI model are derived (step 906) to further calculate the biases as below.
• a model with the smallest bias, which is also smaller than a predetermined or preconfigured threshold, among {bias 1 , bias 2 , ..., bias k } may be selected as the pairing model at the UE side.
  • FIG. 12 illustrates an example of a device 1200 that supports model pairing for AI/ML-based CSI compression in accordance with aspects of the present disclosure.
  • the device 1200 may be an example of a UE 104 as described herein.
  • the device 1200 may support wireless communication with one or more network entities 102, UEs 104, or any combination thereof.
  • the device 1200 may include components for bi-directional communications including components for transmitting and receiving communications, such as a processor 1202, a memory 1204, a transceiver 1206, and, optionally, an I/O controller 1208. These components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more interfaces (e.g., buses) .
  • the processor 1202, the memory 1204, the transceiver 1206, or various combinations thereof or various components thereof may be examples of means for performing various aspects of the present disclosure as described herein.
  • the processor 1202, the memory 1204, the transceiver 1206, or various combinations or components thereof may support a method for performing one or more of the operations described herein.
  • the processor 1202, the memory 1204, the transceiver 1206, or various combinations or components thereof may be implemented in hardware (e.g., in communications management circuitry) .
  • the hardware may include a processor, a digital signal processor (DSP) , an application-specific integrated circuit (ASIC) , a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure.
  • the processor 1202 and the memory 1204 coupled with the processor 1202 may be configured to perform one or more of the functions described herein (e.g., executing, by the processor 1202, instructions stored in the memory 1204) .
  • the processor 1202 may support wireless communication at the device 1200 in accordance with examples as disclosed herein.
• the processor 1202 may be configured to support a means for receiving, via the transceiver 1206 and from a base station 102, a configuration on a first codebook; and a means for transmitting, via the transceiver 1206 and to the base station 102, a first precoding matrix indicator (PMI) which is obtained based on the first codebook and a first channel state information (CSI) estimated by the UE, and at least one second PMI which is obtained by at least one first artificial intelligence or machine learning (AI/ML) model.
  • the processor 1202 may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof) .
  • the processor 1202 may be configured to operate a memory array using a memory controller.
  • a memory controller may be integrated into the processor 1202.
  • the processor 1202 may be configured to execute computer-readable instructions stored in a memory (e.g., the memory 1204) to cause the device 1200 to perform various functions of the present disclosure.
  • the memory 1204 may include random access memory (RAM) and read-only memory (ROM) .
• the memory 1204 may store computer-readable, computer-executable code including instructions that, when executed by the processor 1202, cause the device 1200 to perform various functions described herein.
  • the code may be stored in a non-transitory computer-readable medium such as system memory or another type of memory.
  • the code may not be directly executable by the processor 1202 but may cause a computer (e.g., when compiled and executed) to perform functions described herein.
  • the memory 1204 may include, among other things, a basic I/O system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices.
  • the I/O controller 1208 may manage input and output signals for the device 1200.
• the I/O controller 1208 may also manage peripherals not integrated into the device 1200.
  • the I/O controller 1208 may represent a physical connection or port to an external peripheral.
  • the I/O controller 1208 may utilize an operating system such as or another known operating system.
• the I/O controller 1208 may be implemented as part of a processor, such as the processor 1202.
  • a user may interact with the device 1200 via the I/O controller 1208 or via hardware components controlled by the I/O controller 1208.
  • the device 1200 may include a single antenna 1210. However, in some other implementations, the device 1200 may have more than one antenna 1210 (i.e., multiple antennas) , including multiple antenna panels or antenna arrays, which may be capable of concurrently transmitting or receiving multiple wireless transmissions.
  • the transceiver 1206 may communicate bi-directionally, via the one or more antennas 1210, wired, or wireless links as described herein.
  • the transceiver 1206 may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver.
  • the transceiver 1206 may also include a modem to modulate the packets, to provide the modulated packets to one or more antennas 1210 for transmission, and to demodulate packets received from the one or more antennas 1210.
  • the transceiver 1206 may include one or more transmit chains, one or more receive chains, or a combination thereof.
  • a transmit chain may be configured to generate and transmit signals (e.g., control information, data, packets) .
  • the transmit chain may include at least one modulator for modulating data onto a carrier signal, preparing the signal for transmission over a wireless medium.
  • the at least one modulator may be configured to support one or more techniques such as amplitude modulation (AM) , frequency modulation (FM) , or digital modulation schemes like phase-shift keying (PSK) or quadrature amplitude modulation (QAM) .
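As a concrete illustration of such a digital modulation scheme, a Gray-mapped QPSK bit-to-symbol mapping might look like the following sketch; the function name and the unit-energy normalization are illustrative assumptions, and this generic example stands in for the PSK/QAM techniques mentioned above rather than the exact mapping used by the transceiver 1206.

```python
import math

def qpsk_modulate(bits):
    """Map pairs of bits to complex QPSK constellation symbols.

    Gray mapping: bit 0 -> +1 and bit 1 -> -1 on each of the I and Q
    axes, scaled so that every symbol has unit energy.
    """
    if len(bits) % 2:
        raise ValueError("QPSK consumes bits in pairs")
    scale = 1 / math.sqrt(2)  # normalize symbol energy to 1
    symbols = []
    for i in range(0, len(bits), 2):
        re = 1 - 2 * bits[i]      # first bit drives the I axis
        im = 1 - 2 * bits[i + 1]  # second bit drives the Q axis
        symbols.append(complex(re * scale, im * scale))
    return symbols
```

Each output symbol carries two bits, which is why a higher-order scheme such as QAM trades noise robustness for more bits per modulated sample.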
  • the transmit chain may also include at least one power amplifier configured to amplify the modulated signal to an appropriate power level suitable for transmission over the wireless medium.
  • the transmit chain may also include one or more antennas 1210 for transmitting the amplified signal into the air or wireless medium.
  • a receive chain may be configured to receive signals (e.g., control information, data, packets) over a wireless medium.
• the receive chain may include one or more antennas 1210 for receiving the signal over the air or wireless medium.
  • the receive chain may include at least one amplifier (e.g., a low-noise amplifier (LNA) ) configured to amplify the received signal.
• the receive chain may include at least one demodulator configured to demodulate the received signal and obtain the transmitted data by reversing the modulation technique applied during transmission of the signal.
• the receive chain may include at least one decoder for decoding and processing the demodulated signal to recover the transmitted data.
  • FIG. 13 illustrates an example of a device 1300 that supports model pairing for AI/ML-based CSI compression in accordance with aspects of the present disclosure.
• the device 1300 may be an example of a base station 102 as described herein; the base station 102 may include a plurality of TRPs.
  • the device 1300 may support wireless communication with one or more network entities 102, UEs 104, or any combination thereof.
  • the device 1300 may include components for bi-directional communications including components for transmitting and receiving communications, such as a processor 1302, a memory 1304, a transceiver 1306, and, optionally, an I/O controller 1308. These components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more interfaces (e.g., buses) .
  • the processor 1302, the memory 1304, the transceiver 1306, or various combinations thereof or various components thereof may be examples of means for performing various aspects of the present disclosure as described herein.
  • the processor 1302, the memory 1304, the transceiver 1306, or various combinations or components thereof may support a method for performing one or more of the operations described herein.
  • the processor 1302, the memory 1304, the transceiver 1306, or various combinations or components thereof may be implemented in hardware (e.g., in communications management circuitry) .
  • the hardware may include a processor, a digital signal processor (DSP) , an application-specific integrated circuit (ASIC) , a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure.
  • the processor 1302 and the memory 1304 coupled with the processor 1302 may be configured to perform one or more of the functions described herein (e.g., executing, by the processor 1302, instructions stored in the memory 1304) .
  • the processor 1302 may support wireless communication at the device 1300 in accordance with examples as disclosed herein.
• the processor 1302 may be configured to support a means for transmitting, via the transceiver 1306 and to a user equipment (UE) 104, a configuration on a first codebook; a means for receiving, via the transceiver 1306 and from the UE 104, a first precoding matrix indicator (PMI) which is obtained based on the first codebook and a first channel state information (CSI) estimated by the UE, and at least one second PMI which is obtained by at least one first artificial intelligence or machine learning (AI/ML) model; and a means for determining whether a second AI/ML model of the base station can be paired with the at least one first AI/ML model, based on a third CSI obtained based on the first codebook and the received first PMI and at least one fourth CSI obtained based on the second AI/ML model and the received at least one second PMI.
  • the processor 1302 may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof) .
  • the processor 1302 may be configured to operate a memory array using a memory controller.
  • a memory controller may be integrated into the processor 1302.
  • the processor 1302 may be configured to execute computer-readable instructions stored in a memory (e.g., the memory 1304) to cause the device 1300 to perform various functions of the present disclosure.
  • the memory 1304 may include random access memory (RAM) and read-only memory (ROM) .
• the memory 1304 may store computer-readable, computer-executable code including instructions that, when executed by the processor 1302, cause the device 1300 to perform various functions described herein.
  • the code may be stored in a non-transitory computer-readable medium such as system memory or another type of memory.
  • the code may not be directly executable by the processor 1302 but may cause a computer (e.g., when compiled and executed) to perform functions described herein.
  • the memory 1304 may include, among other things, a basic I/O system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices.
  • the I/O controller 1308 may manage input and output signals for the device 1300.
• the I/O controller 1308 may also manage peripherals not integrated into the device 1300.
  • the I/O controller 1308 may represent a physical connection or port to an external peripheral.
  • the I/O controller 1308 may utilize an operating system such as or another known operating system.
• the I/O controller 1308 may be implemented as part of a processor, such as the processor 1302.
  • a user may interact with the device 1300 via the I/O controller 1308 or via hardware components controlled by the I/O controller 1308.
  • the device 1300 may include a single antenna 1310. However, in some other implementations, the device 1300 may have more than one antenna 1310 (i.e., multiple antennas) , including multiple antenna panels or antenna arrays, which may be capable of concurrently transmitting or receiving multiple wireless transmissions.
• the transceiver 1306 may communicate bi-directionally, via the one or more antennas 1310, wired, or wireless links as described herein.
  • the transceiver 1306 may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver.
  • the transceiver 1306 may also include a modem to modulate the packets, to provide the modulated packets to one or more antennas 1310 for transmission, and to demodulate packets received from the one or more antennas 1310.
  • the transceiver 1306 may include one or more transmit chains, one or more receive chains, or a combination thereof.
  • a transmit chain may be configured to generate and transmit signals (e.g., control information, data, packets) .
  • the transmit chain may include at least one modulator for modulating data onto a carrier signal, preparing the signal for transmission over a wireless medium.
  • the at least one modulator may be configured to support one or more techniques such as amplitude modulation (AM) , frequency modulation (FM) , or digital modulation schemes like phase-shift keying (PSK) or quadrature amplitude modulation (QAM) .
  • the transmit chain may also include at least one power amplifier configured to amplify the modulated signal to an appropriate power level suitable for transmission over the wireless medium.
  • the transmit chain may also include one or more antennas 1310 for transmitting the amplified signal into the air or wireless medium.
  • a receive chain may be configured to receive signals (e.g., control information, data, packets) over a wireless medium.
• the receive chain may include one or more antennas 1310 for receiving the signal over the air or wireless medium.
  • the receive chain may include at least one amplifier (e.g., a low-noise amplifier (LNA) ) configured to amplify the received signal.
  • the receive chain may include at least one demodulator configured to demodulate the received signal and obtain the transmitted data by reversing the modulation technique applied during transmission of the signal.
  • the receive chain may include at least one decoder for decoding the demodulated signal to recover the transmitted data.
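As a concrete illustration of the modulate/demodulate round trip performed by the transmit and receive chains above, the following sketch maps bit pairs onto a QPSK constellation (one of the PSK schemes mentioned) and recovers them with a nearest-neighbour decision. The constellation mapping and function names are illustrative assumptions, not part of the disclosure.

```python
# Gray-coded QPSK constellation: bit pair -> complex symbol (illustrative).
QPSK_MAP = {
    (0, 0): complex(1, 1),
    (0, 1): complex(-1, 1),
    (1, 1): complex(-1, -1),
    (1, 0): complex(1, -1),
}

def modulate(bits):
    """Map a bit sequence (even length) onto QPSK symbols, as a modulator would."""
    return [QPSK_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def demodulate(symbols):
    """Recover bits by deciding each received symbol against the constellation."""
    inverse = {sym: pair for pair, sym in QPSK_MAP.items()}
    bits = []
    for s in symbols:
        # Nearest-neighbour decision; with no channel noise this is an exact match.
        nearest = min(QPSK_MAP.values(), key=lambda c: abs(c - s))
        bits.extend(inverse[nearest])
    return bits

tx_bits = [0, 1, 1, 0, 1, 1]
assert demodulate(modulate(tx_bits)) == tx_bits  # demodulation reverses modulation
```

A real transmit chain would additionally amplify the modulated waveform before it reaches the antennas, and a real receive chain would apply an LNA and decoding; those analog stages are omitted here.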
  • FIG. 14 illustrates an example of a processor 1400 that supports model pairing for AI/ML-based CSI compression in accordance with aspects of the present disclosure.
  • the processor 1400 may be an example of a processor configured to perform various operations in accordance with examples as described herein.
  • the processor 1400 may include a controller 1402 configured to perform various operations in accordance with examples as described herein.
  • the processor 1400 may optionally include at least one memory 1404, such as L1/L2/L3 cache. Additionally, or alternatively, the processor 1400 may optionally include one or more arithmetic-logic units (ALUs) 1400.
  • the processor 1400 may be a processor chipset and include a protocol stack (e.g., a software stack) executed by the processor chipset to perform various operations (e.g., receiving, obtaining, retrieving, transmitting, outputting, forwarding, storing, determining, identifying, accessing, writing, reading) in accordance with examples as described herein.
  • the processor chipset may include one or more cores and one or more caches (e.g., memory local to or included in the processor chipset (e.g., the processor 1400)) or other memory (e.g., random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), static RAM (SRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), and others).
  • the controller 1402 may be configured to manage and coordinate various operations (e.g., signaling, receiving, obtaining, retrieving, transmitting, outputting, forwarding, storing, determining, identifying, accessing, writing, reading) of the processor 1400 to cause the processor 1400 to support various operations of a base station in accordance with examples as described herein.
  • the controller 1402 may operate as a control unit of the processor 1400, generating control signals that manage the operation of various components of the processor 1400. These control signals may include signals for enabling or disabling functional units, selecting data paths, initiating memory access, and coordinating the timing of operations.
  • the controller 1402 may be configured to fetch (e.g., obtain, retrieve, receive) instructions from the memory 1404 and determine subsequent instruction (s) to be executed to cause the processor 1400 to support various operations in accordance with examples as described herein.
  • the controller 1402 may be configured to track memory addresses of instructions associated with the memory 1404.
  • the controller 1402 may be configured to decode instructions to determine the operation to be performed and the operands involved.
  • the controller 1402 may be configured to interpret the instruction and determine control signals to be output to other components of the processor 1400 to cause the processor 1400 to support various operations in accordance with examples as described herein.
  • the controller 1402 may be configured to manage flow of data within the processor 1400.
  • the controller 1402 may be configured to control transfer of data between registers, arithmetic logic units (ALUs) , and other functional units of the processor 1400.
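The controller behaviour described above, fetching an instruction from memory, decoding the operation and operands, and routing data between registers and functional units, can be sketched as a minimal fetch-decode-execute loop. The instruction format, opcode names, and register names below are illustrative assumptions, not the disclosed design.

```python
def run(program, registers):
    """Toy controller loop: fetch, decode, and execute a list of instructions."""
    pc = 0  # program counter: tracks the memory address of the next instruction
    while pc < len(program):
        op, dst, a, b = program[pc]  # fetch the instruction and decode its fields
        if op == "ADD":
            registers[dst] = registers[a] + registers[b]
        elif op == "SUB":
            registers[dst] = registers[a] - registers[b]
        elif op == "MOV":
            registers[dst] = registers[a]
        else:
            raise ValueError(f"unknown opcode {op}")
        pc += 1  # controller determines the subsequent instruction to execute
    return registers

regs = run(
    [("ADD", "r2", "r0", "r1"), ("SUB", "r3", "r2", "r0")],
    {"r0": 3, "r1": 4, "r2": 0, "r3": 0},
)
assert regs["r2"] == 7 and regs["r3"] == 4
```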
  • the memory 1404 may include one or more caches (e.g., memory local to or included in the processor 1400) or other memory, such as RAM, ROM, DRAM, SDRAM, SRAM, MRAM, flash memory, etc. In some implementations, the memory 1404 may reside within or on a processor chipset (e.g., local to the processor 1400). In some other implementations, the memory 1404 may reside external to the processor chipset (e.g., remote to the processor 1400).
  • the memory 1404 may store computer-readable, computer-executable code including instructions that, when executed by the processor 1400, cause the processor 1400 to perform various functions described herein.
  • the code may be stored in a non-transitory computer-readable medium such as system memory or another type of memory.
  • the controller 1402 and/or the processor 1400 may be configured to execute computer-readable instructions stored in the memory 1404 to cause the processor 1400 to perform various functions.
  • the processor 1400 and/or the controller 1402 may be coupled with or to the memory 1404, and the processor 1400, the controller 1402, and the memory 1404 may be configured to perform various functions described herein.
  • the processor 1400 may include multiple processors and the memory 1404 may include multiple memories. One or more of the multiple processors may be coupled with one or more of the multiple memories, which may, individually or collectively, be configured to perform various functions herein.
  • the one or more ALUs 1400 may be configured to support various operations in accordance with examples as described herein.
  • the one or more ALUs 1400 may reside within or on a processor chipset (e.g., the processor 1400) .
  • the one or more ALUs 1400 may reside external to the processor chipset (e.g., the processor 1400) .
  • One or more ALUs 1400 may perform one or more computations such as addition, subtraction, multiplication, and division on data.
  • one or more ALUs 1400 may receive input operands and an operation code, which determines an operation to be executed.
  • One or more ALUs 1400 may be configured with a variety of logical and arithmetic circuits, including adders, subtractors, shifters, and logic gates, to process and manipulate the data according to the operation. Additionally, or alternatively, the one or more ALUs 1400 may support logical operations such as AND, OR, exclusive-OR (XOR), not-OR (NOR), and not-AND (NAND), enabling the one or more ALUs 1400 to handle conditional operations, comparisons, and bitwise operations.
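The ALU description above, input operands plus an operation code selecting an arithmetic or logical circuit, can be sketched as a simple opcode dispatch. The opcode names and the 8-bit word width are illustrative assumptions.

```python
def alu(opcode, a, b):
    """Toy ALU: the operation code selects which circuit processes the operands."""
    ops = {
        "ADD":  lambda x, y: x + y,
        "SUB":  lambda x, y: x - y,
        "AND":  lambda x, y: x & y,
        "OR":   lambda x, y: x | y,
        "XOR":  lambda x, y: x ^ y,
        "NAND": lambda x, y: ~(x & y) & 0xFF,   # 8-bit result assumed
        "SHL":  lambda x, y: (x << y) & 0xFF,   # shifter circuit
    }
    return ops[opcode](a, b)

assert alu("ADD", 5, 3) == 8
assert alu("XOR", 0b1100, 0b1010) == 0b0110    # bitwise operation
assert alu("NAND", 0xFF, 0xFF) == 0x00
```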
  • the processor 1400 may support wireless communication in accordance with examples as disclosed herein.
  • the processor 1400 may be configured to or operable to support a means for receiving, via the transceiver and from a base station, a configuration on a first codebook; and means for transmitting, via the transceiver and to the base station, a first precoding matrix indicator (PMI) which is obtained based on the first codebook and a first channel state information (CSI) estimated by the UE, and at least one second PMI which is obtained by at least one first artificial intelligence or machine learning (AI/ML) model.
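The UE-side reporting that the processor 1400 supports can be sketched as follows: the first PMI is selected from the configured codebook against the estimated CSI, while the second PMI is produced by an AI/ML encoder compressing the same CSI. The codebook construction, the stand-in encoder (a real one would be a trained neural network), and all function names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 8-entry codebook of unit-norm precoding vectors (4 antenna ports).
codebook = [rng.standard_normal(4) + 1j * rng.standard_normal(4) for _ in range(8)]
codebook = [v / np.linalg.norm(v) for v in codebook]

def select_first_pmi(csi, codebook):
    """Pick the codebook index whose precoder best matches the estimated CSI."""
    scores = [abs(np.vdot(v, csi)) for v in codebook]
    return int(np.argmax(scores))

def aiml_encoder(csi, n_bits=4):
    """Stand-in for the first AI/ML model: compress CSI into an n-bit second PMI.
    Here we merely quantize the phase of the dominant element for illustration."""
    phase = np.angle(csi[np.argmax(np.abs(csi))])
    return int((phase + np.pi) / (2 * np.pi) * (2 ** n_bits)) % (2 ** n_bits)

csi = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # CSI estimated by the UE
first_pmi = select_first_pmi(csi, codebook)   # codebook-based report
second_pmi = aiml_encoder(csi)                # AI/ML-based report
assert 0 <= first_pmi < len(codebook) and 0 <= second_pmi < 16
```

Both indices would then be transmitted to the base station via the transceiver, as recited above.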
  • FIG. 15 illustrates an example of a processor 1500 that supports model pairing for AI/ML-based CSI compression in accordance with aspects of the present disclosure.
  • the processor 1500 may be an example of a processor configured to perform various operations in accordance with examples as described herein.
  • the processor 1500 may include a controller 1502 configured to perform various operations in accordance with examples as described herein.
  • the processor 1500 may optionally include at least one memory 1504, such as L1/L2/L3 cache. Additionally, or alternatively, the processor 1500 may optionally include one or more arithmetic-logic units (ALUs) 1500.
  • the processor 1500 may be a processor chipset and include a protocol stack (e.g., a software stack) executed by the processor chipset to perform various operations (e.g., receiving, obtaining, retrieving, transmitting, outputting, forwarding, storing, determining, identifying, accessing, writing, reading) in accordance with examples as described herein.
  • the processor chipset may include one or more cores and one or more caches (e.g., memory local to or included in the processor chipset (e.g., the processor 1500)) or other memory (e.g., random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), static RAM (SRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), and others).
  • the controller 1502 may be configured to manage and coordinate various operations (e.g., signaling, receiving, obtaining, retrieving, transmitting, outputting, forwarding, storing, determining, identifying, accessing, writing, reading) of the processor 1500 to cause the processor 1500 to support various operations of a UE in accordance with examples as described herein.
  • the controller 1502 may operate as a control unit of the processor 1500, generating control signals that manage the operation of various components of the processor 1500. These control signals may include signals for enabling or disabling functional units, selecting data paths, initiating memory access, and coordinating the timing of operations.
  • the controller 1502 may be configured to fetch (e.g., obtain, retrieve, receive) instructions from the memory 1504 and determine subsequent instruction (s) to be executed to cause the processor 1500 to support various operations in accordance with examples as described herein.
  • the controller 1502 may be configured to track memory addresses of instructions associated with the memory 1504.
  • the controller 1502 may be configured to decode instructions to determine the operation to be performed and the operands involved.
  • the controller 1502 may be configured to interpret the instruction and determine control signals to be output to other components of the processor 1500 to cause the processor 1500 to support various operations in accordance with examples as described herein.
  • the controller 1502 may be configured to manage flow of data within the processor 1500.
  • the controller 1502 may be configured to control transfer of data between registers, arithmetic logic units (ALUs) , and other functional units of the processor 1500.
  • the memory 1504 may include one or more caches (e.g., memory local to or included in the processor 1500) or other memory, such as RAM, ROM, DRAM, SDRAM, SRAM, MRAM, flash memory, etc. In some implementations, the memory 1504 may reside within or on a processor chipset (e.g., local to the processor 1500). In some other implementations, the memory 1504 may reside external to the processor chipset (e.g., remote to the processor 1500).
  • the memory 1504 may store computer-readable, computer-executable code including instructions that, when executed by the processor 1500, cause the processor 1500 to perform various functions described herein.
  • the code may be stored in a non-transitory computer-readable medium such as system memory or another type of memory.
  • the controller 1502 and/or the processor 1500 may be configured to execute computer-readable instructions stored in the memory 1504 to cause the processor 1500 to perform various functions.
  • the processor 1500 and/or the controller 1502 may be coupled with or to the memory 1504, and the processor 1500, the controller 1502, and the memory 1504 may be configured to perform various functions described herein.
  • the processor 1500 may include multiple processors and the memory 1504 may include multiple memories. One or more of the multiple processors may be coupled with one or more of the multiple memories, which may, individually or collectively, be configured to perform various functions herein.
  • the one or more ALUs 1500 may be configured to support various operations in accordance with examples as described herein.
  • the one or more ALUs 1500 may reside within or on a processor chipset (e.g., the processor 1500) .
  • the one or more ALUs 1500 may reside external to the processor chipset (e.g., the processor 1500) .
  • One or more ALUs 1500 may perform one or more computations such as addition, subtraction, multiplication, and division on data.
  • one or more ALUs 1500 may receive input operands and an operation code, which determines an operation to be executed.
  • One or more ALUs 1500 may be configured with a variety of logical and arithmetic circuits, including adders, subtractors, shifters, and logic gates, to process and manipulate the data according to the operation. Additionally, or alternatively, the one or more ALUs 1500 may support logical operations such as AND, OR, exclusive-OR (XOR), not-OR (NOR), and not-AND (NAND), enabling the one or more ALUs 1500 to handle conditional operations, comparisons, and bitwise operations.
  • the processor 1500 may support wireless communication in accordance with examples as disclosed herein.
  • the processor 1500 may be configured to or operable to support means for transmitting, via the transceiver and to a user equipment (UE) , a configuration on a first codebook; means for receiving, via the transceiver and from the UE, a first precoding matrix indicator (PMI) which is obtained based on the first codebook and a first channel state information (CSI) estimated by the UE, and at least one second PMI which is obtained by at least one first artificial intelligence or machine learning (AI/ML) model; and means for determining whether a second AI/ML model of the base station can be paired with the at least one first AI/ML model, based on a third CSI obtained based on the first codebook and the received first PMI and at least one fourth CSI obtained based on the second AI/ML model and the received at least one second PMI.
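The base-station-side pairing determination recited above can be sketched as follows: the base station reconstructs a third CSI from the codebook-based first PMI and a fourth CSI from its own (second) AI/ML model applied to the second PMI, then declares the models paired when the two reconstructions agree closely. The squared generalized cosine similarity (SGCS) metric and the 0.9 threshold are common choices in AI/ML CSI compression studies, assumed here for illustration; the disclosure does not mandate them.

```python
import numpy as np

def sgcs(u, v):
    """Squared generalized cosine similarity between two precoding vectors."""
    num = abs(np.vdot(u, v)) ** 2
    den = (np.linalg.norm(u) ** 2) * (np.linalg.norm(v) ** 2)
    return num / den

def can_pair(third_csi, fourth_csi, threshold=0.9):
    """Pairing decision: the base station's second AI/ML model is considered
    pairable with the UE's first AI/ML model if the codebook-based and
    AI/ML-based CSI reconstructions are sufficiently similar."""
    return sgcs(third_csi, fourth_csi) >= threshold

# Identical reconstructions pair; a clearly mismatched reconstruction does not.
v = np.array([1 + 1j, 0.5 - 0.2j, -0.3 + 0.8j, 0.1j])
assert can_pair(v, v)
assert not can_pair(v, np.array([1, -1, 1, -1], dtype=complex))
```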
  • FIG. 16 illustrates a flowchart of a method 1600 that supports model pairing for AI/ML-based CSI compression in accordance with aspects of the present disclosure.
  • the operations of the method 1600 may be implemented by a device or its components as described herein.
  • the operations of the method 1600 may be performed by the UE 104 as described herein.
  • the device may execute a set of instructions to control the function elements of the device to perform the described functions. Additionally, or alternatively, the device may perform aspects of the described functions using special-purpose hardware.
  • the method may include receiving, via the transceiver and from a base station, a configuration on a first codebook.
  • the operations of 1605 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 1605 may be performed by a device as described with reference to FIG. 1.
  • the method may include transmitting, via the transceiver and to the base station, a first precoding matrix indicator (PMI) which is obtained based on the first codebook and a first channel state information (CSI) estimated by the UE, and at least one second PMI which is obtained by at least one first artificial intelligence or machine learning (AI/ML) model.
  • FIG. 17 illustrates a flowchart of a method 1700 that supports model pairing for AI/ML-based CSI compression in accordance with aspects of the present disclosure.
  • the operations of the method 1700 may be implemented by a device or its components as described herein.
  • the operations of the method 1700 may be performed by a base station as described herein.
  • the device may execute a set of instructions to control the function elements of the device to perform the described functions. Additionally, or alternatively, the device may perform aspects of the described functions using special-purpose hardware.
  • the method may include transmitting, via the transceiver and to a user equipment (UE) , a configuration on a first codebook.
  • the operations of 1705 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 1705 may be performed by a device as described with reference to FIG. 1.
  • the method may include receiving, via the transceiver and from the UE, a first precoding matrix indicator (PMI) which is obtained based on the first codebook and a first channel state information (CSI) estimated by the UE, and at least one second PMI which is obtained by at least one first artificial intelligence or machine learning (AI/ML) model.
  • the method may include determining whether a second AI/ML model of the base station can be paired with the at least one first AI/ML model, based on a third CSI obtained based on the first codebook and the received first PMI and at least one fourth CSI obtained based on the second AI/ML model and the received at least one second PMI.
  • the operations of 1715 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 1715 may be performed by a device as described with reference to FIG. 1.
  • a general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine.
  • a processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).
  • the functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described herein may be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.
  • Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • a non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special-purpose computer.
  • non-transitory computer-readable media may include RAM, ROM, electrically erasable programmable ROM (EEPROM) , flash memory, compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that may be used to carry or store desired program code means in the form of instructions or data structures and that may be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor.
  • an article “a” before an element is unrestricted and understood to refer to “at least one” of those elements or “one or more” of those elements.
  • the terms “a, ” “at least one, ” “one or more, ” and “at least one of one or more” may be interchangeable.
  • a list of items indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C) .
  • the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an example step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure.
  • the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.”
  • a “set” may include one or more elements.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Various aspects of the present disclosure relate to user equipments, base stations, processors, and methods for model pairing for AI/ML-based CSI compression. According to one aspect, a user equipment (UE) receives a configuration on a first codebook. The UE transmits a first precoding matrix indicator (PMI) which is obtained based on the first codebook and a first channel state information (CSI) estimated by the UE, and at least one second PMI which is obtained by at least one first artificial intelligence or machine learning (AI/ML) model.
PCT/CN2023/134153 2023-11-24 2023-11-24 Model pairing for AI/ML-based CSI compression Pending WO2024179075A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2023/134153 WO2024179075A1 (fr) Model pairing for AI/ML-based CSI compression


Publications (1)

Publication Number Publication Date
WO2024179075A1 true WO2024179075A1 (fr) 2024-09-06

Family

ID=92589558

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/134153 Pending WO2024179075A1 (fr) 2023-11-24 2023-11-24 Appariement de modèles pour compression de csi basée sur ia/ml

Country Status (1)

Country Link
WO (1) WO2024179075A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023081187A1 (fr) * 2021-11-03 2023-05-11 Interdigital Patent Holdings, Inc. Procédés et appareils de rétroaction de csi multi-résolution pour systèmes sans fil
CN116830541A (zh) * 2023-04-07 2023-09-29 北京小米移动软件有限公司 生命周期管理方法、装置和存储介质


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TSUYOSHI SHIMOMURA, FUJITSU: "Views on specification impact for CSI feedback enhancement", 3GPP DRAFT; R1-2311048; TYPE DISCUSSION; FS_NR_AIML_AIR, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), 3 November 2023 (2023-11-03), XP052544707 *
YUSHU ZHANG, GOOGLE: "On Enhancement of AI/ML based CSI", 3GPP DRAFT; R1-2311573; TYPE DISCUSSION; FS_NR_AIML_AIR, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), 3 November 2023 (2023-11-03), XP052545223 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23925025

Country of ref document: EP

Kind code of ref document: A1