WO2025055364A1 - AI/ML-based CSI compression - Google Patents
AI/ML-based CSI compression
- Publication number
- WO2025055364A1 (PCT/CN2024/092949)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- csi
- base station
- model structure
- processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L25/00—Baseband systems
- H04L25/02—Details ; arrangements for supplying electrical power along data transmission lines
- H04L25/0202—Channel estimation
- H04L25/024—Channel estimation channel estimation algorithms
- H04L25/0254—Channel estimation channel estimation algorithms using neural network algorithms
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/04—Protocols for data compression, e.g. ROHC
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B7/00—Radio transmission systems, i.e. using radiation field
- H04B7/02—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
- H04B7/04—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
- H04B7/06—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
- H04B7/0613—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission
- H04B7/0615—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal
- H04B7/0619—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal using feedback from receiving side
- H04B7/0621—Feedback content
- H04B7/0626—Channel coefficients, e.g. channel state information [CSI]
Definitions
- the present disclosure relates to wireless communications, and more specifically to user equipments (UEs) , base stations, processors, and methods for artificial intelligence or machine learning (AI/ML) -based channel state information (CSI) compression.
- a wireless communications system may include one or multiple network communication devices, such as base stations, which may be otherwise known as an eNodeB (eNB) , a next-generation NodeB (gNB) , or other suitable terminology.
- each network communication device, such as a base station, may support wireless communications for one or multiple user communication devices, which may otherwise be known as user equipment (UE) , or other suitable terminology.
- the wireless communications system may support wireless communications with one or multiple user communication devices by utilizing resources of the wireless communication system, e.g., time resources (e.g., symbols, slots, subframes, frames, or the like) or frequency resources (e.g., subcarriers, carriers) .
- the wireless communications system may support wireless communications across various radio access technologies including third generation (3G) radio access technology, fourth generation (4G) radio access technology, fifth generation (5G) radio access technology, among other suitable radio access technologies beyond 5G (e.g., sixth generation (6G) ) .
- CSI compression using two-sided models (such as artificial intelligence (AI) /machine learning (ML) models) has been introduced.
- CSI compression using a two-sided model may provide performance gains in most use cases. Further studies on CSI compression are still needed.
- the present disclosure relates to methods, apparatuses, and systems that support AI/ML-based CSI compression.
- the UE-side part and the network (NW) -side part for AI/ML-based CSI compression may be better paired over the air interface, while the performance of the AI/ML-based CSI compression is also guaranteed.
- a UE transmits an indication of at least one model structure for at least one part of model to a base station.
- the at least one part of model is associated with a functionality on an AI/ML-based CSI compression.
- the UE then receives, from the base station, information of the at least one part of model trained using the at least one indicated model structure.
- the part of model at the network (NW) side and the part of model at the UE side may support each other.
- the at least one part of model may include at least one of a CSI generation part or a CSI reconstruction part.
- the at least one model structure may include one of the following: a first model structure associated with the CSI generation part; a second model structure associated with the CSI reconstruction part; a first model structure associated with the CSI generation part and a second model structure associated with the CSI reconstruction part; or one model structure associated with the CSI generation part and the CSI reconstruction part.
- Some implementations of the method and apparatuses described herein may further include: transmitting, to the base station, an indication of the at least one part of model.
- Some implementations of the method and apparatuses described herein may further include: receiving, from the base station, an inquiry for the at least one part of model required by the UE.
- the indication of the at least one model structure may include a description on the at least one model structure.
- the description on the at least one model structure may include at least one of the following: a model type of the at least one model structure; layers of the at least one model structure; a connection between layers of the at least one model structure; a filter size of the at least one model structure; or activation functions of the at least one model structure.
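As an illustration only, the structure-description fields listed above could be grouped into a single container; the class and field names below are assumptions for this sketch, not signalling defined by the present disclosure:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelStructureDescription:
    """Hypothetical container for the fields listed above."""
    model_type: str                                    # e.g. "CNN"
    layers: list = field(default_factory=list)         # per-layer names
    connections: list = field(default_factory=list)    # (from_layer, to_layer)
    filter_size: tuple = (3, 3)                        # kernel size for conv layers
    activations: list = field(default_factory=list)    # activation per layer

    def to_json(self) -> str:
        # a compact serialization that could be carried in a container message
        return json.dumps(asdict(self))

desc = ModelStructureDescription(
    model_type="CNN",
    layers=["conv1", "conv2", "dense"],
    connections=[("conv1", "conv2"), ("conv2", "dense")],
    filter_size=(3, 3),
    activations=["relu", "relu", "linear"],
)
encoded = desc.to_json()
```

The receiving side could then reconstruct the same structure from `encoded` before training.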
- Some implementations of the method and apparatuses described herein may further include: applying the parameters on a CSI generation part at the UE; and transmitting, to the base station, an indication of confirmation on applying the parameters on the CSI generation part at the UE.
- Some implementations of the method and apparatuses described herein may further include: updating a CSI reconstruction part at the UE based on the parameters; concatenating a CSI generation part at the UE and the updated CSI reconstruction part at the UE; training the CSI generation part at the UE; and transmitting, to the base station, an indication of confirmation on completing training of the CSI generation part at the UE.
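A minimal sketch of this procedure, using plain linear maps as stand-ins for the CSI generation and reconstruction parts (all dimensions, weights, and the toy dataset are invented for the sketch): the reconstruction part received from the base station stays frozen while the reconstruction error is backpropagated through it to train the generation part at the UE.

```python
import numpy as np

rng = np.random.default_rng(0)
n_csi, n_latent, n_samples = 16, 4, 256

# Toy low-rank CSI dataset so a linear model can compress it well.
X = rng.standard_normal((n_samples, n_latent)) @ rng.standard_normal((n_latent, n_csi))

# Frozen CSI reconstruction part (decoder) received from the base station;
# its weights are random here purely for illustration.
Wd = rng.standard_normal((n_csi, n_latent)) * 0.1

# CSI generation part (encoder) trained at the UE.
We = rng.standard_normal((n_latent, n_csi)) * 0.1

def mse(We):
    Xhat = (X @ We.T) @ Wd.T
    return float(np.mean((Xhat - X) ** 2))

loss_before = mse(We)
lr = 0.01
for _ in range(500):
    E = (X @ We.T) @ Wd.T - X              # reconstruction error
    grad = 2.0 / X.size * (E @ Wd).T @ X   # gradient w.r.t. the encoder only
    We -= lr * grad                        # decoder Wd stays frozen
loss_after = mse(We)
```

Once training converges, the UE would report the confirmation indication described above.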
- the information of the at least one part of model may further include an assigned identity (ID) .
- the CSI generation part at the UE may be associated with a CSI reconstruction part at the base station based on the assigned ID.
- a temporal ID associated with the at least one part of model may be transmitted from the UE to the base station when the indication of the at least one model structure is transmitted.
- the assigned ID may be associated with the temporal ID.
- the indication of confirmation may be carried in one of the following: a radio resource control (RRC) message; a medium access control (MAC) control element (CE) message; or uplink control information (UCI) .
- the information of the at least one part of model may be carried in one of the following: a signalling radio bearer (SRB) using a RRC message; or a dedicated data radio bearer (DRB) .
- the dedicated DRB is terminated at a dedicated protocol layer for AI/ML model handling.
- Some implementations of the method and apparatuses described herein may further include: transmitting, to the base station, an inquiry for a functionality on the AI/ML-based CSI compression at the base station; and receiving, from the base station, at least one of the following: an indication of acknowledgement of the functionality at the base station; or additional information for the AI/ML-based CSI compression.
- Some implementations of the method and apparatuses described herein may further include: receiving, from the base station, additional information for the AI/ML-based CSI compression; and transmitting, to the base station, an indication of acknowledgement to activate a functionality on the AI/ML-based CSI compression.
- the additional information may include at least one condition to activate the functionality.
- the at least one condition may include at least one of the following: at least one system configuration condition, or at least one scenario condition.
- a base station receives, from a UE, an indication of at least one model structure for at least one part of model.
- the at least one part of model is associated with a functionality on an AI/ML-based CSI compression.
- the base station trains the at least one part of model using the at least one indicated model structure and transmits information of the at least one part of model to the UE.
- the at least one part of model may include at least one of a CSI generation part or a CSI reconstruction part.
- the at least one model structure may include one of the following: a first model structure associated with the CSI generation part; a second model structure associated with the CSI reconstruction part; a first model structure associated with the CSI generation part and a second model structure associated with the CSI reconstruction part; or one model structure associated with the CSI generation part and the CSI reconstruction part.
- Some implementations of the method and apparatuses described herein may further include: receiving, from the UE, an indication of the at least one part of model.
- Some implementations of the method and apparatuses described herein may further include: transmitting, to the UE, an inquiry for the at least one part of model required by the UE.
- the indication of the at least one model structure may include a description on the at least one model structure.
- the description on the at least one model structure may include at least one of the following: a model type of the at least one model structure; layers of the at least one model structure; a connection between layers of the at least one model structure; a filter size of the at least one model structure; or activation functions of the at least one model structure.
- the information of the at least one part of model may include parameters of a trained CSI generation part.
- Some implementations of the method and apparatuses described herein may further include: receiving, from the UE, an indication of confirmation on applying the parameters on a CSI generation part at the UE.
- the information of the at least one part of model may include parameters of a trained CSI reconstruction part.
- Some implementations of the method and apparatuses described herein may further include: receiving, from the UE, an indication of confirmation on completing training of a CSI generation part at the UE based on the parameters of the trained CSI reconstruction part.
- the information of the at least one part of model may further include an assigned identity (ID) .
- the CSI generation part at the UE may be associated with a CSI reconstruction part at the base station based on the assigned ID.
- a temporal ID associated with the at least one part of model may be received from the UE when the indication of the at least one model structure is received.
- the assigned ID may be associated with the temporal ID.
- the indication of confirmation may be carried in one of the following: a radio resource control (RRC) message; a medium access control (MAC) control element (CE) message; or uplink control information (UCI) .
- the information of the at least one part of model may be carried in one of the following: a signalling radio bearer (SRB) using a RRC message; or a dedicated data radio bearer (DRB) .
- the dedicated DRB is terminated at a dedicated protocol layer for AI/ML model handling.
- Some implementations of the method and apparatuses described herein may further include: receiving, from the UE, an inquiry for a functionality on the AI/ML-based CSI compression at the base station; and transmitting, to the UE, at least one of the following: an indication of acknowledgement of the functionality at the base station; or additional information for the AI/ML-based CSI compression.
- Some implementations of the method and apparatuses described herein may further include: transmitting, to the UE, additional information for the AI/ML-based CSI compression; and receiving, from the UE, an indication of acknowledgement to activate a functionality on the AI/ML-based CSI compression.
- the additional information may include at least one condition to activate the functionality.
- the at least one condition may include at least one of the following: at least one system configuration condition, or at least one scenario condition.
- FIG. 1A illustrates an example of a wireless communications system that supports AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- FIG. 1B illustrates an example of a two-sided AI/ML model used in the CSI compression associated with aspects of the present disclosure.
- FIG. 1C illustrates an example of a general training procedure of the two-sided model for CSI compression associated with aspects of the present disclosure.
- FIG. 1D illustrates an example of component information of an AI/ML model associated with aspects of the present disclosure.
- FIGS. 1E to 1G illustrate examples of two-sided model training associated with aspects of the present disclosure.
- FIG. 1H illustrates examples of two-sided model deployments in a multi-vendor scenario associated with aspects of the present disclosure.
- FIG. 2 illustrates an example of a signalling procedure for AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- FIG. 3 illustrates an example process for functionality/model identification associated with AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- FIGS. 4A to 4B illustrate example processes for triggering two-sided model development in accordance with aspects of the present disclosure.
- FIG. 5 illustrates an example process for requirements on model transfer associated with AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- FIG. 6A illustrates a first example process for model transfer associated with AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- FIG. 6B illustrates an example conceptual diagram of the two-sided model for training at the base station in the first example process in FIG. 6A.
- FIG. 7A illustrates a second example process for model transfer associated with AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- FIG. 7B illustrates an example conceptual diagram of the two-sided model for training at the base station in the second example process in FIG. 7A.
- FIG. 7C illustrates an example conceptual diagram of the two-sided model for training at the user equipment in the second example process in FIG. 7A.
- FIG. 8A illustrates a third example process for model transfer associated with AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- FIG. 8B illustrates an example conceptual diagram of the two-sided model for training at the base station in the third example process in FIG. 8A.
- FIG. 9 illustrates an example of a device that supports AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- FIG. 10 illustrates an example of a processor that supports AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- FIGS. 11 and 12 illustrate flowcharts of methods that support AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- references in the present disclosure to “one embodiment, ” “an example embodiment, ” “an embodiment, ” “some embodiments, ” and the like indicate that the embodiment (s) described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment (s) . Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments, whether or not explicitly described.
- the terms “first” and “second” or the like may be used herein to describe various elements, but these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of the embodiments. As used herein, the term “and/or” includes any and all combinations of one or more of the listed terms.
- the term “communication network” refers to a network following any suitable communication standards, such as, 5G new radio (NR) , long term evolution (LTE) , LTE-advanced (LTE-A) , wideband code division multiple access (WCDMA) , high-speed packet access (HSPA) , narrow band internet of things (NB-IoT) , and so on.
- the communications between a terminal device and a network device in the communication network may be performed according to any suitable generation communication protocols, including but not limited to, the first generation (1G) , the second generation (2G) , 2.5G, 2.75G, the third generation (3G) , the fourth generation (4G) , 4.5G, the fifth generation (5G) communication protocols, and/or any other protocols either currently known or to be developed in the future.
- Embodiments of the present disclosure may be applied in various communication systems. Given the rapid development in communications, there will also be future type communication technologies and systems in which the present disclosure may be embodied. It should not be seen as limiting the scope of the present disclosure to only the aforementioned systems.
- the term “network device” generally refers to a node in a communication network via which a terminal device can access the communication network and receive services therefrom.
- the network device may refer to a base station (BS) or an access point (AP) , for example, a node B (NodeB or NB) , a radio access network (RAN) node, an evolved NodeB (eNodeB or eNB) , a NR NB (also referred to as a gNB) , a remote radio unit (RRU) , a radio header (RH) , an infrastructure device for a V2X (vehicle-to-everything) communication, a transmission and reception point (TRP) , a reception point (RP) , a remote radio head (RRH) , a relay, an integrated access and backhaul (IAB) node, a low power node such as a femto BS, a pico BS, and so forth, depending on
- terminal device generally refers to any end device that may be capable of wireless communications.
- a terminal device may also be referred to as a communication device, a user equipment (UE) , an end user device, a subscriber station (SS) , an unmanned aerial vehicle (UAV) , a portable subscriber station, a mobile station (MS) , or an access terminal (AT) .
- the terminal device may include, but is not limited to, a mobile phone, a cellular phone, a smart phone, a voice over IP (VoIP) phone, a wireless local loop phone, a tablet, a wearable terminal device, a personal digital assistant (PDA) , a portable computer, a desktop computer, an image capture terminal device such as a digital camera, a gaming terminal device, a music storage and playback appliance, a vehicle-mounted wireless terminal device, a wireless endpoint, a mobile station, laptop-embedded equipment (LEE) , laptop-mounted equipment (LME) , a USB dongle, a smart device, wireless customer-premises equipment (CPE) , an internet of things (loT) device, a watch or other wearable, a head-mounted display (HMD) , a vehicle, a drone, a medical device (for example, a remote surgery device) , an industrial device (for example, a robot and/or other wireless devices operating in an industrial and/or an automated processing chain
- AI/ML is used to learn and perform certain tasks via training neural networks with vast amounts of data, and has been successfully applied in the computer vision (CV) and natural language processing (NLP) fields.
- AI/ML-based methods, if well trained, may obtain better performance than traditional ones.
- introducing AI/ML into the air interface has been studied for some selected use cases, including CSI feedback enhancement, beam management, and positioning accuracy improvement, with some agreed general framework, evaluation methodologies, and results.
- for CSI feedback enhancement, it is expected to use AI/ML approaches to reduce the overhead of the CSI report.
- the AI/ML-based CSI compression is proposed as one of the use cases to study AI/ML for air interface enhancement.
- FIG. 1B illustrates an example of a two-sided AI/ML model used in the CSI compression.
- a two-sided AI/ML model is deployed to compress CSI or a precoding matrix indicator (PMI) .
- at the UE side, an AI model (i.e., the UE-part model or CSI generation part) , for example a convolutional neural network (CNN) model, which may also be referred to as an encoder, compresses the measured CSI (e.g., the estimated full CSI) , possibly after some pre-processing, e.g., eigenvalue decomposition (EVD) .
- the resulting bits are transmitted on the physical uplink shared channel (PUSCH) /physical uplink control channel (PUCCH) as part of the CSI report (i.e., compressed and quantized CSI) to the gNB.
- the received bits (i.e., the compressed and quantized CSI) are reconstructed by the NW-part model (i.e., the CSI reconstruction part) , which may also be referred to as a decoder.
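For illustration, a uniform quantizer of the kind that could map the encoder output to CSI-report bits might look as follows (the bit width and value range are assumptions for the sketch, not values from the disclosure):

```python
import numpy as np

def quantize(latent, bits_per_value=4, lo=-1.0, hi=1.0):
    """Uniformly quantize each latent value into `bits_per_value` bits."""
    levels = 2 ** bits_per_value
    clipped = np.clip(latent, lo, hi)
    return np.round((clipped - lo) / (hi - lo) * (levels - 1)).astype(int)

def dequantize(idx, bits_per_value=4, lo=-1.0, hi=1.0):
    """Map quantization indices back to representative latent values."""
    levels = 2 ** bits_per_value
    return lo + idx / (levels - 1) * (hi - lo)

latent = np.array([-0.9, -0.1, 0.3, 0.8])   # illustrative encoder output
idx = quantize(latent)
payload_bits = len(idx) * 4                 # bits carried in the CSI report
recovered = dequantize(idx)                 # what the gNB-side decoder sees
```

The quantization error per value is at most half a quantization step, here about 0.067 for a 4-bit quantizer over [-1, 1].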
- FIG. 1C illustrates an example of a general training procedure of the two-sided model for CSI compression associated with aspects of the present disclosure.
- the NW-part model (i.e., the CSI reconstruction part) and the UE-part model (i.e., the CSI generation part) are trained together, and a shared dataset of CSI/eigenvalues may be used for both models.
- the shared dataset may be used as the input of the UE-part model and as label data for the NW-part model. If the mean squared error (MSE) between the output of the NW-part model and the label data is small enough after a number of iterations between the two models, the two-sided model can be regarded as ready for deployment/inference on both sides.
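The iterative procedure above can be sketched with linear stand-ins for the two parts (all dimensions and the toy dataset are invented for the sketch); training stops here after a fixed number of iterations rather than an explicit MSE threshold:

```python
import numpy as np

rng = np.random.default_rng(1)
n_csi, n_latent, n_samples = 16, 4, 256

# Shared dataset: input to the CSI generation part and label data for the
# CSI reconstruction part. Low-rank so that compression is possible.
X = rng.standard_normal((n_samples, n_latent)) @ rng.standard_normal((n_latent, n_csi))

We = rng.standard_normal((n_latent, n_csi)) * 0.1   # CSI generation part
Wd = rng.standard_normal((n_csi, n_latent)) * 0.1   # CSI reconstruction part
lr = 0.02

def mse():
    return float(np.mean(((X @ We.T) @ Wd.T - X) ** 2))

loss_start = mse()
for _ in range(2000):
    Z = X @ We.T                        # compressed CSI
    E = Z @ Wd.T - X                    # reconstruction error vs. label data
    gWd = 2.0 / X.size * E.T @ Z        # gradient for the reconstruction part
    gWe = 2.0 / X.size * (E @ Wd).T @ X # gradient for the generation part
    Wd -= lr * gWd
    We -= lr * gWe
loss_end = mse()
```

When the reconstruction MSE is small enough, both parts would be regarded as ready for deployment/inference.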
- FIG. 1D illustrates an example of component information of an AI/ML model associated with aspects of the present disclosure.
- an AI/ML model (i.e., a data-driven algorithm) may be described by some relevant information, for example but not limited to: model structure-related information (also referred to as NN-related information) 121, dataset-related information 122, weights/parameters 124, and a binary file 125.
- the model structure-related information 121 may include the NN type (e.g., CNN/RNN) , model input/output information, the number of layers, and an activation function of each layer. Because of the diverse hardware platforms in practice, the preferred model structure may be selected and/or reported via UE capability by the target device, if configured with a dedicated accelerator.
- the dataset-related information 122 may include a specific scenario/configuration indication, a condition to collect data and the labeled data.
- the samples in the dataset can be collected from the specific scenario/configuration.
- the inference performance with different structures could be very close. From the aspect of the expected performance, a well-defined dataset can be used to identify a model to some degree for a given model structure.
- a set of weights/parameters 124 is generated for the given model structure to optimize an objective function given the dataset, e.g., to minimize the MSE.
- the weights/parameters 124 and the related configurations need to be indicated. In general, there could be tens of thousands of weights/parameters for a model. It should be noted that the weights are highly dependent on the model structure, which needs to be separately discussed in the context of model delivery/transfer.
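As a rough illustration of how parameter counts of this order follow directly from the structure description (the layer sizes below are invented for the sketch):

```python
def conv2d_params(in_ch, out_ch, kernel):
    """Convolutional layer: weights plus one bias per output channel."""
    return in_ch * out_ch * kernel[0] * kernel[1] + out_ch

def dense_params(in_dim, out_dim):
    """Fully-connected layer: weights plus biases."""
    return in_dim * out_dim + out_dim

# Hypothetical small CNN encoder: two conv layers over real/imaginary CSI
# planes, then a dense layer mapping flattened features to a 64-value latent.
total = (
    conv2d_params(2, 16, (3, 3))
    + conv2d_params(16, 16, (3, 3))
    + dense_params(1024, 64)
)
```

Even this small hypothetical encoder has tens of thousands of weights, consistent with the remark above.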
- a binary file 125 is generated for the target device for inference, which may not be recognized by other devices, especially those from different vendors.
- the information on the model can be transferred and used to facilitate the model pairing.
- FIG. 1E illustrates a Type 1 training scheme for the two-sided model, i.e. joint training at one side.
- the two models may be trained at one side, either the UE side or the NW side, and one of the trained models may be transferred to the other side after training is ready.
- the two models are trained (131) at the NW side, and after the two trained models are ready (132) for deployment/inference, the trained model for the encoder, i.e., the UE-part model, is transferred (133) to and deployed (135) at the UE side; the other trained model, for the decoder, is deployed (134) at the NW side.
- transferring the model to the other side would be challenging in practice: because the software and hardware of the other side may be unavailable, the trained model cannot be optimized for that platform.
- FIG. 1F illustrates a Type 2 training scheme for the two-sided model, i.e. joint training at both sides.
- the two models are trained at both sides, including UE side and NW side.
- the two models are trained (141, 142) at the UE side and the NW side, respectively, and after the training of the two models is ready (143, 144) for deployment/inference, the trained models are deployed (145, 146) at the UE side and the NW side, respectively.
- FIG. 1G illustrates a Type 3 training scheme for the two-sided model, i.e. sequential training.
- sequential training is adopted, which means the two models are first trained at one side, either the UE side or the NW side.
- the generated data (i.e., the compressed CSI) and the trained model may be transferred to the other side, followed by training on the other side with the transferred data and the aligned dataset.
- the two models are first trained (151) at the NW side, and after the two trained models are ready (152) , the trained model for the NW may be deployed (154) at the NW side.
- the other trained model, for the UE, as well as the compressed CSI, are transferred to the UE side, followed by training (153) on the UE side with the transferred data and the aligned dataset. After the model for the UE is trained and ready at the UE, it may be deployed (155) at the UE side.
- the two models may be potentially optimized according to the deployment software/hardware platform. It is necessary to transfer only the data generated from the aligned dataset, whose overhead is less than that of Type 2 and which is friendly to diverse platforms.
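A minimal sketch of the UE-side step of such sequential training, with a linear encoder as a stand-in (the reference weights and the dataset are invented for the sketch): the UE fits its own generation part so that its output matches the compressed CSI transferred from the NW side on the aligned dataset.

```python
import numpy as np

rng = np.random.default_rng(2)
n_csi, n_latent, n_samples = 16, 4, 256
X = rng.standard_normal((n_samples, n_csi))   # aligned dataset

# Reference encoder trained at the NW side (weights invented for the sketch).
We_nw = rng.standard_normal((n_latent, n_csi))
Z_nw = X @ We_nw.T                            # transferred compressed CSI

# UE-side sequential training: fit the UE encoder to reproduce the
# transferred compressed CSI on the aligned dataset (least squares).
W, *_ = np.linalg.lstsq(X, Z_nw, rcond=None)
We_ue = W.T

mismatch = float(np.mean((X @ We_ue.T - Z_nw) ** 2))
```

Because the toy target is exactly linear, the fit here is essentially exact; with neural models the UE would instead minimize this mismatch iteratively.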
- if a model on the NW side needs to support multiple models at UEs, it is better to use one model or a limited number of models to pair with the models at the UEs.
- FIG. 1H illustrates three examples of two-sided model deployments in a multi-vendor scenario associated with aspects of the present disclosure.
- the decoder “dec” at the NW side needs to support multiple encoders (e.g., “enc1” , “enc2” and “enc3” ) at different UEs from different vendors.
- the encoder “enc” at the UE side needs to support multiple decoders (e.g., “dec1” , “dec2” and “dec3” ) at different base stations from different vendors considering the mobility of the UE.
- each decoder at the NW side needs to support multiple encoders (e.g., “enc1” , “enc2” and “enc3” ) at different UEs from different vendors, and each encoder at the UE side needs to support multiple decoders (e.g., “dec1” , “dec2” and “dec3” ) at different base stations from different vendors considering the mobility of the UE.
- Enhancements on the AI/ML-based CSI compression are still needed considering the inter-vendor collaborative training on a two-sided model.
- the performance of the two-sided model for CSI compression is sensitive to whether the models are well paired. According to the evaluation results, the performance is highly related to the paired models. If the models on the two sides are mismatched, the performance is seriously degraded. There should be specification impacts on the model pairing procedures of the two-sided model.
- there are diverse hardware and software environments from different vendors to support the AI/ML functionality.
- the AI/ML model to be deployed at either the NW or the UE is typically proprietary to the vendor, for the proper hardware and software, which results in very diverse model implementations. Therefore, it is necessary to design a procedure and detailed signalling to support inter-vendor training collaboration on a two-sided model used for an AI/ML-based CSI compression scheme.
- Some embodiments of the present disclosure propose a solution to support AI/ML-based CSI compression.
- the UE may indicate the model structure (s) for the part (s) of model needed by the UE and the base station may train the part (s) of model needed by the UE using the model structure (s) indicated by the UE and transmit information of the trained part (s) of model to the UE.
- the UE-side part and the NW-side part for AI/ML-based CSI compression may be better paired over the air interface, while the performance of the AI/ML-based CSI compression is also guaranteed.
- FIG. 1A illustrates an example of a wireless communications system 100 that supports CSI compression in accordance with aspects of the present disclosure.
- the wireless communications system 100 may include one or more network entities 102 (also referred to as network equipment (NE) ) , one or more UEs 104, a core network 106, and a packet data network 108.
- the wireless communications system 100 may support various radio access technologies.
- the wireless communications system 100 may be a 4G network, such as an LTE network or an LTE-advanced (LTE-A) network.
- the wireless communications system 100 may be a 5G network, such as an NR network.
- the wireless communications system 100 may be a combination of a 4G network and a 5G network, or other suitable radio access technology including institute of electrical and electronics engineers (IEEE) 802.11 (Wi-Fi) , IEEE 802.16 (WiMAX) , IEEE 802.20.
- The wireless communications system 100 may support radio access technologies beyond 5G. Additionally, the wireless communications system 100 may support technologies, such as time division multiple access (TDMA) , frequency division multiple access (FDMA) , or code division multiple access (CDMA) , etc.
- the one or more network entities 102 may be dispersed throughout a geographic region to form the wireless communications system 100.
- One or more of the network entities 102 described herein may be or include or may be referred to as a network node, a base station, a network element, a radio access network (RAN) , a base transceiver station, an access point, a NodeB, an eNodeB (eNB) , a next-generation NodeB (gNB) , or other suitable terminology.
- a network entity 102 and a UE 104 may communicate via a communication link 110, which may be a wireless or wired connection.
- a network entity 102 and a UE 104 may perform wireless communication (e.g., receive signaling, transmit signaling) over a Uu interface.
- a network entity 102 may provide a geographic coverage area 112 for which the network entity 102 may support services (e.g., voice, video, packet data, messaging, broadcast, etc. ) for one or more UEs 104 within the geographic coverage area 112.
- a network entity 102 and a UE 104 may support wireless communication of signals related to services (e.g., voice, video, packet data, messaging, broadcast, etc. ) according to one or multiple radio access technologies.
- a network entity 102 may be moveable, for example, a satellite associated with a non-terrestrial network.
- different geographic coverage areas 112 associated with the same or different radio access technologies may overlap, but the different geographic coverage areas 112 may be associated with different network entities 102.
- Information and signals described herein may be represented using any of a variety of different technologies and techniques.
- data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
- the one or more UEs 104 may be dispersed throughout a geographic region of the wireless communications system 100.
- a UE 104 may include or may be referred to as a mobile device, a wireless device, a remote device, a remote unit, a handheld device, or a subscriber device, or some other suitable terminology.
- the UE 104 may be referred to as a unit, a station, a terminal, or a client, among other examples.
- the UE 104 may be referred to as an internet-of-things (IoT) device, an internet-of-everything (IoE) device, or machine-type communication (MTC) device, among other examples.
- a UE 104 may be stationary in the wireless communications system 100.
- a UE 104 may be mobile in the wireless communications system 100.
- the one or more UEs 104 may be devices in different forms or having different capabilities. Some examples of UEs 104 are illustrated in FIG. 1A.
- a UE 104 may be capable of communicating with various types of devices, such as the network entities 102, other UEs 104, or network equipment (e.g., the core network 106, the packet data network 108, a relay device, an integrated access and backhaul (IAB) node, or another network equipment) , as shown in FIG. 1A.
- a UE 104 may support communication with other network entities 102 or UEs 104, which may act as relays in the wireless communications system 100.
- a UE 104 may also be able to support wireless communication directly with other UEs 104 over a communication link 114.
- a UE 104 may support wireless communication directly with another UE 104 over a device-to-device (D2D) communication link.
- the communication link 114 may be referred to as a sidelink.
- a UE 104 may support wireless communication directly with another UE 104 over a PC5 interface.
- a network entity 102 may support communications with the core network 106, or with another network entity 102, or both.
- a network entity 102 may interface with the core network 106 through one or more backhaul links 116 (e.g., via an S1, N2, N3, or another network interface) .
- the network entities 102 may communicate with each other over the backhaul links 116 (e.g., via an X2, Xn, or another network interface) .
- the network entities 102 may communicate with each other directly (e.g., between the network entities 102) .
- the network entities 102 may communicate with each other indirectly (e.g., via the core network 106) .
- one or more network entities 102 may include subcomponents, such as an access network entity, which may be an example of an access node controller (ANC) .
- An ANC may communicate with the one or more UEs 104 through one or more other access network transmission entities, which may be referred to as radio heads, smart radio heads, or transmission-reception points (TRPs) .
- a network entity 102 may be configured in a disaggregated architecture, which may be configured to utilize a protocol stack physically or logically distributed among two or more network entities 102, such as an integrated access backhaul (IAB) network, an open radio access network (O-RAN) (e.g., a network configuration sponsored by the O-RAN Alliance) , or a virtualized RAN (vRAN) (e.g., a cloud RAN (C-RAN) ) .
- a network entity 102 may include one or more of a CU, a DU, a radio unit (RU) , a RAN intelligent controller (RIC) (e.g., a near-real time RIC (Near-RT RIC) , a non-real time RIC (Non-RT RIC) ) , a service management and orchestration (SMO) system, or any combination thereof.
- An RU may also be referred to as a radio head, a smart radio head, a remote radio head (RRH) , a remote radio unit (RRU) , or a transmission reception point (TRP) .
- One or more components of the network entities 102 in a disaggregated RAN architecture may be co-located, or one or more components of the network entities 102 may be located in distributed locations (e.g., separate physical locations) .
- one or more network entities 102 of a disaggregated RAN architecture may be implemented as virtual units (e.g., a virtual CU (VCU) , a virtual DU (VDU) , a virtual RU (VRU) ) .
- Split of functionality between a CU, a DU, and an RU may be flexible and may support different functionalities depending upon which functions (e.g., network layer functions, protocol layer functions, baseband functions, radio frequency functions, and any combinations thereof) are performed at a CU, a DU, or an RU.
- a functional split of a protocol stack may be employed between a CU and a DU such that the CU may support one or more layers of the protocol stack and the DU may support one or more different layers of the protocol stack.
- the CU may host upper protocol layer (e.g., a layer 3 (L3) , a layer 2 (L2) ) functionality and signaling (e.g., radio resource control (RRC) , service data adaption protocol (SDAP) , packet data convergence protocol (PDCP) ) .
- the CU may be connected to one or more DUs or RUs, and the one or more DUs or RUs may host lower protocol layers, such as a layer 1 (L1) (e.g., physical (PHY) layer) or an L2 (e.g., radio link control (RLC) layer, medium access control (MAC) layer) functionality and signaling, and may each be at least partially controlled by the CU 160.
- a functional split of the protocol stack may be employed between a DU and an RU such that the DU may support one or more layers of the protocol stack and the RU may support one or more different layers of the protocol stack.
- the DU may support one or multiple different cells (e.g., via one or more RUs) .
- a functional split between a CU and a DU, or between a DU and an RU may be within a protocol layer (e.g., some functions for a protocol layer may be performed by one of a CU, a DU, or an RU, while other functions of the protocol layer are performed by a different one of the CU, the DU, or the RU) .
- a CU may be functionally split further into CU control plane (CU-CP) and CU user plane (CU-UP) functions.
- a CU may be connected to one or more DUs via a midhaul communication link (e.g., F1, F1-c, F1-u)
- a DU may be connected to one or more RUs via a fronthaul communication link (e.g., open fronthaul (FH) interface)
- a midhaul communication link or a fronthaul communication link may be implemented in accordance with an interface (e.g., a channel) between layers of a protocol stack supported by respective network entities 102 that are in communication via such communication links.
- the core network 106 may support user authentication, access authorization, tracking, connectivity, and other access, routing, or mobility functions.
- the core network 106 may be an evolved packet core (EPC) , or a 5G core (5GC) , which may include a control plane entity that manages access and mobility (e.g., a mobility management entity (MME) , an access and mobility management function (AMF) ) and a user plane entity that routes packets or interconnects to external networks (e.g., a serving gateway (S-GW) , a packet data network (PDN) gateway (P-GW) , or a user plane function (UPF) ) .
- control plane entity may manage non-access stratum (NAS) functions, such as mobility, authentication, and bearer management (e.g., data bearers, signal bearers, etc. ) for the one or more UEs 104 served by the one or more network entities 102 associated with the core network 106.
- the core network 106 may communicate with the packet data network 108 over one or more backhaul links 116 (e.g., via an S1, N2, N3, or another network interface) .
- the packet data network 108 may include an application server 118.
- one or more UEs 104 may communicate with the application server 118.
- a UE 104 may establish a session (e.g., a protocol data unit (PDU) session, or the like) with the core network 106 via a network entity 102.
- the core network 106 may route traffic (e.g., control information, data, and the like) between the UE 104 and the application server 118 using the established session (e.g., the established PDU session) .
- the PDU session may be an example of a logical connection between the UE 104 and the core network 106 (e.g., one or more network functions of the core network 106) .
- the network entities 102 and the UEs 104 may use resources of the wireless communications system 100 (e.g., time resources (e.g., symbols, slots, subframes, frames, or the like) or frequency resources (e.g., subcarriers, carriers) ) to perform various operations (e.g., wireless communications) .
- the network entities 102 and the UEs 104 may support different resource structures.
- the network entities 102 and the UEs 104 may support different frame structures.
- the network entities 102 and the UEs 104 may support a single frame structure.
- the network entities 102 and the UEs 104 may support various frame structures (i.e., multiple frame structures) .
- the network entities 102 and the UEs 104 may support various frame structures based on one or more numerologies.
- One or more numerologies may be supported in the wireless communications system 100, and a numerology may include a subcarrier spacing and a cyclic prefix.
- a first numerology may be associated with a first subcarrier spacing (e.g., 15 kHz) and a normal cyclic prefix.
- the first numerology associated with the first subcarrier spacing (e.g., 15 kHz) may utilize one slot per subframe.
- a time interval of a resource may be organized according to frames (also referred to as radio frames) .
- Each frame may have a duration, for example, a 10 millisecond (ms) duration.
- each frame may include multiple subframes.
- each frame may include 10 subframes, and each subframe may have a duration, for example, a 1 ms duration.
- each frame may have the same duration.
- each subframe of a frame may have the same duration.
- a time interval of a resource may be organized according to slots.
- a subframe may include a number (e.g., quantity) of slots.
- the number of slots in each subframe may also depend on the one or more numerologies supported in the wireless communications system 100.
- Each slot may include a number (e.g., quantity) of symbols (e.g., OFDM symbols) .
- the number (e.g., quantity) of slots for a subframe may depend on a numerology.
- For a normal cyclic prefix, a slot may include 14 symbols.
- For an extended cyclic prefix (e.g., applicable for 60 kHz subcarrier spacing) , a slot may include 12 symbols.
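The slot arithmetic above can be sketched numerically, assuming the standard NR scaling where the subcarrier spacing is 15 kHz · 2^μ and each 1 ms subframe holds 2^μ slots (a simple illustration, not a normative definition):

```python
# Numerology arithmetic: subcarrier spacing scales as 15 kHz * 2^mu,
# each 1 ms subframe holds 2^mu slots, and a slot carries 14 symbols
# (normal cyclic prefix) or 12 symbols (extended cyclic prefix).

def slots_per_subframe(scs_khz):
    # 15 kHz -> mu=0 -> 1 slot; 30 -> 2; 60 -> 4; 120 -> 8.
    mu = (scs_khz // 15).bit_length() - 1
    return 2 ** mu

def symbols_per_slot(extended_cp=False):
    return 12 if extended_cp else 14

assert slots_per_subframe(15) == 1   # first numerology: one slot/subframe
assert slots_per_subframe(60) == 4
assert symbols_per_slot() == 14
assert symbols_per_slot(extended_cp=True) == 12
```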
- an electromagnetic (EM) spectrum may be split, based on frequency or wavelength, into various classes, frequency bands, frequency channels, etc.
- the wireless communications system 100 may support one or multiple operating frequency bands, such as frequency range designations FR1 (410 MHz –7.125 GHz) , FR2 (24.25 GHz –52.6 GHz) , FR3 (7.125 GHz –24.25 GHz) , FR4 (52.6 GHz –114.25 GHz) , FR4a or FR4-1 (52.6 GHz –71 GHz) , and FR5 (114.25 GHz –300 GHz) .
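The frequency-range designations listed above can be expressed as a simple lookup table (boundaries taken from the list; a sketch for illustration, not a normative definition):

```python
# Frequency-range lookup matching the designations above, in ascending
# order of frequency (boundaries in GHz).
FREQ_RANGES = [
    ("FR1", 0.410, 7.125),
    ("FR3", 7.125, 24.25),
    ("FR2", 24.25, 52.6),
    ("FR4", 52.6, 114.25),
    ("FR5", 114.25, 300.0),
]

def freq_range(ghz):
    for name, lo, hi in FREQ_RANGES:
        if lo <= ghz < hi:
            return name
    return None

assert freq_range(3.5) == "FR1"    # typical mid-band carrier
assert freq_range(28.0) == "FR2"
assert freq_range(60.0) == "FR4"   # FR4a/FR4-1 also covers 52.6-71 GHz
```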
- the network entities 102 and the UEs 104 may perform wireless communications over one or more of the operating frequency bands.
- FR1 may be used by the network entities 102 and the UEs 104, among other equipment or devices for cellular communications traffic (e.g., control information, data) .
- FR2 may be used by the network entities 102 and the UEs 104, among other equipment or devices for short-range, high data rate capabilities.
- FR1 may be associated with one or multiple numerologies (e.g., at least three numerologies) .
- FR2 may be associated with one or multiple numerologies (e.g., at least 2 numerologies) .
- FIG. 2 illustrates an example of signalling procedure for AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- the process 200 will be described with reference to FIG. 1A, and the process 200 may involve a UE 104 and a network entity 102 as shown in FIG. 1A.
- the network entity 102 may be implemented as a base station. It is to be understood that the steps and the order of the steps in FIG. 2 are merely for illustration, and not for limitation. It is to be understood that process 200 may further include additional blocks not shown and/or omit some shown blocks, and the scope of the present disclosure is not limited in this regard.
- the UE 104 transmits (202) , to the base station 102, an indication 204 of at least one model structure for at least one part of model.
- the at least one part of model is associated with a functionality on an AI/ML-based CSI compression.
- the base station 102 receives (206) the indication 204 of the at least one model structure for the at least one part of model from the UE 104.
- the base station 102 trains (208) the at least one part of model using the at least one indicated model structure and transmits (210) information 212 of the at least one trained part of model to the UE 104.
- the UE 104 receives (214) the information 212 of the at least one trained part of model from the base station 102. In this way, the part of model at the base station side and the part of model at the UE side may support each other.
- the at least one part of model may include at least one of a CSI generation part or a CSI reconstruction part.
- the at least one model structure may include one of the following: {CSI generation part} , {CSI reconstruction part} or {CSI generation part, CSI reconstruction part} .
- the at least one part of model may include a CSI generation part and the at least one model structure may include a first model structure associated with the CSI generation part.
- the UE 104 may indicate a model structure for the CSI generation part.
- the indicated model structure may be a preferred model structure for the CSI generation part at the UE 104.
- the base station 102 may train a CSI generation part using the model structure indicated by the UE 104.
- the at least one part of model may include a CSI reconstruction part and the at least one model structure may include a second model structure associated with the CSI reconstruction part.
- the UE 104 may indicate a model structure for the CSI reconstruction part.
- the indicated model structure may be a preferred model structure for the CSI reconstruction part at the UE 104.
- the base station 102 may train a CSI reconstruction part using the model structure indicated by the UE 104.
- the at least one part of model may include a CSI generation part or a CSI reconstruction part and the at least one model structure may include a first model structure associated with the CSI generation part and a second model structure associated with the CSI reconstruction part.
- the UE 104 may indicate respective model structures for the CSI generation part and the CSI reconstruction part.
- the indicated model structures may be respective preferred model structures for the CSI generation part and the CSI reconstruction part at the UE 104.
- the base station 102 may train a CSI generation part using the indicated model structure for the CSI generation part and train a CSI reconstruction part using the indicated model structure for the CSI reconstruction part.
- the at least one part of model may include at least one of a CSI generation part or a CSI reconstruction part and the at least one model structure may include one model structure associated with the CSI generation part and the CSI reconstruction part.
- the UE 104 may indicate a model structure to the base station 102.
- the indicated model structure may be a preferred model structure at the UE 104.
- the preferred model structure may apply to the CSI generation part and the CSI reconstruction part.
- the base station 102 may train at least one part of model using the indicated model structure.
- the UE 104 may transmit an indication of the at least one part of model to the base station 102.
- the UE 104 may transmit the indication of the at least one part of model to the base station 102 when transmitting the indication 204 of the at least one model structure for at least one part of model to the base station 102.
- the at least one part of model may be predefined.
- the at least one part of model may be predefined as a CSI generation part.
- the at least one part of model may be predefined as a CSI reconstruction part.
- the at least one part of model may be predefined as a CSI generation part and a CSI reconstruction part.
- the UE 104 may receive an inquiry for the at least one part of model required by the UE 104 from the base station 102.
- the UE 104 may transmit the indication 204 of the at least one model structure for at least one part of model to the base station 102 after receiving the inquiry for the at least one part of model required by the UE 104 from the base station 102.
- the inquiry may be optional.
- the UE 104 may transmit the indication 204 of the at least one model structure for at least one part of model to the base station 102 on its own initiative.
- the indication 204 of the at least one model structure may include a description on the at least one model structure.
- the description on the at least one model structure may include a model type of the at least one model structure.
- the description on the at least one model structure may include layers of the at least one model structure.
- the description on the at least one model structure may include a connection between layers of the at least one model structure.
- the description on the at least one model structure may include a filter size of the at least one model structure.
- the description on the at least one model structure may include activation functions of the at least one model structure.
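The description fields above (model type, layers, inter-layer connections, filter sizes, activation functions) could be bundled into a simple record; all field names below are illustrative assumptions, not signalling defined by the disclosure:

```python
# Hypothetical container for the model-structure description a UE
# might indicate (field names are assumptions for illustration).
from dataclasses import dataclass, field

@dataclass
class LayerDesc:
    kind: str                  # e.g. "conv2d", "dense", "transformer"
    filter_size: tuple = ()    # e.g. (3, 3) for a convolutional layer
    activation: str = "relu"   # activation function of the layer

@dataclass
class ModelStructure:
    model_type: str            # e.g. "CNN", "Transformer"
    layers: list = field(default_factory=list)
    # Connections between layers as (source, destination) layer indices.
    connections: list = field(default_factory=list)

# A UE's preferred structure for its CSI generation part.
enc = ModelStructure(
    model_type="CNN",
    layers=[LayerDesc("conv2d", (3, 3)), LayerDesc("dense", (), "tanh")],
    connections=[(0, 1)],
)
assert enc.layers[0].kind == "conv2d"
```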
- the at least one model structure may include a model structure associated with a CSI generation part.
- the information 212 of the at least one trained part of model may include parameters of the CSI generation part trained using the indicated model structure.
- the base station 102 may assist training a CSI generation part based on the model structure for the CSI generation part indicated by the UE 104.
- the base station 102 may train the CSI generation part together with a CSI reconstruction part at the base station 102 and transfer the parameters of the trained CSI generation part to the UE 104.
- the at least one model structure may include a model structure associated with a CSI generation part and a model structure associated with a CSI reconstruction part
- the information 212 of the at least one trained part of model may include parameters of the CSI generation part trained using the indicated model structure for the CSI generation part.
- the base station 102 may assist training a CSI generation part and a CSI reconstruction part based on respective model structures for the CSI generation part and the CSI reconstruction part indicated by the UE 104. The base station 102 may then transfer the parameters of the trained CSI generation part to the UE 104.
- the UE 104 may apply the parameters on a CSI generation part at the UE 104 and transmit, to the base station 102, an indication of confirmation on applying the parameters on the CSI generation part.
- the UE 104 may apply the parameters provided by the base station 102 on its CSI generation part and confirm the deployment.
- the at least one model structure may include a model structure associated with a CSI reconstruction part
- the information 212 of the at least one trained part of model may include parameters of the CSI reconstruction part trained using the indicated model structure.
- the base station 102 may assist training a CSI reconstruction part based on the model structure for the CSI reconstruction part indicated by the UE 104.
- the base station 102 may train the CSI reconstruction part together with a CSI generation part at the base station 102 and transfer the parameters of the trained CSI reconstruction part to the UE 104.
- the UE 104 may update a CSI reconstruction part at the UE 104 based on the received parameters, concatenate a CSI generation part at the UE 104 and the updated CSI reconstruction part and train the CSI generation part at the UE 104.
- the UE 104 may transmit, to the base station 102, an indication of confirmation on completing training of the CSI generation part at the UE.
- the UE 104 may apply the parameters provided by the base station 102 on its CSI reconstruction part and train its CSI generation part based on the updated CSI reconstruction part. The UE 104 may then confirm the completion of training on its CSI generation part.
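The training flow just described — the UE updates its copy of the CSI reconstruction part with the received parameters, freezes it, concatenates its CSI generation part in front, and trains only the generation part — can be sketched with a one-parameter toy model (all numbers and names are illustrative):

```python
# Illustrative sketch: train the UE-side encoder (CSI generation part)
# against a frozen decoder (CSI reconstruction part) whose parameter
# was transferred from the base station.

def train_encoder(w_dec, samples, lr=0.05, steps=200):
    w_enc = 0.1  # encoder parameter to be trained; decoder stays frozen
    for _ in range(steps):
        for x in samples:
            x_hat = w_dec * (w_enc * x)          # encoder -> frozen decoder
            # Gradient of the reconstruction loss (x - x_hat)^2 w.r.t. w_enc.
            grad = -2 * (x - x_hat) * w_dec * x
            w_enc -= lr * grad
    return w_enc

w_dec = 2.0   # parameter received from the NW side and applied, then frozen
w_enc = train_encoder(w_dec, [0.5, -0.3, 0.8])
# A well-paired encoder approximately inverts the frozen decoder.
assert abs(w_enc * w_dec - 1.0) < 1e-3
```

In a realistic setting the same loop would run backpropagation through a neural CSI reconstruction part with its weights held fixed.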
- the information 212 of the at least one trained part of model may further include an assigned ID.
- the CSI generation part at the UE 104 may be associated with a CSI reconstruction part at the base station 102 based on the assigned ID. By associating the CSI generation part at the UE 104 and the CSI reconstruction part at the base station 102 based on the assigned ID, the CSI generation part at the UE 104 may be paired with or matched to the CSI reconstruction part at the base station 102. In some examples, the CSI generation part at the UE 104 that is paired with (or matched to) the CSI reconstruction part at the base station 102 may be the CSI generation part applied with the parameters provided by the base station 102.
- the CSI generation part at the UE 104 that is paired with (or matched to) the CSI reconstruction part at the base station 102 may be the CSI generation part trained by the UE 104 based on the CSI reconstruction part updated with the parameters provided by the base station 102.
- when transmitting the indication 204 of the at least one model structure, the UE 104 may further transmit a temporal ID associated with the at least one part of model to the base station 102.
- the assigned ID may be associated with the temporal ID.
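A minimal sketch of the ID bookkeeping (identifiers and structures are assumptions for illustration): the UE tags its request with a temporal ID, the base station returns an assigned ID bound to that temporal ID, and both sides then pair the CSI generation and reconstruction parts via the assigned ID:

```python
import itertools

ue_requests = {}       # temporal_id -> requested part(s) of the model
nw_assignments = {}    # assigned_id -> temporal_id it answers
_assigned_ids = itertools.count(100)   # arbitrary starting value

def ue_request(temporal_id, parts):
    # UE side: remember which part(s) were requested under this temporal ID.
    ue_requests[temporal_id] = parts
    return temporal_id

def nw_respond(temporal_id):
    # NW side: bind a fresh assigned ID to the temporal ID of the request.
    assigned_id = next(_assigned_ids)
    nw_assignments[assigned_id] = temporal_id
    return assigned_id

tid = ue_request("t-7", {"CSI generation part"})
aid = nw_respond(tid)
# On receiving the trained parameters, the UE recovers its own request
# context from the assigned ID and pairs the parts accordingly.
assert nw_assignments[aid] == tid
assert ue_requests[nw_assignments[aid]] == {"CSI generation part"}
```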
- the indication of confirmation is carried in one of the following: a radio resource control (RRC) message; a medium access control (MAC) control element (CE) message; or uplink control information (UCI) .
- the information 212 of the at least one trained part of model is carried in a signalling radio bearer (SRB) using an RRC message.
- the information 212 of the at least one trained part of model is carried in a dedicated data radio bearer (DRB) .
- the dedicated DRB is terminated at a dedicated protocol layer for AI/ML model handling.
- the UE 104 may transmit, to the base station 102, an inquiry for a functionality on the AI/ML-based CSI compression at the base station 102.
- the UE 104 may receive, from the base station 102, an indication of acknowledgement of the functionality at the base station 102.
- the UE 104 may receive, from the base station 102, additional information for the AI/ML-based CSI compression. For example, if the UE 104 wants to train or update a CSI generation part of a two-sided model to pair with the CSI reconstruction part at the base station 102, the UE 104 may trigger the procedure by inquiring the functionality on the AI/ML-based CSI compression at the base station 102.
- the UE 104 may receive, from the base station 102, additional information for the AI/ML-based CSI compression.
- the UE 104 may then transmit, to the base station 102, an indication of acknowledgement to activate a functionality on the AI/ML-based CSI compression. For example, if the base station 102 wants to train or update a CSI reconstruction part of a two-sided model to pair with the CSI generation part at the UE 104, the base station 102 may trigger the procedure by providing additional information for the AI/ML-based CSI compression to the UE 104.
- the additional information may include at least one condition to activate the functionality.
- the at least one condition may include at least one system configuration condition.
- the at least one condition may include at least one scenario condition.
- For the purpose of discussion, the processes in FIGS. 3-8B will be described with reference to FIG. 1A. It would be appreciated that although the processes in FIGS. 3-8B have been described referring to the network environment 100 of FIG. 1A, these processes may be likewise applied to other similar communication scenarios.
- the processes in FIGS. 3-8B may be regarded as specific example implementations of the process 200 of FIG. 2. These processes may involve the UE 104 and the base station 102.
- a UE needs to report its capability on the functionality to support the AI/ML-based CSI compression.
- FIG. 3 illustrates an example process 300 for functionality/model identification associated with AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- functionality/model identification related information on the AI/ML functionality/model is shared and aligned between the base station 102 and the UE 104.
- the UE 104 may transmit an indication on the AI/ML functionality on the AI/ML-based CSI compression to the base station 102 to indicate its capability to support the AI/ML functionality.
- the capability to support the AI/ML functionality on the AI/ML-based CSI compression may be reported during the UE capability report.
- the indication on the AI/ML functionality may include features or a feature group to enable an AI/ML-based CSI compression scheme and applicable conditions to activate the functionality. Some examples of the features or the feature group may include the supported number of ports, number of subbands, ranks, and quantization level of the output of the UE-part model.
- Some examples for the applicable conditions may include system configurations (e.g., system information, synchronization signal and PBCH block (SSB) configurations, CSI-RS resource configurations, CSI report configurations, etc. ) and scenarios (e.g., UE mobility, applicable signal-to-noise ratio (SNR) , etc. ) .
- the features/feature group to enable the AI/ML-based CSI compression scheme may be transmitted via the RRC UECapabilityInformation message, and the applicable conditions may be reported to the base station 102 in a different UEAssistanceInformation message.
- the base station 102 may know whether the UE 104 can support the AI/ML-based CSI compression functionality with dedicated configurations in some scenarios. Then, the procedure of two-sided model development, i.e., training or updating, can be triggered by the UE 104 or the base station 102 if requested by some events, such as applicable-conditions monitoring or performance monitoring.
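The capability content sketched above — features/feature group plus applicable conditions — might be represented as follows (field names are illustrative assumptions, not defined RRC fields):

```python
# Hypothetical shape of a UE capability report for AI/ML-based CSI
# compression: features enabling the scheme plus the applicable
# conditions under which the functionality may be activated.
from dataclasses import dataclass, field

@dataclass
class CsiCompressionCapability:
    num_ports: int                  # supported number of CSI-RS ports
    num_subbands: int               # supported number of subbands
    max_rank: int                   # supported ranks
    quantization_bits: int          # quantization level of UE-part output
    applicable_conditions: dict = field(default_factory=dict)

cap = CsiCompressionCapability(
    num_ports=32, num_subbands=13, max_rank=2, quantization_bits=4,
    applicable_conditions={"min_snr_db": 0, "mobility": "low"},
)
assert cap.max_rank == 2
```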
- FIG. 4A illustrates an example process 400A for triggering two-sided model development by the UE in accordance with aspects of the present disclosure.
- the procedure of two-sided model development may be triggered by the UE 104.
- the UE 104 may inquire about the functionality of the AI/ML-based CSI compression at the base station 102. If the base station 102 supports the functionality on the AI/ML-based CSI compression, at 402, the base station 102 may acknowledge (ACK) the inquiry by providing additional information on this functionality, such as the applicable conditions. Otherwise, a negative acknowledgement (NACK) message may be provided to the UE 104 and the procedure of two-sided model development may be terminated with no further steps.
- FIG. 4B illustrates an example process 400B for triggering two-sided model development by the base station in accordance with aspects of the present disclosure.
- the procedure of two-sided model development may be triggered by the base station 102.
- the base station 102 may provide the additional information, such as the applicable conditions, on the functionality of the AI/ML-based CSI compression to the UE 104.
- the UE 104 may feed back an ACK message and activate the functionality for two-sided model development. Otherwise, a NACK message may be provided to the base station 102 and the procedure of two-sided model development may be terminated with no further steps.
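The two trigger flows above (FIG. 4A, UE-triggered; FIG. 4B, base-station-triggered) can be reduced to a toy ACK/NACK sketch. This is illustrative only; no real RRC encoding or state machine is implied.

```python
# Toy model of the two trigger flows for two-sided model development:
# FIG. 4A - the UE inquires and the base station replies ACK (with applicable
# conditions) or NACK; FIG. 4B - the base station offers the functionality and
# the UE replies ACK or NACK. Purely illustrative.

def ue_triggered(bs_supports_functionality, applicable_conditions):
    """FIG. 4A: UE inquires; BS ACKs with conditions, or NACKs and terminates."""
    if bs_supports_functionality:
        return ("ACK", applicable_conditions)   # development continues
    return ("NACK", None)                        # procedure terminates

def bs_triggered(ue_accepts_conditions):
    """FIG. 4B: BS provides conditions; UE ACKs (activates) or NACKs."""
    return "ACK" if ue_accepts_conditions else "NACK"
```

In both flows a NACK ends the procedure with no further steps, so only the ACK branch proceeds to model development.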
- the UE and the base station may perform the two-sided model development by training/updating the two-sided model and transferring related parameters of the trained/updated part (s) of model.
- the part (s) of model trained at the base station and transferred to a UE on request can be a CSI generation part, a CSI reconstruction part or both parts.
- FIG. 5 illustrates an example process 500 for requirements on model transfer associated with AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- the procedure and the information exchanged between the base station 102 and the UE 104 for indicating the part (s) of a two-sided model to be transferred and the preferred model structure are illustrated in FIG. 5, which can include the inquiry from the base station 102 (if needed) and the indication of the needed part (s) and preferred model structure from the UE 104.
- the base station 102 may inquire which part (s) are needed by the UE 104, i.e., {CSI generation part} , {CSI reconstruction part} or {CSI generation part, CSI reconstruction part} .
- the step 501 may be optional and may be needed if, for example, the two-sided model development is triggered by the base station 102 as illustrated in FIG. 4B.
- the UE 104 may report the indication of the needed part (s) of the two-sided model and the model structure (s) of the needed part (s) .
- the indication transmitted at 502 may include the following information: an indication of one of {CSI generation part} , {CSI reconstruction part} or {CSI generation part, CSI reconstruction part} ; a temporal ID assigned with the preference by the UE 104; and model structure-related information.
- the model structure-related information may include at least the model types, input/output information, number of layers, filter size, etc.
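The indication at step 502 can be pictured as a small structured record: the needed part(s), the UE-assigned temporal ID, and per-part model structure information. The field names below are assumptions for illustration, not standardized information elements.

```python
from dataclasses import dataclass, field

# Illustrative encoding of the step-502 indication: needed part(s), a
# UE-assigned temporal ID, and model structure-related information (model
# type, input/output shapes, number of layers, filter size). Hypothetical names.

@dataclass
class ModelStructureInfo:
    model_type: str       # e.g., "CNN" or "Transformer"
    input_shape: tuple    # e.g., (ports, subbands)
    output_shape: tuple
    num_layers: int
    filter_size: int

@dataclass
class NeededPartsIndication:
    # subset of {"CSI generation part", "CSI reconstruction part"}
    needed_parts: frozenset
    temporal_id: int      # assigned with the preference by the UE
    structures: dict = field(default_factory=dict)  # part name -> structure

ind = NeededPartsIndication(
    needed_parts=frozenset({"CSI generation part"}),
    temporal_id=7,
    structures={"CSI generation part": ModelStructureInfo(
        "CNN", (32, 13), (64,), num_layers=4, filter_size=3)})
```

Such a record could then be serialized (e.g., into an OCTET STRING container) for transmission in an RRC message, as described below.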
- the information may be transmitted via an SRB using an RRC message, e.g., as an OCTET STRING container in an RRC message.
- the information may be transmitted in a dedicated data radio bearer.
- the dedicated radio bearer may be terminated at a dedicated protocol layer used for AI/ML model handling, and the dedicated protocol layer may be above the packet data convergence protocol (PDCP) layer.
- the base station 102 may know which part (s) of model need to be transferred to the UE 104 and which model structure the part (s) of model should use. It should be noted that the process 500 is merely for illustration. Other processes are also possible. For example, the part (s) of model that need to be transferred to the UE 104 may be predefined and the UE 104 may only report the model structure for the needed part (s) of model.
- the base station may assist in training the CSI generation part together with a local CSI reconstruction part at the base station, followed by transferring the parameters of the CSI generation part to the UE after the model training.
- the UE reports the preferred model structure for the CSI generation part and the part (s) of model that need to be transferred to the UE is the CSI generation part.
- the base station 102 may transmit, to the UE 104, the set of parameters of the trained CSI generation part (i.e., the CSI generation part 630 in FIG. 6B) and an assigned ID.
- the assigned ID may be associated with the temporal ID, which is provided in process 500 in the initialization/pre-training procedure.
- the UE 104 may apply the parameters on the local CSI generation part at the UE 104.
- the local CSI generation part at the UE 104 may be paired with the CSI reconstruction part at the base station 102 (i.e., the CSI reconstruction part 610 in FIG. 6B) based on the assigned ID.
- the UE 104 may confirm the deployment of the parameters on the CSI generation part and the ID assignment. After the confirmation, this two-sided model can be activated/deactivated/switched by the life cycle management (LCM) decision made by the base station 102. This confirmation can be sent via RRC, MAC CE or UCI. There may be a predefined delay between the confirmation of the deployment and the activation/application of the corresponding model.
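The deploy-then-pair-then-confirm sequence above can be sketched in a few lines. Names and structures are illustrative (not standardized signaling): the UE applies the received parameter set to its local CSI generation part, checks the assigned ID against the mapping from its earlier temporal ID, and builds the confirmation that would be sent via RRC, MAC CE, or UCI.

```python
# Sketch of the deployment-and-pairing step: apply received parameters to the
# local CSI generation part, pair via the assigned ID (associated with the
# UE's temporal ID from the initialization/pre-training procedure), and form
# the confirmation. Purely illustrative names and data structures.

def apply_and_pair(local_generation_part, received_params, assigned_id,
                   temporal_to_assigned):
    local_generation_part["weights"] = received_params  # deploy the parameters
    # pairing succeeds only if the assigned ID is one the UE expects
    paired = assigned_id in temporal_to_assigned.values()
    return {"deployed": True, "assigned_id": assigned_id, "paired": paired}

conf = apply_and_pair({"weights": None}, [0.1, -0.2], assigned_id=42,
                      temporal_to_assigned={7: 42})  # temporal ID 7 -> ID 42
```

The ID association is what lets the UE-side generation part and the base-station-side reconstruction part be matched across vendors without exchanging the models themselves.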
- This information may at least include the configurations of the parameters in the set, such as size, values, and quantization level, and an associated ID of the model to be applied, such as a model ID if defined.
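Since the configuration above mentions a quantization level for the transferred parameters, a minimal uniform quantizer shows how such a level (bits per parameter) maps values to integer codes and back. The value range, clipping, and rounding rule below are assumptions for illustration, not a codec specified by this disclosure.

```python
# Minimal uniform quantizer: num_bits sets the quantization level, so each
# parameter is carried as an integer code in [0, 2**num_bits - 1]. The range
# [lo, hi] and rounding are illustrative assumptions.

def quantize(params, num_bits, lo=-1.0, hi=1.0):
    levels = (1 << num_bits) - 1
    step = (hi - lo) / levels
    return [round((max(lo, min(hi, p)) - lo) / step) for p in params]

def dequantize(codes, num_bits, lo=-1.0, hi=1.0):
    levels = (1 << num_bits) - 1
    step = (hi - lo) / levels
    return [lo + c * step for c in codes]

codes = quantize([0.5, -0.25, 1.2], num_bits=8)   # 1.2 is clipped to 1.0
recovered = dequantize(codes, num_bits=8)
```

A higher quantization level shrinks the reconstruction error at the cost of a larger transferred payload, which is the trade-off the configuration information would capture.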
- transfer of the information from the base station 102 to the UE 104 in the DL can be considered.
- the information may be transmitted via an SRB using an RRC message, e.g., as an OCTET STRING container in an RRC message.
- the information may be transmitted in a dedicated data radio bearer.
- the dedicated radio bearer may be terminated at a dedicated protocol layer used for AI/ML model handling, and the dedicated protocol layer may be above the PDCP layer.
- the base station may transfer the parameters of the CSI reconstruction part of the re-trained two-sided model to the UE, and the UE may train the local CSI generation part with the updated local CSI reconstruction part.
- the UE reports the preferred model structure for the CSI reconstruction part and the part (s) of model that need to be transferred to the UE is the CSI reconstruction part.
- FIG. 7A illustrates a second example process for model transfer associated with AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- the base station 102 may concatenate the CSI reconstruction part of the two-sided model, built according to the received preferred-structure indication, with the local CSI generation part, and start training and testing the concatenated model.
- FIG. 7B illustrates an example conceptual diagram 700B of the two-sided model for training at the base station in the second process in FIG. 7A.
- the local parts of model at the base station may have the corresponding local model structures of the base station. If the UE reports the preferred model structure for the CSI reconstruction part, the base station may rebuild a CSI reconstruction part 740 based on the preferred model structure reported by the UE. Instead of training the local CSI reconstruction part 710 and the local CSI generation part 720, the base station may concatenate the rebuilt CSI reconstruction part 740 and the local CSI generation part 720 and perform model training accordingly.
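The rebuild-and-concatenate step can be pictured with labeled callables standing in for neural-network parts: the base station builds a placeholder reconstruction part from the UE-reported structure and chains it after its local generation part to form the two-sided model to be trained. The structure fields and scaling are purely illustrative assumptions.

```python
# Sketch of rebuild-and-concatenate at the base station: a CSI reconstruction
# part is built from UE-reported structure information and chained after the
# local CSI generation part. Simple callables stand in for real networks; the
# structure dictionary and the 1/num_layers scaling are illustrative only.

def rebuild_part(structure):
    """Build a placeholder part from reported structure information."""
    scale = 1.0 / structure["num_layers"]          # stand-in for real layers
    return lambda z: [scale * v for v in z]

def concatenate(generation_part, reconstruction_part):
    """Two-sided model: reconstruction applied to the generation output."""
    return lambda x: reconstruction_part(generation_part(x))

local_generation = lambda x: [0.5 * v for v in x]  # local BS-side encoder
rebuilt_reconstruction = rebuild_part({"num_layers": 4, "filter_size": 3})
two_sided = concatenate(local_generation, rebuilt_reconstruction)
```

Training would then run over the concatenated callable end to end, exactly as the text describes for the rebuilt reconstruction part paired with the local generation part.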
- the base station 102 may transmit, to the UE 104, the parameters of the trained CSI reconstruction part and an assigned ID associated with the temporal ID, which is provided in the initialization/pre-training procedure.
- the UE 104 may concatenate the local CSI generation part of the two-sided model with the CSI reconstruction part updated with the received parameters, and start training and testing the concatenated model.
- FIG. 7C illustrates an example conceptual diagram 700C of the two-sided model for training at the UE in the second process in FIG. 7A.
- the local parts of model at the UE may have the corresponding local model structures of the UE.
- the CSI reconstruction part 750 has the same model structure as reported to the base station and applied in the rebuilt CSI reconstruction part 740 at the base station in FIG. 7B.
- the UE may update the CSI reconstruction part 750 based on the parameters.
- the UE may concatenate the updated CSI reconstruction part 750 and the local CSI generation part 760 and perform model training accordingly.
- the UE 104 may use the trained CSI generation part 760 to pair with the CSI reconstruction part at the base station 102 (e.g., the CSI reconstruction part 740 or the CSI reconstruction part 710 in FIG. 7B) based on the assigned ID.
- the UE may confirm the completion of training on the CSI generation part and the ID assignment. After the confirmation, this two-sided model can be activated/deactivated/switched by the LCM decision made by the base station 102. This confirmation can be sent via RRC, MAC CE or UCI. There may be a predefined delay between the confirmation of the deployment and the activation/application of the corresponding model.
- This information may at least include the configurations of the parameters in the set, such as size, values, and quantization level, and an associated ID of the model to be applied, such as a model ID if defined.
- transfer of the information from the base station 102 to the UE 104 in the DL can be considered.
- the information may be transmitted via an SRB using an RRC message, e.g., as an OCTET STRING container in an RRC message.
- the information may be transmitted in a dedicated data radio bearer.
- the dedicated radio bearer may be terminated at a dedicated protocol layer used for AI/ML model handling, and the dedicated protocol layer may be above the PDCP layer.
- the base station may assist in training both parts using the reported preferred model structures for both the CSI generation and reconstruction parts, followed by transferring the parameters of the CSI generation part to the UE after model training.
- the UE reports the preferred model structures for the CSI reconstruction part and CSI generation part.
- FIG. 8A illustrates a third example process for model transfer associated with AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- the base station 102 may re-train the whole model with updated CSI generation and reconstruction parts according to the preferred model structures.
- FIG. 8B illustrates an example conceptual diagram 800B of the two-sided model for training at the base station in the third process in FIG. 8A.
- the local parts of model at the base station may have the corresponding local model structures of the base station. If the UE reports the preferred model structures for the CSI reconstruction part and CSI generation part, the base station may rebuild a CSI generation part 830 and a CSI reconstruction part 840 based on the preferred model structures reported by the UE. Instead of training the local CSI reconstruction part 810 and the local CSI generation part 820, the base station may concatenate the rebuilt CSI generation part 830 and the rebuilt CSI reconstruction part 840 and perform model training accordingly.
- the base station 102 may transmit, to the UE 104, the set of parameters of the trained CSI generation part (i.e., the CSI generation part 830 in FIG. 8B) and an assigned ID.
- the assigned ID may be associated with the temporal ID, which is provided in process 500 in the initialization/pre-training procedure.
- the parameters of the CSI reconstruction part may be optionally provided.
- the UE 104 may apply the parameters on the local CSI generation part at the UE 104.
- the local CSI generation part at the UE 104 may be paired with the CSI reconstruction part at the base station 102 (e.g., the CSI reconstruction part 840 or the CSI reconstruction part 810 in FIG. 8B) based on the same associated ID, which is provided in the initialization/pre-training procedure.
- the UE 104 may confirm the deployment of the parameters on the CSI generation part and the ID assignment. After the confirmation, this two-sided model can be activated/deactivated/switched by the LCM decision made by the base station 102. This confirmation can be sent via RRC, MAC CE or UCI. There may be a predefined delay between the confirmation of the deployment and the activation/application of the corresponding model.
- This information may at least include the configurations of the parameters in the set, such as size, values, and quantization level, and an associated ID of the model to be applied, such as a model ID if defined.
- transfer of the information from the base station 102 to the UE 104 in the DL can be considered.
- the information may be transmitted via an SRB using an RRC message, e.g., as an OCTET STRING container in an RRC message.
- the information may be transmitted in a dedicated data radio bearer.
- the dedicated radio bearer may be terminated at a dedicated protocol layer used for AI/ML model handling, and the dedicated protocol layer may be above the PDCP layer.
- a procedure and associated signalling are designed to support inter-vendor collaboration for the AI/ML-based CSI compression scheme using a two-sided model via model transfer.
- a set of information is transmitted from the UE to the base station to indicate the part (s) of a two-sided model with preferred model structure to support model transfer for training and updating.
- model parameters and the association between these model parameters may be reported to facilitate model pairing between multiple vendors.
- the procedures and associated signalling to pair the two parts of a two-sided model are designed.
- the part (s) needed by the UE may be trained at the base station based on the preferred model structure reported by the UE, followed by the parameter transfer to the UE with the associated ID.
- FIG. 9 illustrates an example of a device 900 that supports AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- the device 900 may be an example of a UE 104 or a base station 102 as described herein.
- the device 900 may support wireless communication with one or more network entities 102, UEs 104, or any combination thereof.
- the device 900 may include components for bi-directional communications including components for transmitting and receiving communications, such as a processor 902, a memory 904, a transceiver 906, and, optionally, an I/O controller 908. These components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more interfaces (e.g., buses) .
- the processor 902, the memory 904, the transceiver 906, or various combinations thereof or various components thereof may be examples of means for performing various aspects of the present disclosure as described herein.
- the processor 902, the memory 904, the transceiver 906, or various combinations or components thereof may support a method for performing one or more of the operations described herein.
- the processor 902, the memory 904, the transceiver 906, or various combinations or components thereof may be implemented in hardware (e.g., in communications management circuitry) .
- the hardware may include a processor, a digital signal processor (DSP) , an application-specific integrated circuit (ASIC) , a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure.
- the processor 902 and the memory 904 coupled with the processor 902 may be configured to perform one or more of the functions described herein (e.g., executing, by the processor 902, instructions stored in the memory 904) .
- the processor 902 may support wireless communication at the device 900 in accordance with examples as disclosed herein.
- the processor 902 may be configured to or operable to support a means for transmitting, to a base station, an indication of at least one model structure for at least one part of model, wherein the at least one part of model is associated with a functionality on an artificial intelligence or machine learning (AI/ML) based channel state information (CSI) compression; and a means for receiving, from the base station, information of the at least one part of model trained using the at least one indicated model structure.
- the processor 902 may support wireless communication at the device 900 in accordance with examples as disclosed herein.
- the processor 902 may be configured to or operable to support a means for receiving, from a user equipment (UE) , an indication of at least one model structure for at least one part of model, wherein the at least one part of model is associated with a functionality on an artificial intelligence or machine learning (AI/ML) based channel state information (CSI) compression; a means for training the at least one part of model using the at least one indicated model structure; and a means for transmitting, to the UE, information of the at least one part of model.
- the processor 902 may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof) .
- the processor 902 may be configured to operate a memory array using a memory controller.
- a memory controller may be integrated into the processor 902.
- the processor 902 may be configured to execute computer-readable instructions stored in a memory (e.g., the memory 904) to cause the device 900 to perform various functions of the present disclosure.
- the memory 904 may include random access memory (RAM) and read-only memory (ROM) .
- the memory 904 may store computer-readable, computer-executable code including instructions that, when executed by the processor 902, cause the device 900 to perform various functions described herein.
- the code may be stored in a non-transitory computer-readable medium such as system memory or another type of memory.
- the code may not be directly executable by the processor 902 but may cause a computer (e.g., when compiled and executed) to perform functions described herein.
- the memory 904 may include, among other things, a basic I/O system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices.
- the I/O controller 908 may manage input and output signals for the device 900.
- the I/O controller 908 may also manage peripherals not integrated into the device 900.
- the I/O controller 908 may represent a physical connection or port to an external peripheral.
- the I/O controller 908 may utilize a known operating system.
- the I/O controller 908 may be implemented as part of a processor, such as the processor 902.
- a user may interact with the device 900 via the I/O controller 908 or via hardware components controlled by the I/O controller 908.
- a transmit chain may be configured to generate and transmit signals (e.g., control information, data, packets) .
- the transmit chain may include at least one modulator for modulating data onto a carrier signal, preparing the signal for transmission over a wireless medium.
- the at least one modulator may be configured to support one or more techniques such as amplitude modulation (AM) , frequency modulation (FM) , or digital modulation schemes like phase-shift keying (PSK) or quadrature amplitude modulation (QAM) .
- the transmit chain may also include at least one power amplifier configured to amplify the modulated signal to an appropriate power level suitable for transmission over the wireless medium.
- the transmit chain may also include one or more antennas 910 for transmitting the amplified signal into the air or wireless medium.
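The digital modulation step of the transmit chain above can be illustrated with a toy Gray-coded QPSK mapper, a simple member of the PSK family mentioned earlier. This is a generic textbook mapping, not a mapping specified by the present disclosure; amplification and the antenna stage are omitted.

```python
import math

# Toy Gray-coded QPSK mapper illustrating the modulation step: each pair of
# bits selects one of four unit-energy constellation points. Illustrative only.

QPSK = {
    (0, 0): complex( 1,  1) / math.sqrt(2),
    (0, 1): complex(-1,  1) / math.sqrt(2),
    (1, 1): complex(-1, -1) / math.sqrt(2),
    (1, 0): complex( 1, -1) / math.sqrt(2),
}

def modulate(bits):
    """Map an even-length bit sequence to QPSK symbols, two bits per symbol."""
    assert len(bits) % 2 == 0, "QPSK carries two bits per symbol"
    return [QPSK[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

symbols = modulate([0, 0, 1, 1])   # two symbols in opposite quadrants
```

Gray coding places adjacent constellation points one bit apart, so the most likely demodulation errors flip only a single bit.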
- a receive chain may be configured to receive signals (e.g., control information, data, packets) over a wireless medium.
- the receive chain may include one or more antennas 910 for receiving the signal over the air or wireless medium.
- the receive chain may include at least one amplifier (e.g., a low-noise amplifier (LNA) ) configured to amplify the received signal.
- the receive chain may include at least one demodulator configured to demodulate the received signal and obtain the transmitted data by reversing the modulation technique applied during transmission of the signal.
- the receive chain may include at least one decoder for decoding the demodulated signal to recover the transmitted data.
- FIG. 10 illustrates an example of a processor 1000 that supports AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- the processor 1000 may be an example of a processor configured to perform various operations in accordance with examples as described herein.
- the processor 1000 may include a controller 1002 configured to perform various operations in accordance with examples as described herein.
- the processor 1000 may optionally include at least one memory 1004, such as L1/L2/L3 cache. Additionally, or alternatively, the processor 1000 may optionally include one or more arithmetic-logic units (ALUs) 1006.
- One or more of these components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more interfaces (e.g., buses) .
- the processor 1000 may be a processor chipset and include a protocol stack (e.g., a software stack) executed by the processor chipset to perform various operations (e.g., receiving, obtaining, retrieving, transmitting, outputting, forwarding, storing, determining, identifying, accessing, writing, reading) in accordance with examples as described herein.
- the processor chipset may include one or more cores and one or more caches (e.g., memory local to or included in the processor chipset (e.g., the processor 1000) ) or other memory (e.g., random access memory (RAM) , read-only memory (ROM) , dynamic RAM (DRAM) , synchronous dynamic RAM (SDRAM) , static RAM (SRAM) , ferroelectric RAM (FeRAM) , magnetic RAM (MRAM) , resistive RAM (RRAM) , flash memory, phase change memory (PCM) , and others) .
- the controller 1002 may be configured to manage and coordinate various operations (e.g., signaling, receiving, obtaining, retrieving, transmitting, outputting, forwarding, storing, determining, identifying, accessing, writing, reading) of the processor 1000 to cause the processor 1000 to support various operations of a base station in accordance with examples as described herein.
- the controller 1002 may operate as a control unit of the processor 1000, generating control signals that manage the operation of various components of the processor 1000. These control signals may enable or disable functional units, select data paths, initiate memory access, and coordinate the timing of operations.
- the controller 1002 may be configured to fetch (e.g., obtain, retrieve, receive) instructions from the memory 1004 and determine subsequent instruction (s) to be executed to cause the processor 1000 to support various operations in accordance with examples as described herein.
- the controller 1002 may be configured to track memory address of instructions associated with the memory 1004.
- the controller 1002 may be configured to decode instructions to determine the operation to be performed and the operands involved.
- the controller 1002 may be configured to interpret the instruction and determine control signals to be output to other components of the processor 1000 to cause the processor 1000 to support various operations in accordance with examples as described herein.
- the controller 1002 may be configured to manage flow of data within the processor 1000.
- the controller 1002 may be configured to control transfer of data between registers, arithmetic logic units (ALUs) , and other functional units of the processor 1000.
- the memory 1004 may include one or more caches (e.g., memory local to or included in the processor 1000 or other memory, such as RAM, ROM, DRAM, SDRAM, SRAM, MRAM, flash memory, etc.) . In some implementations, the memory 1004 may reside within or on a processor chipset (e.g., local to the processor 1000) . In some other implementations, the memory 1004 may reside external to the processor chipset (e.g., remote to the processor 1000) .
- the memory 1004 may store computer-readable, computer-executable code including instructions that, when executed by the processor 1000, cause the processor 1000 to perform various functions described herein.
- the code may be stored in a non-transitory computer-readable medium such as system memory or another type of memory.
- the controller 1002 and/or the processor 1000 may be configured to execute computer-readable instructions stored in the memory 1004 to cause the processor 1000 to perform various functions.
- the processor 1000 and/or the controller 1002 may be coupled with or to the memory 1004, and the processor 1000, the controller 1002, and the memory 1004 may be configured to perform various functions described herein.
- the processor 1000 may include multiple processors and the memory 1004 may include multiple memories. One or more of the multiple processors may be coupled with one or more of the multiple memories, which may, individually or collectively, be configured to perform various functions herein.
- the one or more ALUs 1006 may be configured to support various operations in accordance with examples as described herein.
- the one or more ALUs 1006 may reside within or on a processor chipset (e.g., the processor 1000) .
- the one or more ALUs 1006 may reside external to the processor chipset (e.g., the processor 1000) .
- One or more ALUs 1006 may perform one or more computations such as addition, subtraction, multiplication, and division on data.
- one or more ALUs 1006 may receive input operands and an operation code, which determines an operation to be executed.
- One or more ALUs 1006 may be configured with a variety of logical and arithmetic circuits, including adders, subtractors, shifters, and logic gates, to process and manipulate the data according to the operation. Additionally, or alternatively, the one or more ALUs 1006 may support logical operations such as AND, OR, exclusive-OR (XOR) , not-OR (NOR) , and not-AND (NAND) , enabling the one or more ALUs 1006 to handle conditional operations, comparisons, and bitwise operations.
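The arithmetic and logical operations listed above can be pictured as a dispatch on an operation code. The sketch below is a purely illustrative 8-bit ALU model; the opcode names and width are assumptions, not hardware described by this disclosure.

```python
# Illustrative 8-bit ALU dispatch: an operation code selects the arithmetic or
# logical circuit to apply to the input operands. Opcode names are assumptions.

def alu(opcode, a, b):
    ops = {
        "ADD":  lambda: a + b,
        "SUB":  lambda: a - b,
        "AND":  lambda: a & b,
        "OR":   lambda: a | b,
        "XOR":  lambda: a ^ b,
        "NOR":  lambda: ~(a | b) & 0xFF,   # mask to the 8-bit example width
        "NAND": lambda: ~(a & b) & 0xFF,
        "SHL":  lambda: (a << b) & 0xFF,   # shifter
    }
    return ops[opcode]()
```

The opcode-selects-circuit pattern mirrors how the controller 1002 supplies operands and an operation code to the ALU, as described above.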
- the processor 1000 may support wireless communication in accordance with examples as disclosed herein.
- the processor 1000 may be configured to or operable to support a means for transmitting, to a base station, an indication of at least one model structure for at least one part of model, wherein the at least one part of model is associated with a functionality on an artificial intelligence or machine learning (AI/ML) based channel state information (CSI) compression; and a means for receiving, from the base station, information of the at least one part of model trained using the at least one indicated model structure.
- the processor 1000 may support wireless communication in accordance with examples as disclosed herein.
- the processor 1000 may be configured to or operable to support a means for receiving, from a user equipment (UE) , an indication of at least one model structure for at least one part of model, wherein the at least one part of model is associated with a functionality on an artificial intelligence or machine learning (AI/ML) based channel state information (CSI) compression; a means for training the at least one part of model using the at least one indicated model structure; and a means for transmitting, to the UE, information of the at least one part of model.
- FIG. 11 illustrates a flowchart of a method 1100 that supports AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- the operations of the method 1100 may be implemented by a device or its components as described herein.
- the operations of the method 1100 may be performed by the UE 104 as described herein.
- the device may execute a set of instructions to control the function elements of the device to perform the described functions. Additionally, or alternatively, the device may perform aspects of the described functions using special-purpose hardware.
- the method may include transmitting, to a base station, an indication of at least one model structure for at least one part of model, wherein the at least one part of model is associated with a functionality on an artificial intelligence or machine learning (AI/ML) based channel state information (CSI) compression.
- the method may include receiving, from the base station, information of the at least one part of model trained using the at least one indicated model structure.
- the operations of 1110 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 1110 may be performed by a device as described with reference to FIG. 1A.
- FIG. 12 illustrates a flowchart of a method 1200 that supports AI/ML-based CSI compression in accordance with aspects of the present disclosure.
- the operations of the method 1200 may be implemented by a device or its components as described herein.
- the operations of the method 1200 may be performed by the base station 102 as described herein.
- the device may execute a set of instructions to control the function elements of the device to perform the described functions. Additionally, or alternatively, the device may perform aspects of the described functions using special-purpose hardware.
- the method may include receiving, from a user equipment (UE) , an indication of at least one model structure for at least one part of model, wherein the at least one part of model is associated with a functionality on an artificial intelligence or machine learning (AI/ML) based channel state information (CSI) compression.
- the method may include training the at least one part of model using the at least one indicated model structure.
- the operations of 1210 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 1210 may be performed by a device as described with reference to FIG. 1A.
- the method may include transmitting, to the UE, information of the at least one part of the model.
- the operations of 1215 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 1215 may be performed by a device as described with reference to FIG. 1A.
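The base-station-side flow above (receive a structure indication, train the model part with that structure, return the trained-model information) can be sketched as a simple message exchange. Everything here is hypothetical: the message fields, class names, and the per-layer parameter format are invented stand-ins, not signaling defined by the disclosure.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the signaling flow: the UE indicates a model
# structure for one model part; the base station trains that part using
# the indicated structure and returns the trained-model information.

@dataclass
class ModelStructureIndication:
    functionality: str            # e.g. "ai-ml-csi-compression" (assumed label)
    part: str                     # which model part, e.g. "encoder" (UE side)
    layers: list                  # assumed structure descriptor: layer widths

@dataclass
class TrainedModelInfo:
    part: str
    structure: ModelStructureIndication
    parameters: dict = field(default_factory=dict)

class BaseStation:
    def train_part(self, ind: ModelStructureIndication) -> TrainedModelInfo:
        # Stand-in for actual training: produce per-layer parameter blobs
        # whose shapes follow the indicated structure.
        params = {f"layer_{i}": [0.0] * n for i, n in enumerate(ind.layers)}
        return TrainedModelInfo(ind.part, ind, params)

# UE side: indicate a structure for the UE-side (encoder) model part.
indication = ModelStructureIndication(
    functionality="ai-ml-csi-compression",
    part="encoder",
    layers=[32, 16, 4],
)
info = BaseStation().train_part(indication)
print(info.part, sorted(info.parameters))
```

The design point illustrated is that the structure indication fixes the shape of the model part before training, so the information returned to the UE is guaranteed to fit a structure the UE can run.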
- a general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine.
- a processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).
- the functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described herein may be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.
- Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
- a non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special-purpose computer.
- non-transitory computer-readable media may include RAM, ROM, electrically erasable programmable ROM (EEPROM), flash memory, compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that may be used to carry or store desired program code means in the form of instructions or data structures and that may be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor.
- an article “a” before an element is unrestricted and understood to refer to “at least one” of those elements or “one or more” of those elements.
- the terms “a,” “at least one,” “one or more,” and “at least one of one or more” may be interchangeable.
- a list of items indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C) .
- the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an example step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure.
- the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.”
- a “set” may include one or more elements.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Power Engineering (AREA)
- Computer Security & Cryptography (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
Various aspects of the present disclosure relate to artificial intelligence or machine learning (AI/ML) based channel state information (CSI) compression. In one aspect, a user equipment (UE) transmits, to a base station, an indication of at least one model structure for at least one part of a model. The at least one model part is associated with a functionality for AI/ML-based CSI compression. The UE then receives, from the base station, information of the at least one model part trained using the at least one indicated model structure.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2024/092949 WO2025055364A1 (fr) | 2024-05-13 | 2024-05-13 | AI/ML-based CSI compression |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2024/092949 WO2025055364A1 (fr) | 2024-05-13 | 2024-05-13 | AI/ML-based CSI compression |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025055364A1 true WO2025055364A1 (fr) | 2025-03-20 |
Family
ID=95020888
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2024/092949 Pending WO2025055364A1 (fr) | 2024-05-13 | 2024-05-13 | Compression de csi basée sur l'ia/ml |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025055364A1 (fr) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2022266582A1 (fr) * | 2021-06-15 | 2022-12-22 | Qualcomm Incorporated | Configuring a machine learning model in wireless networks |
| WO2022265400A1 (fr) * | 2021-06-15 | 2022-12-22 | Samsung Electronics Co., Ltd. | Method and apparatus for reference symbol pattern adaptation |
| CN116017543A (zh) * | 2022-12-27 | 2023-04-25 | 京信网络系统股份有限公司 | Channel state information feedback enhancement method, apparatus, system and storage medium |
| WO2024031538A1 (fr) * | 2022-08-11 | 2024-02-15 | Qualcomm Incorporated | Frequency-domain compression of channel state information |
- 2024
  - 2024-05-13: WO patent application PCT/CN2024/092949 filed, published as WO2025055364A1 (fr), status active, Pending
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2024187797A1 (fr) | Data collection devices and methods | |
| WO2024109110A1 (fr) | Provision of assistance information for gcv | |
| WO2024156270A1 (fr) | Synchronization for an ambient iot device | |
| WO2024207851A1 (fr) | Techniques for supporting native artificial intelligence in wireless communications systems | |
| WO2024093447A1 (fr) | Preparation procedure for ltm | |
| WO2024093428A1 (fr) | Mechanism for cho with candidate scgs | |
| WO2025055364A1 (fr) | Ai/ml-based csi compression | |
| WO2025194829A1 (fr) | Beam prediction | |
| WO2024179075A1 (fr) | Model pairing for ai/ml-based csi compression | |
| WO2024213187A1 (fr) | Network-side additional condition indication | |
| WO2024222021A1 (fr) | Ue radio data collection | |
| WO2025145707A1 (fr) | Early csi acquisition for l1/l2-triggered mobility | |
| WO2025102789A1 (fr) | Activation and switching of artificial intelligence functionalities | |
| WO2025161490A1 (fr) | Frequency hopping in an iot system | |
| WO2024250732A1 (fr) | Control for an ambient iot system | |
| WO2025213873A1 (fr) | Higher-layer-based measurement data collection | |
| WO2024093275A1 (fr) | Transmission configuration indicator state group | |
| WO2025232292A1 (fr) | Csi report model | |
| WO2025175815A1 (fr) | Data collection for supported functionalities | |
| WO2024148935A1 (fr) | Life cycle management supporting ai/ml for air interface enhancement | |
| WO2024183322A1 (fr) | Csi compression | |
| WO2025107712A1 (fr) | Segmentation for an a-iot system | |
| WO2024183344A1 (fr) | Configuration for channel state information reference signal | |
| WO2025097815A1 (fr) | Transmission in an a-iot system | |
| WO2025123687A1 (fr) | Interference measurement and reporting in an a-iot system | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 24864089 Country of ref document: EP Kind code of ref document: A1 |