WO2024096780A1 - Methods for implicit csi feedback with rank greater than one - Google Patents
- Publication number
- WO2024096780A1 (PCT/SE2023/051052)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- layer
- channel
- rank
- mapping
- csi
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B7/00—Radio transmission systems, i.e. using radiation field
- H04B7/02—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
- H04B7/04—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
- H04B7/06—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
- H04B7/0613—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission
- H04B7/0615—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal
- H04B7/0619—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal using feedback from receiving side
- H04B7/0621—Feedback content
- H04B7/063—Parameters other than those covered in groups H04B7/0623 - H04B7/0634, e.g. channel matrix rank or transmit mode selection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0495—Quantised networks; Sparse networks; Compressed networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B7/00—Radio transmission systems, i.e. using radiation field
- H04B7/02—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
- H04B7/04—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
- H04B7/06—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
- H04B7/0613—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission
- H04B7/0615—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal
- H04B7/0619—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal using feedback from receiving side
- H04B7/0658—Feedback reduction
- H04B7/0663—Feedback reduction using vector or matrix manipulations
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
- H03M7/3059—Digital compression and data reduction techniques where the original information is represented by a subset or similar information, e.g. lossy compression
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
- H03M7/60—General implementation details not specific to a particular type of compression
- H03M7/6041—Compression optimized for errors
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
- H03M7/70—Type of the data to be coded, other than image and sound
Definitions
- This disclosure relates to the communication of channel state information (CSI) from a User Equipment (UE) to a base station (e.g. a Next Generation (NG) NodeB (gNB)).
- CSI channel state information
- UE User Equipment
- gNB Next Generation NodeB
- The 5th generation (5G) mobile wireless communication system, New Radio (NR), uses Orthogonal Frequency Division Multiplexing (OFDM) with configurable bandwidths and subcarrier spacings to efficiently support a diverse set of use cases and deployment scenarios.
- OFDM Orthogonal Frequency Division Multiplexing
- LTE Long Term Evolution
- Compared to LTE, NR improves deployment flexibility, user throughputs, latency and reliability.
- MIMO Multiple Input Multiple Output
- MU-MIMO Multi-User Multiple Input Multiple Output
- MU-MIMO operation is illustrated in Figure 1, where a multi-antenna base station with N_TX antenna ports spatially multiplexes information to several UEs, in which sequence S(1) is aimed for UE(1), S(2) is aimed for UE(2), etc. Before modulation and transmission, a precoder W(i) is applied to each sequence to spatially separate the transmissions, i.e., to mitigate multiplexing interference.
- Each UE demodulates its received signal and combines its receive antenna signals to obtain an estimate Ŝ(i) of the transmitted sequence.
- This estimate can be expressed as Ŝ(i) = H(i)W(i)S(i) + Σ_{j≠i} H(i)W(j)S(j) + n(i), where the second term represents the spatial multiplexing interference seen by UE(i) and n(i) is noise.
- The goal for the base station is to construct the set of precoders {W(j)} such that the norm of each interference term H(i)W(j), j ≠ i, is small relative to the desired term H(i)W(i).
- That is, the precoder W(i) shall correlate well with the channel H(i) observed by UE(i), whereas it shall correlate poorly with the channels observed by the other UEs.
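The interplay between the desired term and the interference term can be illustrated numerically. The following sketch (with assumed dimensions, single-antenna UEs, and simple matched-filter precoders, none of which come from the disclosure) shows that a precoder built from a UE's own channel yields a desired term equal to the channel norm, while the interference term is bounded by it:

```python
import numpy as np

# Toy MU-MIMO setup (all dimensions are illustrative assumptions):
# 2 single-antenna UEs, 4 TX antennas at the base station.
rng = np.random.default_rng(0)
n_tx, n_ue = 4, 2

# Random narrowband channels H(i), one row vector per UE.
H = [rng.standard_normal((1, n_tx)) + 1j * rng.standard_normal((1, n_tx))
     for _ in range(n_ue)]

# Matched-filter precoders W(i) = H(i)^H / ||H(i)||: each W(i) correlates
# well with its own UE's channel, but in general imperfectly poorly with others.
W = [h.conj().T / np.linalg.norm(h) for h in H]

s = np.array([1.0 + 0j, -1.0 + 0j])  # one symbol per UE

# Estimate at UE(0): desired term plus spatial multiplexing interference.
desired = (H[0] @ W[0]).item() * s[0]
interference = (H[0] @ W[1]).item() * s[1]
s_hat = desired + interference

print(abs(desired), abs(interference))
```

By Cauchy-Schwarz the interference magnitude can never exceed ||H(0)||, and with well-separated channels it is typically far smaller than the desired term.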
- the base station needs to acquire detailed knowledge of the channels H(i).
- Channel knowledge can be acquired from sounding reference signals (SRS) that are transmitted periodically, or on demand, by active UEs. Based on these SRS, the base station estimates the uplink channel and, assuming reciprocity, the downlink channels H(i). However, when channel reciprocity does not hold or when SRS coverage is limited, active UEs need to feed back channel details to the base station. In NR (as well as in LTE), this is done by having the base station periodically transmit Channel State Information reference signals (CSI-RS) from which a UE can estimate its channel. The UE then reports CSI from which the base station can determine suitable precoders for MU-MIMO.
- CSI-RS Channel State Information reference signals
- the CSI feedback mechanism targeting MU-MIMO operations in NR is referred to as CSI type II, in which a UE reports CSI feedback with high CSI resolution (Reference [1]). It is based on specifying sets of Discrete Fourier Transform (DFT) base functions (grid of beams) from which the UE selects those that best match its channel conditions (like classical codebook Precoder Matrix Indicator (PMI)).
- DFT Discrete Fourier Transform
- PMI Precoder Matrix Indicator
- the number of beams the UE reports is configurable via Radio Resource Control (RRC) signaling, and may be 2 or 4 for Rel-15 Type II or 2, 4 or 6 for Rel-16 Type II.
- RRC Radio Resource Control
- the CSI report can be further compressed in the frequency domain (FD), where a set of FD DFT basis vectors are selected by the UE.
- The number of selected FD basis vectors is a function of the number of Channel Quality Indicator (CQI) subbands, the number of PMI subbands per CQI subband, and a ratio that determines the FD compression (termed p_v in Reference [1], where v is the layer index), which is configured by the gNB via RRC signaling.
- CQI Channel Quality Indicator
- the UE also reports non-zero coefficients (NZCs) associated with the selected beams for Rel-15 Type II, which informs the gNB how these beams should be combined in terms of relative amplitude scaling and co-phasing for each subband.
- NZCs non-zero coefficients
- the reported NZCs are then associated with selected beams and FD basis vectors.
- To further compress the CSI report, the gNB also configures a ratio (termed β in Reference [1]) to the UE via RRC signaling, which determines the maximum number of NZCs to be reported. For example, for a single-layer transmission where 2L beams and M FD basis vectors are configured by the gNB, there are in total 2LM linear combination coefficients.
- The precoder reported by the UE can be expressed as W = W1·W2·Wf^H, where W1 contains the selected spatial-domain beams, W2 contains the linear combination coefficients, and Wf contains the selected FD basis vectors.
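The Type II precoder structure (selected SD beams, linear combination coefficients, and FD basis vectors) can be sketched as follows. The factorization W = W1·W2·Wf^H and all dimensions are illustrative assumptions, with random coefficients standing in for the UE-reported NZCs:

```python
import numpy as np

# Illustrative single-layer Type II style precoder reconstruction.
rng = np.random.default_rng(1)
n_ports, L, M, n_sb = 32, 4, 4, 13  # TX ports, SD beams/polarization, FD bases, subbands

# W1: block-diagonal matrix of 2L selected DFT beams (same beams per polarization).
dft = np.fft.fft(np.eye(n_ports // 2)) / np.sqrt(n_ports // 2)
beams = dft[:, :L]                              # L selected SD beams
W1 = np.block([[beams, np.zeros_like(beams)],
               [np.zeros_like(beams), beams]])  # (n_ports, 2L)

# Wf: M selected FD DFT basis vectors over the subbands.
Wf = np.fft.fft(np.eye(n_sb))[:, :M] / np.sqrt(n_sb)  # (n_sb, M)

# W2: 2L x M linear combination coefficients (amplitude and co-phase);
# the UE would report only the non-zero ones.
W2 = rng.standard_normal((2 * L, M)) + 1j * rng.standard_normal((2 * L, M))

W = W1 @ W2 @ Wf.conj().T  # (n_ports, n_sb): one precoder column per subband
print(W.shape)
```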
- The reporting overhead for Type II CSI is generally large, especially compared to Type I CSI.
- A dominant part of the reporting overhead is from subband reporting, e.g., the layer-specific NZCs. For instance, about 7 bits (the actual number depends on the release version and parameter configuration) are required to report the phase and amplitude of one coefficient.
- Autoencoders for Artificial Intelligence (AI)/Machine Learning (ML)-enhanced CSI reporting - Recently, neural network based autoencoders (AEs) have shown promising results for compressing downlink MIMO channel estimates for uplink (UL) feedback.
- AEs can be used to improve the accuracy of reported CSI from the UE to the network (NW).
- an AE is a type of artificial neural network (NN) that can be used to compress and decompress data, in an unsupervised manner, often with high fidelity.
- FIG. 3 illustrates a simple fully connected (dense) AE.
- the AE is divided into two parts:
- AEs can have different architectures.
- AEs can be based on dense NNs, multi-dimensional convolution NNs, variational, recurrent NNs, transformer networks, or any combination thereof.
- all AE architectures possess an encoder-bottleneck-decoder structure illustrated in Figure 3.
- The size of the codeword (denoted by Y in Figure 3) of an AE is typically a lot smaller than the size of the input data (denoted by X in Figure 3).
- the AE encoder thus reduces the dimensionality of the input features X down to Y.
- the decoder part of the AE tries to invert the encoder and reconstruct X with minimal error, according to some predefined loss function.
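A minimal numerical sketch of the encoder-bottleneck-decoder structure, using untrained random dense weights purely to show the dimensionality reduction from X to the codeword Y (all sizes are assumptions):

```python
import numpy as np

# Minimal dense autoencoder forward pass illustrating the
# encoder-bottleneck-decoder structure.
rng = np.random.default_rng(2)
n_in, n_code = 256, 32           # input features X, codeword Y (bottleneck)

relu = lambda a: np.maximum(a, 0.0)

# Random (untrained) weights, just to show the shapes involved.
W_enc = rng.standard_normal((n_code, n_in)) * 0.05
W_dec = rng.standard_normal((n_in, n_code)) * 0.05

x = rng.standard_normal(n_in)    # input features X
y = relu(W_enc @ x)              # codeword Y: much smaller than X
x_hat = W_dec @ y                # reconstruction X-hat, same size as X

print(x.shape, y.shape, x_hat.shape)
```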
- Figure 4 illustrates how an AE might be used for AI/ML-enhanced CSI reporting in NR.
- the UE measures the channel in the downlink using CSI-RS.
- The UE estimates the channel for each subcarrier (SC), from each base station transmit (TX) antenna, at each UE receive (RX) antenna.
- SC subcarrier
- TX base station transmit
- RX UE receive
- the estimate can be viewed as a three-dimensional channel matrix.
- the 3D channel matrix represents the MIMO channel estimated over several SCs and is input to the encoder.
- the AE encoder is implemented in the UE, and the AE decoder is implemented in the NW.
- the output of the AE encoder is signalled from the UE to the NW over the uplink.
- The codeword can be viewed as a learned latent representation of the channel.
- The architecture of an AE (e.g., the number of layers, nodes per layer, activation functions, etc.) is typically adapted to the properties of the data (e.g., CSI-RS channel estimates).
- the channel size, uplink feedback rate, and hardware limitations of the encoder and decoder all need to be considered when optimizing the AE’s architecture.
- The weights and biases of an AE are trained to minimize the reconstruction error (the error between the input X and the output X̂) on some training dataset.
- For example, the weights and biases can be trained to minimize the mean squared error (MSE) (X − X̂)².
- MSE mean squared error
- Model training is typically done using some variant of the gradient descent algorithm on a large training data set. To achieve good performance during live operation, the training data set should be representative of the actual data the AE will encounter during live operation.
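The training procedure can be sketched for a toy linear AE: plain gradient descent on the MSE between X and X̂ over a synthetic training set whose samples lie in a low-dimensional subspace (all sizes, the learning rate and the iteration count are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_code, n_samples = 16, 4, 512

# Toy training data lying in a 4-dimensional subspace, so a bottleneck of
# size 4 can represent it well.
basis = rng.standard_normal((n_in, n_code))
X = rng.standard_normal((n_samples, n_code)) @ basis.T

W_enc = rng.standard_normal((n_code, n_in)) * 0.1   # encoder weights
W_dec = rng.standard_normal((n_in, n_code)) * 0.1   # decoder weights
lr = 0.01

def reconstruction_mse():
    X_hat = (X @ W_enc.T) @ W_dec.T
    return float(np.mean((X - X_hat) ** 2))

loss0 = reconstruction_mse()
for _ in range(300):
    Y = X @ W_enc.T              # codewords for the whole batch
    E = Y @ W_dec.T - X          # reconstruction error X_hat - X
    # Gradient descent on the (unnormalized) squared error.
    W_dec -= lr * 2 * E.T @ Y / n_samples
    W_enc -= lr * 2 * (E @ W_dec).T @ X / n_samples
loss1 = reconstruction_mse()
print(loss0, loss1)
```

A real system would use a deep nonlinear AE and stochastic mini-batch training, but the descent mechanics are the same.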
- the output of the UE-side encoder needs to be communicated over the air interface to the gNB decoder with the assigned CSI reporting payload and, therefore, needs to be quantized to a finite number of bits (e.g., 1-4 bits per sample for the Uplink Control Information (UCI)) to obtain an efficient transmission, as shown in Figure 5.
- Figure 5 illustrates a quantization operation at the output of the encoder to fit the CSI payload over the air interface. Accordingly, a quantization layer is usually connected at the output of the encoder or directly included in the encoder.
- the quantization layer quantizes the output of each neuron of the encoder output layer (the bottleneck layer of AE) to generate bits to fit the CSI reporting payload in the UCI.
- this quantization may be done only during the inference (i.e., quantization non-aware training) or may also be included during the training (i.e., quantization-aware training) (Reference [5]).
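A simple uniform quantization layer of the kind described above might look as follows; the clipping range [0, 1] and the per-neuron bit width are assumptions:

```python
import numpy as np

def quantize(y, bits):
    """Uniformly quantize each bottleneck neuron's output to `bits` bits."""
    levels = 2 ** bits
    yc = np.clip(y, 0.0, 1.0)                                # assumed output range
    return np.minimum((yc * levels).astype(int), levels - 1)  # integer index per neuron

def dequantize(idx, bits):
    """Decoder-side reconstruction: the midpoint of each quantization bin."""
    levels = 2 ** bits
    return (idx + 0.5) / levels

y = np.array([0.03, 0.42, 0.77, 0.99])  # example bottleneck outputs
idx = quantize(y, bits=2)               # 2 bits -> 4 levels per neuron
y_hat = dequantize(idx, bits=2)
print(idx, y_hat)
```

With B bits per neuron, the reconstruction error per neuron is bounded by half a bin width, i.e. 1/2^(B+1), which is the trade-off the CSI payload size controls.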
- Pre-processing for input data to the AE - Proper pre-processing of the input to the encoder can greatly reduce the size and complexity of designing and/or training an AI/ML model, while at the same time improving the scalability and transferability of the model.
- a pre-processing method could be a transformation of the channel from antenna-frequency domain to beam-delay domain.
- the pre-processing is used to reduce the need for multiple models depending on bandwidth variation and variation in the number of antenna ports at the gNB.
- the channel representation in the antenna-frequency domain is usually rich and hard to compress, however, its equivalent form in the beam-delay domain is sparse and easier to compress.
- Such sparsity reflects the physical interpretation of a propagation channel. That is, it reflects how the numerous sinusoidal signals traverse from the transmitting end, along different paths, to the receiving end.
- each beam can be associated with a certain direction of a propagation path, and each delay can reflect the relative difference in distance if a signal propagates along different paths.
- each pair of beam and delay is associated with a single propagation path, if there is infinite spatial resolution and delay resolution.
- the sparsity can be further exploited by removing a number of insignificant beams and delays, so that the input dimensions could also be reduced with a marginal loss, likely resulting in smaller AI/ML models.
- The beam-delay transformation and feature extraction can be applied in both cases of explicit channel feedback and eigenvector-based feedback.
- The explicit channel feedback is discussed in Reference [7]. Herein, the focus is on implicit eigenvector-based feedback.
- The first step is that the UE measures the channel on CSI-RS. For example, assume the UE has 4 RX ports, the configured CSI format has 32 virtual TX ports, and the bandwidth is 52 Resource Blocks (RBs), corresponding to 10 MHz at 15 kHz subcarrier spacing.
- RBs Resource Blocks
- Figure 6 shows the pre-processing for implicit feedback of precoding matrices. The steps are as follows:
- The UE does a spatial-domain DFT on the 32x4 matrix per RB and selects the L strongest beams out of 16 (for one polarization). This is done in a wideband manner, including the spatial oversampling of the spatial-domain (SD) basis, and the same beams are used for both polarizations.
- The covariance of the beam-space channel is summed over, e.g., 4 RBs to produce a covariance matrix for each subband.
- For each covariance matrix (per subband), the UE extracts a number of eigenvectors and may select the rank, i.e., the number of layers.
- the UE does a frequency domain DFT per layer, transforming to delay domain, whereafter it selects the M strongest taps.
- The resulting tensor of dimensions 2L × number of layers × M is called the linear combination coefficients and can be used to reconstruct the precoding matrices suggested by the UE.
- a tap is a cluster in the wireless channel through which the signal propagates between the transmitter and the receiver. There can be many clusters in the wireless channel, where each cluster represents the reflections and/or scattering which the signal encounters during the transmission. Each cluster results in a propagation delay and signal attenuation.
- each tap is associated with a delay and path gain value
- The M strongest such taps are selected (there are >M clusters in the wireless channel through which the signal propagates between the gNB and UE). Further information about taps can be found in 3GPP TS 38.214 V16.7.0, section 5.2.2.2.5.
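The pre-processing steps above can be sketched end to end. The dimensions follow the example in the text (32 virtual TX ports, 4 RX ports, 52 RBs); the values of L, M, the rank, and the 4-RB subband size are assumed configuration choices, spatial oversampling is omitted, and a random channel stands in for the CSI-RS estimate:

```python
import numpy as np

rng = np.random.default_rng(4)
n_tx, n_rx, n_rb = 32, 4, 52
L, M, rank = 4, 6, 2      # assumed configuration
rb_per_sb = 4
n_sb = n_rb // rb_per_sb  # 13 subbands

# Per-RB channel estimates from CSI-RS: (n_rb, n_tx, n_rx).
H = (rng.standard_normal((n_rb, n_tx, n_rx))
     + 1j * rng.standard_normal((n_rb, n_tx, n_rx)))

# 1) Spatial-domain DFT per polarization (16 beams each), wideband beam power,
#    and the same L strongest beams for both polarizations.
F = np.fft.fft(np.eye(n_tx // 2)) / np.sqrt(n_tx // 2)
Hp = H.reshape(n_rb, 2, n_tx // 2, n_rx)          # split polarizations
Hb = np.einsum('ij,rpjn->rpin', F.conj().T, Hp)   # beam-domain channel
power = np.sum(np.abs(Hb) ** 2, axis=(0, 1, 3))   # wideband power per beam
sel = np.argsort(power)[-L:]                      # L strongest beams
Hb = Hb[:, :, sel, :].reshape(n_rb, 2 * L, n_rx)  # (n_rb, 2L, n_rx)

# 2)-3) Per-subband covariance and strongest eigenvectors -> per-layer vectors.
V = np.zeros((rank, n_sb, 2 * L), dtype=complex)
for s in range(n_sb):
    blk = Hb[s * rb_per_sb:(s + 1) * rb_per_sb]   # RBs in this subband
    R = sum(h @ h.conj().T for h in blk)          # (2L, 2L) covariance
    w, v = np.linalg.eigh(R)                      # ascending eigenvalues
    V[:, s, :] = v[:, ::-1][:, :rank].T           # strongest eigenvectors first

# 4) Frequency-domain DFT per layer (to delay domain) and M strongest taps.
Vd = np.fft.fft(V, axis=1) / np.sqrt(n_sb)
tap_power = np.sum(np.abs(Vd) ** 2, axis=(0, 2))  # power per delay tap
taps = np.argsort(tap_power)[-M:]
coeffs = Vd[:, taps, :]   # linear combination coefficients (rank, M, 2L)
print(coeffs.shape)
```

The `coeffs` tensor (here ordered as rank × M × 2L rather than 2L × layers × M) is what would be fed to the AI/ML model.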
- the tensor of linear combination coefficients is used as input in the AI/ML model.
- the input could be further enhanced with information about the selected beams and taps, noise levels, etc.
- the output payload (#UCI bits) can be optimized and be different for each rank
- Rank selection is expected to be outside the AI/ML model, as the UE selects the model to use and reports the corresponding RI.
- Legacy rank selection method may be re-used.
- UCI (standardized):
  - Possibly a latent space payload indicator
  - UCI contains the latent space information associated with the preferred precoder for a rank r transmission, for each subband, where r is given by the Rank Indication (RI) field in the UCI
- Model output (standardized):
  - The model output indicates the preferred precoder (per subband) for a rank r transmission, where r is given by the RI field in the UCI
- Figure 7 shows a rank specific model with a rank 3 selection by the legacy RI selection module. A total of four models need to be trained.
- Rank selection may be inside the AI/ML model, in which case the AI/ML model also outputs the preferred rank RI.
- The output payload (#UCI bits) may be optimized to depend on the output rank RI from the model.
- UCI (standardized):
  - Possibly a latent space payload indicator
  - UCI contains the latent space information associated with the preferred precoder for a rank r transmission, for each subband, where r is given by the RI field in the UCI
- Model output (standardized):
  - The model output indicates the preferred precoder (per subband) for a rank r transmission, where r is given by the RI field in the UCI
- Figure 8 shows a rank common model which implies a rank 3 specific model is used. A single model needs to be trained.
- One AI/ML model is trained to be used for all layers and applied repeatedly to the corresponding layers to perform individual inference
  - Layers are ordered and numbered, e.g., the lowest layer index corresponds to the largest eigenvalue
- UCI (standardized) contains:
  - A rank indicator RI
  - Possibly a latent space payload indicator for a layer
  - Each layer latent space is represented by the same number of bits
- Model output (standardized):
  - The model output indicates the eigenvector(s) of the RI strongest layers, per subband, where the number of layers is given by the RI field in the UCI
- Figure 9 illustrates a layer common and rank independent model with a rank 3 selection, which implies the rank 3 specific model is used, consisting of three identical models, one for each layer.
- a single layer model needs to be trained
- a pre-processing step maps the channel to e.g., eigenvectors (layers)
- Layers are ordered and numbered, e.g., the lowest layer index corresponds to the largest eigenvalue
- the output payload (#UCI bits) can be optimized and different for each layer
- UCI contains:
  - A rank indicator RI
  - Possibly a latent space payload indicator, per layer
  - Each layer latent space is represented by a different number of bits
- Model output (standardized):
  - The model output indicates the eigenvector(s) of the RI strongest layers, per subband, where the number of layers is given by the RI field in the UCI
- Figure 10 illustrates a layer specific and rank independent model with a rank 3 selection, which implies the rank 3 specific model is used, consisting of three different models, one for each layer.
- Four layer models need to be trained.
- a pre-processing step maps the channel to e.g., eigenvectors (layers).
- the output payload (#UCI bits) can be optimized and different for each rank
- UCI contains:
  - A rank indicator RI
  - Possibly a latent space payload indicator, per layer and per rank
  - Each layer latent space is represented by a different number of bits
- Model output (standardized):
  - The model output indicates the eigenvector(s) of the RI strongest layers, per subband, where the number of layers is given by the RI field in the UCI
- Figure 11 illustrates a layer common and rank dependent model with a rank 3 selection, which implies the rank 3 specific model is used, consisting of three identical models, one for each layer.
- Four layer models need to be trained.
- a pre-processing step maps the channel to e.g. eigenvectors (layers).
- Pre-processing to extract the layers (e.g., eigenvectors) is necessary
- Separate AI/ML models are trained for all layers and for each rank and applied to the corresponding layers to perform individual inference
  - Layers are ordered and numbered, e.g., the lowest layer index corresponds to the largest eigenvalue
- the output payload (#UCI bits) can be optimized and different for each rank and each layer within each rank
- UCI contains:
  - A rank indicator RI
  - Possibly a latent space payload indicator, per layer and per rank
  - Each layer latent space for each rank is represented by a different number of bits
- Model output (standardized):
  - The model output indicates the eigenvector(s) of the RI strongest layers, per subband, where the number of layers is given by the RI field in the UCI
- Figure 12 illustrates a layer specific and rank dependent model with a rank 3 selection, which implies the rank 3 specific model is used, consisting of three different models, one for each layer.
- Ten layer models need to be trained
- a preprocessing step maps the channel to e.g. eigenvectors (layers)
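For a maximum rank of 4, the number of models each of the designs above requires can be tallied directly; the layer specific and rank dependent case needs one model per (rank, layer) pair, giving 1 + 2 + 3 + 4 = 10:

```python
# Model counts for each design taxonomy, assuming a maximum rank of 4
# (as in the rank 3 examples above).
max_rank = 4
counts = {
    "rank specific": max_rank,                     # one model per rank
    "rank common": 1,                              # a single model
    "layer common, rank independent": 1,           # one shared layer model
    "layer specific, rank independent": max_rank,  # one model per layer
    "layer common, rank dependent": max_rank,      # one shared layer model per rank
    "layer specific, rank dependent": sum(range(1, max_rank + 1)),  # per (rank, layer)
}
print(counts)
```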
- When the encoder at the UE compresses, quantizes, and feeds back the layer information of the estimated channel (for example, the eigenvectors per layer) to the gNB, it can do so using either the same number of bits per layer or a different number of bits per layer, depending on the rank and/or the ordering of the layers.
- Without a pre-defined mapping, the UE has to explicitly signal to the gNB the mapping between the feedback bits and the layers, such that the gNB can reconstruct the layer information by processing the bits in the correct order. This requires additional UCI overhead. Further, with a fixed CSI payload configured by the gNB, the additional UCI overhead may reduce the number of bits available for feedback of the compressed layer information, thereby hampering the reconstruction performance at the gNB.
- The proposed solution provides a methodology for feedback of compressed and quantized information from the encoder deployed at the UE to the decoder deployed at the gNB, for each layer of the estimated rank, without additional UCI overhead.
- A method performed by a user equipment, UE, comprises: using an encoder of an autoencoder to compress multiple input multiple output, MIMO, channel state information, CSI, for a channel between the UE and a base station; forming a CSI report using a mapping between quantised bits representing the compressed MIMO CSI and layers of the channel, wherein the mapping is dependent on an estimated rank for the channel and/or a layer order; and sending, to the base station, the CSI report and a rank indication for the estimated rank.
- MIMO multiple input multiple output
- CSI channel state information
- A method performed by a base station using multi-user multiple input multiple output, MIMO, comprises: receiving, from a user equipment, UE, a channel state information, CSI, report comprising quantised bits representing compressed MIMO CSI for a channel between the UE and the base station, and a rank indication for the channel; and using a decoder of an autoencoder to decompress the compressed MIMO CSI based on (i) the rank indication and (ii) a mapping between quantised bits representing the compressed MIMO CSI and layers of the channel, wherein the mapping is dependent on an estimated rank for the channel and/or a layer order.
- a computer program product comprising a computer readable medium having computer readable code embodied therein, the computer readable code being configured such that, on execution by a suitable computer or processor, the computer or processor is caused to perform the method according to the first aspect, the second aspect, or any embodiment thereof.
- a user equipment configured to perform the method according to the first aspect or any embodiment thereof.
- a user equipment comprising a processor and a memory, said memory containing instructions executable by said processor whereby said UE is operative to perform the method according to the first aspect or any embodiment thereof.
- a base station configured to perform the method according to the second aspect or any embodiment thereof.
- a base station comprising a processor and a memory, said memory containing instructions executable by said processor whereby said base station is operative to perform the method according to the second aspect or any embodiment thereof.
- a user equipment comprising: processing circuitry configured to cause the user equipment to perform any of the steps of any of the methods according to the first aspect or any embodiment thereof; and power supply circuitry configured to supply power to the processing circuitry.
- a base station comprising: processing circuitry configured to cause the base station to perform any of the steps of any of the methods according to the second aspect or any embodiment thereof; and power supply circuitry configured to supply power to the processing circuitry.
- a user equipment comprising: an antenna configured to send and receive wireless signals; radio front-end circuitry connected to the antenna and to processing circuitry, and configured to condition signals communicated between the antenna and the processing circuitry; the processing circuitry being configured to perform any of the steps of any of the methods according to the first aspect or any embodiment thereof; an input interface connected to the processing circuitry and configured to allow input of information into the UE to be processed by the processing circuitry; an output interface connected to the processing circuitry and configured to output information from the UE that has been processed by the processing circuitry; and a battery connected to the processing circuitry and configured to supply power to the UE.
- Certain embodiments may provide one or more of the following technical advantage(s).
- The proposed solution provides a methodology for an implicit mapping between the feedback bits from the encoder deployed at the UE and the layer information, based on the RI and/or the layer ordering, which allows the base station (e.g. gNB) to reconstruct the layer information through the decoder without additional UCI overhead. Further solutions allowing explicit signaling between the gNB and the UE are discussed to provide a more flexible configuration for the layer information processing.
- Figure 1 illustrates MU-MIMO operations
- Figure 2 illustrates CSI type II feedback
- Figure 3 illustrates a fully connected autoencoder
- Figure 4 illustrates the use of an autoencoder for CSI compression
- Figure 5 illustrates a quantization operation at the output of the encoder to fit a CSI payload over the air interface
- Figure 6 illustrates pre-processing for implicit feedback of precoding matrices
- Figure 7 illustrates a rank specific model showing a rank 3 selection by a legacy RI selection module
- Figure 8 illustrates a rank common model which implies a rank 3 specific model is used
- Figure 9 illustrates a layer common and rank independent model showing a rank 3 selection which implies rank 3 specific model is used consisting of three identical models, one for each layer;
- Figure 10 illustrates a layer specific and rank independent model showing a rank 3 selection which implies rank 3 specific model is used consisting of three different models, one for each layer;
- Figure 11 illustrates a layer common and rank dependent model showing a rank 3 selection which implies rank 3 specific model is used consisting of three identical models, one for each layer;
- Figure 12 illustrates a layer specific and rank dependent model showing a rank 3 selection which implies rank 3 specific model is used consisting of three different models, one for each layer;
- Figure 13 is a flow chart/signalling diagram illustrating operations of a gNB and UE according to some exemplary embodiments
- Figure 14 illustrates an architecture of the channel eigenvector feedback approach for mapping between feedback bits and layer information corresponding to an estimated rank
- Figure 15 illustrates the bottleneck layer for the layer specific and rank independent model with different quantization bits per channel layer
- Figure 16 illustrates the bottleneck layer for the layer specific and rank independent model with different sizes of the bottleneck per channel layer
- Figure 17 illustrates the bottleneck layer for the layer specific and rank independent model with different sizes of the bottleneck per channel layer
- Figure 18 illustrates the bottleneck layer for the layer common and rank dependent model with different quantization bits per channel layer
- Figure 19 illustrates the bottleneck layer for the layer common and rank dependent model with different sizes of the bottleneck per channel layer
- Figure 20 illustrates the bottleneck layer for the layer common and rank dependent model with different sizes of the bottleneck per channel layer
- Figure 21 illustrates the bottleneck layer for the layer specific and rank dependent model with different quantization bits per channel layer
- Figure 22 illustrates the bottleneck layer for the layer specific and rank dependent model with different sizes of the bottleneck per channel layer
- Figure 23 illustrates the bottleneck layer for the layer specific and rank dependent model with different sizes of the bottleneck per channel layer
- Figure 24 illustrates the bottleneck layer of Figure 16 without neurons that have a 0-bit output
- Figure 25 illustrates the bottleneck layer of Figure 19 without neurons that have a 0-bit output
- Figure 26 illustrates the bottleneck layer of Figure 22 without neurons that have a 0-bit output
- Figure 28 is a graph illustrating mean user throughput vs served traffic performance for the layer specific and rank dependent AE model
- Figure 29 is a flow chart illustrating a method performed by a UE in accordance with some embodiments.
- Figure 30 is a flow chart illustrating a method performed by a base station in accordance with some embodiments.
- Figure 31 shows an example of a communication system in accordance with some embodiments.
- Figure 32 shows a UE in accordance with some embodiments
- Figure 33 shows a RAN network node in accordance with some embodiments.
- Figure 34 is a block diagram illustrating a virtualization environment in which functions implemented by some embodiments may be virtualized.
- this disclosure provides a methodology for feedback of compressed and quantized information from the encoder deployed at the UE to the decoder deployed at the gNB for each layer of the estimated rank without additional UCI overhead.
- the methodology involves a pre-defined mapping between the feedback bits and the layer information based on the estimated rank and/or layer ordering.
- the gNB infers the bits required to reconstruct the layer information at the decoder through the rank indicated by the UE.
- an implicit mapping of the quantization bits per neuron and/or the number of feedback bits at the output of the encoder deployed at the UE to the layer information, i.e., the eigenvectors per layer, based on the estimated rank and/or the corresponding layer ordering can be defined such that the gNB can reconstruct the layer information at the decoder output based on the indicated rank without additional UCI overhead.
- Such mapping option/s may be included in specification text or communicated between gNB and UE vendors via, e.g., bilateral agreements.
- mapping may be communicated via over-the-air signaling among the gNB and the UE, e.g., the UE may indicate the used mapping from the options available in the specification or the gNB may specify the mapping that the UE should apply.
- new fields are added to, e.g., the UECapabilityEnquiry RRC message, the UECapabilityInformation RRC message, the Downlink Control Information (DCI), or the UCI.
- the gNB transmits the configured CSI-RS and the CSI reporting payload to the UE.
- the gNB may also indicate a suggested/enforced mapping option specifying the number of quantization bits per neuron.
- the UE receives (as shown by signal 1303) the CSI-RS from the gNB along with the CSI reporting payload size, where the gNB may further configure the maximum number of quantization bits per neuron to be used by the encoder at the UE according to the CSI reporting payload and the AE model.
- the maximum number of quantization bits per neuron that a UE encoder may implement may be bounded by the specification.
- the UE estimates (in step 1304) the downlink (DL) channel with the received CSI-RS, determines an RI, and further processes the channel to obtain the eigenvectors per layer based on the selected RI.
- in step 1304, the UE compresses and quantizes the eigenvectors per layer through the encoder with a pre-defined methodology based on the selected RI and maps the quantized information according to a predefined mapping based on the selected RI.
- the UE reports (step 1305) the output of the AI-based processing unit along with the RI to the gNB in the CSI report, where the UE may further indicate the selected mapping option specifying the number of quantization bits per neuron based on layer order and/or the selected RI.
- a method in a base station, e.g., a gNB, according to some exemplary embodiments:
- the gNB receives the feedback bits from the UE in the CSI report along with the indicated RI. o The gNB may further receive selected mapping information from the UE to process the feedback bits per layer based on the RI.
- the gNB reconstructs the eigenvectors at the output of the decoder (step 1306).
- in step 1307, the gNB can further process the estimated eigenvectors per layer to obtain the precoders per layer for Physical Downlink Shared Channel (PDSCH) transmission.
- Signal 1308 represents the transmission of the PDSCH.
- PDSCH Physical Downlink Shared Channel
- the UE estimates the downlink (DL) channel based on the configured DL reference signals (e.g., CSI-RS, Demodulation Reference Signal (DMRS), etc.), and produces a channel estimate H, for example, in the antenna-frequency domain.
- the raw channel H can be expressed per CSI-RS port (TX side), per receive antenna (RX side), per frequency subband, and measured at one or more points in time.
- the channel H is a four-dimensional matrix or tensor.
- the raw channel estimate H is leveraged to estimate the appropriate rank for the downlink transmission and further processed to extract the eigenvector corresponding to each layer according to the estimated rank r.
- the extracted eigenvectors e_{l_r} are compressed and quantized at the encoder into bits, such that b_{l_r} represents the bits for quantizing the l_r-th layer.
- the processing of the channel to produce the precoders for each transmitted layer through the autoencoder (AE) is shown in Figure 14.
- Figure 14 shows the architecture of the channel eigenvector feedback approach for the proposed methodology for mapping between the feedback bits and the layer information corresponding to the estimated rank.
- H may be further pre-processed to have reduced dimension compared to the raw eigenvectors per layer based on feature extraction of the eigenvectors.
- M can be chosen from Rel-16 Type-II pre-processing defined in Reference [1] (M depends on the value of p_v in 3GPP TS 38.214, where v is the layer index).
- the encoder at the UE compresses and quantizes W 2 , where the reduced dimension of W 2 simplifies the AE model size and training complexity.
- the feedback of NZCs of W_2 contributes the major overhead for Type-II, which can be reduced by leveraging feedback through an AE.
- the above pre-processing for eigenvectors requires the UE to explicitly feed back {L, M} to the gNB, which occupies additional UCI overhead.
- the eigenvectors are the per-layer input of the encoder.
- eigenvector is used in a wide sense that incorporates different ways for the UE to extract precoding information for different layers. Examples are: 1. Eigenvectors of the transmit covariance corresponding to the raw channel H, for a fixed frequency index, and in the most general case also a fixed time index. o This is equivalent to the singular vectors of the raw channel H.
- Eigenvectors of a (weighted) averaged transmit covariance of the channel H estimated over different time and/or frequency resources.
- the aggregation can be, e.g., to average the covariance matrices over 2, 4, or 8 RBs in frequency.
- the approximate eigenvectors can correspond to any of 1 or 2 above.
- eigenvalue is used in a wide sense. The term should be interpreted relative to the term eigenvector, where in general a larger eigenvalue means that the UE estimates better reception quality in the sense of Signal to Noise Ratio (SNR) or Signal to Interference plus Noise Ratio (SINR), for the used precoding vector.
- SNR Signal to Noise Ratio
- SINR Signal to Interference plus Noise Ratio
- ‘network’ and/or a gNB can be understood as a generic network node, gNB, base station, unit within the base station to handle at least some ML operation, relay node, core network node, a core network node that handle at least some ML operations, or a device supporting Device-to-Device (D2D) communication.
- the node may be deployed in a 5G network, or a 6th Generation (6G) network.
- the number of quantization bits for the neurons of the bottleneck layer at the encoder can be set as a function of the layer order. Accordingly, the number of quantization bits for neurons of the bottleneck layer can change inversely with increasing layer order to make the feedback bits fit the CSI reporting payload. Note that the layers can be ordered such that the first layer corresponds to the eigenvector corresponding to the largest eigenvalue.
- the number of quantization bits to compress the l-th layer can be set as q_l = ⌈Q_B/l⌉, where Q_B is the maximum number of quantization bits per neuron of the bottleneck layer of the encoder, r is the estimated rank, and l ∈ {1, 2, ..., r}.
- the gNB can use the first n_b·Q_B feedback bits to reconstruct the eigenvector for the first layer, the next n_b·⌈Q_B/2⌉ feedback bits to reconstruct the eigenvector for the second layer, and so on.
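As a sketch, the layer-order-dependent bit allocation and the gNB-side partitioning described in the bullets above can be illustrated in Python. The function names are illustrative, and the formula q_l = ⌈Q_B/l⌉ is taken from the preceding bullets; this is not a normative implementation.

```python
import math

# Rank independent mapping: per-neuron quantization bits shrink with the
# layer order, q_l = ceil(Q_B / l), so the feedback fits the CSI payload.
def bits_per_layer(n_b: int, q_max: int, rank: int) -> list:
    return [n_b * math.ceil(q_max / l) for l in range(1, rank + 1)]

# gNB side: split the concatenated feedback bits per layer using only the
# indicated rank -- no extra UCI signalling is needed for the mapping itself.
def split_feedback(bits: str, n_b: int, q_max: int, rank: int) -> list:
    chunks, pos = [], 0
    for size in bits_per_layer(n_b, q_max, rank):
        chunks.append(bits[pos:pos + size])
        pos += size
    return chunks
```

For example, with n_b = 20 bottleneck neurons and Q_B = 4, a rank-3 report would carry 80 + 40 + 40 bits under this reading.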
- the number in each neuron represents the quantization bits used to quantize the layer information for the specific layer based on the ordering of layers.
- the quantization bits to quantize the layer information depends on the order of the layer.
- the encoder uses the same model per layer to compress the layers l ∈ {1, 2, ..., r}, independent of the rank, as described in Option 4 above ("Layer specific and rank independent layer models").
- the UE can feed back a different number of bits per layer from the bottleneck layer of the encoder. For example, with the bottleneck layer having n_b neurons and the number of quantization bits per neuron set as q_r, where r is the estimated rank, the UE can feed back the l-th layer with ⌈n_b/l⌉·q_r bits.
- the layers can be ordered such that the first layer corresponds to the eigenvector corresponding to the largest eigenvalue.
- the gNB can use the first n_b·q_r feedback bits to reconstruct the eigenvector for the first layer, the next ⌈n_b/2⌉·q_r feedback bits to reconstruct the eigenvector for the second layer, and so on.
- the UE can feed back only the first ⌈n_b/l⌉·q_r bits out of the n_b·q_r bits for each layer, i.e., the output from the first ⌈n_b/l⌉ neurons at the bottleneck layer for each layer, as shown in Figure 16, or,
- the UE can feed back only bits from the first neuron and every l-th neuron after the first neuron of the output layer of the encoder, as shown in Figure 17.
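The two neuron-selection options above (Figures 16 and 17) can be sketched as follows; the function names are assumptions, and both options are read from the surrounding bullets:

```python
import math

# Option of Figure 16 (as described above): keep the first ceil(n_b / l)
# bottleneck neurons for layer l.
def first_neurons(n_b: int, l: int) -> list:
    return list(range(math.ceil(n_b / l)))

# Option of Figure 17 (as described above): keep the first neuron and every
# l-th neuron after it.
def strided_neurons(n_b: int, l: int) -> list:
    return list(range(0, n_b, l))
```

Both options select the same number of neurons per layer, so the gNB can infer the per-layer bit count from the layer order either way.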
- the number in each neuron represents the quantization bits used to quantize the layer information for the specific layer based on the ordering of layers.
- the feedback bits per layer are taken from the initial portion of the total feedback bits per layer, depending on the order of the layer.
- the encoder uses the same model per layer to compress the layers l ∈ {1, 2, ..., r}, independent of the rank, as described in Option 4 above ("Layer specific and rank independent layer models").
- the number in each neuron represents the quantization bits used to quantize the layer information for the specific layer based on the ordering of layers.
- the feedback bits per layer are taken from non-consecutive neurons in the bottleneck layer of the encoder, depending on the order of the layer.
- the encoder uses the same model per layer to compress the layers l ∈ {1, 2, ..., r}, independent of the rank, as described in Option 4 above ("Layer specific and rank independent layer models").
- the number in each neuron represents the quantization bits used to quantize the layer information for the specific layer based on the estimated rank.
- the quantization bits to quantize the layer information depends on the estimated rank.
- the encoder uses the same model for all the layers based on the estimated r to compress the layers l(r) ∈ {1, 2, ..., r}, as described in Option 5 above ("Layer common and rank dependent layer models").
- the UE can feed back a different number of bits per layer from the bottleneck layer of the encoder, depending on the estimated rank r. For example, with the bottleneck layer having n_b neurons and the number of quantization bits per neuron set as q_r, the UE can feed back ⌈n_b/r⌉·q_r bits per layer to the gNB.
- the gNB can interpret the feedback bits for decoding the eigenvector of each layer based on the indicated rank, i.e., the gNB can use the first ⌈n_b/r⌉·q_r feedback bits to reconstruct the eigenvector for the first layer, the next ⌈n_b/r⌉·q_r feedback bits to reconstruct the eigenvector for the second layer, and so on.
- the UE can feed back only the first ⌈n_b/r⌉·q_r bits out of the n_b·q_r bits for each layer, i.e., the output from the first ⌈n_b/r⌉ neurons for each layer, as shown in Figure 19, or,
- the UE can feedback bits only from the first neuron and every r-th neuron after the first neuron of the output layer of the encoder, as shown in Figure 20.
- the UE feeds back 40, 40, 48, and 48 bits for r = 1, 2, 3, and 4, respectively, to the gNB.
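The layer common, rank dependent sizing above can be sketched as follows. The function names are illustrative; the exact totals in the example depend on the n_b and q_r values of the figure, which are not restated here, so the sketch only captures the ⌈n_b/r⌉·q_r rule from the preceding bullets.

```python
import math

# Layer common, rank dependent mapping: each of the r layers feeds back the
# output of ceil(n_b / r) bottleneck neurons, each quantized with q_r bits.
def per_layer_bits(n_b: int, q_r: int, r: int) -> int:
    return math.ceil(n_b / r) * q_r

# Total CSI payload for the eigenvector feedback across all r layers.
def total_bits(n_b: int, q_r: int, r: int) -> int:
    return r * per_layer_bits(n_b, q_r, r)
```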
- the number in each neuron represents the quantization bits used to quantize the layer information for the specific layer based on the estimated rank.
- the feedback bits per layer are taken from the initial portion of the total feedback bits per layer, depending on the estimated rank.
- the encoder uses the same model for all the layers based on the estimated r to compress the layers l(r) ∈ {1, 2, ..., r}, as described in Option 5 above ("Layer common and rank dependent layer models").
- the number in each neuron represents the quantization bits used to quantize the layer information for the specific layer based on the estimated rank.
- the feedback bits per layer are taken from non-consecutive neurons in the bottleneck layer of the encoder, depending on the estimated rank.
- the encoder uses the same model for all the layers based on the estimated r to compress the layers l(r) ∈ {1, 2, ..., r}, as described in Option 5 above ("Layer common and rank dependent layer models").
- embodiments for layer specific and rank dependent layer models set the quantization bits as a function of the estimated rank and the ordering of the corresponding layers. Accordingly, the quantization bits can change inversely with increasing rank and layer order to make the feedback bits fit the CSI reporting payload.
- the layers can be ordered such that the first layer corresponds to the eigenvector corresponding to the largest eigenvalue.
- the gNB can use the first n_b·⌈Q_B/r⌉ feedback bits to reconstruct the eigenvector for the first layer, the next n_b·⌈Q_B/(2r)⌉ feedback bits to reconstruct the eigenvector for the second layer, and so on.
- the number in each neuron represents the quantization bits used to quantize the layer information for the specific layer based on the estimated rank and the ordering of corresponding layers.
- the quantization bits to quantize the layer information depends on the estimated rank and the order of the corresponding layer.
- the encoder uses a different model for different ranks and the corresponding layers based on the estimated r to compress the layers l_r ∈ {1, 2, ..., r}, as described in Option 6 above ("Layer specific and rank dependent layer models").
- the gNB can decode the feedback bits by processing the bits for different layers depending on the indicated rank and the ordering of the corresponding layers. For example, with the bottleneck layer having n_b neurons and the number of quantization bits per neuron set as q_r, where r is the rank indicated by the UE, the UE can feed back the l_r-th layer with ⌈n_b/(r·l_r)⌉·q_r bits to the gNB.
- the UE can feed back only the first ⌈n_b/(r·l_r)⌉·q_r bits out of the n_b·q_r bits for each layer, i.e., the output from the first ⌈n_b/(r·l_r)⌉ neurons for each layer, as shown in Figure 22, or,
- the UE can feed back bits only from the first neuron and every (r·l_r)-th neuron after the first neuron of the output layer of the encoder, as shown in Figure 23.
- the UE will feed back 40, 32, 32, and 28 bits for r = 1, 2, 3, and 4, respectively, to the gNB.
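One possible reading of the layer specific, rank dependent mapping above can be sketched as follows. The ⌈n_b/(r·l_r)⌉ neuron count is an interpretation of the (partially garbled) source text, and the function names are illustrative; the exact totals in the example depend on figure parameters not restated here.

```python
import math

# Assumed rule: for indicated rank r, layer l keeps the output of
# ceil(n_b / (r * l)) bottleneck neurons, each quantized with q_r bits,
# so higher ranks and later layers get fewer feedback bits.
def layer_bits(n_b: int, q_r: int, r: int, l: int) -> int:
    return math.ceil(n_b / (r * l)) * q_r

# Per-layer feedback sizes for all layers of the indicated rank.
def feedback_sizes(n_b: int, q_r: int, r: int) -> list:
    return [layer_bits(n_b, q_r, r, l) for l in range(1, r + 1)]
```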
- the number in each neuron represents the quantization bits used to quantize the layer information for the specific layer based on the estimated rank and ordering of the layers.
- the feedback bits per layer are taken from the initial portion of the total feedback bits per layer, depending on the estimated rank and the order of the layer.
- the encoder uses a different model for different ranks and the corresponding layers based on the estimated r to compress the layers l_r ∈ {1, 2, ..., r}, as described in Option 6 above ("Layer specific and rank dependent layer models").
- the number in each neuron represents the quantization bits used to quantize the layer information for the specific layer based on the estimated rank and ordering of the layers.
- the feedback bits per layer are taken from non-consecutive neurons in the bottleneck layer of the encoder, depending on the estimated rank and the order of the layer.
- the encoder uses a different model for different ranks and the corresponding layers based on the estimated r to compress the layers l_r ∈ {1, 2, ..., r}, as described in Option 6 above ("Layer specific and rank dependent layer models").
- the denominator value used to determine the number of quantization bits and/or the number of pre-quantized encoder outputs is based on the value of r and/or l_r. It should be understood that the above merely serves as a preferred formula.
- the value of the denominator may be pre-determined, e.g., in the standard text.
- the denominator value can be set to a first, a second, a third, and a fourth denominator value for the first, the second, the third, and the fourth layer or rank, respectively. The set of values may differ depending on the number of configured layers.
- a first set of denominator values is applied for a first number of configured layers and a second set of denominator values is applied for a second number of configured layers.
- the set of denominator values may also additionally or optionally depend on the number of neurons of the bottleneck.
- the values of k(r, l) may be pre-determined, e.g., in the standard text.
- the values of k(r, l) may depend on one or both of the variables r and l, and may be written out as specific values in a table.
- k(r, l) may also additionally or optionally depend on the number of neurons of the bottleneck, i.e., be replaced by k(r, l, n_b).
- the bottleneck layer has n b neurons and the output of some of those are quantized with 0 bits.
- These embodiments can also be implemented by ML-models that exclude those bottleneck neurons.
- Figure 24 illustrates this implementation difference in the example described in Figure 16 for the layer specific and rank independent layer models.
- Figure 25 illustrates this implementation difference in the example described in Figure 19 for the layer common and rank dependent layer models.
- Figure 26 illustrates this implementation difference in the example described in Figure 22 for the layer specific and rank dependent layer models.
- a layer specific and rank dependent model can be defined where AEs are trained separately for all layers for r ≤ r̂ and different AEs are trained separately for all layers for r > r̂.
- the different models for r ≤ r̂ and r > r̂ can refer to the processing defined in the above embodiments, which can result in different processing of the latent space for each layer depending on the rank, by varying either the quantization bits across the neurons of the bottleneck layer and/or the number of active neurons in the bottleneck layer.
- the UE can feed back n_b·q_r bits and ⌈n_b/r⌉·q_r bits per layer for r ≤ r̂ and r > r̂, respectively.
- the gNB can interpret the feedback bits for decoding the eigenvector of each layer based on the indicated rank. Specifically, the gNB can sequentially use n_b·q_r or ⌈n_b/r⌉·q_r feedback bits to reconstruct the eigenvector for each layer if r ≤ r̂ or r > r̂, respectively.
- the AEs are trained separately for each layer, where the UE generates n_b·q_r feedback bits per layer.
- the AEs are trained separately for each layer, where the UE generates feedback bits per layer.
- Embodiments for configuration of latent space for AE model - the gNB and the UE will initially agree on the procedure that the UE follows to select the latent space coefficients (i.e., the output from the neurons at the UE encoder) that will be quantized and transmitted to the gNB.
- these procedures may be provided in specification text, and/or they may be agreed explicitly (e.g., by indicating the specific latent coefficient selection procedure selected when executing a given model) or implicitly (e.g., by associating a latent coefficient selection procedure to a given model) via over-the-air signaling or bilateral vendor agreements.
- some embodiments may describe in the specification text that only the outputs from the first N neurons should be quantized and sent back to the gNB.
- the gNB and the UE will agree on the number of latent space coefficients N that the UE feeds back to the gNB. Such number may be layer-dependent and/or rank-dependent.
- the number of latent space coefficients that can be fed back may be limited to a fixed set of values (e.g., only multiples of two latent coefficients may be allowed). This may both facilitate the signaling - reducing the overhead - and the AI/ML encoder model training time.
- such fixed set of values may be UE-specific and selected by the UE from a set of allowed values provided by the specification.
- a given UE may communicate to the gNB the specific set of values allowed for such UE via, e.g., a field in the RRC UE capability information.
- the gNB will indicate to the UE the number of latent space coefficients that the UE should feed back to the gNB.
- Such indication (which may be a recommendation or enforcement) may be explicit (e.g., by indicating the number of latent space coefficients that the UE should feed back to the gNB) or implicit (e.g., it may be determined by the number of time/frequency resource elements allocated for UCI, the recommended Modulation and Coding Scheme (MCS) for those resources, and the number of bits used to quantize each latent space coefficient).
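The implicit form of this indication can be sketched as follows. The function and parameter names are assumptions, as is the simple floor model of the payload; the point is only that the UCI resource allocation and MCS bound the payload, which together with the per-coefficient bit width fixes how many latent space coefficients fit.

```python
# Implicitly derive the number of latent space coefficients from the UCI
# allocation: n_uci_re resource elements, bits_per_re carried per RE at the
# recommended MCS, and q_bits quantization bits per coefficient.
def implied_num_coefficients(n_uci_re: int, bits_per_re: float, q_bits: int) -> int:
    payload_bits = int(n_uci_re * bits_per_re)
    return payload_bits // q_bits
```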
- MCS Modulation and Coding Scheme
- Such indication may be performed, e.g., via DCI or RRC
- the UE will indicate to the gNB (i.e., within the UCI) the number of latent space coefficients fed back to the gNB
- the gNB and/or the UE will indicate the offset number (positive or negative) of latent space coefficients with respect to a previously agreed baseline number.
- the gNB and the UE will agree on the number of bits that the UE utilizes to quantize each latent space coefficient. Such number may be layerdependent and/or rank-dependent.
- such signaling may be implicit, e.g., there may be a pre-agreement between the gNB and the UE on the number of bits per latent space coefficient, and the number of latent space coefficients requested to be fed back may be directly determined by the number of time/frequency resource elements allocated for UCI and the recommended MCS for those resources.
- the UE will indicate to the gNB (i.e., within the UCI) the number of bits used to quantize the latent space coefficients fed back to the gNB.
- Such indication could be layer- and/or rank-dependent.
- the indication could be defined such that the number of quantization bits for layer X (say 2-bits with four possible quantization levels), is signaled relative to a previous layer, e.g. layer X-1 (say 1-bit indicating same or one less bit used for quantization), to reduce required signaling.
- the gNB and/or the UE will indicate the offset number (positive or negative) of quantization bits per latent space coefficient with respect to a previously agreed baseline number.
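The relative signalling of quantization bits described above (each layer signalled relative to the previous one) can be sketched as follows; the function name and the integer-offset encoding are assumptions.

```python
# Differential decoding of per-layer quantization bit widths: layer X signals
# its width relative to layer X-1 (e.g. 0 = same, -1 = one bit less), which
# needs fewer signalling bits than absolute values.
def decode_differential_qbits(base_q: int, diffs: list) -> list:
    widths = [base_q]
    for d in diffs:
        widths.append(widths[-1] + d)
    return widths
```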
- Embodiments for configuration of maximum quantization bits for latent space coefficients: the maximum number of quantization bits per neuron Q_B at the bottleneck layer of the encoder:
- the UE may communicate to the gNB such capability, e.g., when responding to a gNB-sent UECapabilityEnquiry RRC message with a UECapabilityInformation RRC message.
- the capability may o be sent explicitly in the UECapabilityInformation RRC message, or o be sent implicitly in the sense that the UECapabilityInformation RRC message contains a model id based on which the gNB may look up, or derive, the information.
- • is configured by the gNB depending on the AE model information, e.g., the Model ID, o along with the UCI overhead for CSI reporting and signaled to the UE in DCI, o or through RRC signaling.
- the gNB can configure Q_B such that Q_B ≤ Q_max, where Q_max is the maximum number of quantization bits per neuron that the AE model can process, as provided by the AE model information.
- the gNB may also consider that i) the AE model may have a minimum number of quantization bits per neuron, Q_min, such that Q_B ≥ Q_min, and/or ii) the number of quantization bits used during the training phase, Q_train, which may be a single number or a list of numbers if the AE is trained for multiple quantization levels.
- the gNB may choose Q_B = Q_train, or choose Q_B as one of the entries in the list of trained quantization levels Q_train.
- the NW may configure the UE with the size of the CSI report instead of Q_B.
- the UE may derive the value of Q_B based on the configured CSI report size and the value of n_b.
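The UE-side derivation above can be sketched as follows; the floor-division model and the function name are assumptions, capturing only the idea that Q_B is the largest per-neuron bit width whose total bottleneck output fits the configured report size.

```python
# Derive Q_B from the configured CSI report size (in bits) and the number of
# bottleneck neurons n_b: the largest per-neuron width that fits the report.
def derive_q_b(report_size_bits: int, n_b: int) -> int:
    return report_size_bits // n_b
```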
- the configured CSI report size may, for example, represent the CSI report size for the case where the UE reports CSI with rank 1.
- the configured CSI report size may represent the CSI report size for the case where the UE reports CSI with the maximum rank, i.e., the same as the number of configured layers.
- the value of Q_B may also be determined by the UE and be informed to the NW, e.g., together with the RI.
- the NW may (instead of Q_B) configure the UE with Q_B,max and/or Q_B,min, where Q_B,max and Q_B,min are the maximum and minimum Q_B values that can be used by the UE, respectively.
- such configuration may be performed via DCI and/or via RRC signaling.
- the set of allowed values for Q_B,max and Q_B,min may be provided in the specification.
- the set of allowed values for Q_B,max and Q_B,min for a given model is provided to the network by UE capability signaling, by bilateral agreement between vendors, or through another type of control signaling between the UE and the network node.
- the UE may be configured with several possible values of Q_B, from which the UE can select one value to be used in the CSI report.
- Embodiments for CQI computation based on the configuration of quantization bits for the AE model: in some embodiments, the UE will calculate the CQI to be reported back to the gNB independently of the configuration of quantization bits for the AE model. In other embodiments, the CQI reported back to the gNB will depend on the selected configuration of quantization bits for the AE model. For instance, UEs may apply a fixed offset to the SINR/CQI computed prior to compression and quantization, with such offset being implementation-specific in some embodiments and provided or limited by the specification in others.
- Figure 28 illustrates the advantage of the proposed solution with the model described in Figure 27, i.e., a layer specific and rank dependent AE model.
- CSI-RS ports are non-beamformed so that each antenna port transmits one CSI-RS port.
- the antenna-frequency domain 'raw' channel H has a dimension of 32 × 52 × 4 (N_TX × N_SB × N_RX).
- the resultant linear combination coefficient W 2 is compressed and quantized at the encoder per layer.
- the training data set consists of unquantized W_2 from the Rel-16 Type-II based pre-processing, where the AE is trained to minimize the Normalized Mean Square Error (NMSE) of the reconstructed W_2 across the layers for the estimated rank.
- NMSE Normalized Mean Square Error
- the gNB can process all the 40 feedback bits for layer 1.
- the gNB can process the first 40 feedback bits for layer 1 and the final 40 feedback bits for layer 2.
- the gNB can process the first 20 feedback bits for layer 1, the next 20 feedback bits for layer 2, and the final 20 bits for layer 3.
- the gNB can process the first 20 feedback bits for layer 1, the next 20 feedback bits for layer 2, the next 20 bits for layer 3, and the final 20 bits for layer 4.
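The gNB-side processing in this worked example can be sketched as follows. The per-layer bit counts in the table are taken directly from the bullets above (40 bits per layer for ranks 1-2, 20 bits per layer for ranks 3-4); the table is specific to this example, not a general rule, and the names are illustrative.

```python
# Per-layer feedback bit counts, read off the example above.
RANK_TO_LAYER_BITS = {1: [40], 2: [40, 40], 3: [20, 20, 20], 4: [20, 20, 20, 20]}

# gNB side: slice the concatenated feedback into per-layer chunks using only
# the indicated rank.
def split_by_rank(feedback: str, rank: int) -> list:
    chunks, pos = [], 0
    for size in RANK_TO_LAYER_BITS[rank]:
        chunks.append(feedback[pos:pos + size])
        pos += size
    return chunks
```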
- Figure 28 shows the mean user throughput vs served traffic performance for the layer specific and rank dependent AE model, with the legacy Rel-16 Type-II with different parameter combinations (parComb) as the baseline for r ≥ 1 transmission.
- the evaluation was done with 7 sites and 10000 UEs in a UMi scenario.
- the overhead for Rel-16 Type-II ParComb1 is 62, 113, 100, and 111 bits for rank 1, 2, 3, and 4, respectively.
- the overhead for Rel-16 Type-II ParComb2 is 91, 169, 156, and 167 bits for rank 1, 2, 3, and 4, respectively.
- the proposed method has a throughput gain of around 3 to 5% over parComb1 and a 30-40% overhead reduction over parComb2, depending on the traffic load.
- Figure 29 is a flow chart illustrating a method according to various embodiments performed by a UE.
- the UE may perform the method in response to executing suitably formulated computer readable code.
- the computer readable code may be embodied or stored on a computer readable medium, such as a memory chip, optical disc, or other storage medium.
- the computer readable medium may be part of a computer program product.
- the UE uses an encoder of an autoencoder to compress MIMO CSI for a channel between the UE and a base station.
- the UE forms a CSI report using a mapping between quantised bits representing the compressed MIMO CSI and layers of the channel. The mapping is dependent on an estimated rank for the channel and/or a layer order.
- a layer of the channel is a data stream transmitted from the base station (e.g. gNB). Multiple layers denote multiple data streams (spatial multiplexing).
- the layers are transmitted from the gNB using different ‘generalized beams’ represented by different precoders.
- the precoders can be the eigenvector of the downlink channel, or some function of the eigenvectors.
- the layer information is considered to be extracted from the estimated CSI at the UE, which can be eigenvectors or some pre-processed eigenvectors, then compressed at the encoder and fed back to the gNB.
- step 2903 the UE sends the CSI report and a rank indication for the estimated rank to the base station.
- the mapping used in step 2902 may be defined in a 3GPP Standard Specification.
- the UE sends a mapping indication to the base station that comprises an indication of the mapping used by the UE to form the CSI report.
- the UE receives a mapping indication from the base station that comprises an indication of the mapping to use to form the CSI report.
- the mapping indication may be a suggested mapping option, and the UE can determine a mapping to use to form the CSI report from the suggested mapping option and at least one other mapping option.
- the mapping indication can be an enforced mapping that the UE is required to use.
- the CSI report formed in step 2902 may comprise the quantised bits concatenated across all layers of the channel according to the layer order.
- the quantised bits may be output by the encoder of the autoencoder at the UE.
- compressed MIMO CSI is output by the encoder of the autoencoder at the UE, and the method further comprises the UE, prior to forming the CSI report, quantising the compressed MIMO CSI.
- compressed MIMO CSI is output by the encoder of the autoencoder at the UE, and the method further comprises the UE, prior to forming the CSI report, quantising a subset of the output compressed MIMO CSI corresponding to selected neurons of the encoder of the autoencoder.
- the subset can be determined based on one or more of: layer; the estimated rank; UE capability information; RRC signalling; an indication received from the base station; DCI; a number of time and/or frequency resource elements allocated for UCI; and an active encoder model used by the UE.
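The report formation described above (quantising a possibly layer-dependent subset of encoder outputs and concatenating the bits across layers in the layer order) can be sketched as follows; the quantiser, the tanh squashing, and all sizes are illustrative assumptions, not taken from the specification:

```python
import numpy as np

def quantise(values, n_bits):
    """Uniform scalar quantisation of values in [-1, 1] to n_bits per value."""
    levels = 2 ** n_bits
    idx = np.clip(np.round((values + 1) / 2 * (levels - 1)), 0, levels - 1).astype(int)
    return [format(i, f'0{n_bits}b') for i in idx]

def form_csi_report(latent_per_layer, layer_order, neurons_per_layer, n_bits):
    """Concatenate quantised bits across all layers according to the layer order.

    latent_per_layer:  list of encoder outputs, one array per layer
    neurons_per_layer: subset of neuron indices to quantise for each layer
                       (e.g. fewer neurons kept for weaker layers)
    """
    bits = []
    for layer in layer_order:
        selected = latent_per_layer[layer][neurons_per_layer[layer]]
        bits.extend(quantise(np.tanh(selected), n_bits))  # squash into [-1, 1]
    return ''.join(bits)

# Rank-2 example: layer 0 keeps all 8 neurons, layer 1 only the first 4.
latent = [np.linspace(-2, 2, 8), np.linspace(-1, 1, 8)]
report = form_csi_report(latent, layer_order=[0, 1],
                         neurons_per_layer=[np.arange(8), np.arange(4)], n_bits=2)
print(len(report))  # 8 neurons x 2 bits + 4 neurons x 2 bits = 24 bits
```

Dropping neurons for weaker layers is one way the UCI payload can be matched to the allocated resource elements.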
- the UE may send an indication to the base station of a number of quantised bits sent in the CSI report.
- the UE can receive an indication of a number of quantised bits to be sent in the CSI report from the base station.
- the method performed by the UE may further comprise the UE receiving, from the base station, downlink reference signals (RS) for the channel, and a CSI reporting payload size.
- the downlink reference signals may comprise one or both of: CSI-RS and DMRS.
- the method performed by the UE may further comprise the UE determining the CSI based on the received downlink reference signals.
- the method performed by the UE can further comprise the UE determining the rank indication based on the MIMO CSI.
- the MIMO CSI that is input into the encoder of the autoencoder may comprise eigenvectors for each layer of the channel.
- the method performed by the UE may further comprise the UE, prior to step 2901, pre-processing the MIMO CSI.
- the pre-processing may be based on the rank indication.
- Pre-processing can comprise one or more of the following steps: (i) performing spatial-domain DFT per layer to transform the MIMO CSI from antenna-frequency domain to beam-frequency domain and selecting a set of beams; (ii) obtaining an eigenvector for each layer of the channel; and (iii) performing frequency-domain DFT per layer to transform the eigenvectors from antenna/beam-frequency domain to antenna/beam-delay domain and selecting a set of taps.
- the pre-processing may further comprise extracting features of the eigenvectors per layer in the beam-delay domain.
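A minimal sketch of pre-processing steps (i) and (iii) for one layer is given below, operating on that layer's per-subband eigenvectors; the DFT sizes, beam/tap counts, and power-based selection criterion are illustrative assumptions:

```python
import numpy as np

def preprocess_layer(V_freq, n_beams, n_taps):
    """Pre-process one layer's per-subband eigenvectors (N_TX x N_SB).

    (i)  spatial-domain DFT: antenna-frequency -> beam-frequency, keep strongest beams
    (ii) frequency-domain DFT: beam-frequency -> beam-delay, keep strongest taps
    """
    n_tx, n_sb = V_freq.shape
    # (i) spatial DFT across the antenna dimension, then select n_beams beams
    B = np.fft.fft(V_freq, axis=0) / np.sqrt(n_tx)
    beam_power = np.sum(np.abs(B) ** 2, axis=1)
    beams = np.argsort(beam_power)[::-1][:n_beams]
    B_sel = B[beams, :]
    # (ii) frequency DFT across the subband dimension, then select n_taps taps
    D = np.fft.fft(B_sel, axis=1) / np.sqrt(n_sb)
    tap_power = np.sum(np.abs(D) ** 2, axis=0)
    taps = np.argsort(tap_power)[::-1][:n_taps]
    return D[:, taps]  # compact n_beams x n_taps beam-delay representation

rng = np.random.default_rng(1)
V = rng.standard_normal((32, 13)) + 1j * rng.standard_normal((32, 13))
compressed = preprocess_layer(V, n_beams=8, n_taps=4)
print(compressed.shape)  # (8, 4)
```

The resulting beam-delay features per layer would then be fed to the encoder of the autoencoder.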
- the layer order may indicate that the layers are to be ordered by decreasing eigenvalue.
- the method performed by the UE may further comprise the UE receiving a quantization indication from the base station that comprises an indication of a maximum number, Q_B, of quantised bits per neuron of the autoencoder output to be used for determining the CSI report.
- the method performed by the UE may further comprise the UE transmitting a quantization indication to the base station that comprises an indication of a maximum number, Q_B, of quantised bits per neuron of the output of the encoder of the autoencoder to be used for determining the CSI report.
- the mapping may indicate a number of quantised bits per neuron of the output of the encoder of the autoencoder as a function of channel layer based on the layer order.
- the mapping for a given channel layer may be independent of the estimated rank for the channel.
- the number of quantised bits per neuron of the output of the encoder of the autoencoder may decrease with decreasing channel layer eigenvalue.
- the mapping can indicate a number of quantised bits per neuron of the output of the encoder of the autoencoder as a function of the estimated rank for the channel.
- the mapping for a given estimated rank may be independent of the channel layer.
- the number of quantised bits per neuron of the output of the encoder of the autoencoder can decrease with increasing rank.
- the mapping can indicate a number of quantised bits per neuron of the output of the encoder of the autoencoder as a function of the estimated rank for the channel and order of the corresponding channel layers.
- the number of quantised bits per neuron of the output of the encoder of the autoencoder can decrease with increasing rank.
- the mapping can indicate a set of pre-determined values corresponding to the number of quantised bits for each neuron of the output of the encoder of the autoencoder, and the pre-determined values depend on a total number of channel layers and/or a total number of neurons of the encoder of the autoencoder.
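The rank- and layer-dependent bit mappings described above can be pictured as a small lookup table indexed by estimated rank and by layer position in the layer order (layers sorted by decreasing eigenvalue). The values below are purely illustrative and not taken from any 3GPP specification:

```python
# Hypothetical mapping: quantised bits per neuron as a function of the
# estimated rank and of the layer's position in the layer order.
# Stronger layers (earlier in the order) get more bits; the per-layer
# allocation decreases as the rank increases.
BITS_PER_NEURON = {
    # rank: (bits for layer 0, layer 1, ...)
    1: (4,),
    2: (4, 3),
    3: (3, 3, 2),
    4: (3, 2, 2, 2),
}

def csi_payload_bits(rank, n_neurons):
    """Total UCI payload: sum of bits-per-neuron over all layers."""
    return sum(b * n_neurons for b in BITS_PER_NEURON[rank])

for rank in (1, 2, 3, 4):
    print(rank, csi_payload_bits(rank, n_neurons=16))
```

With 16 neurons per layer this gives payloads of 64, 112, 128 and 144 bits for ranks 1 to 4, illustrating how such a mapping bounds payload growth at higher rank.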
- Figure 30 is a flow chart illustrating a method according to various embodiments performed by a base station, such as a gNB.
- the base station may perform the method in response to executing suitably formulated computer readable code.
- the computer readable code may be embodied or stored on a computer readable medium, such as a memory chip, optical disc, or other storage medium.
- the computer readable medium may be part of a computer program product.
- the base station receives a CSI report from a UE.
- the CSI report comprises quantised bits representing compressed MIMO CSI for a channel between the UE and the base station, and a rank indication for the channel.
- the base station uses a decoder of an autoencoder to decompress the compressed MIMO CSI based on (i) the rank indication and (ii) a mapping between quantised bits representing the compressed MIMO CSI and layers of the channel.
- the mapping is dependent on an estimated rank for the channel and/or a layer order.
- Step 3002 may comprise determining an eigenvector for each layer of the channel.
- the method may further include the step of the base station determining multi-user MIMO precoders for each layer of the channel based on the result of step 3002.
- the base station may then perform transmissions on the channel using the determined precoders. These transmissions may be performed on the PDSCH.
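As a sketch of how the decompressed per-layer eigenvectors might feed the multi-user MIMO precoder determination, the following uses zero-forcing across two UEs as one illustrative (not mandated) precoding choice, with random unit vectors standing in for the decoder output:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins for reconstructed per-layer eigenvectors reported by two UEs
# (one layer each), as recovered by the decoder of the autoencoder. N_TX = 8.
N_TX = 8
v_ue = [rng.standard_normal(N_TX) + 1j * rng.standard_normal(N_TX) for _ in range(2)]
V = np.stack([v / np.linalg.norm(v) for v in v_ue], axis=1)  # N_TX x 2

# One common MU-MIMO choice: zero-forcing precoders from the reported
# eigenvectors, so each UE's layer nulls towards the other UE's direction.
P = V @ np.linalg.inv(V.conj().T @ V)
P /= np.linalg.norm(P, axis=0)  # normalise per-layer transmit power

# Cross-layer leakage |v_j^H p_i| is (numerically) zero for i != j.
print(abs(V[:, 0].conj() @ P[:, 1]))
```

The base station would then transmit on the PDSCH using one such precoder per layer.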
- the mapping used in step 3002 may be defined in a 3GPP Standard Specification.
- the base station receives a mapping indication from the UE that comprises an indication of the mapping used by the UE to form the CSI report.
- the base station sends a mapping indication to the UE that comprises an indication of the mapping to use to form the CSI report.
- the mapping indication may be a suggested mapping option, or an enforced mapping that the UE is required to use.
- the CSI report received in step 3001 may comprise the quantised bits concatenated across all layers of the channel according to the layer order.
- the base station may receive an indication from the UE of a number of quantised bits sent in the CSI report.
- the base station can send an indication to the UE of a number of quantised bits to be sent in the CSI report.
- the method performed by the base station may further comprise the base station sending, to the UE, downlink reference signals (RS) for the channel, and a CSI reporting payload size.
- the downlink reference signals may comprise one or both of: CSI-RS and DMRS.
- the layer order may indicate that the layers are to be ordered by decreasing eigenvalue.
- the method performed by the base station may further comprise the base station sending a quantization indication to the UE that comprises an indication of a maximum number, Q_B, of quantised bits per neuron of the autoencoder output to be used for determining the CSI report.
- the method performed by the base station may further comprise the base station receiving a quantization indication from the UE that comprises an indication of a maximum number, Q_B, of quantised bits per neuron of the output of the encoder of the autoencoder to be used for determining the CSI report.
- the mapping may indicate a number of quantised bits per neuron of the output of the encoder of the autoencoder as a function of channel layer based on the layer order.
- the mapping for a given channel layer may be independent of the estimated rank for the channel.
- the number of quantised bits per neuron of the output of the encoder of the autoencoder may decrease with decreasing channel layer eigenvalue.
- the mapping can indicate a number of quantised bits per neuron of the output of the encoder of the autoencoder as a function of the estimated rank for the channel.
- the mapping for a given estimated rank may be independent of the channel layer.
- the number of quantised bits per neuron of the output of the encoder of the autoencoder can decrease with increasing rank.
- the mapping can indicate a number of quantised bits per neuron of the output of the encoder of the autoencoder as a function of the estimated rank for the channel and order of the corresponding channel layers.
- the number of quantised bits per neuron of the output of the encoder of the autoencoder can decrease with increasing rank.
- the mapping can indicate a set of pre-determined values corresponding to the number of quantised bits for each neuron of the output of the encoder of the autoencoder, and the pre-determined values depend on a total number of channel layers and/or a total number of neurons of the encoder of the autoencoder.
- Figure 31 shows an example of a communication system 3100 in accordance with some embodiments.
- the communication system 3100 includes a telecommunication network 3102 that includes an access network 3104, such as a radio access network (RAN), and a core network 3106, which includes one or more core network nodes 3108.
- the access network 3104 includes one or more access network nodes, such as access network nodes 3110a and 3110b (one or more of which may be generally referred to as access network nodes 3110), or any other similar 3rd Generation Partnership Project (3GPP) access node or non-3GPP access point.
- the access network nodes 3110 facilitate direct or indirect connection of wireless devices (also referred to interchangeably herein as user equipment (UE)), such as by connecting UEs 3112a, 3112b, 3112c, and 3112d (one or more of which may be generally referred to as UEs 3112) to the core network 3106 over one or more wireless connections.
- the access network nodes 3110 may be, for example, access points (APs) (e.g. radio access points), base stations (BSs) (e.g. radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)).
- Example wireless communications over a wireless connection include transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information without the use of wires, cables, or other material conductors.
- the communication system 3100 may include any number of wired or wireless networks, network nodes, UEs, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections.
- the communication system 3100 may include and/or interface with any type of communication, telecommunication, data, cellular, radio network, and/or other similar type of system.
- the wireless devices/UEs 3112 may be any of a wide variety of communication devices, including wireless devices arranged, configured, and/or operable to communicate wirelessly with the network nodes 3110 and other communication devices.
- the access network nodes 3110 are arranged, capable, configured, and/or operable to communicate directly or indirectly with the UEs 3112 and/or with other network nodes or equipment in the telecommunication network 3102 to enable and/or provide network access, such as wireless network access, and/or to perform other functions, such as administration in the telecommunication network 3102.
- the core network 3106 includes one or more core network nodes (e.g. core network node 3108) that are structured with hardware and software components. Features of these components may be substantially similar to those described with respect to the wireless devices/UEs and access network nodes, such that the descriptions thereof are generally applicable to the corresponding components of the core network node 3108.
- Example core network nodes include functions of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), and/or a User Plane Function (UPF).
- the communication system 3100 of Figure 31 enables connectivity between the wireless devices/UEs and network nodes.
- the communication system may be configured to operate according to predefined rules or procedures, such as specific standards that include, but are not limited to: Global System for Mobile Communications (GSM); Universal Mobile Telecommunications System (UMTS); Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, or 5G standards, or any applicable future generation standard; wireless local area network (WLAN) standards, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards (WiFi); and/or any other appropriate wireless communication standard, such as Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave, Near Field Communication (NFC), LiFi, and/or any low-power wide-area network (LPWAN) standard.
- the telecommunication network 3102 is a cellular network that implements 3GPP standardized features. Accordingly, the telecommunications network 3102 may support network slicing to provide different logical networks to different devices that are connected to the telecommunication network 3102. For example, the telecommunications network 3102 may provide Ultra Reliable Low Latency Communication (URLLC) services to some UEs, while providing Enhanced Mobile Broadband (eMBB) services to other UEs, and/or Massive Machine Type Communication (mMTC)/Massive Internet of Things (IoT) services to yet further UEs.
- the UEs 3112 are configured to transmit and/or receive information without direct human interaction.
- a UE may be designed to transmit information to the access network 3104 on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the access network 3104.
- a UE may be configured for operating in single- or multi-Radio Access Technology (RAT) or multi-standard mode.
- a UE may operate with any one or combination of Wi-Fi, NR (New Radio) and LTE, i.e. being configured for multi-radio dual connectivity (MR-DC), such as E-UTRAN (Evolved-UMTS Terrestrial Radio Access Network) New Radio - Dual Connectivity (EN-DC).
- the hub 3114 communicates with the access network 3104 to facilitate indirect communication between one or more UEs (e.g. UE 3112c and/or 3112d) and access network nodes (e.g. access network node 3110b).
- the hub 3114 may be a controller, router, a content source and analytics node, or any of the other communication devices described herein regarding UEs.
- the hub 3114 may be a broadband router enabling access to the core network 3106 for the UEs.
- the hub 3114 may be a controller that sends commands or instructions to one or more actuators in the UEs.
- the hub 3114 may be a data collector that acts as temporary storage for UE data and, in some embodiments, may perform analysis or other processing of the data.
- the hub 3114 may be a content source. For example, for a UE that is a VR headset, display, loudspeaker or other media delivery device, the hub 3114 may retrieve VR assets, video, audio, or other media or data related to sensory information via a network node, which the hub 3114 then provides to the UE either directly, after performing local processing, and/or after adding additional local content.
- the hub 3114 acts as a proxy server or orchestrator for the UEs, in particular if one or more of the UEs are low-energy IoT devices.
- the hub 3114 may have a constant/persistent or intermittent connection to the network node 3110b.
- the hub 3114 may also allow for a different communication scheme and/or schedule between the hub 3114 and UEs (e.g. UE 3112c and/or 3112d), and between the hub 3114 and the core network 3106.
- the hub 3114 is connected to the core network 3106 and/or one or more UEs via a wired connection.
- the hub 3114 may be configured to connect to an M2M service provider over the access network 3104 and/or to another UE over a direct connection.
- UEs may establish a wireless connection with the network nodes 3110 while still connected via the hub 3114 via a wired or wireless connection.
- the hub 3114 may be a dedicated hub - that is, a hub whose primary function is to route communications to/from the UEs from/to the network node 3110b.
- the hub 3114 may be a non-dedicated hub - that is, a device which is capable of operating to route communications between the UEs and network node 3110b, but which is additionally capable of operating as a communication start and/or end point for certain data channels.
- Figure 32 shows a wireless device or UE 3200 in accordance with some embodiments.
- a UE refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other UEs.
- Examples of a wireless device/UE include, but are not limited to, a smart phone, mobile phone, cell phone, voice over IP (VoIP) phone, wireless local loop phone, desktop computer, personal digital assistant (PDA), wireless camera, gaming console or device, music storage device, playback appliance, wearable terminal device, wireless endpoint, mobile station, tablet, laptop, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart device, wireless customer-premise equipment (CPE), vehicle-mounted or vehicle embedded/integrated wireless device, etc.
- Examples of a UE also include UEs identified by the 3rd Generation Partnership Project (3GPP), including a narrow band internet of things (NB-IoT) UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE.
- a wireless device/UE may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, Dedicated Short-Range Communication (DSRC), vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), or vehicle-to-everything (V2X).
- a UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device.
- a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g. a smart sprinkler controller).
- a UE may represent a device that is not intended for sale to, or operation by, an end user but which may be associated with or operated for the benefit of a user (e.g. a smart power meter).
- the UE 3200 includes processing circuitry 3202 that is operatively coupled via a bus 3204 to an input/output interface 3206, a power source 3208, a memory 3210, a communication interface 3212, and/or any other component, or any combination thereof.
- Certain UEs may utilize all or a subset of the components shown in Figure 32. The level of integration between the components may vary from one UE to another UE. Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc.
- the processing circuitry 3202 is configured to process instructions and data and may be configured to implement any sequential state machine operative to execute instructions stored as machine-readable computer programs in the memory 3210.
- the processing circuitry 3202 may be implemented as one or more hardware-implemented state machines (e.g. in discrete logic, field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), etc.); programmable logic together with appropriate firmware; one or more stored computer programs, general-purpose processors, such as a microprocessor or digital signal processor (DSP), together with appropriate software; or any combination of the above.
- the processing circuitry 3202 may include multiple central processing units (CPUs).
- the processing circuitry 3202 may be operable to provide, either alone or in conjunction with other UE 3200 components, such as the memory 3210, to provide UE 3200 functionality.
- the processing circuitry 3202 may be configured to cause the UE 3200 to perform the methods as described with reference to Figure 29.
- the input/output interface 3206 may be configured to provide an interface or interfaces to an input device, output device, or one or more input and/or output devices.
- Examples of an output device include a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof.
- An input device may allow a user to capture information into the UE 3200. Examples of an input device include a touch-sensitive or presence-sensitive display and a camera.
- the presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user.
- a sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, a biometric sensor, etc., or any combination thereof.
- An output device may use the same type of interface port as an input device. For example, a Universal Serial Bus (USB) port may be used to provide an input device and an output device.
- the power source 3208 is structured as a battery or battery pack. Other types of power sources, such as an external power source (e.g. an electricity outlet), photovoltaic device, or power cell, may be used.
- the power source 3208 may further include power circuitry for delivering power from the power source 3208 itself, and/or an external power source, to the various parts of the UE 3200 via input circuitry or an interface such as an electrical power cable. Delivering power may be, for example, for charging of the power source 3208.
- Power circuitry may perform any formatting, converting, or other modification to the power from the power source 3208 to make the power suitable for the respective components of the UE 3200 to which power is supplied.
- the memory 3210 may be or be configured to include memory such as random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, hard disks, removable cartridges, flash drives, and so forth.
- the memory 3210 includes one or more application programs 3214, such as an operating system, web browser application, a widget, gadget engine, or other application, and corresponding data 3216.
- the memory 3210 may store, for use by the UE 3200, any of a variety of various operating systems or combinations of operating systems.
- the memory 3210 may be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as tamper resistant module in the form of a universal integrated circuit card (UICC) including one or more subscriber identity modules (SIMs), such as a Universal SIM (USIM) and/or Integrated SIM (ISIM), other memory, or any combination thereof.
- the UICC may for example be an embedded UICC (eUICC), an integrated UICC (iUICC), or a removable UICC commonly known as a 'SIM card'.
- the memory 3210 may allow the UE 3200 to access instructions, application programs and the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data.
- An article of manufacture, such as one utilizing a communication system may be tangibly embodied as or in the memory 3210, which may be or comprise a device-readable storage medium.
- the processing circuitry 3202 may be configured to communicate with an access network or other network using the communication interface 3212.
- the communication interface 3212 may comprise one or more communication subsystems and may include or be communicatively coupled to an antenna 3222.
- the communication interface 3212 may include one or more transceivers used to communicate, such as by communicating with one or more remote transceivers of another device capable of wireless communication (e.g. another UE or a network node in an access network).
- Each transceiver may include a transmitter 3218 and/or a receiver 3220 appropriate to provide network communications (e.g. optical, electrical, frequency allocations, and so forth).
- the transmitter 3218 and receiver 3220 may be coupled to one or more antennas (e.g. the antenna 3222).
- communication functions of the communication interface 3212 may include cellular communication, Wi-Fi communication, LPWAN communication, data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof.
- Communications may be implemented according to one or more communication protocols and/or standards, such as IEEE 802.11, Code Division Multiplexing Access (CDMA), Wideband Code Division Multiple Access (WCDMA), GSM, LTE, New Radio (NR), UMTS, WiMax, Ethernet, transmission control protocol/internet protocol (TCP/IP), synchronous optical networking (SONET), Asynchronous Transfer Mode (ATM), QUIC, Hypertext Transfer Protocol (HTTP), and so forth.
- a UE may provide an output of data captured by its sensors, through its communication interface 3212, via a wireless connection to a network node.
- Data captured by sensors of a UE can be communicated through a wireless connection to a network node via another UE.
- the output may be periodic (e.g. once every 15 minutes if it reports the sensed temperature), random (e.g. to even out the load from reporting from several sensors), in response to a triggering event (e.g. when moisture is detected an alert is sent), in response to a request (e.g. a user initiated request), or a continuous stream (e.g. a live video feed of a patient).
- a UE comprises an actuator, a motor, or a switch, related to a communication interface configured to receive wireless input from a network node via a wireless connection.
- the states of the actuator, the motor, or the switch may change.
- the UE may comprise a motor that adjusts the control surfaces or rotors of a drone in flight according to the received input or controls a robotic arm performing a medical procedure according to the received input.
- a UE, when in the form of an Internet of Things (IoT) device, may be a device for use in one or more application domains, these domains comprising, but not limited to, city wearable technology, extended industrial application and healthcare.
- Examples of an IoT device are devices which are, or which are embedded in: a connected refrigerator or freezer, a TV, a connected lighting device, an electricity meter, a robot vacuum cleaner, a voice controlled smart speaker, a home security camera, a motion detector, a thermostat, a smoke detector, a door/window sensor, a flood/moisture sensor, an electrical door lock, a connected doorbell, an air conditioning system like a heat pump, an autonomous vehicle, a surveillance system, a weather monitoring device, a vehicle parking monitoring device, an electric vehicle charging station, a smart watch, a fitness tracker, a head-mounted display for Augmented Reality (AR) or Virtual Reality (VR), a wearable for tactile augmentation or sensory enhancement, a water sprinkler, or an animal- or item-tracking device.
- a UE in the form of an IoT device comprises circuitry and/or software in dependence on the intended application of the IoT device, in addition to other components as described in relation to the UE 3200 shown in Figure 32.
- a UE may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another UE and/or a network node.
- the UE may in this case be an M2M device, which may in a 3GPP context be referred to as an MTC device.
- the UE may implement the 3GPP NB-IoT standard.
- a UE may represent a vehicle, such as a car, a bus, a truck, a ship and an airplane, or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.
- a first UE might be or be integrated in a drone and provide the drone's speed information (obtained through a speed sensor) to a second UE that is a remote controller operating the drone.
- the first UE may adjust the throttle on the drone (e.g. by controlling an actuator) to increase or decrease the drone's speed.
- the first and/or the second UE can also include more than one of the functionalities described above.
- a UE might comprise the sensor and the actuator, and handle communication of data for both the speed sensor and the actuators.
- Figure 33 shows a network node 3300 in accordance with some embodiments.
- network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment, in a telecommunication network.
- network nodes include, but are not limited to, access network nodes such as access points (APs) (e.g. radio access points), base stations (BSs) (e.g. radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)).
- Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and so, depending on the provided amount of coverage, may be referred to as femto base stations, pico base stations, micro base stations, or macro base stations.
- a base station may be a relay node or a relay donor node controlling a relay.
- a network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio.
- Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).
- network nodes include multiple transmission point (multi-TRP) 5G access nodes, multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), Operation and Maintenance (O&M) nodes, Operations Support System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g. Evolved Serving Mobile Location Centers (E-SMLCs)), and/or Minimization of Drive Tests (MDTs).
- the network node 3300 includes processing circuitry 3302, a memory 3304, a communication interface 3306, and a power source 3308, and/or any other component, or any combination thereof.
- the network node 3300 may be composed of multiple physically separate components (e.g. a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components.
- the network node 3300 comprises multiple separate components (e.g. BTS and BSC components)
- one or more of the separate components may be shared among several network nodes.
- a single RNC may control multiple NodeBs.
- each unique NodeB and RNC pair may in some instances be considered a single separate network node.
- the network node 3300 may be configured to support multiple radio access technologies (RATs).
- some components may be duplicated (e.g. separate memory 3304 for different RATs) and some components may be reused (e.g. a same antenna 3310 may be shared by different RATs).
- the network node 3300 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 3300, for example GSM, WCDMA, LTE, NR, WiFi, Zigbee, Z-wave, LoRaWAN, Radio Frequency Identification (RFID) or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 3300.
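The duplicated-versus-reused component arrangement for a multi-RAT node can be sketched as below. All names are hypothetical illustrations of the pattern, not part of the described embodiment: per-RAT memory is duplicated, while a single antenna is shared.

```python
class Antenna:
    """Shared component: one physical antenna reused by all RATs."""
    pass


class NetworkNode:
    """Toy multi-RAT node: duplicates some components, shares others."""
    def __init__(self, rats: list[str]):
        self.antenna = Antenna()                             # reused across RATs
        self.memory = {rat: bytearray(16) for rat in rats}   # duplicated per RAT

    def memory_for(self, rat: str) -> bytearray:
        return self.memory[rat]


node = NetworkNode(["LTE", "NR"])
print(node.memory_for("LTE") is node.memory_for("NR"))  # False: separate memory per RAT
print(node.antenna)                                     # single shared antenna object
```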
- the processing circuitry 3302 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable, either alone or in conjunction with other network node 3300 components, such as the memory 3304, to provide network node 3300 functionality.
- the processing circuitry 3302 may be configured to cause the network node to perform the methods as described with reference to Figure 30.
- the processing circuitry 3302 includes a system on a chip (SOC). In some embodiments, the processing circuitry 3302 includes one or more of radio frequency (RF) transceiver circuitry 3312 and baseband processing circuitry 3314. In some embodiments, the radio frequency (RF) transceiver circuitry 3312 and the baseband processing circuitry 3314 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 3312 and baseband processing circuitry 3314 may be on the same chip or set of chips, boards, or units.
- the memory 3304 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 3302.
- the memory 3304 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 3302 and utilized by the network node 3300.
- the memory 3304 may be used to store any calculations made by the processing circuitry 3302 and/or any data received via the communication interface 3306.
- the processing circuitry 3302 and the memory 3304 are integrated.
- the communication interface 3306 is used in wired or wireless communication of signalling and/or data between network nodes, the access network, the core network, and/or a UE. As illustrated, the communication interface 3306 comprises port(s)/terminal(s) 3316 to send and receive data, for example to and from a network over a wired connection.
- the communication interface 3306 also includes radio front-end circuitry 3318 that may be coupled to, or in certain embodiments a part of, the antenna 3310.
- Radio front-end circuitry 3318 comprises filters 3320 and amplifiers 3322.
- the radio front-end circuitry 3318 may be connected to an antenna 3310 and processing circuitry 3302.
- the radio front-end circuitry may be configured to condition signals communicated between antenna 3310 and processing circuitry 3302.
- the radio front-end circuitry 3318 may receive digital data that is to be sent out to other network nodes or UEs via a wireless connection.
- the radio front-end circuitry 3318 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 3320 and/or amplifiers 3322.
- the radio signal may then be transmitted via the antenna 3310.
- the antenna 3310 may collect radio signals which are then converted into digital data by the radio front-end circuitry 3318.
- the digital data may be passed to the processing circuitry 3302.
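The transmit and receive paths through the radio front-end circuitry described above can be sketched as a simple conditioning pipeline. This is a toy model under stated assumptions: the "filter" and "amplifier" here (sample clipping and a scalar gain) are hypothetical stand-ins for the filters 3320 and amplifiers 3322, not a radio implementation.

```python
from dataclasses import dataclass


@dataclass
class RadioFrontEnd:
    """Toy model of the conditioning path between processing circuitry
    and antenna: filter, then amplify on transmit; scale back on receive."""
    gain: float = 2.0

    def transmit(self, digital_samples: list[float]) -> list[float]:
        # "Filter" (here: drop out-of-range samples), then "amplify"
        # before the signal is handed to the antenna.
        filtered = [s for s in digital_samples if abs(s) <= 1.0]
        return [self.gain * s for s in filtered]

    def receive(self, radio_samples: list[float]) -> list[float]:
        # Reverse path: collected radio samples are scaled back to
        # digital-data levels for the processing circuitry.
        return [s / self.gain for s in radio_samples]


rfe = RadioFrontEnd()
tx = rfe.transmit([0.5, -0.25, 3.0])  # 3.0 is rejected by the toy filter
rx = rfe.receive(tx)
print(tx, rx)  # [1.0, -0.5] [0.5, -0.25]
```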
- the communication interface may comprise different components and/or different combinations of components.
- the access network node 3300 does not include separate radio front-end circuitry 3318; instead, the processing circuitry 3302 includes radio front-end circuitry and is connected to the antenna 3310. Similarly, in some embodiments, all or some of the RF transceiver circuitry 3312 is part of the communication interface 3306. In still other embodiments, the communication interface 3306 includes one or more ports or terminals 3316, the radio front-end circuitry 3318, and the RF transceiver circuitry 3312, as part of a radio unit (not shown), and the communication interface 3306 communicates with the baseband processing circuitry 3314, which is part of a digital unit (not shown).
- the antenna 3310 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals.
- the antenna 3310 may be coupled to the radio front-end circuitry 3318 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly.
- the antenna 3310 is separate from the network node 3300 and connectable to the network node 3300 through an interface or port.
- the antenna 3310, communication interface 3306, and/or the processing circuitry 3302 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by the network node. Any information, data and/or signals may be received from a UE, another network node and/or any other network equipment. Similarly, the antenna 3310, the communication interface 3306, and/or the processing circuitry 3302 may be configured to perform any transmitting operations described herein as being performed by the network node. Any information, data and/or signals may be transmitted to a UE, another network node and/or any other network equipment.
- the power source 3308 provides power to the various components of network node 3300 in a form suitable for the respective components (e.g. at a voltage and current level needed for each respective component).
- the power source 3308 may further comprise, or be coupled to, power management circuitry to supply the components of the network node 3300 with power for performing the functionality described herein.
- the network node 3300 may be connectable to an external power source (e.g. the power grid, an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry of the power source 3308.
- the power source 3308 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry. The battery may provide backup power should the external power source fail.
- Embodiments of the network node 3300 may include additional components beyond those shown in Fig. 33 for providing certain aspects of the network node’s functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein.
- the network node 3300 may include user interface equipment to allow input of information into the network node 3300 and to allow output of information from the network node 3300. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for the network node 3300.
- FIG. 34 is a block diagram illustrating a virtualization environment 3400 in which functions implemented by some embodiments may be virtualized.
- virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources.
- virtualization can be applied to any device described herein, or components thereof, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components.
- Some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines (VMs) implemented in one or more virtual environments 3400 hosted by one or more of hardware nodes, such as a hardware computing device that operates as an access network node, a wireless device/UE, a core network node.
- the node may be entirely virtualized.
- Hardware 3404 includes processing circuitry, memory that stores software and/or instructions executable by hardware processing circuitry, and/or other hardware devices as described herein, such as a network interface, input/output interface, and so forth.
- Software may be executed by the processing circuitry to instantiate one or more virtualization layers 3406 (also referred to as hypervisors or virtual machine monitors (VMMs)), provide VMs 3408a and 3408b (one or more of which may be generally referred to as VMs 3408), and/or perform any of the functions, features and/or benefits described in relation with some embodiments described herein.
- the virtualization layer 3406 may present a virtual operating platform that appears like networking hardware to the VMs 3408.
- the VMs 3408 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer 3406.
- Different embodiments of the instance of a virtual appliance 3402 may be implemented on one or more of VMs 3408, and the implementations may be made in different ways.
- Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.
- a VM 3408 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine.
- Each of the VMs 3408, together with that part of hardware 3404 that executes that VM, be it hardware dedicated to that VM and/or hardware shared by that VM with others of the VMs, forms a separate virtual network element.
- a virtual network function is responsible for handling specific network functions that run in one or more VMs 3408 on top of the hardware 3404 and corresponds to the application 3402.
- Hardware 3404 may be implemented in a standalone network node with generic or specific components. Hardware 3404 may implement some functions via virtualization. Alternatively, hardware 3404 may be part of a larger cluster of hardware (e.g. such as in a data center or CPE) where many hardware nodes work together and are managed via management and orchestration 3410, which, among others, oversees lifecycle management of applications 3402.
- hardware 3404 is coupled to one or more radio units that each include one or more transmitters and one or more receivers that may be coupled to one or more antennas. Radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station.
- some signalling can be provided with the use of a control system 3412 which may alternatively be used for communication between hardware nodes and radio units.
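The layering described above, where a virtualization layer hosts VMs on shared hardware and each VM runs a network function, can be sketched as follows. This is a minimal illustration: the class and identifier names (reusing the figure's reference numerals as labels) are hypothetical, and real NFV deployments involve orchestration far beyond this.

```python
class VirtualizationLayer:
    """Hypervisor/VMM sketch: presents a virtual operating platform
    and provides VMs on top of a single piece of hardware."""
    def __init__(self, hardware_name: str):
        self.hardware_name = hardware_name
        self.vms = []

    def provide_vm(self, vm_id: str) -> "VirtualMachine":
        vm = VirtualMachine(vm_id, host=self)
        self.vms.append(vm)
        return vm


class VirtualMachine:
    """Software implementation of a physical machine: runs programs
    as if they were executing on non-virtualized hardware."""
    def __init__(self, vm_id: str, host: VirtualizationLayer):
        self.vm_id = vm_id
        self.host = host
        self.functions = []

    def run(self, network_function: str) -> str:
        # A virtual network function runs in one or more VMs on top
        # of the shared hardware.
        self.functions.append(network_function)
        return f"{network_function} on {self.vm_id}@{self.host.hardware_name}"


layer = VirtualizationLayer("hw-3404")
vm_a = layer.provide_vm("vm-3408a")
vm_b = layer.provide_vm("vm-3408b")
msg = vm_a.run("baseband-processing")
print(msg)             # baseband-processing on vm-3408a@hw-3404
print(len(layer.vms))  # 2
```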
- computing devices described herein may include the illustrated combination of hardware components
- computing devices may comprise multiple different physical components that make up a single illustrated component, and functionality may be partitioned between separate components.
- a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface.
- non-computationally intensive functions of any of such components may be implemented in software or firmware and computationally intensive functions may be implemented in hardware.
- processing circuitry executing instructions stored in memory, which in certain embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium.
- some or all of the functionality may be provided by the processing circuitry without executing instructions stored on a separate or discrete device-readable storage medium, such as in a hard-wired manner.
- the processing circuitry can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry alone or to other components of the computing device, but are enjoyed by the computing device as a whole, and/or by end users and a wireless network generally.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP22306671 | 2022-11-04 | ||
| EP22306671.3 | 2022-11-04 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024096780A1 true WO2024096780A1 (en) | 2024-05-10 |
Family
ID=84366975
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/SE2023/051052 Ceased WO2024096780A1 (en) | 2022-11-04 | 2023-10-25 | Methods for implicit csi feedback with rank greater than one |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2024096780A1 (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20220149904A1 (en) * | 2019-03-06 | 2022-05-12 | Telefonaktiebolaget Lm Ericsson (Publ) | Compression and Decompression of Downlink Channel Estimates |
| US20220239357A1 (en) * | 2019-04-30 | 2022-07-28 | Lg Electronics Inc. | Method for reporting channel state information in wireless communication system, and device therefor |
Non-Patent Citations (11)
| Title |
|---|
| "Physical layer procedures for data (Release 16", 3GPP TS 38.214 |
| "Study on Artificial Intelligence (AI)/Machine Learning (ML) for NR Air Interface", RP-213599, December 2021 (2021-12-01) |
| APPLE INC: "Discussion on other aspects of AI/ML for CSI enhancement", vol. RAN WG1, no. e-Meeting; 20221010 - 20221019, 30 September 2022 (2022-09-30), XP052259050, Retrieved from the Internet <URL:https://ftp.3gpp.org/tsg_ran/WG1_RL1/TSGR1_110b-e/Docs/R1-2209577.zip R1-2209577 - AI CSI others.docx> [retrieved on 20220930] * |
| ERICSSON: "Discussion on general aspects of AI/ML framework", R1-2208908, October 2022 (2022-10-01) |
| ERICSSON: "Discussions on AI-CSI", vol. RAN WG1, no. Online; 20221010 - 20221019, 30 September 2022 (2022-09-30), XP052276651, Retrieved from the Internet <URL:https://ftp.3gpp.org/tsg_ran/WG1_RL1/TSGR1_110b-e/Docs/R1-2208728.zip R1-2208728 Ericsson Discussions on AI-CSI.docx> [retrieved on 20220930] * |
| HUAWEI ET AL: "Discussion on AI/ML for CSI feedback enhancement", vol. RAN WG1, no. e-Meeting; 20221010 - 20221019, 30 September 2022 (2022-09-30), XP052276355, Retrieved from the Internet <URL:https://ftp.3gpp.org/tsg_ran/WG1_RL1/TSGR1_110b-e/Docs/R1-2208430.zip R1-2208430.docx> [retrieved on 20220930] * |
| QUALCOMM: "Rel. 18 Network AI/ML", TSG RAN REL-18 WORKSHOP, 28 June 2021 (2021-06-28) |
| RAN1 CHAIR'S NOTES, October 2022 (2022-10-01) |
| ZHILIN LUXUDONG ZHANGHONGYI HEJINTAO WANGJIAN SONG: "Binarized Aggregated Network with Quantization: Flexible Deep Learning Deployment for CSI Feedback in MassiveMIMO System", ARXIV, 2105.00354 V1, May 2021 (2021-05-01) |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20240364400A1 (en) | Frequency domain csi compression for coherent joint transmission | |
| US20240275519A1 (en) | Training Network-Based Decoders of User Equipment (UE) Channel State Information (CSI) Feedback | |
| US20250159518A1 (en) | Type ii csi reporting for cjt with angle and delay reciprocity | |
| US20240283509A1 (en) | Combining Proprietary and Standardized Techniques for Channel State Information (CSI) Feedback | |
| US20250055528A1 (en) | Spatial domain csi compression for coherent joint transmission | |
| US20250125926A1 (en) | CONFIGURING CSI-RS FOR CSI FEEDBACK ASSOCIATED WITH DL TRANSMISSION FROM MULTIPLE TRPs | |
| WO2023084421A1 (en) | Method and systems for csi reporting enhancement for type ii pmi prediction | |
| EP4595290A1 (en) | Systems and methods for artificial information-based channel state information reporting | |
| WO2025038021A1 (en) | Performance monitoring of a two-sided artificial intelligence / machine learning model at the user equipment side | |
| EP4566173A1 (en) | Ue selecting and reporting the number of spatial beams for coherent joint transmission | |
| WO2024096780A1 (en) | Methods for implicit csi feedback with rank greater than one | |
| WO2023199288A1 (en) | Deep learning based uplink-downlink channel covariance matrix mapping in fdd massive mimo system | |
| WO2024175955A1 (en) | Beamspace eigen precoding via subspace tracking | |
| WO2025038018A1 (en) | Methods for csi-reporting during performance monitoring | |
| WO2025038016A1 (en) | Codebook subset restriction for channel state information (csi) feedback in spatial-frequency domain | |
| WO2025183611A1 (en) | Channel state information acquisition for unconventional arrays | |
| WO2025210610A1 (en) | CSI PREDICTION USING CSI-RSs WITH DIFFERENT TIME DOMAIN BEHAVIORS | |
| WO2025183605A1 (en) | Report configuration for channel state information feedback | |
| WO2025181634A1 (en) | Methods for csi feedback with csi-rs port subset indication with ai/ml models | |
| EP4555683A1 (en) | Channel state information prediction using machine learning | |
| WO2024154076A1 (en) | Methods for signaling type ii cjt parameter combination | |
| WO2025146616A1 (en) | Csi feedback with beam specific power backoff | |
| WO2025233923A1 (en) | Methods for cri based reporting of type ii csi | |
| EP4595262A1 (en) | Methods of channel quality indication reporting with type ii codebook for high velocity | |
| WO2023033698A1 (en) | Systems and methods for channel state information reporting for channel prediction |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23798305; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 202517053829; Country of ref document: IN |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | WWP | Wipo information: published in national office | Ref document number: 202517053829; Country of ref document: IN |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 23798305; Country of ref document: EP; Kind code of ref document: A1 |