
WO2024166040A1 - Method and system for transmission of samples - Google Patents


Info

Publication number
WO2024166040A1
Authority
WO
WIPO (PCT)
Prior art keywords
samples
sample
information
metric
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IB2024/051179
Other languages
French (fr)
Inventor
Vahid POURAHMADI
Venkata Srinivas Kothapalli
Ahmed HINDY
Vijay Nangia
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Singapore Pte Ltd
Original Assignee
Lenovo Singapore Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Singapore Pte Ltd filed Critical Lenovo Singapore Pte Ltd
Publication of WO2024166040A1 publication Critical patent/WO2024166040A1/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00Baseband systems
    • H04L25/02Details ; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202Channel estimation
    • H04L25/024Channel estimation channel estimation algorithms
    • H04L25/0254Channel estimation channel estimation algorithms using neural network algorithms
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/02Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/06Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
    • H04B7/0613Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission
    • H04B7/0615Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal
    • H04B7/0619Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal using feedback from receiving side
    • H04B7/0621Feedback content
    • H04B7/0626Channel coefficients, e.g. channel state information [CSI]

Definitions

  • the technical field is the selection of information samples for transmission from a first device to a second device in a radio communications network.
  • 3GPP 5G New Radio uses massive Multiple Input Multiple Output (MIMO) to boost transmission capabilities.
  • MIMO Multiple Input Multiple Output
  • BSs Base Stations
  • UE User Equipment
  • in massive MIMO, knowledge of channel conditions is needed in order to precode the antennas in the respective antenna arrays.
  • a method implemented in a radio communications network at a first device comprises receiving or generating a first set of information, wherein the first set of information comprises one or more samples, wherein each sample comprises one or more components.
  • a first metric is set for at least a sample where the sample is one of the samples of the first set of information or is generated from a subset of samples of the first set of information based on an output of a first function.
  • the respective input of the first function is at least one of components of the respective sample or subset of samples of the first set of information.
  • One or more samples of the previous set is selected for inclusion in a second set of information based on a respective first metric value for each sample.
  • At least a subset of samples of the second set of information is transmitted to a second device, wherein the subset is selected based on at least one of: a respective first metric associated with each sample, and a set of requirements determined at the first device.
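As a minimal sketch of the selection step summarized above (not the patent's exact method), a first metric can be computed per sample and only the highest-metric samples retained for transmission; the metric function and transmission budget below are hypothetical:

```python
import numpy as np

def select_samples(samples, metric_fn, budget):
    """Rank samples by a first metric and keep the `budget` highest.

    `metric_fn` maps a sample to a scalar metric; higher values mean
    the sample is more important to transmit to the second device."""
    metrics = np.array([metric_fn(s) for s in samples])
    order = np.argsort(metrics)[::-1]          # highest metric first
    keep = order[:budget]
    return [samples[i] for i in keep], metrics[keep]

# toy usage: treat the vector norm of each sample as its importance
rng = np.random.default_rng(0)
samples = [rng.normal(size=4) for _ in range(10)]
chosen, scores = select_samples(samples, np.linalg.norm, budget=3)
```

In practice the metric would come from the first function described above (e.g. a model-based importance or novelty score) rather than a plain norm.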
  • Figure 1 illustrates an example of a wireless communications system 100 in accordance with aspects of the present disclosure
  • Figure 2 illustrates a system 200
  • Figure 3 illustrates a high-level structure of such a two-sided method 300
  • Figure 4 is a flow chart 400 illustrating a method according to an implementation
  • Figure 5 illustrates a process 500 of the generation of new samples
  • Figure 6 illustrates a method 600 for determining one or more of these metrics according to implementations
  • Figure 7 illustrates a process 700 of creation of the subsets according to implementations.
  • Figure 8 is a diagram illustrating the components of a first device according to an implementation.
  • FIG. 1 illustrates an example of a wireless communications system 100 in accordance with aspects of the present disclosure.
  • the wireless communications system 100 may include one or more network equipment (NE) NE 102, one or more UE 104, and a core network (CN) 106.
  • the wireless communications system 100 may support various radio access technologies.
  • the wireless communications system 100 may be a 4G network, such as an LTE network or an LTE- Advanced (LTE-A) network.
  • LTE-A LTE- Advanced
  • the wireless communications system 100 may be a NR network, such as a 5G network, a 5G- Advanced (5G-A) network, or a 5G ultrawideband (5G-UWB) network.
  • 5G-A 5G- Advanced
  • 5G-UWB 5G ultrawideband
  • the wireless communications system 100 may be a combination of a 4G network and a 5G network, or other suitable radio access technology including Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20.
  • IEEE Institute of Electrical and Electronics Engineers
  • the wireless communications system 100 may support radio access technologies beyond 5G, for example, 6G. Additionally, the wireless communications system 100 may support technologies, such as time division multiple access (TDMA), frequency division multiple access (FDMA), or code division multiple access (CDMA), etc.
  • TDMA time division multiple access
  • FDMA frequency division multiple access
  • CDMA code division multiple access
  • the one or more NE 102 may be dispersed throughout a geographic region to form the wireless communications system 100.
  • One or more of the NE 102 described herein may be or include or may be referred to as a network node, a base station, a network element, a network function, a network entity, a radio access network (RAN), a NodeB, an eNodeB (eNB), a next-generation NodeB (gNB), or other suitable terminology.
  • An NE 102 and a UE 104 may communicate via a communication link, which may be a wireless or wired connection.
  • an NE 102 and a UE 104 may perform wireless communication (e.g., receive signaling, transmit signaling) over a Uu interface.
  • An NE 102 may provide a geographic coverage area for which the NE 102 may support services for one or more UEs 104 within the geographic coverage area.
  • an NE 102 and a UE 104 may support wireless communication of signals related to services (e.g., voice, video, packet data, messaging, broadcast, etc.) according to one or multiple radio access technologies.
  • an NE 102 may be moveable, for example, a satellite associated with a non-terrestrial network (NTN).
  • NTN non-terrestrial network
  • different geographic coverage areas associated with the same or different radio access technologies may overlap, but the different geographic coverage areas may be associated with different NE 102.
  • the one or more UEs 104 may be dispersed throughout a geographic region of the wireless communications system 100.
  • a UE 104 may include or may be referred to as a remote unit, a mobile device, a wireless device, a remote device, a subscriber device, a transmitter device, a receiver device, or some other suitable terminology.
  • the UE 104 may be referred to as a unit, a station, a terminal, or a client, among other examples.
  • the UE 104 may be referred to as an Internet-of-Things (IoT) device, an Internet-of-Everything (IoE) device, or a machine-type communication (MTC) device, among other examples.
  • IoT Internet-of-Things
  • IoE Internet-of-Everything
  • MTC machine-type communication
  • a UE 104 may be able to support wireless communication directly with other UEs 104 over a communication link.
  • a UE 104 may support wireless communication directly with another UE 104 over a device-to-device (D2D) communication link.
  • D2D device-to-device
  • the communication link may be referred to as a sidelink.
  • a UE 104 may support wireless communication directly with another UE 104 over a PC5 interface.
  • An NE 102 may support communications with the CN 106, or with another NE 102, or both.
  • an NE 102 may interface with other NE 102 or the CN 106 through one or more backhaul links (e.g., S1 , N2, N6, or other network interface).
  • the NE 102 may communicate with each other directly.
  • the NE 102 may communicate with each other indirectly (e.g. , via the CN 106).
  • one or more NE 102 may include subcomponents, such as an access network entity, which may be an example of an access node controller (ANC).
  • An ANC may communicate with the one or more UEs 104 through one or more other access network transmission entities, which may be referred to as radio heads, smart radio heads, or transmission-reception points (TRPs).
  • TRPs transmission-reception points
  • the CN 106 may support user authentication, access authorization, tracking, connectivity, and other access, routing, or mobility functions.
  • the CN 106 may be an evolved packet core (EPC), or a 5G core (5GC), which may include a control plane entity that manages access and mobility (e.g., a mobility management entity (MME), an access and mobility management functions (AMF)) and a user plane entity that routes packets or interconnects to external networks (e.g., a serving gateway (S-GW), a packet data network (PDN) gateway (P-GW), or a user plane function (UPF)).
  • EPC evolved packet core
  • 5GC 5G core
  • MME mobility management entity
  • AMF access and mobility management functions
  • S-GW serving gateway
  • PDN gateway packet data network gateway
  • UPF user plane function
  • control plane entity may manage non-access stratum (NAS) functions, such as mobility, authentication, and bearer management (e.g., data bearers, signal bearers, etc.) for the one or more UEs 104 served by the one or more NE 102 associated with the CN 106.
  • NAS non-access stratum
  • the CN 106 may communicate with a packet data network over one or more backhaul links (e.g., via an S1 , N2, N6, or other network interface).
  • the packet data network may include an application server.
  • one or more UEs 104 may communicate with the application server.
  • a UE 104 may establish a session (e.g., a protocol data unit (PDU) session, or the like) with the CN 106 via an NE 102.
  • the CN 106 may route traffic (e.g., control information, data, and the like) between the UE 104 and the application server using the established session (e.g., the established PDU session).
  • the PDU session may be an example of a logical connection between the UE 104 and the CN 106 (e.g., one or more network functions of the CN 106).
  • the NEs 102 and the UEs 104 may use resources of the wireless communications system 100 (e.g., time resources (e.g., symbols, slots, subframes, frames, or the like) or frequency resources (e.g., subcarriers, carriers)) to perform various operations (e.g., wireless communications).
  • the NEs 102 and the UEs 104 may support different resource structures.
  • the NEs 102 and the UEs 104 may support different frame structures.
  • the NEs 102 and the UEs 104 may support a single frame structure.
  • the NEs 102 and the UEs 104 may support various frame structures (i.e., multiple frame structures).
  • the NEs 102 and the UEs 104 may support various frame structures based on one or more numerologies.
  • One or more numerologies may be supported in the wireless communications system 100, and a numerology may include a subcarrier spacing and a cyclic prefix.
  • the first subcarrier spacing e.g., 15 kHz
  • a time interval of a resource may be organized according to frames (also referred to as radio frames).
  • Each frame may have a duration, for example, a 10 millisecond (ms) duration.
  • each frame may include multiple subframes.
  • each frame may include 10 subframes, and each subframe may have a duration, for example, a 1 ms duration.
  • each frame may have the same duration.
  • each subframe of a frame may have the same duration.
  • a time interval of a resource e.g., a communication resource
  • a subframe may include a number (e.g., quantity) of slots.
  • the number of slots in each subframe may also depend on the one or more numerologies supported in the wireless communications system 100.
  • respective subcarrier spacings 15 kHz, 30 kHz, 60 kHz, 120 kHz, and 240 kHz may utilize a single slot per subframe, two slots per subframe, four slots per subframe, eight slots per subframe, and 16 slots per subframe, respectively.
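The subcarrier-spacing-to-slot mapping listed above can be expressed as a small helper (the function name is illustrative):

```python
def slots_per_subframe(scs_khz: int) -> int:
    """Slots per 1 ms subframe for a given NR subcarrier spacing (kHz).

    Doubling the subcarrier spacing halves the slot duration, so the
    slot count per subframe doubles: 15 kHz -> 1, 30 kHz -> 2, ...,
    240 kHz -> 16."""
    if scs_khz not in (15, 30, 60, 120, 240):
        raise ValueError("unsupported subcarrier spacing")
    return scs_khz // 15

print([slots_per_subframe(s) for s in (15, 30, 60, 120, 240)])
```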
  • Each slot may include a number (e.g., quantity) of symbols (e.g., OFDM symbols).
  • the number (e.g., quantity) of slots for a subframe may depend on a numerology.
  • a slot may include 14 symbols.
  • an extended cyclic prefix e.g., applicable for 60 kHz subcarrier spacing
  • a slot may include 12 symbols.
  • a first subcarrier spacing e.g., 15 kHz
  • an electromagnetic (EM) spectrum may be split, based on frequency or wavelength, into various classes, frequency bands, frequency channels, etc.
  • the wireless communications system 100 may support one or multiple operating frequency bands, such as frequency range designations FR1 (410 MHz - 7.125 GHz), FR2 (24.25 GHz - 52.6 GHz), FR3 (7.125 GHz - 24.25 GHz), FR4 (52.6 GHz - 114.25 GHz), FR4a or FR4-1 (52.6 GHz - 71 GHz), and FR5 (114.25 GHz - 300 GHz).
  • FR1 410 MHz - 7.125 GHz
  • FR2 24.25 GHz - 52.6 GHz
  • FR3 7.125 GHz - 24.25 GHz
  • FR4 52.6 GHz - 114.25 GHz
  • FR4a or FR4-1 52.6 GHz - 71 GHz
  • FR5 114.25 GHz - 300 GHz
  • the NEs 102 and the UEs 104 may perform wireless communications over one or more of the operating frequency bands.
  • FR1 may be used by the NEs 102 and the UEs 104, among other equipment or devices for cellular communications traffic (e.g., control information, data).
  • FR2 may be used by the NEs 102 and the UEs 104, among other equipment or devices for short-range, high data rate capabilities.
  • FR1 may be associated with one or multiple numerologies (e.g., at least three numerologies).
  • FR2 may be associated with one or multiple numerologies (e.g., at least 2 numerologies).
  • FIG. 2 shows a system 200 with an NE 102, known as a "gNodeB" (gNB), equipped with M antennas, and K UEs 104 denoted by U_1, U_2, ..., U_K.
  • gNB gNodeB
  • K UEs 104 denoted by U_1, U_2, ..., U_K.
  • Each UE has N antennas.
  • H_k^l(t) denotes the channel at time t over frequency band l, l ∈ {1, 2, ..., L}, between the gNB and user equipment U_k.
  • H_k^l(t) is a matrix of size N × M with complex entries, i.e., H_k^l(t) ∈ C^(N×M).
  • w_k(t) ∈ C^(M×1) is a precoding vector.
  • y_k(t) is the signal received at U_k.
  • n_k(t) represents the noise vector at the receiver.
  • the gNB selects w_k(t) so as to maximize the received signal-to-noise ratio (SNR).
  • SNR received signal to noise ratio
  • Several schemes have been proposed for good selection of w k (t), most of which rely on having some knowledge about H k (t).
  • Knowledge of H_k(t) can be attained by the gNB by direct measurement (e.g., in TDD mode and assuming reciprocity of the channel), or indirectly using the information that the UE sends to the gNB (e.g., in FDD mode). In the latter case, a large amount of feedback is required to send accurate information about H_k(t). This becomes particularly important if there is a large number of antennas and/or large frequency bands.
  • For brevity, H_k^l(t) can be denoted H_k^l.
  • H_k(t) is defined as a tensor of size N × M × L composed by stacking H_k^l for all frequency bands, i.e., the entry H_k[n, m, l](t) is equal to H_k^l[n, m](t). In total, therefore, each UE feeds back information about the most recent N × M × L complex numbers to the gNB.
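A quick illustration of the resulting feedback volume, using hypothetical antenna and sub-band counts (the values 4, 32, and 13 are arbitrary examples):

```python
def full_csi_feedback_size(n: int, m: int, l_bands: int) -> int:
    """Number of complex entries in the stacked channel tensor H_k
    (size N x M x L) that a UE would need to feed back in full."""
    return n * m * l_bands

# hypothetical example: 4 RX antennas, 32 TX antennas, 13 sub-bands
print(full_csi_feedback_size(4, 32, 13))
```

Even this modest configuration already yields over a thousand complex values per report, which motivates the compression and sample-selection schemes discussed below.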
  • a group of these methods consist of two parts where the first part is deployed at the UE side and the second part is deployed at the gNB side.
  • Figure 3 illustrates a high-level structure of such a two-sided method 300.
  • Neural network (NN)-based UE and gNB sides are referred to here as M e (encoding model) and M d (decoding model) respectively.
  • the UE (302) and gNB (304) sides consist of one or a few neural network (NN) blocks which are trained using data-driven approaches.
  • the UE side is responsible for computing a latent representation (306) of the input data (308), which is to be transferred to the gNB (304) with as few bits as possible.
  • Upon receiving what has been transmitted by the UE side 302, the gNB side 304 reconstructs the information intended for the gNB.
  • the training entity can be the UE itself, the gNB, a node at the UE side, or a node at the network side.
  • a training dataset is required, which is composed of different samples corresponding to the input and expected output of the system.
  • the dataset may have samples for end-to- end mapping, for only the encoder, or for only the decoder.
  • the training dataset may be created using samples from simulations. Performance may be improved if samples are collected from the actual environment.
  • the training dataset (or collected samples) may need to be transferred to another node, e.g. from the UE to the gNB.
  • Other instances may include collecting samples for UE positioning, fingerprinting, beam management, load balancing, and UE mobility management.
  • collected samples at one node may need to be transmitted to other nodes.
  • the transmitted samples might be used for several reasons, such as: a) initial training of a model; b) updating the existing model with the new environment or with the new state of the environment; c) a set of samples for model monitoring, model selection or model switching; d) storage of the samples for later use.
  • An exemplary method and system for reducing the size of a sample set in a radio communications network is presented here.
  • An exemplary method and system is provided for determining a metric for samples of at least a set of collected samples or a set of generated samples which could be used for representing the importance and necessity of sending that sample. For example, sending the higher priority samples, instead of all samples, may help in lowering the communication cost, speeding up the transmission of important samples, and reducing data size for storage.
  • An exemplary use of the method is to select samples for Channel State Information, CSI, in a radio communications network.
  • CSI Channel State Information
  • the method may apply to other aspects of a radio communications network.
  • CSI feedback may be used to select precoding from the 5G New Radio, NR, Codebook. A summary of this is given below.
  • the gNB is equipped with a two-dimensional (2D) antenna array with N1 , N2 antenna ports per polarization placed horizontally and vertically, and communication occurs over N3 PMI sub-bands, wherein N1 , N2 and N3 are integer values.
  • a PMI sub-band consists of a set of resource blocks, each resource block consisting of a set of subcarriers.
  • 2N1N2 CSI-RS ports are utilized to enable downlink (DL) channel estimation with high resolution for the NR Rel. 15 Type-II codebook.
  • a Discrete Fourier transform (DFT)-based CSI compression of the spatial domain is applied to L dimensions per polarization, where L ≤ N1N2.
  • the indices of the 2L dimensions are referred to as the Spatial Domain (SD) basis indices.
  • the magnitude and phase values of the linear combination coefficients for each sub-band are fed back to the gNB as part of the CSI report.
  • the 2N1N2 × N3 codebook per layer l takes on the form W^l = W1 W2,l, where W1 is a 2N1N2 × 2L block-diagonal matrix (L ≤ N1N2) with two identical diagonal blocks, i.e., W1 = blkdiag(B, B), and B is an N1N2 × L matrix with columns drawn from a 2D oversampled DFT matrix.
  • W2,l is a 2L × N3 matrix, where the t-th column corresponds to the linear combination coefficients of the 2L beams in the t-th sub-band. Only the indices of the L selected columns of B are reported, along with the oversampling index taking on O1O2 values. Note that the W2,l are independent for different layers.
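As an illustrative sketch of the structure just described, a block-diagonal matrix with two identical diagonal blocks can be assembled from columns of a 2D oversampled DFT grid; the sizes N1, N2, O1, O2 and the beam indices below are arbitrary example values, not values mandated by the codebook:

```python
import numpy as np

def oversampled_dft_beam(n1, n2, o1, o2, q1, q2):
    """One column of a 2D oversampled DFT matrix, built as the
    Kronecker product of two 1D oversampled DFT beams."""
    b1 = np.exp(2j * np.pi * np.arange(n1) * q1 / (o1 * n1))
    b2 = np.exp(2j * np.pi * np.arange(n2) * q2 / (o2 * n2))
    return np.kron(b1, b2)                      # length N1*N2

def build_w1(n1, n2, o1, o2, beam_indices):
    """Block-diagonal W1 = blkdiag(B, B), one block per polarization,
    with B drawn from the 2D oversampled DFT grid."""
    B = np.stack([oversampled_dft_beam(n1, n2, o1, o2, q1, q2)
                  for (q1, q2) in beam_indices], axis=1)   # N1N2 x L
    n, L = B.shape
    w1 = np.zeros((2 * n, 2 * L), dtype=complex)
    w1[:n, :L] = B                              # first polarization
    w1[n:, L:] = B                              # identical second block
    return w1

w1 = build_w1(n1=4, n2=2, o1=4, o2=4, beam_indices=[(0, 0), (1, 2)])
```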
  • K (where K ≤ 2N1N2) beamformed CSI-RS ports are utilized in DL transmission, in order to reduce complexity.
  • the matrices W2,l follow the same structure as in the conventional NR Rel. 15 Type-II codebook, and are layer specific.
  • W1 is a K × 2L block-diagonal matrix with two identical diagonal blocks, i.e., W1 = blkdiag(E, E), where E is a K/2 × L port-selection matrix. d_PS is an RRC parameter which takes on the values {1, 2, 3, 4} under the condition d_PS ≤ min(K/2, L), whereas m_PS takes on the values {0, ..., ⌈K/(2 d_PS)⌉ - 1} and is reported as part of the UL CSI feedback overhead.
  • W1 is common across all layers.
  • m_PS parametrizes the location of the first 1 in the first column of E, whereas d_PS represents the row shift corresponding to different values of m_PS.
  • the NR Rel. 15 Type-I codebook can be depicted as a low-resolution version of the NR Rel. 15 Type-II codebook with spatial beam selection per layer-pair and phase combining only.
  • the gNB is equipped with a two-dimensional (2D) antenna array with N1 , N2 antenna ports per polarization placed horizontally and vertically and communication occurs over N3 PMI sub-bands.
  • a PMI sub-band consists of a set of resource blocks, each resource block consisting of a set of subcarriers.
  • 2N1N2 CSI-RS ports are utilized to enable DL channel estimation with high resolution for the NR Rel. 16 Type-II codebook.
  • a Discrete Fourier transform (DFT)-based CSI compression of the spatial domain is applied to L dimensions per polarization, where L ≤ N1N2.
  • DFT Discrete Fourier transform
  • each beam of the frequency-domain precoding vectors is transformed using an inverse DFT matrix to the delay domain, and the magnitude and phase values of a subset of the delay-domain coefficients are selected and fed back to the gNB as part of the CSI report.
  • the 2N1N2 × N3 codebook per layer l takes on the form W^l = W1 W̃2,l W_f^H [1], where W1 is a 2N1N2 × 2L block-diagonal matrix (L ≤ N1N2) with two identical diagonal blocks, and B is an N1N2 × L matrix with columns drawn from a 2D oversampled DFT matrix. The superscript T denotes a matrix transposition operation. Note that O1, O2 oversampling factors are assumed for the 2D DFT matrix from which matrix B is drawn, and that W1 is common across all layers.
  • W_f is an N3 × M matrix (M ≤ N3) with columns selected from a critically-sampled size-N3 DFT matrix.
  • Magnitude and phase values of an approximately p fraction of the 2LM available coefficients are reported to the gNB (p < 1) as part of the CSI report. Coefficients with zero magnitude are indicated via a per-layer bitmap. Since all coefficients reported within a layer are normalized with respect to the coefficient with the largest magnitude (strongest coefficient), the relative value of that coefficient is set to unity, and no magnitude or phase information is explicitly reported for this coefficient. Only an indication of the index of the strongest coefficient per layer is reported.
  • magnitude and phase values of a maximum of ⌈2pLM⌉ - 1 coefficients are reported per layer, leading to a significant reduction in CSI report size, compared with reporting 2N1N2 × N3 - 1 coefficients' information.
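A simplified sketch of this reporting rule (keep roughly a fraction p of the coefficients, normalize by the strongest, and signal zeroed coefficients via a bitmap); the function and its return convention are hypothetical, not the codebook's exact encoding:

```python
import numpy as np

def report_coefficients(coeffs, p):
    """Keep the strongest ~p fraction of coefficients, normalize by the
    strongest one, and build a per-layer bitmap of reported positions.

    Returns (bitmap, normalized reported values, index of the
    strongest coefficient)."""
    coeffs = np.asarray(coeffs, dtype=complex).ravel()
    k = max(1, int(np.ceil(p * coeffs.size)))
    order = np.argsort(np.abs(coeffs))[::-1][:k]     # strongest first
    bitmap = np.zeros(coeffs.size, dtype=bool)
    bitmap[order] = True
    strongest = order[0]
    normalized = coeffs / coeffs[strongest]          # strongest becomes 1
    return bitmap, normalized[bitmap], strongest

bitmap, vals, s = report_coefficients([0.1, 1.0 + 0j, 0.5, 0.05], p=0.5)
```

Because the strongest coefficient is normalized to exactly one, only its index needs reporting, matching the overhead saving described above.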
  • K (where K ≤ 2N1N2) beamformed CSI-RS ports are utilized in DL transmission, in order to reduce complexity.
  • W̃2,l and W_f,l follow the same structure as in the conventional NR Rel. 16 Type-II codebook, where both are layer specific.
  • the matrix W1 is a K × 2L block-diagonal matrix with the same structure as that in the NR Rel. 15 Type-II Port Selection codebook.
  • the Rel. 17 Type-II Port Selection codebook follows a similar structure to that of the Rel. 15 and Rel. 16 port-selection codebooks.
  • the port-selection matrix W1 supports free selection of the K ports, or more precisely the K/2 ports per polarization out of the N1N2 CSI-RS ports per polarization, i.e., ⌈log2 C(N1N2, K/2)⌉ bits are used to identify the K/2 selected ports per polarization, wherein this selection is common across all layers.
  • the method is applied to CSI feedback, where training samples are at a UE-side node and need to be transmitted to a node at the network-side.
  • training samples are at a UE-side node and need to be transmitted to a node at the network-side.
  • each of these samples is a real-valued vector of size m. These samples can be transmitted in different ways.
  • each entry of the vectors is quantized into 2^r levels and then the quantized values are transferred to the other node.
  • in this scheme, a total of K × m × r bits is needed. Increasing r increases the amount of data that needs to be transferred and, in return, a more accurate representation of the actual samples will be available at the entity receiving the samples.
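The uniform scalar quantization scheme, and its K × m × r bit cost, can be sketched as follows; the clipping range [-1, 1] is an assumption for illustration, not part of the disclosure:

```python
import numpy as np

def quantize(samples, r, lo=-1.0, hi=1.0):
    """Uniformly quantize each entry of K real vectors of size m into
    2**r levels; transferring the indices costs K * m * r bits."""
    samples = np.clip(np.asarray(samples, dtype=float), lo, hi)
    levels = 2 ** r
    step = (hi - lo) / (levels - 1)
    idx = np.round((samples - lo) / step).astype(int)  # in [0, 2**r - 1]
    return idx, idx * step + lo          # indices and dequantized values

idx, approx = quantize([[0.0, 0.5], [-1.0, 1.0]], r=3)
total_bits = idx.size * 3               # K * m * r
```

Raising r shrinks the step size (finer reconstruction) at the cost of proportionally more bits, exactly the trade-off described above.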
  • the samples correspond to conventional schemes for CSI feedback, as discussed above.
  • each eigenvector can be encoded using l bits. Transmitting a total of l × K bits, the receiving entity can use this information to reconstruct the actual eigenvectors.
  • using feedback schemes that require a higher number of feedback bits increases the amount of data that needs to be transferred and, in return, a more accurate representation of the actual samples will be available at the entity receiving the samples.
  • This feedback scheme is not limited to the methods presented above, and an additional extension can be defined which needs a higher number of feedback bits.
  • the method can be applied to methods that use the set of collected samples to construct another set of samples (which has fewer samples), for example, determining the coresets of the collected samples and sending the coreset samples instead of the original sample set.
  • this scheme does not further prioritize between samples of the constructed set so it is not clear which samples should be transmitted if it is not possible to transmit the complete constructed data set.
  • the construction of such a dataset usually requires that the method have access to the complete data set first. So, the node may need to collect all samples before it can decide which ones to keep and which ones to discard.
  • FIG. 4 is a flow chart 400 illustrating a method according to an implementation.
  • a first set of information is received or generated (402) at a first device (nb. use of the term “device” herein is non-limiting and in particular encompasses both a single unit and multiple cooperating units).
  • the method selects samples, from either the first set of information or from samples generated from a subset of the first set of information, to be transmitted to a second device.
  • the first set of information is received from the second device or from a third device different from the first or the second devices.
  • the selection is based on one or more thresholds. The thresholds may be received from the second device or a third device different from the second device.
  • the first device is a user equipment (UE).
  • the first set of information comprises one or more samples, wherein each sample comprises one or more components.
  • the samples of the first set of information are typically based on a representation of channel data during a first time-frequency-space region.
  • the first device, which collects the data, has access to a set of samples to be transmitted to the second device.
  • Each sample comprises one or more components.
  • each sample comprises one vector wherein, typically, the components of the vector represent the input and/or an expected output of the model, or a tuple of multiple vectors, for example a tuple of two vectors representing the input and expected output of a model.
  • the components of the samples comprise a first component and a second component which are based on an input and an expected output, respectively, of an artificial intelligence (AI)/machine learning (ML) model.
  • the first component could be some measurements related to channel state of the UE at the current position and the second component could be related to the coordinates of the position of the UE.
  • the first device has access to the samples sequentially, i.e., instead of having initial access to a complete set, the first device receives samples one by one.
  • the first device is configured to store all previous samples and, in other implementations, the first device has some restriction on the number of samples it can store. In the latter case, the first device is configured to determine whether it needs to store a current sample, whether it can disregard it, or whether it needs to replace one of the previously stored samples with the new sample.
  • the node has access to a set of samples and thereafter it receives new samples gradually. The samples may be received in a periodic or an aperiodic manner.
  • the next step comprises a first metric being set or obtained (404) for at least a sample of the first set of information (in some implementations the first set referred to here may be considered as a “second set” where the originally obtained sample set is the “first set”).
  • the metric is based on an output of a first function.
  • the respective input of the first function is at least one of the first or second components of the respective sample or subset of samples of the first set of information.
  • the metric is one or more of an importance measure, a weight measure representing the frequency of observation of a value of a component of the sample, a representation of the degree of novelty of a sample, and a representation of a degree to which a sample is an outlier.
  • the metric may be a weighted combination of metrics. Samples with higher novelty may have more new information for updating of the model, since the model should have already learned the previous samples, so there is less benefit in observing them again. Samples with higher novelty may therefore be given a higher priority.
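A weighted combination of per-sample metrics might look like the following sketch; the individual weights are arbitrary illustrative values, not values from this disclosure:

```python
def combined_metric(importance, novelty, outlier_score,
                    w_imp=0.5, w_nov=0.4, w_out=0.1):
    """Weighted combination of per-sample metrics (hypothetical weights).

    Novelty is given substantial weight here because a more novel
    sample carries more new information for a model update."""
    return w_imp * importance + w_nov * novelty + w_out * outlier_score

a = combined_metric(importance=0.9, novelty=0.1, outlier_score=0.0)
b = combined_metric(importance=0.9, novelty=0.8, outlier_score=0.0)
```

At equal importance, the more novel sample scores higher and would therefore be prioritized for transmission.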
  • the weight measure can directly or indirectly influence the importance metric as well.
  • the function does not assign the metric to all samples of the first set.
  • the function is a model.
  • the model is based on a neural network, NN, block which is determined based on a set of parameters, which includes a structure of a neural network or weights of a neural network where the output of the NN block is used to determine at least the importance value and the weights of the samples.
  • the model is already trained.
  • an input and an output of the NN block are based on at least one of the first or second component of the first set of information.
  • the set of parameters, and/or the model are received from the second device, or a third device, the third device being different from the second device.
  • the set of parameters, and/or the model may be received in a periodic or an aperiodic manner.
  • the neural network is a conventional neural network. In other implementations, the neural network is a convolutional network.
  • the function computes the metric based on the output of the model for the first component of a sample and the second component of a sample. This may comprise determining a difference between the output of the model and the second component, which is then used to determine the importance measure of the sample. Alternatively, it may be used to determine the novelty of the value of the sample, and/or whether the sample is an outlier.
  • Figure 6 illustrates a method 600 for determining one or more of these metrics according to implementations.
  • the first component of a sample (602) is input into the model (604), which produces an output that is compared (606) with the second component of the sample, whereupon a difference between the two is determined and one or more metrics, i.e. an importance measure, a measure of novelty or an indication of an outlier, are determined (608).
  • the importance measure is based on an error between the output of the model and an expected output associated with that input data or based on the norm of the gradient of the weights computed using a back-propagation scheme of a machine learning model.
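The error-based branch of this computation (the Figure 6 flow: first component 602 into the model 604, comparison 606, metrics 608) can be sketched as follows, assuming a generic callable model and vector-valued components; the outlier threshold is a hypothetical parameter, not a value from this description:

```python
import math


def importance_from_error(model, x, y, outlier_threshold=3.0):
    """Run the first component `x` through the model, compare the output
    with the second (expected) component `y`, and derive metrics from the
    error: a larger error suggests a more informative sample, and a very
    large error crudely flags a potential outlier."""
    y_hat = model(x)
    error = math.sqrt(sum((a - b) ** 2 for a, b in zip(y_hat, y)))
    importance = error                      # larger error -> more informative
    is_outlier = error > outlier_threshold  # illustrative outlier indication
    return importance, is_outlier
```

For instance, with an identity "model", the error simply measures how far the expected output departs from the input; in practice the model would be the (already trained) AI/ML model referenced above.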
  • the metric is based on the gradient of the required updates for the weights of the model for samples in the first set of information and/or on the joint or individual geometric properties of at least one of the first or second components of the samples.
  • additional samples are generated from one or more of the samples of the first set of information.
  • the generated samples include processed versions of one or more of the collected samples.
  • the processing comprises applying a noise removal filter.
  • new samples are generated using a coreset method. Each of the alternative methods of generating new samples may be used in combination.
  • Figure 5 illustrates a process 500 of the generation of new samples.
  • the input of the process comprises the first set of information (502).
  • the samples of the first set of information are processed (504) by, for example, noise filtering, and new samples (506) are generated.
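One purely illustrative realisation of this processing step (504) uses a moving-average filter as the noise-removal filter; the window size and function names are assumptions, and any other filter satisfying the description could be substituted:

```python
def moving_average_filter(sample, window=3):
    """Simple noise-removal filter: each entry of the sample is replaced
    by the mean of a sliding window over the sample's values."""
    half = window // 2
    out = []
    for i in range(len(sample)):
        lo, hi = max(0, i - half), min(len(sample), i + half + 1)
        out.append(sum(sample[lo:hi]) / (hi - lo))
    return out


def generate_new_samples(first_set, window=3):
    """Process each collected sample to produce new samples (506),
    i.e. processed versions of the collected samples."""
    return [moving_average_filter(s, window) for s in first_set]
```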
  • the next step comprises selecting (406) one or more samples of the previous step for inclusion in a second set of information based on a respective first metric value for each sample (NB. this “second set” may be considered in some examples as a “third set”, where the aforementioned “first set” is considered to be a “second set”).
  • the samples of the second set of information comprise one or more of: a subset of samples of the first set of information, samples generated based on at least a subset of the first set of information, and samples that are not currently in the first set of information.
  • the samples of the second set of information are updated based on the samples of the first set of information.
  • the selection is based on one or more thresholds. The thresholds may be received from the second device or a third device different from the second device.
  • a first metric might then be updated (408) for some samples of a second set of information based on an output of the first function.
  • the respective input of the first function is at least one of the first or second components of the subset of samples of the second set of information.
  • a further selection is made of a subset of the samples of the second set of information, which are to be transmitted to the second device.
  • this subset is selected (410) based on at least one of: a respective first metric associated with each sample, and a set of requirements determined at the first device.
  • the additional set of requirements may comprise one or more of: available bandwidth, storage capacity at the first device, age of the data, and other resources for transmission.
  • the set of requirements may be received or configured by the second device, or a third device which is different from the second device.
  • One or more thresholds may be used to select the samples to be transmitted. In implementations, these thresholds are determined by the set of requirements.
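A simple threshold-plus-budget selection consistent with these bullets might look as follows. The threshold and the sample budget are stand-ins for the set of requirements (available bandwidth, storage capacity, etc.) which, per the description, may be received from or configured by the second device or a third device:

```python
def select_for_transmission(second_set, metric, threshold, max_samples):
    """Select samples whose first metric meets a threshold, then keep at
    most `max_samples` of them (highest metric first) as the subset to be
    transmitted to the second device."""
    eligible = [s for s in second_set if metric(s) >= threshold]
    eligible.sort(key=metric, reverse=True)
    return eligible[:max_samples]
```

For example, with samples scored by their own value, a threshold of 3 and a budget of 2 keep only the two highest-scoring samples at or above 3.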
  • Figure 7 illustrates a process 700 of creation of the subsets according to implementations.
  • the samples of this set are processed (704) to produce new samples (706).
  • a subset of the first set of information is taken and combined with the new samples to produce the second set (708).
  • a subset of the second set is then selected (710) for transmission.
  • the transmitted data may be compressed.
  • the metric is further used to send samples with different protection levels.
  • the first function is further based on at least a third set of information comprising a set of samples of at least one of the first and the second component where the first and the second components are based on the input and the expected output of an AI/ML model.
  • the first function may be based on a statistical similarity of a sample of the first set of information and the third set of information.
  • the third set of information may include a set of channel data representations during a second time-frequency-space region and used to train the AI/ML model.
  • the statistical similarity may be based on one or more of a Kolmogorov-Smirnov (KS) Test, Anderson-Darling Test or an anomaly detection algorithm.
  • the K-S test is a non-parametric test and it can further be classified into the one-sample K-S test and the two-sample K-S test.
  • the one-sample K-S test can be used to determine the probability that a set of data samples belongs to a particular reference distribution.
  • the two-sample K-S test is useful to determine the probability that two sets of samples belong to the same distribution.
  • by applying the K-S test to the input CSI samples at the UE, it can be determined whether the samples belong to the distribution(s) for which the AI/ML model is trained.
  • if the samples do not belong to the desired distribution(s), the UE can initiate the model update procedure.
  • the “desired” distribution(s) implies the input data distribution(s) for which the AI/ML model has been trained.
  • the Anderson-Darling Test is a statistical test to determine whether a given sample comes from a given specific distribution. It requires fewer samples than the K-S test for a reliable measurement.
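For illustration, the two-sample K-S statistic and a hypothetical model-update trigger can be sketched in a few lines. The statistic is the maximum distance between the two empirical CDFs; the drift threshold is an assumed configuration value, not a value from this description:

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    distance between the empirical CDFs of the two sample sets."""
    def ecdf(data, x):
        # fraction of data points less than or equal to x
        return sum(1 for v in data if v <= x) / len(data)

    points = sorted(set(sample_a) | set(sample_b))
    return max(abs(ecdf(sample_a, x) - ecdf(sample_b, x)) for x in points)


def needs_model_update(new_csi, training_csi, threshold=0.3):
    """Hypothetical trigger: if the statistic exceeds a (configured)
    threshold, the incoming CSI samples likely no longer follow the
    distribution the AI/ML model was trained on, and the UE may initiate
    the model update procedure."""
    return ks_statistic(new_csi, training_csi) > threshold
```

Two identical sample sets yield a statistic of 0, while fully disjoint value ranges yield 1, the maximum possible drift indication.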
  • An exemplary purpose of the above implementations is to set a priority for samples of input data.
  • the first device knows how the samples will be used. For example, if the samples are intended for model training, it knows the structure of the model that is to be trained. In another implementation, if the samples are intended for updating an already trained model, it knows the latest copy of the trained model, completely or in part. In another implementation, if the samples are going to be used at the other node for model monitoring, the node has information about the model monitoring procedure. In these cases, the node collecting the data can use the extra information to determine how valuable the current sample is for the receiver, and it can therefore assign a value as the importance metric of the current sample. The node can then store the sample locally if it has enough space, discard it if its importance is lower than some threshold value, or replace one of the previously stored samples (stored locally) with the new sample, which has higher importance.
  • the first device updates the importance level of the previously collected samples based on the new samples it receives or based on other metrics that have changed. For example, some samples might be important at a certain time but, as time passes, the samples might become less valuable, in which case the first device can change the importance level of the samples accordingly.
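An illustrative update rule of this kind is sketched below. The exponential decay factor and the similarity radius are assumptions; samples are taken to be scalars for simplicity:

```python
def update_importance(stored, new_sample, decay=0.9, similarity_radius=0.5):
    """Re-evaluate stored (importance, sample) pairs when a new sample
    arrives: every importance decays with time, and samples close in value
    to the new sample lose additional importance, since the new sample
    already carries much of their information."""
    updated = []
    for imp, sample in stored:
        imp *= decay  # age-based decay: older samples become less valuable
        if abs(sample - new_sample) < similarity_radius:
            imp *= 0.5  # largely redundant with the newly received sample
        updated.append((imp, sample))
    return updated
```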
  • Figure 8 illustrates a device 800 for implementing a method according to any of the previous implementations.
  • the device 800 comprises a processor 802, a memory 804, a receiver 806 and a transmitter 808.
  • the processor 802 of the device 800 may support wireless communication in accordance with examples as disclosed herein.
  • the processor 802 includes at least one controller coupled with at least one memory, and the at least one controller is configured to and/or operable to cause the processor to one or more of receive or generate a first set of information, wherein the first set of information comprises one or more samples, and wherein each sample comprises one or more components; set a first metric for at least a sample where the sample is one of the samples of the first set of information or generated from a subset of samples of the first set of information based on an output of a first function, the respective input of the first function being at least one of the first or second components of the respective sample or subset of samples of the first set of information; select one or more samples of the previous step for inclusion in a second set of information based on a respective first metric value for each sample; and transmit at least a subset of samples of the second set of information to a second device, wherein the subset is selected based on at least one of a respective first metric associated with each sample and a set of requirements determined at the first device.
  • implementations discussed herein include at least the following:
  • the techniques described herein relate to a user equipment (UE) for wireless communication, including: at least one memory; and at least one processor coupled with the at least one memory and configured to cause the UE to: one or more of receive or generate a first set of information, wherein the first set of information includes one or more samples, and wherein each sample includes one or more components; set a first metric for at least a sample where the sample is one of the samples of the first set of information or generated from a subset of samples of the first set of information based on an output of a first function, a respective input of the first function being at least one of first or second components of the respective sample or subset of samples of the first set of information; select one or more samples for inclusion in a second set of information based on a respective first metric value for each sample; and transmit at least a subset of samples of the second set of information to a second device, wherein the subset is selected based on at least one of a first metric associated with each sample and a set of requirements determined at the first device.
  • the techniques described herein relate to a UE, wherein, to transmit the at least a subset of samples, the at least one processor is configured to cause the UE to select the at least a subset of samples based at least in part on a first metric that is either the metric set in the setting step or a further metric derived from that metric.
  • the techniques described herein relate to a UE, wherein generating samples includes processing samples from the first set of information.
  • the techniques described herein relate to a UE, wherein the processing includes noise filtering.
  • the techniques described herein relate to a UE, wherein the first function is a model.
  • the techniques described herein relate to a UE, wherein the model is an artificial intelligence/machine learning, AI/ML, model.
  • the techniques described herein relate to a UE, wherein the model is already trained.
  • each sample includes a vector with a first component and a second component, the first component being an input of the model and the second component being an output of the model.
  • the techniques described herein relate to a UE, where the model is designed to generate a desired output based on the first component of the samples.
  • the techniques described herein relate to a UE, where the function computes the metric based on the output of the model for the first component of a sample and the second component of a sample.
  • the techniques described herein relate to a UE, wherein the one or more samples include at least one of a first and a second component where the first and the second components are based on an input and an expected output of the model respectively.
  • the techniques described herein relate to a UE, wherein the function is a model including an ML or AI model, for example a model including a neural network (NN) block which is determined based on a set of parameters including a structure of a neural network or weights of a neural network, where the output of the NN block is used to determine at least an importance value and the weights of the samples.
  • the techniques described herein relate to a UE, wherein the set of parameters are received from one of the second device or a third device, where the third device is different from the second device.
  • the techniques described herein relate to a UE, wherein the model is received from and/or configured by one of the second device or a third device, where the third device is different from the second device.
  • the techniques described herein relate to a UE, wherein the model is received in a periodic or an aperiodic manner.
  • the techniques described herein relate to a UE, wherein an input and an output of the NN block are based on at least one of the first or second component of the first set of information.
  • the techniques described herein relate to a UE, wherein the function computes the metric based on a gradient of updates for the weights of the model for samples in the first set of information.
  • the techniques described herein relate to a UE, wherein the metric is one of an importance level of the sample, an age of the sample, and a weight of the sample, wherein the weight is an indication of a frequency of occurrence of a value of a component of the sample.
  • the techniques described herein relate to a processor for wireless communication, including: at least one controller coupled with at least one memory and configured to cause the processor to: one or more of receive or generate a first set of information, wherein the first set of information includes one or more samples, and wherein each sample includes one or more components; set a first metric for at least a sample where the sample is one of the samples of the first set of information or generated from a subset of samples of the first set of information based on an output of a first function, a respective input of the first function being at least one of first or second components of the respective sample or subset of samples of the first set of information; select one or more samples for inclusion in a second set of information based on a respective first metric value for each sample; and transmit at least a subset of samples of the second set of information to a second device, wherein the subset is selected based on at least one of a first metric associated with each sample and a set of requirements determined at the first device.
  • the techniques described herein relate to a method performed by a user equipment (UE), the method including: one or more of receiving or generating a first set of information, wherein the first set of information includes one or more samples, and wherein each sample includes one or more components; setting a first metric for at least a sample where the sample is one of the samples of the first set of information or generated from a subset of samples of the first set of information based on an output of a first function, a respective input of the first function being at least one of first or second components of the respective sample or subset of samples of the first set of information; selecting one or more samples for inclusion in a second set of information based on a respective first metric value for each sample; and transmitting at least a subset of samples of the second set of information to a second device, wherein the subset is selected based on at least one of a first metric associated with each sample and a set of requirements determined at the first device.
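The claimed sequence of steps (set a first metric per sample, select samples into a second set of information, then transmit a subset under a requirement) can be summarised in a compact sketch. The threshold and budget arguments are illustrative stand-ins for the set of requirements determined at the first device:

```python
def transmit_pipeline(first_set, metric_fn, select_threshold, budget):
    """End-to-end sketch of the described method at the first device."""
    # Step 1: set a first metric for each sample of the first set.
    scored = [(metric_fn(s), s) for s in first_set]
    # Step 2: select samples for inclusion in a second set of information,
    # based on the respective first metric value.
    second_set = [(m, s) for m, s in scored if m >= select_threshold]
    # Step 3: transmit a subset of the second set, selected here under a
    # simple budget requirement (highest metric first).
    second_set.sort(key=lambda pair: pair[0], reverse=True)
    return [s for _, s in second_set[:budget]]
```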

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Power Engineering (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The described techniques include receiving or generating a first set of information, wherein the first set of information comprises one or more samples, wherein each sample comprises one or more components. A first metric is set for at least a sample where the sample is one of the samples of the first set of information or is generated from a subset of samples of the first set of information based on an output of a first function. One or more samples of the previous set is selected for inclusion in a second set of information based on a respective first metric value for each sample. At least a subset of samples of the second set of information is transmitted to a second device, wherein the subset is selected based on a respective first metric associated with each sample and/or a set of requirements determined at the first device.

Description

METHOD AND SYSTEM FOR TRANSMISSION OF SAMPLES
Related Application
[0001] This application claims priority to U.S. Provisional Application Serial No. 63/484,418 filed 10 February 2023 entitled “METHOD AND SYSTEM FOR TRANSMISSION OF SAMPLES,” the disclosure of which is incorporated by reference herein in its entirety.
Technical field
[0002] The technical field is the selection of information samples for transmission from a first device to a second device in a radio communications network.
Background
[0003] 3GPP 5G New Radio (NR) uses massive Multiple Input Multiple Output (MIMO) to boost transmission capabilities. Both Base Stations (BSs) and User Equipment (UE) are provided with multiple antennae. To implement MIMO, knowledge of channel conditions is needed in order to precode the antennae in the respective antenna arrays.
Summary
[0004] According to an aspect, there is provided a method implemented in a radio communications network at a first device. The method comprises receiving or generating a first set of information, wherein the first set of information comprises one or more samples, wherein each sample comprises one or more components. A first metric is set for at least a sample where the sample is one of the samples of the first set of information or is generated from a subset of samples of the first set of information based on an output of a first function. The respective input of the first function is at least one of components of the respective sample or subset of samples of the first set of information. One or more samples of the previous set is selected for inclusion in a second set of information based on a respective first metric value for each sample. At least a subset of samples of the second set of information is transmitted to a second device, wherein the subset is selected based on at least one of: a respective first metric associated with each sample, and a set of requirements determined at the first device.
[0005] Other aspects are set out in the appended claims. Brief description of drawings
[0006] Figure 1 illustrates an example of a wireless communications system 100 in accordance with aspects of the present disclosure;
[0007] Figure 2 illustrates a system 200;
[0008] Figure 3 illustrates a high-level structure of such a two-sided method 300;
[0009] Figure 4 is a flow chart 400 illustrating a method according to an implementation;
[0010] Figure 5 illustrates a process 500 of the generation of new samples;
[0011] Figure 6 illustrates a method 600 for determining one or more of these metrics according to implementations;
[0012] Figure 7 illustrates a process 700 of creation of the subsets according to implementations; and
[0013] Figure 8 is a diagram illustrating the components of a first device according to an implementation.
Detailed description
[0014] Figure 1 illustrates an example of a wireless communications system 100 in accordance with aspects of the present disclosure. The wireless communications system 100 may include one or more network equipment (NE) NE 102, one or more UE 104, and a core network (CN) 106. The wireless communications system 100 may support various radio access technologies. In some implementations, the wireless communications system 100 may be a 4G network, such as an LTE network or an LTE- Advanced (LTE-A) network. In some other implementations, the wireless communications system 100 may be a NR network, such as a 5G network, a 5G- Advanced (5G-A) network, or a 5G ultrawideband (5G-UWB) network. In other implementations, the wireless communications system 100 may be a combination of a 4G network and a 5G network, or other suitable radio access technology including Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20. The wireless communications system 100 may support radio access technologies beyond 5G, for example, 6G. Additionally, the wireless communications system 100 may support technologies, such as time division multiple access (TDMA), frequency division multiple access (FDMA), or code division multiple access (CDMA), etc. [0015] The one or more NE 102 may be dispersed throughout a geographic region to form the wireless communications system 100. One or more of the NE 102 described herein may be or include or may be referred to as a network node, a base station, a network element, a network function, a network entity, a radio access network (RAN), a NodeB, an eNodeB (eNB), a next-generation NodeB (gNB), or other suitable terminology. An NE 102 and a UE 104 may communicate via a communication link, which may be a wireless or wired connection. For example, an NE 102 and a UE 104 may perform wireless communication (e.g., receive signaling, transmit signaling) over a Uu interface.
[0016] An NE 102 may provide a geographic coverage area for which the NE 102 may support services for one or more UEs 104 within the geographic coverage area. For example, an NE 102 and a UE 104 may support wireless communication of signals related to services (e.g., voice, video, packet data, messaging, broadcast, etc.) according to one or multiple radio access technologies. In some implementations, an NE 102 may be moveable, for example, a satellite associated with a non-terrestrial network (NTN). In some implementations, different geographic coverage areas associated with the same or different radio access technologies may overlap, but the different geographic coverage areas may be associated with different NE 102.
[0017] The one or more UEs 104 may be dispersed throughout a geographic region of the wireless communications system 100. A UE 104 may include or may be referred to as a remote unit, a mobile device, a wireless device, a remote device, a subscriber device, a transmitter device, a receiver device, or some other suitable terminology. In some implementations, the UE 104 may be referred to as a unit, a station, a terminal, or a client, among other examples. Additionally, or alternatively, the UE 104 may be referred to as an Internet-of-Things (loT) device, an Internet-of-Everything (loE) device, or machine-type communication (MTC) device, among other examples.
[0018] A UE 104 may be able to support wireless communication directly with other UEs 104 over a communication link. For example, a UE 104 may support wireless communication directly with another UE 104 over a device-to-device (D2D) communication link. In some implementations, such as vehicle-to-vehicle (V2V) deployments, vehicle-to-everything (V2X) deployments, or cellular-V2X deployments, the communication link may be referred to as a sidelink. For example, a UE 104 may support wireless communication directly with another UE 104 over a PC5 interface.
[0019] An NE 102 may support communications with the CN 106, or with another NE 102, or both. For example, an NE 102 may interface with other NE 102 or the CN 106 through one or more backhaul links (e.g., S1, N2, N6, or other network interface). In some implementations, the NE 102 may communicate with each other directly. In some other implementations, the NE 102 may communicate with each other indirectly (e.g., via the CN 106). In some implementations, one or more NE 102 may include subcomponents, such as an access network entity, which may be an example of an access node controller (ANC). An ANC may communicate with the one or more UEs 104 through one or more other access network transmission entities, which may be referred to as radio heads, smart radio heads, or transmission-reception points (TRPs).
[0020] The CN 106 may support user authentication, access authorization, tracking, connectivity, and other access, routing, or mobility functions. The CN 106 may be an evolved packet core (EPC), or a 5G core (5GC), which may include a control plane entity that manages access and mobility (e.g., a mobility management entity (MME), an access and mobility management functions (AMF)) and a user plane entity that routes packets or interconnects to external networks (e.g., a serving gateway (S-GW), a packet data network (PDN) gateway (P-GW), or a user plane function (UPF)). In some implementations, the control plane entity may manage non-access stratum (NAS) functions, such as mobility, authentication, and bearer management (e.g., data bearers, signal bearers, etc.) for the one or more UEs 104 served by the one or more NE 102 associated with the CN 106.
[0021] The CN 106 may communicate with a packet data network over one or more backhaul links (e.g., via an S1 , N2, N6, or other network interface). The packet data network may include an application server. In some implementations, one or more UEs 104 may communicate with the application server. A UE 104 may establish a session (e.g., a protocol data unit (PDU) session, or the like) with the CN 106 via an NE 102. The CN 106 may route traffic (e.g., control information, data, and the like) between the UE 104 and the application server using the established session (e.g., the established PDU session). The PDU session may be an example of a logical connection between the UE 104 and the CN 106 (e.g., one or more network functions of the CN 106).
[0022] In the wireless communications system 100, the NEs 102 and the UEs 104 may use resources of the wireless communications system 100 (e.g., time resources (e.g., symbols, slots, subframes, frames, or the like) or frequency resources (e.g., subcarriers, carriers)) to perform various operations (e.g., wireless communications). In some implementations, the NEs 102 and the UEs 104 may support different resource structures. For example, the NEs 102 and the UEs 104 may support different frame structures. In some implementations, such as in 4G, the NEs 102 and the UEs 104 may support a single frame structure. In some other implementations, such as in 5G and among other suitable radio access technologies, the NEs 102 and the UEs 104 may support various frame structures (i.e., multiple frame structures). The NEs 102 and the UEs 104 may support various frame structures based on one or more numerologies.
[0023] One or more numerologies may be supported in the wireless communications system 100, and a numerology may include a subcarrier spacing and a cyclic prefix. A first numerology (e.g., μ=0) may be associated with a first subcarrier spacing (e.g., 15 kHz) and a normal cyclic prefix. In some implementations, the first numerology (e.g., μ=0) associated with the first subcarrier spacing (e.g., 15 kHz) may utilize one slot per subframe. A second numerology (e.g., μ=1) may be associated with a second subcarrier spacing (e.g., 30 kHz) and a normal cyclic prefix. A third numerology (e.g., μ=2) may be associated with a third subcarrier spacing (e.g., 60 kHz) and a normal cyclic prefix or an extended cyclic prefix. A fourth numerology (e.g., μ=3) may be associated with a fourth subcarrier spacing (e.g., 120 kHz) and a normal cyclic prefix. A fifth numerology (e.g., μ=4) may be associated with a fifth subcarrier spacing (e.g., 240 kHz) and a normal cyclic prefix.
[0024] A time interval of a resource (e.g., a communication resource) may be organized according to frames (also referred to as radio frames). Each frame may have a duration, for example, a 10 millisecond (ms) duration. In some implementations, each frame may include multiple subframes. For example, each frame may include 10 subframes, and each subframe may have a duration, for example, a 1 ms duration. In some implementations, each frame may have the same duration. In some implementations, each subframe of a frame may have the same duration. [0025] Additionally or alternatively, a time interval of a resource (e.g., a communication resource) may be organized according to slots. For example, a subframe may include a number (e.g., quantity) of slots. The number of slots in each subframe may also depend on the one or more numerologies supported in the wireless communications system 100. For instance, the first, second, third, fourth, and fifth numerologies (i.e., μ=0, μ=1, μ=2, μ=3, μ=4) associated with respective subcarrier spacings of 15 kHz, 30 kHz, 60 kHz, 120 kHz, and 240 kHz may utilize a single slot per subframe, two slots per subframe, four slots per subframe, eight slots per subframe, and 16 slots per subframe, respectively. Each slot may include a number (e.g., quantity) of symbols (e.g., OFDM symbols). In some implementations, the number (e.g., quantity) of slots for a subframe may depend on a numerology. For a normal cyclic prefix, a slot may include 14 symbols. For an extended cyclic prefix (e.g., applicable for 60 kHz subcarrier spacing), a slot may include 12 symbols. The relationship between the number of symbols per slot, the number of slots per subframe, and the number of slots per frame for a normal cyclic prefix and an extended cyclic prefix may depend on a numerology.
It should be understood that reference to a first numerology (e.g., μ=0) associated with a first subcarrier spacing (e.g., 15 kHz) may be used interchangeably between subframes and slots.
[0026] In the wireless communications system 100, an electromagnetic (EM) spectrum may be split, based on frequency or wavelength, into various classes, frequency bands, frequency channels, etc. By way of example, the wireless communications system 100 may support one or multiple operating frequency bands, such as frequency range designations FR1 (410 MHz - 7.125 GHz), FR2 (24.25 GHz - 52.6 GHz), FR3 (7.125 GHz - 24.25 GHz), FR4 (52.6 GHz - 114.25 GHz), FR4a or FR4-1 (52.6 GHz - 71 GHz), and FR5 (114.25 GHz - 300 GHz). In some implementations, the NEs 102 and the UEs 104 may perform wireless communications over one or more of the operating frequency bands. In some implementations, FR1 may be used by the NEs 102 and the UEs 104, among other equipment or devices for cellular communications traffic (e.g., control information, data). In some implementations, FR2 may be used by the NEs 102 and the UEs 104, among other equipment or devices for short-range, high data rate capabilities. [0027] FR1 may be associated with one or multiple numerologies (e.g., at least three numerologies). For example, FR1 may be associated with a first numerology (e.g., μ=0), which includes 15 kHz subcarrier spacing; a second numerology (e.g., μ=1), which includes 30 kHz subcarrier spacing; and a third numerology (e.g., μ=2), which includes 60 kHz subcarrier spacing. FR2 may be associated with one or multiple numerologies (e.g., at least 2 numerologies). For example, FR2 may be associated with a third numerology (e.g., μ=2), which includes 60 kHz subcarrier spacing; and a fourth numerology (e.g., μ=3), which includes 120 kHz subcarrier spacing.
[0028] Figure 2 shows a system 200 with a NE 102, known as a "g node B" (gNB), equipped with M antennae, and K UEs 104 denoted by U1, U2, ..., UK. Each UE has N antennae. Transmission can be modelled according to equation 1:

y_k^l(t) = H_k^l(t) w_k^l(t) x_k^l(t) + n_k^l(t) [Equation 1]

H_k^l(t) denotes the channel at time t over frequency band l, l ∈ {1, 2, ..., L}, between the gNB and user equipment Uk. H_k^l(t) is a matrix of size N × M with complex entries, i.e., H_k^l(t) ∈ C^(N×M). x_k^l(t) represents a message to be sent by the gNB, 102, to a UE Uk (104), where k ∈ {1, 2, ..., K}, at time t and frequency band l. w_k^l(t) ∈ C^(M×1) is a precoding vector. y_k^l(t) is the signal received at Uk. n_k^l(t) represents the noise vector at the receiver.
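Equation 1 can be illustrated with a minimal numpy sketch. All names and the antenna dimensions (N=2 receive, M=4 transmit) are illustrative assumptions, not taken from the source; a single user, frequency band, and symbol are shown.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 2, 4  # illustrative receive/transmit antenna counts

# Channel matrix H_k^l(t): N x M complex entries
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)

w = np.ones((M, 1), dtype=complex) / np.sqrt(M)  # precoding vector w_k^l(t)
x = np.array([[1.0 + 0.0j]])                     # transmitted symbol x_k^l(t)
n = 0.01 * (rng.standard_normal((N, 1)) + 1j * rng.standard_normal((N, 1)))

# Received signal per Equation 1: y = H w x + n
y = H @ w @ x + n
print(y.shape)  # (2, 1)
```

The received vector has one entry per receive antenna, matching the N × 1 shape implied by the matrix dimensions in Equation 1.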
[0029] To improve the achievable rate of the link, the gNB selects w_k^l(t) so as to maximize the received signal to noise ratio (SNR). Several schemes have been proposed for good selection of w_k^l(t), most of which rely on having some knowledge of H_k^l(t). Knowledge of H_k^l(t) can be attained by the gNB by direct measurement (e.g., in TDD mode and assuming reciprocity of the channel), or indirectly using the information that the UE sends to the gNB (e.g., in FDD mode). In the latter case, a large amount of feedback is required to send accurate information about H_k^l(t). This becomes particularly important if there is a large number of antennae and/or a large number of frequency bands.
[0030] For simplicity in description, only a single time slot is considered, but the proposed scheme can be further extended to cases with more than a single time slot. Without loss of generality, therefore, H_k^l(t) can be denoted using H_k^l. H_k is defined as a matrix of size N × M × L composed by stacking H_k^l for all frequency bands, i.e., the entry H_k[n, m, l] is equal to H_k^l[n, m]. In total, therefore, each UE is feeding back the information about the most recent N × M × L complex numbers to the gNB.
[0031] Several methods have been proposed to reduce the rate of required feedback. A group of these methods, usually referred to as two-sided methods, consist of two parts where the first part is deployed at the UE side and the second part is deployed at the gNB side. Figure 3 illustrates a high-level structure of such a two-sided method 300. Neural network (NN)-based UE and gNB sides are referred to here as Me (encoding model) and Md (decoding model) respectively.
[0032] The UE (302) and gNB (304) sides consist of one or a few neural network (NN) blocks which are trained using data-driven approaches. The UE side is responsible for computing a latent representation (306) of the input data (308), which is to be transferred to the gNB (304) with as few bits as possible. On receiving what has been transmitted by the UE side 302, the gNB side 304 reconstructs the information intended to be conveyed to the gNB.
[0033] There are several methods that can be used to train the NN modules at the UE and gNB sides including centralized training, simultaneous training and separate training. Similarly, updating a two-sided model can be carried out centrally on one entity, on different entities but simultaneously, or separately. The training entity can be the UE itself, the gNB, a node at the UE side, or a node at the network side.
[0034] In order to train the model, a training dataset is required, which is composed of different samples corresponding to the input and expected output of the system. Depending on the training scheme, the dataset may have samples for end-to- end mapping, for only the encoder, or for only the decoder. The training dataset may be created using samples from simulations. Performance may be improved if samples are collected from the actual environment.
[0035] Depending on the training entity, therefore, the training dataset (or collected samples) may need to be transferred to another node, e.g. from the UE to the gNB. This represents one of potentially many instances in which it may be necessary to transmit samples between nodes. Other instances may include collecting samples for UE positioning, fingerprinting, beam management, load balancing, and UE mobility management. In all of these instances, collected samples at one node may need to be transmitted to other nodes. The transmitted samples might be used for several reasons, such as: a) initial training of a model, b) updating the existing model with the new environment or with the new state of the environment, c) a set of samples for model monitoring, model selection or model switching, or d) storage of the samples for later use.
[0036] In many of the potential instances, it is important to reduce the size of the sample set which needs to be transmitted. For example, when the data needs to be transmitted over a resource-constrained wireless link, the data may need to be transmitted as quickly as possible due to delay constraints.
[0037] There is presented here an exemplary method and system for reducing the size of a sample set in a radio communications network. An exemplary method and system is provided for determining a metric for samples of at least a set of collected samples or a set of generated samples, which can be used to represent the importance and necessity of sending each sample. For example, sending the higher priority samples, instead of all samples, may help in lowering the communication cost, speeding up the transmission of important samples, and reducing data size for storage.
[0038] An exemplary use of the method is to select samples for Channel State Information, CSI, in a radio communications network. However, the method may apply to other aspects of a radio communications network.
[0039] Considering the CSI related example, CSI feedback may be used to select precoding from the 5G New Radio, NR, Codebook. A summary of this is given below.
NR Rel. 15 Type-II Codebook
[0040] In this instance, the gNB is equipped with a two-dimensional (2D) antenna array with N1, N2 antenna ports per polarization placed horizontally and vertically, and communication occurs over N3 PMI sub-bands, wherein N1, N2 and N3 are integer values. A PMI sub-band consists of a set of resource blocks, each resource block consisting of a set of subcarriers. In such a case, 2N1N2 CSI-RS ports are utilized to enable downlink (DL) channel estimation with high resolution for NR Rel. 15 Type-II codebook. In order to reduce the uplink (UL) feedback overhead, a Discrete Fourier transform (DFT)-based CSI compression of the spatial domain is applied to L dimensions per polarization, where L < N1N2. In the sequel the indices of the 2L dimensions are referred to as the Spatial Domain (SD) basis indices. The magnitude and phase values of the linear combination coefficients for each sub-band are fed back to the gNB as part of the CSI report. The 2N1N2×N3 codebook per layer l takes on the form

W_l = W_1 W_2,l

where W_1 is a 2N1N2×2L block-diagonal matrix (L < N1N2) with two identical diagonal blocks, i.e.,

W_1 = [B 0; 0 B],

and B = [b_0, b_1, ..., b_{L-1}] is an N1N2×L matrix with columns drawn from a 2D oversampled DFT matrix, i.e., each column is of the form b = v_{m1} ⊗ u_{m2}, with u_{m2} = [1, e^{j2π m2/(O2 N2)}, ..., e^{j2π m2 (N2−1)/(O2 N2)}]^T and v_{m1} = [1, e^{j2π m1/(O1 N1)}, ..., e^{j2π m1 (N1−1)/(O1 N1)}]^T, where the superscript T denotes a matrix transposition operation. O1, O2 oversampling factors are assumed for the 2D DFT matrix from which matrix B is drawn. W_1 is common across all layers. W_2,l is a 2L×N3 matrix, where the i-th column corresponds to the linear combination coefficients of the 2L beams in the i-th sub-band. Only the indices of the L selected columns of B are reported, along with the oversampling index taking on O1O2 values. Note that W_2,l are independent for different layers.
NR Rel. 15 Type-II Port Selection Codebook
[0041] For Type-II Port Selection codebook, only K (where K < 2N1N2) beamformed CSI-RS ports are utilized in DL transmission, in order to reduce complexity. The K×N3 codebook matrix per layer takes on the form [1]

W_l = W_1^PS W_2,l.

[0042] Here, W_2,l follows the same structure as the conventional NR Rel. 15 Type-II Codebook, and is layer specific. W_1^PS is a K×2L block-diagonal matrix with two identical diagonal blocks, i.e.,

W_1^PS = [E 0; 0 E],

where E is a K/2×L port-selection matrix whose i-th column is a length-K/2 unit vector with its single 1 in row (mPS dPS + i) mod K/2. dPS is an RRC parameter which takes on the values {1, 2, 3, 4} under the condition dPS ≤ min(K/2, L), whereas mPS takes on the values {0, 1, ..., ⌈K/(2 dPS)⌉ − 1} and is reported as part of the UL CSI feedback overhead. W_1^PS is common across all layers.

For K=16, L=4 and dPS=1, there are 8 possible realizations of E, corresponding to mPS = {0, 1, ..., 7}. When dPS=2, there are 4 possible realizations of E, corresponding to mPS = {0, 1, 2, 3}; when dPS=3, there are 3 possible realizations, corresponding to mPS = {0, 1, 2}; and when dPS=4, there are 2 possible realizations, corresponding to mPS = {0, 1}. To summarize, mPS parametrizes the location of the first 1 in the first column of E, whereas dPS represents the row shift corresponding to different values of mPS.
NR Rel. 15 Type-I Codebook
[0043] NR Rel. 15 Type-I codebook is the baseline codebook for NR, with a variety of configurations. The most common utility of Rel. 15 Type-I codebook is a special case of NR Rel. 15 Type-II codebook with L=1 for RI=1,2, wherein a phase coupling value is reported for each sub-band, i.e., W_2,l is 2×N3, with the first row equal to [1, 1, ..., 1] and the second row equal to [e^{j2πφ_0}, ..., e^{j2πφ_{N3−1}}]. Under specific configurations, φ_0 = φ_1 = ... = φ_{N3−1}, i.e., wideband reporting. For RI>2 different beams are used for each pair of layers. NR Rel. 15 Type-I codebook can be depicted as a low-resolution version of NR Rel. 15 Type-II codebook with spatial beam selection per layer-pair and phase combining only.
NR Rel. 16 Type-II Codebook
[0044] In this instance, the gNB is equipped with a two-dimensional (2D) antenna array with N1, N2 antenna ports per polarization placed horizontally and vertically, and communication occurs over N3 PMI sub-bands. A PMI sub-band consists of a set of resource blocks, each resource block consisting of a set of subcarriers. In such case, 2N1N2 CSI-RS ports are utilized to enable DL channel estimation with high resolution for NR Rel. 16 Type-II codebook. In order to reduce the UL feedback overhead, a Discrete Fourier transform (DFT)-based CSI compression of the spatial domain is applied to L dimensions per polarization, where L < N1N2. Similarly, additional compression in the frequency domain is applied, where each beam of the frequency-domain precoding vectors is transformed using an inverse DFT matrix to the delay domain, and the magnitude and phase values of a subset of the delay-domain coefficients are selected and fed back to the gNB as part of the CSI report. The 2N1N2×N3 codebook per layer takes on the form [1]

W_l = W_1 W_2,l W_f,l^H

where W_1 is a 2N1N2×2L block-diagonal matrix (L < N1N2) with two identical diagonal blocks, i.e.,

W_1 = [B 0; 0 B],

and B = [b_0, b_1, ..., b_{L-1}] is an N1N2×L matrix with columns drawn from a 2D oversampled DFT matrix, where the superscript T denotes a matrix transposition operation. Note that O1, O2 oversampling factors are assumed for the 2D DFT matrix from which matrix B is drawn. Note that W_1 is common across all layers. W_f,l is an N3×M matrix (M < N3) with columns selected from a critically-sampled size-N3 DFT matrix, as follows

W_f,l = [f_0, f_1, ..., f_{M-1}].
[0045] Only the indices of the L selected columns of B are reported, along with the oversampling index taking on O1O2 values. Similarly, for W_f,l, only the indices of the M selected columns out of the predefined size-N3 DFT matrix are reported. In the sequel the indices of the M dimensions are referred to as the selected Frequency Domain (FD) basis indices. Hence, L, M represent the equivalent spatial and frequency dimensions after compression, respectively. Finally, the 2L×M matrix W_2,l represents the linear combination coefficients (LCCs) of the spatial and frequency DFT-basis vectors. Both W_2,l and W_f,l are selected independently for different layers. Magnitude and phase values of an approximately p fraction of the 2LM available coefficients are reported to the gNB (p<1) as part of the CSI report. Coefficients with zero magnitude are indicated via a per-layer bitmap. Since all coefficients reported within a layer are normalized with respect to the coefficient with the largest magnitude (strongest coefficient), the relative value of that coefficient is set to unity, and no magnitude or phase information is explicitly reported for this coefficient. Only an indication of the index of the strongest coefficient per layer is reported. Hence, for a single-layer transmission, magnitude and phase values of a maximum of ⌈2pLM⌉−1 coefficients (along with the indices of the selected L, M DFT vectors) are reported per layer, leading to a significant reduction in CSI report size, compared with reporting 2N1N2×N3−1 coefficients' information.
NR Rel. 16 Type-II Port Selection Codebook
[0046] For Type-II Port Selection codebook, only K (where K < 2N1N2) beamformed CSI-RS ports are utilized in DL transmission, in order to reduce complexity. The K×N3 codebook matrix per layer takes on the form [1]

W_l = W_1^PS W_2,l W_f,l^H.
[0047] Here, W_2,l and W_f,l follow the same structure as the conventional NR Rel. 16 Type-II Codebook, where both are layer specific. The matrix W_1^PS is a K×2L block-diagonal matrix with the same structure as that in the NR Rel. 15 Type-II Port Selection Codebook.
NR Rel. 17 Type-II Port Selection Codebook
[0048] Rel. 17 Type-II Port Selection codebook follows a similar structure as that of Rel. 15 and Rel. 16 port-selection codebooks, as follows

W_l = W_1^PS W_2,l W_f,l^H.

[0049] However, unlike Rel. 15 and Rel. 16 Type-II port-selection codebooks, the port-selection matrix W_1^PS supports free selection of the K ports, or more precisely the K/2 ports per polarization out of the N1N2 CSI-RS ports per polarization, i.e., ⌈log2(N1N2 choose K/2)⌉ bits are used to identify the K/2 selected ports per polarization, wherein this selection is common across all layers. Here, W_2,l and W_f,l follow the same structure as the conventional NR Rel. 16 Type-II Codebook, however M is limited to 1, 2 only, with the network configuring a window of size N = {2, 4} for M = 2. Moreover, the bitmap is reported unless p = 1 and the UE reports all the coefficients, for a rank up to a value of two.
[0050] In an implementation, the method is applied to CSI feedback, where training samples are at a UE-side node and need to be transmitted to a node at the network side. In an example of the use of the method, there are K samples representing the largest eigenvectors associated with K different channel realizations of the environment. In this example, each of these samples is a real-valued vector of size m. These samples can be transmitted in different ways.
[0051] In a first scheme, each entry of the vectors is quantized into 2^r levels and then the quantized values are transferred to the other node. In this scheme, a total of K × m × r bits needs to be transferred. Increasing r increases the amount of data that needs to be transferred and, in return, a more accurate representation of the actual samples will be available at the entity receiving the samples.
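A minimal sketch of this first scheme, assuming entries normalized to [−1, 1] and a simple uniform quantizer; the function name and the normalization are illustrative assumptions, not specified by the source.

```python
import numpy as np

def quantize_uniform(v, r):
    """Map each entry of v (assumed in [-1, 1]) to one of 2**r uniform levels,
    returning the r-bit level index per entry."""
    levels = 2 ** r
    idx = np.round((v + 1.0) / 2.0 * (levels - 1))
    return np.clip(idx, 0, levels - 1).astype(int)

K, m, r = 3, 4, 4  # illustrative sizes: 3 vectors of length 4, 4 bits per entry
samples = np.random.default_rng(1).uniform(-1, 1, size=(K, m))
indices = quantize_uniform(samples, r)

# Feedback cost of the scheme: K x m x r bits in total
total_bits = K * m * r
print(total_bits)  # 48
```

Raising r from 4 to 8 would double `total_bits` while halving nothing else, which is the accuracy/overhead trade-off the paragraph describes.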
[0052] In a second scheme, the samples correspond to conventional schemes for CSI feedback, as discussed above. For example, considering the codebook design of NR Rel. 16 Type-II Codebook with certain parameters, each eigenvector can be encoded using l bits. Transmitting a total of l × K bits, the receiving entity can use this information to reconstruct the actual eigenvectors. Similarly to the first scheme, using feedback schemes that require a higher number of feedback bits (larger l) increases the amount of data that needs to be transferred and, in return, a more accurate representation of the actual samples will be available at the entity receiving the samples.
[0053] This feedback scheme is not limited to the methods presented above, and an additional extension can be defined which needs a higher number of feedback bits. As another approach, the method can be applied to schemes that use the set of collected samples to construct another set of samples (which has fewer samples), for example, determining the coresets of the collected samples and sending the coreset samples instead of the original sample set. Although efficient, this scheme does not further prioritize between samples of the constructed set, so it is not clear which samples should be transmitted if it is not possible to transmit the complete constructed data set. Additionally, the construction of such a dataset usually requires that the method have access to the complete data set first. So, the node may need to collect all samples before it can decide which ones to keep and which ones to discard.
[0054] Figure 4 is a flow chart 400 illustrating a method according to an implementation. Referring to Figure 4, a first set of information is received or generated (402) at a first device (nb. use of the term “device” herein is non-limiting and in particular encompasses both a single unit and multiple cooperating units). The method selects samples, from either the first set of information or from samples generated from a subset of the first set of information, to be transmitted to a second device. In implementations, the first set of information is received from the second device or from a third device different from the first or the second devices. In implementations, the selection is based on one or more thresholds. The thresholds may be received from the second device or a third device different from the second device.
[0055] Typically, the first device is a user equipment (UE). In an implementation, the first set of information comprises one or more samples, wherein each sample comprises one of more components. The samples of the first set of information are typically based on a representation of channel data during a first time-frequency-space region.
[0056] In implementations, the first device, which collects the data, has access to a set of samples to be transmitted to the second device. Each sample comprises one or more components. In implementations, each sample comprises one vector, wherein typically the components of the vector represent the input and/or an expected output of the model, or a tuple of multiple vectors, for example a tuple of two vectors representing the input and expected output of a model. In implementations, the components of the samples comprise a first component and a second component which are based on an input and an expected output, respectively, of an artificial intelligence (AI)/machine learning (ML) model. For example, the first component could be some measurements related to the channel state of the UE at the current position and the second component could be related to the coordinates of the position of the UE.
[0057] In implementations, the first device has access to the samples sequentially, i.e., instead of having initial access to a complete set, the first device receives samples one by one. In implementations, the first device is configured to store all previous samples, and in other implementations, the first device has some restriction on the number of samples it can store. In the latter case, the first device is configured to determine whether it needs to store a current sample, whether it can disregard it, or whether it needs to replace one of the previously stored samples with the new sample. In other implementations, the node has access to a set of samples and thereafter receives new samples gradually. The samples may be received in a periodic or an aperiodic manner.
[0058] Referring again to Figure 4, the next step comprises a first metric being set or obtained (404) for at least a sample of the first set of information (in some implementations the first set referred to here may be considered as a "second set" where the originally obtained sample set is the "first set"). The metric is based on an output of a first function. The respective input of the first function is at least one of the first or second components of the respective sample or subset of samples of the first set of information. In implementations, the metric is one or more of: an importance measure, a weight measure representing the frequency of observation of a value of a component of the sample, a representation of the degree of novelty of a sample, and a representation of the degree to which a sample is an outlier. The metric may be a weighted combination of metrics. Samples with higher novelty may carry more new information for updating the model, since the model should already have learned the previous samples, so there is less benefit in observing them again. Samples with higher novelty may therefore be given a higher priority. The weight measure can directly or indirectly influence the importance metric as well.
[0059] In implementations, the function does not assign the metric to all samples of the first set.
[0060] In implementations, the function is a model. In implementations, the model is based on a neural network, NN, block which is determined based on a set of parameters, which includes a structure of a neural network or weights of a neural network where the output of the NN block is used to determine at least the importance value and the weights of the samples. In implementations, the model is already trained. In implementations, an input and an output of the NN block is based on at least one of the first or second component of the first set of information. In implementations, the set of parameters, and/or the model are received from the second device, or a third device, the third device being different from the second device. The set of parameters, and/or the model may be received in a periodic or an aperiodic manner.
[0061] In implementations, the neural network is a conventional neural network. In implementations, the neural network is a convolutional network.
[0062] In implementations, the function computes the metric based on the output of the model for the first component of a sample and on the second component of the sample. This may comprise determining a difference between the output of the model and the second component, which is then used to determine the importance measure of the sample. Alternatively, it may be used to determine the novelty of the value of the sample, and/or whether the sample is an outlier.
[0063] In implementations, combinations of these techniques are used. Figure 6 illustrates a method 600 for determining one or more of these metrics according to implementations. The first component of a sample (602) is input into the model (604), which produces an output that is compared (606) with the second component of the sample, whereupon a difference between the two is determined and one or more metrics, i.e. an importance measure, a measure of novelty or an indication of an outlier, are determined (608).
[0064] In an implementation, the importance measure is based on an error between the output of the model and an expected output associated with that input data or based on the norm of the gradient of the weights computed using a back-propagation scheme of a machine learning model.
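The error-based importance measure above can be sketched as follows. The linear map standing in for a trained NN block, and all function and variable names, are illustrative assumptions; any trained model with a first-component input and a second-component expected output would fit the same pattern.

```python
import numpy as np

def importance(model, first_component, second_component):
    """Importance metric: prediction error between the model output for the
    first component and the expected output (second component)."""
    predicted = model(first_component)
    return float(np.linalg.norm(predicted - second_component))

# Hypothetical trained model: an identity map standing in for an NN block.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
model = lambda x: W @ x

# A sample the model already fits yields zero error; a novel sample does not.
well_learned = importance(model, np.array([1.0, 2.0]), np.array([1.0, 2.0]))
novel = importance(model, np.array([1.0, 2.0]), np.array([3.0, 5.0]))
print(well_learned, novel)
```

Samples with larger error (here `novel`) would be assigned higher priority for transmission, consistent with the novelty discussion in paragraph [0058].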
[0065] In implementations, the metric is based on the gradient of the required updates for the weights of the model for samples in the first set of information and/or on the joint or individual geometric properties of at least one of the first or second components of the samples.
[0066] In implementations, additional samples are generated from one or more of the samples of the first set of information. In implementations, the generated samples include processed versions of one or more of the collected samples. In implementations, the processing comprises applying a noise removal filter. In implementations, new samples are generated using a coreset method. Each of the alternative methods of generating new samples may be used in combination. Figure 5 illustrates a process 500 of the generation of new samples. The input of the process comprises the first set of information (502). The samples are processed (504) by, for example, noise filtering, and new samples (506) are generated.
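One hedged sketch of the processing step, assuming a simple moving-average filter as the noise removal operation; this is only one of many possible filters, and the names and signal are illustrative.

```python
import numpy as np

def denoise(sample, k=3):
    """Generate a processed version of a sample via a length-k
    moving-average noise-removal filter."""
    kernel = np.ones(k) / k
    return np.convolve(sample, kernel, mode="same")

rng = np.random.default_rng(3)
clean = np.sin(np.linspace(0, 2 * np.pi, 64))   # underlying sample
noisy = clean + 0.3 * rng.standard_normal(64)   # collected (noisy) sample
generated = denoise(noisy)                      # new, processed sample

err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((generated - clean) ** 2)
print(err_after < err_before)  # smoothing reduces the noise energy
```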
[0067] Referring again to Figure 4, the next step comprises selecting (406) one or more samples of the previous step for inclusion in a second set of information based on a respective first metric value for each sample (NB. this "second set" may be considered in some examples as a "third set", where the aforementioned "first set" is considered to be a "second set"). In implementations, the samples of the second set of information comprise one or more of: a subset of samples of the first set of information, samples generated based on at least a subset of the first set of information, and samples that are not currently in the first set of information. In implementations, the samples of the second set of information are updated based on the samples of the first set of information. In implementations, the selection is based on one or more thresholds. The thresholds may be received from the second device or a third device different from the second device.
[0068] A first metric might then be updated (408) for some samples of a second set of information based on an output of the first function. The respective input of the first function is at least one of the first or second components of the subset of samples of the second set of information.
[0069] When a transmission opportunity arises for the first device, which may be periodic or aperiodic, a further selection is made of a subset of the samples of the second set of information, which are to be transmitted to the second device. In implementations, this subset is selected (410) based on at least one of: a respective first metric associated with each sample, and a set of requirements determined at the first device.
[0070] The additional set of requirements may comprise one or more of: available bandwidth, storage capacity at the first device, age of the data, and other resources for transmission. The set of requirements may be received or configured by the second device, or a third device which is different from the second device. One or more thresholds may be used to select the samples to be transmitted. In implementations, these thresholds are determined by the set of requirements.
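A minimal sketch of metric- and requirement-based subset selection, where a sample budget stands in for requirements such as available bandwidth or storage; the function name, the greedy highest-metric-first rule, and the example values are illustrative assumptions.

```python
def select_subset(samples_with_metric, budget, threshold):
    """Select up to `budget` samples whose metric meets `threshold`,
    highest-metric first. `budget` models resource requirements such
    as available bandwidth; `threshold` models a configured cutoff."""
    eligible = [s for s in samples_with_metric if s[1] >= threshold]
    eligible.sort(key=lambda s: s[1], reverse=True)
    return eligible[:budget]

samples = [("s0", 0.9), ("s1", 0.2), ("s2", 0.7), ("s3", 0.5)]
chosen = select_subset(samples, budget=2, threshold=0.4)
print([name for name, _ in chosen])  # ['s0', 's2']
```

Sample "s1" is dropped by the threshold and "s3" by the budget, so only the two highest-metric samples are queued for transmission.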
[0071] Figure 7 illustrates a process 700 of creation of the subsets according to implementations. Starting with the first set of information (702) the samples of this set are processed (704) to produce new samples (706). A subset of the first set of information is taken and combined with the new samples to produce the second set (708). A subset of the second set is then selected (710) for transmission. In exemplary implementations, the transmitted data may be compressed.
[0072] In exemplary implementations, the metric is further used to send samples with different protection levels.
[0073] In exemplary implementations, the first function is further based on at least a third set of information comprising a set of samples of at least one of the first and the second component where the first and the second components are based on the input and the expected output of an AI/ML model. The first function may be based on a statistical similarity of a sample of the first set of information and the third set of information. The third set of information may include a set of channel data representations during a second time-frequency-space region and used to train the AI/ML model.
[0074] In exemplary implementations, the statistical similarity may be based on one or more of a Kolmogorov-Smirnov (KS) Test, an Anderson-Darling Test, or an anomaly detection algorithm. The K-S test is a non-parametric test and can further be classified into the one-sample K-S test and the two-sample K-S test. The one-sample K-S test can be used to determine the probability that a set of data samples belongs to a particular reference distribution. The two-sample K-S test is useful to determine the probability that two sets of samples belong to the same distribution. Using the K-S test on the input CSI samples to the UE, it can be determined whether the samples belong to the distribution(s) for which the AI/ML model is trained. When the probability that the input samples belong to a different distribution than the desired distribution(s) increases beyond a certain threshold, the UE can initiate the model update procedure. The "desired" distribution(s) implies the input data distribution(s) for which the AI/ML model has been trained. The Anderson-Darling Test is a statistical test to determine whether a given sample comes from a given specific distribution. It requires a lower number of samples than the K-S test for a more reliable measurement.
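The two-sample K-S statistic can be sketched directly as the maximum distance between the two empirical CDFs; the function name and the illustrative Gaussian data are assumptions, and in practice a statistics library routine could be used instead.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    distance between the empirical CDFs of the two sample sets."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])  # evaluate at every jump point
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(2)
same = ks_statistic(rng.normal(0, 1, 500), rng.normal(0, 1, 500))
shifted = ks_statistic(rng.normal(0, 1, 500), rng.normal(3, 1, 500))
print(same < shifted)  # a distribution shift yields a larger statistic
```

A large statistic for incoming CSI samples versus the training-time samples would indicate a distribution change, triggering the model update procedure described above.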
[0075] An exemplary purpose of the above implementations is to set a priority for samples of input data. In some implementations, the first device knows how the samples will be used. For example, if the samples are intended for model training, it knows the structure of the model that should be trained. In another implementation, if the samples are intended for updating an already trained model, it knows the latest copy of the trained model completely or in part. In another implementation, if the samples are going to be used at the other node for model monitoring, the node has information about the model monitoring procedure. In these cases, the node collecting the data can use the extra information to determine how valuable the current sample is for the receiver, and therefore it can assign a value as the importance metric to the current sample. The node can then store the sample locally if it has enough space, discard it if its importance is lower than some threshold value, or replace one of the previous samples (stored locally) with the new sample, which has higher importance.
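The store/discard/replace behaviour above can be sketched as a bounded buffer keyed on the importance metric; all names, the capacity, and the lowest-metric eviction rule are illustrative assumptions.

```python
def update_buffer(buffer, sample, metric, capacity, threshold):
    """Store, discard, or replace: discard if below the importance
    threshold; store while there is room; otherwise evict the stored
    sample with the lowest metric if the new sample is more important."""
    if metric < threshold:
        return buffer                       # discard
    if len(buffer) < capacity:
        return buffer + [(sample, metric)]  # store
    worst = min(range(len(buffer)), key=lambda i: buffer[i][1])
    if metric > buffer[worst][1]:           # replace
        buffer = list(buffer)
        buffer[worst] = (sample, metric)
    return buffer

buf = []
for name, m in [("a", 0.3), ("b", 0.9), ("c", 0.1), ("d", 0.7), ("e", 0.8)]:
    buf = update_buffer(buf, name, m, capacity=2, threshold=0.2)
print(sorted(s for s, _ in buf))  # ['b', 'e']
```

Sample "c" is discarded by the threshold, while "d" and then "e" successively replace lower-importance entries, leaving the two most important samples stored.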
[0076] In implementations, the first device updates the importance level of the previously collected samples based on the new samples it receives or based on other metrics that have changed. For example, some samples might be important at a certain time but, as time passes, the samples might become less valuable, in which case the first device can change the importance level of the samples accordingly.
[0077] Figure 8 illustrates a device 800 for implementing a method according to any of the previous implementations. The device 800 comprises a processor 802, a memory 804, a receiver 806 and a transmitter 808.
[0078] The processor 802 of the device 800, such as a UE 104, may support wireless communication in accordance with examples as disclosed herein. The processor 802 includes at least one controller coupled with at least one memory, and the at least one controller is configured to and/or operable to cause the processor to one or more of receive or generate a first set of information, wherein the first set of information comprises one or more samples, and wherein each sample comprises one or more components; set a first metric for at least a sample where the sample is one of the samples of the first set of information or generated from a subset of samples of the first set of information based on an output of a first function, the respective input of the first function being at least one of the first or second components of the respective sample or subset of samples of the first set of information; select one or more samples of the previous step for inclusion in a second set of information based on a respective first metric value for each sample; and transmit at least a subset of samples of the second set of information to a second device, wherein the subset is selected based on at least one of a first metric associated with each sample and a set of requirements determined at the first device. The at least one controller is further configured to cause the processor 802 to perform any of the various operations described herein, such as with reference to a UE 104.
[0079] It will be appreciated by the person of skill in the art that various modifications may be made to the above-described implementations without departing from the scope of the present invention.
[0080] In addition, implementations discussed herein include at least the following:
[0081] In some aspects, the techniques described herein relate to a user equipment (UE) for wireless communication, including: at least one memory; and at least one processor coupled with the at least one memory and configured to cause the UE to: one or more of receive or generate a first set of information, wherein the first set of information includes one or more samples, and wherein each sample includes one or more components; set a first metric for at least a sample where the sample is one of the samples of the first set of information or generated from a subset of samples of the first set of information based on an output of a first function, a respective input of the first function being at least one of first or second components of the respective sample or subset of samples of the first set of information; select one or more samples for inclusion in a second set of information based on a respective first metric value for each sample; and transmit at least a subset of samples of the second set of information to a second device, wherein the subset is selected based on at least one of a first metric associated with each sample and a set of requirements determined at the first device.
[0082] In some aspects, the techniques described herein relate to a UE, wherein to transmit the at least a subset of samples, the at least one processor is configured to cause the UE to select the at least a subset of samples based at least in part on a first metric that is either the metric set from setting a metric or a further metric derived from that metric.
[0083] In some aspects, the techniques described herein relate to a UE, wherein generating samples includes processing samples from the first set of information.
[0084] In some aspects, the techniques described herein relate to a UE, wherein the processing includes noise filtering.
[0085] In some aspects, the techniques described herein relate to a UE, wherein the first function is a model.
[0086] In some aspects, the techniques described herein relate to a UE, wherein the model is an artificial intelligence/machine learning (AI/ML) model.
[0087] In some aspects, the techniques described herein relate to a UE, wherein the model is already trained.
[0088] In some aspects, the techniques described herein relate to a UE, wherein each sample includes a vector with a first component and a second component, the first component being an input of the model and the second component being an output of the model.
[0089] In some aspects, the techniques described herein relate to a UE, where the model is designed to generate a desired output based on the first component of the samples.
[0090] In some aspects, the techniques described herein relate to a UE, where the function computes the metric based on the output of the model for the first component of a sample and the second component of a sample.
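One way to read this aspect is that the metric is the discrepancy between what the model predicts from the first component (the input) and the second component (the expected output): samples the model gets wrong are the informative ones. The sketch below assumes scalar components and an absolute-error metric purely for illustration.

```python
def prediction_error_metric(model, sample):
    """Score a sample by how far the model's output is from the expected output."""
    first, second = sample           # (input, expected output)
    predicted = model(first)
    return abs(predicted - second)   # larger error -> more informative sample

model = lambda x: 2.0 * x            # toy stand-in for an already-trained model
samples = [(1.0, 2.0), (1.0, 3.0)]
errors = [prediction_error_metric(model, s) for s in samples]
# the second sample disagrees with the model and therefore scores higher
```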
[0091] In some aspects, the techniques described herein relate to a UE, wherein the one or more samples include at least one of a first and a second component where the first and the second components are based on an input and an expected output of the model respectively.
[0092] In some aspects, the techniques described herein relate to a UE, wherein the function is a model including a ML or Al model, for example a model including a neural network (NN) block which is determined based on a set of parameters including a structure of a neural network or weights of a neural network where the output of the NN block is used to determine at least an importance value and the weights of the samples.
[0093] In some aspects, the techniques described herein relate to a UE, wherein the set of parameters are received from one of the second device or a third device, where the third device is different from the second device.
[0094] In some aspects, the techniques described herein relate to a UE, wherein the model is received from and/or configured by one of the second device or a third device, where the third device is different from the second device.
[0095] In some aspects, the techniques described herein relate to a UE, wherein the model is received in a periodic or an aperiodic manner.
[0096] In some aspects, the techniques described herein relate to a UE, wherein an input and an output of the NN block is based on at least one of the first or second component of the first set of information.
[0097] In some aspects, the techniques described herein relate to a UE, wherein the function computes the metric based on a gradient of updates for the weights of the model for samples in the first set of information.
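The gradient-based metric of this aspect can be illustrated with a model simple enough to differentiate by hand. Assuming (purely for the example) a linear model y = w * x with squared-error loss, the weight-update gradient for a sample (x, t) is dL/dw = 2 * (w * x - t) * x, and its magnitude serves as the sample's metric.

```python
def gradient_metric(w, sample):
    """Magnitude of the squared-error weight gradient for a linear model y = w * x."""
    x, t = sample
    return abs(2.0 * (w * x - t) * x)

w = 2.0
samples = [(1.0, 2.0), (1.0, 4.0)]
metrics = [gradient_metric(w, s) for s in samples]
# a sample the model already fits yields a zero gradient and a low metric
```

The intuition carries over to larger models: samples that would move the weights the most are the ones most worth transmitting for further training.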
[0098] In some aspects, the techniques described herein relate to a UE, wherein the metric is one of an importance level of the sample, an age of the sample, and a weight of the sample, wherein the weight is an indication of a frequency of occurrence of a value of a component of the sample.
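For the weight variant of the metric, where the weight indicates how frequently a component value occurs, a sketch could count occurrences across the collected set. Discretized component values and the relative-frequency normalization are assumptions of the example, not requirements of the description.

```python
from collections import Counter

def frequency_weights(samples, component=0):
    """Weight each sample by how often its chosen component's value occurs."""
    counts = Counter(s[component] for s in samples)
    total = len(samples)
    return [counts[s[component]] / total for s in samples]

samples = [("a", 1), ("a", 2), ("b", 3), ("a", 4)]
weights = frequency_weights(samples)   # "a" occurs 3/4 of the time, "b" 1/4
```

Depending on the goal, a device might favor frequent values (representative of typical conditions) or rare ones (covering conditions the second device has seen least).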
[0099] In some aspects, the techniques described herein relate to a processor for wireless communication, including: at least one controller coupled with at least one memory and configured to cause the processor to: one or more of receive or generate a first set of information, wherein the first set of information includes one or more samples, and wherein each sample includes one or more components; set a first metric for at least a sample where the sample is one of the samples of the first set of information or generated from a subset of samples of the first set of information based on an output of a first function, a respective input of the first function being at least one of first or second components of the respective sample or subset of samples of the first set of information; select one or more samples for inclusion in a second set of information based on a respective first metric value for each sample; and transmit at least a subset of samples of the second set of information to a second device, wherein the subset is selected based on at least one of a first metric associated with each sample and a set of requirements determined at the first device.
[0100] In some aspects, the techniques described herein relate to a method performed by a user equipment (UE), the method including: one or more of receiving or generating a first set of information, wherein the first set of information includes one or more samples, and wherein each sample includes one or more components; setting a first metric for at least a sample where the sample is one of the samples of the first set of information or generated from a subset of samples of the first set of information based on an output of a first function, a respective input of the first function being at least one of first or second components of the respective sample or subset of samples of the first set of information; selecting one or more samples for inclusion in a second set of information based on a respective first metric value for each sample; and transmitting at least a subset of samples of the second set of information to a second device, wherein the subset is selected based on at least one of a first metric associated with each sample and a set of requirements determined at the first device.

Claims

1. A user equipment (UE) for wireless communication, comprising: at least one memory; and at least one processor coupled with the at least one memory and configured to cause the UE to: one or more of receive or generate a first set of information, wherein the first set of information comprises one or more samples, and wherein each sample comprises one or more components; set a first metric for at least a sample where the sample is one of the samples of the first set of information or generated from a subset of samples of the first set of information based on an output of a first function, a respective input of the first function being at least one of first or second components of the respective sample or subset of samples of the first set of information; select one or more samples for inclusion in a second set of information based on a respective first metric value for each sample; and transmit at least a subset of samples of the second set of information to a second device, wherein the subset is selected based on at least one of a first metric associated with each sample and a set of requirements determined at the first device.
2. The UE of claim 1, wherein to transmit the at least a subset of samples, the at least one processor is configured to cause the UE to select the at least a subset of samples based at least in part on a first metric that is either the metric set from setting a metric or a further metric derived from that metric.
3. The UE of claim 1, wherein generating samples comprises processing samples from the first set of information.
4. The UE of claim 3, wherein the processing comprises noise filtering.
5. The UE of claim 1, wherein the first function is a model.
6. The UE of claim 5, wherein the model is an artificial intelligence/machine learning (AI/ML) model.
7. The UE of claim 5, wherein the model is already trained.
8. The UE of claim 5, wherein each sample comprises a vector with a first component and a second component, the first component being an input of the model and the second component being an output of the model.
9. The UE of claim 8, where the model is designed to generate a desired output based on the first component of the samples.
10. The UE of claim 8, where the function computes the metric based on the output of the model for the first component of a sample and the second component of a sample.
11. The UE of claim 8, wherein the one or more samples comprise at least one of a first and a second component where the first and the second components are based on an input and an expected output of the model respectively.
12. The UE of claim 5, wherein the function is a model comprising a ML or Al model, for example a model comprising a neural network (NN) block which is determined based on a set of parameters including a structure of a neural network or weights of a neural network where the output of the NN block is used to determine at least an importance value and the weights of the samples.
13. The UE of claim 12, wherein the set of parameters are received from one of the second device or a third device, where the third device is different from the second device.
14. The UE of claim 5, wherein the model is received from and/or configured by one of the second device or a third device, where the third device is different from the second device.
15. The UE of claim 14, wherein the model is received in a periodic or an aperiodic manner.
16. The UE of claim 12, wherein an input and an output of the NN block is based on at least one of the first or second component of the first set of information.
17. The UE of claim 12, wherein the function computes the metric based on a gradient of updates for the weights of the model for samples in the first set of information.
18. The UE of claim 1, wherein the metric is one of an importance level of the sample, an age of the sample, and a weight of the sample, wherein the weight is an indication of a frequency of occurrence of a value of a component of the sample.
19. A processor for wireless communication, comprising: at least one controller coupled with at least one memory and configured to cause the processor to: one or more of receive or generate a first set of information, wherein the first set of information comprises one or more samples, and wherein each sample comprises one or more components; set a first metric for at least a sample where the sample is one of the samples of the first set of information or generated from a subset of samples of the first set of information based on an output of a first function, a respective input of the first function being at least one of first or second components of the respective sample or subset of samples of the first set of information; select one or more samples for inclusion in a second set of information based on a respective first metric value for each sample; and transmit at least a subset of samples of the second set of information to a second device, wherein the subset is selected based on at least one of a first metric associated with each sample and a set of requirements determined at the first device.
20. A method performed by a user equipment (UE), the method comprising: one or more of receiving or generating a first set of information, wherein the first set of information comprises one or more samples, and wherein each sample comprises one or more components; setting a first metric for at least a sample where the sample is one of the samples of the first set of information or generated from a subset of samples of the first set of information based on an output of a first function, a respective input of the first function being at least one of first or second components of the respective sample or subset of samples of the first set of information; selecting one or more samples for inclusion in a second set of information based on a respective first metric value for each sample; and transmitting at least a subset of samples of the second set of information to a second device, wherein the subset is selected based on at least one of a first metric associated with each sample and a set of requirements determined at the first device.
PCT/IB2024/051179 2023-02-10 2024-02-08 Method and system for transmission of samples Ceased WO2024166040A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363484418P 2023-02-10 2023-02-10
US63/484,418 2023-02-10

Publications (1)

Publication Number Publication Date
WO2024166040A1 (en) 2024-08-15

Family

ID=89905774

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2024/051179 Ceased WO2024166040A1 (en) 2023-02-10 2024-02-08 Method and system for transmission of samples

Country Status (1)

Country Link
WO (1) WO2024166040A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020183059A1 (en) * 2019-03-14 2020-09-17 Nokia Technologies Oy An apparatus, a method and a computer program for training a neural network
US20210273706A1 (en) * 2020-02-28 2021-09-02 Qualcomm Incorporated Channel state information feedback using channel compression and reconstruction
WO2021208061A1 (en) * 2020-04-17 2021-10-21 Qualcomm Incorporated Configurable neural network for channel state feedback (csf) learning



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 24704913; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)