
WO2025146034A1 - AI-based CSI compression method and apparatus, and terminal and network-side device - Google Patents

AI-based CSI compression method and apparatus, and terminal and network-side device

Info

Publication number
WO2025146034A1
Authority
WO
WIPO (PCT)
Prior art keywords
csi
time window
terminal
target time
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2024/144184
Other languages
English (en)
Chinese (zh)
Inventor
谢天
杨昂
吴昊
王园园
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Publication of WO2025146034A1
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/04Protocols for data compression, e.g. ROHC
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/02Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/06Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B7/00Radio transmission systems, i.e. using radiation field
    • H04B7/02Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/06Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
    • H04B7/0613Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission
    • H04B7/0615Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal
    • H04B7/0619Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal using feedback from receiving side
    • H04B7/0621Feedback content
    • H04B7/0626Channel coefficients, e.g. channel state information [CSI]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00Arrangements for detecting or preventing errors in the information received
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00Arrangements for detecting or preventing errors in the information received
    • H04L1/0001Systems modifying transmission characteristics according to link quality, e.g. power backoff
    • H04L1/0023Systems modifying transmission characteristics according to link quality, e.g. power backoff characterised by the signalling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W8/00Network data management
    • H04W8/22Processing or transfer of terminal data, e.g. status or physical capabilities

Definitions

  • the present application belongs to the field of communication technology, and specifically relates to an AI-based CSI compression method, device, terminal and network-side equipment.
  • the transmitter can optimize the signal transmission based on CSI to make it more compatible with the channel state.
  • CQI: channel quality indicator
  • MCS: modulation and coding scheme
  • PMI: precoding matrix indicator
  • MIMO: multi-input multi-output
  • the base station sends a Channel State Information-Reference Signal (CSI-RS) on certain time-frequency resources in a certain time slot.
  • the terminal performs channel estimation based on the CSI-RS, calculates the channel information on this slot, and feeds back the PMI to the base station through the codebook.
  • the base station combines the channel information based on the codebook information fed back by the terminal, and uses this to perform data precoding and multi-user scheduling before the next CSI report.
  • an evolved codebook solution: instead of reporting a PMI for each subband, the terminal reports PMIs per delay. Since the channel is more concentrated in the delay domain, a small number of delay-domain PMIs can approximately represent the PMIs of all subbands; that is, the delay-domain information is compressed before reporting.
  • a neural network or machine learning method can be used to compress the CSI, that is, the AI unit is used to compress the CSI.
  • the terminal and the network side device need to exchange necessary information to realize the compression of CSI based on the AI unit.
  • however, no information-interaction procedure has yet been specified for the case where the terminal and the network-side device compress CSI based on the AI unit.
  • the embodiments of the present application provide a CSI compression method, apparatus, terminal and network-side equipment based on AI, and provide an information interaction method when the terminal and the network-side equipment compress CSI based on the AI unit.
  • a CSI compression method based on AI comprising:
  • the terminal sends capability information of the terminal about an artificial intelligence AI unit to the network side device, where the AI unit is used to compress the channel state information CSI;
  • the terminal receives CSI configuration information sent by the network side device according to the capability information
  • the terminal compresses the CSI through the AI unit according to the CSI configuration information.
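The three steps above can be sketched as a minimal message flow. All field names and values below are illustrative assumptions for this sketch, not actual 3GPP signaling definitions.

```python
from dataclasses import dataclass

# Field names and values are illustrative assumptions, not 3GPP signaling.
@dataclass
class AiCapability:
    supports_ai_csi_compression: bool
    ai_unit_type: str           # e.g. "dedicated" / "shared" / "mixed"
    max_time_window: int        # time units handled per inference

@dataclass
class CsiConfig:
    time_window: int
    payload_bits: int

def network_configure(cap: AiCapability) -> CsiConfig:
    # step 2: the network-side device derives the CSI configuration
    # from the terminal's reported capability
    window = cap.max_time_window if cap.supports_ai_csi_compression else 1
    return CsiConfig(time_window=window, payload_bits=64)

# step 1: the terminal reports its capability;
# step 3 would then use cfg to drive AI-based CSI compression
cfg = network_configure(AiCapability(True, "shared", 4))
```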
  • a CSI compression method based on AI comprising:
  • the network side device receives capability information of the terminal about an artificial intelligence AI unit sent by the terminal, where the AI unit is used to compress channel state information CSI;
  • the network side device determines, according to the capability information, CSI configuration information for instructing the terminal to compress the CSI;
  • a first sending module is used to send capability information of the terminal about an artificial intelligence AI unit to a network side device, where the AI unit is used to compress channel state information CSI;
  • a first receiving module configured to receive CSI configuration information sent by the network side device according to the capability information
  • an AI-based CSI compression device which is applied to a network side device, and the device includes:
  • a second receiving module is used to receive capability information of an artificial intelligence AI unit of the terminal sent by the terminal, where the AI unit is used to compress channel state information CSI;
  • an information determination module configured to determine, according to the capability information, CSI configuration information for instructing the terminal to compress the CSI
  • the second sending module is used to send the CSI configuration information to the terminal.
  • a terminal comprising a processor and a memory, wherein the memory stores a program or instruction that can be run on the processor, and when the program or instruction is executed by the processor, the steps of the method described in the first aspect are implemented.
  • a terminal including a processor and a communication interface
  • the communication interface is used for:
  • a network side device including a processor and a communication interface
  • the communication interface is used to: receive capability information of the terminal about an artificial intelligence AI unit sent by the terminal, and the AI unit is used to compress channel state information CSI;
  • the processor is used to: determine, according to the capability information, CSI configuration information for instructing the terminal to compress the CSI;
  • the communication interface is further used to: send the CSI configuration information to the terminal.
  • a chip comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is used to run a program or instructions to implement the steps of the method described in the first aspect, or to implement the steps of the method described in the second aspect.
  • an embodiment of the present application provides an AI-based CSI compression device, which is used to execute the steps of the AI-based CSI compression method as described in the first aspect or the second aspect.
  • FIG1 is a block diagram of a wireless communication system to which an embodiment of the present application can be applied;
  • FIG2 is a schematic diagram of a neural network in an embodiment of the present application.
  • FIG5 is a schematic diagram of a packaged time-frequency-spatial domain CSI compression solution in an embodiment of the present application
  • FIG6 is a schematic diagram of a progressive time-frequency-spatial domain CSI compression scheme for a shared model on multiple slots in an embodiment of the present application
  • FIG7 is a schematic diagram of a progressive time-frequency-spatial domain CSI compression scheme for a dedicated model on multiple slots in an embodiment of the present application
  • FIG8 is a schematic diagram of a slot interval pattern in the inference phase in an embodiment of the present application.
  • FIG9 is a flowchart of another AI-based CSI compression method in an embodiment of the present application.
  • FIG10 is a structural block diagram of an AI-based CSI compression device in an embodiment of the present application.
  • FIG11 is a structural block diagram of another AI-based CSI compression device in an embodiment of the present application.
  • FIG12 is a structural block diagram of a communication device in an embodiment of the present application.
  • FIG13 is a block diagram of a terminal in an embodiment of the present application.
  • FIG14 is a structural block diagram of a network side device in an embodiment of the present application.
  • first, second, etc. in this application are used to distinguish similar objects, and are not used to describe a specific order or sequence. It should be understood that the terms used in this way are interchangeable where appropriate, so that the embodiments of the present application can be implemented in an order other than those illustrated or described herein, and the objects distinguished by “first” and “second” are generally of one type, and the number of objects is not limited, for example, the first object can be one or more.
  • “or” in this application represents at least one of the connected objects.
  • “A or B” covers three schemes, namely, Scheme 1: including A but not including B; Scheme 2: including B but not including A; Scheme 3: including both A and B.
  • the character "/" generally indicates that the objects associated with each other are in an "or” relationship.
  • indication in this application can be a direct indication (or explicit indication) or an indirect indication (or implicit indication).
  • a direct indication can be understood as the sender explicitly informing the receiver of specific information, operations to be performed, or request results in the sent indication;
  • an indirect indication can be understood as the receiver determining the corresponding information according to the indication sent by the sender, or making a judgment and determining the operation to be performed or the request result according to the judgment result.
  • FIG1 shows a block diagram of a wireless communication system applicable to the embodiment of the present application.
  • the wireless communication system includes a terminal 11 and a network side device 12 .
  • the terminal 11 can be a mobile phone, a tablet computer (Tablet Personal Computer), a laptop computer, a notebook computer, a personal digital assistant (PDA), a handheld computer, a netbook, an ultra-mobile personal computer (UMPC), a mobile Internet device (MID), an augmented reality (AR) or virtual reality (VR) device, a robot, a wearable device, a flight vehicle, a vehicle user equipment (VUE), a shipborne device, a pedestrian user equipment (PUE), a smart home appliance (a home appliance with wireless communication functions, such as a refrigerator, television, washing machine or furniture), a game console, a personal computer (PC), etc.
  • Wearable devices include: smart watches, smart bracelets, smart headphones, smart glasses, smart jewelry (smart rings, smart necklaces, smart anklets, etc.), smart wristbands, smart clothing, etc.
  • the vehicle-mounted device can also be called a vehicle-mounted terminal, a vehicle-mounted controller, a vehicle-mounted module, a vehicle-mounted component, a vehicle-mounted chip or a vehicle-mounted unit, etc. It should be noted that the specific type of the terminal 11 is not limited in the embodiment of the present application.
  • the base station can precode the CSI-RS in advance and send the precoded CSI-RS to each terminal.
  • the terminal sees the channel corresponding to the precoded CSI-RS.
  • the terminal only needs to select several ports with higher strength from the ports indicated by the network side and report the coefficients corresponding to these ports.
  • neural network or machine learning methods can be used.
  • AI units such as neural networks, decision trees, support vector machines, Bayesian classifiers, etc.
  • This application uses neural networks as an example for illustration, but does not limit the specific type of AI modules.
  • a schematic diagram of the structure of a simple neural network is shown in Figure 2.
  • the neural network is composed of neurons, and the schematic diagram of neurons is shown in Figure 3.
  • a_1, a_2, …, a_K represent the inputs;
  • w represents the weights (i.e., multiplicative coefficients);
  • b represents the biases (i.e., additive coefficients);
  • σ(·) represents the activation function, so the neuron output is σ(∑_k w_k·a_k + b).
  • Common activation functions include Sigmoid (which maps variables into the interval (0, 1)), tanh (a translated and scaled Sigmoid), and the linear rectification function / rectified linear unit (ReLU).
  • the parameters of the neural network can be optimized through the gradient optimization algorithm.
  • the gradient optimization algorithm is a class of algorithms that minimize or maximize an objective function (sometimes also called the loss function), where the objective function is often a mathematical combination of the model parameters and the data. For example, given data X and its corresponding labels Y, a neural network model f(·) can be constructed; the predicted output f(X) is then obtained from the input X, and the difference between the predicted value and the true value, (f(X) - Y), gives the loss function.
  • the optimization goal of the gradient optimization algorithm is to find the appropriate w (i.e. weight) and b (i.e. bias) to minimize the value of the above loss function, and the smaller the loss value, the closer the model is to the actual situation.
  • the common optimization algorithms are basically based on the error back propagation (BP) algorithm.
  • the basic idea of the BP algorithm is that the learning process consists of two processes: the forward propagation of the signal and the back propagation of the error.
  • the input sample is transmitted from the input layer, processed by each hidden layer layer by layer, and then transmitted to the output layer. If the actual output of the output layer does not match the expected output, it will enter the back propagation stage of the error.
  • Error back propagation is to propagate the output error layer by layer through the hidden layer to the input layer in some form, and distribute the error to all units in each layer, so as to obtain the error signal of each layer unit, and this error signal is used as the basis for correcting the weights of each unit.
  • This cycle of signal forward propagation and error back propagation, in which the weights of each layer are adjusted, is repeated.
  • the process of continuous adjustment of weights is the learning and training process of the network. This process continues until the error of the network output is reduced to an acceptable level, or until the pre-set number of learning times is reached.
  • these optimization algorithms calculate the derivative/partial derivative of the current neuron based on the error/loss obtained by the loss function, add the influence of the learning rate, the previous gradient/derivative/partial derivative, etc., get the gradient, and pass the gradient to the previous layer.
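The forward/backward cycle described above can be illustrated with a tiny gradient-descent loop. For brevity this sketch uses a linear model rather than a deep network, which is an assumption made purely to keep the example short; the forward pass, loss, gradient, and weight-correction steps are the same in spirit.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))                       # training data
Y = X @ np.array([[1.0], [-2.0], [0.5], [3.0]])    # corresponding labels

W = rng.normal(scale=0.1, size=(4, 1))             # weights w
b = np.zeros(1)                                    # bias b
lr = 0.1                                           # learning rate
for _ in range(500):
    pred = X @ W + b                 # forward propagation
    err = pred - Y                   # output error f(X) - Y
    loss = 0.5 * np.mean(err ** 2)   # loss function
    gW = X.T @ err / len(X)          # backward pass: dLoss/dW
    gb = err.mean(axis=0)            # dLoss/db
    W -= lr * gW                     # correct weights along the gradient
    b -= lr * gb
```

Training stops in practice when the loss falls to an acceptable level or a preset iteration count is reached, exactly as the text describes.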
  • the terminal compresses and encodes the channel information, and the base station decodes the compressed content to restore the channel information.
  • the decoding network of the base station and the encoding network of the terminal need to be jointly trained to achieve a reasonable match.
  • the encoder of the terminal and the decoder of the base station together form a joint neural network.
  • the network side conducts joint training.
  • the base station sends the encoder network to the terminal.
  • the terminal estimates the CSI-RS, calculates the channel information, and feeds the calculated channel information (or the original estimated channel information) into the encoding network to obtain the encoding result.
  • the encoding result is sent to the base station.
  • the base station receives the encoded result and inputs it into the decoding network to restore the channel information.
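The encode/feed-back/decode flow above can be mimicked with a toy linear encoder/decoder pair. Here the "jointly trained" pair is obtained from an SVD purely for brevity; this is an illustrative stand-in for a trained neural network, not the actual model of the application.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "channel information": 100 samples lying in a 4-dim subspace of R^32
basis = np.linalg.qr(rng.normal(size=(32, 4)))[0]
H = rng.normal(size=(100, 4)) @ basis.T

# Stand-in for joint training: an SVD yields a matched encoder/decoder pair
U, s, Vt = np.linalg.svd(H, full_matrices=False)
encoder = Vt[:4].T     # terminal side: 32 values -> 4 coefficients
decoder = Vt[:4]       # base-station side: 4 coefficients -> 32 values

code = H @ encoder     # terminal compresses the channel info and feeds it back
H_hat = code @ decoder # base station decodes to restore the channel info
```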
  • the scalability of a model refers to the ability of a single model to adapt to multiple input/output configurations at the same time.
  • the scalability of the model mainly considers the following indicators: the number of subbands at the (encoder) input, the number of ports at the (encoder) input, and the length of the (encoder) output payload. That is, a single CSI compression model is expected to support as many combinations of the above configurations as possible (for example, a model that can support both 64-bit and 116-bit payloads).
  • the CSI compression use case is a typical two-end model use case, that is, the complete CSI compression model needs to be deployed on different network nodes.
  • Most of the cases considered are to deploy the encoder on the UE side and the decoder on the network side (NW).
  • the (sub) models deployed on multiple nodes need to be paired with each other to work properly.
  • 3GPP has identified several basic types of training collaboration:
  • the training framework aims to train a complete encoder-decoder model on a network node (UE or NW or a third-party server node, etc.), and then deploy the corresponding model module to the target node through methods such as model transfer (for example, the encoder part is transferred to the UE and the decoder part is transferred to the NW).
  • the training framework refers to the joint participation of multiple nodes in the training process, and each node independently calculates the forward/backward propagation information required for local model training and updates the model parameters of its own node. Since the training process requires forward/backward propagation of the entire model (including encoder and decoder), the corresponding forward/backward propagation information needs to be transmitted between the nodes participating in the training. After the training is completed, the model no longer needs to be transmitted between the nodes.
  • the training framework means that a model used as a reference is first trained on a certain node, and then the relevant information of the reference model is sent to the target node. Finally, the target node trains the model required by the node based on the information, thereby ensuring that each node (sub) model can be paired with each other.
  • the NW side first trains a complete encoder/decoder model pair and fixes the resulting decoder as the decoder actually used later; it then sends the relevant information of the encoder corresponding to that decoder (generally the encoder's input and output data) to the UE side, and the UE side trains its own encoder based on this information.
  • This training framework can be further divided into two cases: UE-first training and NW-first training.
  • UE-first training means training the complete model on the UE side first, and then sending the information required for NW training to match the model (generally the input and output data of the model to be trained on the NW side) to the NW side.
  • NW-first training means training the complete model on the NW side first, and then sending the information required for UE training to match the model (generally the input and output data of the model to be trained on the UE side) to the UE side.
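As a toy illustration of the NW-first separate-training case above, the reference encoder's input/output dataset is shared, and the UE fits its own encoder to reproduce that behaviour. Linear models stand in for neural networks here; everything in this sketch is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 8))          # encoder input samples

# NW side: a reference encoder trained as part of a complete
# encoder/decoder model (here simply a fixed linear map, for brevity)
W_ref = rng.normal(size=(8, 2))
Z = X @ W_ref                          # reference encoder outputs

# NW shares the encoder's input/output data (X, Z) with the UE;
# the UE trains its own encoder to match the reference behaviour
W_ue, *_ = np.linalg.lstsq(X, Z, rcond=None)
```

Because only data is exchanged, no model is ever transferred between the nodes, which is the defining property of this training framework.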
  • the AI unit may also be referred to as an AI model, a machine learning (ML) model, an ML unit, an AI structure, an AI function, an AI characteristic, a neural network, a neural network function, etc.; or the AI unit may also refer to a processing unit that can implement specific algorithms, formulas, processing procedures, capabilities, etc.
  • the AI unit may be a processing method, algorithm, function, module or unit for a specific data set, or the AI unit may be a processing method, algorithm, function, module or unit running on AI/ML related hardware such as a graphics processing unit (GPU), a neural network processor (NPU), a tensor processor (TPU), an application specific integrated circuit (ASIC), etc., and this application does not make specific limitations on this.
  • the specific data set includes at least one of the input and output of the AI unit/AI model.
  • the identifier of the AI unit may be an AI model identifier, an AI structure identifier, an AI algorithm identifier, or an identifier of a specific data set associated with the AI unit, or an identifier of a specific scenario, environment, channel feature, or device related to the AI/ML, or an identifier of a function, feature, capability, or module related to the AI/ML, and this application does not make any specific limitations on this.
  • the capability information is used to indicate the terminal's support for the AI unit, that is, the terminal's capability in performing CSI compression.
  • the CSI configuration information includes configuration information for performing CSI compression, that is, the CSI configuration information is used to instruct the terminal how to perform CSI compression.
  • the network side device determines CSI configuration information according to the capability information to instruct the terminal how to perform CSI compression.
  • Step 403: the terminal compresses the CSI through the AI unit according to the CSI configuration information.
  • the AI unit can be used to perform CSI compression in at least one of the time domain, frequency domain, and spatial domain on the CSI, that is, the AI unit can perform at least one of CSI time domain compression, CSI frequency domain compression, and CSI spatial domain compression.
  • the terminal specifically performs one or more CSI compressions in the time domain, frequency domain, and spatial domain, depending on the function of the AI unit and the CSI configuration information.
  • time domain compression is time domain joint compression, that is, CSI on multiple time units (such as slots) are combined for compression, so as to further reduce the overhead of CSI reporting or improve the accuracy of CSI reporting.
  • time domain joint compression is divided into two types: packaged reporting and progressive reporting.
  • Packaged reporting is to report CSI on multiple time units (such as time slots) at one time (as shown in Figure 5 below)
  • progressive reporting is to report CSI on each time unit (such as time slot) in turn in an autoregressive manner (as shown in Figure 6 above).
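The two reporting modes above can be sketched as follows. The crude uniform quantizer stands in for the AI encoder, and the delta-based progressive scheme is an illustrative assumption of how autoregressive reporting might exploit time-domain correlation.

```python
import numpy as np

def quantize(x, bits=4):
    # crude uniform quantizer standing in for the AI encoder's output
    levels = 2 ** bits - 1
    return np.round(np.clip(x, -1, 1) * levels) / levels

def packaged_report(csi_slots):
    """Packaged reporting: the CSI of all time units in the window
    is reported at once, in a single payload."""
    return [quantize(np.concatenate(csi_slots))]

def progressive_report(csi_slots):
    """Progressive reporting: each time unit is reported in turn; each
    report carries only the change relative to the previously reported
    CSI (autoregressive), exploiting time-domain correlation."""
    reports, prev = [], np.zeros_like(csi_slots[0])
    for csi in csi_slots:
        delta = quantize(csi - prev)
        reports.append(delta)
        prev = prev + delta        # the receiver can track the same state
    return reports

slots = [np.full(8, 0.5), np.full(8, 0.55)]   # CSI on two time slots
```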
  • the traditional CSI reporting based on the Type II codebook supports two modes: aperiodic (AP) and semi-persistent.
  • For the AP mode, from the system perspective, the network-side device (such as the base station) only needs the latest CSI for a single scheduling, so it only needs to trigger the terminal's CSI report once to obtain the most recent measurement result; there is no need to report CSI continuously at multiple times. Even if progressive CSI reporting were adopted in the AP mode, the time of the last CSI transmission could not be controlled: if the terminal was last scheduled a long time ago, the time-domain correlation between the two CSIs to be reported is very weak, leaving no room for the time-domain compression scheme to work. Therefore, continuous scheduling (such as the SP mode) is the scenario suited to compressing CSI in the time domain; that is, progressive time-domain CSI compression is mainly carried out in the semi-persistent (SP) mode.
  • the terminal can send the capability information of the terminal about the AI unit to the network side device, thereby receiving the CSI configuration information sent by the network side device according to the capability information, and then compressing the CSI through the AI unit according to the received CSI configuration information.
  • the terminal can interact with the network side device about its capability information about the AI unit used to compress the CSI, so that the network side device can configure how the terminal performs CSI compression based on the terminal's capability information about the AI unit. Therefore, the embodiment of the present application provides a method for information interaction between the terminal and the network side device when compressing CSI based on the AI unit.
  • the capability information includes at least one of the following items A-1 to A-4:
  • Item A-1: whether the terminal supports AI-based CSI compression;
  • Item A-2: the type of the AI unit supported by the terminal;
  • the type of the AI unit may include at least one of the following types:
  • the first type (also called a dedicated model): different encoders and decoders are used to compress CSI in different time units; as shown in FIG7 , ENC0 and DEC0 are specifically used for reporting CSI on slot0, and so on, where ENC represents an encoder and DEC represents a decoder;
  • the second type (also called a shared model): the same encoder and decoder are used to compress CSI in different time units; as shown in FIG6 , a set of ENC0 and DEC0 is applied to CSI reporting on all slots;
  • the third type: in some time units, different encoders and decoders are used to compress CSI; in the remaining time units, the same encoder and decoder are used. For example, the AI unit can compress the CSI of 4 time units, where the same encoder and decoder are used for the first two time units, and different encoders and decoders are used for the last two time units.
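A sketch of how an encoder might be selected per time unit under the three types above. The "mixed" split after two units follows the example in the text; the type names and the selection function are illustrative assumptions.

```python
def select_encoder(encoders, slot_index, unit_type):
    # encoders: one encoder object per time unit in the window
    if unit_type == "dedicated":   # first type: one encoder per time unit
        return encoders[slot_index]
    if unit_type == "shared":      # second type: one encoder for all units
        return encoders[0]
    # third type (example from the text): shared encoder for the first
    # two time units, dedicated encoders for the later ones
    return encoders[0] if slot_index < 2 else encoders[slot_index]

encs = ["ENC0", "ENC1", "ENC2", "ENC3"]   # placeholder encoder objects
```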
  • the first indication information is used to indicate the length of a target time window supported by the terminal, the target time window including at least one time unit (e.g., time slot) for the AI unit to perform CSI compression; that is, it can be understood that: the target time window is used to indicate the maximum number of time units processed when the AI unit performs an inference (i.e., performs CSI compression).
  • an important feature of the dedicated model is that it has a clear concept of time window, that is, it processes CSI at a maximum of several different moments (generally speaking, the time window needs to be determined in the model training phase, and the time window length in the inference phase is consistent with the training phase).
  • the CSI configuration information includes at least one of the following:
  • the CSI grouping information is used to indicate grouping of CSI to be reported according to the length of a target time window, where the target time window includes at least one time unit for CSI compression by the AI unit;
  • the second indication information is used to indicate whether, when reporting the CSI, it is necessary to carry a position of the reported CSI in the group to which it belongs;
  • the target resource is used for measuring CSI in the target time window.
  • the CSI grouping information includes at least one of the following:
  • the length of the payload of the first CSI report within the target time window is greater than the length of the payload of the CSI report after the first CSI report within the target time window;
  • the payload lengths of the CSI reports after the first CSI report within the target time window are arranged in an arithmetic progression;
  • the payload lengths of CSI reports after the first CSI report within the target time window are the same;
  • the payload length of each CSI report within the target time window implicitly indicates the position of that CSI report within the target time window.
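The last item in the list above can be illustrated as follows: if every position in the window is assigned a unique payload length (for instance, a longer first report followed by an arithmetic progression), the receiver can recover a report's position from its length alone. The concrete lengths below are illustrative assumptions, not values from the application.

```python
# Hypothetical payload-length plan for a 4-report target time window:
# the first report is the longest, later reports form a decreasing
# arithmetic progression, so every length is unique.
FIRST_LEN = 100                      # first report carries the most bits
LATER_LENS = [60, 50, 40]            # arithmetic progression, all distinct

length_to_position = {FIRST_LEN: 0}
for pos, n in enumerate(LATER_LENS, start=1):
    length_to_position[n] = pos

def position_from_payload(payload_len):
    """Recover a report's position in the window from its payload length."""
    return length_to_position[payload_len]

print(position_from_payload(100))  # → 0 (the first report in the window)
print(position_from_payload(50))   # → 2
```

Because the mapping from length to position is one-to-one, no explicit position field needs to be carried in the report, which is the "implicit indication" the bullet describes.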
  • the target resource meets at least one of the following conditions:
  • the resources for measuring the CSI each time within the target time window are the same;
  • the resources used for measuring CSI in the first time unit in the target time window are the largest;
  • the number of resources used for measuring CSI in each time unit after the first time unit in the target time window is the same;
  • the number of resources used to measure CSI in each time unit after the first time unit in the target time window decreases successively in the time domain.
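The last resource condition can be sketched numerically. The function name and the concrete counts below are illustrative assumptions; the point is only that the first time unit gets the most measurement resources and each later unit gets successively fewer.

```python
# Minimal sketch: per-time-unit CSI measurement resource counts for a target
# time window, with the largest allocation in the first time unit and a
# successive decrease afterwards (floored at 1 resource).

def resource_counts(window_len, first=8, step=1):
    """Return the resource count for each time unit in the window."""
    counts = [first]
    for _ in range(1, window_len):
        counts.append(max(1, counts[-1] - step))
    return counts

print(resource_counts(4))  # → [8, 7, 6, 5]
```

A configuration where all time units use the same resources (the first condition in the list) would instead be `[first] * window_len`.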
  • the AI-based CSI compression device in the embodiment of the present application may be an electronic device, such as an electronic device with an operating system, or a component in an electronic device, such as an integrated circuit or a chip.
  • the electronic device may be a terminal; illustratively, the terminal may include but is not limited to the types of terminals 11 listed above, and the embodiment of the present application does not specifically limit this.
  • the AI-based CSI compression device provided in the embodiment of the present application can implement the various processes implemented in the method embodiment of Figure 4 and achieve the same technical effect. To avoid repetition, it will not be repeated here.
  • the embodiment of the present application further provides an AI-based CSI compression device, which is applied to a network-side device.
  • the AI-based CSI compression device 110 includes the following modules:
  • the second receiving module 1101 is used to receive capability information of an artificial intelligence AI unit of the terminal sent by the terminal, where the AI unit is used to compress channel state information CSI;
  • An information determining module 1102 is used to determine, according to the capability information, CSI configuration information for instructing the terminal to compress the CSI;
  • the second sending module 1103 is configured to send the CSI configuration information to the terminal.
  • the capability information includes at least one of the following:
  • the first indication information is used to indicate a length of a target time window supported by the terminal, where the target time window includes at least one time unit for the AI unit to perform CSI compression;
  • the CSI reporting interval mode supported by the terminal.
  • the first indication information includes at least one of the following:
  • the target resource meets at least one of the following conditions:
  • the embodiment of the present application also provides a terminal, including a processor and a communication interface, the communication interface is coupled to the processor, and the processor is used to run a program or instruction to implement the steps in the method embodiment shown in Figure 4.
  • This terminal embodiment corresponds to the above-mentioned terminal side method embodiment, and each implementation process and implementation method of the above-mentioned method embodiment can be applied to the terminal embodiment and can achieve the same technical effect.
  • Figure 13 is a schematic diagram of the hardware structure of a terminal implementing an embodiment of the present application.
  • the terminal 1300 may also include a power source (such as a battery) for supplying power to each component, and the power source may be logically connected to the processor 1310 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption management through the power management system.
  • the terminal structure shown in FIG13 does not constitute a limitation on the terminal, and the terminal may include more or fewer components than shown in the figure, or combine certain components, or arrange components differently, which will not be described in detail here.
  • the input unit 1304 may include a graphics processing unit (GPU) 13041 and a microphone 13042, and the GPU 13041 processes image data of static pictures or videos obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
  • the display unit 1306 may include a display panel 13061, and the display panel 13061 may be configured in the form of a liquid crystal display, an organic light emitting diode, etc.
  • the user input unit 1307 includes a touch panel 13071 and at least one of other input devices 13072.
  • the touch panel 13071 is also called a touch screen.
  • the touch panel 13071 may include two parts: a touch detection device and a touch controller.
  • Other input devices 13072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control key, a switch key, etc.), a trackball, a mouse, and a joystick, which will not be repeated here.
  • after receiving downlink data from the network-side device, the RF unit 1301 can transmit the data to the processor 1310 for processing; in addition, the RF unit 1301 can send uplink data to the network-side device.
  • the RF unit 1301 includes but is not limited to an antenna, an amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, etc.
  • the memory 1309 can be used to store software programs or instructions and various data.
  • the memory 1309 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, wherein the first storage area may store an operating system, an application program or instruction required for at least one function (such as a sound playback function, an image playback function, etc.), etc.
  • the memory 1309 may include a volatile memory or a non-volatile memory.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory may be a random access memory (RAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDRSDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synchronous link dynamic random access memory (SLDRAM) and a direct memory bus random access memory (DRRAM).
  • the processor 1310 may include one or more processing units; optionally, the processor 1310 integrates an application processor and a modem processor, wherein the application processor mainly processes operations related to an operating system, a user interface, and application programs, and the modem processor mainly processes wireless communication signals, such as a baseband processor. It is understandable that the modem processor may not be integrated into the processor 1310.
  • the radio frequency unit 1301 is used for: sending capability information of the artificial intelligence AI unit of the terminal to the network-side device, and receiving CSI configuration information sent by the network-side device based on the capability information;
  • the processor 1310 is used to: compress the CSI through the AI unit according to the CSI configuration information.
  • the capability information includes at least one of the following:
  • the first indication information is used to indicate a length of a target time window supported by the terminal, where the target time window includes at least one time unit for the AI unit to perform CSI compression;
  • the CSI reporting interval mode supported by the terminal.
  • the first indication information includes at least one of the following:
  • the CSI configuration information includes at least one of the following:
  • the CSI grouping information is used to indicate grouping of CSI to be reported according to the length of a target time window, where the target time window includes at least one time unit for CSI compression by the AI unit;
  • the second indication information is used to indicate whether, when reporting the CSI, it is necessary to carry a position of the reported CSI in the group to which it belongs;
  • the target resource is used for measuring CSI in the target time window.
  • the CSI grouping information includes at least one of the following:
  • a payload length of at least part of the CSI reported within the target time window satisfies at least one of the following:
  • the payload length of each CSI report within the target time window is the same;
  • the length of the payload of the first CSI report within the target time window is greater than the length of the payload of the CSI report after the first CSI report within the target time window;
  • the payload lengths of the CSI reports after the first CSI report within the target time window are arranged in an arithmetic progression;
  • the payload lengths of CSI reports after the first CSI report within the target time window are the same;
  • the payload length of each CSI report within the target time window implicitly indicates the position of that CSI report within the target time window.
  • the target resource meets at least one of the following conditions:
  • the resources for measuring the CSI each time within the target time window are the same;
  • the resources used for measuring CSI in the first time unit in the target time window are the largest;
  • the number of resources used for measuring CSI in each time unit after the first time unit in the target time window is the same;
  • the number of resources used to measure CSI in each time unit after the first time unit in the target time window decreases successively in the time domain.
  • the method executed by the network-side device in the above embodiment may be implemented in the baseband device 143, which includes a baseband processor.
  • the baseband device 143 may include, for example, at least one baseband board on which multiple chips are arranged; as shown in Figure 14, one of the chips is, for example, a baseband processor, which is connected to the memory 145 through a bus interface to call the program in the memory 145 and execute the network device operations shown in the above method embodiment.
  • An embodiment of the present application also provides a readable storage medium on which a program or instruction is stored. When the program or instruction is executed by a processor, the various processes of the above-mentioned AI-based CSI compression method embodiment are implemented, and the same technical effect can be achieved. To avoid repetition, it will not be repeated here.
  • the embodiments of the present application further provide a computer program/program product, which is stored in a storage medium, and is executed by at least one processor to implement the various processes of the above-mentioned AI-based CSI compression method embodiment, and can achieve the same technical effect. To avoid repetition, it will not be repeated here.
  • An embodiment of the present application also provides an AI-based CSI compression system, including: a terminal and a network side device, wherein the terminal can be used to execute the steps of the AI-based CSI compression method applied to the terminal as above, and the network side device can be used to execute the steps of the method applied to the network side device as above.
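The terminal/network-side interaction described throughout this section can be sketched end to end: the terminal reports its AI-unit capability (e.g. the supported target time window length), the network-side device derives CSI configuration information from it, and the terminal then compresses CSI through its AI unit according to that configuration. All class and field names below are illustrative assumptions, and list-slicing stands in for the actual AI compression.

```python
from dataclasses import dataclass

@dataclass
class CapabilityInfo:
    # Time units the AI unit can process in one inference (target time window).
    target_time_window_len: int

@dataclass
class CsiConfig:
    # CSI to be reported is grouped according to the supported window length.
    group_size: int

class NetworkDevice:
    def configure(self, cap: CapabilityInfo) -> CsiConfig:
        # Determine CSI configuration information from the reported capability.
        return CsiConfig(group_size=cap.target_time_window_len)

class Terminal:
    def __init__(self, window_len: int):
        self.cap = CapabilityInfo(target_time_window_len=window_len)

    def compress(self, cfg: CsiConfig, csi_samples):
        # Stand-in for AI compression: process CSI in window-sized groups.
        return [csi_samples[i:i + cfg.group_size]
                for i in range(0, len(csi_samples), cfg.group_size)]

ue = Terminal(window_len=4)
cfg = NetworkDevice().configure(ue.cap)   # network derives config from capability
groups = ue.compress(cfg, list(range(8)))
print(groups)  # → [[0, 1, 2, 3], [4, 5, 6, 7]]
```

The grouping step mirrors the "CSI grouping information" bullets above: each group corresponds to one target time window processed by one AI-unit inference.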


Abstract

The present application belongs to the field of communication technology. Disclosed are an AI-based CSI compression method and apparatus, a terminal, and a network-side device. The AI-based CSI compression method of the embodiments of the present application comprises the following steps: a terminal sends, to a network-side device, capability information of an artificial intelligence (AI) unit of the terminal, the AI unit being configured to compress channel state information (CSI); the terminal receives CSI configuration information sent by the network-side device on the basis of the capability information; and on the basis of the CSI configuration information, the terminal compresses the CSI by means of the AI unit.
PCT/CN2024/144184 2024-01-04 2024-12-31 AI-based CSI compression method and apparatus, terminal, and network-side device Pending WO2025146034A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202410015933.1 2024-01-04
CN202410015933.1A CN120263861A (zh) 2024-01-04 2024-01-04 AI-based CSI compression method and apparatus, terminal, and network-side device

Publications (1)

Publication Number Publication Date
WO2025146034A1 true WO2025146034A1 (fr) 2025-07-10

Family

ID=96193823

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/144184 AI-based CSI compression method and apparatus, terminal, and network-side device Pending WO2025146034A1 (fr)

Country Status (2)

Country Link
CN (1) CN120263861A (fr)
WO (1) WO2025146034A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021237715A1 * 2020-05-29 2021-12-02 Oppo广东移动通信有限公司 Channel state information processing method, electronic device, and storage medium
CN114745712A (zh) * 2021-01-07 2022-07-12 中国移动通信有限公司研究院 Terminal-capability-based processing method and apparatus, terminal, and network device
CN114788317A (zh) * 2022-03-14 2022-07-22 北京小米移动软件有限公司 Information processing method and apparatus, communication device, and storage medium
WO2023184427A1 * 2022-03-31 2023-10-05 北京小米移动软件有限公司 Method and apparatus for determining AI-based CSI processing capability, and medium, product, and chip


Also Published As

Publication number Publication date
CN120263861A (zh) 2025-07-04

Similar Documents

Publication Publication Date Title
US20230412430A1 Information reporting method and apparatus, first device, and second device
WO2023179476A1 Channel characteristic information reporting and recovery methods, terminal, and network-side device
US20250015910A1 (en) Channel feature information transmission method and apparatus, terminal, and network-side device
CN117318774A Channel matrix processing method and apparatus, terminal, and network-side device
WO2025146034A1 AI-based CSI compression method and apparatus, terminal, and network-side device
WO2024088162A1 Information transmission method, information processing method, apparatus, and communication device
WO2024055974A1 CQI transmission method and apparatus, terminal, and network-side device
CN117411527A Channel characteristic information reporting and recovery method, terminal, and network-side device
WO2023179473A1 Channel characteristic information reporting method, channel characteristic information recovery method, terminal, and network-side device
WO2024007949A1 AI model processing method and apparatus, terminal, and network-side device
CN117318773A Channel matrix processing method and apparatus, terminal, and network-side device
CN116939647A Channel characteristic information reporting and recovery method, terminal, and network-side device
US20250254674A1 (en) Information transmission method and apparatus, information processing method and apparatus, and communication device
US20250184772A1 (en) Information transmission method and apparatus, device, system, and storage medium
WO2025140454A1 Model update method and apparatus, terminal, network-side device, and medium
US20250343585A1 (en) Csi transmission method and apparatus, terminal, and network side device
WO2024222577A1 Information processing method and apparatus, information transmission method and apparatus, terminal, and network-side device
WO2025031426A1 CSI feedback method and apparatus, device, and readable storage medium
WO2024222573A1 Information processing method, information processing apparatus, terminal, and network-side device
US20250193731A1 (en) Channel information processing method and apparatus, communication device, and storage medium
WO2024217495A1 Information processing method, information processing apparatus, terminal, and network-side device
WO2024222601A1 Method and apparatus for reporting CPU quantity, method and apparatus for receiving CPU quantity, terminal, and network-side device
CN120934571A CSI data processing method, apparatus, terminal, network-side device, medium, and product
CN120786392A Communication method and apparatus, terminal, network-side device, medium, and product
CN119449106A Channel state information reporting method and acquisition method, terminal, and network-side device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24915248

Country of ref document: EP

Kind code of ref document: A1