
WO2025139843A1 - Communication method and communication apparatus - Google Patents


Info

Publication number
WO2025139843A1
WO2025139843A1 (application PCT/CN2024/139092, CN2024139092W)
Authority
WO
WIPO (PCT)
Prior art keywords
model
information
downlink channel
channel information
dimensionality reduction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2024/139092
Other languages
English (en)
Chinese (zh)
Inventor
张津滔
刘凤威
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of WO2025139843A1 publication Critical patent/WO2025139843A1/fr


Classifications

    • H04W 24/02: Arrangements for optimising operational condition (under H04W 24/00, Supervisory, monitoring or testing arrangements)
    • H04W 24/06: Testing, supervising or monitoring using simulated traffic
    • H04B 17/391: Modelling the propagation channel (under H04B 17/30, Monitoring or testing of propagation channels)
    • H04B 7/0456: Selection of precoding matrices or codebooks, e.g. using matrices antenna weighting (under H04B 7/0413, MIMO systems)
    • H04B 7/0626: Channel coefficients, e.g. channel state information [CSI] (under H04B 7/0621, Feedback content)
    • H04L 41/16: Arrangements for maintenance, administration or management of data switching networks using machine learning or artificial intelligence

Definitions

  • the present application relates to the field of communications, and more specifically, to a communication method and a communication device.
  • a network device can use port dimensionality reduction to measure the equivalent downlink channels of multiple physical antennas using a small number of downlink reference signals.
  • the terminal device measures the received downlink reference signals to obtain downlink channel information and compresses the obtained downlink channel information based on an artificial intelligence (AI) model; the compressed information is then fed back to the network device through uplink control information (UCI).
  • the dimensionality reduction weights used in port dimensionality reduction can be calculated from an orthogonal codebook or by orthogonal decomposition, but studies have found that the equivalent channel information measured based on such dimensionality reduction weights is not highly compressible; how to further reduce the feedback overhead has therefore become an urgent problem to be solved.
  • the present application provides a communication method and a communication device, which can further reduce the feedback overhead of downlink channel information.
  • a communication method is provided.
  • the method may be executed by a network device, or may be executed by a component (such as a chip or a circuit) of the network device, without limitation.
  • the method comprises: performing digital precoding on M reference signals corresponding to M antenna ports based on dimensionality reduction weights, the dimensionality reduction weights being used to measure equivalent downlink channels of N physical antennas using the M reference signals, N being a positive integer, M being a positive integer less than N, and the dimensionality reduction weights being determined based on a dimensionality reduction model; sending the M reference signals; sending first indication information, the first indication information indicating a compression model, the compression model matching the dimensionality reduction model; receiving a first sequence, the first sequence being a compressed feedback amount obtained by taking first downlink channel information as an input of the compression model, the first downlink channel information being obtained by measuring the M reference signals; and acquiring second downlink channel information, the second downlink channel information being obtained by taking the first sequence as an input of a reconstruction model, the reconstruction model matching the dimensionality reduction model, and the second downlink channel information being used to determine the downlink channel information of the N physical antennas.
  • the optimization goal of the current dimensionality reduction method is only to minimize the energy loss of the equivalent channel after dimensionality reduction.
  • Such a method does not take into account the compressibility of the channel after dimensionality reduction.
  • the dimensionality reduction model and the compression model in the present application are matching models. That is to say, the dimensionality reduction method and the compression method in the present application are jointly optimized. While realizing port dimensionality reduction, the compressibility of the channel after dimensionality reduction is also considered. Therefore, compared with the current downlink channel information feedback scheme, the above technical scheme can make the compressibility of the equivalent downlink channel information higher, thereby achieving the effect of lower feedback overhead.
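As a rough, hedged sketch of the pipeline described above (not taken from the application), the following wires together simple stand-ins for the three matched models: a fixed selection matrix in place of the dimensionality reduction model, truncation in place of the compression model, and zero-padding in place of the reconstruction model. All names, shapes, and values are invented for illustration.

```python
def matmul(A, B):
    # plain-list matrix multiply, standing in for real linear algebra
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

N, M, L = 8, 4, 2  # physical antennas, antenna ports, feedback length

# Dimensionality reduction weights W (M x N): in the application these come
# from the dimensionality reduction model; here a selection matrix stands in.
W = [[1.0 if j == i * (N // M) else 0.0 for j in range(N)] for i in range(M)]

# Full downlink channel H (N x 1) for one receive antenna, invented values.
H = [[float(n + 1)] for n in range(N)]

# Equivalent downlink channel measured on the M precoded reference signals
# (first downlink channel information): H_eq = W @ H.
H_eq = matmul(W, H)

# Compression model stand-in: map H_eq to a short first sequence.
first_sequence = [H_eq[i][0] for i in range(L)]

# Reconstruction model stand-in: recover second downlink channel information.
H_eq_hat = [[first_sequence[i]] if i < L else [0.0] for i in range(M)]
```

With jointly trained models, the compression and reconstruction steps would be learned so that the equivalent channel compresses well, which is the point of matching the three models.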
  • the method further includes: inputting the first sequence into a reconstruction model for decompression to obtain second downlink channel information.
  • a communication method is provided.
  • the method may be executed by a terminal device, or may be executed by a component of the terminal device (eg, a chip or a circuit), without limitation.
  • the method includes: receiving M reference signals corresponding to M antenna ports, where M is a positive integer; obtaining first downlink channel information based on the M reference signals; receiving first indication information, where the first indication information indicates a compression model, the compression model matches a dimensionality reduction model, the dimensionality reduction model is used to obtain dimensionality reduction weights, the dimensionality reduction weights are used to measure equivalent downlink channels of N physical antennas using the M reference signals, N is a positive integer, and M is a positive integer less than N; and sending a first sequence, where the first sequence is a compressed feedback amount obtained by taking the first downlink channel information as an input of the compression model, the first sequence is to be decompressed as an input of a reconstruction model to obtain second downlink channel information, the reconstruction model matches the dimensionality reduction model, and the second downlink channel information is used to determine the downlink channel information of the N physical antennas.
  • the method further includes: inputting the first downlink channel information into a compression model for compression to obtain a first sequence.
  • the dimensionality reduction weight is output information obtained by taking the channel prior information as the input of the dimensionality reduction model.
  • the channel prior information is determined based on at least one of the following: currently measured uplink channel information, historical uplink channel information, historical downlink channel information, or sensed channel information.
  • the dimensionality reduction model, the compression model, and the reconstruction model are all related to the number of antenna ports M, the number of physical antennas N, and channel prior information.
  • the first downlink channel information indicates a response of an equivalent downlink channel, and the first sequence is a compressed feedback amount of the response.
  • the first downlink channel information indicates a precoding matrix corresponding to a response of the equivalent downlink channel, and the first sequence is a compressed feedback amount of the precoding matrix information corresponding to the response.
  • the downlink channel information of the N physical antennas includes downlink precoding matrices of the N physical antennas and downlink channel responses of the N physical antennas.
  • a downlink precoding matrix of N physical antennas is obtained, and the downlink precoding matrix of the N physical antennas is determined based on the dimensionality reduction weights and the second downlink channel information.
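One plausible instance (an assumption, not a computation prescribed by the application) of determining an N-antenna downlink quantity from the dimensionality reduction weights and the second downlink channel information is to apply the transpose of the weights to the reconstructed equivalent channel:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# M x N dimensionality reduction weights and the M x 1 reconstructed
# equivalent channel (second downlink channel information); values invented.
W = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0]]
H_eq_hat = [[0.8], [0.3]]

# N x 1 downlink estimate for the N physical antennas: W^T @ H_eq_hat.
H_hat = matmul(transpose(W), H_eq_hat)
```

A real system would work with complex-valued weights and use the conjugate transpose, but the shape bookkeeping is the same.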
  • a communication method is provided, which can be executed by a first device, or can also be executed by a component (such as a chip or circuit) of the first device, without limitation.
  • the first device can be a network device or an AI network element #1, such as a third-party network element #1.
  • the method includes: based on a dimensionality reduction model, obtaining dimensionality reduction weights, where the dimensionality reduction weights are used to digitally precode M reference signals corresponding to M antenna ports so as to measure equivalent downlink channels of N physical antennas, N is a positive integer, and M is a positive integer less than N; inputting a first sequence into a reconstruction model, where the first sequence is a compressed feedback amount obtained by taking first downlink channel information as an input of a compression model, the first downlink channel information is obtained by measuring the M reference signals, and the compression model, the reconstruction model, and the dimensionality reduction model are matched; and outputting second downlink channel information, where the second downlink channel information is used to determine the downlink channel information of the N physical antennas.
  • the dimensionality reduction model, the compression model and the reconstruction model are matched models after joint training. This method can increase the compressibility of the equivalent downlink channel information and reduce the feedback overhead.
  • a communication method is provided, which can be executed by a second device, or can also be executed by a component (such as a chip or circuit) of the second device, without limitation.
  • the second device can be a terminal device or an AI network element #2, such as a third-party network element #2.
  • the method comprises: inputting first downlink channel information into a compression model, wherein the first downlink channel information is obtained by measuring M reference signals, the M reference signals are obtained by digitally precoding reference signals corresponding to M antenna ports based on dimensionality reduction weights, the dimensionality reduction weights are used to measure equivalent downlink channels of N physical antennas using the M reference signals, N is a positive integer, M is a positive integer less than N, the dimensionality reduction weights are determined based on the dimensionality reduction model, and the compression model matches the dimensionality reduction model; and outputting a first sequence, wherein the first sequence is to be decompressed as an input of a reconstruction model to obtain second downlink channel information, the reconstruction model is associated with the dimensionality reduction model, and the second downlink channel information is used to determine the downlink channel information of the N physical antennas.
  • the dimensionality reduction weight is output information obtained by taking the channel prior information as the input of the dimensionality reduction model.
  • the channel prior information is determined based on at least one of the following: currently measured uplink channel information, historical uplink channel information, historical downlink channel information, or sensed channel information.
  • the dimensionality reduction model, the compression model, and the reconstruction model are all related to the number of antenna ports M, the number of physical antennas N, and channel prior information.
  • the first downlink channel information indicates a response of an equivalent downlink channel, and the first sequence is a compressed feedback amount of the response.
  • the first downlink channel information indicates a precoding matrix corresponding to a response of the equivalent downlink channel, and the first sequence is a compressed feedback amount of the precoding matrix information corresponding to the response.
  • the second downlink channel information indicates a downlink channel response, which is determined based on the first downlink channel information and a dimensionality reduction weight, or the second downlink channel information indicates a weight corresponding to the downlink channel response.
  • FIG. 8 is a schematic flowchart of a communication method 800 proposed in the present application.
  • FIG. 9 is a schematic flowchart of a model training method 900 proposed in the present application.
  • FIG. 10 is a schematic flowchart of a model training method 1000 proposed in the present application.
  • FIG. 11 compares the simulated performance of the model-based dimensionality reduction and compression method provided in the present application with that of the existing orthogonal-codebook-based dimensionality reduction and compression method.
  • FIG. 12 is a schematic block diagram of a communication device 1200 provided in an embodiment of the present application.
  • FIG. 13 is a schematic block diagram of a communication device 1300 provided in an embodiment of the present application.
  • "network element A sends information A to network element B" may include sending the information to network element B directly or indirectly, where network element B may be the destination of information A or an intermediate network element on the transmission path between the source and the destination.
  • similarly, "network element B receives information A from network element A" may include receiving the information from network element A directly or indirectly, where network element A may be the source of information A or an intermediate network element on the transmission path between the source and the destination.
  • the information may be processed as necessary between the source end and the destination end of the information transmission, such as format changes, etc., but the destination end can understand the valid information from the source end. Similar expressions in the present application can be understood similarly and will not be repeated here.
  • sending and receiving can be performed between devices, for example, between terminal device #1 and terminal device #2, or can be performed within a device, for example, sending or receiving between components, modules, chips, software modules or hardware modules within the device through a bus, wiring or interface.
  • the device for realizing the function of the terminal device may be a terminal device, or a device capable of supporting the terminal device to realize the function, such as a chip system, which may be installed in the terminal device or used in combination with the terminal device.
  • the chip system may be composed of a chip, or may include a chip and other discrete devices.
  • only the device for realizing the function of the terminal device is used as an example for explanation, and the scheme of the embodiment of the present application is not limited.
  • the processing unit for implementing the baseband function in the BBU is called a baseband high layer (BBH) unit, and the processing unit for implementing the baseband function in the RRU/AAU/RRH is called a baseband low layer (BBL) unit.
  • the device for realizing the function of the network device may be a network device; or it may be a device capable of supporting the network device to realize the function, such as a chip system, a hardware circuit, a software module, or a hardware circuit plus a software module.
  • the device may be installed in the network device or used in combination with the network device.
  • only the device for realizing the function of the network device is a network device as an example for explanation, and the scheme of the embodiments of the present application is not limited.
  • the network device and/or terminal device can be deployed on land, including indoors or outdoors, handheld or vehicle-mounted; it can also be deployed on the water surface; it can also be deployed on aircraft, balloons and satellites in the air.
  • the scenarios in which the network device and the terminal device are located are not limited in the embodiments of the present application.
  • the terminal device and the network device can be hardware devices, or they can be software functions running on dedicated hardware, software functions running on general-purpose hardware, such as virtualization functions instantiated on a platform (e.g., a cloud platform), or entities including dedicated or general-purpose hardware devices and software functions.
  • the present application does not limit the specific forms of the terminal device and the network device.
  • wireless communication networks such as mobile communication networks
  • the services supported by the network are becoming more and more diverse, and therefore the requirements that need to be met are becoming more and more diverse.
  • the network needs to be able to support ultra-high speed, ultra-low latency, and/or ultra-large connections.
  • This feature makes network planning, network configuration, and/or resource scheduling increasingly complex.
  • in addition, new technologies such as multiple input multiple output (MIMO) supporting beamforming and beam management have been introduced into the network.
  • AI nodes may also be introduced into the network.
  • the AI node can be deployed in one or more of the following locations in the communication system: access network equipment, terminal equipment, or core network equipment, etc., or the AI node can also be deployed separately, for example, deployed in a location other than any of the above devices, such as a host or cloud server in an over-the-top (OTT) system.
  • the AI node can communicate with other devices in the communication system, and the other devices can be, for example, one or more of the following: network equipment, terminal equipment, or network elements of the core network, etc.
  • the present application does not limit the number of AI nodes.
  • multiple AI nodes can be divided by function, that is, different AI nodes are responsible for different functions.
  • AI nodes can be independent devices, or integrated into the same device to implement different functions, or can be network elements in hardware devices, or can be software functions running on dedicated hardware, or can be virtualized functions instantiated on a platform (e.g., a cloud platform).
  • AI nodes can be AI network elements, AI entities or AI modules.
  • FIG. 1 is a schematic diagram of a possible application framework in a communication system.
  • network elements in the communication system are connected through interfaces (such as NG, Xn) or air interfaces.
  • One or more AI modules are provided in one or more devices of these network element nodes, such as core network equipment, access network nodes (RAN nodes), terminals or OAM (for clarity, only one is shown in Figure 1).
  • the access network node can be a separate RAN node, or it can include multiple RAN nodes, for example, including CU and DU.
  • the CU and/or DU can also be provided with one or more AI modules.
  • the CU can also be split into CU-CP and CU-UP.
  • One or more AI models are provided in the CU-CP and/or CU-UP.
  • the AI module is used to implement the corresponding AI function.
  • the AI modules deployed in different network elements may be the same or different.
  • the model of the AI module can implement different functions according to different parameter configurations.
  • the model of the AI module can be configured based on one or more of the following parameters: structural parameters (such as the number of neural network layers, the width of the neural network, the connection relationship between layers, the weight of the neuron, the activation function of the neuron, or at least one of the biases in the activation function), input parameters (such as the type of input parameters and/or the dimension of input parameters), or output parameters (such as the type of output parameters and/or the dimension of output parameters).
  • the bias in the activation function can also be called the bias of the neural network.
  • An AI module can have one or more models.
  • a model can perform inference to obtain an output, which includes one or more parameters.
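The parameter groups listed above (structural parameters, input parameters, and output parameters) can be pictured as a configuration record; the field names below are hypothetical and not from the application.

```python
from dataclasses import dataclass

@dataclass
class ModelConfig:
    num_layers: int        # structural: number of neural network layers
    layer_widths: list     # structural: width of each layer
    activation: str        # structural: activation function of the neurons
    input_dim: int         # input parameter: dimension of the model input
    output_dim: int        # output parameter: dimension of the model output

# The same module could host models configured differently to implement
# different functions; one invented configuration:
cfg = ModelConfig(num_layers=3, layer_widths=[64, 32, 16],
                  activation="relu", input_dim=128, output_dim=8)
```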
  • the learning process, training process, or inference process of different models can be deployed in different nodes or devices, or can be deployed in the same node or device.
  • FIG. 2 is a schematic diagram of a possible application framework in a communication system.
  • the communication system includes a RAN intelligent controller (RIC).
  • the RIC may correspond to the AI modules 117 and 118 shown in FIG. 1, which are used to implement AI-related functions.
  • the RIC includes a near-real-time RIC (near-RT RIC) and a non-real-time RIC (non-RT RIC).
  • the non-real-time RIC mainly processes non-real-time information, such as data that is not delay-sensitive, with delays on the order of seconds.
  • the near-real-time RIC mainly processes near-real-time information, such as data that is relatively delay-sensitive, with delays on the order of tens of milliseconds.
  • the near-real-time RIC is used for model training and inference, for example, to train an AI model and use it for inference.
  • the near-real-time RIC can obtain network-side and/or terminal-side information from a RAN node (e.g., CU, CU-CP, CU-UP, DU, and/or RU) and/or a terminal; this information can be used as training data or inference data.
  • the near-real-time RIC can deliver the inference results to the RAN node and/or the terminal.
  • the inference results can be exchanged between the CU and the DU, and/or between the DU and the RU.
  • for example, the near-real-time RIC delivers the inference results to the DU, and the DU forwards them to the RU.
  • the non-real-time RIC is likewise used for model training and inference, for example, to train an AI model and use it for inference.
  • the non-real-time RIC can obtain network-side and/or terminal-side information from a RAN node (such as a CU, CU-CP, CU-UP, DU, and/or RU) and/or a terminal.
  • this information can be used as training data or inference data, and the inference results can be delivered to the RAN node and/or the terminal.
  • the inference results can be exchanged between the CU and the DU, and/or between the DU and the RU.
  • for example, the non-real-time RIC delivers the inference results to the DU, and the DU forwards them to the RU.
  • the near real-time RIC and the non-real-time RIC may also be separately set as a network element.
  • the near real-time RIC and the non-real-time RIC may also be part of other devices, for example, the near real-time RIC is set in a RAN node (for example, in a CU or DU), and the non-real-time RIC is set in an OAM, a cloud server, a core network device, or other network devices.
  • FIG. 3 is a schematic diagram of a communication system applicable to the communication method of an embodiment of the present application.
  • the communication system 100 may include at least one network device, such as the network device 110 shown in FIG. 3; the communication system 100 may also include at least one terminal device, such as the terminal device 120 and the terminal device 130 shown in FIG. 3.
  • the network device 110 and the terminal device can communicate via a wireless link.
  • the communication devices in the communication system for example, the network device 110 and the terminal device 120, can communicate via a multi-antenna technology.
  • FIG. 4 is a schematic diagram of another communication system applicable to the communication method of an embodiment of the present application.
  • the communication system 200 shown in FIG. 4 also includes an AI network element 140.
  • the AI network element 140 is used to perform AI-related operations, such as building a training data set or training an AI model.
  • the network device 110 may send data related to the training of the AI model to the AI network element 140, which constructs a training data set and trains the AI model.
  • the data related to the training of the AI model may include data reported by the terminal device.
  • the AI network element 140 may send the results of the AI-model-related operations to the network device 110, which may forward them to the terminal device.
  • the results of the operations related to the AI model may include at least one of the following: an AI model that has completed training, an evaluation result or a test result of the model, etc.
  • a part of the trained AI model may be deployed on the network device 110, and another part may be deployed on the terminal device.
  • the trained AI model may be deployed on the network device 110.
  • the trained AI model may be deployed on the terminal device.
  • FIG. 4 illustrates only the example in which the AI network element 140 is directly connected to the network device 110.
  • the AI network element 140 may also be connected to the terminal device.
  • the AI network element 140 may be connected to the network device 110 and the terminal device at the same time.
  • the AI network element 140 may also be connected to the network device 110 through a third-party network element.
  • the embodiment of the present application does not limit the connection relationship between the AI network element and other network elements.
  • the AI network element 140 may also be provided as a module in a network device and/or a terminal device, for example, in the network device 110 or the terminal device shown in FIG. 3 .
  • FIG. 3 and FIG. 4 are simplified schematic diagrams for ease of understanding.
  • the communication system may also include other devices, such as wireless relay devices and/or wireless backhaul devices, which are not shown in FIG. 3 and FIG. 4.
  • the communication system may include multiple network devices and may also include multiple terminal devices. The embodiment of the present application does not limit the number of network devices and terminal devices included in the communication system.
  • the training data set is used for training the AI model.
  • the training data set may include the input of the AI model, or the input and target output of the AI model.
  • the training data set includes one or more training data.
  • the training data may include training samples input to the AI model, or may include the target output of the AI model.
  • the target output may also be referred to as a label, sample label, or labeled sample.
  • the label is the ground truth, which usually refers to data that is considered to be accurate or real.
  • training data sets can include simulation data collected through simulation platforms, experimental data collected in experimental scenarios, or measured data collected in actual communication networks. Due to differences in the geographical environment and channel conditions in which the data is generated, such as indoor and outdoor conditions, mobile speeds, frequency bands, or antenna configurations, the collected data can be classified when acquiring the data. For example, data with the same channel propagation environment and antenna configuration can be grouped together.
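The classification described above can be sketched as grouping collected samples by a scenario key; the environments, antenna counts, and sample records below are invented for illustration.

```python
from collections import defaultdict

# Collected training data: each sample carries the conditions under which
# it was generated (hypothetical records).
samples = [
    {"env": "indoor",  "antennas": 32, "csi": [0.1, 0.2]},
    {"env": "outdoor", "antennas": 64, "csi": [0.3, 0.1]},
    {"env": "indoor",  "antennas": 32, "csi": [0.2, 0.4]},
]

# Group samples that share the same channel propagation environment and
# antenna configuration into one training set.
groups = defaultdict(list)
for s in samples:
    groups[(s["env"], s["antennas"])].append(s["csi"])
```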
  • model training essentially consists of learning features from the training data.
  • AI models such as neural network models
  • the AI model is a neural network, and adjusting the model parameters of the neural network includes adjusting at least one of the following parameters: the number of layers, width, weights of neurons, or parameters in the activation function of the neurons of the neural network.
  • Inference data can be used as input to the trained AI model for inference of the AI model.
  • the inference data is input into the AI model, and the corresponding output is the inference result.
  • FIG. 6 is a schematic diagram of the forward propagation and back propagation of the neural network.
  • the AI module refers to a neural network layer or an AI model. Forward propagation means that the neural network passes the intermediate calculation result of the previous neural network layer (or previous AI model) to the next neural network layer (or next AI model), completing the intermediate calculations that produce the final output under the current parameters.
  • back propagation refers to the method by which the neural network calculates gradients: after computing the error between the final output and the label (that is, the true result), the neural network calculates the update gradients of the network parameters to be trained layer by layer, from back to front.
  • the principle is similar to the chain rule in calculus.
  • the error of the latter layer is calculated and propagated to the previous layer, and the previous layer calculates the gradient of the network layer based on the current error.
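The layer-by-layer forward and backward passes described above can be sketched with two linear layers standing in for neural network modules (the shapes, the squared-error loss, and the finite-difference check are illustrative assumptions, not the patent's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linear "modules" standing in for consecutive neural network layers.
W1 = rng.standard_normal((4, 8))   # layer 1: maps an 8-dim input to 4 dims
W2 = rng.standard_normal((2, 4))   # layer 2: maps 4 dims to a 2-dim output

x = rng.standard_normal(8)
label = rng.standard_normal(2)

# Forward propagation: each layer passes its result to the next.
h = W1 @ x          # intermediate result handed to the next module
y = W2 @ h          # final output

# Error between final output and label (squared-error loss).
loss = 0.5 * np.sum((y - label) ** 2)

# Back propagation: gradients flow from the last layer to the first
# (chain rule), each layer using the error signal of the layer after it.
g_y = y - label          # dL/dy
g_W2 = np.outer(g_y, h)  # gradient for layer-2 parameters
g_h = W2.T @ g_y         # error propagated to the previous layer
g_W1 = np.outer(g_h, x)  # gradient for layer-1 parameters

# Sanity check: compare against a finite-difference estimate for one entry.
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
loss_p = 0.5 * np.sum((W2 @ (W1p @ x) - label) ** 2)
assert abs((loss_p - loss) / eps - g_W1[0, 0]) < 1e-3
```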
  • the introduction of AI technology into wireless communication networks has resulted in a CSI feedback method based on the AI model.
  • the terminal device uses the AI model to compress and feedback the CSI
  • the network device uses the AI model to restore the compressed CSI.
  • what is transmitted is a sequence (such as a bit sequence); compared with traditional CSI feedback, the feedback overhead is lower.
  • the process includes the following steps.
  • the network device receives uplink channel information and calculates a dimensionality reduction weight w 0 (of dimension N×M) based on it. For example, the network device uses the uplink channel information (that is, the uplink channel response) together with the second-order statistic Z of the uplink channel information, calculated from historical uplink channel responses, and then derives w 0 from Z in the following manner.
  • the network device then sends M downlink reference signals (e.g., CSI-RS) precoded with the dimensionality reduction weight w 0 .
  • the network device performs singular value decomposition (SVD) on the second-order statistic Z, and selects the first M feature directions ⁇ l 0 ,l 1 , ...,l M-1 ⁇ (where l i is a 1 ⁇ N vector) as the dimensionality reduction weight w 0 .
  • the terminal device measures M downlink reference signals to obtain corresponding equivalent channel information H DL w 0 . After completing the measurement, the terminal device sends a downlink measurement report to the network device, and the report includes CSI.
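A sketch of this SVD-based port dimensionality reduction, with random channels standing in for real measurements and N, M chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 8, 2   # N physical antennas, M reference-signal ports (M < N)

# Second-order statistic Z of the uplink channel, accumulated from
# historical uplink channel responses (Z is N x N and Hermitian).
H_hist = rng.standard_normal((20, N)) + 1j * rng.standard_normal((20, N))
Z = H_hist.conj().T @ H_hist / len(H_hist)

# SVD of Z; the first M feature directions form the dimensionality
# reduction weight w0 (dimension N x M).
U, s, Vh = np.linalg.svd(Z)
w0 = U[:, :M]

# The M reference signals are precoded with w0, so the terminal measures
# the M-port equivalent downlink channel H_DL @ w0 instead of the full H_DL.
H_DL = rng.standard_normal((1, N)) + 1j * rng.standard_normal((1, N))
H_eq = H_DL @ w0
```

The columns of `w0` are orthonormal here because they come from the SVD of Z; in the model-based scheme proposed later in this application, the dimensionality reduction weights are not restricted to be orthogonal.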
  • in the AI-model-based CSI feedback method, the network device or terminal device can design and train an autoencoder to replace the codebook in the 3rd Generation Partnership Project (3GPP) Release 16 protocol and perform space-frequency dual-domain CSI compression.
  • the encoder output (equivalently, the decoder input) of the autoencoder is the compressed representation of the precoding matrix of the equivalent downlink channel; this compressed representation replaces the precoding matrix in the feedback process to reduce feedback overhead.
  • the network device receives the CSI in the downlink measurement report, obtains the compressed feedback of the equivalent channel precoding matrix, and completes the reconstruction of the downlink precoding matrix by restoring the inner-layer weight v (i.e., the equivalent channel precoding matrix before compression).
  • the present application proposes a communication method that can effectively solve the above technical problems.
  • the communication method is described in detail below.
  • Fig. 8 is a schematic flow chart of a communication method 800 proposed in the present application. The method includes the following steps.
  • the network device digitally precodes M reference signals corresponding to M antenna ports based on dimensionality reduction weights; the dimensionality reduction weights are used to measure the equivalent downlink channels of N physical antennas through the M reference signals, where N is a positive integer, M is a positive integer less than N, and the dimensionality reduction weights are determined based on the dimensionality reduction model.
  • the model in this application may be an AI model, a neural network model, a machine learning model, etc.; this application does not impose any restrictions.
  • that the network device digitally precodes the M reference signals corresponding to the M antenna ports based on the dimensionality reduction weight means that the network device implements port dimensionality reduction based on the dimensionality reduction weight; the dimension of the dimensionality reduction weight is N×M.
  • the reference signal can be CSI-RS or SSB or DMRS, which is not limited in this application.
  • the dimensionality reduction weight is output information obtained by using the channel prior information as the input of the dimensionality reduction model.
  • the channel prior information is determined based on at least one of the following information: uplink channel information obtained by current measurement, historical uplink channel information, historical downlink channel information, or perceived channel information.
  • the channel prior information can be the uplink channel information obtained by current measurement, or can be calculated based on the uplink channel information obtained by previous measurement and the historical uplink channel information.
  • the dimensionality reduction model can be located as a module in a network device or an AI network element, such as a third-party network element.
  • the AI network element can be a core network element such as an AMF network element, a UPF network element, or an OAM, a cloud server, an OTT or other network element, without limitation.
  • after the terminal device obtains the first downlink channel information, it feeds back the obtained first downlink channel information to the network device.
  • the terminal device does not directly feed back such a high-dimensional tensor to the network device; instead, it sends the network device the compressed feedback amount of the first downlink channel information obtained by compressing it with the compression model. Since the dimensionality reduction model is introduced in this method, the dimensionality reduction method corresponding to the dimensionality reduction model is associated with the compression method. Therefore, the network device indicates to the terminal device the compression model matching the dimensionality reduction model, in order to obtain better compression performance.
  • in this way, the reconstruction model and the compression model can be matched. In scenarios where dimensionality reduction is associated with compression (for example, in the scheme proposed in this application), the dimensionality reduction weights are not restricted to be orthogonal, the compression, reconstruction, and dimensionality reduction methods are strongly related, and the input of the compression model is inconsistent with the output of the reconstruction model. The operating principle of the compression and reconstruction models is therefore essentially different from the scenario where dimensionality reduction and compression are not associated. Accordingly, in the embodiment of the present application, the compression model matching the dimensionality reduction model is indicated by the first indication information, and the compression model is used for feature extraction and compression of the inner weights of the first downlink channel information, which not only improves compression performance but also further guarantees the decompression performance of the reconstruction model.
  • Implementation method 1: the first indication information indicates an identifier (ID) of the compression model.
  • Implementation method 2: the first indication information indicates the dimensionality reduction model ID and/or the reconstruction model ID.
  • the compression model ID, dimensionality reduction model ID and reconstruction model ID can be predefined and the three model IDs can be associated. Then, the network device can implicitly indicate the compression model by indicating the dimensionality reduction model ID and/or the reconstruction model ID.
  • the compression model can be determined based on the compression method, so the network device can implicitly indicate the compression model by indicating the compression method.
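The ID association described in implementation method 2 can be sketched as a predefined lookup table (the ID strings and table layout below are hypothetical):

```python
# Hypothetical predefined association between the three model IDs: indicating
# the dimensionality reduction model ID implicitly identifies the matching
# compression model (and reconstruction model), so no explicit compression
# model ID needs to be signaled.
MODEL_ASSOCIATION = {
    # dimensionality reduction ID: (compression ID, reconstruction ID)
    "dr-1": ("cp-1", "rc-1"),
    "dr-2": ("cp-2", "rc-2"),
}

def compression_model_for(dim_reduction_id):
    """Resolve the compression model implied by a dimensionality reduction ID."""
    return MODEL_ASSOCIATION[dim_reduction_id][0]
```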
  • the reconstruction model can be located as a module in a network device or AI network element, such as a third-party network element.
  • the network device collects training data.
  • the training data used to train the AI model includes training samples and sample labels, where the channel prior information can be regarded as the training sample and downlink channel information #1 can be regarded as the sample label (i.e., the true value).
  • the channel prior information please refer to the description in S810, which will not be repeated here.
  • the following steps are described by taking the channel prior information as the uplink channel response H UL and the downlink channel information #1 as the actual downlink channel response H DL of N physical antennas as an example.
  • H UL is used as the input of the dimensionality reduction model
  • H DL is used to calculate the input v 0 of the compression model (the result of the SVD operation of the equivalent downlink channel response H DL w 0 ) and the final label V (i.e., the precoding matrix of the N physical antennas).
  • the network device can obtain the uplink channel response H UL through measurement, and complete the collection of the corresponding downlink channel response H DL through the existing air interface process.
  • each downlink channel response H DL sample should be associated with a corresponding uplink channel response H UL sample through network device indication, for example by requiring H UL and H DL to be located in the same or adjacent bandwidth (frequency domain position interval less than K RBs) and their time domain interval to be less than t time slots; that is, the channel responses reflected by H UL and H DL have reciprocity that meets the requirements, such as uplink and downlink reciprocity in the delay angle domain.
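A sketch of this pairing rule, with hypothetical record fields and the thresholds K and t chosen arbitrarily for illustration:

```python
def is_valid_pair(h_ul, h_dl, max_rb_gap=4, max_slot_gap=10):
    """Keep an (H_UL, H_DL) training pair only if the two measurements are
    close enough in frequency (RB index) and time (slot index) for
    uplink/downlink reciprocity, e.g. in the delay-angle domain, to hold."""
    rb_gap = abs(h_ul["rb"] - h_dl["rb"])
    slot_gap = abs(h_ul["slot"] - h_dl["slot"])
    return rb_gap < max_rb_gap and slot_gap < max_slot_gap

# Hypothetical measurement records with frequency/time positions.
ul = {"rb": 100, "slot": 50}
dl_good = {"rb": 102, "slot": 55}
dl_bad = {"rb": 140, "slot": 55}   # too far away in frequency
```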
  • the first device sends downlink channel information #2 to the second device, where the downlink channel information #2 is determined based on the downlink channel information #1 and the dimensionality reduction weight, wherein the dimensionality reduction weight is output information obtained by using the channel prior information as the input of the dimensionality reduction model, and the dimensionality reduction weight is used to digitally precode M reference signals corresponding to the M antenna ports to measure the equivalent downlink channels of N physical antennas, where M is a positive integer less than N.
  • the second device receives the downlink channel information #2 from the first device.
  • the method further includes: the first device inputs the uplink channel response H UL into a dimensionality reduction model to output a dimensionality reduction weight w 0 , and then the first device determines downlink channel information #2 (i.e., equivalent downlink channel information) based on the dimensionality reduction weight w 0 .
  • the downlink channel information #2 is the equivalent channel weight v 0 after SVD of the equivalent downlink channel response H DL w 0 .
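A sketch of obtaining the equivalent channel weight v0 by SVD of the equivalent downlink channel response (random stand-ins for the channel and an orthonormalized stand-in for w 0 ; in the scheme itself, w 0 is produced by the dimensionality reduction model):

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 8, 2   # physical antennas and reduced ports

# Equivalent downlink channel response H_DL @ w0 over 4 frequency units
# (all values here are random stand-ins for measured quantities).
H_DL = rng.standard_normal((4, N)) + 1j * rng.standard_normal((4, N))
w0, _ = np.linalg.qr(rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M)))
H_eq = H_DL @ w0                     # shape (4, M)

# SVD of the equivalent channel; the dominant right-singular vector is the
# equivalent channel weight v0 fed to the compression model.
_, _, Vh = np.linalg.svd(H_eq)
v0 = Vh[0].conj()                    # length-M equivalent channel weight
```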
  • the second device sends a first sequence to the first device, where the first sequence is output information obtained by using the downlink channel information #2 as input of the compression model.
  • the first device receives the first sequence from the second device.
  • the method further includes: the second device inputs the downlink channel information #2 into the compression model and outputs the first sequence.
  • the first sequence may be the compressed feedback amount of the response H DL w 0 of the equivalent downlink channel, or the compressed feedback amount of the equivalent channel weight v 0 , without limitation.
  • the downlink channel information #2 and the first sequence can be regarded as intermediate calculation results of the forward propagation.
  • the first device sends first gradient information to the second device; the first gradient information is input-side gradient information of the reconstruction model and is used to update model parameters of the compression model. The first gradient information is determined based on the first error information and the reconstruction model. The first error information is determined based on downlink channel information #3 and downlink channel information #1, where downlink channel information #3 is output information obtained by taking the first sequence as input of the reconstruction model, or is calculated based on the output information of the reconstruction model; the first error information is used to update model parameters of the reconstruction model.
  • the second device receives the first gradient information from the first device.
  • the method further includes: the first device inputs the first sequence into the reconstruction model to obtain output information of the reconstruction model (the output information is downlink channel information #3 or is used to determine downlink channel information #3), and then the first device determines first error information based on downlink channel information #1 and downlink channel information #3. Afterwards, the first device obtains gradient information (i.e., first gradient information) on the input side of the reconstruction model by back propagation calculation based on the first error information and the reconstruction model.
  • that the first error information is determined based on downlink channel information #3 and downlink channel information #1 can be understood as follows: downlink channel information #3 is the actual output information of the reconstruction model in the current training round, obtained based on the channel prior information in the training data set, or information calculated based on that actual output.
  • the downlink channel information #3 is the channel response (or channel weight) actually obtained during the training process.
  • the first device can determine the expected channel response (or channel weight) based on downlink channel information #1 in the training data set, and then determine the first error information based on the difference between the expected channel response (or channel weight) and the actual channel response (or channel weight).
  • the output information of the reconstruction model can be the reconstructed precoding matrix of the N physical antennas. Alternatively, the output information of the reconstruction model can be the inner weight v, in which case the first device calculates the reconstructed precoding matrix based on the inner weight v and the dimensionality reduction weight w 0 . Afterwards, the first device performs SVD on the real downlink channel response H DL (i.e., downlink channel information #1) to obtain the real weights V of the N physical antennas, and calculates the error between V and the reconstructed precoding matrix. Based on the calculated error, the first device obtains the input-side gradient of the reconstruction model (i.e., the first gradient information) through back propagation calculation and transmits it to the second device.
  • the second device sends second gradient information to the first device, the second gradient information is input side gradient information of the compression model, the second gradient information is determined based on the first gradient information and the compression model, and the second gradient information is used to update the model parameters of the dimensionality reduction model.
  • the first device receives the second gradient information from the second device.
  • the method further includes: the second device performs back propagation calculation based on the first gradient information and the compression model to obtain gradient information (i.e., second gradient information) on the input side of the compression model.
  • the model parameters may include at least one of the following parameters: a neuron weight, or a bias, etc.
  • the method also includes: S970, when a termination condition of the model training is met, the first device determines to terminate the model training.
  • the above describes in detail the model training method 900 provided in the present application.
  • the traditional model training method only supports the training of two models at both ends, while the method 900 supports the joint training of multiple models at both ends.
  • the three models of the dimensionality reduction model, the compression model, and the reconstruction model can be jointly trained and optimized, which can further reduce the channel information feedback overhead.
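The two-end joint training loop, including the exchange of the first and second gradient information, can be sketched with linear stand-ins for the three models. This simplified sketch uses H_DL·w0 directly as the compression input, omits the SVD step, and picks all shapes and the learning rate arbitrarily; real AI models would be deeper and complex-valued:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, L = 6, 3, 2     # physical antennas, ports, compressed sequence length
lr = 0.01

# One toy training pair: uplink channel response (model input) and the real
# downlink channel response of the N physical antennas (the label).
h_ul = rng.standard_normal(N)
h_dl = rng.standard_normal(N)

# Linear stand-ins for the three models.
Wf = rng.standard_normal((N * M, N)) * 0.1   # dimensionality reduction model
Wc = rng.standard_normal((L, M)) * 0.1       # compression model (device 2)
Wr = rng.standard_normal((N, L)) * 0.1       # reconstruction model (device 1)

def forward():
    w0 = (Wf @ h_ul).reshape(N, M)   # dimensionality reduction weight
    h_eq = h_dl @ w0                 # equivalent downlink channel info
    s = Wc @ h_eq                    # first sequence (compressed feedback)
    v_hat = Wr @ s                   # reconstructed channel information
    return w0, h_eq, s, v_hat

w0, h_eq, s, v_hat = forward()
loss0 = 0.5 * np.sum((v_hat - h_dl) ** 2)

# Back propagation mirroring the over-the-air gradient exchange:
g_v = v_hat - h_dl                 # first error information
g_Wr = np.outer(g_v, s)
g_s = Wr.T @ g_v                   # first gradient info: device 1 -> device 2
g_Wc = np.outer(g_s, h_eq)
g_heq = Wc.T @ g_s                 # second gradient info: device 2 -> device 1
g_w0 = np.outer(h_dl, g_heq)       # through h_eq = h_dl @ w0
g_Wf = np.outer(g_w0.reshape(-1), h_ul)

# One joint gradient step on all three models.
Wr -= lr * g_Wr
Wc -= lr * g_Wc
Wf -= lr * g_Wf

_, _, _, v_hat2 = forward()
loss1 = 0.5 * np.sum((v_hat2 - h_dl) ** 2)
assert loss1 < loss0   # the joint step reduced the reconstruction error
```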
  • Another model training method proposed in this application is introduced below.
  • the main difference from method 900 is that in method 900 the three models are jointly trained across both ends, whereas in this method the three models are jointly trained on one side (for example, the first device). Afterwards, if one or more of the three models (say, model A) is to be deployed on the other device side (for example, the second device side), the first device can send the trained model A, or the training data set corresponding to the trained model A, to the second device, and the second device completes the model deployment or performs secondary training based on what it receives.
  • Figure 10 is a schematic flow chart of a method 1000 for model training proposed in the present application.
  • the first device can complete the training of the unilateral model by locally deploying the model and transmitting the intermediate layer calculation results and gradients in real time.
  • the dimensionality reduction model, the compression model, and the reconstruction model are all deployed on the first device side.
  • the first device can be a terminal device, a network device, or an AI network element #1, such as a third-party network element #1.
  • the method includes the following steps.
  • the first device obtains a first training data set, where the first training data set includes channel prior information and downlink channel information #1, where the downlink channel information #1 is actual downlink channel information of N physical antennas, where N is a positive integer.
  • the first device jointly trains a dimensionality reduction model, a compression model, and a reconstruction model, wherein the input of the dimensionality reduction model is channel prior information, and the output is dimensionality reduction weights, wherein the dimensionality reduction weights are used to digitally precode M reference signals corresponding to M antenna ports to measure the equivalent downlink channels of N physical antennas, M is a positive integer less than N, the input of the compression model is downlink channel information #2, and the output is a first sequence, wherein the downlink channel information #2 is determined based on the downlink channel information #1 and the dimensionality reduction weights, the input of the reconstruction model is the first sequence, and the output is downlink channel information #3.
  • the channel prior information may be an uplink channel response H UL
  • the downlink channel information #1 may be a real downlink channel response H DL of N physical antennas.
  • the input and output of each model and examples are described in S920 to S950 and will not be repeated here.
  • the first device updates the model parameters of the dimensionality reduction model, the compression model, and the reconstruction model based on first error information, where the first error information is determined based on downlink channel information #3 and downlink channel information #1.
  • the first device obtains the first gradient information and the second gradient information described in S940 and S950. Further, the first device updates the model parameters of the reconstruction model based on the first error information, updates the model parameters of the dimensionality reduction model based on the second gradient information, and updates the model parameters of the compression model based on the first gradient information.
  • for related examples of the first error information, the first gradient information, and the second gradient information, please refer to the descriptions in S940 to S950, which are not repeated here.
  • the method also includes: S1040, when a termination condition of the model training is met, the first device determines to terminate the model training.
  • in method 900 and method 1000 above, the two-end and one-sided joint training processes are described in detail, using three models as an example.
  • the training method proposed in this application is also applicable to the joint training process of Y (Y>3) models.
  • the above describes in detail the two model training methods provided by the present application.
  • the following, in conjunction with FIG. 11, compares the simulated performance of the model-based dimensionality reduction method of the present application with that of the dimensionality reduction method based on the traditional orthogonal codebook.
  • the horizontal axis of Figure 11 represents the feedback overhead, and the vertical axis represents the squared generalized cosine similarity (SGCS), which can also be simply understood as the accuracy of the reconstructed downlink precoding matrix of the 128 physical antennas. SGCS satisfies the following formula: SGCS = (1/K) Σ_{k=1}^{K} (1/Nf) Σ_{n=1}^{Nf} E{ |v_{k,n}^H v̂_{k,n}|^2 / ( ‖v_{k,n}‖^2 ‖v̂_{k,n}‖^2 ) }, where v_{k,n} and v̂_{k,n} are the true and reconstructed precoding vectors of layer k on frequency domain unit n, K is the rank (channel rank), Nf is the number of frequency domain units (a frequency domain unit can be one or more subcarriers or RBs, without limitation), and E{·} indicates averaging the samples in brackets.
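Under the SGCS definition above, the metric can be computed as follows (random vectors stand in for the true and reconstructed precoders, and a single sample replaces the E{·} average; the dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
K, Nf, N = 2, 4, 8   # rank, frequency-domain units, antennas

# True and reconstructed per-layer, per-frequency-unit precoding vectors.
v = rng.standard_normal((K, Nf, N)) + 1j * rng.standard_normal((K, Nf, N))
noise = rng.standard_normal((K, Nf, N)) + 1j * rng.standard_normal((K, Nf, N))
v_hat = v + 0.1 * noise

def sgcs(v, v_hat):
    """Squared generalized cosine similarity, averaged over rank and
    frequency-domain units (the E{} average is over samples; one here)."""
    num = np.abs(np.sum(v.conj() * v_hat, axis=-1)) ** 2
    den = np.sum(np.abs(v) ** 2, axis=-1) * np.sum(np.abs(v_hat) ** 2, axis=-1)
    return np.mean(num / den)

score = sgcs(v, v_hat)
```

By the Cauchy-Schwarz inequality the metric lies in [0, 1], reaching 1 when the reconstruction equals the true precoder up to a complex scale factor.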
  • the devices in the existing network architecture are mainly used as examples for exemplary description; it can be understood that the embodiments of the present application do not limit the specific form of the devices, and, for example, future devices that can achieve the same function are also applicable to the embodiments of the present application.
  • the methods and operations implemented by devices can also be implemented by components of the devices (such as chips or circuits).
  • the method provided by the embodiment of the present application is described in detail above in conjunction with Figures 1 to 11.
  • the above method is mainly introduced from the perspective of interaction between a network device and a terminal device (or a first device and a second device). It is understandable that the network device and the terminal device (or a first device and a second device), in order to implement the above functions, include hardware structures and/or software modules corresponding to the execution of each function.
  • the embodiment of the present application can divide the functional modules of the device according to the above method example.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above-mentioned integrated module can be implemented in the form of hardware or in the form of software functional modules.
  • the division of modules in the embodiment of the present application is schematic, which is only a logical function division. There may be other division methods in actual implementation. The following is an example of dividing each functional module corresponding to each function.
  • FIG12 is a schematic block diagram of a communication device 1200 provided in an embodiment of the present application.
  • the device 1200 may include a communication unit 1210 and a processing unit 1220.
  • the communication unit 1210 may communicate with the outside, and the processing unit 1220 is used for data processing.
  • the communication unit 1210 may also be referred to as a communication interface or a transceiver unit.
  • the device 1200 can implement steps or processes corresponding to those executed by the network device in the above method embodiment, wherein the processing unit 1220 is used to execute processing-related operations of the network device in the above method embodiment, and the communication unit 1210 is used to execute sending-related operations of the network device in the above method embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A communication method and a communication apparatus are disclosed. In the method, a network device performs port dimensionality reduction based on a dimensionality reduction model in order to measure an equivalent downlink channel; a terminal device feeds back to the network device a compressed feedback amount of the equivalent channel information, obtained based on a compression model matching the dimensionality reduction model; the network device then obtains reconstructed compressed equivalent downlink channel information based on a reconstruction model matching the dimensionality reduction model. In the method, the dimensionality reduction model, the compression model, and the reconstruction model are models that match one another after joint training. The method can improve the compressibility of the equivalent channel information and further reduce feedback overhead. The models involved in the present application may be an AI model, a neural network model, a machine learning model, etc.
PCT/CN2024/139092 2023-12-29 2024-12-13 Procédé de communication et appareil de communication Pending WO2025139843A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202311863068.8A CN120238884A (zh) 2023-12-29 2023-12-29 通信方法和通信装置
CN202311863068.8 2023-12-29

Publications (1)

Publication Number Publication Date
WO2025139843A1 true WO2025139843A1 (fr) 2025-07-03

Family

ID=96165191

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/139092 Pending WO2025139843A1 (fr) 2023-12-29 2024-12-13 Procédé de communication et appareil de communication

Country Status (2)

Country Link
CN (1) CN120238884A (fr)
WO (1) WO2025139843A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114079493A (zh) * 2020-08-13 2022-02-22 华为技术有限公司 一种信道状态信息测量反馈方法及相关装置
WO2022121797A1 (fr) * 2020-12-08 2022-06-16 华为技术有限公司 Procédé et appareil de transmission de données
CN116018768A (zh) * 2020-08-18 2023-04-25 高通股份有限公司 用于信道状态反馈的配置
WO2023154003A2 (fr) * 2022-02-09 2023-08-17 Panasonic Intellectual Property Corporation Of America Appareil de communication et procédé de communication pour rétroaction de csi de dimension réduite

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIAJIA GUO; CHAO-KAI WEN; SHI JIN; XIAO LI: "AI for CSI Feedback Enhancement in 5G-Advanced", ARXIV.ORG, 17 September 2022 (2022-09-17), pages 1 - 8, XP091321552 *
YAN CHENG, HUAWEI, HISILICON: "Remaining issues on AI/ML for CSI feedback enhancement", 3GPP DRAFT; R1-2310845; TYPE DISCUSSION; FS_NR_AIML_AIR, vol. RAN WG1, 3 November 2023 (2023-11-03), Chicago, US, pages 1 - 15, XP052544515 *

Also Published As

Publication number Publication date
CN120238884A (zh) 2025-07-01

Similar Documents

Publication Publication Date Title
WO2021217519A1 (fr) Procédé et appareil de réglage de réseau neuronal
EP4163827A1 (fr) Procédé et dispositif pour acquérir un réseau de neurones artificiels
WO2024067258A1 (fr) Procédé et appareil de communication
WO2024208296A1 (fr) Procédé de communication et appareil de communication
WO2024251184A1 (fr) Procédé de communication et appareil de communication
WO2025139843A1 (fr) Procédé de communication et appareil de communication
WO2023092310A1 (fr) Procédé de traitement d'informations, procédé de génération de modèle et dispositifs
WO2022012256A1 (fr) Procédé de communication et dispositif de communication
WO2025067480A1 (fr) Procédé, appareil et système de communication
WO2025092630A1 (fr) Procédé de communication et appareil de communication
US20250240075A1 (en) Channel information feedback method, transmitting end device, and receiving end device
WO2025167701A1 (fr) Procédé de communication et appareil de communication
WO2025139762A1 (fr) Procédé de rapport d'informations csi et produit associé
WO2025066754A1 (fr) Procédé de rétroaction d'informations csi et produit associé
WO2025195293A1 (fr) Procédé de rétroaction d'informations d'état de canal et produit associé
WO2025218595A1 (fr) Procédé et appareil de communication
WO2025167989A1 (fr) Procédé de communication et appareil de communication
CN120934711A (zh) 一种通信方法和通信装置
WO2025209317A1 (fr) Procédé et appareil de transmission d'informations de rétroaction de canal
WO2025209331A1 (fr) Procédé, appareil et système de transmission d'informations
CN121013109A (zh) 一种通信方法和相关设备
WO2024131900A1 (fr) Procédé de communication et appareil de communication
WO2025185425A1 (fr) Modèle sans fil, procédé et dispositif de traitement d'informations, et système
WO2025140003A1 (fr) Procédé de communication et appareil de communication
WO2025209536A1 (fr) Procédé d'acquisition de données d'entraînement et appareil associé

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24910774

Country of ref document: EP

Kind code of ref document: A1