
WO2023126007A1 - Method and apparatus for transmitting channel information - Google Patents

Method and apparatus for transmitting channel information

Info

Publication number
WO2023126007A1
Authority
WO
WIPO (PCT)
Prior art keywords
channel
information
sparse representation
model
training data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2023/070013
Other languages
English (en)
Chinese (zh)
Inventor
郭艳伟
莫勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of WO2023126007A1
Anticipated expiration
Current legal status: Ceased


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B17/00 Monitoring; Testing
    • H04B17/30 Monitoring; Testing of propagation channels
    • H04B17/391 Modelling the propagation channel
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B17/00 Monitoring; Testing
    • H04B17/30 Monitoring; Testing of propagation channels
    • H04B17/309 Measuring or estimating channel quality parameters
    • H04B17/345 Interference values
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B7/00 Radio transmission systems, i.e. using radiation field
    • H04B7/02 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/0413 MIMO systems
    • H04B7/0456 Selection of precoding matrices or codebooks, e.g. using matrices antenna weighting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B7/00 Radio transmission systems, i.e. using radiation field
    • H04B7/02 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/06 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B7/00 Radio transmission systems, i.e. using radiation field
    • H04B7/02 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/06 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
    • H04B7/0613 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission
    • H04B7/0615 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal
    • H04B7/0619 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal using feedback from receiving side
    • H04B7/0621 Feedback content
    • H04B7/0626 Channel coefficients, e.g. channel state information [CSI]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B7/00 Radio transmission systems, i.e. using radiation field
    • H04B7/02 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
    • H04B7/04 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
    • H04B7/06 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station
    • H04B7/0613 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission
    • H04B7/0615 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal
    • H04B7/0619 Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the transmitting station using simultaneous transmission of weighted versions of same signal using feedback from receiving side
    • H04B7/0621 Feedback content
    • H04B7/0632 Channel quality parameters, e.g. channel quality indicator [CQI]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00 Baseband systems
    • H04L25/02 Details; arrangements for supplying electrical power along data transmission lines
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00 Baseband systems
    • H04L25/02 Details; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202 Channel estimation
    • H04L25/0224 Channel estimation using sounding signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00 Baseband systems
    • H04L25/02 Details; arrangements for supplying electrical power along data transmission lines
    • H04L25/0202 Channel estimation
    • H04L25/024 Channel estimation channel estimation algorithms
    • H04L25/0242 Channel estimation channel estimation algorithms using matrix methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L5/00 Arrangements affording multiple use of the transmission path
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L5/00 Arrangements affording multiple use of the transmission path
    • H04L5/003 Arrangements for allocating sub-channels of the transmission path
    • H04L5/0048 Allocation of pilot signals, i.e. of signals known to the receiver

Definitions

  • the present application relates to the technical field of communications, and in particular to a channel information transmission method and device.
  • in a communication system, such as a fifth generation (5th generation, 5G) mobile communication system, massive multiple input multiple output (massive MIMO) technology can be used, so that access network equipment can provide high-quality services for more terminal equipment at the same time.
  • an important step is that the sending end precodes the data to be sent and sends the precoded data to the receiving end.
  • precoding enables spatial multiplexing of multiple data streams and reduces interference between different data streams, so it can improve the signal-to-interference-plus-noise ratio (SINR) at the receiving end.
  • precoding relies on the sending end obtaining channel information, for example, channel state information (CSI).
  • the present application provides a channel information transmission method and device, aiming at saving communication resources.
  • a channel information transmission method is provided.
  • the method can be implemented on the side of the access network device, or on the side of other devices for recovering channel information, without limitation.
  • the method includes: receiving channel feedback information from a terminal device, where the channel feedback information is used to indicate sparse representation information of first channel information, wherein the sparse representation information includes M elements, and the M elements include K non-zero elements and M-K zero elements, wherein M and K are positive integers; and determining the first channel information according to a channel reconstruction model, wherein the input of the channel reconstruction model is determined according to the sparse representation information.
  • based on this method, communication resources can be saved, and channel information can be transmitted more accurately in various communication scenarios through the channel reconstruction model.
  • the number and/or positions of the K non-zero elements can be changed to adapt to various communication scenarios.
  • the channel feedback information is used to indicate values of the K non-zero elements and positions of the K non-zero elements.
  • signaling overhead can be saved, for example, there is no need to feed back the positions and values of M-K zero elements.
  • feeding back the positions of the K non-zero elements may be replaced by feeding back the positions of the M-K zero elements; the two are equivalent.
  • the channel feedback information is used to indicate a first pattern, and the first pattern indicates the positions of the K non-zero elements, where the first pattern is one of multiple candidate patterns.
  • signaling overhead can be further saved; in particular, the overhead of feeding back the positions of the K non-zero elements can be reduced, as the sketch below illustrates.
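To make the overhead argument concrete, the following is a minimal sketch (the array sizes, the top-K selection, and the helper names are illustrative assumptions, not the feedback format defined by the application) of reporting only the positions and values of the K non-zero elements of an M-element sparse representation, and rebuilding the full vector on the receiving side.

```python
import numpy as np

def encode_feedback(x, k):
    """Report only the positions and values of the k non-zero elements
    (illustrative feedback format, not the one defined by the application)."""
    idx = np.argsort(np.abs(x))[-k:]   # positions of the K strongest elements
    return idx, x[idx]

def decode_feedback(m, idx, values):
    """Rebuild the length-M sparse representation from the reported feedback."""
    x = np.zeros(m)
    x[idx] = values
    return x

# Example: M = 64 elements, but only K = 8 positions and values are fed back.
m, k = 64, 8
x = np.zeros(m)
x[np.random.choice(m, k, replace=False)] = np.random.randn(k)
idx, vals = encode_feedback(x, k)
assert np.allclose(decode_feedback(m, idx, vals), x)
```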
  • the method further includes: determining at least one of the following according to the first channel information: a precoding matrix indicator PMI, a rank indicator RI, or a channel quality indicator CQI.
  • the channel feedback information is also used to indicate a scaling factor of the first channel information relative to second channel information, where the first channel information is normalized channel information.
  • the method further includes: determining at least one of the following according to the second channel information: PMI, RI, or CQI.
  • in this way, the transmission parameters used when the access network device and the terminal device perform MIMO communication can be obtained, so that MIMO transmission can be performed to improve the system throughput.
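As an illustration of the scaling-factor feedback, the sketch below normalizes a measured channel matrix (the "second channel information"), treats the scaling factor as the quantity fed back, and recovers the original from the normalized version. The Frobenius-norm normalization and the matrix dimensions are assumptions made only for this example; the application does not specify the normalization.

```python
import numpy as np

rng = np.random.default_rng(0)
# Measured channel (second channel information), assumed 4 x 32 complex matrix.
H_second = rng.standard_normal((4, 32)) + 1j * rng.standard_normal((4, 32))
scale = np.linalg.norm(H_second)   # scaling factor indicated in the channel feedback information
H_first = H_second / scale         # normalized channel (first channel information)

# The access network device recovers the second channel information from the
# reconstructed first channel information and the fed-back scaling factor.
assert np.allclose(H_first * scale, H_second)
```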
  • the ratio of K to N is a first compression ratio, where N is a positive integer and N represents a dimension of the first channel information.
  • the method further includes: sending, to the terminal device, information indicating that the first compression ratio is one of multiple candidate compression ratios.
  • the multiple candidate compression ratios are stipulated in a protocol, or indicated by signaling sent to the terminal device.
  • the method further includes: sending, to the terminal device, information indicating that K is one of multiple candidate values.
  • the multiple candidate values are stipulated in the protocol, or indicated by signaling sent to the terminal device.
  • the compression ratio of the channel information can thus be flexibly configured according to the requirements of the actual communication scenario, as in the small example below.
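A small worked example of the compression-ratio configuration, with hypothetical numbers (N, the candidate ratios, and the selected index are not taken from the application):

```python
candidate_ratios = [1/4, 1/8, 1/16]   # assumed candidate first compression ratios
N = 64                                # dimension of the first channel information
ratio = candidate_ratios[1]           # the ratio indicated to the terminal device
K = int(N * ratio)                    # K = 8 non-zero elements in the sparse representation
print(K)
```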
  • a channel information transmission method is provided.
  • the method may be implemented on the side of the terminal device, or on the side of other devices for feeding back channel information, without limitation.
  • the method includes: determining sparse representation information of the first channel information according to the first channel information and the channel reconstruction model, wherein the sparse representation information includes M elements, and the M elements include K non-zero elements and M-K zero elements, where M and K are positive integers; sending channel feedback information to the access network device, where the channel feedback information is used to indicate the sparse representation information.
  • determining the sparse representation information of the first channel information according to the first channel information and the channel reconstruction model includes:
  • the sparse representation information of the first channel information is determined according to the following objective function: min_x ||H_w − f_de(x)||_2, subject to ||x||_0 ≤ K, where x represents the sparse representation information of the first channel information, H_w represents the first channel information, f_de(·) represents the channel reconstruction model, ||·||_2 represents the L2 norm, and ||·||_0 represents the L0 norm.
  • in another design, the sparse representation information of the first channel information is determined according to the following objective function: max_x f_C(H_w, f_W(f_de(x))), subject to ||x||_0 ≤ K, where x represents the sparse representation information of the first channel information, H_w represents the first channel information, f_de(·) represents the channel reconstruction model, f_W(·) represents the precoding generation model, and f_C(·,·) represents the channel capacity calculation model.
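The constrained objective above can be approached with standard sparse-recovery tools. The sketch below is a rough illustration only: it replaces the channel reconstruction model f_de(·) with a fixed linear decoder and uses iterative hard thresholding to enforce ||x||_0 ≤ K; none of these choices (the linear decoder, the step size, the sizes) are prescribed by the application.

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x and zero the rest (enforces ||x||_0 <= k)."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

def sparse_representation(H_w, D, k, n_iter=300, lr=0.1):
    """Approximately solve  min_x ||H_w - f_de(x)||_2  s.t. ||x||_0 <= k,
    with a linear stand-in f_de(x) = D @ x, via iterative hard thresholding."""
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - H_w)             # gradient of 0.5 * ||H_w - D x||_2^2
        x = hard_threshold(x - lr * grad, k)   # gradient step, then project onto the sparsity set
    return x

# Example with assumed sizes: N = 32, M = 64, K = 6.
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64)) / np.sqrt(32)   # stand-in for the channel reconstruction model
x_true = hard_threshold(rng.standard_normal(64), 6)
H_w = D @ x_true
x_hat = sparse_representation(H_w, D, k=6)
print(np.linalg.norm(H_w - D @ x_hat))            # reconstruction error of the recovered sparse vector
```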
  • the ratio of K to N is a first compression ratio, and N represents a dimension of the first channel information.
  • the method further includes: receiving information indicating that the first compression ratio is one of multiple candidate compression ratios from the access network device.
  • the multiple candidate compression ratios are stipulated by a protocol, or indicated by signaling from the access network device.
  • the method further includes: receiving information indicating that K is one of multiple candidate values from an access network device.
  • the multiple candidate values are stipulated in the protocol, or indicated by signaling from the access network device.
  • in a third aspect, an apparatus for implementing the method in the first aspect is provided.
  • the device may be an access network device, or a device configured in the access network device, or a device that can be matched and used with the access network device.
  • the apparatus includes units corresponding one-to-one to the methods/operations/steps/actions described in the first aspect; a unit may be a hardware circuit, software, or a combination of a hardware circuit and software.
  • the apparatus may include a processing unit and a communication unit, and the processing unit and the communication unit may perform corresponding functions in the first aspect above.
  • the communication unit is configured to receive channel feedback information from the terminal device, where the channel feedback information is used to indicate sparse representation information of the first channel information, wherein the sparse representation information includes M elements, and the M elements include K non-zero elements and M-K zero elements, wherein M and K are positive integers; the processing unit is configured to determine the first channel information according to the channel reconstruction model, wherein the input of the channel reconstruction model is determined according to the sparse representation information.
  • the processing unit is configured to determine at least one of the following according to the first channel information: a precoding matrix indicator PMI, a rank indicator RI, or a channel quality indicator CQI.
  • the channel feedback information is also used to indicate a scaling factor of the first channel information relative to second channel information, where the first channel information is normalized channel information.
  • the processing unit is configured to determine at least one of the following according to the second channel information: PMI, RI, or CQI.
  • the ratio of K to N is a first compression ratio, where N is a positive integer and N represents a dimension of the first channel information.
  • the communication unit is configured to send information indicating that the first compression ratio is one of multiple candidate compression ratios to the terminal device.
  • the multiple candidate compression ratios are stipulated in a protocol, or indicated by signaling sent to the terminal device.
  • the communication unit is configured to send information indicating that K is one of multiple candidate values to the terminal device.
  • the multiple candidate values are stipulated in the protocol, or indicated by signaling sent to the terminal device.
  • the above apparatus includes a processor, configured to implement the method described in the first aspect above.
  • the apparatus may also include memory for storing instructions and/or data.
  • the memory is coupled to the processor, and when the processor executes the program instructions stored in the memory, the method described in the first aspect above can be implemented.
  • the apparatus may also include a communication interface for the apparatus to communicate with other devices.
  • the communication interface may be a transceiver, circuit, bus, module, pin or other types of communication interface.
  • the device includes:
  • a processor, configured to use a communication interface to receive channel feedback information from a terminal device, where the channel feedback information is used to indicate sparse representation information of the first channel information, wherein the sparse representation information includes M elements, and the M elements include K non-zero elements and M-K zero elements, wherein M and K are positive integers;
  • the processor is configured to determine first channel information according to a channel reconstruction model, wherein an input of the channel reconstruction model is determined according to the sparse representation information.
  • in a fourth aspect, an apparatus for implementing the method in the second aspect is provided.
  • the device may be a terminal device, or a device configured in the terminal device, or a device that can be matched with the terminal device.
  • the apparatus includes units corresponding one-to-one to the methods/operations/steps/actions described in the second aspect; a unit may be a hardware circuit, software, or a combination of a hardware circuit and software.
  • the apparatus may include a processing unit and a communication unit, and the processing unit and the communication unit may perform corresponding functions in the second aspect above.
  • the processing unit is configured to determine the sparse representation information of the first channel information according to the first channel information and the channel reconstruction model, wherein the sparse representation information includes M elements, and the M elements include K non-zero elements and M-K zero elements, wherein M and K are positive integers;
  • the communication unit is configured to send channel feedback information to the access network device, where the channel feedback information is used to indicate the sparse representation information.
  • the processing unit is used for:
  • the sparse representation information of the first channel information is determined according to the following objective function: min_x ||H_w − f_de(x)||_2, subject to ||x||_0 ≤ K, where x represents the sparse representation information of the first channel information, H_w represents the first channel information, f_de(·) represents the channel reconstruction model, ||·||_2 represents the L2 norm, and ||·||_0 represents the L0 norm.
  • in another design, the sparse representation information of the first channel information is determined according to the following objective function: max_x f_C(H_w, f_W(f_de(x))), subject to ||x||_0 ≤ K, where x represents the sparse representation information of the first channel information, H_w represents the first channel information, f_de(·) represents the channel reconstruction model, f_W(·) represents the precoding generation model, and f_C(·,·) represents the channel capacity calculation model.
  • the ratio of K to N is a first compression ratio, and N represents a dimension of the first channel information.
  • the communication unit is configured to receive information indicating that the first compression ratio is one of multiple candidate compression ratios from the access network device.
  • the multiple candidate compression ratios are stipulated by a protocol, or indicated by signaling from the access network device.
  • the communication unit is configured to receive information indicating that K is one of multiple candidate values from the access network device.
  • the multiple candidate values are stipulated in the protocol, or indicated by signaling from the access network device.
  • the above apparatus includes a processor, configured to implement the method described in the second aspect above.
  • the apparatus may also include memory for storing instructions and/or data.
  • the memory is coupled to the processor, and when the processor executes the program instructions stored in the memory, the method described in the second aspect above can be implemented.
  • the device may also include a communication interface for the device to communicate with other devices.
  • the communication interface may be a transceiver, circuit, bus, module, pin or other types of communication interface.
  • the device includes:
  • a processor, configured to determine sparse representation information of the first channel information according to the first channel information and the channel reconstruction model, wherein the sparse representation information includes M elements, and the M elements include K non-zero elements and M-K zero elements, wherein M and K are positive integers;
  • the processor uses the communication interface to: send channel feedback information to the access network device, where the channel feedback information is used to indicate the sparse representation information.
  • in a fifth aspect, a model training method is provided, comprising: operation 1, determining a group of training data from a training data set; operation 2, for each training data in the group, determining sparse representation information of the training data, and determining a model output corresponding to the training data according to the sparse representation information and the current channel reconstruction model; operation 3, for the group of training data, if the loss function meets the performance requirement, ending the training; otherwise, updating the channel reconstruction model and performing operation 1 again.
  • determining the sparse representation information of the training data includes: determining the sparse representation information of the training data according to a sparse representation algorithm and a current channel reconstruction model.
  • determining the sparse representation information of the training data includes: determining the sparse representation information of the training data according to the current sparse representation model; and if the loss function does not meet the performance requirements, the method further includes: updating the sparse representation model.
  • the loss function meeting the performance requirements includes: the average value of the loss functions of all the training data in the group (or a value calculated from the individual loss functions of all the training data in another way) meets a threshold requirement; or, the loss function of every training data in the group meets the threshold requirement. A minimal sketch of operations 1 to 3 follows below.
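The sketch below walks through operations 1 to 3 with many simplifying assumptions: the channel reconstruction model is a plain linear decoder D, the sparse representation step simply keeps the K strongest decoder correlations, and gradient descent on a squared reconstruction error stands in for the unspecified update rule. None of these specifics come from the application.

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x, zero the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

def train_channel_reconstruction(training_set, m, k, epochs=100, batch=8, lr=0.01, threshold=1e-2):
    n = training_set.shape[1]
    D = np.random.randn(n, m) / np.sqrt(n)                 # current channel reconstruction model
    for _ in range(epochs):
        # Operation 1: determine a group of training data from the training data set.
        group = training_set[np.random.choice(len(training_set), batch, replace=False)]
        losses = []
        for H_w in group:
            # Operation 2: sparse representation of the training data, then the model output.
            x = hard_threshold(D.T @ H_w, k)               # crude sparse representation (assumed)
            H_hat = D @ x                                  # model output for this training data
            losses.append(np.sum((H_w - H_hat) ** 2))      # loss for this training data
            D -= lr * np.outer(H_hat - H_w, x)             # update the channel reconstruction model
        # Operation 3: stop if the average loss of the group meets the performance requirement.
        if np.mean(losses) < threshold:
            break
    return D

training_set = np.random.randn(200, 32)                    # assumed training data set
D = train_channel_reconstruction(training_set, m=64, k=6)
```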
  • a sixth aspect provides an apparatus for realizing the method of the fifth aspect.
  • the apparatus includes units corresponding one-to-one to the methods/operations/steps/actions described in the fifth aspect.
  • the unit may be a hardware circuit, or software, or a combination of hardware circuit and software.
  • the apparatus may include a processing unit, and the processing unit may perform the corresponding functions in the fifth aspect above. For example:
  • the processing unit is used for: operation 1, determining a group of training data from the training data set; operation 2, for each training data in the group, determining the sparse representation information of the training data, and determining the model output corresponding to the training data according to the sparse representation information and the channel reconstruction model; operation 3, for the group of training data, if the loss function meets the performance requirement, ending the training; otherwise, updating the channel reconstruction model and performing operation 1 again.
  • the device may also include a communication unit, configured to acquire the training data set.
  • a seventh aspect provides a communication system, including the device of the third aspect and the device of the fourth aspect; or, including the device of the third aspect, the device of the fourth aspect, and the device of the sixth aspect.
  • in an eighth aspect, a computer-readable storage medium is provided, including instructions which, when run on a computer, cause the computer to execute the method of the first aspect, the second aspect, or the fifth aspect.
  • in a ninth aspect, a computer program product is provided, including instructions which, when run on a computer, cause the computer to execute the method of the first aspect, the second aspect, or the fifth aspect.
  • in a tenth aspect, a chip system is provided, which includes a processor and may further include a memory, for implementing the method of the first aspect, the second aspect, or the fifth aspect.
  • the system-on-a-chip may consist of chips, or may include chips and other discrete devices.
  • Figure 1 shows an example diagram of the architecture of the communication system
  • Figure 2A shows an example diagram of the structure of a neuron
  • Figure 2B shows a structural example diagram of a neural network
  • FIG. 3A to FIG. 3E are schematic diagrams of network architectures
  • FIG. 4 and FIG. 5 are schematic flowcharts of channel information transmission methods
  • FIG. 6A, FIG. 6B and FIG. 7 are schematic flow charts of model training
  • Figure 8 and Figure 9 are diagrams showing an example of the structure of the device.
  • FIG. 1 is an example diagram of the architecture of a communication system 1000 to which the present disclosure can be applied.
  • the communication system includes a radio access network (radio access network, RAN) 100 and a core network (core network, CN) 200.
  • the communication system 1000 may also include the Internet 300 .
  • the radio access network 100 may include at least one access network device (or may be called RAN device, such as 110a and 110b in FIG. 1 ), and may also include at least one terminal device (such as 120a-120j in FIG. 1).
  • the terminal device is connected to the access network device in a wireless manner.
  • Access network devices are connected to core network devices in a wireless or wired manner.
  • the core network device and the access network device can be independent and different physical devices; or the same physical device can integrate the functions of the core network device and the access network device; or other situations are possible, for example, the functions of the access network device and some functions of the core network device are integrated on one physical device, and another physical device implements the remaining functions of the core network device.
  • the present disclosure does not limit the physical existence form of the core network device and the access network device.
  • Terminal devices may be connected to each other in a wired or wireless manner.
  • the access network device and the access network device may be connected to each other in a wired or wireless manner.
  • FIG. 1 is only a schematic diagram, and is not intended to limit the present disclosure.
  • the communication system may also include other network devices, such as wireless relay devices and wireless backhaul devices.
  • the core network 200 may include one or more core network elements.
  • the core network may include at least one of the following network elements: access and mobility management function (access and mobility management function, AMF) network element, session management function (session management function, SMF) network element, user plane function (user plane function, UPF) network element, policy control function (policy control function, PCF) network element, unified data management (unified data management, UDM) network element, application function (application function, AF) network element, or location management function (location management function, LMF) network element, etc.
  • These core network elements may be a hardware structure, a software module, or a hardware structure plus a software module.
  • the implementation forms of different network elements may be the same or different, and are not limited. Different core network elements may be different physical devices (which may be called core network devices), or multiple different core network elements may be integrated on one physical device, that is, the physical device has the functions of the multiple core network elements.
  • the device used to realize the function of the core network device may be a core network device; it may also be a device capable of supporting the core network device in realizing the function, such as a chip system, a hardware circuit, a software module, or a hardware circuit plus a software module; the device can be installed in the core network equipment or can be matched and used with the core network equipment.
  • the technical solution provided by the present disclosure is described below by taking, as an example, the case where the device for realizing the functions of the core network device is the core network device.
  • a system-on-a-chip may be composed of chips, and may also include chips and other discrete devices.
  • a terminal device may also be called a terminal, a user equipment (user equipment, UE), a mobile station, or a mobile terminal, etc.
  • Terminal devices can be widely used in various scenarios for communication.
  • the scenarios include but are not limited to at least one of the following: enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (URLLC), massive machine-type communication (mMTC), device-to-device (D2D), vehicle-to-everything (V2X), machine-type communication (MTC), Internet of things (IoT), virtual reality, augmented reality, industrial control, automatic driving, telemedicine, smart grid, smart home, smart office, smart wearables, smart transportation, or smart city, etc.
  • the terminal device can be a mobile phone, a tablet computer, a computer with wireless transceiver function, a wearable device, a vehicle, a drone, a helicopter, an airplane, a ship, a robot, a mechanical arm, or a smart home device, etc.
  • the present disclosure does not limit the specific technology and specific device form adopted by the terminal device.
  • the device for realizing the function of the terminal device may be a terminal device; it may also be a device capable of supporting the terminal device to realize the function, such as a chip system, a hardware circuit, a software module, or a hardware circuit plus a software module. It can be installed in the terminal equipment or can be matched with the terminal equipment.
  • the technical solution provided is described below by taking, as an example, the case where the device for realizing the functions of the terminal device is the terminal device, and optionally taking a UE as an example of the terminal device.
  • the access network device can be a base station, a Node B (NB), an evolved NodeB (eNodeB or eNB), a transmission reception point (TRP), a next generation NodeB (gNB) in a fifth generation (5G) mobile communication system, an access network device in an open radio access network (O-RAN or open RAN), a next-generation base station in a sixth generation (6G) mobile communication system, an access node in a wireless fidelity (WiFi) system, or a base station in a future mobile communication system, etc.
  • the access network device may be a module or unit that completes some functions of the access network device, for example, it may be a centralized unit (central unit, CU), a distributed unit (distributed unit, DU), a centralized unit control plane (CU control plane, CU-CP) module, centralized unit user plane (CU user plane, CU-UP) module, or radio unit (radio unit, RU).
  • the access network device may be a macro base station (such as 110a in Figure 1), a micro base station or an indoor station (such as 110b in Figure 1), or a relay node or a donor node.
  • 5G can also be called new radio (new radio, NR).
  • the device for implementing the function of the access network device may be the access network device; or it may be a device capable of supporting the access network device in realizing the function, such as a chip system, a hardware circuit, a software module, or a hardware circuit plus a software module; the device can be installed in the access network equipment or matched with the access network equipment.
  • the technical solution provided is described below by taking, as an example, the case where the device for realizing the function of the access network device is the access network device, and optionally taking a base station as an example of the access network device.
  • the protocol layer structure may include a control plane protocol layer structure and a user plane protocol layer structure.
  • the control plane protocol layer structure may include at least one of the following: a radio resource control (radio resource control, RRC) layer, a packet data convergence protocol (packet data convergence protocol, PDCP) layer, a radio link control (radio link control, RLC) layer, media access control (media access control, MAC) layer, or physical (physical, PHY) layer, etc.
  • the user plane protocol layer structure may include at least one of the following: a service data adaptation protocol (service data adaptation protocol, SDAP) layer, a PDCP layer, an RLC layer, a MAC layer, or a physical layer.
  • the above protocol layer structure between the access network device and the terminal device can be regarded as an access stratum (access stratum, AS) structure.
  • optionally, there is also a non-access stratum (NAS) between the terminal device and the core network device, for transmitting non-access stratum messages.
  • the access network device may forward information between the terminal device and the core network device through transparent transmission.
  • the NAS message may be mapped to or included in RRC signaling as an element of RRC signaling.
  • the protocol layer structure between the access network device and the terminal device may further include an artificial intelligence (AI) layer, which is used to transmit data related to the AI function.
  • Access network devices may include CUs and DUs. This design can be called CU and DU separation. Multiple DUs can be centrally controlled by one CU.
  • the interface between CU and DU is called F1 interface.
  • the control plane (CP) interface may be F1-C
  • the user plane (UP) interface may be F1-U.
  • the present disclosure does not limit the specific names of the interfaces.
  • CU and DU can be divided according to the protocol layers of the wireless network: for example, the functions of the PDCP layer and the protocol layers above it (such as the RRC layer and the SDAP layer) are set in the CU, and the functions of the protocol layers below the PDCP layer (such as the RLC layer, the MAC layer, and the physical layer) are set in the DU; for another example, the functions of the protocol layers above the PDCP layer are set in the CU, and the functions of the PDCP layer and the protocol layers below it are set in the DU, without restriction.
  • the functions of the CU or the DU may also be divided in other ways, for example, the CU or the DU may have part of the processing functions of a protocol layer. For example, some functions of the RLC layer and the functions of the protocol layers above the RLC layer are set in the CU, and the remaining functions of the RLC layer and the functions of the protocol layers below the RLC layer are set in the DU.
  • the functions of the CU or DU can be divided according to the service type or other system requirements, for example, according to the delay, the functions that need to meet the delay requirement are set in the DU, and the functions that do not need to meet the delay requirement are set in the CU.
  • the CU may have one or more functions of the core network.
  • the CU can be set on the network side to facilitate centralized management.
  • the radio unit (RU) of the DU may be set remotely.
  • the RU has a radio frequency function.
  • DUs and RUs can be divided at the PHY layer.
  • the DU can implement high-level functions in the PHY layer
  • the RU can implement low-level functions in the PHY layer.
  • the functions of the PHY layer may include at least one of the following: adding cyclic redundancy check (CRC) bits, channel coding, rate matching, scrambling, modulation, layer mapping, precoding, resource mapping, physical antenna mapping, or radio frequency transmission functions.
  • the functions of the PHY layer may include at least one of the following: CRC check, channel decoding, de-rate matching, descrambling, demodulation, de-layer mapping, channel detection, resource de-mapping, physical antenna de-mapping, or RF receiving function.
  • the high-level functions in the PHY layer may include part of the functions of the PHY layer, which are closer to the MAC layer; the lower-level functions in the PHY layer may include another part of the functions of the PHY layer, for example, this part of functions is closer to the radio frequency function.
  • high-level functions in the PHY layer may include adding CRC bits, channel coding, rate matching, scrambling, modulation, and layer mapping
  • low-level functions in the PHY layer may include precoding, resource mapping, physical antenna mapping, and radio transmission functions
  • high-level functions in the PHY layer can include adding CRC bits, channel coding, rate matching, scrambling, modulation, layer mapping, and precoding
  • low-level functions in the PHY layer can include resource mapping, physical antenna mapping, and radio frequency send function.
  • the high-level functions in the PHY layer may include CRC check, channel decoding, de-rate matching, descrambling, demodulation, and de-layer mapping
  • the low-level functions in the PHY layer may include channel detection, resource de-mapping, physical antenna de-mapping, and RF receiving functions
  • the high-level functions in the PHY layer may include CRC check, channel decoding, de-rate matching, descrambling, demodulation, de-layer mapping, and channel detection
  • the low-level functions in the PHY layer may include resource de-mapping, physical antenna de-mapping, and RF receiving functions.
  • the functions of the CU may be further divided, and the control plane and the user plane may be separated and implemented by different entities.
  • the separated entities are the control plane CU entity (ie, CU-CP entity) and the user plane CU entity (ie, CU-UP entity).
  • the CU-CP entity and the CU-UP entity can be connected to the DU respectively.
  • an entity may be understood as a module or unit, and its existence form may be a hardware structure, a software module, or a hardware structure plus a software module, without limitation.
  • any one of the foregoing CU, CU-CP, CU-UP, DU, and RU may be a software module, a hardware structure, or a software module plus a hardware structure, without limitation.
  • the existence forms of different entities may be the same or different.
  • CU, CU-CP, CU-UP and DU are software modules
  • RU is a hardware structure.
  • all possible combinations are not listed here.
  • These modules and the methods performed by them are also within the protection scope of the present disclosure.
  • when the method of the present disclosure is executed by an access network device, it may specifically be executed by at least one of the CU, the CU-CP, the CU-UP, the DU, the RU, or the near real-time RIC described below.
  • the access network device and/or the terminal device may be fixed or mobile.
  • Access network equipment and/or terminal equipment can be deployed on land, including indoors or outdoors, hand-held or vehicle-mounted; or can be deployed on water; or can be deployed on aircraft, balloons and artificial satellites in the air.
  • the present disclosure does not limit the environment/scene where the access network device and the terminal device are located.
  • Access network devices and terminal devices can be deployed in the same or different environments/scenarios; for example, the access network device and the terminal device are both deployed on land, or the access network device is deployed on land and the terminal device is deployed on water, and so on; the examples are not listed one by one.
  • the helicopter or drone 120i in FIG. 1 can be configured as a mobile access network device.
  • for a terminal device that accesses the radio access network through 120i, the terminal device 120i is an access network device.
  • for the access network device 110a, 120i is a terminal device, that is, 110a and 120i communicate through a radio access network air interface protocol.
  • alternatively, 110a and 120i may communicate through an interface protocol between access network devices.
  • in that case, relative to 110a, 120i is also an access network device. Therefore, access network devices and terminal devices may be collectively referred to as communication devices; 110a and 110b in FIG. 1 may be referred to as communication devices having access network device functions, and 120a-120j in FIG. 1 may be referred to as communication devices having terminal device functions.
  • communication between the access network device and the terminal device may be performed over licensed spectrum, over unlicensed spectrum, or over both licensed and unlicensed spectrum; and/or may be performed over spectrum below 6 gigahertz (GHz), over spectrum above 6 GHz, or over both spectrum below 6 GHz and spectrum above 6 GHz.
  • the present disclosure does not limit spectrum resources used by wireless communications.
  • if the data sending end can know the channel information of the channel between the data sending end and the data receiving end, the data transmission efficiency can be improved.
  • for example, if the data sending end can obtain the channel information, it can obtain transmission parameters such as a precoding matrix, and can use the precoding matrix to precode the data to be sent, so that the data sending end can transmit multiple data streams over the same resource (for example, the same time-frequency resource).
  • the channel information can be estimated by the data receiving end and sent to the data sending end; the data sending end determines the precoding matrix based on the channel information, uses the precoding matrix to precode the data to be sent, and sends the precoded data to the data receiving end, as in the sketch below.
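To illustrate the flow just described, the sketch below derives a precoding matrix from an estimated channel via an SVD and applies it to the data to be sent. The SVD-based choice, the antenna counts, and the number of streams are assumptions made only for illustration; they are not the method defined by the application.

```python
import numpy as np

rng = np.random.default_rng(1)
# Assumed estimated channel: 2 receive antennas x 4 transmit antennas.
H = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
num_streams = 2

_, _, Vh = np.linalg.svd(H)
W = Vh.conj().T[:, :num_streams]            # precoding matrix: dominant right singular vectors of H
s = rng.standard_normal((num_streams, 10))  # data streams to be sent
x = W @ s                                   # precoded signal transmitted from the 4 antennas
```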
  • estimated channel information may also be described as measured channel information or other names, without limitation.
  • the data receiving end is a terminal device, and the data sending end is an access network device; or, the data receiving end is an access network device, and the data sending end is a terminal device; or, the data receiving end is a first access network device , the data sending end is the second access network device, which is not limited.
  • the following descriptions will be made by taking the data receiving end as a terminal device and the data sending end as an access network device as an example.
  • the present disclosure introduces artificial intelligence (AI) technology into the communication system.
  • the feedback of channel information is realized or assisted by AI technology, and the fed back channel information can better match the actual channel environment, so the fed back channel information is more accurate.
  • Machine learning methods can be employed.
  • the machine uses the training data to learn (or train) to obtain a model, and applies the model to reason (or predict). Inference results can be used to solve practical problems.
  • Machine learning methods include but are not limited to at least one of the following: neural networks (NN), probabilistic graphical models, sparse coding/dictionary learning methods, variational auto-encoders (VAE), or generative adversarial networks (GAN), etc.
  • a neural network is a concrete implementation of machine learning techniques and AI models. According to the universal approximation theorem, a neural network can in theory approximate any continuous function, so that the neural network has the ability to learn any mapping.
  • Traditional communication systems rely on rich expert knowledge to design communication modules, while deep-learning-based communication systems built on neural networks can automatically discover hidden pattern structures from large data sets, establish the mapping relationships between data, and achieve performance better than that of traditional modeling methods.
  • each neuron performs a weighted sum operation on its input values, and outputs the operation result through an activation function.
  • FIG. 2A is a schematic diagram of a neuron structure. Suppose the inputs of the neuron are x_1, x_2, ..., x_n, w_i is the weight used to weight x_i, and b is the bias used when performing the weighted summation of the input values according to the weights.
  • the output of the neuron is: y = f(Σ_i w_i·x_i + b), where f(·) is the activation function.
  • b may be of various possible types such as decimals, integers (such as 0, positive integers or negative integers), or complex numbers.
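A one-line version of the neuron just described; the tanh activation and the example numbers are only assumptions, since activation functions may differ between neurons.

```python
import numpy as np

def neuron_output(x, w, b, activation=np.tanh):
    """Weighted sum of the inputs plus the bias, passed through the activation function."""
    return activation(np.dot(w, x) + b)

print(neuron_output(x=np.array([0.5, -1.0, 2.0]), w=np.array([0.1, 0.4, -0.3]), b=0.2))
```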
  • the activation functions of different neurons in a neural network can be the same or different.
  • a neural network generally includes multiple layers, each layer may include one or more neurons. By increasing the depth and/or width of the neural network, the expressive ability of the neural network can be improved, providing more powerful information extraction and abstract modeling capabilities for complex systems.
  • the depth of the neural network may refer to the number of layers included in the neural network, and the number of neurons included in each layer may be referred to as the width of the layer.
  • a neural network includes an input layer and an output layer. The input layer of the neural network processes the received input information through neurons, and passes the processing result to the output layer, and the output layer obtains the output result of the neural network.
  • the neural network includes an input layer, a hidden layer and an output layer, refer to FIG. 2B .
  • such a neural network processes the received input information through the neurons of the input layer and passes the processing result to the intermediate hidden layer; the hidden layer computes on the received result and transmits its computation result to the output layer or to the next adjacent hidden layer, and the output of the neural network is finally obtained from the output layer.
  • a neural network may include one hidden layer, or include multiple hidden layers connected in sequence, without limitation.
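A compact sketch of the layered structure described above: an input layer, hidden layers connected in sequence, and an output layer, each layer applying the neuron operation to the previous layer's result. The layer widths and the activation are assumed purely for illustration.

```python
import numpy as np

def mlp_forward(x, weights, biases, activation=np.tanh):
    """Pass the input through each layer: weighted sum, bias, activation."""
    for W, b in zip(weights, biases):
        x = activation(W @ x + b)
    return x

rng = np.random.default_rng(0)
sizes = [8, 16, 16, 4]   # input width 8, two hidden layers of width 16, output width 4
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(mlp_forward(rng.standard_normal(8), weights, biases))
```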
  • the neural network involved in the present disclosure is, for example, a deep neural network (DNN).
  • DNNs can include feedforward neural networks (FNN), convolutional neural networks (CNN) and recurrent neural networks (RNN).
  • the type of the model involved in the present disclosure may be DNN, for example, FNN, CNN or RNN, without limitation.
  • a loss function can be defined.
  • the loss function describes the gap or difference between the output value of the model and the ideal target value.
  • the present disclosure does not limit the specific form of the loss function.
  • the model training process can be regarded as the following process: adjusting some or all of the parameters of the model so that the value of the loss function is less than a threshold or meets a target requirement.
  • a model may also be called an AI model, a rule, or other names without limitation.
  • the AI model can be considered as a specific method to realize the AI function.
  • the AI model represents the mapping relationship or function between the input and output of the model.
  • AI functions may include at least one of the following: data collection, model training (or model learning), model information release, model inference (or called model inference, inference, or prediction, etc.), model monitoring or model verification, or inference results release etc.
  • AI functions may also be referred to as AI (related) operations, or AI-related functions.
  • an independent network element (such as an AI network element, an AI node, or an AI device) may be introduced into the communication system to perform AI-related operations.
  • the AI network element can be directly connected to the access network device, or can be indirectly connected through a third-party network element and the access network device.
  • the third-party network element may be a core network element.
  • AI entities may be configured or set in other network elements in the communication system to implement AI-related operations.
  • the AI entity may also be called an AI module, an AI unit or other names, and is mainly used to realize some or all AI functions, and the disclosure does not limit its specific name.
  • the other network element may be an access network device, a core network device, a cloud server, or a network management (operation, administration and maintenance, OAM), etc.
  • the network element performing AI-related operations is a network element with a built-in AI function. Since both AI network elements and AI entities implement AI-related functions, for the convenience of description, the AI network elements and network elements with built-in AI functions are collectively described as AI function network elements.
  • OAM is used to operate, manage and/or maintain core network equipment (network management of core network equipment), and/or is used to operate, manage and/or maintain access network equipment (network management of access network equipment) .
  • the present disclosure includes a first OAM and a second OAM, the first OAM is the network management of the core network equipment, and the second OAM is the network management of the access network equipment.
  • the first OAM and/or the second OAM includes an AI entity.
  • the present disclosure includes a third OAM, and the third OAM is the network manager of the core network device and the access network device at the same time.
  • the AI entity is included in the third OAM.
  • an AI entity may be integrated in a terminal or a terminal chip.
  • FIG. 3A is an example diagram of an application framework of AI in a communication system.
  • the data source is used to store training data and inference data.
  • the model training node obtains the AI model by training or updating based on the training data provided by the data source, and deploys the AI model in the model inference node.
  • the AI model represents the mapping relationship between the input and output of the model. Learning the AI model through the model training node is equivalent to using the training data to learn the mapping relationship between the input and output of the model.
  • the model inference node uses the AI model to perform inference based on the inference data provided by the data source, and obtains the inference result.
  • the model inference node inputs the inference data into the AI model, and obtains an output through the AI model, and the output is the inference result.
  • the inference result may indicate: configuration parameters used (executed) by the execution object, and/or operations performed by the execution object.
  • the reasoning result can be uniformly planned by the execution (actor) entity, and sent to one or more execution objects (for example, a network element of the core network, a base station, or a UE, etc.) for execution.
  • the model reasoning node can feed back its reasoning results to the model training node. This process can be called model feedback.
  • the fed-back reasoning results are used by the model training node to update the AI model, and the updated AI model is deployed on the model inference node.
  • the execution object can feed back the collected network parameters to the data source. This process can be called performance feedback, and the fed back network parameters can be used as training data or inference data.
  • the application framework shown in FIG. 3A can be deployed on the network element shown in FIG. 1 .
  • the application framework in FIG. 3A may be deployed on at least one of the terminal device, access network device, core network device, or independently deployed AI network element (not shown) in FIG. 1 .
  • the AI network element (which can be regarded as a model training node) can analyze or train the training data (training data) provided by the terminal device and/or the access network device to obtain a model.
  • At least one of the terminal device, the access network device, or the core network device (which can be regarded as a model reasoning node) can use the model and reasoning data to perform reasoning and obtain the output of the model.
  • the reasoning data may be provided by the terminal device and/or the access network device.
  • the input of the model includes inference data
  • the output of the model is the inference result corresponding to the model.
  • At least one of the terminal device, the access network device, or the core network device (which can be regarded as an execution object) can perform a corresponding operation according to the reasoning result.
  • the model inference node and the execution object may be the same or different, without limitation.
  • the network architecture to which the method provided in the present disclosure can be applied is introduced as an example below with reference to FIGS. 3B to 3E .
  • the access network device includes a near real-time access network intelligent controller (RAN intelligent controller, RIC) module for model training and reasoning.
  • near real-time RIC can be used to train an AI model and use that AI model for inference.
  • the near real-time RIC can obtain network-side and/or terminal-side information from at least one of CU, DU, RU or terminal equipment, and the information can be used as training data or inference data.
  • the near real-time RIC may submit the reasoning result to at least one of CU, DU, RU or terminal device.
  • the inference results can be exchanged between the CU and the DU.
  • the reasoning results can be exchanged between the DU and the RU, for example, the near real-time RIC submits the reasoning result to the DU, and the DU forwards it to the RU.
  • a non-real-time RIC is included outside the access network (optionally, the non-real-time RIC can be located in the OAM, in a cloud server, or in the core network device) for performing model training and inference.
  • non-real-time RIC is used to train an AI model and use that model for inference.
  • the non-real-time RIC can obtain network-side and/or terminal-side information from at least one of CU, DU, RU, or terminal equipment. This information can be used as training data or inference data, and the inference results can be submitted to at least one of CU, DU, RU, or terminal equipment.
  • the inference results can be exchanged between the CU and the DU.
  • the reasoning results can be exchanged between the DU and the RU, for example, the non-real-time RIC submits the reasoning result to the DU, and the DU forwards it to the RU.
  • the access network device includes a near real-time RIC, and a non-real-time RIC is additionally included (optionally, the non-real-time RIC can be located in the OAM, in a cloud server, or in core network equipment).
  • non-real-time RIC can be used for model training and inference.
  • the near real-time RIC can be used for model training and reasoning.
  • the non-real-time RIC performs model training, and the near-real-time RIC can obtain AI model information from the non-real-time RIC, and obtain network-side and/or terminal-side information from at least one of CU, DU, RU, or terminal equipment , using the information and the AI model information to obtain an inference result.
  • the near real-time RIC may submit the reasoning result to at least one of CU, DU, RU or terminal device.
  • the inference results can be exchanged between the CU and the DU.
  • the reasoning results can be exchanged between the DU and the RU, for example, the near real-time RIC submits the reasoning result to the DU, and the DU forwards it to the RU.
  • near real-time RIC is used to train model A and use model A for inference.
  • non-real-time RIC is used to train Model B and utilize Model B for inference.
  • the non-real-time RIC is used to train the model C, and the information of the model C is sent to the near-real-time RIC, and the near-real-time RIC uses the model C for inference.
  • FIG. 3C is an example diagram of a network architecture to which the method provided in the present disclosure can be applied. Compared with FIG. 3B, in FIG. 3C the CU is separated into CU-CP and CU-UP.
  • FIG. 3D is an example diagram of a network architecture to which the method provided by the present disclosure can be applied.
  • the access network device includes one or more AI entities, and the functions of the AI entities are similar to the near real-time RIC described above.
  • the OAM includes one or more AI entities, and the functions of the AI entities are similar to the non-real-time RIC described above.
  • the core network device includes one or more AI entities, and the functions of the AI entities are similar to the above-mentioned non-real-time RIC.
  • both the OAM and the core network equipment include AI entities, the models trained by their respective AI entities are different, and/or the models used for reasoning are different.
  • the different models include at least one of the following differences: the structural parameters of the model (such as at least one of the number of neural network layers, the width of the neural network, the connection relationship between layers, the weight of neurons, the activation function of neurons, or the bias in the activation function), the input parameters of the model (such as the type and/or dimension of the input parameters), or the output parameters of the model (such as the type and/or dimension of the output parameters).
  • FIG. 3E is an example diagram of a network architecture to which the method provided in the present disclosure can be applied.
  • the access network devices in Fig. 3E are separated into CU and DU.
  • the CU may include an AI entity, and the function of the AI entity is similar to the above-mentioned near real-time RIC.
  • the DU may include an AI entity, and the function of the AI entity is similar to the above-mentioned near real-time RIC.
  • both the CU and the DU include AI entities, the models trained by their respective AI entities are different, and/or the models used for reasoning are different.
  • the CU in FIG. 3E may be further split into CU-CP and CU-UP.
  • one or more AI models may be deployed in the CU-CP.
  • one or more AI models can be deployed in CU-UP.
  • the OAM of the access network device and the OAM of the core network device are shown as unified deployment.
  • the OAM of the access network device and the OAM of the core network device may be deployed separately and independently.
  • inference can be performed using a model to obtain an output, and the output includes one parameter or multiple parameters.
  • the learning process or training process of different models can be deployed in different devices or nodes, or can be deployed in the same device or node.
  • Inference processes of different models can be deployed in different devices or nodes, or can be deployed in the same device or node. This disclosure is not limited to these implementations.
  • the involved network element may perform some or all of the steps or operations related to it. These steps or operations are merely examples; other operations or variations of the operations may also be performed in the present disclosure. In addition, the steps may be performed in an order different from that presented in the present disclosure, and not all of the operations in the present disclosure necessarily need to be performed.
  • At least one (item) can also be described as one (item) or multiple (items), and multiple (items) can be two (items), three (items), four (items), or more, without limitation.
  • "/" can indicate an "or" relationship between associated objects, for example, A/B can indicate A or B; "and/or" can be used to describe three relationships between associated objects, for example, A and/or B can mean: A alone exists, A and B both exist, or B alone exists, where A and B can be singular or plural.
  • words such as “first”, “second”, “A”, or “B” may be used to distinguish technical features with the same or similar functions.
  • the words “first”, “second”, “A”, or “B” do not limit the quantity and execution order.
  • words such as “first”, “second”, “A”, or “B” are not necessarily different.
  • Words such as "exemplary" or "such as" are used to indicate examples, illustrations, or explanations, and any design described as "exemplary" or "such as" should not be construed as more preferred or more advantageous than other designs. The use of words such as "exemplary" or "for example" is intended to present related concepts in a specific manner for easy understanding.
  • the present disclosure provides a method for channel information feedback, which can save communication resources.
  • the terminal device determines the sparse representation information of the first channel information by using the channel reconstruction model.
  • the sparse representation information includes M elements, and the M elements include K non-zero elements and M-K zero elements. Wherein, K is an integer greater than or equal to 1, and M is an integer greater than or equal to K.
  • the terminal device indicates the sparse representation information to the access network device through channel feedback information.
  • the access network device can restore (or describe as: reconstruct) the first channel information by using the sparse representation information and the channel reconstruction model.
  • the transmission requirements may also be varied.
  • a possible approach is to design a transmission parameter of channel information for each transmission requirement and each channel environment. For example, for each transmission requirement and each channel environment, design a set of matching channel information encoder and channel information decoder.
  • the channel information encoder and the channel information decoder may be AI models.
  • the terminal equipment uses the channel information encoder to encode the channel information to obtain encoded information.
  • the terminal device sends the encoded information to the access network device.
  • the access network device decodes the coded information using the channel information decoder to obtain the channel information.
  • the terminal device sends sparse representation information of the channel information to the access network device, the sparse representation information includes zero elements and non-zero elements, and the number and/or positions of the zero elements and non-zero elements can be designed to accommodate a wide variety of possible communication needs. For example, for different channel environments, the number and/or positions of non-zero elements may be independently set to meet communication requirements in different channel environments.
  • the design can be simplified, for example, multiple requirements can be met through one signaling.
  • the sparse representation information is obtained by the terminal device side under the condition of knowing the channel reconstruction model, so that the access network device side can use the sparse representation information and the channel reconstruction model to recover more accurate original channel information. Therefore, the above method can save communication resources and has strong generalization ability, that is, through a channel reconstruction model, channel information can be transmitted more accurately in various communication scenarios.
  • the channel reconfiguration model may also be called: a channel restoration model, a channel solution model or other names, without limitation.
  • the structure and/or parameters of the channel reconfiguration model used by the terminal device side and the channel reconfiguration model used by the access network device side may be different.
  • the channel reconfiguration model used by the terminal device side is the first channel reconfiguration model
  • the channel reconfiguration model used by the access network device side is the second channel reconfiguration model.
  • although the specific parameters of the first channel reconstruction model and the second channel reconstruction model can be different, the input dimensions of the two are the same, the output dimensions of the two are the same, and, given the same input, the outputs of the two are the same or approximately the same, for example, the difference between the two outputs is smaller than a threshold value.
  • the purpose of this design is that, when the processing capability of the terminal equipment is limited, a channel reconstruction model whose function is almost the same as that on the access network device side (or whose output error is within an allowable range) but whose structure is simpler can be deployed on the terminal device side, saving processing resources of the terminal equipment.
  • the description below takes the first channel reconfiguration model and the second channel reconfiguration model as the same reconfiguration model as an example.
  • FIG. 4 is a schematic diagram of a channel information feedback method provided by the present disclosure, and the method includes operations S401 to S403.
  • the terminal device determines sparse representation information of the first channel information according to the first channel information and the channel reconstruction model.
  • the sparse representation information includes M elements, the M elements include K non-zero elements and M-K zero elements, K is an integer greater than or equal to 1, and M is an integer greater than or equal to K.
  • the first channel information is information about a channel between the access network device and the terminal device.
  • the first channel information may be channel information corresponding to one or more time units.
  • the time unit may be a symbol, a time slot, a subframe or other possible time units, without limitation.
  • the present disclosure does not limit the type and acquisition manner of the first channel information, for example, the information may be time domain information or frequency domain information, without limitation.
  • the first channel information is the downlink channel response estimated by the terminal device, or the information obtained after preprocessing the downlink channel response, which is not limited.
  • the downlink channel response may also be called a downlink channel matrix or other names, without limitation.
  • the preprocessing includes at least one of the following operations: channel whitening, channel normalization, or quantization.
  • the present disclosure does not exclude that the preprocessing may also include other possible operations.
  • the access network device sends a downlink reference signal, such as a downlink synchronization signal or a channel state information reference signal (channel state information reference signal, CSI-RS), to the terminal device.
  • if the terminal device knows the sequence value of the downlink reference signal, for example, the sequence value is stipulated in the protocol or the access network device notifies the terminal device in advance, then the terminal device can estimate (measure) the downlink channel response H based on the received downlink reference signal.
  • H is the frequency domain channel response
  • in a communication system based on Orthogonal Frequency Division Multiplexing (OFDM), H can be expressed as a 3-dimensional matrix, for example, the dimension of H is N_C × N_Tx × N_Rx, where the length of the first dimension is equal to the frequency domain bandwidth, for example, equal to the number of frequency domain subcarriers N_C, the length of the second dimension is equal to the number of antenna ports at the transmitting end N_Tx, and the length of the third dimension is equal to the number of antenna ports at the receiving end N_Rx.
  • N_C, N_Tx and N_Rx are integers.
  • the order of these three dimensions can be exchanged.
  • the length of the first dimension of H is equal to N Tx
  • the length of the second dimension is equal to N Rx
  • the length of the third dimension is equal to N C
  • the following description takes the dimension of H as N_C × N_Tx × N_Rx as an example for illustration.
  • denote the elements in H as h_{i,j,z}; each h_{i,j,z} is a complex number, where i takes a value from 0 to N_C−1, j takes a value from 0 to N_Tx−1, and z takes a value from 0 to N_Rx−1.
  • h_{i,j,z} represents the channel response, on subcarrier i, of the channel between antenna port j at the transmitting end and antenna port z at the receiving end.
  • the first channel information is H.
  • the first channel information is H1, and H1 is a matrix obtained by whitening H (which may be called the second channel information) by using the interference noise covariance matrix Ruu.
  • the method can be described as: the first channel information is the whitened channel information of the second channel information.
  • the access network device sends a zero power channel state information reference signal (ZP CSI-RS) to the terminal device, and the terminal device can estimate (measure) the interference and noise based on the received reference signal , denoted as I.
  • I contains information of a plurality of subcarriers, where the dimension of I(k) of the k-th subcarrier is N_Rx × 1, and the covariance matrix of interference and noise can be obtained as follows: R_uu = (1/N_C) · Σ_k I(k) I^H(k).
  • R uu is the covariance matrix of interference and noise
  • I H (k) represents the conjugate transpose of I(k)
  • its dimension is N Rx ⁇ N Rx .
  • based on R_uu, a whitening matrix P with a dimension of N_Rx × N_Rx can be generated, where P satisfies: P · R_uu · P^H = I, for example P = R_uu^(−1/2).
  • the channel information matrix of each subcarrier is multiplied by the whitening matrix to complete the channel whitening: H_whiten(k) = P · H(k), where H(k) is the channel matrix on the k-th subcarrier.
  • the dimension after channel whitening remains unchanged, and the dimension of H_whiten(k) is still N_Rx × N_Tx.
  • combining the H_whiten(k) of each subcarrier yields the above-mentioned H1, and the dimension of H1 is N_C × N_Tx × N_Rx.
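  • As an illustrative, non-limiting sketch of the whitening step above (assuming Python/NumPy, example dimensions, and a per-subcarrier channel matrix of size N_Rx × N_Tx; all variable names here are hypothetical):

```python
import numpy as np

# Sketch of interference whitening; dimensions and data are placeholders.
N_C, N_Tx, N_Rx = 64, 16, 4
rng = np.random.default_rng(0)
H = rng.standard_normal((N_C, N_Rx, N_Tx)) + 1j * rng.standard_normal((N_C, N_Rx, N_Tx))
I = rng.standard_normal((N_C, N_Rx, 1)) + 1j * rng.standard_normal((N_C, N_Rx, 1))

# Interference-plus-noise covariance averaged over subcarriers (N_Rx x N_Rx).
R_uu = sum(I[k] @ I[k].conj().T for k in range(N_C)) / N_C

# Whitening matrix P = R_uu^(-1/2) via eigendecomposition of the Hermitian R_uu.
w, V = np.linalg.eigh(R_uu)
P = V @ np.diag(1.0 / np.sqrt(w)) @ V.conj().T

# Per-subcarrier whitening; the dimension of H_whiten(k) stays N_Rx x N_Tx.
H_whiten = np.stack([P @ H[k] for k in range(N_C)])   # shape (N_C, N_Rx, N_Tx)
```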
  • the first channel information is H2, and H2 is a matrix obtained after normalizing H or H1.
  • the scaling factor and H2 can be obtained.
  • the scaling factor can also be referred to as a signal-to-noise ratio (SNR) scaling factor. The matrix before normalization (which may be called the second channel information, for example, the second channel information is H or H1) divided by the scaling factor equals H2, indicating that the scaling factor is the scaling factor of the second channel information relative to the first channel information; or, the matrix before normalization multiplied by the scaling factor equals H2, indicating that the scaling factor is the scaling factor of the first channel information relative to the second channel information.
  • the values of the real and imaginary parts of H2 can be located in the interval [0,1].
  • the value of the scaling factor may be a decimal or an integer, for example, a number less than 1 or a number greater than or equal to 1, without limitation.
  • the method can be described as: the first channel information is the normalized channel information of the second channel information.
  • the terminal device may also send the scaling factor to the access network device, for example, send the scaling factor to the access network device through channel feedback information in S402 below.
  • the access network device may perform channel scaling on the recovered first channel information according to the scaling factor.
  • the sent scaling factor may be an original value or a quantized value.
  • the value of the fed-back scaling factor is one of 2^U candidate values.
  • the information field used to carry the scaling factor includes U bits, which indicate which of the 2^U candidate values the scaling factor takes.
  • U is an integer, such as 1, 2, 3, 4, 5, 6 or other integers, without limitation.
  • the 2 U candidate values may be stipulated in the protocol, or pre-configured by the access network device to the terminal device through signaling, and are not limited.
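  • A minimal sketch of one possible normalization and U-bit scaling-factor feedback (assuming NumPy; the candidate values, the max-magnitude normalization, and the function names are illustrative assumptions only):

```python
import numpy as np

U = 4
candidates = np.linspace(0.25, 4.0, 2 ** U)       # assumed 2^U candidate scaling factors

def normalize(H2_pre):
    """One possible normalization: divide by the largest real/imaginary magnitude."""
    scale = np.max(np.abs(np.concatenate([H2_pre.real.ravel(), H2_pre.imag.ravel()])))
    return H2_pre / scale, scale                   # H2 and the scaling factor

def quantize_scale(scale):
    """Return the U-bit index of the candidate value closest to the scaling factor."""
    return int(np.argmin(np.abs(candidates - scale)))

rng = np.random.default_rng(1)
H_pre = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))
H2, scale = normalize(H_pre)
print(quantize_scale(scale), scale)                # index reported with U bits
```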
  • the first channel information is a matrix H3 obtained by quantizing H, H1, or H2 (which may be called second channel information).
  • the method can be described as: the first channel information is the quantized channel information of the second channel information.
  • the terminal device obtains the sparse representation information of the first channel information by using the channel reconstruction model.
  • the sparse representation information of the first channel information includes M elements.
  • the M elements include K non-zero elements and M-K zero elements, K is an integer greater than or equal to 1, and M is an integer greater than or equal to K.
  • the value of the element is not limited, for example, may be a real number or a complex number, may be a positive number or a negative number, and/or may be a decimal number or an integer.
  • the channel reconstruction model can be used to restore and obtain the first channel information according to the sparse representation information of the first channel information.
  • the value of M is equal to the input dimension of the channel reconstruction model.
  • the K non-zero elements indicate that the values of these K elements can be non-zero; the values are whatever the calculation produces. That is to say, during actual calculation, the value of one or some of the K non-zero elements may turn out to be equal to zero, but even if they are equal to zero, the terminal device reports the values of these elements to the access network device according to the rules for reporting non-zero elements. For the M−K zero elements, the terminal device does not need to report their values to the access network device, because the access network device assumes by default that the values of these elements are zero.
  • the compression ratio of the first channel information can be expressed as the ratio of K to N, where N represents the dimension (total number of elements) of the first channel information, for example N = N_C × N_Tx × N_Rx, or N = 2 × N_C × N_Tx × N_Rx when the real part and the imaginary part of the first channel information are counted as separate dimensions; N is a positive integer.
  • the following description takes the dimension of the first channel information as N C ⁇ N Tx ⁇ N Rx as an example.
  • the compression ratio of the first channel information may also be referred to as a feedback compression ratio of the first channel information, a first compression ratio, or other names, without limitation.
  • the value of K or the first compression ratio may be stipulated in the protocol; or, the access network device notifies the terminal device in advance; or, is sent by the terminal device to the access network device through signaling, for example It is sent to the access network device through the channel feedback information in the following S402.
  • the terminal device may be notified of multiple candidate compression ratios or multiple candidate values of K by protocol agreement or by the access network device in advance. Further, the access network device notifies the terminal device which of the multiple candidate compression ratios the first compression ratio is, or which of the multiple candidate values the value of K is. Alternatively, the terminal device feeds back to the access network device through signaling (such as the channel feedback information in S402 below) which of the multiple candidate compression ratios the first compression ratio is, or which of the multiple candidate values the value of K is.
  • the access network device or terminal device can indicate the index of the first compression ratio or the index of K through a number of bits greater than or equal to ⌈log₂(L)⌉, where ⌈·⌉ indicates rounding up, and L is a positive integer, for example the number of candidate values.
  • N of the first channel information is 4096, where N C is 64, N Tx is 16, and N Rx is 4, and a total of 4 candidate compression ratios are configured, which are: 1/64, 1/128, 1 /256 and 1/512, then K can have 4 values, respectively 64, 32, 16 and 8; or, a total of 4 candidate values of K are configured, respectively: 64, 32, 16 and 8, then There are 4 candidate compression ratios: 1/64, 1/128, 1/256 and 1/512.
  • the access network device or the terminal device may indicate the first compression ratio or the specific value of K through 2 bits. Wherein, the value of the 2 bits and the first compression ratio indicated by each value are shown in Table 1A, and the value of the 2 bits and the value of K indicated by each value are shown in Table 1B.
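  • A minimal sketch of this indication, assuming the candidate values of the example above (Tables 1A/1B); the encode/decode function names are hypothetical:

```python
import math

candidate_K = [64, 32, 16, 8]                  # candidate values of K from the example
L = len(candidate_K)
bits_needed = math.ceil(math.log2(L))          # ceil(log2(4)) = 2 bits

def encode_K(K):
    """Return the bit string indicating which candidate value K is."""
    return format(candidate_K.index(K), f"0{bits_needed}b")

def decode_K(bit_string):
    """Recover the value of K from the received bit string."""
    return candidate_K[int(bit_string, 2)]

assert decode_K(encode_K(16)) == 16            # e.g. K = 16 -> "10" -> 16
```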
  • various possible feedback compression ratios can be adapted by setting different values of K or different values of the first compression ratio for one channel reconstruction model. Since the input of the channel reconstruction model is sparse information, the number K of non-zero elements in the sparse information can be set differently to adapt to different compression ratios, so one model can be used to satisfy various channel information feedback needs, thereby saving communication resources, for example by avoiding training multiple different models for different compression ratios.
  • the positions of K non-zero elements can be set among the M elements to adapt to different channel environments.
  • the positions of the K non-zero elements represent the positions of the K non-zero elements in the M elements.
  • U is a positive integer.
  • four positions are set, corresponding to: the first channel environment, the second channel environment, the third channel environment and the fourth channel environment. The position of the K non-zero elements will be described in more detail in operation S402 below.
  • the channel reconstruction model can be stipulated in the agreement, such as agreed in the agreement after offline training; or by the network side, such as AI functional network element, OAM, access network equipment, or core network equipment, etc., and sent to the terminal equipment after training or downloaded by the terminal device from a third-party network; or obtained by training the terminal device; there is no limit.
  • the training node when the training node trains to obtain the channel reconstruction model, it can train and obtain the channel reconstruction model according to the training data in the training data set.
  • the training data set includes one or more training data.
  • the form of the training data is the same as the form of the above-mentioned first channel information.
  • the training data is the channel information collected by the terminal device in history (optionally, when the training node is not a terminal device, the terminal device can send the training data to the training node); or, the training data is the access network device Historically collected channel information (optionally, when the training node is not an access network device, the access network device can send the training data to the training node); or, the training data is channel information generated according to a known channel model ;
  • the present disclosure does not limit the way of obtaining or determining the training data.
  • the training node may use the method shown in FIG. 6A or FIG. 6B to train and obtain the channel reconstruction model.
  • the channel reconstruction model obtained through training satisfies: when the input of the channel reconstruction model is sparse representation information of channel information, the channel information can be reconstructed and recovered as accurately as possible.
  • the training method may include: operation 1, the training node determines a set of training data in the training data set; operation 2, for each training data in the set of training data, determine the sparse representation information of the training data, according to the sparse representation information Determine the model output corresponding to the training data with the current channel reconstruction model; operation 3, for the set of training data, if the loss function meets the performance requirements, the training ends, otherwise, update the current channel reconstruction model, and perform operation 1 again.
  • the first possible model training method is introduced below in conjunction with FIG. 6A .
  • the sparse representation information of the training data is determined according to the sparse representation algorithm and the current channel reconstruction model.
  • the training node determines the input dimension of the channel reconstruction model, the characteristics of the input data, the output dimension, and the initial model parameters of the channel reconstruction model (for example, the channel reconstruction model is a neural network, and the initial model parameters include: Structural parameters).
  • the characteristics of the input data include: the input data includes M elements, and the M elements include K non-zero elements and M-K zero elements.
  • K may be equal to any one of multiple candidate values. That is, the channel reconstruction model obtained through training can be applicable to the case where K is equal to any one of the multiple candidate values, that is, the channel reconstruction model can be applicable to various compression ratios.
  • the current channel reconstruction model in the following operation 6A-1 is the initial channel reconstruction model.
  • Operation 6A-1: The training node determines a set of training data from the training data set, for example, the first set of training data, and for each training data in the set of training data, performs operation 6A-1-1 and operation 6A-1-2 respectively.
  • a set of training data may include one or more training data, for example, may include part or all of the data in the training data set.
  • the number of training data included in different sets of training data may be the same or different. There may or may not be an intersection between different sets of training data, which is not limited.
  • Operation 6A-1-1 The training node uses one training data A of the training data and the current channel reconstruction model f de ( ), and obtains the sparse representation information x of the training data A according to the sparse representation algorithm.
  • the sparse representation algorithm includes determining the sparse representation information x of the training data A according to objective function 1: min_x ‖H_w − f_de(x)‖₂, subject to ‖x‖₀ ≤ K.
  • H w represents the training data A
  • ‖·‖₂ represents the L2 norm
  • ‖·‖₀ represents the L0 norm
  • f de (x) represents the inference result obtained when the input of the channel reconstruction model is x
  • x includes M elements
  • the M elements include K non-zero elements and M−K zero elements.
  • the sparse representation algorithm may be any method for solving the sparse reconstruction problem, without limitation.
  • it can be an iterative shrinkage-thresholding algorithm (ISTA), a fast iterative shrinkage-thresholding algorithm (FISTA), or an alternating direction method of multipliers (ADMM) , or an orthogonal matching pursuit (OMP) method; not limited.
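  • As an illustrative sketch of one such sparse-recovery method (orthogonal matching pursuit), assuming for simplicity a linear channel reconstruction model f_de(x) = D·x; for a neural-network f_de, gradient-based or unrolled variants of ISTA/FISTA would typically be used instead, and all dimensions below are placeholders:

```python
import numpy as np

def omp(D, h, K):
    """Orthogonal matching pursuit: approx. argmin ||h - D x||_2 s.t. ||x||_0 <= K."""
    M = D.shape[1]
    support, residual = [], h.copy()
    x = np.zeros(M, dtype=D.dtype)
    for _ in range(K):
        corr = np.abs(D.conj().T @ residual)
        corr[support] = 0                       # do not reselect already-chosen atoms
        support.append(int(np.argmax(corr)))
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, h, rcond=None)
        residual = h - Ds @ coef
    x[support] = coef
    return x                                    # M elements, at most K non-zero

rng = np.random.default_rng(2)
N, M, K = 128, 256, 8                           # channel size, representation size, sparsity
D = rng.standard_normal((N, M))                 # stand-in for the reconstruction model
h_w = rng.standard_normal(N)                    # vectorized first channel information
x = omp(D, h_w, K)
```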
  • Operation 6A-1-2 After the training node obtains the sparse representation information x of the training data A according to the sparse representation algorithm, the training node inputs the sparse representation information x into the current channel reconstruction model f de ( ), and obtains the reconstruction of the training data A by reasoning Data f de (x).
  • Operation 6A-2: For each training data in the set of training data, for example denoted as training data A, the training node calculates the loss function value between the training data and the reconstructed data f_de(x) corresponding to that training data. The loss function is ‖H_w − f_de(x)‖₂. If the average of the loss functions of all the training data in the group (or a value calculated from the loss functions of all the training data in another way) is less than or equal to the first threshold, or if the loss function of each training data in the group is less than or equal to the first threshold, the current channel reconstruction model is considered to be the reconstruction model obtained through training, and the model training process ends.
  • update the parameters of the channel reconstruction model such as using the gradient descent method to update the parameters of the channel reconstruction model
  • otherwise, use the updated channel reconstruction model as the current channel reconstruction model, use another set of training data in the training data set, for example the second set of training data, and perform operations 6A-1 and 6A-2 again.
  • the above operations 6A-1 and 6A-2 can be performed for E2 iterations until the value of the loss function calculated according to the current channel reconstruction model is less than or equal to the first threshold.
  • the current channel reconstruction model is used as the channel reconstruction model obtained through training.
  • E1 and E2 are positive integers.
  • E1 is equal to E2, or E1 is smaller than E2, that is, the same training data can be used for repeated iteration training.
  • K may be any one of multiple candidate values.
  • each of the candidate values can be used as the value of K to perform operation 6A-1 to operation 6A-2 respectively, so that the channel reconstruction model obtained through training can be applied to various compression ratios.
  • the above operations 6A-1 and 6A-2 can be performed for E2*L iterations, until the value of the loss function calculated according to the current channel reconstruction model is less than or equal to the first threshold; the training process is then considered over, and the current channel reconstruction model is used as the channel reconstruction model obtained through training.
  • E1 and E2 are positive integers
  • L is the number of candidate values of K.
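  • A simplified, non-limiting sketch of the FIG. 6A training loop, reusing the omp function sketched above and again assuming a linear stand-in f_de(x) = D·x for the channel reconstruction model; the learning rate, threshold, and epoch count are illustrative:

```python
import numpy as np

def train_reconstruction_model(train_set, M, K, lr=0.01, threshold=1e-2, epochs=50):
    N = train_set.shape[1]
    D = np.random.default_rng(3).standard_normal((N, M)) * 0.1   # initial model
    for _ in range(epochs):
        losses = []
        for h_w in train_set:                        # one group of training data
            x = omp(D, h_w, K)                       # operation 6A-1-1: sparse representation
            recon = D @ x                            # operation 6A-1-2: reconstruction
            err = recon - h_w
            losses.append(np.linalg.norm(err) ** 2)  # loss based on ||H_w - f_de(x)||
            D -= lr * np.outer(err, x)               # gradient step on the model parameters
        if np.mean(losses) <= threshold:             # operation 6A-2: compare with first threshold
            break
    return D

data = np.random.default_rng(4).standard_normal((100, 128))      # placeholder training data
D_trained = train_reconstruction_model(data, M=256, K=8)
```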
  • the objective function 1 in the training process involved in the above-mentioned FIG. 6A can be replaced by the following objective function 2: max_x f_C(H_w, f_W(f_de(x))), subject to ‖x‖₀ ≤ K; the loss function can be replaced by f_C(H_w, f_W(f_de(x))), the training end condition is replaced by the value of the loss function being greater than or equal to the second threshold, and the channel reconstruction model is obtained through training.
  • H w represents the training data A
  • f W () represents the precoding generation model, that is, it represents the precoding operation on f de (x)
  • f C (,) represents the channel capacity calculation model.
  • f W ( ) represents performing singular value decomposition (singular value decomposition, SVD) (it can also be described as SVD precoding).
  • f_C(·,·) represents the channel capacity calculation, for example, calculating the channel capacity achieved on the channel H_w when the precoding f_W(f_de(x)) is applied.
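  • A hedged sketch of f_W (SVD precoding) and f_C (channel capacity); the exact capacity expression is not spelled out above, so a standard log-determinant formula with unit noise power is assumed here purely for illustration:

```python
import numpy as np

def f_W(H_rec, rank):
    """SVD precoding: take right singular vectors of the reconstructed channel."""
    _, _, Vh = np.linalg.svd(H_rec, full_matrices=False)
    return Vh.conj().T[:, :rank]                      # N_Tx x rank precoder

def f_C(H_w, W):
    """Capacity-style metric: log2 det(I + H W W^H H^H), assuming unit noise power."""
    HW = H_w @ W
    G = np.eye(HW.shape[0]) + HW @ HW.conj().T
    sign, logdet = np.linalg.slogdet(G)
    return logdet / np.log(2)

rng = np.random.default_rng(5)
H_true = rng.standard_normal((4, 16)) + 1j * rng.standard_normal((4, 16))
W = f_W(H_true, rank=2)
print(f_C(H_true, W))
```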
  • the trained model can also be tested using test data.
  • when the test result reaches the target, for example, when the loss function obtained by using the model for one or more test data meets the performance requirements, the model is considered usable; otherwise the model needs to be retrained.
  • the type of test data is the same as the training data, and will not be repeated here.
  • the iterative process of the above-mentioned sparse representation algorithm can be unfolded into a multi-layer (for example, Q-layer) neural network by adopting the method of deep network unfolding, where Q is a positive integer.
  • Each layer of the unfolded network is based on the operation of the channel reconstruction model.
  • the sparse representation algorithm also called the sparse representation model
  • the sparse representation model formed by the unfolded network is trained end-to-end together with the channel reconstruction model, and the final channel reconstruction model can be obtained through multiple iterations of training.
  • other model-based methods based on the channel reconstruction model can be used to construct the sparse representation model; no limitation is imposed.
  • the second possible model training method is introduced below in conjunction with FIG. 6B .
  • the sparse representation information of the training data is determined.
  • before training, in addition to determining the relevant parameters of the channel reconstruction model, the training node also needs to determine the relevant parameters of the sparse representation model.
  • the relevant parameters of the channel reconstruction model are the same as those described above for FIG. 6A .
  • the relevant parameters of the sparse representation model include: the input dimension of the sparse representation model, the output dimension, the characteristics of the output data, and the initial model parameters of the sparse representation model (for example, the sparse representation model is a neural network, and the initial model parameters include: the structural parameters of the model ).
  • the characteristics of the output data include: the output data includes M elements, and the M elements include K non-zero elements and M-K zero elements.
  • K may be equal to any one of multiple candidate values.
  • the current sparse representation model in the following operation 6B-1 is the initial sparse representation model
  • the current channel reconstruction model is the initial channel reconstruction model
  • Operation 6B-1: The training node determines a set of training data from the training data set, such as the first set of training data, and performs operation 6B-1-1 and operation 6B-1-2 respectively for each training data in the set of training data.
  • Operation 6B-1-1 The training node inputs the training data A in the set of training data into the current sparse representation model, and obtains the sparse representation information x of the training data A by reasoning.
  • Operation 6B-1-2 The training node inputs the sparse representation information x into the current channel reconstruction model f de ( ), and obtains the reconstruction data f de (x) of the training data A by reasoning.
  • Operation 6B-2 For each training data in the set of training data, the training node calculates the loss function value between each training data, such as training data A, and the reconstructed data f de (x) corresponding to the training data .
  • the loss function is ‖H_w − f_de(x)‖₂.
  • H_w represents the training data A. If the average of the loss functions of all the training data in the group (or a value calculated from the loss functions of all the training data in another way) is less than or equal to the first threshold, or if the loss function of each training data in the group is less than or equal to the first threshold, the current channel reconstruction model is considered to be the reconstruction model obtained through training, and the model training process ends.
  • update the parameters of the sparse representation model such as using gradient descent to update the parameters of the sparse representation model, and use the updated sparse representation model as the current sparse representation model
  • otherwise, update the parameters of the channel reconstruction model, for example using gradient descent, and use the updated channel reconstruction model as the current channel reconstruction model.
  • operations 6B-1 and 6B-2 are performed again.
  • the loss function in the training process involved in the above FIG. 6B can be replaced by f_C(H_w, f_W(f_de(x))), and the training end condition can be replaced by the value of the loss function being greater than or equal to the second threshold; the training then obtains a sparse representation model and a channel reconstruction model.
  • Hw represents the training data A
  • f W ( ) represents the precoding generation model, which means that f de (x) is carried out to the precoding operation
  • f C (,) represents the channel capacity calculation model.
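  • A hedged end-to-end sketch of the FIG. 6B training method: a sparse representation model (encoder) producing M elements with only K kept non-zero, trained jointly with the channel reconstruction model (decoder). PyTorch, the layer sizes, and the top-K selection rule are assumptions for illustration only:

```python
import torch
import torch.nn as nn

N, M, K = 128, 256, 8                             # placeholder dimensions

class SparseRepresentationModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N, 512), nn.ReLU(), nn.Linear(512, M))
    def forward(self, h):
        x = self.net(h)
        # keep the K largest-magnitude elements, zero the remaining M-K elements
        topk = torch.topk(x.abs(), K, dim=-1).indices
        mask = torch.zeros_like(x).scatter_(-1, topk, 1.0)
        return x * mask

class ChannelReconstructionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(M, 512), nn.ReLU(), nn.Linear(512, N))
    def forward(self, x):
        return self.net(x)

encoder, decoder = SparseRepresentationModel(), ChannelReconstructionModel()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
train_set = torch.randn(1024, N)                  # placeholder training data

for epoch in range(10):
    for h_w in train_set.split(64):               # groups of training data
        x = encoder(h_w)                          # operation 6B-1-1: sparse representation
        recon = decoder(x)                        # operation 6B-1-2: reconstruction
        loss = torch.mean(torch.sum((h_w - recon) ** 2, dim=-1))
        opt.zero_grad(); loss.backward(); opt.step()
```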
  • the features of x may be specified.
  • the characteristics of x include: for each of the K non-zero elements, the value of the element is one of multiple candidate values.
  • the multiple candidate values include 2 G candidate values.
  • the G bits may be used to feed back the index of the value of the element in the 2 G candidate values.
  • G is a positive integer.
  • the value of G may be stipulated in the protocol, or notified to the terminal device by the access network device in advance, and is not limited.
  • the interval of the value of each non-zero element is [0,1)
  • the value of each non-zero element is one of 16 candidate values
  • each of the 16 candidate values can be a multiple of 1/16 plus an offset value, where the multiples of different candidate values are different but the offset value is the same, for example, the offset value can be 0, 0.1, or another possible value, without limitation.
  • This method is equivalent to agreeing that the non-zero elements in the sparse representation information are quantized values.
  • the values of the non-zero elements may be quantized to save signaling overhead, or may not be quantized to simplify the calculation process, without limitation.
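  • A minimal sketch of quantizing each non-zero element to one of 2^G candidate values, assuming G = 4 and candidates that are multiples of 1/16 with offset 0, matching the example above; the function name and data are illustrative:

```python
import numpy as np

G = 4
candidates = np.arange(2 ** G) / 16.0             # 0, 0.0625, ..., 0.9375

def quantize_nonzero(x):
    """Map each non-zero element of the sparse representation to its G-bit index."""
    x_q = x.copy()
    indices = {}
    for pos in map(int, np.flatnonzero(x)):
        idx = int(np.argmin(np.abs(candidates - x[pos])))
        x_q[pos] = candidates[idx]
        indices[pos] = format(idx, f"0{G}b")      # e.g. 0.25 -> "0100"
    return x_q, indices

x = np.array([0, 0.26, 0.93, 0.12, 0, 0, 0.49, 0])
print(quantize_nonzero(x)[1])                     # {1:'0100', 2:'1111', 3:'0010', 6:'1000'}
```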
  • after obtaining the channel reconstruction model, for example according to the protocol agreement, by training to obtain the channel reconstruction model, or by receiving the information of the channel reconstruction model from the network side, the terminal device can use the channel reconstruction model to obtain the sparse representation information of the first channel information.
  • the method for training a node, such as a terminal device or a network side device, to train a channel reconstruction model may be the method described in FIG. 6A, FIG. 6B or FIG. 7, or other possible methods, without limitation.
  • the channel reconstruction model is a neural network
  • the information of the channel reconstruction model includes at least one of the following: structural parameters of the model (such as at least one of the number of neural network layers, the width of the neural network, the connection relationship between layers, the weight of neurons, the activation function of neurons, or the bias in the activation function), input parameters of the model (such as the type and/or dimension of the input parameters), or output parameters of the model (such as the type and/or dimension of the output parameters).
  • the terminal device determines the sparse representation information of the first channel information according to the first objective function or the second objective function.
  • the method for the terminal device to determine the sparse representation information of the first channel information according to the objective function 1 or the objective function 2 is similar to the method for the training node to determine the sparse representation of the training data A according to the objective function 1 or objective function 2 in the above operation 6A-1. method, which will not be repeated here.
  • the difference is that in operation 6A-1, the current channel reconstruction model is used to obtain the sparse representation of training data A, and the parameters of the channel reconstruction model can still be adjusted.
  • in method A1, the trained channel reconstruction model is used for inference: the sparse representation algorithm sparsely represents the first channel information to obtain its sparse representation information.
  • in this case, the parameters of the channel reconstruction model remain unchanged.
  • the method for the terminal device to solve the first channel information according to the objective function may be any method for solving the sparse reconstruction problem, such as the ISTA, FISTA, ADMM, or OMP method.
  • the solution process of these algorithms can adopt the method of deep network expansion.
  • the initial value of the sparse representation information may be a random value, or an output obtained by inferring the first channel information using a sparse representation model .
  • the terminal device can obtain the sparse representation model, train the sparse representation model, or receive the information of the sparse representation model from the network side according to the manner stipulated in the protocol.
  • the method for training a node, such as a terminal device or a network side device, to train a sparse representation model may be the method described in FIG. 6B above, or other possible methods, which are not limited.
  • the terminal device determines the sparse representation information of the first channel information according to objective function 1 or objective function 2, and x can be constrained to satisfy: each of the K non-zero elements takes one of multiple candidate values.
  • alternatively, x may not be constrained to quantized values during the calculation; after the sparse representation information is calculated, a quantization operation is performed so that each of the K non-zero elements is quantized separately and its value becomes one of the above multiple candidate values.
  • the values of different non-zero elements may be the same or different, without limitation.
  • the values of the non-zero elements may be quantized to save signaling overhead, or may not be quantized to simplify the calculation process, without limitation.
  • the terminal device indicates the sparse representation information to the access network device through channel feedback information. That is, the terminal device sends channel feedback information to the access network device. Wherein, the channel feedback information is used to indicate sparse representation information.
  • the terminal device may send the sparse representation information to the access network device through channel feedback information through any one of the following methods B1 to B2.
  • Method B1 the channel feedback information includes sparse representation information.
  • the sparse representation information is a matrix [0, 0.25, 0.9375, 0.125, 0, 0, 0.5, 0]. That is, the sparse representation information includes 8 elements, among which 4 are zero elements and 4 are non-zero elements, then the channel feedback information includes [0,0.25,0.9375,0.125,0,0,0.5,0].
  • Method B2 The channel feedback information is used to indicate the values of the K non-zero elements and the positions of the K non-zero elements of the sparse representation information.
  • the channel feedback information may indicate the positions of the K non-zero elements through any of the following examples.
  • the channel feedback information indicates the positions of the K non-zero elements through a bitmap (bitmap).
  • bitmap includes M bits, the value of each bit is 0 or 1, and each bit corresponds to an element in the sparse representation information. That is, there is a one-to-one correspondence between M elements in the sparse representation information and M bits in the bitmap.
  • the value of a bit in the bitmap is 1, it means that the element corresponding to the bit in the sparse representation information is a non-zero element; when the value of a bit in the bitmap is 0, it means that the element corresponding to the bit in the sparse representation information element is zero element.
  • the sparse representation information is a matrix [0,0.25,0.9375,0.125,0,0,0.5,0], which includes 4 non-zero elements, and the bitmap in the channel feedback information is [0,1,1 ,1,0,0,1,0].
  • the channel feedback information also indicates that the values of the four non-zero elements are 0.25, 0.9375, 0.125, and 0.5, respectively. Then, after the access network device receives the channel feedback information from the terminal device, the sparse representation information can be obtained as [0,0.25,0.9375,0.125,0,0,0.5,0].
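  • A minimal sketch of this bitmap-based feedback (Method B2, Example 1), assuming NumPy; the encode/decode function names are hypothetical:

```python
import numpy as np

def encode_feedback(x):
    bitmap = (x != 0).astype(int)                 # M bits, 1 marks a non-zero element
    values = x[x != 0]                            # K non-zero values, in position order
    return bitmap, values

def decode_feedback(bitmap, values):
    x = np.zeros(len(bitmap))
    x[np.flatnonzero(bitmap)] = values            # rebuild the sparse representation
    return x

x = np.array([0, 0.25, 0.9375, 0.125, 0, 0, 0.5, 0])
bitmap, values = encode_feedback(x)               # bitmap = [0,1,1,1,0,0,1,0]
assert np.allclose(decode_feedback(bitmap, values), x)
```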
  • the channel feedback information indicates the position of each of the K non-zero elements among the M elements. Specifically, the channel feedback information indicates the positions of the K non-zero elements through K pieces of bit information, each containing a number of bits greater than or equal to ⌈log₂(M)⌉.
  • for example, the sparse representation information is the matrix [0,0.25,0.9375,0.125,0,0,0.5,0], which includes 8 elements, 4 of which are non-zero elements, and the positions of the 4 non-zero elements are 1, 2, 3 and 6 respectively; the channel feedback information then indicates, through 4 pieces of 3-bit information, that the positions of the four non-zero elements are 001, 010, 011 and 110 respectively.
  • the channel feedback information also indicates that the values of the four non-zero elements are 0.25, 0.9375, 0.125, and 0.5, respectively. Then, after the access network device receives the feedback information from the terminal device, the sparse representation information can be obtained as [0,0.25,0.9375,0.125,0,0,0.5,0].
  • the channel feedback information indicates the first pattern
  • the first pattern indicates the positions of the K non-zero elements among the M elements.
  • the first pattern is one of multiple patterns (set of candidate patterns).
  • Different patterns in the plurality of patterns indicate different positions of the K non-zero elements.
  • at least one non-zero element indicated by pattern A is a zero element in pattern B.
  • the form of a pattern can be the bitmap of the above-mentioned Example 1; or, as in the example given in Table 2B below, it can be the positions of the non-zero elements, without constraint.
  • the channel feedback information indicates that the index of the first pattern is 0, and indicates that the values of the K non-zero elements are 0.25, 0.9375, 0.125, and 0.5. Then, after receiving the feedback information from the terminal device, the access network device can obtain the sparse representation information [0,0.25,0.9375,0.125,0,0,0.5,0].
  • different candidate pattern sets may be set for different K, and each candidate pattern set is numbered independently.
  • the corresponding pattern can be determined according to the value of K (or pattern set index) and the pattern index.
  • the access network device can obtain the sparse representation information [0,0.25,0.9375,0.125,0,0,0.5,0].
  • the above multiple patterns may be stipulated in the agreement, or the access network device notifies the terminal device in advance through signaling, without limitation.
  • the terminal device can directly indicate the values of the K non-zero elements; or, as mentioned above, in order to save signaling overhead, if the values of the K non-zero elements are quantized values, then when the channel feedback information indicates the value of each non-zero element, it may indicate the index of that value among the multiple candidate values.
  • the values of the K non-zero elements are quantized values, for example, there are 16 candidate values, which are respectively multiples of 1/16, namely: 0 (index: 0000), 0.0625 (index: 0001), 0.125 (Index: 0010), 0.1875 (Index: 0011), 0.25 (Index: 0100), 0.3125 (Index: 0101), 0.375 (Index: 0110), 0.4375 (Index: 0111), 0.5 (Index: 1000 ), 0.5625 (Index: 1001), 0.625 (Index: 1010), 0.6875 (Index: 1011), 0.75 (Index: 1100), 0.8125 (Index: 1101), 0.875 (Index: 1110), 0.9375 (Index: 1111) .
  • the index of the element can be indicated by 4 bits for each element, and the channel feedback information indicates: 0100, 1111, 0010, 1000.
  • the access network device recovers the first channel information by using the sparse representation information and the channel reconstruction model.
  • after receiving the channel feedback information, the access network device obtains the sparse representation information according to the channel feedback information, inputs the sparse representation information into the channel reconstruction model, and obtains the reconstructed (restored) first channel information by inference.
  • the access network device sets the initial input of the channel reconstruction model as M zeros. That is, the input dimension of the channel reconstruction model is M dimension.
  • the access network device uses the positions and values of the K non-zero elements indicated by the feedback information to replace K zero elements in the initial input of the channel reconstruction model with the indicated K non-zero elements, and then obtains the reconstructed (recovered) first channel information by performing inference with the channel reconstruction model.
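  • A sketch of this recovery step at the access network device, again using a linear stand-in for the channel reconstruction model (the actual model may be a neural network); all names and dimensions are illustrative:

```python
import numpy as np

def recover_channel(D, positions, values, M):
    x = np.zeros(M)                               # initial input: M zeros
    x[positions] = values                         # insert the K fed-back non-zero elements
    return D @ x                                  # inference with the reconstruction model

rng = np.random.default_rng(6)
D = rng.standard_normal((128, 256))               # stand-in for the channel reconstruction model
h_rec = recover_channel(D, positions=[1, 2, 3, 6],
                        values=[0.25, 0.9375, 0.125, 0.5], M=256)
```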
  • the terminal device may also send the scaling factor of the second channel information to the access network device.
  • the access network uses the scaling factor to scale the first channel information to obtain the second channel information.
  • the scaling factor can be in the linear domain or the logarithmic domain. Wherein, there is no restriction on the base of the logarithm, for example, it may be 10, 2, a natural constant e or other possible values, without restriction.
  • for example, when the scaling factor is in the linear domain, the second channel information H2 = H1 * T, where T represents the scaling factor and H1 denotes the recovered first channel information.
  • when the scaling factor is in the logarithmic domain, for example with the base of the logarithm being 10, the scaling factor is first converted to the linear domain and then used to scale the recovered first channel information.
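  • A sketch of the scaling step with the fed-back scaling factor T; whether a logarithmic T is interpreted as 10^T or as decibels (10^(T/10)) is an assumption here that would in practice be fixed by configuration:

```python
import numpy as np

def scale_channel(H1, T, domain="linear", log_base=10.0, as_db=False):
    if domain == "linear":
        return H1 * T                              # H2 = H1 * T in the linear domain
    factor = log_base ** (T / 10.0) if as_db else log_base ** T
    return H1 * factor                             # convert log-domain T, then scale

H1 = np.ones((2, 2))
print(scale_channel(H1, 2.0))                               # linear domain: H2 = 2 * H1
print(scale_channel(H1, 3.0, domain="log", as_db=True))     # dB interpretation: ~1.995 * H1
```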
  • the method of the present disclosure can be understood as fixing one channel reconstruction model on the access network device side for various communication scenarios, such as various channel environments and/or various compression ratios. Since the terminal device uses the channel reconstruction model to obtain the sparse representation of the channel information, there is no constraint on the solution method on the terminal device side, which can meet the needs of various communication scenarios and simplify the implementation on the terminal device side.
  • the channel reconstruction model on the access network device side can be regarded as a dictionary network, which can restore channel information through sparse representation information.
  • one channel reconfiguration model can be used to apply to all communication scenarios; multiple channel reconfiguration models can also be used, and each channel reconfiguration model can be applied to multiple communication scenarios to relatively save communication resources.
  • the access network device may determine transmission parameters according to the recovered first channel information or second channel information, for data transmission with the terminal device.
  • the access network device determines channel quality information (channel quality indicator, CQI) according to the recovered first channel information or second channel information.
  • the CQI is used by the access network device to schedule a physical downlink shared channel (PDSCH), that is, it is used by the access network device to determine the PDSCH (time domain and/or frequency domain) resources, and/or transmission parameters such as the modulation and coding scheme (MCS).
  • the CQI can also be used by the access network device to schedule a physical uplink shared channel (PUSCH), that is, for the access network device to determine PUSCH (time domain and/or frequency domain) resources, and/or transmission parameters such as MCS.
  • the access network device determines the precoding matrix indicator (PMI) and/or rank indicator (RI) of the PDSCH and/or PUSCH according to the recovered first channel information or second channel information (one illustrative derivation is sketched after this description).
  • the access network device may send the PMI and RI of the PDSCH to the terminal device for the terminal device to decode data carried on the PDSCH. And/or, the access network device may send the PMI and RI of the PUSCH to the terminal device, so that the terminal device determines the data carried on the PUSCH according to the PMI and RI.
  • the access network device, the terminal device, and the network element with the AI function each include hardware structures and/or software modules corresponding to the functions described above.
  • the present disclosure can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application scenario and design constraints of the technical solution.
  • FIG. 8 and FIG. 9 are schematic structural diagrams of possible communication devices provided by the present disclosure. These communication devices can be used to realize the functions of the access network device, the terminal device, and the AI functional network element in the above method, and thus can also realize the beneficial effects of the above method.
  • a communication device 800 includes a processing unit 810 and a communication unit 820.
  • the communication device 800 is used to implement the method shown above.
  • the communication unit 820 is used to receive channel feedback information from the terminal device, where the channel feedback information is used to indicate the sparse representation information of the first channel information, the sparse representation information includes M elements, and the M elements include K non-zero elements and M-K zero elements, where M and K are positive integers; the processing unit 810 is used to determine the first channel information according to the channel reconstruction model, where the input of the channel reconstruction model is determined according to the sparse representation information.
  • the processing unit 810 is used to determine the sparse representation information of the first channel information according to the first channel information and the channel reconstruction model, wherein the sparse representation information includes M elements, and the M elements include K non-zero elements and M-K zero elements, where M and K are positive integers; the communication unit 820 is used to send channel feedback information to the access network device, and the channel feedback information is used to indicate the sparse representation information.
  • for more detailed descriptions of the processing unit 810 and the communication unit 820, refer to the related description in the foregoing method; details are not repeated here.
  • a communication device 900 includes a processor 910 and an interface circuit 920.
  • the processor 910 and the interface circuit 920 are coupled to each other.
  • the interface circuit 920 may be a transceiver, a pin, an input/output interface or other communication interfaces.
  • the communication device 900 may further include a memory 930 for storing at least one of the following: instructions executed by the processor 910, input data required by the processor 910 to execute the instructions, or data generated after the processor 910 executes the instructions.
  • the processor 910 is used to implement the functions of the processing unit 810
  • the interface circuit 920 is used to implement the functions of the communication unit 820 .
  • when the above communication device is a chip applied to a terminal device, the terminal device chip implements the functions of the terminal device in the above method.
  • the terminal device chip receives information from other modules in the terminal device (such as a radio frequency module or an antenna), where the information is sent to the terminal device by the access network device; or, the terminal device chip sends information to other modules in the terminal device (such as a radio frequency module or an antenna), where the information is sent by the terminal device to the access network device.
  • when the above communication device is a module applied to access network equipment, the access network equipment module implements the functions of the access network equipment in the above method.
  • the access network equipment module receives information from other modules in the access network equipment (such as a radio frequency module or an antenna), where the information is sent to the access network equipment by the terminal equipment; or, the access network equipment module sends information to other modules in the access network equipment (such as a radio frequency module or an antenna), where the information is sent by the access network equipment to the terminal equipment.
  • the access network equipment module here can be the baseband chip of the access network equipment, or it can be a near-real-time RIC, a CU, a DU, or another module.
  • the near real-time RIC, CU and DU here may be the near real-time RIC, CU and DU under the O-RAN architecture.
  • a processor may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical blocks disclosed in the present disclosure.
  • a general-purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed in the present disclosure may be directly performed by a hardware processor, or performed by a combination of hardware and software modules in the processor.
  • the memory may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or a volatile memory, such as a random-access memory (RAM).
  • the memory may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
  • the memory in the present disclosure may also be a circuit or any other device capable of implementing a storage function for storing program instructions and/or data.
  • the methods in the present disclosure may be fully or partially implemented by software, hardware, firmware or any combination thereof.
  • software When implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product comprises one or more computer programs or instructions. When the computer programs or instructions are loaded and executed on the computer, the processes or functions described in this application are executed in whole or in part.
  • the computer may be a general computer, a dedicated computer, a computer network, an access network device, a terminal device, a core network device, an AI function network element, or other programmable devices.
  • the computer program or instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer program or instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrating one or more available media.
  • the available medium may be a magnetic medium, such as a floppy disk, a hard disk, or a magnetic tape; it may also be an optical medium, such as a digital video disk; or it may be a semiconductor medium, such as a solid state disk.
  • the computer readable storage medium may be a volatile or a nonvolatile storage medium, or may include both volatile and nonvolatile types of storage media.
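
For illustration only, the following Python sketch walks through the feedback and reconstruction flow described above. It is not taken from the disclosure: the dimensions N, M and K, the use of a fixed linear dictionary D as a stand-in for the trained channel reconstruction (dictionary) model, and the choice of orthogonal matching pursuit as the terminal-side solver are assumptions made for this example; the disclosure places no constraint on the terminal-side solution method and describes the access-network-side model as a trained network.

```python
import numpy as np

# Hypothetical dimensions (not specified by the disclosure): the first channel
# information is flattened to an N-dimensional vector; its sparse
# representation has M elements, of which K are non-zero.
N, M, K = 64, 256, 8
rng = np.random.default_rng(0)

# Stand-in for the channel reconstruction model: a fixed linear dictionary D of
# shape (N, M). The disclosure describes a trained dictionary network; a linear
# map is used here only to keep the sketch short and runnable.
D = rng.standard_normal((N, M)) / np.sqrt(N)

def terminal_sparse_representation(h1, D, K):
    """Terminal side. One possible solver (the disclosure places no constraint
    on the method): orthogonal matching pursuit of h1 against the columns of D,
    returning the positions and values of the K non-zero elements to feed back."""
    residual = h1.copy()
    support = []
    values = np.zeros(0)
    for _ in range(K):
        idx = int(np.argmax(np.abs(D.T @ residual)))  # most correlated column
        if idx not in support:
            support.append(idx)
        values, *_ = np.linalg.lstsq(D[:, support], h1, rcond=None)
        residual = h1 - D[:, support] @ values        # refit and update residual
    return np.array(support), values

def access_network_reconstruct(positions, values, D, M):
    """Access network side: start from an M-dimensional all-zero model input,
    overwrite the indicated positions with the fed-back values, then run the
    reconstruction model (here: multiplication by the dictionary)."""
    s = np.zeros(M)
    s[positions] = values
    return D @ s

# Toy ground truth: an exactly K-sparse representation and the channel it spans.
s_true = np.zeros(M)
s_true[rng.choice(M, size=K, replace=False)] = rng.standard_normal(K)
h1 = D @ s_true

positions, values = terminal_sparse_representation(h1, D, K)  # fed back over the air
h1_hat = access_network_reconstruct(positions, values, D, M)  # recovered channel

# Optional scaling factor T fed back by the terminal (linear domain here); the
# second channel information is H2 = H1 * T. If T were fed back in the
# logarithmic domain with base 10, the linear factor would be 10 ** T_log.
T = 2.0
h2_hat = T * h1_hat
print("relative reconstruction error:", np.linalg.norm(h1 - h1_hat) / np.linalg.norm(h1))
```

The point of the sketch is the interface rather than the solver: only the positions and values of the K non-zero elements (and optionally the scaling factor T) cross the air interface, while the M-dimensional model input is rebuilt from zeros on the access network side.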
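Also for illustration only, the following sketch shows one common way, not specified by the disclosure, to map a recovered MIMO channel matrix to RI- and PMI-like transmission parameters: take its singular value decomposition, pick the rank from the dominant singular values, and use the corresponding right singular vectors as a precoder. The antenna counts and the energy threshold are arbitrary example values.

```python
import numpy as np

def rank_and_precoder_from_channel(H, energy_threshold=0.99):
    """Illustrative only: derive a rank (RI-like) and a precoding matrix
    (PMI-like) from a recovered MIMO channel matrix H of shape
    (n_rx_antennas, n_tx_antennas) via its singular value decomposition."""
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    # Keep enough layers to capture the requested fraction of the channel energy.
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    rank = int(np.searchsorted(energy, energy_threshold) + 1)
    precoder = Vh.conj().T[:, :rank]  # dominant right singular vectors
    return rank, precoder

# Example with a random 4 x 8 recovered channel (hypothetical antenna counts).
rng = np.random.default_rng(1)
H_hat = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
rank, W = rank_and_precoder_from_channel(H_hat)
print("rank:", rank, "precoder shape:", W.shape)
```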

Abstract

Provided in the present disclosure is a channel information transmission method used for saving communication resources. The method comprises: a terminal device sending channel feedback information to an access network device, the channel feedback information being used to indicate sparse representation information of first channel information, the sparse representation information comprising M elements, the M elements comprising K non-zero elements and M-K zero elements, and M and K being positive integers; and the access network device recovering the first channel information on the basis of the sparse representation information and a channel reconstruction model.
PCT/CN2023/070013 2021-12-31 2023-01-03 Procédé et appareil de transmission d'informations de canal Ceased WO2023126007A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111660657.7 2021-12-31
CN202111660657.7A CN116436551A (zh) 2021-12-31 2021-12-31 信道信息传输方法及装置

Publications (1)

Publication Number Publication Date
WO2023126007A1 true WO2023126007A1 (fr) 2023-07-06

Family

ID=86998197

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/070013 Ceased WO2023126007A1 (fr) 2021-12-31 2023-01-03 Procédé et appareil de transmission d'informations de canal

Country Status (2)

Country Link
CN (1) CN116436551A (fr)
WO (1) WO2023126007A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116738354B (zh) * 2023-08-15 2023-12-08 国网江西省电力有限公司信息通信分公司 一种电力物联网终端行为异常检测方法及系统
CN119727816A (zh) * 2023-09-28 2025-03-28 华为技术有限公司 通信方法、装置及可读存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107171702A (zh) * 2017-05-12 2017-09-15 重庆大学 基于pca演进的大规模mimo信道反馈方法
US20190356516A1 (en) * 2018-05-18 2019-11-21 Parallel Wireless, Inc. Machine Learning for Channel Estimation
CN108847876A (zh) * 2018-07-26 2018-11-20 东南大学 一种大规模mimo时变信道状态信息压缩反馈及重建方法
CN111464220A (zh) * 2020-03-10 2020-07-28 西安交通大学 一种基于深度学习的信道状态信息重建方法

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2025209315A1 (fr) * 2024-03-30 2025-10-09 华为技术有限公司 Procédé et appareil de communication

Also Published As

Publication number Publication date
CN116436551A (zh) 2023-07-14

Similar Documents

Publication Publication Date Title
US12457017B2 (en) Communication method and apparatus
WO2023126007A1 (fr) Procédé et appareil de transmission d'informations de canal
US20240356595A1 (en) Uplink precoding method and apparatus
US20250039881A1 (en) Uplink signal sending and receiving method and apparatus
US20240348478A1 (en) Communication method and apparatus
EP4447524A1 (fr) Procédé et appareil de communication
US20250096861A1 (en) Communication method and apparatus
US20240171429A1 (en) Communication method and apparatus
CN116260552A (zh) 一种csi发送和接收方法及装置
WO2023279947A1 (fr) Procédé et appareil de communication
WO2024008004A1 (fr) Procédé et appareil de communication
CN118118133A (zh) 一种通信方法及装置
EP4503463A1 (fr) Procédé et appareil de communication
CN118509014A (zh) 一种通信的方法和通信装置
CN118784033A (zh) 一种通信的方法和通信装置
US20250088258A1 (en) Model application method and apparatus
WO2025067480A1 (fr) Procédé, appareil et système de communication
CN120785389A (zh) 一种信道反馈信息传输方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23735112

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 23735112

Country of ref document: EP

Kind code of ref document: A1