
WO2025098261A1 - Communication method and communication device - Google Patents


Info

Publication number
WO2025098261A1
WO2025098261A1 (application PCT/CN2024/129364, CN2024129364W)
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
communication device
information
blocks
network blocks
Prior art date
Legal status
Pending
Application number
PCT/CN2024/129364
Other languages
French (fr)
Chinese (zh)
Inventor
郭子阳
张文凯
程敏
刘鹏
杨讯
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of WO2025098261A1
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B 7/00 Radio transmission systems, i.e. using radiation field
    • H04B 7/02 Diversity systems; multi-antenna systems, i.e. transmission or reception using multiple antennas
    • H04B 7/04 Diversity systems using two or more spaced independent antennas
    • H04B 7/0413 MIMO systems
    • H04B 7/0417 Feedback systems
    • H04B 7/0456 Selection of precoding matrices or codebooks, e.g. using antenna weighting matrices
    • H04B 7/06 Diversity systems using two or more spaced independent antennas at the transmitting station

Definitions

  • the present application relates to the field of communication technology, and more specifically, to a communication method and a communication device.
  • Wireless communications are developing rapidly.
  • The sixth-generation wireless local area network (wireless fidelity, Wi-Fi) standard, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11ax standard, has been commercialized, and next-generation wireless technology and standardization work are in full swing around the world.
  • Wireless communications have penetrated into all aspects of daily life and work and have become an indispensable part.
  • New wireless technologies, new terminals, and new applications have made wireless networks unprecedentedly complex. It is foreseeable that wireless networks will become more and more complex in the future.
  • Using artificial intelligence (AI) as an effective tool for wireless network design and management has become an industry consensus.
  • Beamforming technology requires the receiving end to feed back channel information (such as channel state information (CSI)).
  • The current standard uses a channel information feedback method based on Givens rotation. As bandwidth and the number of antennas increase, the overhead of channel state information feedback based on Givens rotation increases greatly.
  • With the development of AI technology, how to perform channel information feedback based on AI has become an urgent problem to be solved.
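To make the overhead argument concrete, the growth of Givens-rotation feedback can be sketched numerically. The sketch below assumes the 802.11-style angle count for an Nr x Nc compressed beamforming matrix and an illustrative average of 7 bits per angle; the function names, the bit budget, and the subcarrier counts are illustrative assumptions, not taken from the application.

```python
def givens_angle_count(nr: int, nc: int) -> int:
    """Number of Givens angles representing an nr x nc compressed
    beamforming matrix (802.11-style compressed feedback)."""
    return nc * (2 * nr - nc - 1)

def feedback_bits(nr: int, nc: int, subcarriers: int, bits_per_angle: int = 7) -> int:
    # Every reported subcarrier carries all angles; bits_per_angle is an
    # illustrative average over the phi/psi quantization resolutions.
    return givens_angle_count(nr, nc) * subcarriers * bits_per_angle

# Overhead grows with both the number of antennas and the bandwidth:
small = feedback_bits(2, 2, 56)    # few antennas, narrow band (illustrative)
large = feedback_bits(8, 2, 484)   # more antennas, wider band (illustrative)
```

With these illustrative numbers, the wide-band multi-antenna report is more than an order of magnitude larger than the small one, which is the motivation for replacing Givens-rotation feedback with a learned, compressed representation.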
  • the present application provides a communication method for performing channel information feedback based on AI.
  • a communication method is provided, which can be executed by a first communication device, or can also be executed by a component (such as a chip or circuit) of the first communication device, without limitation.
  • the following description is based on the first communication device as an example.
  • the communication method includes: a first communication device obtains a first neural network block, the first neural network block is used for channel information feedback, and neural networks containing different numbers of the first neural network blocks have different functions; the first communication device sends parameters corresponding to N first neural network blocks to a second communication device, the N first neural network blocks are included in a first neural network, and the first neural network is used for encoding processing of channel information, wherein N is a positive integer.
  • Based on the above scheme, the first communication device can obtain the first neural network block, and neural networks containing different numbers of first neural network blocks implement different functions. For example, the more first neural network blocks a neural network contains, the higher its computational complexity and the higher the compression ratio it achieves when compressing channel information, which reduces feedback overhead and increases system throughput. Furthermore, the first communication device can provide the second communication device with the parameters corresponding to the N first neural network blocks, so that the second communication device can determine the first neural network based on the received parameters and, in the subsequent channel information feedback process, encode the channel information with the first neural network, achieving AI-based channel information feedback.
  • In this way, neural networks with different functions can be obtained from a neural network block of a single structure, without training a separate neural network for each function, thereby reducing the management overhead and storage overhead of the neural networks.
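As a minimal illustration of the idea that stacking different numbers of structurally identical blocks yields encoders with different functions, the following NumPy sketch composes copies of a toy fully connected block (weight, bias, activation) into an encoder; all class and function names are hypothetical stand-ins, not the application's actual structures.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class Block:
    """One 'first neural network block': weight, bias, and activation."""
    def __init__(self, dim: int, rng: np.random.Generator):
        self.w = rng.standard_normal((dim, dim)) * 0.1
        self.b = np.zeros(dim)

    def __call__(self, x: np.ndarray) -> np.ndarray:
        return relu(x @ self.w + self.b)

def build_encoder(blocks, out_w):
    """Compose blocks of identical structure into an encoder; the output
    layer out_w maps the last block's output to the compressed vector."""
    def encoder(x):
        for blk in blocks:
            x = blk(x)
        return x @ out_w
    return encoder

rng = np.random.default_rng(0)
dim, code_dim = 16, 4
blocks = [Block(dim, rng) for _ in range(3)]
out_w = rng.standard_normal((dim, code_dim)) * 0.1

# Different numbers of the same blocks realize different functions
# (different depth, complexity, and compression behavior):
enc_shallow = build_encoder(blocks[:1], out_w)
enc_deep = build_encoder(blocks, out_w)

csi = rng.standard_normal(dim)        # stand-in for channel information
v_shallow, v_deep = enc_shallow(csi), enc_deep(csi)
```

Because every block shares one structure, a device only needs to store and manage one block definition plus per-block parameters, rather than a separate trained network per function.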
  • In some implementations, before the first communication device sends the parameters corresponding to the N first neural network blocks to the second communication device, the method also includes: the first communication device receives first indication information from the second communication device, where the first indication information is used to indicate N.
  • Based on the above scheme, the second communication device can notify the first communication device of the required number N of first neural network blocks through the first indication information, so that the first communication device can send the parameters corresponding to that number of first neural network blocks according to the needs of the second communication device, avoiding the situation where the parameters provided by the first communication device do not meet the requirements of the second communication device.
  • the method also includes: the first communication device receives second indication information from the second communication device, the second indication information is used to indicate M, and M is a positive integer; the first communication device sends parameters corresponding to M first neural network blocks to the second communication device, wherein the M first neural network blocks and the N first neural network blocks are included in the second neural network.
  • the second communication device can request the parameters of the newly added first neural network blocks from the first communication device, without requesting the first communication device to provide the parameters corresponding to all the first neural network blocks included in the updated neural network. For example, if the updated neural network includes Q first neural network blocks, which is M more than the first neural network, then it is sufficient to request the first communication device to provide the parameters corresponding to the M first neural network blocks, without requesting the first communication device to provide the parameters corresponding to the Q first neural network blocks, thereby effectively reducing the transmission overhead of the neural network update.
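The incremental update described above can be sketched as follows; the dictionaries and function names are illustrative stand-ins for the block parameters and signaling messages, not the application's actual formats.

```python
# The second device already holds N block parameter sets; to move to a
# Q-block network it requests only the M = Q - N newly added blocks.

def request_update(have_n: int, want_q: int) -> int:
    """Second indication information: the number M of additional blocks."""
    assert want_q > have_n
    return want_q - have_n

def apply_update(local_blocks: list, new_blocks: list) -> list:
    """Second neural network = the existing N blocks plus the received M."""
    return local_blocks + new_blocks

local = [{"id": i} for i in range(2)]          # N = 2 blocks held locally
m = request_update(have_n=len(local), want_q=5)
received = [{"id": 2 + i} for i in range(m)]   # parameters sent by the peer
updated = apply_update(local, received)        # Q = 5 blocks in total
```

Only M parameter sets cross the air interface, not all Q, which is the transmission-overhead saving the scheme claims.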
  • the method also includes: the first communication device determines the maximum number P of first neural network blocks that the neural network can include, P is a positive integer greater than or equal to N; the first communication device sends the parameters corresponding to the N first neural network blocks to the second communication device, including: the first communication device sends the parameters corresponding to the P first neural network blocks to the second communication device.
  • Based on the above scheme, the first communication device can, according to the maximum number of first neural network blocks that the neural network can contain, broadcast the parameters corresponding to the P first neural network blocks to the second communication device.
  • the first communication device actively sends the parameters corresponding to multiple first neural network blocks by broadcasting, without the need for the second communication device to initiate a request process, thereby reducing the signaling overhead of the second communication device.
  • In some implementations, in addition to feeding back the first vector obtained by processing the channel information with a neural network, the second communication device also carries third indication information in the feedback information to indicate which neural network structure was used to obtain the currently fed-back first vector, which helps the first communication device select a suitable neural network to parse the first vector and obtain channel information with high accuracy.
  • the parameters of the first neural network block include at least one of the following: weight information, bias information, or activation function information corresponding to the first neural network block.
  • In some implementations, the first neural network block supports at least one of the following neural network structures: a convolutional neural network (CNN), a multi-layer perceptron (MLP), or a Transformer.
  • the connection method between the multiple first neural network blocks includes deep connection and/or wide connection.
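A toy illustration of the two connection methods, with simple callables standing in for first neural network blocks; combining wide outputs by summation is one possible choice (concatenation is another), and all names here are illustrative.

```python
import numpy as np

def deep_connect(blocks, x):
    """Deep connection: blocks applied in sequence (function composition)."""
    for f in blocks:
        x = f(x)
    return x

def wide_connect(blocks, x):
    """Wide connection: blocks applied in parallel, outputs combined
    (here by summation)."""
    return sum(f(x) for f in blocks)

# Two toy 'blocks' of identical structure (an affine map each):
b1 = lambda x: 2.0 * x
b2 = lambda x: x + 1.0

x = np.array([1.0, -1.0])
deep = deep_connect([b1, b2], x)   # b2(b1(x)) = 2x + 1
wide = wide_connect([b1, b2], x)   # b1(x) + b2(x) = 3x + 1
```

The same set of blocks thus yields different overall functions depending on how they are connected, in addition to how many of them are used.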
  • the method further includes: the first communication device sending parameters of an output layer of the first neural network to the second communication device.
  • the method also includes: the first communication device sends at least one of the following information to the second communication device: information indicating a quantization method, information indicating a number of quantization bits, and information indicating a feature embedding method.
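As an illustration of the quantization-related information above, a minimal uniform quantizer parameterized by the number of quantization bits might look like the following sketch; it assumes a non-constant input vector, and the function names are illustrative, not the application's signaling.

```python
import numpy as np

def uniform_quantize(v: np.ndarray, bits: int):
    """Quantize each entry of the encoded vector v to 2**bits uniform
    levels over [min(v), max(v)]; returns integer indices plus the
    (lo, hi) range needed to dequantize. Assumes max(v) > min(v)."""
    lo, hi = float(v.min()), float(v.max())
    levels = 2 ** bits - 1
    idx = np.round((v - lo) / (hi - lo) * levels).astype(int)
    return idx, (lo, hi)

def uniform_dequantize(idx: np.ndarray, lo_hi, bits: int) -> np.ndarray:
    lo, hi = lo_hi
    levels = 2 ** bits - 1
    return lo + idx / levels * (hi - lo)

v = np.array([-1.0, -0.25, 0.5, 1.0])   # stand-in for the first vector
idx, rng = uniform_quantize(v, bits=4)
v_hat = uniform_dequantize(idx, rng, bits=4)
```

Signaling the quantization method and the number of bits lets both ends agree on how the first vector is mapped to bits on the air interface and recovered afterwards.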
  • a communication method is provided, which can be executed by a second communication device, or can also be executed by a component (such as a chip or circuit) of the second communication device, without limitation.
  • the second communication device is used as an example for description below.
  • the communication method includes: the second communication device obtains parameters corresponding to N first neural network blocks respectively, the first neural network blocks are used for channel information feedback, and the functions of neural networks containing different numbers of the first neural network blocks are different; the second communication device determines the first neural network based on the N first neural network blocks, and the first neural network is used for the second communication device to encode the channel information, wherein N is a positive integer.
  • the second communication device obtains parameters corresponding to the N first neural network blocks respectively, including: the second communication device receives the parameters corresponding to the N first neural network blocks respectively from the first communication device.
  • In some implementations, before the second communication device receives the parameters corresponding to the N first neural network blocks, the method also includes: the second communication device determines, based on first information, the number N of first neural network blocks included in the required first neural network, where the first information includes the capabilities of the second communication device and/or the requirements for processing the channel information; the second communication device sends first indication information to the first communication device, where the first indication information is used to indicate N.
  • In some implementations, the method also includes: the second communication device determines, based on second information, that the required second neural network includes Q first neural network blocks, where Q is a positive integer greater than N and the difference between Q and N is M, and the second information includes the capabilities of the second communication device and/or the requirements for processing the channel information; the second communication device sends second indication information to the first communication device, where the second indication information is used to indicate M; the second communication device receives the parameters corresponding to the M first neural network blocks from the first communication device.
  • the method also includes: the second communication device receives parameters corresponding to P first neural network blocks from the first communication device, where P is a positive integer greater than or equal to N; the second communication device determines the N first neural network blocks from the P first neural network blocks based on second information, where the second information includes the capabilities of the second communication device and/or the need to process the channel information.
  • the method also includes: the second communication device sends first information to the first communication device, the first information includes third indication information and a first vector, the third indication information is used to indicate N, and the first vector is a result of encoding the channel information through the first neural network.
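A toy sketch of this feedback format, with the third indication information selecting among decoders keyed by the block count N; the message layout and all names are illustrative, not the application's actual frame format.

```python
# The fed-back 'first information' carries both the third indication
# information (which block count N produced the vector) and the first
# vector itself, so the receiving side can pick the matching decoder.

def make_feedback(n_blocks: int, first_vector: list) -> dict:
    return {"third_indication": n_blocks, "first_vector": first_vector}

def parse_feedback(msg: dict, decoders: dict):
    """Select the decoder matching the indicated block count N."""
    decoder = decoders[msg["third_indication"]]
    return decoder(msg["first_vector"])

decoders = {
    2: lambda v: [x * 2 for x in v],   # stand-in for a 2-block decoder
    4: lambda v: [x * 4 for x in v],   # stand-in for a 4-block decoder
}
msg = make_feedback(4, [0.5, -0.5])
recovered = parse_feedback(msg, decoders)
```

Without the indication, the first communication device could apply a decoder mismatched to the encoder depth and recover channel information with low accuracy.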
  • the method further includes: the second communication device receiving parameters of the output layer of the first neural network from the first communication device.
  • a communication device which is used to execute the method provided in the first and second aspects.
  • the communication device may include units and/or modules, such as a processing unit and an acquisition unit, for executing the method provided in any one of the above implementations of the first and second aspects.
  • the transceiver unit may be a transceiver, or an input/output interface; the processing unit may be at least one processor.
  • the transceiver may be a transceiver circuit.
  • the input/output interface may be an input/output circuit.
  • the transceiver unit may be an input/output interface, interface circuit, output circuit, input circuit, pin or related circuit on the chip, chip system or circuit;
  • the processing unit may be at least one processor, processing circuit or logic circuit.
  • the communication device is the above-mentioned first communication device or a component (such as a chip or circuit) of the first communication device, and the communication device includes:
  • a processing unit is used to obtain a first neural network block, wherein the first neural network block is used for channel information feedback, and the functions of neural networks containing different numbers of the first neural network blocks are different.
  • a transceiver unit is used to send parameters corresponding to N first neural network blocks to a second communication device, wherein the N first neural network blocks are included in a first neural network, and the first neural network is used for the second communication device to encode channel information, wherein N is a positive integer.
  • the communication device is the above-mentioned second communication device or a component (such as a chip or circuit) of the second communication device, and the communication device includes:
  • a transceiver unit is used to obtain parameters corresponding to N first neural network blocks, wherein the first neural network blocks are used for channel information feedback, and neural networks containing different numbers of the first neural network blocks have different functions.
  • a processing unit is used to determine a first neural network based on the N first neural network blocks, wherein the first neural network is used for the second communication device to encode channel information, wherein N is a positive integer.
  • the present application provides a processor for executing the methods provided in the above aspects.
  • A computer-readable storage medium is provided, which stores program code for execution by a device, wherein the program code is used to execute the method provided by any one of the implementations of the first and second aspects above.
  • a computer program product comprising instructions
  • When the computer program product is run on a computer, the computer executes the method provided by any one of the implementations of the first and second aspects above.
  • A chip is provided, comprising a processor and a communication interface, wherein the processor reads instructions stored in a memory through the communication interface to execute the method provided by any one of the implementations of the first and second aspects above.
  • the chip also includes a memory, in which a computer program or instructions are stored, and the processor is used to execute the computer program or instructions stored in the memory.
  • the processor is used to execute the method provided by any implementation method of the first and second aspects above.
  • a communication system comprising a first communication device for executing the method provided in the first aspect and a second communication device for executing the method provided in the second aspect.
  • FIG. 1 is a schematic diagram of an application scenario to which an embodiment of the present application is applicable.
  • FIG. 2 is a schematic diagram of a device structure provided in the present application.
  • FIG. 3 is a schematic diagram of a neural network.
  • FIG. 4 is a schematic diagram of calculating neuron output.
  • FIG. 5 is a schematic diagram of a method for AI-based CSI feedback.
  • FIG. 6 is a schematic flowchart of a communication method provided in an embodiment of the present application.
  • FIG. 7 (a) to (d) are schematic diagrams of a neural network provided in an embodiment of the present application.
  • FIG. 8 (a) and (b) are schematic diagrams of the structure of a neural network for different requirements provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of neural network parameters provided in an embodiment of the present application.
  • FIG. 10 is another schematic diagram of neural network parameters provided in an embodiment of the present application.
  • FIG. 11 is a schematic block diagram of a communication device provided in an embodiment of the present application.
  • FIG. 12 is a schematic diagram of another communication device provided in an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a chip system provided in an embodiment of the present application.
  • The term “used for indication” may include being used for direct indication and being used for indirect indication.
  • When describing that certain indication information is used for indicating A, it may mean that the indication information directly indicates A or indirectly indicates A, but it does not mean that the indication information must carry A.
  • the information indicated by the indication information is called the information to be indicated.
  • the information to be indicated can be directly indicated, such as the information to be indicated itself or the index of the information to be indicated.
  • the information to be indicated can also be indirectly indicated by indicating other information, wherein there is an association between the other information and the information to be indicated. It is also possible to indicate only a part of the information to be indicated, while the other parts of the information to be indicated are known or agreed in advance.
  • the indication of specific information can also be achieved with the help of the arrangement order of each piece of information agreed in advance (for example, stipulated by the protocol), thereby reducing the indication overhead to a certain extent.
  • the common parts of each piece of information can also be identified and indicated uniformly to reduce the indication overhead caused by indicating the same information separately.
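The overhead saving from a pre-agreed arrangement order can be illustrated with a small sketch: if the protocol fixes the field order in advance, only the values travel on the wire, with no per-field tags. The field names and ordering below are hypothetical examples, not taken from the application.

```python
# Arrangement order agreed in advance (e.g. stipulated by the protocol):
FIELD_ORDER = ("quant_method", "quant_bits", "embed_method")

def pack(fields: dict) -> list:
    """Emit values in the agreed order; no field names on the wire."""
    return [fields[name] for name in FIELD_ORDER]

def unpack(values: list) -> dict:
    """Recover each field purely from the agreed position of its value."""
    return dict(zip(FIELD_ORDER, values))

wire = pack({"quant_bits": 6, "quant_method": "uniform", "embed_method": "linear"})
decoded = unpack(wire)
```

Because both ends share FIELD_ORDER, the indication overhead of naming each field is avoided, which is the reduction the passage above describes.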
  • words such as “exemplary” or “for example” are used to indicate examples, illustrations or descriptions. Any embodiment or design described as “exemplary” or “for example” in the present application should not be interpreted as being more preferred or more advantageous than other embodiments or designs. Specifically, the use of words such as “exemplary” or “for example” is intended to present related concepts in a specific way.
  • The “storage” involved in the embodiments of the present application may refer to storage in one or more memories.
  • the one or more memories may be separately set or integrated in an encoder or decoder, a processor, or a communication device.
  • the one or more memories may also be partially separately set and partially integrated in a decoder, a processor, or a communication device.
  • the type of memory may be any form of storage medium, which is not limited by the present application.
  • The term “protocol” may refer to a standard protocol in the communication field, for example, the NR protocol and related protocols used in future communication systems; this application does not limit this.
  • The term “and/or” in this document only describes an association relationship between associated objects, indicating that three relationships may exist.
  • For example, A and/or B can mean: A exists alone, both A and B exist, or B exists alone.
  • the character "/" in this article generally indicates that the associated objects before and after are in an "or" relationship.
  • The technical solutions of the embodiments of the present application can be applied to wireless local area network (WLAN) systems adopting the Institute of Electrical and Electronics Engineers (IEEE) 802.11 related standards, such as the 802.11a/b/g, 802.11n, 802.11ac, and 802.11ax standards, and next-generation Wi-Fi protocols such as 802.11be (Wi-Fi 7, extremely high throughput (EHT)), as well as 802.11ad, 802.11ay, 802.11bf, and the generation after 802.11be (such as Wi-Fi 8). They can also be applied to wireless personal area network systems based on ultra-wide band (UWB), such as the 802.15 series standards, to sensing systems, such as the 802.11bf series standards, and to the 802.11bn standard or the ultra-high reliability (UHR) standard.
  • The 802.11n standard is called the high throughput (HT) standard.
  • The 802.11ac standard is called the very high throughput (VHT) standard.
  • The 802.11ax standard is called the high efficiency (HE) standard.
  • The 802.11be standard is called the extremely high throughput (EHT) standard.
  • 802.11bf includes two major standards: low frequency (for example, sub7GHz) and high frequency (for example, 60GHz).
  • the implementation of sub7GHz mainly relies on 802.11ac, 802.11ax, 802.11be and the next generation standards
  • the implementation of 60GHz mainly relies on 802.11ad, 802.11ay and the next generation standards.
  • 802.11ad can also be called directional multi-gigabit (DMG) standard
  • 802.11ay can also be called enhanced directional multi-gigabit (EDMG) standard.
  • Although the embodiments of the present application are mainly described by taking the deployment of WLAN networks, especially networks using the IEEE 802.11 system standard, as an example, those skilled in the art will readily understand that the various aspects involved in the embodiments of the present application can be extended to other networks that adopt various standards or protocols, such as high performance radio local area networks (HIPERLAN), wireless wide area networks (WWAN), wireless personal area networks (WPAN), or other networks now known or developed later. Therefore, regardless of the coverage range and wireless access protocol used, the various aspects provided in the embodiments of the present application can be applied to any suitable wireless network.
  • The embodiments of the present application are also applicable to other communication systems, such as a wireless fidelity (Wi-Fi) system, a long term evolution (LTE) system, an LTE frequency division duplex (FDD) system, an LTE time division duplex (TDD) system, a universal mobile telecommunications system (UMTS), a worldwide interoperability for microwave access (WiMAX) communication system, a fifth generation (5G) or new radio (NR) system, a next-generation communication system, an internet of things (IoT) network, or vehicle to everything (V2X), etc.
  • FIG1 is a schematic diagram of an application scenario applicable to an embodiment of the present application.
  • the communication method provided by the present application is applicable to data communication between an access point (AP) and a station (STA), wherein the station may be a non-access point station (non-AP STA), referred to as a non-AP station or STA.
  • the scheme of the present application is applicable to data communication between an AP and one or more non-AP stations (for example, data communication between AP1 and non-AP STA1, non-AP STA2), and also to data communication between APs (for example, data communication between AP1 and AP2), and data communication between non-AP STAs and non-AP STAs (for example, data communication between non-AP STA2 and non-AP STA3).
  • the access point can be a node for terminals (such as mobile phones) to enter the wired (or wireless) network. It is mainly deployed in homes, buildings and parks, with a typical coverage radius of tens to hundreds of meters. Of course, it can also be deployed outdoors.
  • the access point is equivalent to a bridge connecting the wired network and the wireless network. Its main function is to connect various wireless network clients together and then connect the wireless network to the Ethernet.
  • the access point can be a terminal or network device with a Wi-Fi chip
  • The network device can be a server, a router, a switch, a bridge, a computer, a mobile phone, a relay station, a vehicle-mounted device, a wearable device, a network device in a 5G network, a network device in a future communication network, or a network device in a public land mobile network (PLMN), etc.
  • The embodiments of the present application are not limited to this.
  • the access point may be a device that supports the Wi-Fi standard.
  • the access point may also support one or more standards of the IEEE 802.11 series, such as 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.11ax, 802.11be, 802.11ad, 802.11ay, 802.11bn, 802.11bf, etc.
  • A non-AP station may be a wireless communication chip, a wireless sensor, or a wireless communication terminal, etc., and may also be referred to as a user, user equipment (UE), an access terminal, a user unit, a user station, a mobile station, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, or a user device.
  • a non-AP site may be a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device with wireless communication function, a computing device or other processing device connected to a wireless modem, a vehicle-mounted device, an Internet of Things device, a wearable device, a terminal device in a 5G network, a terminal device in a future communication network or a terminal device in a PLMN, etc., and the embodiments of the present application are not limited to this.
  • A non-AP station may be a device that supports the WLAN standard.
  • a non-AP site can support one or more standards in the IEEE 802.11 series, such as 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.11ax, 802.11be, 802.11ad, 802.11ay, and 802.11bf.
  • Non-AP stations can be mobile phones, tablets, set-top boxes, smart TVs, smart wearable devices, vehicle-mounted communication equipment, computers, Internet of Things (IoT) nodes, and sensors, as well as smart home devices such as smart cameras, smart remote controls, and smart water and electricity meters, and sensors in smart cities.
  • the above-mentioned AP or non-AP site may include a transmitter, a receiver, a memory, a processor, etc., wherein the transmitter and the receiver are respectively used for sending and receiving packet structures, the memory is used to store signaling information and store preset values agreed in advance, etc., and the processor is used to parse signaling information, process related data, etc.
  • FIG2 shows a communication device provided by the present application, and the device shown in FIG2 can be an AP or a non-AP site.
  • the communication device involved in this embodiment includes a neural network processing unit (NPU), which is used to train neural network parameters.
  • the communication device may also include one or more of the following: a central processing unit, a medium access control (MAC) layer processing module, a transceiver, or an antenna, etc.
  • the NPU includes a training module and an inference module.
  • the trained neural network parameters can be fed back to the inference module.
  • the NPU can act on various other modules of the network node.
  • the NPU can act on the central processing unit, MAC, transceiver, or antenna.
  • the NPU can act on the AI tasks of each module. For example, the NPU interacts with the transceiver to decide whether to switch the transceiver on or off for energy saving; for another example, the NPU interacts with the antenna to control the direction of the antenna; for another example, the NPU interacts with the MAC to control channel access, channel selection, etc.
  • FIG. 2 is only an example of a device provided by the present application and does not constitute a limitation of the present application.
  • the device may also include a controller and/or a scheduler.
  • the function of the NPU in the device may be implemented by a central processing unit.
  • the modules included in the device may have other names, which are not illustrated one by one here.
  • Neural network: a machine learning technology that simulates the neural network of the human brain in order to achieve artificial-intelligence-like capabilities.
  • a neural network consists of at least 3 layers: an input layer, an intermediate layer (also called a hidden layer), and an output layer. Deeper neural networks may contain more hidden layers between the input layer and the output layer.
  • FIG. 3 is a schematic diagram of a fully connected neural network containing 3 layers.
  • the neural network includes 3 layers, namely the input layer, the hidden layer and the output layer, wherein the input layer has 3 neurons, the hidden layer has 4 neurons, the output layer has 2 neurons, and each layer of neurons is fully connected to the neurons in the next layer.
  • Each connection between neurons corresponds to a weight, which can be updated through training.
  • the structure of the neural network, that is, the number of neurons contained in each layer and how the output of a previous neuron is input into a subsequent neuron (that is, the connection relationship between neurons), together with the parameters of the neural network, that is, the weights and biases, fully determines all the information about the neural network.
  • Each neuron may have multiple input connections, and each neuron calculates the output based on the input.
  • a neuron contains 3 inputs, 1 output, and 2 calculation functions.
  • the calculation formula for the output can be expressed as: y = f(w1·x1 + w2·x2 + w3·x3 + b) (1-1), where the weighted sum and the activation function f are the two calculation functions of the neuron.
  • the output layer may not have the activation function calculated, that is, the above formula (1-1) can be transformed into: y = w1·x1 + w2·x2 + w3·x3 + b (1-2).
  • the output of a k-layer neural network can be expressed as: y = fk(wk·fk-1(… f1(w1·x + b1) …) + bk), where:
  • x represents the input of the neural network
  • y represents the output of the neural network
  • wi represents the weight of the i-th layer of the neural network
  • bi represents the bias of the i-th layer of the neural network
  • fi represents the activation function of the i-th layer of the neural network.
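The layer-by-layer computation expressed by the formulas above can be sketched as follows (a minimal illustration; the layer sizes match the 3-4-2 network of FIG. 3, while the random weights and the choice of ReLU are assumptions for demonstration only):

```python
import numpy as np

def relu(x):
    # A common activation function f; no particular choice is fixed here.
    return np.maximum(0.0, x)

def forward(x, weights, biases, activations):
    # Computes y = fk(wk * fk-1(... f1(w1*x + b1) ...) + bk) layer by layer.
    h = x
    for w, b, f in zip(weights, biases, activations):
        h = f(w @ h + b)
    return h

# A 3-layer network as in FIG. 3: 3 input neurons, 4 hidden neurons, 2 outputs.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
biases = [np.zeros(4), np.zeros(2)]
# The output layer omits the activation, as in formula (1-2).
y = forward(np.ones(3), weights, biases, [relu, lambda v: v])
```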
  • AI-based CSI feedback solutions include but are not limited to the following two solutions:
  • Solution 1: Use AI methods to maximize the CSI compression ratio and reduce feedback overhead, thereby improving system throughput.
  • solution 1 may be:
  • the STA performs CSI feedback according to the standard method (e.g., the CSI feedback method based on Givens rotation), and the AP uses the collected data to train the encoder and decoder.
  • the encoder is sent to the STA.
  • the precoding matrix V is input into the encoder, and the output vector m is obtained and fed back to the AP.
  • the AP uses the decoder to recover the precoding matrix based on the fed-back vector m.
  • FIG5 is a schematic diagram of a method for CSI feedback based on AI, comprising the following steps:
  • the AP sends an NDP to the STA.
  • the NDP is used for channel measurement.
  • STA performs channel estimation. Specifically, after performing channel estimation based on the NDP, the STA obtains the current channel H, performs SVD on H to obtain the precoding matrix V, and performs a Givens rotation operation on V to convert it into two types of angles, φ and ψ.
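The SVD step can be sketched as follows (a hypothetical 2x4 channel matrix is assumed purely for illustration; the subsequent Givens rotation decomposition into the angles φ and ψ is not shown):

```python
import numpy as np

# Hypothetical channel matrix H (Nr = 2 receive x Nt = 4 transmit antennas).
rng = np.random.default_rng(1)
H = rng.normal(size=(2, 4)) + 1j * rng.normal(size=(2, 4))

# SVD of H; the right singular vectors form the precoding matrix V.
U, S, Vh = np.linalg.svd(H, full_matrices=False)
V = Vh.conj().T  # Nt x Ns, where Ns is the number of spatial streams

# V has orthonormal columns: V^H V = I.
assert np.allclose(V.conj().T @ V, np.eye(V.shape[1]))
```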
  • the STA sends φ and ψ to the AP.
  • the AP determines an encoder and a decoder. Specifically, the AP trains the encoder and the decoder using the collected data.
  • S550 The AP sends the encoder to the STA.
  • steps S510 to S550 are the aforementioned training phase, during which the STA performs CSI feedback according to a standard method.
  • S560 The AP sends an NDP to the STA.
  • STA performs channel estimation. Specifically, after performing channel measurement, STA inputs the precoding matrix V into the encoder to obtain an output result m.
  • the STA sends m to the AP.
  • the AP determines the precoding matrix. Specifically, the AP decodes m based on the decoder to obtain the recovered precoding matrix.
  • Solution 2: Use AI methods to reduce the computational complexity of the CSI feedback process. On the basis of ensuring that the feedback amount is no greater than that of the Givens-rotation-based method used in the current standard, a neural network is used to replace the Givens rotation calculation to reduce the computational complexity.
  • Solution 2 may be: after the STA performs channel measurement, the precoding matrix V is input into the encoder (e.g., a fully connected neural network) to obtain the output result m'.
  • the focus is on reducing the complexity of the CSI feedback calculation process by reducing the computational complexity of determining m' based on the precoding matrix V. Therefore, the encoder used by the STA in Solution 2 has a different structure from the encoder used by the STA in Solution 1.
  • the present application provides a communication method for obtaining a neural network block that can be included in a neural network (or encoder) with different functions to meet the needs of neural networks in different scenarios and improve the performance of channel information feedback based on AI.
  • the communication method will be described in detail below in conjunction with FIG6.
  • the embodiments of the present application can be applied to a plurality of different scenarios, including the scenario shown in FIG1 , but are not limited to the scenario. For example, it can also be applied to 5G, next generation communication systems or future communication systems.
  • the embodiments shown below do not particularly limit the specific structure of the execution subject of the method provided by the embodiments of the present application.
  • the execution subject of the method provided by the embodiments of the present application may be a receiving device or a sending device, or a functional module in the receiving device or the sending device that can call and execute the program.
  • the first communication device involved in the embodiment of the present application may be an access point AP, and the second communication device may be a non-access point non-AP (such as a STA); or, the first communication device may be a STA, and the second communication device may be an AP; or, both the first communication device and the second communication device may be access point APs; or, both the first communication device and the second communication device may be non-access point non-APs; or, the first communication device may be a network device (or a centralized unit (central unit, CU) or a distributed unit (distributed unit, DU) in the network device), and the second communication device may be a terminal device; or, the first communication device may be a network device in an open radio access network (open radio access network, O-RAN) (or a CU therein (such as an open CU, O-CU)), and the second communication device may be a terminal device.
  • the first neural network block involved in the following embodiments may be referred to as an NN block.
  • the first neural network block is understood as a network structure that can be stacked (or reused).
  • the first neural network block may include one or more layers of convolutional neural networks (CNN), fully-connected networks (FC), or transformers, etc.
  • the network structure of the first neural network block is a network structure including one or more layers of CNN; for another example, the network structure of the first neural network block is a network structure including one or more layers of FC; for another example, the network structure of the first neural network block is a network structure including one or more layers of Transformer; for another example, the network structure of the first neural network block is a network structure including one or more layers of CNN and one or more layers of FC, etc., and examples are not given one by one here.
  • FIG6 is a schematic flow chart of a communication method provided in an embodiment of the present application, comprising the following steps:
  • the first communication device obtains a first neural network block.
  • the neural network block (e.g., the first neural network block) involved in this embodiment is used for channel information feedback, and can assist in processing channel information during the channel information feedback process.
  • Neural networks containing different numbers of first neural network blocks have different functions for processing channel information.
  • the channel information involved in this embodiment includes but is not limited to the above-mentioned CSI, and the CSI includes a precoding matrix indicator (PMI), and the PMI is used to indicate the precoding matrix.
  • the channel information involved in this embodiment may also include other information that can be used to reflect the channel state, information indicating the channel state between the first communication device and the second communication device, or other information carrying the PMI.
  • the channel information is CSI as an example for explanation below.
  • the first neural network block is any one of the at least one neural network block acquired by the first communication device.
  • there is no limitation on the number of neural network blocks acquired by the first communication device, and different neural network blocks have different structures.
  • the first communication device acquires a first neural network block and a second neural network block, and the structure of the first neural network block is different from the structure of the second neural network block.
  • the different functions of the neural networks containing different numbers of first neural network blocks can be understood as: the first neural network blocks in this embodiment can achieve different functions by stacking.
  • neural network #1 includes N1 first neural network blocks
  • neural network #2 includes N2 first neural network blocks
  • N1 and N2 are unequal positive integers
  • the functions achieved by neural network #1 and neural network #2 are different.
  • neural network #1 can achieve the function of the encoder in the above-mentioned scheme one (improve CSI compression ratio, reduce feedback overhead, and improve system throughput)
  • neural network #2 can achieve the function of the encoder in the above-mentioned scheme two (reduce the computational complexity in the CSI feedback process).
  • the connection between the multiple first neural network blocks can be a deep connection or a wide connection.
  • a neural network includes multiple first neural network blocks supporting CNN and an output layer, wherein the multiple first neural network blocks supporting CNN are connected via deep connections, the multiple first neural network blocks supporting CNN have the same structure but different corresponding parameters, and the parameters of each first neural network block are related to the position of the first neural network block in the neural network.
  • a neural network includes multiple first neural network blocks supporting FC and an output layer, wherein the multiple first neural network blocks supporting FC are connected via deep connections, and the multiple first neural network blocks supporting FC have the same structure but different corresponding parameters.
  • a neural network includes multiple first neural network blocks supporting FC and an output layer, wherein the multiple first neural network blocks supporting FC are connected via width connections, and the multiple first neural network blocks supporting FC have the same structure but different corresponding parameters.
  • a neural network includes a plurality of first neural network blocks supporting Transformer and an output layer, wherein the plurality of first neural network blocks supporting Transformer are connected by width connection.
  • the connection mode between different first neural network blocks included in a certain neural network can be a deep connection and/or a wide connection, wherein the deep connection mode between two first neural network blocks means that the output of the first first neural network block of the two first neural network blocks is used as the input of the second first neural network block.
  • the wide connection mode between two first neural network blocks means that the inputs of the two first neural network blocks are the same data (or the outputs of the same neural network block), and the outputs of the two first neural network blocks are output to the same output layer or the next neural network block.
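The two connection modes can be sketched as follows (each first neural network block is reduced here to a single FC layer with ReLU, which is an assumption for illustration only; any of the structures named above, such as CNN or Transformer, could serve as the block):

```python
import numpy as np

def block(x, w, b):
    # A first neural network block, reduced to one FC layer with ReLU.
    return np.maximum(0.0, w @ x + b)

def deep_connect(x, params):
    # Deep connection: the output of each block is the input of the next block.
    h = x
    for w, b in params:
        h = block(h, w, b)
    return h

def wide_connect(x, params):
    # Wide connection: every block receives the same input, and the outputs
    # are gathered for the shared output layer (concatenated here).
    return np.concatenate([block(x, w, b) for w, b in params])

rng = np.random.default_rng(0)
# Three blocks with the same structure (4 -> 4) but different parameters.
params = [(rng.normal(size=(4, 4)), np.zeros(4)) for _ in range(3)]
x = np.ones(4)
deep_out = deep_connect(x, params)   # shape (4,)
wide_out = wide_connect(x, params)   # shape (12,)
```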
  • the first neural network block and the output layer can constitute neural networks for different requirements, and do not constitute any limitation on the scope of protection of the present application.
  • the first neural network block and the output layer can also constitute neural networks of other modes, for example, neural networks with different numbers of layers, which will not be explained one by one here.
  • the neural network includes 4 identical first neural network blocks that are deeply connected based on convolutional neural networks.
  • the structure of each of the 4 first neural network blocks is shown in the dotted box in FIG8(a), that is, the dotted box part in FIG8(a) is repeated 4 times (the repeated part is not shown in FIG8(a)).
  • the dotted box in FIG8(a) shows a first neural network block based on convolution.
  • the neural network shown in Figure 8 (b) includes a smaller number of first neural network blocks (1), so the neural network shown in Figure 8 (b) has the characteristic of low computational complexity and is suitable for the second AI-based CSI feedback solution shown in the previous text.
  • the number of first neural network blocks (4) included in the neural network shown in FIG8(a) is greater, so the neural network shown in FIG8(a) has a higher CSI compression capability and is suitable for the first AI-based CSI feedback solution shown in the previous text.
  • the first communication device may obtain the first neural network block based on the following possible implementations:
  • the first communication device can determine the first neural network block by itself. For example, the first communication device obtains the first neural network block by training based on the training data.
  • the collected data may be historical communication data between the first communication device and the second communication device, or may be training data provided by the management device, or may be generated by the first communication device for training. In this embodiment, there is no limitation on how to obtain the data for training.
  • the first communication device may determine the first neural network block in a predefined manner, such as a protocol predetermines the structure of at least one neural network block.
  • the first communication device may negotiate with the second communication device to determine the first neural network block.
  • the parameters of the corresponding neural network block can be provided to the second communication device with different requirements, so that the second communication device can determine the corresponding neural network based on the parameters of the received neural network block, thereby performing channel state information compression processing based on the determined neural network and performing uplink channel state information feedback.
  • the method flow shown in FIG6 may optionally also include:
  • the first communication device sends parameters corresponding to the N first neural network blocks to the second communication device, and correspondingly, the second communication device receives parameters corresponding to the N first neural network blocks from the first communication device.
  • N first neural network blocks are included in a first neural network, and the first neural network is used for the second communication device to encode CSI.
  • the parameters corresponding to the N first neural network blocks include: the parameters corresponding to the i-th first neural network block in the first neural network, and i takes values from 1 to N.
  • the N first neural network blocks are the first neural network blocks stacked N times, which is equivalent to the neural network blocks with the same network structure being repeated N times in the first neural network, and the parameters of the first neural network blocks of different layers of the first neural network (such as the weights, biases or activation functions shown in FIG. 4, etc.) are different, so the first communication device needs to send the parameters corresponding to the N first neural network blocks contained in the first neural network to the second communication device.
  • the first neural network includes two first neural network blocks: the first neural network block #1 and the first neural network block #2.
  • the first neural network block #1 and the first neural network block #2 have the same structure but different parameters. It can be understood that the first neural network block is stacked twice.
  • the first communication device may also send the parameters of the output layer of the first neural network to the second communication device.
  • the output layer refers to the neural network used to obtain the final feedback information (such as m or m' shown above) from the output of the last neural network block, and may include one or more layers of FC or CNN, etc.
  • the above-mentioned first neural network block can also be designed as a neural network block that can serve as an output layer.
  • the first communication device may not need to provide the parameters of the output layer of the first neural network to the second communication device; or, when the output layer of the first neural network can reuse the output layer of a known neural network, the first communication device may not need to provide the parameters of the output layer of the first neural network to the second communication device.
  • the first communication device can provide parameters of the neural network blocks required to form a neural network to one or more second communication devices.
  • this embodiment is described by taking an example in which the first communication device provides parameters corresponding to N first neural network blocks contained in the first neural network to a second communication device.
  • the first communication device may also provide the second communication device with parameters such as the quantization method used, the number of quantization bits, and the feature embedding method.
  • the first communication device may provide the second communication device with the parameters corresponding to the N first neural network blocks mentioned above respectively through the following possible implementation methods:
  • Method 1 The first communication device sends the parameters corresponding to the above-mentioned N first neural network blocks to the second communication device in response to the request of the second communication device.
  • the method flow shown in FIG6 further includes:
  • the second communication device determines that the number of first neural network blocks included in the first neural network is N according to the second information.
  • the second communication device can determine that the number of first neural network blocks included in the required first neural network is N based on the second information, and the second information includes the capabilities of the second communication device and/or the need to process CSI.
  • the second communication device can request the first communication device for N first neural network blocks included in the first neural network based on the second information, where N is greater than a threshold.
  • the second communication device can request the first communication device for the N first neural network blocks included in the first neural network based on the second information, where N is less than a threshold.
  • the second communication device sends first indication information to the first communication device, and correspondingly, the first communication device receives the first indication information from the second communication device.
  • the first indication information is used to indicate the above-mentioned N, that is, it indicates that the number of first neural network blocks required by the second communication device is N.
  • the following describes, with reference to a specific example, how the second communication device requests the parameters of the required neural network blocks in the case described in Method 1:
  • Example 1 Assume that the neural network (or encoder) required by the second communication device is a neural network structure including a maximum of 4 neural network blocks.
  • when the computing power of the second communication device is sufficient, the second communication device requests the model containing three neural network blocks (hereinafter referred to as model 1) according to the second information; when the computing power of the second communication device is insufficient (for example, the second communication device is an IoT terminal or the second communication device is low on power), the second communication device requests the model containing one neural network block (hereinafter referred to as model 2) according to the second information.
  • the second communication device may use 4 bits to indicate whether the parameters of 4 neural network blocks are needed. Therefore, the second communication device may use 0111 to request model 1 and 0001 to request model 2. The first communication device sends the parameters of the corresponding neural network block to the second communication device according to the received request.
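The 4-bit request of Example 1 can be sketched as follows (the bitmap layout, one bit per block with the lowest bits set first, is an assumption consistent with the 0111/0001 values in the example):

```python
def request_bits(num_blocks, max_blocks=4):
    # Encode a request for the first num_blocks neural network blocks as a
    # max_blocks-bit string, e.g. 3 blocks -> "0111", 1 block -> "0001".
    return format((1 << num_blocks) - 1, f"0{max_blocks}b")

assert request_bits(3) == "0111"  # model 1: three neural network blocks
assert request_bits(1) == "0001"  # model 2: one neural network block
```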
  • the parameters provided by the first communication device may be as shown in Figure 9. It should be understood that Figure 9 is only an example, and the parameters provided by the first communication device may not include the parameters of the output layer, which will not be described one by one here.
  • Method 2 The first communication device broadcasts parameters corresponding to multiple first neural network blocks.
  • the method flow shown in FIG6 further includes:
  • the first communication device determines the maximum value P of the number of first neural network blocks.
  • the first communication device determines the maximum value P of the number of first neural network blocks that the neural network can include, where P is a positive integer greater than or equal to N.
  • the above-mentioned first communication device sends parameters corresponding to N first neural network blocks to the second communication device, including: the first communication device sends parameters corresponding to P first neural network blocks to the second communication device.
  • the second communication device may select different parameters according to its own capabilities or needs.
  • the method flow shown in FIG. 6 further includes:
  • the second communication device determines N first neural network blocks based on the second information.
  • the second communication device may request parameters of the newly added first neural network blocks from the first communication device.
  • the second communication device knows the parameters corresponding to the N first neural network blocks included in the first neural network.
  • the second communication device can send a second indication message to the first communication device, and the second indication message indicates M, requesting the parameters of the newly added M first neural network blocks.
  • the second communication device may request the first communication device to update the parameters of the first neural network blocks included in the neural network.
  • the second communication device knows the parameters corresponding to N first neural network blocks included in the first neural network.
  • the second communication device can request the parameters of the Q first neural network blocks from the first communication device.
  • the second communication device updates the neural network with reference to specific examples.
  • Example 2 The second communication device is initially low on power and uses the above-mentioned model 2; after the power is restored, it is desired to use the above-mentioned model 1.
  • the second communication device can request the parameters of the additional first neural network blocks required to update from model 2 (1 first neural network block, such as 0001) to model 1 (3 first neural network blocks, such as 0111), i.e., 2 first neural network blocks (e.g., 0110), effectively reducing the transmission overhead of the neural network update.
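The incremental request of Example 2 can be sketched as follows (the bitmaps follow the same assumed one-bit-per-block layout as Example 1):

```python
def added_blocks(old_bits, new_bits):
    # Bits set in the new request but not in the old one, i.e. only the
    # parameters of the newly added blocks are requested.
    diff = int(new_bits, 2) & ~int(old_bits, 2)
    return format(diff, f"0{len(new_bits)}b")

# Updating from model 2 (0001) to model 1 (0111) requests only 0110.
assert added_blocks("0001", "0111") == "0110"
```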
  • the parameters provided by the first communication device may be as shown in Figure 10. It should be understood that Figure 10 is only an example, and in the case where the neural network required by the second communication device is updated, the parameters provided by the first communication device may also be the parameters of the first neural network block included in the updated neural network, which will not be illustrated one by one here.
  • the first neural network can be determined, and CSI processing is performed based on the first neural network.
  • the method flow shown in FIG6 also includes:
  • S630 The second communication device performs CSI processing.
  • the second communication device processes the CSI based on the determined first neural network. For example, the second communication device compresses the precoding matrix V or other channel state information based on the first neural network.
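The compression step can be sketched as follows (a stack of FC blocks with tanh is assumed for illustration; the real first neural network blocks may be CNN or Transformer based, and all dimensions here are arbitrary):

```python
import numpy as np

def encode_csi(V, params):
    # Compress the precoding matrix V into a feedback vector m by passing a
    # real-valued flattening of V through stacked blocks.
    h = np.concatenate([V.real.ravel(), V.imag.ravel()])
    for w, b in params:
        h = np.tanh(w @ h + b)
    return h

rng = np.random.default_rng(2)
V = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))  # hypothetical V
params = [(rng.normal(size=(8, 16)), np.zeros(8)),
          (rng.normal(size=(4, 8)), np.zeros(4))]
m = encode_csi(V, params)  # 16 real-valued inputs compressed to a 4-dim vector m
```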
  • the second communication device receives measurement information from the first communication device (such as the NDP shown above, or other measurement reference signals), and the second communication device can perform channel estimation based on the measurement information to obtain the current channel H, and perform SVD on H to obtain the precoding matrix V.
  • This embodiment mainly involves compression processing of V based on the first neural network, and other processes are not limited.
  • the process of the second communication device obtaining the parameters of the first neural network block included in the first neural network and the process of the second communication device receiving the measurement information are independent processes.
  • the computational complexity of the first neural network is high, and the second communication device can improve the CSI compression ratio based on the first neural network, reduce feedback overhead, and improve system throughput.
  • the computational complexity of the first neural network is low, and the second communication device can reduce the computational complexity in the CSI feedback process based on the first neural network.
  • the second communication device sends first information to the first communication device, and correspondingly, the first communication device receives the first information from the second communication device.
  • the first information includes third indication information and a first vector
  • the third indication information is used to indicate N
  • the first vector is the result of CSI being encoded by the first neural network. It should be understood that the third indication information is used to indicate the structure of the neural network based on which the first vector currently fed back by the second communication device is processed, thereby helping the first communication device to select a suitable neural network to parse the first vector to obtain high-accuracy channel information.
  • the third indication information can be carried in a multiple input multiple output (MIMO) control field, or in a MIMO compressed beamforming report (CBR) field, as shown in Figure 10.
  • the first communication device can obtain the first neural network block, and when different neural networks contain different numbers of first neural network blocks, different neural networks implement different functions. For example, the more first neural network blocks a neural network contains, the higher the computational complexity of the neural network, and the compression ratio of the CSI compressed and processed by the neural network is high, the feedback overhead is reduced, and the system throughput is high. Furthermore, the first communication device can provide the second communication device with parameters corresponding to the N first neural network blocks, so that the second communication device can determine the first neural network based on the parameters corresponding to the N received first neural network blocks, and can encode the CSI based on the first neural network in the subsequent CSI feedback process to implement AI-based CSI feedback.
  • neural networks with different functions can be obtained through neural network blocks of a single structure, without the need to train different neural networks for different functions, thereby reducing the management overhead and storage overhead of the neural network.
  • sequence numbers of the above processes do not mean the order of execution.
  • the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • the methods and operations implemented by the device can also be implemented by components that can be used in the device (such as chips or circuits).
  • the communication method provided by the embodiment of the present application is described in detail above in conjunction with FIG6.
  • the above communication method is mainly introduced from the perspective of the first communication device and the second communication device. It can be understood that in order to implement the above functions, the first communication device and the second communication device include hardware structures and/or software modules corresponding to the execution of each function.
  • the embodiment of the present application can divide the functional modules of the transmitting end device or the receiving end device according to the above method example.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above integrated module can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of modules in the embodiment of the present application is schematic and is only a logical functional division. There may be other division methods in actual implementation. The following is an example of dividing each functional module corresponding to each function.
  • FIG11 is a schematic block diagram of a communication device 10 provided in an embodiment of the present application.
  • the device 10 includes a transceiver module 11 and a processing module 12.
  • the transceiver module 11 can implement corresponding communication functions, and the processing module 12 is used to perform data processing, or in other words, the transceiver module 11 is used to perform operations related to receiving and sending, and the processing module 12 is used to perform other operations besides receiving and sending.
  • the transceiver module 11 can also be called a communication interface or a communication unit.
  • the device 10 may further include a storage module 13, which may be used to store instructions and/or data.
  • the processing module 12 may read the instructions and/or data in the storage module so that the device implements the actions of the devices in the aforementioned method embodiments.
  • the device 10 may correspond to the first communication device in the above method embodiment, or a component (such as a chip) of the first communication device.
  • the device 10 can implement the steps or processes executed by the first communication device in the above method embodiment, wherein the transceiver module 11 can be used to execute the transceiver related operations of the first communication device in the above method embodiment, and the processing module 12 can be used to execute the processing related operations of the first communication device in the above method embodiment.
  • the processing module 12 is used to obtain a first neural network block, the first neural network block is used for channel information feedback, and the functions of neural networks containing different numbers of the first neural network blocks are different.
  • the transceiver module 11 is used to send parameters corresponding to N first neural network blocks to the second communication device, the N first neural network blocks are included in the first neural network, and the first neural network is used for encoding processing of channel information, wherein N is a positive integer.
  • the transceiver module 11 can be used to execute the steps of sending and receiving information in the method, such as steps S622, S620 and S640, and the processing module 12 can be used to execute the processing steps in the method, such as steps S610 and S623.
  • the device 10 may correspond to the second communication device in the above method embodiment, or be a component (such as a chip) of the second communication device.
  • the device 10 can implement steps or processes corresponding to those performed by the second communication device in the above method embodiment, wherein the transceiver module 11 can be used to perform transceiver-related operations of the second communication device in the above method embodiment, and the processing module 12 can be used to perform processing-related operations of the second communication device in the above method embodiment.
  • the transceiver module 11 is used to obtain parameters corresponding to N first neural network blocks, respectively, where the first neural network blocks are used for channel information feedback, and the functions of neural networks containing different numbers of the first neural network blocks are different.
  • the processing module 12 is used to determine a first neural network based on the N first neural network blocks, where the first neural network is used for the second communication device to encode the channel information, wherein N is a positive integer.
  • the transceiver module 11 may be used to execute the steps of sending and receiving information in the method, such as steps S622, S620 and S640, and the processing module 12 can be used to execute the processing steps in the method, such as steps S621, S624 and S630.
  • module here may refer to an application specific integrated circuit (ASIC), an electronic circuit, a processor (such as a shared processor, a dedicated processor or a group processor, etc.) and a memory for executing one or more software or firmware programs, a merged logic circuit and/or other suitable components that support the described functions.
  • the device 10 can be specifically the mobile management network element in the above-mentioned embodiment, and can be used to execute the various processes and/or steps corresponding to the mobile management network element in the above-mentioned method embodiments; or, the device 10 can be specifically the terminal device in the above-mentioned embodiment, and can be used to execute the various processes and/or steps corresponding to the terminal device in the above-mentioned method embodiments. To avoid repetition, it will not be repeated here.
  • the device 10 of each of the above schemes has the function of implementing the corresponding steps performed by the device (such as the first communication device) in the above method.
  • the function can be implemented by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions; for example, the transceiver module can be replaced by a transceiver (for example, the sending unit in the transceiver module can be replaced by a transmitter, and the receiving unit in the transceiver module can be replaced by a receiver), and other units, such as the processing module, can be replaced by a processor to respectively perform the transceiver operations and related processing operations in each method embodiment.
  • the transceiver module 11 may also be a transceiver circuit (for example, may include a receiving circuit and a sending circuit), and the processing module may be a processing circuit.
  • FIG12 is a schematic diagram of another communication device 20 provided in an embodiment of the present application.
  • the device 20 includes a processor 21, and the processor 21 is used to execute a computer program or instruction stored in a memory 22, or read data/signaling stored in the memory 22 to execute the method in each method embodiment above.
  • the device 20 further includes a memory 22, and the memory 22 is used to store computer programs or instructions and/or data.
  • the memory 22 can be integrated with the processor 21, or can also be separately arranged.
  • there may be one or more memories 22.
  • the device 20 further includes a transceiver 23, and the transceiver 23 is used for receiving and/or sending signals.
  • the processor 21 is used to control the transceiver 23 to receive and/or send signals.
  • the device 20 is used to implement the operations performed by the first communication device or the second communication device in the above various method embodiments.
  • processors mentioned in the embodiments of the present application may be a central processing unit (CPU), or other general-purpose processors, digital signal processors (DSP), application-specific integrated circuits (ASIC), field programmable gate arrays (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general-purpose processor may be a microprocessor or the processor may also be any conventional processor, etc.
  • RAM includes the following forms: static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct rambus RAM (DR RAM).
  • when the processor is a general-purpose processor, DSP, ASIC, FPGA or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component, the memory (storage module) can be integrated into the processor.
  • memory described herein is intended to include, but is not limited to, these and any other suitable types of memory.
  • FIG13 is a schematic diagram of a chip system 30 provided in an embodiment of the present application.
  • the chip system 30 (or also referred to as a processing system) includes a logic circuit 31 and an input/output interface 32.
  • the logic circuit 31 may be a processing circuit in the chip system 30.
  • the logic circuit 31 may be coupled to a storage unit and call instructions in the storage unit so that the chip system 30 can implement the methods and functions of the various embodiments of the present application.
  • the input and output circuits in the chip system 30 output information processed by the chip system 30, or input data or signaling information to be processed into the chip system 30 for processing.
  • the chip system 30 is used to implement the operations performed by the first communication device or the second communication device in the above method embodiments.
  • An embodiment of the present application also provides a communication system, including the aforementioned first communication device and second communication device.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present application provides a communication method and a communication device. The communication method comprises: a first communication device acquires first neural network blocks for channel information feedback, wherein neural networks comprising different numbers of first neural network blocks have different functions; and the first communication device sends to a second communication device parameters respectively corresponding to N first neural network blocks, the N first neural network blocks being comprised in a first neural network, and the first neural network being used for coding processing of channel information. Therefore, in a subsequent channel information feedback process, the second communication device can perform coding processing on channel information on the basis of the first neural network, thereby achieving AI-based channel information feedback. In addition, in the technical solution, neural networks having different functions can be obtained by means of neural network blocks of the same structure, and no training for different functions is required to obtain different neural networks, thereby reducing the management overhead and the storage overhead of neural networks.

Description

Communication method and communication device

This application claims priority to the Chinese patent application filed with the China Patent Office on November 10, 2023, with application number 202311503718.8 and entitled "Communication Method and Communication Device", the entire contents of which are incorporated herein by reference.

Technical Field

The present application relates to the field of communication technology, and more specifically, to a communication method and a communication device.

Background Art

Wireless communication is developing rapidly. The sixth-generation wireless local area network (wireless fidelity, Wi-Fi) standard (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11ax) has been commercialized, and next-generation wireless technology and standardization are in full swing around the world. Wireless communication has penetrated every aspect of daily life and work and has become indispensable. With the rapid growth in the number of smart terminals and the popularization of Internet of things (IoT) devices, new wireless technologies, new terminals, and new applications have made wireless networks unprecedentedly complex, and it is foreseeable that future wireless networks will become even more complex. To counter this trend of increasing complexity, it has become an industry consensus that artificial intelligence (AI) is an effective tool for wireless network design and management.

In addition, to increase the communication rate, it is necessary to enlarge the bandwidth, increase the number of antennas, and adopt beamforming technology. Beamforming requires the receiving end to feed back channel information (e.g., channel state information (CSI)). The current standard uses a channel information feedback method based on Givens rotation. As the bandwidth and the number of antennas grow, the feedback overhead of Givens-rotation-based channel state information increases greatly. With the development of AI technology, how to perform channel information feedback based on AI has become an urgent problem to be solved.

Summary of the Invention

The present application provides a communication method for performing channel information feedback based on AI.

In a first aspect, a communication method is provided. The method may be executed by a first communication device, or by a component (such as a chip or a circuit) of the first communication device, which is not limited here. For ease of description, the following takes execution by the first communication device as an example.

The communication method includes: the first communication device obtains a first neural network block, where the first neural network block is used for channel information feedback, and neural networks containing different numbers of the first neural network blocks have different functions; and the first communication device sends, to a second communication device, parameters respectively corresponding to N first neural network blocks, where the N first neural network blocks are included in a first neural network, the first neural network is used for encoding processing of channel information, and N is a positive integer.

Based on the above technical solution, the first communication device can obtain the first neural network block, and neural networks containing different numbers of first neural network blocks implement different functions. For example, the more first neural network blocks a neural network contains, the higher its computational complexity and the higher the compression ratio of the channel information compressed by that neural network, so the feedback overhead is reduced and the system throughput is increased. Further, the first communication device can provide the second communication device with the parameters respectively corresponding to the N first neural network blocks, so that the second communication device can determine the first neural network based on the received parameters and, in the subsequent channel information feedback process, encode the channel information based on the first neural network, thereby implementing AI-based channel information feedback.

Moreover, in this technical solution, neural networks with different functions can be obtained from neural network blocks of a single structure, without training a separate neural network for each function, which reduces the management overhead and the storage overhead of neural networks.

With reference to the first aspect, in some implementations of the first aspect, before the first communication device sends the parameters respectively corresponding to the N first neural network blocks to the second communication device, the method further includes: the first communication device receives first indication information from the second communication device, where the first indication information is used to indicate N.

Based on the above technical solution, the second communication device can notify the first communication device of the required number N of first neural network blocks through the first indication information, so that the first communication device can send the parameters corresponding to the required number of first neural network blocks according to the needs of the second communication device, avoiding a situation where the parameters provided by the first communication device do not meet those needs.

With reference to the first aspect, in some implementations of the first aspect, the method further includes: the first communication device receives second indication information from the second communication device, where the second indication information is used to indicate M, and M is a positive integer; and the first communication device sends, to the second communication device, parameters respectively corresponding to M first neural network blocks, where the M first neural network blocks and the N first neural network blocks are included in a second neural network.

Based on the above technical solution, when the number of first neural network blocks included in the neural network required by the second communication device changes, the second communication device can request from the first communication device the parameters of only the newly added first neural network blocks, without requesting the parameters corresponding to all the first neural network blocks included in the updated neural network. For example, if the updated neural network includes Q first neural network blocks, M more than the first neural network, it suffices to request the parameters corresponding to these M first neural network blocks rather than all Q, which effectively reduces the transmission overhead of updating the neural network.
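The incremental update described here can be sketched as follows. This is an illustrative sketch only; the request logic and the dictionary layout for block parameters are assumptions, not taken from the patent.

```python
# Hypothetical sketch of the incremental block update: the second device
# already holds N block parameter sets; when it now needs Q blocks it
# requests only M = Q - N new ones rather than all Q.

def blocks_to_request(have_n, need_q):
    """Return how many additional block parameter sets must be requested."""
    if need_q <= have_n:
        return 0            # existing blocks already suffice
    return need_q - have_n

held = [{"block_id": i} for i in range(4)]       # N = 4 blocks already held
m = blocks_to_request(len(held), need_q=6)       # Q = 6 -> request M = 2
new_blocks = [{"block_id": 4}, {"block_id": 5}]  # delivered by the peer
held.extend(new_blocks)                          # forms the second network
```

Only the two new parameter sets cross the air interface, which is the transmission-overhead saving the paragraph describes.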

With reference to the first aspect, in some implementations of the first aspect, the method further includes: the first communication device determines the maximum number P of first neural network blocks that a neural network can include, where P is a positive integer greater than or equal to N; and the sending, by the first communication device to the second communication device, of the parameters respectively corresponding to the N first neural network blocks includes: the first communication device sends, to the second communication device, parameters respectively corresponding to the P first neural network blocks.

Based on the above technical solution, the first communication device can, according to the maximum number of first neural network blocks a neural network can contain, broadcast to the second communication device the parameters respectively corresponding to that maximum number of first neural network blocks. By actively delivering these parameters via broadcast, the first communication device makes it unnecessary for the second communication device to initiate a request procedure, reducing the signaling overhead of the second communication device.
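The broadcast-and-select behavior can be sketched as below. The sketch assumes (illustratively, not from the patent) that a receiving device simply keeps the first N of the P broadcast block parameter sets that its capability allows.

```python
# Hypothetical sketch: the first device broadcasts parameters for the
# maximum block count P; a receiver keeps only the N <= P block parameter
# sets it needs, with no request procedure of its own.

broadcast = [{"block_id": i, "params": []} for i in range(6)]  # P = 6

def select_blocks(broadcast_blocks, capability_n):
    """Keep the first N block parameter sets the device can use."""
    return broadcast_blocks[:capability_n]

chosen = select_blocks(broadcast, capability_n=3)  # device uses N = 3 of P = 6
```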

With reference to the first aspect, in some implementations of the first aspect, the method further includes: the first communication device receives first information from the second communication device, where the first information includes third indication information and a first vector, the third indication information is used to indicate N, and the first vector is a result obtained by encoding the channel information with the first neural network.

Based on the above technical solution, in the process of feeding back channel information, the second communication device not only feeds back the first vector obtained by processing the channel information with the neural network, but also carries third indication information in the feedback to indicate which neural network structure the currently fed-back first vector was processed with, which helps the first communication device select a suitable neural network to parse the first vector and obtain channel information with high accuracy.
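Pairing the encoded vector with a structure indicator so that the receiver can choose the matching decoder can be sketched as follows. The field names and the toy decoder table are illustrative assumptions, not the patented message format.

```python
# Hypothetical sketch: feedback carries the block count N ("third
# indication information") alongside the encoded vector, letting the
# receiver look up the matching decoder.

def make_feedback(n_blocks, encoded_vector):
    return {"third_indication": n_blocks, "first_vector": encoded_vector}

def parse_feedback(msg, decoders):
    """Select the decoder that matches the indicated structure."""
    decoder = decoders[msg["third_indication"]]
    return decoder(msg["first_vector"])

decoders = {
    2: lambda v: [x for x in v for _ in range(4)],  # undo 2-block compression
    3: lambda v: [x for x in v for _ in range(8)],  # undo 3-block compression
}
msg = make_feedback(2, [0.5, 1.5])
recovered = parse_feedback(msg, decoders)           # expands back to length 8
```

Without the indicator the receiver would have to guess which of its decoders to apply, which is exactly the ambiguity the third indication information removes.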

With reference to the first aspect, in some implementations of the first aspect, the parameters of the first neural network block include at least one of the following: weight information, bias information, or activation function information corresponding to the first neural network block.
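A block parameter set of this kind, weights, bias, and an activation function identifier, might look like the sketch below. The container layout and the tiny fully connected block are illustrative assumptions.

```python
# Hypothetical sketch of one block's signaled parameter set: weight
# information, bias information, and activation function information,
# applied as y = act(W @ x + b) for a toy fully connected block.

def pack_block_params(weights, bias, activation):
    return {"weights": weights, "bias": bias, "activation": activation}

def apply_block(params, vec):
    act = {"relu": lambda z: max(0.0, z),
           "identity": lambda z: z}[params["activation"]]
    out = []
    for row, b in zip(params["weights"], params["bias"]):
        out.append(act(sum(w * x for w, x in zip(row, vec)) + b))
    return out

p = pack_block_params(weights=[[1.0, -1.0], [0.5, 0.5]],
                      bias=[0.0, 1.0], activation="relu")
y = apply_block(p, [2.0, 3.0])
```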

With reference to the first aspect, in some implementations of the first aspect, the first neural network block supports at least one of the following neural network structures: a convolutional neural network (CNN), a multi-layer perceptron (MLP), or a Transformer.

With reference to the first aspect, in some implementations of the first aspect, when the first neural network includes multiple first neural network blocks, the connection manner between the multiple first neural network blocks includes a depth connection and/or a width connection.
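The two connection manners can be contrasted with a short sketch: a depth connection chains blocks in series, while a width connection runs them in parallel and combines the outputs. The toy block and the concatenation rule for the width connection are illustrative assumptions.

```python
# Hypothetical sketch contrasting depth and width connections between
# copies of the same block.

def block(vec):
    return [x * 0.5 for x in vec]        # toy stand-in for a trained block

def depth_connect(vec, blocks):
    for b in blocks:                     # series: each output feeds the next
        vec = b(vec)
    return vec

def width_connect(vec, blocks):
    out = []                             # parallel: same input to every block
    for b in blocks:
        out.extend(b(vec))               # outputs concatenated (assumption)
    return out

x = [4.0, 8.0]
deep = depth_connect(x, [block, block])  # depth 2: applied twice in series
wide = width_connect(x, [block, block])  # width 2: applied twice in parallel
```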

With reference to the first aspect, in some implementations of the first aspect, the method further includes: the first communication device sends parameters of the output layer of the first neural network to the second communication device.

With reference to the first aspect, in some implementations of the first aspect, the method further includes: the first communication device sends at least one of the following to the second communication device: information indicating a quantization method, information indicating a number of quantization bits, or information indicating a feature embedding method.
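How a signaled number of quantization bits might be applied to the encoded vector can be sketched as below. The uniform quantizer and the assumed value range [-1, 1] are illustrative choices, not taken from the patent.

```python
# Hypothetical sketch: a uniform quantizer whose bit width comes from the
# signaled "number of quantization bits". Range [-1, 1] is an assumption.

def quantize(vec, n_bits, lo=-1.0, hi=1.0):
    """Map each entry to one of 2**n_bits uniform level indices."""
    levels = (1 << n_bits) - 1
    idx = []
    for x in vec:
        x = min(max(x, lo), hi)                       # clip to the range
        idx.append(round((x - lo) / (hi - lo) * levels))
    return idx

def dequantize(idx, n_bits, lo=-1.0, hi=1.0):
    """Map level indices back to representative values."""
    levels = (1 << n_bits) - 1
    return [lo + i / levels * (hi - lo) for i in idx]

codes = quantize([-1.0, 0.0, 1.0], n_bits=2)          # 4 levels with 2 bits
```

Signaling the bit count lets both ends trade feedback overhead against quantization accuracy with the same pair of functions.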

In a second aspect, a communication method is provided. The method may be executed by a second communication device, or by a component (such as a chip or a circuit) of the second communication device, which is not limited here. For ease of description, the following takes execution by the second communication device as an example.

The communication method includes: the second communication device obtains parameters respectively corresponding to N first neural network blocks, where the first neural network blocks are used for channel information feedback, and neural networks containing different numbers of the first neural network blocks have different functions; and the second communication device determines a first neural network based on the N first neural network blocks, where the first neural network is used by the second communication device to encode the channel information, and N is a positive integer.

With reference to the second aspect, in some implementations of the second aspect, the obtaining, by the second communication device, of the parameters respectively corresponding to the N first neural network blocks includes: the second communication device receives, from a first communication device, the parameters respectively corresponding to the N first neural network blocks.

With reference to the second aspect, in some implementations of the second aspect, before the second communication device receives, from the first communication device, the parameters respectively corresponding to the N first neural network blocks, the method further includes: the second communication device determines, according to first information, the number N of first neural network blocks included in the required first neural network, where the first information includes the capability of the second communication device and/or its requirements for processing the channel information; and the second communication device sends first indication information to the first communication device, where the first indication information is used to indicate N.

With reference to the second aspect, in some implementations of the second aspect, after the second communication device obtains the parameters respectively corresponding to the N first neural network blocks, the method further includes: the second communication device determines, according to second information, that the required second neural network includes Q first neural network blocks, where Q is a positive integer greater than N and the difference between Q and N is M, and the second information includes the capability of the second communication device and/or its requirements for processing the channel information; the second communication device sends second indication information to the first communication device, where the second indication information is used to indicate M; and the second communication device receives, from the first communication device, parameters respectively corresponding to the M first neural network blocks.

With reference to the second aspect, in some implementations of the second aspect, the method further includes: the second communication device receives, from the first communication device, parameters respectively corresponding to P first neural network blocks, where P is a positive integer greater than or equal to N; and the second communication device determines the N first neural network blocks from the P first neural network blocks according to second information, where the second information includes the capability of the second communication device and/or its requirements for processing the channel information.

结合第二方面,在第二方面的某些实现方式中,所述方法还包括:所述第二通信装置向所述第一通信装置发送第一信息,所述第一信息中包括第三指示信息和第一向量,所述第三指示信息用于指示所述N,所述第一向量为所述信道信息经过所述第一神经网络编码得到的结果。In combination with the second aspect, in certain implementations of the second aspect, the method also includes: the second communication device sends first information to the first communication device, the first information includes third indication information and a first vector, the third indication information is used to indicate N, and the first vector is a result of encoding the channel information through the first neural network.

结合第二方面,在第二方面的某些实现方式中,所述方法还包括:所述第二通信装置接收来自第一通信装置的所述第一神经网络的输出层的参数。In combination with the second aspect, in some implementations of the second aspect, the method further includes: the second communication device receiving parameters of the output layer of the first neural network from the first communication device.

以上第二方面及其可能的设计所示方法的技术效果可参照第一方面及其可能的设计中的技术效果。For the technical effects of the methods shown in the second aspect above and its possible designs, reference may be made to the technical effects of the first aspect and its possible designs.

第三方面,提供了一种通信装置,该装置用于执行上述第一方面和第二方面提供的方法。具体地,该通信装置可以包括用于执行第一方面和第二方面的上述任意一种实现方式提供的方法的单元和/或模块,如处理单元和获取单元。In a third aspect, a communication device is provided, which is used to execute the method provided in the first and second aspects. Specifically, the communication device may include units and/or modules, such as a processing unit and an acquisition unit, for executing the method provided in any one of the above implementations of the first and second aspects.

在一种实现方式中,收发单元可以是收发器,或,输入/输出接口;处理单元可以是至少一个处理器。可选地,收发器可以为收发电路。可选地,输入/输出接口可以为输入/输出电路。In one implementation, the transceiver unit may be a transceiver, or an input/output interface; the processing unit may be at least one processor. Optionally, the transceiver may be a transceiver circuit. Optionally, the input/output interface may be an input/output circuit.

在另一种实现方式中,收发单元可以是该芯片、芯片系统或电路上的输入/输出接口、接口电路、输出电路、输入电路、管脚或相关电路等;处理单元可以是至少一个处理器、处理电路或逻辑电路等。In another implementation, the transceiver unit may be an input/output interface, interface circuit, output circuit, input circuit, pin or related circuit on the chip, chip system or circuit; the processing unit may be at least one processor, processing circuit or logic circuit.

示例性地,该通信装置为上述的第一通信装置或第一通信装置的组成部件(例如芯片或者电路),则通信装置包括:Exemplarily, the communication device is the above-mentioned first communication device or a component (such as a chip or circuit) of the first communication device, and the communication device includes:

处理单元，用于获取第一神经网络块，所述第一神经网络块用于信道信息反馈，包含不同数量的所述第一神经网络块的神经网络的功能不同。收发单元，用于向第二通信装置发送N个所述第一神经网络块分别对应的参数，所述N个第一神经网络块包含于第一神经网络，所述第一神经网络用于所述第二通信装置对信道信息进行编码处理，其中，N为正整数。A processing unit, configured to obtain first neural network blocks, where the first neural network blocks are used for channel information feedback, and neural networks containing different numbers of the first neural network blocks have different functions; and a transceiver unit, configured to send, to a second communication device, parameters respectively corresponding to N first neural network blocks, where the N first neural network blocks are included in a first neural network, the first neural network is used by the second communication device to encode channel information, and N is a positive integer.

示例性地,该通信装置为上述的第二通信装置或第二通信装置的组成部件(例如芯片或者电路),则通信装置包括:Exemplarily, the communication device is the above-mentioned second communication device or a component (such as a chip or circuit) of the second communication device, and the communication device includes:

收发单元，用于获取N个第一神经网络块分别对应的参数，所述第一神经网络块用于信道信息反馈，包含不同数量的所述第一神经网络块的神经网络的功能不同。处理单元，用于基于所述N个所述第一神经网络块确定第一神经网络，所述第一神经网络用于所述第二通信装置对信道信息进行编码处理，其中，N为正整数。A transceiver unit, configured to obtain parameters respectively corresponding to N first neural network blocks, where the first neural network blocks are used for channel information feedback, and neural networks containing different numbers of the first neural network blocks have different functions; and a processing unit, configured to determine a first neural network based on the N first neural network blocks, where the first neural network is used by the second communication device to encode channel information, and N is a positive integer.
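作为示意（块的具体结构与维度均为假设，此处以“全连接层+ReLU”示例一个块），由N个第一神经网络块级联并接输出层构成第一神经网络、对信道信息进行编码得到反馈向量的过程可以用如下Python代码表示。As an illustrative sketch (the block structure and dimensions are assumptions; a block is exemplified here as a fully connected layer plus ReLU), the process of forming the first neural network from N cascaded first neural network blocks plus an output layer, and encoding channel information into the fed-back vector, can be expressed as:

```python
# Illustrative sketch: the first neural network is N stacked blocks followed
# by an output layer; its output is the vector to be fed back.
# The per-block structure (fully connected layer + ReLU) is an assumption.

def block_forward(x, w, b):
    """One first-neural-network block: fully connected layer + ReLU."""
    return [max(0.0, sum(xi * wij for xi, wij in zip(x, row)) + bj)
            for row, bj in zip(w, b)]

def encode(channel_info, blocks, w_out, b_out):
    """Encode channel information through N blocks and the output layer."""
    x = channel_info
    for w, b in blocks:  # the N first-neural-network blocks, in order
        x = block_forward(x, w, b)
    # Output layer with no activation function.
    return [sum(xi * wij for xi, wij in zip(x, row)) + bj
            for row, bj in zip(w_out, b_out)]
```

不同的N对应不同深度的第一神经网络，从而功能不同。Different values of N yield first neural networks of different depths and hence different functions.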

第四方面,本申请提供一种处理器,用于执行上述各方面提供的方法。In a fourth aspect, the present application provides a processor for executing the methods provided in the above aspects.

对于处理器所涉及的发送和获取/接收等操作,如果没有特殊说明,或者,如果未与其在相关描述中的实际作用或者内在逻辑相抵触,则可以理解为处理器输出和接收、输入等操作,也可以理解为由射频电路和天线所进行的发送和接收操作,本申请对此不做限定。For the operations such as sending and acquiring/receiving involved in the processor, unless otherwise specified, or unless they conflict with their actual function or internal logic in the relevant description, they can be understood as operations such as processor output, reception, input, etc., or as sending and receiving operations performed by the radio frequency circuit and antenna, and this application does not limit this.

第五方面，提供一种计算机可读存储介质，该计算机可读存储介质存储用于设备执行的程序代码，该程序代码包括用于执行上述第一方面和第二方面的任意一种实现方式提供的方法。In a fifth aspect, a computer-readable storage medium is provided, where the computer-readable storage medium stores program code for execution by a device, and the program code is used to execute the method provided by any one of the implementations of the first and second aspects above.

第六方面,提供一种包含指令的计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述第一方面和第二方面的任意一种实现方式提供的方法。In a sixth aspect, a computer program product comprising instructions is provided, and when the computer program product is run on a computer, the computer executes the method provided by any one of the implementations of the first and second aspects above.

第七方面，提供一种芯片，芯片包括处理器与通信接口，处理器通过通信接口读取存储器上存储的指令，执行上述第一方面和第二方面的任意一种实现方式提供的方法。In a seventh aspect, a chip is provided, where the chip includes a processor and a communication interface; the processor reads, through the communication interface, instructions stored in a memory, and executes the method provided by any one of the implementations of the first and second aspects above.

可选地,作为一种实现方式,芯片还包括存储器,存储器中存储有计算机程序或指令,处理器用于执行存储器上存储的计算机程序或指令,当计算机程序或指令被执行时,处理器用于执行上述第一方面和第二方面的任意一种实现方式提供的方法。Optionally, as an implementation method, the chip also includes a memory, in which a computer program or instructions are stored, and the processor is used to execute the computer program or instructions stored in the memory. When the computer program or instructions are executed, the processor is used to execute the method provided by any implementation method of the first and second aspects above.

第八方面,提供一种通信系统,包括用于执行上述第一方面提供的方法的第一通信装置和用于执行上述第二方面提供的方法的第二通信装置。In an eighth aspect, a communication system is provided, comprising a first communication device for executing the method provided in the first aspect and a second communication device for executing the method provided in the second aspect.

附图说明BRIEF DESCRIPTION OF THE DRAWINGS

图1是本申请实施例适用的一种应用场景的示意图。FIG. 1 is a schematic diagram of an application scenario to which an embodiment of the present application is applicable.

图2示出了本申请提供的一种装置结构示意图。FIG2 shows a schematic diagram of a device structure provided in the present application.

图3是一种神经网络的示意图。FIG. 3 is a schematic diagram of a neural network.

图4是一种计算神经元输出的示意图。FIG. 4 is a schematic diagram of calculating neuron output.

图5是一种基于AI的CSI反馈的方法示意图。FIG5 is a schematic diagram of a method for AI-based CSI feedback.

图6是本申请实施例提供的一种通信方法的示意性流程图。FIG6 is a schematic flowchart of a communication method provided in an embodiment of the present application.

图7中(a)至(d)是本申请实施例提供的神经网络的示意图。Figure 7 (a) to (d) are schematic diagrams of a neural network provided in an embodiment of the present application.

图8中(a)和(b)是本申请实施例提供的不同需求的神经网络的结构示意图。FIG. 8 (a) and (b) are schematic structural diagrams of neural networks for different requirements provided in an embodiment of the present application.

图9是本申请实施例提供的一种神经网络参数示意图。FIG9 is a schematic diagram of neural network parameters provided in an embodiment of the present application.

图10是本申请实施例提供的另一种神经网络参数示意图。FIG10 is another schematic diagram of neural network parameters provided in an embodiment of the present application.

图11是本申请实施例提供的通信装置的示意性框图。FIG. 11 is a schematic block diagram of a communication device provided in an embodiment of the present application.

图12是本申请实施例提供的另一种通信装置的示意图。FIG. 12 is a schematic diagram of another communication device provided in an embodiment of the present application.

图13是本申请实施例提供的一种芯片系统的示意图。FIG. 13 is a schematic diagram of a chip system provided in an embodiment of the present application.

具体实施方式DETAILED DESCRIPTION

为了便于理解本申请实施例,首先做出以下几点说明。In order to facilitate understanding of the embodiments of the present application, the following points are first explained.

第一,在本申请中,“用于指示”可以包括用于直接指示和用于间接指示。当描述某一指示信息用于指示A时,可以包括该指示信息直接指示A或间接指示A,而并不代表该指示信息中一定携带有A。First, in this application, "used for indication" may include being used for direct indication and being used for indirect indication. When describing that a certain indication information is used for indicating A, it may include that the indication information directly indicates A or indirectly indicates A, but it does not mean that the indication information must carry A.

将指示信息所指示的信息称为待指示信息,则具体实现过程中,对待指示信息进行指示的方式有很多种,例如但不限于,可以直接指示待指示信息,如待指示信息本身或者该待指示信息的索引等。也可以通过指示其他信息来间接指示待指示信息,其中该其他信息与待指示信息之间存在关联关系。还可以仅仅指示待指示信息的一部分,而待指示信息的其他部分则是已知的或者提前约定的。例如,还可以借助预先约定(例如协议规定)的各个信息的排列顺序来实现对特定信息的指示,从而在一定程度上降低指示开销。同时,还可以识别各个信息的通用部分并统一指示,以降低单独指示同样的信息而带来的指示开销。The information indicated by the indication information is called the information to be indicated. In the specific implementation process, there are many ways to indicate the information to be indicated, such as but not limited to, the information to be indicated can be directly indicated, such as the information to be indicated itself or the index of the information to be indicated. The information to be indicated can also be indirectly indicated by indicating other information, wherein there is an association between the other information and the information to be indicated. It is also possible to indicate only a part of the information to be indicated, while the other parts of the information to be indicated are known or agreed in advance. For example, the indication of specific information can also be achieved with the help of the arrangement order of each piece of information agreed in advance (for example, stipulated by the protocol), thereby reducing the indication overhead to a certain extent. At the same time, the common parts of each piece of information can also be identified and indicated uniformly to reduce the indication overhead caused by indicating the same information separately.

第二,在本申请中示出的“至少一个”是指一个或者多个,“多个”是指两个或两个以上。另外,在本申请的实施例中,“第一”、“第二”以及各种数字编号(例如,“#1”、“#2”等)只是为了描述方便进行的区分,并不用来限制本申请实施例的范围。下文各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定,应该理解这样描述的对象在适当情况下可以互换,以便能够描述本申请的实施例以外的方案。此外,在本申请实施例中,“S610”等字样仅为了描述方便作出的标识,并不是对执行步骤的次序进行限定。Second, "at least one" shown in the present application refers to one or more, and "multiple" refers to two or more. In addition, in the embodiments of the present application, "first", "second" and various digital numbers (for example, "#1", "#2", etc.) are only for the convenience of description, and are not used to limit the scope of the embodiments of the present application. The size of the sequence number of each process below does not mean the order of execution. The execution order of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present application. It should be understood that the objects described in this way can be interchanged in appropriate circumstances so as to be able to describe the schemes other than the embodiments of the present application. In addition, in the embodiments of the present application, words such as "S610" are only marks made for the convenience of description, and are not used to limit the order of execution steps.

第三,本申请实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其他实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。Third, in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate examples, illustrations or descriptions. Any embodiment or design described as "exemplary" or "for example" in the present application should not be interpreted as being more preferred or more advantageous than other embodiments or designs. Specifically, the use of words such as "exemplary" or "for example" is intended to present related concepts in a specific way.

第四,本申请实施例中涉及的“保存”,可以是指的保存在一个或者多个存储器中。该一个或者多个存储器,可以是单独的设置,也可以是集成在编码器或者译码器,处理器、或通信装置中。该一个或者多个存储器,也可以是一部分单独设置,一部分集成在译码器、处理器、或通信装置中。存储器的类型可以是任意形式的存储介质,本申请并不对此限定。Fourth, the "storage" involved in the embodiments of the present application may refer to storage in one or more memories. The one or more memories may be separately set or integrated in an encoder or decoder, a processor, or a communication device. The one or more memories may also be partially separately set and partially integrated in a decoder, a processor, or a communication device. The type of memory may be any form of storage medium, which is not limited by the present application.

第五,在本申请实施中,“协议”可以指通信领域的标准协议,例如可以包括NR协议以及应用于未来的通信系统中的相关协议,本申请对此不做限定。 Fifth, in the implementation of this application, "protocol" may refer to a standard protocol in the communication field, for example, it may include the NR protocol and related protocols used in future communication systems, and this application does not limit this.

第六,本申请实施例中,“的(of)”,“相应的(corresponding,relevant)”、“对应的(corresponding)”和“关联的(associate)”有时可以混用,应当指出的是,在不强调其区别时,其所要表达的含义是一致的。Sixth, in the embodiments of the present application, the terms “of”, “corresponding, relevant”, “corresponding” and “associate” can sometimes be used interchangeably. It should be pointed out that when the distinction between them is not emphasized, the meanings they intend to express are consistent.

第七,在本申请实施例中,“在…情况下”、“当…时”、“若…”有时可以混用,应当指出的是,在不强调其区别时,其所要表达的含义是一致的。Seventh, in the embodiments of the present application, "under the circumstances", "when", and "if" can sometimes be used interchangeably. It should be pointed out that when the distinction between them is not emphasized, the meanings they intend to express are the same.

第八,本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系。Eighth, the term "and/or" in this article is only a description of the association relationship of the associated objects, indicating that there can be three relationships. For example, A and/or B can mean: A exists alone, A and B exist at the same time, and B exists alone. In addition, the character "/" in this article generally indicates that the associated objects before and after are in an "or" relationship.

下面将结合附图,对本申请中的技术方案进行描述。The technical solution in this application will be described below in conjunction with the accompanying drawings.

本申请实施例提供的技术方案可以适用于无线局域网（wireless local area network，WLAN）场景，例如，支持电气和电子工程师学会（institute of electrical and electronics engineers，IEEE）802.11相关标准，例如802.11a/b/g标准、802.11n标准、802.11ac标准、802.11ax标准、IEEE 802.11ax下一代Wi-Fi协议，如802.11be、Wi-Fi 7、极高吞吐量（extremely high throughput，EHT）、802.11ad、802.11ay或802.11bf，再如802.11be下一代、Wi-Fi 8等，还可以应用于基于超宽带（ultra wide band，UWB）的无线个人局域网系统，如802.15系列标准，还可以应用于感知（sensing）系统，如802.11bf系列标准，还可以应用于802.11bn标准或超高可靠性（ultra-high reliability，UHR）标准。其中，802.11n标准称为高吞吐率（high throughput，HT），802.11ac标准称为非常高吞吐率（very high throughput，VHT）标准，802.11ax标准称为高效（high efficient，HE）标准，802.11be标准称为超高吞吐率（extremely high throughput，EHT）标准。其中，802.11bf包括低频（例如，sub7GHz）和高频（例如，60GHz）两个大类标准。sub7GHz的实现方式主要依托802.11ac、802.11ax、802.11be及下一代等标准，60GHz实现方式主要依托802.11ad、802.11ay及下一代等标准。其中，802.11ad也可以称为定向多吉比特（directional multi-gigabit，DMG）标准，802.11ay也可以称为增强定向多吉比特（enhanced directional multi-gigabit，EDMG）标准。The technical solutions provided in the embodiments of this application are applicable to wireless local area network (WLAN) scenarios, for example, scenarios supporting Institute of Electrical and Electronics Engineers (IEEE) 802.11 related standards, such as the 802.11a/b/g, 802.11n, 802.11ac and 802.11ax standards and next-generation Wi-Fi protocols after IEEE 802.11ax, such as 802.11be, Wi-Fi 7, extremely high throughput (EHT), 802.11ad, 802.11ay or 802.11bf, and further, for example, the next generation of 802.11be, Wi-Fi 8, etc. They are also applicable to wireless personal area network systems based on ultra wide band (UWB), such as the 802.15 series of standards, to sensing systems, such as the 802.11bf series of standards, and to the 802.11bn standard or the ultra-high reliability (UHR) standard. Among them, the 802.11n standard is called the high throughput (HT) standard, the 802.11ac standard is called the very high throughput (VHT) standard, the 802.11ax standard is called the high efficient (HE) standard, and the 802.11be standard is called the extremely high throughput (EHT) standard. 802.11bf includes two broad classes of standards: low frequency (for example, sub-7GHz) and high frequency (for example, 60GHz). The sub-7GHz implementation mainly relies on 802.11ac, 802.11ax, 802.11be and their next generations, while the 60GHz implementation mainly relies on 802.11ad, 802.11ay and their next generations. 802.11ad may also be called the directional multi-gigabit (DMG) standard, and 802.11ay may also be called the enhanced directional multi-gigabit (EDMG) standard.

虽然本申请实施例主要以部署WLAN网络,尤其是应用IEEE 802.11系统标准的网络为例进行说明,本领域技术人员容易理解,本申请实施例涉及的各个方面可以扩展到采用各种标准或协议的其它网络,例如,高性能无线局域网(high performance radio local area network,HIPERLAN)、无线广域网(wireless wide area network,WWAN)、无线个人区域网(wireless personal area network,WPAN)或其它现在已知或以后发展起来的网络。因此,无论使用的覆盖范围和无线接入协议如何,本申请实施例提供的各种方面可以适用于任何合适的无线网络。Although the embodiments of the present application are mainly described by taking the deployment of WLAN networks, especially networks using the IEEE 802.11 system standard as an example, it is easy for those skilled in the art to understand that the various aspects involved in the embodiments of the present application can be extended to other networks that adopt various standards or protocols, such as high performance radio local area networks (HIPERLAN), wireless wide area networks (WWAN), wireless personal area networks (WPAN) or other networks now known or developed later. Therefore, regardless of the coverage range and wireless access protocol used, the various aspects provided in the embodiments of the present application can be applied to any suitable wireless network.

本申请实施例的技术方案还可以应用于各种通信系统,例如:WLAN通信系统,无线保真(wireless fidelity,Wi-Fi)系统、长期演进(long term evolution,LTE)系统、LTE频分双工(frequency division duplex,FDD)系统、LTE时分双工(time division duplex,TDD)、通用移动通信系统(universal mobile telecommunication system,UMTS)、全球互联微波接入(worldwide interoperability for microwave access,WiMAX)通信系统、第五代(5th generation,5G)系统或新无线(new radio,NR)、下一代通信系统、物联网(internet of things,IoT)网络或车联网(vehicle to x,V2X)等。The technical solutions of the embodiments of the present application can also be applied to various communication systems, such as: WLAN communication system, wireless fidelity (Wi-Fi) system, long term evolution (LTE) system, LTE frequency division duplex (FDD) system, LTE time division duplex (TDD), universal mobile telecommunication system (UMTS), worldwide interoperability for microwave access (WiMAX) communication system, fifth generation (5G) system or new radio (NR), next generation communication system, internet of things (IoT) network or vehicle to x (V2X), etc.

上述适用本申请的通信系统仅是举例说明,适用本申请的通信系统不限于此,在此统一说明,以下不再赘述。The above-mentioned communication system applicable to the present application is only an example for illustration, and the communication system applicable to the present application is not limited to this. A unified description is given here and no further elaboration is given below.

图1为本申请实施例适用的一种应用场景的示意图。如图1所示,本申请提供的通信的方法适用于接入点(access point,AP)和站点(station,STA)之间的数据通信,其中,站点可以是非接入点类的站点(none access point station,non-AP STA),简称为非AP站点或STA。具体地,本申请的方案适用于AP与一个或多个非AP站点之间的数据通信(例如,AP1与non-AP STA1、non-AP STA2之间的数据通信),也适用于AP与AP之间的数据通信(例如,AP1与AP2之间的数据通信),以及,non-AP STA与non-AP STA之间的数据通信(例如,non-AP STA2与non-AP STA3之间的数据通信)。FIG1 is a schematic diagram of an application scenario applicable to an embodiment of the present application. As shown in FIG1 , the communication method provided by the present application is applicable to data communication between an access point (AP) and a station (STA), wherein the station may be a non-access point station (non-AP STA), referred to as a non-AP station or STA. Specifically, the scheme of the present application is applicable to data communication between an AP and one or more non-AP stations (for example, data communication between AP1 and non-AP STA1, non-AP STA2), and also to data communication between APs (for example, data communication between AP1 and AP2), and data communication between non-AP STAs and non-AP STAs (for example, data communication between non-AP STA2 and non-AP STA3).

其中,接入点可以为终端(例如,手机)进入有线(或无线)网络的节点,主要部署于家庭、大楼内部以及园区内部,典型覆盖半径为几十米至上百米,当然,也可以部署于户外。接入点相当于一个连接有线网和无线网的桥梁,主要作用是将各个无线网络客户端连接到一起,然后将无线网络接入以太网。Among them, the access point can be a node for terminals (such as mobile phones) to enter the wired (or wireless) network. It is mainly deployed in homes, buildings and parks, with a typical coverage radius of tens to hundreds of meters. Of course, it can also be deployed outdoors. The access point is equivalent to a bridge connecting the wired network and the wireless network. Its main function is to connect various wireless network clients together and then connect the wireless network to the Ethernet.

具体的，接入点可以是带有Wi-Fi芯片的终端或者网络设备，该网络设备可以为服务器、路由器、交换机、网桥、计算机、手机、中继站、车载设备、可穿戴设备、5G网络中的网络设备以及未来通信网络中的网络设备或者公用陆地移动通信网络（public land mobile network，PLMN）中的网络设备等，本申请实施例并不限定。接入点可以为支持Wi-Fi制式的设备。例如，接入点也可以支持802.11a、802.11b、802.11g、802.11n、802.11ac、802.11ax、802.11be、802.11ad、802.11ay、802.11bn、802.11bf等IEEE 802.11系列的一种或多种标准。Specifically, the access point may be a terminal or a network device with a Wi-Fi chip, and the network device may be a server, a router, a switch, a bridge, a computer, a mobile phone, a relay station, a vehicle-mounted device, a wearable device, a network device in a 5G network, a network device in a future communication network, or a network device in a public land mobile network (PLMN), etc., which is not limited in the embodiments of this application. The access point may be a device supporting the Wi-Fi standard. For example, the access point may support one or more standards of the IEEE 802.11 series, such as 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.11ax, 802.11be, 802.11ad, 802.11ay, 802.11bn and 802.11bf.

非AP站点可以为无线通讯芯片、无线传感器或无线通信终端等,也可称为用户、用户设备(user equipment,UE)、接入终端、用户单元、用户站、移动站、移动台、远方站、远程终端、移动设备、用户终端、终端、无线通信设备、用户代理或用户装置。非AP站点可以是蜂窝电话、无绳电话、会话启动协议(session initiation protocol,SIP)电话、无线本地环路(wireless local loop,WLL)站、个人数字处理(personal digital assistant,PDA)、具有无线通信功能的手持设备、计算设备或连接到无线调制解调器的其它处理设备、车载设备、物联网设备、可穿戴设备、5G网络中的终端设备、未来通信网络中的终端设备或者PLMN中的终端设备等,本申请实施例对此并不限定。非AP站点可以为支持WLAN制式的设备。例如,非AP站点可以支持802.11a、802.11b、802.11g、802.11n、802.11ac、802.11ax、802.11be、802.11ad、802.11ay、802.11bf等IEEE 802.11系列的一种或多种标准。A non-AP site may be a wireless communication chip, a wireless sensor or a wireless communication terminal, etc., and may also be referred to as a user, user equipment (UE), an access terminal, a user unit, a user station, a mobile station, a mobile station, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent or a user device. A non-AP site may be a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device with wireless communication function, a computing device or other processing device connected to a wireless modem, a vehicle-mounted device, an Internet of Things device, a wearable device, a terminal device in a 5G network, a terminal device in a future communication network or a terminal device in a PLMN, etc., and the embodiments of the present application are not limited to this. A non-AP site may be a device that supports the WLAN format. For example, a non-AP site can support one or more standards in the IEEE 802.11 series, such as 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.11ax, 802.11be, 802.11ad, 802.11ay, and 802.11bf.

例如,非AP站点可以为移动电话、平板电脑、机顶盒、智能电视、智能可穿戴设备、车载通信设备、计算机、物联网(internet of things,IoT)节点、传感器、智慧家居,如智能摄像头、智能遥控器、智能水表电表、以及智慧城市中的传感器等。For example, non-AP sites can be mobile phones, tablets, set-top boxes, smart TVs, smart wearable devices, vehicle-mounted communication equipment, computers, Internet of Things (IoT) nodes, sensors, smart homes such as smart cameras, smart remote controls, smart water and electricity meters, and sensors in smart cities.

上述AP或非AP站点可以包括发送器、接收器、存储器、处理器等,其中,发送器和接收器分别用于分组结构的发送和接收,存储器用于存储信令信息以及存储提前约定的预设值等,处理器用于解析信令信息、处理相关数据等。The above-mentioned AP or non-AP site may include a transmitter, a receiver, a memory, a processor, etc., wherein the transmitter and the receiver are respectively used for sending and receiving packet structures, the memory is used to store signaling information and store preset values agreed in advance, etc., and the processor is used to parse signaling information, process related data, etc.

例如,图2示出了本申请提供的一种通信装置,图2所示的装置可以为AP,也可以为非AP站点。具体地,该实施例中涉及的通信装置包括神经网络处理单元(neural network processing unit,NPU),该NPU用于进行神经网络参数的训练。可选地,通信装置还可以包括以下一种或多种:中央处理器,介质接入控制(medium access control,MAC)层处理模块,收发机,或天线等。For example, FIG2 shows a communication device provided by the present application, and the device shown in FIG2 can be an AP or a non-AP site. Specifically, the communication device involved in this embodiment includes a neural network processing unit (NPU), which is used to train neural network parameters. Optionally, the communication device may also include one or more of the following: a central processing unit, a medium access control (MAC) layer processing module, a transceiver, or an antenna, etc.

作为示例而非限定,NPU包含训练模块和推理模块。训练好的神经网络参数可以反馈给推理模块。NPU可以作用到网络节点的各个其他模块。示例性地,NPU可以作用中央处理器、MAC、收发机、或天线。可选地,NPU可以作用到各个模块的AI任务。例如,NPU和收发机交互,决策收发机的开关用于节能;还例如,NPU与天线交互,控制天线的朝向;又例如,NPU与MAC交互,控制信道接入、信道选择等。As an example and not a limitation, the NPU includes a training module and an inference module. The trained neural network parameters can be fed back to the inference module. The NPU can act on various other modules of the network node. Exemplarily, the NPU can act on the central processing unit, MAC, transceiver, or antenna. Optionally, the NPU can act on the AI tasks of each module. For example, the NPU interacts with the transceiver to decide whether to switch the transceiver on or off for energy saving; for another example, the NPU interacts with the antenna to control the direction of the antenna; for another example, the NPU interacts with the MAC to control channel access, channel selection, etc.

应理解,图2仅为本申请提供的一种装置的示例,并不构成本申请的限定,例如,该装置也可以包括控制器和/或调度器。还例如,该装置中NPU的功能可以由中央处理器实现。又例如,装置中包括的模块可能有其他名称,这里不再一一举例说明。It should be understood that FIG. 2 is only an example of a device provided by the present application and does not constitute a limitation of the present application. For example, the device may also include a controller and/or a scheduler. For another example, the function of the NPU in the device may be implemented by a central processing unit. For another example, the modules included in the device may have other names, which are not illustrated one by one here.

为了便于理解本申请实施例的技术方案,首先对本申请实施例可能涉及到的一些术语或概念进行简单描述。In order to facilitate understanding of the technical solutions of the embodiments of the present application, some terms or concepts that may be involved in the embodiments of the present application are first briefly described.

1、神经网络(neural network,NN):是一种模拟人脑神经网络以期能够实现类人工智能的机器学习技术。神经网络至少包括3层,一个输入层、一个中间层(也称隐藏层)以及一个输出层。更深一些的神经网络可能在输入层和输出层之间包含更多的隐藏层。1. Neural network (NN): It is a machine learning technology that simulates the human brain neural network in order to achieve artificial intelligence-like. A neural network consists of at least 3 layers: an input layer, an intermediate layer (also called a hidden layer), and an output layer. Deeper neural networks may contain more hidden layers between the input layer and the output layer.

为了便于理解,结合图3对神经网络内部的结构和实现进行说明,参见图3,图3是包含3个层的全连接神经网络示意图。如图3所示,该神经网络包括3个层,分别是输入层、隐藏层以及输出层,其中输入层有3个神经元,隐藏层有4个神经元,输出层有2个神经元,并且每层神经元与下一层神经元全连接。神经元之间的每条连线对应一个权重(weight),这些权重通过训练可以更新。隐藏层和输出层的每个神经元还可以对应一个偏置(bias),这些偏置通过训练可以更新。更新神经网络是指更新这些权重和偏置。知道了神经网络的结构即每层包含的神经元个数以及前面的神经元的输出如何输入后面的神经元(即神经元之间的连接关系),再加上神经网络的参数即权重和偏置,就知道了该神经网络的全部信息。For ease of understanding, the internal structure and implementation of the neural network are explained in conjunction with Figure 3. See Figure 3, which is a schematic diagram of a fully connected neural network containing 3 layers. As shown in Figure 3, the neural network includes 3 layers, namely the input layer, the hidden layer and the output layer, wherein the input layer has 3 neurons, the hidden layer has 4 neurons, the output layer has 2 neurons, and each layer of neurons is fully connected to the neurons in the next layer. Each connection between neurons corresponds to a weight, which can be updated through training. Each neuron in the hidden layer and the output layer can also correspond to a bias, which can be updated through training. Updating the neural network means updating these weights and biases. Knowing the structure of the neural network, that is, the number of neurons contained in each layer and how the output of the previous neuron is input into the subsequent neuron (that is, the connection relationship between neurons), plus the parameters of the neural network, that is, the weights and biases, you will know all the information about the neural network.
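作为示意，图3所示3-4-2全连接网络的结构可以用各层权重和偏置的形状写出（下面的数值仅为占位，实际值通过训练得到）。As an illustration, the structure of the 3-4-2 fully connected network shown in FIG. 3 can be written out as the shapes of each layer's weights and biases (the values below are placeholders; actual values are obtained through training):

```python
# The 3-4-2 fully connected network of FIG. 3, written as parameter shapes.
# Zero initial values are placeholders; training updates weights and biases.

layer_sizes = [3, 4, 2]  # input, hidden, output neuron counts

# weights[i][j][k]: weight on the connection from neuron k of layer i
# to neuron j of layer i+1; biases[i][j]: bias of neuron j of layer i+1.
weights = [[[0.0] * layer_sizes[i] for _ in range(layer_sizes[i + 1])]
           for i in range(len(layer_sizes) - 1)]
biases = [[0.0] * layer_sizes[i + 1] for i in range(len(layer_sizes) - 1)]

num_weights = sum(layer_sizes[i] * layer_sizes[i + 1]
                  for i in range(len(layer_sizes) - 1))  # 3*4 + 4*2 = 20
num_biases = sum(layer_sizes[1:])                        # 4 + 2 = 6
```

由此可知，图3的网络共有20个权重和6个偏置需要通过训练更新。Thus the network of FIG. 3 has 20 weights and 6 biases to be updated through training.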

2、神经网络输出:每个神经元可能有多条输入连线,每个神经元根据输入计算输出。2. Neural network output: Each neuron may have multiple input connections, and each neuron calculates the output based on the input.

为了便于理解,结合图4简单介绍如何计算神经元的输出。如图4所示,一个神经元包含3个输入,1个输出,以及2个计算功能,输出的计算公式可以表示为:For ease of understanding, we briefly introduce how to calculate the output of a neuron in conjunction with Figure 4. As shown in Figure 4, a neuron contains 3 inputs, 1 output, and 2 calculation functions. The calculation formula for the output can be expressed as:

输出=激活函数(输入1*权重1+输入2*权重2+输入3*权重3+偏置)      (1-1)Output = activation function (input 1 * weight 1 + input 2 * weight 2 + input 3 * weight 3 + bias)      (1-1)

其中,公式(1-1)中的符号“*”表示数学运算“乘”或“乘以”,下文不再赘述。The symbol “*” in formula (1-1) represents the mathematical operation “multiplication” or “times”, which will not be described in detail below.

应理解，每个神经元可能有多条输出连线，一个神经元的输出作为下一个神经元的输入。需要说明的是，输入层只有输出连线，输入层的每个神经元是输入神经网络的值，每个神经元的输出值直接作为所有输出连线的输入。输出层只有输入连线，采用上述公式(1-1)的计算方式计算输出。It should be understood that each neuron may have multiple output connections, and the output of one neuron serves as the input of the next neuron. It should be noted that the input layer has only output connections: each neuron in the input layer is a value input to the neural network, and that value serves directly as the input of all of its output connections. The output layer has only input connections, and its output is calculated using formula (1-1) above.

可选的,输出层可以没有激活函数的计算,也就是说前述公式(1-1)可以变换成:Optionally, the output layer may not have the activation function calculated, that is, the above formula (1-1) can be transformed into:

输出=输入1*权重1+输入2*权重2+输入3*权重3+偏置。Output = Input 1 * Weight 1 + Input 2 * Weight 2 + Input 3 * Weight 3 + Bias.
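作为示意，公式(1-1)及其无激活函数的变体可以用如下Python代码表示（其中sigmoid仅为示例激活函数，并非本申请限定的激活函数）。As an illustration, formula (1-1) and its variant without an activation function can be expressed in the following Python code (sigmoid here is only an example activation function, not one mandated by this application):

```python
import math

# Formula (1-1) for a neuron with an arbitrary number of inputs;
# sigmoid is used only as an example activation function.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(inputs, weights, bias, activation=sigmoid):
    """output = activation(input1*weight1 + input2*weight2 + ... + bias)"""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activation(z)

def neuron_output_no_activation(inputs, weights, bias):
    """Output-layer variant: no activation function is applied."""
    return sum(x * w for x, w in zip(inputs, weights)) + bias
```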

举例来说,k层神经网络的输出可以表示为:For example, the output of a k-layer neural network can be expressed as:

y=fk(fk-1(…f1(w1*x+b1)…))                                                           (1-2)

其中,公式(1-2)中的x表示神经网络的输入,y表示神经网络的输出,wi表示第i层神经网络的权重,bi表示第i层神经网络的偏置,fi表示第i层神经网络的激活函数。i=1,2,…,k。In formula (1-2), x represents the input of the neural network, y represents the output of the neural network, wi represents the weight of the i-th layer of the neural network, bi represents the bias of the i-th layer of the neural network, and fi represents the activation function of the i-th layer of the neural network. i=1,2,…,k.
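The layer-by-layer evaluation of formula (1-2) can be sketched as follows (scalar per-layer weights and the specific activation functions are illustrative assumptions):

```python
def forward(x, weights, biases, activations):
    # Formula (1-2): apply y = f_i(w_i * y + b_i) for i = 1..k in order.
    y = x
    for w, b, f in zip(weights, biases, activations):
        y = f(w * y + b)
    return y

identity = lambda v: v        # output layer without an activation function
relu = lambda v: max(0.0, v)  # hidden-layer activation

# A 2-layer example: hidden layer with ReLU, output layer without activation.
out = forward(1.0, weights=[2.0, 3.0], biases=[0.5, -1.0],
              activations=[relu, identity])
# layer 1: relu(2.0*1.0 + 0.5) = 2.5; layer 2: 3.0*2.5 - 1.0 = 6.5
```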

3、人工智能(artificial intelligence,AI):无线通信迅猛发展,5G和第六代Wi-Fi标准已经商用,下一代无线技术和标准化正在全球范围如火如荼的进行中。无线通信已经渗透日常生活和工作的各个方面,成为不可或缺的部分。随着智能终端数目的高速增长,以及IoT设备的普及,催生了虚拟现实、增强现实和全息影像等层出不穷的新型无线应用。新型无线技术、新型终端、新型应用使无线网络变得空前复杂。可以预见,未来的无线网络会越发复杂。为了对抗无线网络的高复杂性发展趋势,AI作为无线网络设计和管理的有效工具已经成为业界的共识。在无线网络中,AI的优势作用体现在四个方面:3. Artificial intelligence (AI): Wireless communications are developing rapidly. 5G and the sixth-generation Wi-Fi standards have been commercialized, and the next-generation wireless technology and standardization are in full swing around the world. Wireless communications have penetrated into all aspects of daily life and work and have become an indispensable part. With the rapid growth in the number of smart terminals and the popularity of IoT devices, new wireless applications such as virtual reality, augmented reality, and holographic imaging have emerged. New wireless technologies, new terminals, and new applications have made wireless networks unprecedentedly complex. It is foreseeable that wireless networks will become more and more complex in the future. In order to combat the high complexity of wireless networks, AI has become an industry consensus as an effective tool for wireless network design and management. In wireless networks, the advantages of AI are reflected in four aspects:

1)解决没有数学模型的复杂网络问题;2)解决搜索空间大的无线网络管理问题;3)带来跨层和跨节点的网络级全局优化效果;4)通过AI的预测能力,主动优化无线网络参数。1) Solve complex network problems without mathematical models; 2) Solve wireless network management problems with large search spaces; 3) Bring about cross-layer and cross-node network-level global optimization effects; 4) Actively optimize wireless network parameters through AI’s predictive capabilities.

4、基于Givens rotation的CSI反馈：目前标准中使用基于Givens rotation的CSI反馈方法。例如，AP先发送NDPA，再发送NDP进行信道测量，STA进行信道估计后，得到当前信道H，对H进行奇异值分解(singular value decomposition,SVD)得到预编码矩阵V，对V进行Givens rotation操作将其转化为φ和ψ两类角度，量化后反馈这些角度给AP，AP从收到的角度中恢复得到预编码矩阵，用于预编码。4. CSI feedback based on Givens rotation: The current standard uses a CSI feedback method based on Givens rotation. For example, the AP first sends an NDPA and then sends an NDP for channel measurement. After channel estimation, the STA obtains the current channel H, performs singular value decomposition (SVD) on H to obtain the precoding matrix V, performs a Givens rotation operation on V to convert it into two types of angles, φ and ψ, and then quantizes these angles and feeds them back to the AP. The AP recovers the precoding matrix from the received angles and uses it for precoding.
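The SVD step of this procedure can be sketched with NumPy as follows (the channel matrix is randomly generated purely for illustration; the Givens rotation decomposition and angle quantization are omitted):

```python
import numpy as np

# Illustrative 4x2 complex channel matrix H, standing in for the STA's estimate.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))

# SVD of H; the right singular vectors give the precoding matrix V.
_, _, Vh = np.linalg.svd(H, full_matrices=False)
V = Vh.conj().T  # columns of V are the precoding vectors

# Sanity check: V has orthonormal columns, i.e. V^H V = I.
orth_err = np.linalg.norm(V.conj().T @ V - np.eye(V.shape[1]))
```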

上文结合图1简单介绍了本申请实施例提供的通信方法能够应用的场景,以及介绍了本申请实施例中可能涉及到的基本概念,并在基本概念中介绍了AI和无线通信的联系,以及目前标准中基于Givens rotation的CSI反馈方法。应理解,随着带宽的增大以及天线数的增加,继续采用基于Givens rotation的CSI反馈方法进行信道状态信息反馈,将极大增加信道状态信息反馈的开销。而随着AI技术的发展,基于AI的CSI反馈方案在Wi-Fi标准中引起广泛讨论。The above text briefly introduces the scenarios in which the communication method provided in the embodiment of the present application can be applied in combination with Figure 1, and introduces the basic concepts that may be involved in the embodiment of the present application, and introduces the connection between AI and wireless communication in the basic concepts, as well as the CSI feedback method based on Givens rotation in the current standard. It should be understood that with the increase in bandwidth and the increase in the number of antennas, continuing to use the CSI feedback method based on Givens rotation for channel state information feedback will greatly increase the overhead of channel state information feedback. And with the development of AI technology, the AI-based CSI feedback scheme has caused extensive discussion in the Wi-Fi standard.

示例性地,基于AI的CSI反馈方案包括但不限于以下两种方案:Exemplarily, AI-based CSI feedback solutions include but are not limited to the following two solutions:

方案一:使用AI方法尽可能提高CSI压缩比,减少反馈开销,从而提高系统吞吐。Solution 1: Use AI methods to maximize the CSI compression ratio and reduce feedback overhead, thereby improving system throughput.

示例性地,方案一的具体实现可以是:For example, the specific implementation of solution 1 may be:

在训练阶段，STA按照标准方法(如，基于Givens rotation的CSI反馈方法)进行CSI反馈，AP利用收集的数据训练编码器(encoder)和解码器(decoder)，完成训练后，将编码器下发给STA。在推理阶段，STA进行信道测量后，将预编码矩阵V输入编码器，得到输出向量m后反馈给AP，AP使用解码器基于m恢复得到预编码矩阵，进行预编码。In the training phase, the STA performs CSI feedback according to the standard method (e.g., the Givens-rotation-based CSI feedback method), and the AP uses the collected data to train the encoder and decoder. After training is completed, the encoder is sent to the STA. In the inference phase, after the STA performs channel measurement, it inputs the precoding matrix V into the encoder, obtains the output vector m, and feeds m back to the AP. The AP uses the decoder to recover the precoding matrix from m for precoding.

为了便于理解,结合图5介绍该方案一所示的情况下,基于AI的CSI反馈。For ease of understanding, the AI-based CSI feedback is introduced in conjunction with FIG5 under the situation shown in the first solution.

图5是一种基于AI的CSI反馈的方法示意图,包括以下步骤:FIG5 is a schematic diagram of a method for CSI feedback based on AI, comprising the following steps:

S510,AP向STA发送NDP。该NDP用于进行信道测量。S510: The AP sends an NDP to the STA. The NDP is used for channel measurement.

S520,STA进行信道估计。具体地,STA基于NDP进行信道估计后,得到当前信道H,对H进行SVD得到预编码矩阵V,对V进行Givens rotation操作将其转化为φ和ψ两类角度。S520, STA performs channel estimation. Specifically, after performing channel estimation based on NDP, STA obtains the current channel H, performs SVD on H to obtain the precoding matrix V, and performs Givens rotation operation on V to convert it into two types of angles, φ and ψ.

S530,STA向AP发送φ和ψ。S530, the STA sends φ and ψ to the AP.

S540,AP确定编码器和解码器。具体地,AP利用收集的数据训练编码器和解码器。S540, the AP determines an encoder and a decoder. Specifically, the AP trains the encoder and the decoder using the collected data.

S550,AP向STA发送编码器。S550: The AP sends the encoder to the STA.

应理解,步骤S510至S550即上述的训练阶段,该阶段STA按照标准方法进行CSI反馈。It should be understood that steps S510 to S550 are the aforementioned training phase, during which the STA performs CSI feedback according to a standard method.

S560,AP向STA发送NDP。S560: The AP sends an NDP to the STA.

S570,STA进行信道估计。具体地,STA进行信道测量后,将预编码矩阵V输入编码器,得到输出结果m。S570, STA performs channel estimation. Specifically, after performing channel measurement, STA inputs the precoding matrix V into the encoder to obtain an output result m.

S580,STA向AP发送m。S580, the STA sends m to the AP.

S590，AP确定预编码矩阵。具体地，AP基于解码器对m进行解码，恢复得到预编码矩阵。S590, the AP determines the precoding matrix. Specifically, the AP uses the decoder to decode m and recover the precoding matrix.

方案二:使用AI方法降低CSI反馈过程中的计算复杂度。在保证反馈量不大于当前标准使用的基于Givens rotation的方法的基础上,用神经网络取代Givens rotation的计算,降低计算复杂度。 Solution 2: Use AI methods to reduce the computational complexity of the CSI feedback process. On the basis of ensuring that the feedback amount is no greater than the Givens rotation-based method used in the current standard, a neural network is used to replace the Givens rotation calculation to reduce the computational complexity.

方案二的具体实现可以是:STA在进行信道测量后,将预编码矩阵V输入编码器(如,全连接神经网络),得到输出结果m’。应理解,方案二所示的情况下,重点在于通过降低基于编码矩阵V确定m’的计算复杂度,实现降低CSI反馈计算过程的复杂度,因此方案二中STA所使用的编码器和方案一中STA所使用的编码器结构不同。The specific implementation of Scheme 2 may be: after the STA performs channel measurement, the precoding matrix V is input into the encoder (e.g., a fully connected neural network) to obtain the output result m'. It should be understood that in the case shown in Scheme 2, the focus is on reducing the complexity of the CSI feedback calculation process by reducing the computational complexity of determining m' based on the coding matrix V. Therefore, the encoder used by the STA in Scheme 2 has a different structure from the encoder used by the STA in Scheme 1.

上述的两种方案对应的编码器结构不同,不同的编码器结构会导致需要多套互联互通的协议支持,存在管理开销。另外,不同的编码器结构也会增加编码器的存储开销。The above two solutions correspond to different encoder structures. Different encoder structures will require multiple sets of interconnection and interoperability protocol support, which will cause management overhead. In addition, different encoder structures will also increase the storage overhead of the encoder.

本申请提供一种通信方法,获取能够包含于不同功能的神经网络(或者说编码器)的神经网络块,以满足不同场景下对于神经网络的需求,提高基于AI进行信道信息反馈的性能。下面将结合图6详细介绍该通信方法。The present application provides a communication method for obtaining a neural network block that can be included in a neural network (or encoder) with different functions to meet the needs of neural networks in different scenarios and improve the performance of channel information feedback based on AI. The communication method will be described in detail below in conjunction with FIG6.

下面将结合附图详细说明本申请提供的技术方案。本申请实施例可以应用于多个不同的场景下,包括图1所示的场景,但并不限于该场景。例如,还可以应用于5G、下一代通信系统或未来通信系统中。The technical solution provided by the present application will be described in detail below with reference to the accompanying drawings. The embodiments of the present application can be applied to a plurality of different scenarios, including the scenario shown in FIG1 , but are not limited to the scenario. For example, it can also be applied to 5G, next generation communication systems or future communication systems.

应理解,下文示出的实施例并未对本申请实施例提供的方法的执行主体的具体结构特别限定,只要能够通过运行记录有本申请实施例的提供的方法的代码的程序,以根据本申请实施例提供的方法进行通信即可,例如,本申请实施例提供的方法的执行主体可以是接收端设备或发送端设备,或者,是接收端设备或发送端设备中能够调用程序并执行程序的功能模块。It should be understood that the embodiments shown below do not particularly limit the specific structure of the execution subject of the method provided by the embodiments of the present application. As long as it is possible to communicate according to the method provided by the embodiments of the present application by running a program that records the code of the method provided by the embodiments of the present application, for example, the execution subject of the method provided by the embodiments of the present application may be a receiving device or a sending device, or a functional module in the receiving device or the sending device that can call and execute the program.

以下，不失一般性，以第一通信装置和第二通信装置之间的交互为例详细说明本申请实施例提供的通信方法，本申请实施例中涉及的第一通信装置可以为接入点AP、第二通信装置可以为非接入点non-AP(如，STA)；或者，第一通信装置可以为STA、第二通信装置可以为AP；或者，第一通信装置和第二通信装置均为接入点AP；或者，第一通信装置和第二通信装置均为非接入点non-AP；或者，第一通信装置为网络设备(或网络设备中的集中式单元(central unit,CU)或分布式单元(distributed unit,DU))，第二通信装置为终端设备；或者，第一通信装置可以为开放式无线接入网(open radio access network,O-RAN)中的网络设备(或网络设备中的CU(如，称为O-CU(开放式CU))或DU(如，称为O-DU(开放式DU)))，第二通信装置为终端设备等。In the following, without loss of generality, the communication method provided in the embodiments of the present application is described in detail by taking the interaction between a first communication device and a second communication device as an example. The first communication device involved in the embodiments of the present application may be an access point (AP) and the second communication device may be a non-access point (non-AP, e.g., a STA); or, the first communication device may be a STA and the second communication device may be an AP; or, the first communication device and the second communication device may both be APs; or, the first communication device and the second communication device may both be non-APs; or, the first communication device may be a network device (or a central unit (CU) or a distributed unit (DU) in the network device) and the second communication device may be a terminal device; or, the first communication device may be a network device in an open radio access network (O-RAN) (or a CU (e.g., called an O-CU (open CU)) or a DU (e.g., called an O-DU (open DU)) in the network device) and the second communication device may be a terminal device, etc.

另外,下述实施例中所涉及的第一神经网络块可以称为NN块(block),第一神经网络块理解为:可以堆叠(或者说可以复用)的网络结构,第一神经网络块可以包括一层或多层卷积神经网络(convolutional neural network,CNN)、全连接网络(fully-connected network,FC)、或转换器(Transformer)等。例如,第一神经网络块的网络结构为:包括一层或多层CNN的网络结构;还例如,第一神经网络块的网络结构为:包括一层或多层FC的网络结构;又例如,第一神经网络块的网络结构为:包括一层或多层Transformer的网络结构;又例如,第一神经网络块的网络结构为:包括一层或多层CNN以及一层或多层FC的网络结构等,这里不再一一举例说明。In addition, the first neural network block involved in the following embodiments may be referred to as an NN block. The first neural network block is understood as a network structure that can be stacked (or reused). The first neural network block may include one or more layers of convolutional neural networks (CNN), fully-connected networks (FC), or transformers, etc. For example, the network structure of the first neural network block is a network structure including one or more layers of CNN; for another example, the network structure of the first neural network block is a network structure including one or more layers of FC; for another example, the network structure of the first neural network block is a network structure including one or more layers of Transformer; for another example, the network structure of the first neural network block is a network structure including one or more layers of CNN and one or more layers of FC, etc., and examples are not given one by one here.

图6是本申请实施例提供的一种通信方法的示意性流程图,包括以下步骤:FIG6 is a schematic flow chart of a communication method provided in an embodiment of the present application, comprising the following steps:

S610,第一通信装置获取第一神经网络块。S610, the first communication device obtains a first neural network block.

具体地,该实施例中涉及的神经网络块(如,第一神经网络块)用于信道信息反馈,可以辅助信道信息反馈过程中对信道信息进行处理。包含不同数量的第一神经网络块的神经网络的对信道信息处理的功能不同。Specifically, the neural network block (e.g., the first neural network block) involved in this embodiment is used for channel information feedback, and can assist in processing channel information during the channel information feedback process. Neural networks containing different numbers of first neural network blocks have different functions for processing channel information.

该实施例中涉及的信道信息包括但不限于上述的CSI,该CSI包括预编码矩阵指示(precoding matrix indicator,PMI),该PMI用于指示预编码矩阵。另外,该实施例中涉及的信道信息还可以包括其他能够用于反映信道状态的信息、指示第一通信装置和第二通信装置之间的信道状态的信息、或其他承载PMI的信息。为了便于描述,下文中以信道信息为CSI为例进行说明。The channel information involved in this embodiment includes but is not limited to the above-mentioned CSI, and the CSI includes a precoding matrix indicator (PMI), and the PMI is used to indicate the precoding matrix. In addition, the channel information involved in this embodiment may also include other information that can be used to reflect the channel state, information indicating the channel state between the first communication device and the second communication device, or other information carrying the PMI. For ease of description, the channel information is CSI as an example for explanation below.

示例性地,第一神经网络块为第一通信装置获取的至少一个神经网络块中的任意一个,该实施例中对于第一通信装置所获取的神经网络块的个数不做任何限定,不同神经网络块的结构不同。例如,第一通信装置获取第一神经网络块和第二神经网络块,该第一神经网络块的结构和第二神经网络块的结构不同。Exemplarily, the first neural network block is any one of the at least one neural network block acquired by the first communication device. In this embodiment, there is no limitation on the number of neural network blocks acquired by the first communication device, and different neural network blocks have different structures. For example, the first communication device acquires a first neural network block and a second neural network block, and the structure of the first neural network block is different from the structure of the second neural network block.

具体地，包含不同数量的第一神经网络块的神经网络的功能不同可以理解为：该实施例中的第一神经网络块可以通过堆叠实现不同的功能。例如，神经网络#1包括N1个第一神经网络块，神经网络#2包括N2个第一神经网络块，N1和N2为不相等的正整数，神经网络#1和神经网络#2实现的功能不同。作为示例而非限定，神经网络#1能够实现上述方案一中的编码器的功能(提高CSI压缩比，减少反馈开销，提升系统吞吐)，神经网络#2能够实现上述方案二中的编码器的功能(降低CSI反馈过程中的计算复杂度)。Specifically, the fact that neural networks containing different numbers of first neural network blocks have different functions can be understood as follows: the first neural network blocks in this embodiment can achieve different functions through stacking. For example, neural network #1 includes N1 first neural network blocks, and neural network #2 includes N2 first neural network blocks, where N1 and N2 are unequal positive integers, and the functions achieved by neural network #1 and neural network #2 are different. As an example and not a limitation, neural network #1 can achieve the function of the encoder in solution 1 above (improving the CSI compression ratio, reducing feedback overhead, and improving system throughput), and neural network #2 can achieve the function of the encoder in solution 2 above (reducing the computational complexity of the CSI feedback process).

当一个神经网络包括多个第一神经网络块时,多个第一神经网络块之间的连接方式可以是深度连接或宽度连接。When a neural network includes multiple first neural network blocks, the connection between the multiple first neural network blocks can be deep connection or wide connection.

为了便于理解,下面结合图7中(a)至(d)简单介绍4种由第一神经网络块堆叠的得到的神经网络结构。To facilitate understanding, the following briefly introduces four neural network structures obtained by stacking the first neural network blocks in combination with (a) to (d) in Figure 7.

如图7中(a)所示,某个神经网络包括多个支持CNN的第一神经网络块和一个输出层,其中,多个支持CNN的第一神经网络块通过深度连接的方式相连接,该多个支持CNN的第一神经网络块的结构相同而对应的参数不同,每个第一神经网络块的参数与第一神经网络块在神经网络中位置有关。As shown in (a) of Figure 7, a neural network includes multiple first neural network blocks supporting CNN and an output layer, wherein the multiple first neural network blocks supporting CNN are connected via deep connections, the multiple first neural network blocks supporting CNN have the same structure but different corresponding parameters, and the parameters of each first neural network block are related to the position of the first neural network block in the neural network.

如图7中(b)所示,某个神经网络包括多个支持FC的第一神经网络块和一个输出层,其中,多个支持FC的第一神经网络块通过深度连接的方式相连接,该多个支持FC的第一神经网络块的结构相同而对应的参数不同。As shown in (b) of Figure 7, a neural network includes multiple first neural network blocks supporting FC and an output layer, wherein the multiple first neural network blocks supporting FC are connected via deep connections, and the multiple first neural network blocks supporting FC have the same structure but different corresponding parameters.

如图7中(c)所示,某个神经网络包括多个支持FC的第一神经网络块和一个输出层,其中,多个支持FC的第一神经网络块通过宽度连接的方式相连接,该多个支持FC的第一神经网络块的结构相同而对应的参数不同。As shown in (c) of FIG. 7 , a neural network includes multiple first neural network blocks supporting FC and an output layer, wherein the multiple first neural network blocks supporting FC are connected via width connections, and the multiple first neural network blocks supporting FC have the same structure but different corresponding parameters.

如图7中(d)所示,某个神经网络包括多个支持Transformer的第一神经网络块和一个输出层,其中,多个支持Transformer的第一神经网络块通过宽度连接的方式相连接。As shown in (d) of FIG. 7 , a neural network includes a plurality of first neural network blocks supporting Transformer and an output layer, wherein the plurality of first neural network blocks supporting Transformer are connected by width connection.

从图7中(a)至(d)可以看出,某个神经网络中包括的不同第一神经网络块之间的连接方式可以是深度连接和/或宽度连接,其中,两个第一神经网络块之间为深度连接方式表示:该两个第一神经网络块中的前一个第一神经网络块的输出作为后一个第一神经网络块的输入。两个第一神经网络块之间为宽度连接方式表示:该两个第一神经网络块的输入为相同的数据(或相同的神经网络块的输出),且该两个第一神经网络块的输出为输出到同一个输出层或下一个神经网络块。As can be seen from (a) to (d) in FIG7 , the connection mode between different first neural network blocks included in a certain neural network can be a deep connection and/or a wide connection, wherein the deep connection mode between two first neural network blocks means that the output of the first first neural network block of the two first neural network blocks is used as the input of the second first neural network block. The wide connection mode between two first neural network blocks means that the inputs of the two first neural network blocks are the same data (or the outputs of the same neural network block), and the outputs of the two first neural network blocks are output to the same output layer or the next neural network block.
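The two connection modes described above can be sketched with simple stand-in blocks (the affine "blocks" below are placeholders for real neural network blocks, chosen only to make the data flow visible):

```python
def make_block(scale, shift):
    # Stand-in for a first neural network block: same structure,
    # different parameters (scale, shift) per block.
    return lambda xs: [scale * v + shift for v in xs]

blocks = [make_block(2.0, 0.0), make_block(1.0, 5.0)]

def depth_connect(blocks, xs):
    # Depth connection: each block's output feeds the next block.
    for block in blocks:
        xs = block(xs)
    return xs

def width_connect(blocks, xs):
    # Width connection: all blocks receive the same input, and their
    # outputs are gathered together for the next stage or output layer.
    out = []
    for block in blocks:
        out.extend(block(xs))
    return out

deep = depth_connect(blocks, [1.0])   # block2(block1(x)) -> [7.0]
wide = width_connect(blocks, [1.0])   # [block1(x), block2(x)] -> [2.0, 6.0]
```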

应理解,图7中(a)至(d)只是示例性给出第一神经网络块和输出层可以构成不同需求的神经网络,对本申请的保护范围不构成任何的限定,该实施例中第一神经网络块和输出层还可以构成其他模式的神经网络,例如,构成层数不同神经网络,这里不再一一举例说明。It should be understood that (a) to (d) in FIG. 7 are only illustrative examples of how the first neural network block and the output layer can constitute neural networks for different requirements, and do not constitute any limitation on the scope of protection of the present application. In this embodiment, the first neural network block and the output layer can also constitute neural networks of other modes, for example, neural networks with different numbers of layers, which will not be explained one by one here.

需要说明的是,对于深度连接或宽度连接的神经网络,包含的第一神经网络块越多,计算复杂度越高,反馈比特越少,系统吞吐越高。相反,若包含的第一神经网络块越少,神经网络计算复杂度越低。It should be noted that for a deep-connected or wide-connected neural network, the more first neural network blocks it contains, the higher the computational complexity, the fewer feedback bits, and the higher the system throughput. On the contrary, if the fewer first neural network blocks it contains, the lower the computational complexity of the neural network.

为了便于理解,下面结合图8中的(a)和(b)说明不同第一神经网络块数对于构成的神经网络的计算复杂度的影响。For ease of understanding, the following describes the effect of different numbers of first neural network blocks on the computational complexity of the constructed neural network in conjunction with (a) and (b) in FIG8 .

示例性地,如图8中(a)所示,该神经网络包括4个相同的基于卷积神经网络的深度连接的第一神经网络块,该4个第一神经网络块中每个第一神经网络块的结构如图8中(a)的虚线框所示,即图8中(a)虚线框部分重复4次即可(重复部分图8中(a)未示出)。图8中(a)的虚线框中所示的为一种基于卷积的第一神经网络块。Exemplarily, as shown in FIG8(a), the neural network includes 4 identical first neural network blocks that are deeply connected based on convolutional neural networks. The structure of each of the 4 first neural network blocks is shown in the dotted box in FIG8(a), that is, the dotted box part in FIG8(a) is repeated 4 times (the repeated part is not shown in FIG8(a)). The dotted box in FIG8(a) shows a first neural network block based on convolution.

应理解,图8中(a)的虚线框中所示的第一神经网络块结构仅为示例,对本申请的保护范围不构成任何的限定。It should be understood that the first neural network block structure shown in the dotted box in Figure 8 (a) is only an example and does not constitute any limitation on the scope of protection of the present application.

如图8中(b)所示,该神经网络包括一个基于卷积神经网络的深度连接的第一神经网络块,该1个第一神经网络块的结构如图8中(a)的虚线框所示。As shown in FIG8(b), the neural network includes a first neural network block with a deep connection based on a convolutional neural network, and the structure of the first neural network block is shown in the dotted box in FIG8(a).

应理解,相比于图8中(a)所示的神经网络,图8中(b)所示的神经网络包括的第一神经网络块的数量(1个)少,则图8中(b)所示的神经网络具有计算复杂度低的特点,适用于前文中所示的基于AI的CSI反馈的方案二。It should be understood that compared with the neural network shown in Figure 8 (a), the neural network shown in Figure 8 (b) includes a smaller number of first neural network blocks (1), so the neural network shown in Figure 8 (b) has the characteristic of low computational complexity and is suitable for the second AI-based CSI feedback solution shown in the previous text.

相比于图8中(b)所示的神经网络，图8中(a)所示的神经网络包括的第一神经网络块的数量(4个)多，则图8中(a)所示的神经网络具有计算复杂度高、反馈比特少、系统吞吐高等特点，适用于前文中所示的基于AI的CSI反馈的方案一。Compared with the neural network shown in (b) of Figure 8, the neural network shown in (a) of Figure 8 includes more first neural network blocks (4). The neural network shown in (a) of Figure 8 therefore features higher computational complexity, fewer feedback bits, and higher system throughput, and is suitable for solution 1 of the AI-based CSI feedback described above.

作为示例而非限定,该实施例中第一通信装置可以基于以下几种可能的实现方式获取上述的第一神经网络块:As an example but not a limitation, in this embodiment, the first communication device may obtain the first neural network block based on the following possible implementations:

作为一种可能的实现方式，该实施例中第一通信装置可以自行确定第一神经网络块。如，第一通信装置基于训练数据训练得到第一神经网络块。其中，训练数据可以是第一通信装置和第二通信装置之间的历史通信数据，或者可以是管理设备提供的训练数据，或者还可以是第一通信装置生成的用于训练的数据，该实施例中对于如何获取用于训练的数据不做任何限定。As a possible implementation, in this embodiment, the first communication device may determine the first neural network block by itself. For example, the first communication device obtains the first neural network block through training based on training data. The training data may be historical communication data between the first communication device and the second communication device, training data provided by a management device, or data generated by the first communication device for training. This embodiment does not limit how the data used for training is obtained.

作为另一种可能的实现方式，该实施例中第一通信装置可以以预定义的方式确定第一神经网络块。如，协议预定义了至少一个神经网络块的结构。As another possible implementation, in this embodiment, the first communication device may determine the first neural network block in a predefined manner. For example, the protocol predefines the structure of at least one neural network block.

作为又一种可能的实现方式,该实施例中第一通信装置可以和第二通信装置协商确定第一神经网络块。As another possible implementation, in this embodiment, the first communication device may negotiate with the second communication device to determine the first neural network block.

进一步地,该实施例中第一通信装置获取第一神经网络块之后,可以为不同需求的第二通信装置提供相应的神经网络块的参数,以使得第二通信设备可以基于接收到的神经网络块的参数确定相应的神经网络,从而可以基于确定的神经网络进行信道状态信息压缩处理,进行上行信道状态信息反馈,则图6所示的方法流程可选的还包括:Further, in this embodiment, after the first communication device obtains the first neural network block, the parameters of the corresponding neural network block can be provided to the second communication device with different requirements, so that the second communication device can determine the corresponding neural network based on the parameters of the received neural network block, thereby performing channel state information compression processing based on the determined neural network and performing uplink channel state information feedback. The method flow shown in FIG6 may optionally also include:

S620,第一通信装置向第二通信装置发送N个第一神经网络块分别对应的参数,相应的,第二通信装置接收来自第一通信装置的N个第一神经网络块分别对应的参数。S620, the first communication device sends parameters corresponding to the N first neural network blocks to the second communication device, and correspondingly, the second communication device receives parameters corresponding to the N first neural network blocks from the first communication device.

具体地,N个第一神经网络块包含于第一神经网络,第一神经网络用于第二通信装置对CSI进行编码处理。其中,N个第一神经网络块分别对应的参数包括:第一神经网络中的第i个第一神经网络块对应的参数,i取值从1至N。应理解,N个第一神经网络块为第一神经网络块堆叠N次,相当于相同的网络结构的神经网络块在第一神经网络中重复了N次,而第一神经网络的不同层的第一神经网络块的参数(如,图4中所示的权重、偏置或激活函数等)不同,因此第一通信装置需要向第二通信装置发送第一神经网络所包含的N个第一神经网络块分别对应的参数。Specifically, N first neural network blocks are included in a first neural network, and the first neural network is used for the second communication device to encode CSI. The parameters corresponding to the N first neural network blocks include: the parameters corresponding to the i-th first neural network block in the first neural network, and i takes values from 1 to N. It should be understood that the N first neural network blocks are the first neural network blocks stacked N times, which is equivalent to the neural network blocks with the same network structure being repeated N times in the first neural network, and the parameters of the first neural network blocks of different layers of the first neural network (such as the weights, biases or activation functions shown in FIG. 4, etc.) are different, so the first communication device needs to send the parameters corresponding to the N first neural network blocks contained in the first neural network to the second communication device.

例如,第一神经网络包含2个第一神经网络块:第一神经网络块#1和第一神经网络块#2,该第一神经网络块#1和第一神经网络块#2的结构相同,参数不同,可以理解为第一神经网络块堆叠两次即可。For example, the first neural network includes two first neural network blocks: the first neural network block #1 and the first neural network block #2. The first neural network block #1 and the first neural network block #2 have the same structure but different parameters. It can be understood that the first neural network block is stacked twice.
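Stacking one block structure N times, with the i-th parameter set bound to the i-th block, can be sketched as follows (the affine block and parameter values are illustrative assumptions):

```python
def build_first_network(block_structure, params_per_block):
    # One shared structure, instantiated once per parameter set:
    # the i-th parameter set is bound to the i-th block (i = 1..N).
    return [block_structure(p) for p in params_per_block]

def run(network, x):
    # Depth connection: feed each block's output to the next block.
    for block in network:
        x = block(x)
    return x

# Same structure (a scalar affine map), two different parameter sets -> N = 2.
affine = lambda p: (lambda x: p["w"] * x + p["b"])
net = build_first_network(affine, [{"w": 2.0, "b": 1.0},
                                   {"w": 0.5, "b": 0.0}])
y = run(net, 4.0)  # block 2 applied to block 1's output
```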

示例性地,第一通信装置还可以向第二通信装置发送第一神经网络的输出层的参数。输出层指的是由最后一个神经网络块的输出得到最终反馈信息(如,前文所示的m或m’)所使用的神经网络,可以包括一层或多层FC或CNN等。Exemplarily, the first communication device may also send the parameters of the output layer of the first neural network to the second communication device. The output layer refers to the neural network used to obtain the final feedback information (such as m or m' shown above) from the output of the last neural network block, and may include one or more layers of FC or CNN, etc.

需要说明的是,上述的第一神经网络块也可以设计为可以充当输出层的神经网络块,在该情况下,第一通信装置可以无需向第二通信装置额外提供第一神经网络的输出层的参数;或者,当第一神经网络的输出层可以复用已知的神经网络的输出层时,第一通信装置也可以无需向第二通信装置额外提供第一神经网络的输出层的参数。It should be noted that the above-mentioned first neural network block can also be designed as a neural network block that can serve as an output layer. In this case, the first communication device may not need to provide the parameters of the output layer of the first neural network to the second communication device; or, when the output layer of the first neural network can reuse the output layer of a known neural network, the first communication device may not need to provide the parameters of the output layer of the first neural network to the second communication device.

应理解,该实施例中第一通信装置可以向一个或者多个第二通信装置提供组成神经网络所需的神经网络块的参数,为了便于描述该实施例中以第一通信装置向某个第二通信装置提供第一神经网络所包含的N个第一神经网络块分别对应的参数为例进行说明。It should be understood that in this embodiment, the first communication device can provide parameters of the neural network blocks required to form a neural network to one or more second communication devices. For the sake of convenience in description, this embodiment is described by taking an example in which the first communication device provides parameters corresponding to N first neural network blocks contained in the first neural network to a second communication device.

可选的,第一通信装置还可以向第二通信装置提供使用的量化方法、量化比特数、特征嵌入方法等参数。Optionally, the first communication device may also provide the second communication device with parameters such as the quantization method used, the number of quantization bits, and the feature embedding method.

示例性地,该实施例中第一通信装置可以通过以下几种可能的实现方式向第二通信装置提供上述的N个第一神经网络块分别对应的参数:Exemplarily, in this embodiment, the first communication device may provide the second communication device with the parameters corresponding to the N first neural network blocks mentioned above respectively through the following possible implementation methods:

方式一:第一通信装置响应于第二通信装置的请求向第二通信装置发送上述的N个第一神经网络块分别对应的参数。Method 1: The first communication device sends the parameters corresponding to the above-mentioned N first neural network blocks to the second communication device in response to the request of the second communication device.

在方式一所示的情况下,图6所示的方法流程还包括:In the case shown in the first mode, the method flow shown in FIG6 further includes:

S621,第二通信装置根据第二信息确定第一神经网络包括的第一神经网络块的数量为N。S621, the second communication device determines that the number of first neural network blocks included in the first neural network is N according to the second information.

具体地,第二通信装置可以根据第二信息确定所需的第一神经网络包括的第一神经网络块的数量为N,第二信息包括所述第二通信装置的能力和/或对CSI进行处理的需求。Specifically, the second communication device can determine that the number of first neural network blocks included in the required first neural network is N based on the second information, and the second information includes the capabilities of the second communication device and/or the need to process CSI.

例如,第二通信装置为满足一定要求的装置(如,第二通信装置为算力充足或性能高的设备或者第二通信装置需要对CSI进行高压缩,以降低反馈开销,提升系统有效吞吐),则第二通信装置可以根据第二信息向第一通信装置请求第一神经网络包括的N个第一神经网络块,N大于阈值。For example, if the second communication device is a device that meets certain requirements (for example, the second communication device is a device with sufficient computing power or high performance, or the second communication device needs to perform high compression on CSI to reduce feedback overhead and improve the effective throughput of the system), then the second communication device can request the first communication device for N first neural network blocks included in the first neural network based on the second information, where N is greater than a threshold.

还例如,第二通信装置为不满足一定要求的装置(如,第二通信装置为算力不充足、性能差、或电量不足的设备,或者第二通信装置需要对CSI进行计算复杂度低的压缩处理,以降低处理复杂度),则第二通信装置可以根据第二信息向第一通信装置请求第一神经网络包括的N个第一神经网络块,N小于阈值。For another example, if the second communication device is a device that does not meet certain requirements (for example, the second communication device is a device with insufficient computing power, poor performance, or insufficient power, or the second communication device needs to perform compression processing on the CSI with low computational complexity to reduce processing complexity), the second communication device can request the first communication device for the N first neural network blocks included in the first neural network based on the second information, where N is less than a threshold.

S622,第二通信装置向第一通信装置发送第一指示信息,相应的,第一通信装置接收来自第二通信装置的第一指示信息。 S622: The second communication device sends first indication information to the first communication device, and correspondingly, the first communication device receives the first indication information from the second communication device.

具体地,第一指示信息用于指示上述的N。表示第二通信装置所需的第一神经网络块的数量为N。Specifically, the first indication information is used to indicate the above-mentioned N, that is, it indicates that the number of first neural network blocks required by the second communication device is N.

为了便于理解,下面结合具体的示例说明在方式一所述的情况下,第二通信装置如何请求所需的神经网络块的参数:For ease of understanding, the following describes, with reference to a specific example, how the second communication device requests the parameters of the required neural network blocks in the case of Method 1:

示例一:假设第二通信装置所需的神经网络(或者说编码器)为最多包括4个神经网络块的神经网络结构。Example 1: Assume that the neural network (or encoder) required by the second communication device is a neural network structure including a maximum of 4 neural network blocks.

当第二通信装置算力较充足,根据第二信息请求包含3个神经网络块的模型1(下文称为模型1);当第二通信装置算力不足(如,第二通信装置为IoT终端或者第二通信装置电量不足),第二通信装置根据第二信息请求包含1个神经网络块的模型2(下文称为模型2)。When the computing power of the second communication device is sufficient, it requests, according to the second information, a model containing three neural network blocks (hereinafter referred to as model 1); when its computing power is insufficient (for example, the second communication device is an IoT terminal or is low on power), it requests, according to the second information, a model containing one neural network block (hereinafter referred to as model 2).

可选地,第二通信装置可以用4个比特用于指示是否需要4个神经网络块的参数。因此第二通信装置可以使用0111请求模型1,使用0001请求模型2。第一通信装置根据收到请求向第二通信装置下发对应神经网络块的参数。Optionally, the second communication device may use 4 bits, each indicating whether the parameters of one of the 4 neural network blocks are needed. Therefore, the second communication device may use 0111 to request model 1 and 0001 to request model 2. The first communication device delivers the parameters of the corresponding neural network blocks to the second communication device according to the received request.
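The 4-bit request bitmap in this example can be sketched in code. This is an illustrative sketch, not part of the patent text; the function name and the convention that bit i (counted from the least-significant end) corresponds to neural network block i are assumptions made for illustration only.

```python
# Illustrative sketch (assumed convention): bit i of the request bitmap,
# counted from the LSB, indicates whether the parameters of neural network
# block i are needed.

def encode_request(needed_blocks, total_blocks=4):
    """Return the request bitmap as a bit string of length total_blocks."""
    bits = 0
    for i in needed_blocks:
        bits |= 1 << i
    return format(bits, f"0{total_blocks}b")

# Model 1 contains three blocks (blocks 0-2); model 2 contains one block (block 0).
print(encode_request({0, 1, 2}))  # -> 0111, requests model 1
print(encode_request({0}))        # -> 0001, requests model 2
```

Under this assumed convention, the bit strings 0111 and 0001 of the example above fall out directly.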

示例性地,第一通信装置提供的参数可以如图9所示。应理解,图9仅为示例,第一通信装置提供的参数还可以不包括输出层的参数,这里不再一一举例说明。Exemplarily, the parameters provided by the first communication device may be as shown in Figure 9. It should be understood that Figure 9 is only an example, and the parameters provided by the first communication device may not include the parameters of the output layer, which will not be described one by one here.

方式二:第一通信装置广播多个第一神经网络块分别对应的参数。Method 2: The first communication device broadcasts parameters corresponding to multiple first neural network blocks.

在方式二所示的情况下,图6所示的方法流程还包括:In the case of Method 2, the method procedure shown in FIG. 6 further includes:

S623,第一通信装置确定第一神经网络块个数的最大值P。S623, the first communication device determines the maximum value P of the number of first neural network blocks.

具体地,第一通信装置确定神经网络能够包括的第一神经网络块个数的最大值P,P为大于或者等于N的正整数。Specifically, the first communication device determines the maximum value P of the number of first neural network blocks that the neural network can include, where P is a positive integer greater than or equal to N.

在该方式二所示的情况下,上述的第一通信装置向第二通信装置发送N个第一神经网络块分别对应的参数,包括:第一通信装置向第二通信装置发送P个第一神经网络块分别对应的参数。In the case of Method 2, the foregoing sending, by the first communication device, of the parameters respectively corresponding to the N first neural network blocks to the second communication device includes: the first communication device sends, to the second communication device, the parameters respectively corresponding to the P first neural network blocks.

第二通信装置收到第一通信装置广播的参数后,第二通信装置可以根据自身能力或需求选择不同参数,则在方式二所示的情况下,图6所示的方法流程还包括:After receiving the parameters broadcast by the first communication device, the second communication device may select different parameters according to its own capability or requirement. In the case of Method 2, the method procedure shown in FIG. 6 further includes:

S624,第二通信装置根据第二信息确定N个第一神经网络块。S624, the second communication device determines N first neural network blocks based on the second information.

例如,第二通信装置为满足一定要求的装置(如,第二通信装置为算力充足或性能高的设备或者第二通信装置需要对CSI进行高压缩,以降低反馈开销,提升系统有效吞吐),则第二通信装置可以根据第二信息从P个第一神经网络块中选择N个第一神经网络块,N大于阈值。For example, if the second communication device is a device that meets certain requirements (for example, the second communication device is a device with sufficient computing power or high performance, or the second communication device needs to perform high compression on CSI to reduce feedback overhead and improve the effective throughput of the system), then the second communication device can select N first neural network blocks from P first neural network blocks based on the second information, where N is greater than a threshold.

还例如,第二通信装置为不满足一定要求的装置(如,第二通信装置为算力不充足、性能差、或电量不足的设备,或者第二通信装置需要对CSI进行计算复杂度低的压缩处理,以降低处理复杂度),则第二通信装置可以根据第二信息从P个第一神经网络块中选择N个第一神经网络块,N小于阈值。For example, if the second communication device is a device that does not meet certain requirements (for example, the second communication device is a device with insufficient computing power, poor performance, or insufficient power, or the second communication device needs to perform compression processing on the CSI with low computational complexity to reduce processing complexity), the second communication device can select N first neural network blocks from P first neural network blocks based on the second information, where N is less than a threshold.

进一步地,当第二通信装置所需的神经网络需要更新时,第二通信装置可以通过第二指示信息请求更新后的神经网络包含的第一神经网络块对应的参数。其中,第二通信装置所需的神经网络需要更新可以是第二通信装置的能力或者需求发生改变,例如,第二通信装置开始电量不足,使用上述的模型2;电量恢复后,希望使用上述的模型1;还例如,第二通信装置开始需求为对CSI进行计算复杂度低的压缩处理,以降低处理复杂度,一段时间后需求更新为对CSI进行高压缩,以降低反馈开销。Furthermore, when the neural network required by the second communication device needs to be updated, the second communication device may request, through second indication information, the parameters corresponding to the first neural network blocks included in the updated neural network. The neural network required by the second communication device may need to be updated because the capability or requirement of the second communication device changes. For example, the second communication device is initially low on power and uses the above-mentioned model 2; after its power is restored, it wishes to use the above-mentioned model 1. For another example, the second communication device initially requires low-computational-complexity compression of the CSI to reduce processing complexity, and after a period of time the requirement changes to high compression of the CSI to reduce feedback overhead.

应理解,上述所示的第二通信装置确定需要更新神经网络的条件仅为示例,该实施例中对于第二通信装置确定需要更新神经网络的原因不做任何限定。It should be understood that the conditions for the second communication device to determine that the neural network needs to be updated are only examples, and this embodiment does not limit the reasons why the second communication device determines that the neural network needs to be updated.

作为一种可能的实现方式,当第二通信装置所需的神经网络包括的第一神经网络块个数发生变化时,第二通信装置可以向第一通信装置请求新增的第一神经网络块的参数。As a possible implementation manner, when the number of first neural network blocks included in the neural network required by the second communication device changes, the second communication device may request parameters of the newly added first neural network blocks from the first communication device.

例如,第二通信装置已知第一神经网络包含的N个第一神经网络块分别对应的参数,当第二通信装置所需的神经网络由第一神经网络更新为第二神经网络,且第二神经网络包含的第一神经网络块为Q个时,Q-N=M。则第二通信装置可以向第一通信装置发送第二指示信息,该第二指示信息指示M,请求新增的M个第一神经网络块的参数。For example, the second communication device already knows the parameters respectively corresponding to the N first neural network blocks included in the first neural network. When the neural network required by the second communication device is updated from the first neural network to a second neural network, and the second neural network includes Q first neural network blocks, Q - N = M. The second communication device may then send second indication information to the first communication device, where the second indication information indicates M, to request the parameters of the M newly added first neural network blocks.

作为一种可能的实现方式,当第二通信装置所需的神经网络包括的第一神经网络块个数发生变化时,第二通信装置可以向第一通信装置请求更新后的神经网络包括的第一神经网络块的参数。As a possible implementation, when the number of first neural network blocks included in the neural network required by the second communication device changes, the second communication device may request, from the first communication device, the parameters of the first neural network blocks included in the updated neural network.

例如,第二通信装置已知第一神经网络包含的N个第一神经网络块分别对应的参数,当第二通信装置所需的神经网络由第一神经网络更新为第二神经网络,且第二神经网络包含的第一神经网络块为Q个时,第二通信装置可以向第一通信装置请求Q个第一神经网络块的参数。For example, the second communication device knows the parameters corresponding to N first neural network blocks included in the first neural network. When the neural network required by the second communication device is updated from the first neural network to the second neural network, and the second neural network includes Q first neural network blocks, the second communication device can request the parameters of the Q first neural network blocks from the first communication device.

为了便于理解,下面结合具体的示例说明第二通信装置如何更新神经网络。 For ease of understanding, the following describes how the second communication device updates the neural network with reference to specific examples.

实例二:第二通信装置开始电量不足,使用上述的模型2;电量恢复后,希望使用上述的模型1。Example 2: The second communication device is initially low on power and uses the above-mentioned model 2; after its power is restored, it wishes to use the above-mentioned model 1.

第二通信装置可以请求从模型2(1个第一神经网络块,如0001)更新为模型1(3个第一神经网络块,如0111)所需要的额外第一神经网络块的参数,即2个第一神经网络块(如,0110),有效减少神经网络更新的传输开销。The second communication device may request the parameters of the additional first neural network blocks required to update from model 2 (one first neural network block, e.g., 0001) to model 1 (three first neural network blocks, e.g., 0111), that is, the parameters of two first neural network blocks (e.g., indicated by 0110), which effectively reduces the transmission overhead of the neural network update.
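The incremental request in Example 2 can be derived from the two model bitmaps. A minimal sketch follows, reusing the assumed 4-bit bitmap convention of Example 1; the bitwise derivation shown here is an illustration, not a procedure defined in the text.

```python
# Illustrative sketch: compute the bitmap of newly required blocks when the
# model is updated, so that only the additional parameters are requested.

def delta_request(old_bits: str, new_bits: str) -> str:
    """Return the bits set in new_bits but not in old_bits (newly added blocks)."""
    old, new = int(old_bits, 2), int(new_bits, 2)
    return format(new & ~old, f"0{len(new_bits)}b")

# Updating from model 2 (0001) to model 1 (0111): only the two additional
# blocks are requested, reducing the transmission overhead of the update.
print(delta_request("0001", "0111"))  # -> 0110
```

With this derivation, the second communication device avoids re-requesting the parameters of the block it already holds.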

示例性地,第一通信装置提供的参数可以如图10所示。应理解,图10仅为示例,在第二通信装置所需的神经网络更新的情况下,第一通信装置提供的参数还可以为更新后的神经网络包括的第一神经网络块的参数,这里不再一一举例说明。Exemplarily, the parameters provided by the first communication device may be as shown in Figure 10. It should be understood that Figure 10 is only an example, and in the case where the neural network required by the second communication device is updated, the parameters provided by the first communication device may also be the parameters of the first neural network block included in the updated neural network, which will not be illustrated one by one here.

该实施例中第二通信装置获取N个第一神经网络块分别对应的参数之后,可以确定第一神经网络,并且基于该第一神经网络进行CSI处理,图6所示的方法流程还包括:In this embodiment, after obtaining the parameters respectively corresponding to the N first neural network blocks, the second communication device may determine the first neural network and perform CSI processing based on the first neural network. The method procedure shown in FIG. 6 further includes:

S630,第二通信装置进行CSI处理。S630: The second communication device performs CSI processing.

具体地,该实施例中第二通信装置是基于确定的第一神经网络对CSI进行处理的。例如,第二通信装置基于第一神经网络对预编码矩阵V或其他信道状态信息进行压缩处理。Specifically, in this embodiment, the second communication device processes the CSI based on the determined first neural network. For example, the second communication device compresses the precoding matrix V or other channel state information based on the first neural network.

应理解,该实施例中第二通信装置进行信道处理的前提是:第二通信装置接收到来自第一通信装置的测量信息(如,前文所示的NDP,或者为其他测量参考信号灯),第二通信装置可以基于该测量信息进行信道估计后,得到当前信道H,对H进行SVD得到预编码矩阵V。该实施例中主要涉及基于第一神经网络对V进行压缩处理,而对于其他流程不做限定。例如,第二通信装置获取第一神经网络包含的第一神经网路块的参数的流程和第二通信装置接收到测量信息的流程为独立的流程。It should be understood that, in this embodiment, a prerequisite for the second communication device to perform channel processing is that the second communication device receives measurement information from the first communication device (for example, the NDP described above, or another measurement reference signal). The second communication device may perform channel estimation based on the measurement information to obtain the current channel H, and perform SVD on H to obtain the precoding matrix V. This embodiment mainly involves compressing V based on the first neural network, and no limitation is imposed on the other procedures. For example, the procedure in which the second communication device obtains the parameters of the first neural network blocks included in the first neural network and the procedure in which the second communication device receives the measurement information are independent procedures.
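The processing chain described above (channel estimation, SVD, then compression of V) can be sketched as follows. This is an illustrative sketch only: the channel matrix is randomly generated in place of an NDP-based estimate, and the antenna and stream dimensions are arbitrary assumptions.

```python
# Illustrative sketch of the CSI pre-processing described above: obtain the
# channel H (here random, standing in for an NDP-based estimate), apply SVD,
# and take the right-singular vectors as the precoding matrix V that the
# first neural network would then compress.
import numpy as np

rng = np.random.default_rng(0)
n_rx, n_tx, n_streams = 2, 4, 2  # assumed antenna/stream counts

H = rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))

# H = U * diag(S) * Vh; the precoding matrix V is formed from the
# right-singular vectors, truncated to the number of spatial streams.
U, S, Vh = np.linalg.svd(H)
V = Vh.conj().T[:, :n_streams]

print(V.shape)  # -> (4, 2): this V, rather than the full H, is what the
                #    first neural network compresses
```

The columns of V are orthonormal, which is what makes it usable directly as a precoder before compression.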

当上述的第一神经网络包含的第一神经网络块数量多于某个阈值(如,大于4个第一神经网络块)时,第一神经网络计算复杂度高,第二通信装置可以基于该第一神经网络提高CSI压缩比,减少反馈开销,提高系统吞吐。When the number of first neural network blocks included in the above-mentioned first neural network is greater than a certain threshold (for example, greater than 4 first neural network blocks), the computational complexity of the first neural network is high, and the second communication device can improve the CSI compression ratio based on the first neural network, reduce feedback overhead, and improve system throughput.

当上述的第一神经网络包含的第一神经网络块数量少于某个阈值(如,小于2个第一神经网络块)时,第一神经网络计算复杂度低,第二通信装置可以基于该第一神经网络降低CSI反馈过程中的计算复杂度。When the number of first neural network blocks included in the above-mentioned first neural network is less than a certain threshold (e.g., less than 2 first neural network blocks), the computational complexity of the first neural network is low, and the second communication device can reduce the computational complexity in the CSI feedback process based on the first neural network.

S640,第二通信装置向第一通信装置发送第一信息,相应的,第一通信装置接收来自第二通信装置的第一信息。S640: The second communication device sends first information to the first communication device, and correspondingly, the first communication device receives the first information from the second communication device.

具体地,第一信息包括第三指示信息和第一向量,第三指示信息用于指示N,第一向量为CSI经过第一神经网络编码得到的结果。应理解,第三指示信息用于指示第二通信装置当前反馈的第一向量是基于哪种结构的神经网络处理得到的,从而有助于第一通信装置可以选择合适的神经网络解析该第一向量得到准确率高的信道信息。Specifically, the first information includes third indication information and a first vector, the third indication information is used to indicate N, and the first vector is the result of CSI being encoded by the first neural network. It should be understood that the third indication information is used to indicate the structure of the neural network based on which the first vector currently fed back by the second communication device is processed, thereby helping the first communication device to select a suitable neural network to parse the first vector to obtain high-accuracy channel information.
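As a concrete illustration of the first information, the indicator of N can be carried alongside the encoded vector. The container below is purely hypothetical: the patent does not define a concrete field layout here, and all names in this sketch are assumptions.

```python
# Hypothetical container for the "first information": the third indication
# information (N, identifying which neural network structure produced the
# feedback) together with the first vector (the CSI encoded by the first
# neural network). Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class FirstInformation:
    num_blocks: int                                   # third indication information: N
    first_vector: list = field(default_factory=list)  # encoded CSI

fb = FirstInformation(num_blocks=3, first_vector=[0.12, -0.98, 0.45])
print(fb.num_blocks, len(fb.first_vector))  # -> 3 3
```

On receipt, the first communication device would read `num_blocks` to pick the matching decoder structure before parsing `first_vector`.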

可选地,第三指示信息可以携带在多输入多输出(multiple input multiple output,MIMO)控制域(control field),也可以携带在MIMO压缩波束赋形报告(compressed beamforming report,CBR)域(field),如图10所示。Optionally, the third indication information can be carried in a multiple input multiple output (MIMO) control field, or in a MIMO compressed beamforming report (CBR) field, as shown in Figure 10.

图6所示的通信方法中,第一通信装置能够获取第一神经网络块,并且不同的神经网络包含的第一神经网络块的数量不同的情况下,不同的神经网络实现的功能不同。例如,神经网络包含的第一神经网络块的数量越多,该神经网络的计算复杂度越高,且经由该神经网络压缩处理的CSI的压缩比高,反馈开销减小,系统吞吐高。进一步地,第一通信装置可以向第二通信装置提供N个第一神经网络块分别对应的参数,从而使得第二通信装置能够基于接收到的N个第一神经网络块分别对应的参数确定第一神经网络,在后续的CSI反馈过程中可以基于第一神经网络对CSI进行编码处理,实现基于AI的CSI反馈。In the communication method shown in FIG. 6 , the first communication device can obtain the first neural network block, and neural networks containing different numbers of first neural network blocks implement different functions. For example, the more first neural network blocks a neural network contains, the higher its computational complexity, the higher the compression ratio of the CSI compressed by it, the lower the feedback overhead, and the higher the system throughput. Furthermore, the first communication device may provide the second communication device with the parameters respectively corresponding to the N first neural network blocks, so that the second communication device can determine the first neural network based on the received parameters respectively corresponding to the N first neural network blocks, and can encode the CSI based on the first neural network in the subsequent CSI feedback process, implementing AI-based CSI feedback.

而且该技术方案中可以通过一种结构的神经网络块得到不同功能的神经网络,无需针对不同的功能训练得到不同的神经网络,降低了管理开销以及神经网络的存储开销。Moreover, in this technical solution, neural networks with different functions can be obtained from neural network blocks of a single structure, without training a separate neural network for each function, thereby reducing management overhead and the storage overhead of the neural networks.

应理解,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。It should be understood that the sequence numbers of the above processes do not mean the order of execution. The execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.

还应理解,在本申请的各个实施例中,如果没有特殊说明以及逻辑冲突,不同的实施例之间的术语和/或描述具有一致性、且可以相互引用,不同的实施例中的技术特征根据其内在的逻辑关系可以组合形成新的实施例。It should also be understood that in the various embodiments of the present application, unless otherwise specified or there is a logical conflict, the terms and/or descriptions between different embodiments are consistent and can be referenced to each other, and the technical features in different embodiments can be combined to form new embodiments according to their internal logical relationships.

还应理解,在上述一些实施例中,主要以现有的网络架构中的设备为例进行了示例性说明,应理解,对于设备的具体形式本申请实施例不作限定。例如,在未来可以实现同样功能的设备都适用于本申请实施例。 It should also be understood that in some of the above embodiments, the devices in the existing network architecture are mainly used as examples for exemplary description, and it should be understood that the embodiments of the present application do not limit the specific form of the devices. For example, devices that can achieve the same function in the future are applicable to the embodiments of the present application.

可以理解的是,上述各个方法实施例中,由设备(如第一通信装置和第二通信装置)实现的方法和操作,也可以由可用于设备的部件(例如芯片或者电路)实现。It can be understood that in the above-mentioned various method embodiments, the methods and operations implemented by the device (such as the first communication device and the second communication device) can also be implemented by components that can be used in the device (such as chips or circuits).

还可以理解,本申请的各实施例中的一些可选的特征,在某些场景下,可以不依赖于其他特征,也可以在某些场景下,与其他特征进行结合,不作限定。It can also be understood that some optional features in the embodiments of the present application may not depend on other features in some scenarios, or may be combined with other features in some scenarios, without limitation.

以上,结合图6详细说明了本申请实施例提供的通信方法。上述通信方法主要从第一通信装置和第二通信装置的角度进行了介绍。可以理解的是,第一通信装置和第二通信装置为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。The communication method provided by the embodiment of the present application is described in detail above in conjunction with FIG6. The above communication method is mainly introduced from the perspective of the first communication device and the second communication device. It can be understood that in order to implement the above functions, the first communication device and the second communication device include hardware structures and/or software modules corresponding to the execution of each function.

本领域技术人员应该可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。Those skilled in the art should be aware that, in combination with the units and algorithm steps of each example described in the embodiments disclosed herein, the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed in the form of hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Professional and technical personnel can use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of this application.

以下,结合图11至图13详细说明本申请实施例提供的通信装置。应理解,装置实施例的描述与方法实施例的描述相互对应,因此,未详细描述的内容可以参见上文方法实施例,为了简洁,部分内容不再赘述。The communication device provided in the embodiment of the present application is described in detail below in conjunction with Figures 11 to 13. It should be understood that the description of the device embodiment corresponds to the description of the method embodiment, so the content not described in detail can refer to the above method embodiment, and for the sake of brevity, some content will not be repeated.

本申请实施例可以根据上述方法示例对发送端设备或者接收端设备进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。下面以采用对应各个功能划分各个功能模块为例进行说明。The embodiment of the present application can divide the functional modules of the transmitting end device or the receiving end device according to the above method example. For example, each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module. The above integrated module can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of modules in the embodiment of the present application is schematic and is only a logical functional division. There may be other division methods in actual implementation. The following is an example of dividing each functional module corresponding to each function.

图11是本申请实施例提供的通信装置10的示意性框图。该装置10包括收发模块11和处理模块12。收发模块11可以实现相应的通信功能,处理模块12用于进行数据处理,或者说该收发模块11用于执行接收和发送相关的操作,该处理模块12用于执行除了接收和发送以外的其他操作。收发模块11还可以称为通信接口或通信单元。FIG11 is a schematic block diagram of a communication device 10 provided in an embodiment of the present application. The device 10 includes a transceiver module 11 and a processing module 12. The transceiver module 11 can implement corresponding communication functions, and the processing module 12 is used to perform data processing, or in other words, the transceiver module 11 is used to perform operations related to receiving and sending, and the processing module 12 is used to perform other operations besides receiving and sending. The transceiver module 11 can also be called a communication interface or a communication unit.

可选地,该装置10还可以包括存储模块13,该存储模块13可以用于存储指令和/或数据,处理模块12可以读取存储模块中的指令和/或数据,以使得装置实现前述各个方法实施例中设备的动作。Optionally, the device 10 may further include a storage module 13, which may be used to store instructions and/or data. The processing module 12 may read the instructions and/or data in the storage module so that the device implements the actions of the devices in the aforementioned method embodiments.

在一种设计中,该装置10可对应于上文方法实施例中的第一通信装置,或者是第一通信装置的组成部件(如芯片)。In one design, the device 10 may correspond to the first communication device in the above method embodiment, or a component (such as a chip) of the first communication device.

该装置10可实现对应于上文方法实施例中的第一通信装置执行的步骤或者流程,其中,收发模块11可用于执行上文方法实施例中第一通信装置的收发相关的操作,处理模块12可用于执行上文方法实施例中第一通信装置的处理相关的操作。The device 10 can implement the steps or processes executed by the first communication device in the above method embodiment, wherein the transceiver module 11 can be used to execute the transceiver related operations of the first communication device in the above method embodiment, and the processing module 12 can be used to execute the processing related operations of the first communication device in the above method embodiment.

在一种可能的实现方式,处理模块12,用于获取第一神经网络块,所述第一神经网络块用于信道信息反馈,包含不同数量的所述第一神经网络块的神经网络的功能不同。收发模块11,用于向第二通信装置发送N个所述第一神经网络块分别对应的参数,所述N个第一神经网络块包含于第一神经网络,所述第一神经网络用于信道信息的编码处理,其中,N为正整数。In a possible implementation, the processing module 12 is used to obtain a first neural network block, the first neural network block is used for channel information feedback, and the functions of neural networks containing different numbers of the first neural network blocks are different. The transceiver module 11 is used to send parameters corresponding to N first neural network blocks to the second communication device, the N first neural network blocks are included in the first neural network, and the first neural network is used for encoding processing of channel information, wherein N is a positive integer.

当该装置10用于执行图6中的方法时,收发模块11可用于执行方法中的收发信息的步骤,如步骤S622、S620和S640,处理模块12可用于执行方法中的处理步骤,如步骤S610和S623。When the device 10 is used to execute the method in Figure 6, the transceiver module 11 can be used to execute the steps of sending and receiving information in the method, such as steps S622, S620 and S640, and the processing module 12 can be used to execute the processing steps in the method, such as steps S610 and S623.

应理解,各单元执行上述相应步骤的具体过程在上述方法实施例中已经详细说明,为了简洁,在此不再赘述。It should be understood that the specific process of each unit executing the above corresponding steps has been described in detail in the above method embodiment, and for the sake of brevity, it will not be repeated here.

在另一种设计中,该装置10可对应于上文方法实施例中的第二通信装置,或者是第二通信装置的组成部件(如芯片)。In another design, the device 10 may correspond to the second communication device in the above method embodiment, or be a component (such as a chip) of the second communication device.

该装置10可实现对应于上文方法实施例中的第二通信装置执行的步骤或者流程,其中,收发模块11可用于执行上文方法实施例中第二通信装置的收发相关的操作,处理模块12可用于执行上文方法实施例中第二通信装置的处理相关的操作。The device 10 can implement steps or processes corresponding to those performed by the second communication device in the above method embodiment, wherein the transceiver module 11 can be used to perform transceiver-related operations of the second communication device in the above method embodiment, and the processing module 12 can be used to perform processing-related operations of the second communication device in the above method embodiment.

在一种可能的实现方式,收发模块11,用于获取N个第一神经网络块分别对应的参数,所述第一神经网络块用于信道信息反馈,包含不同数量的所述第一神经网络块的神经网络的功能不同。处理模块12,用于基于所述N个所述第一神经网络块确定第一神经网络,所述第一神经网络用于所述第二通信装置对信道信息进行编码处理,其中,N为正整数。In a possible implementation, the transceiver module 11 is used to obtain parameters corresponding to N first neural network blocks, respectively, where the first neural network blocks are used for channel information feedback, and the functions of neural networks containing different numbers of the first neural network blocks are different. The processing module 12 is used to determine a first neural network based on the N first neural network blocks, where the first neural network is used for the second communication device to encode the channel information, wherein N is a positive integer.

当该装置10用于执行图6中的方法时,收发模块11可用于执行方法中的收发信息的步骤,如步骤S622、S620和S640,处理模块12可用于执行方法中的处理步骤,如步骤S621、S624和S630。When the device 10 is used to execute the method in FIG. 6, the transceiver module 11 may be used to execute the steps of sending and receiving information in the method, such as steps S622, S620, and S640; the processing module 12 may be used to execute the processing steps in the method, such as steps S621, S624, and S630.

应理解,各单元执行上述相应步骤的具体过程在上述方法实施例中已经详细说明,为了简洁,在此不再赘述。It should be understood that the specific process of each unit executing the above corresponding steps has been described in detail in the above method embodiment, and for the sake of brevity, it will not be repeated here.

还应理解,这里的装置10以功能模块的形式体现。这里的术语“模块”可以指应用特有集成电路(application specific integrated circuit,ASIC)、电子电路、用于执行一个或多个软件或固件程序的处理器(例如共享处理器、专有处理器或组处理器等)和存储器、合并逻辑电路和/或其它支持所描述的功能的合适组件。在一个可选例子中,本领域技术人员可以理解,装置10可以具体为上述实施例中的移动管理网元,可以用于执行上述各方法实施例中与移动管理网元对应的各个流程和/或步骤;或者,装置10可以具体为上述实施例中的终端设备,可以用于执行上述各方法实施例中与终端设备对应的各个流程和/或步骤,为避免重复,在此不再赘述。It should also be understood that the device 10 here is embodied in the form of a functional module. The term "module" here may refer to an application specific integrated circuit (ASIC), an electronic circuit, a processor (such as a shared processor, a dedicated processor or a group processor, etc.) and a memory for executing one or more software or firmware programs, a merged logic circuit and/or other suitable components that support the described functions. In an optional example, those skilled in the art can understand that the device 10 can be specifically the mobile management network element in the above-mentioned embodiment, and can be used to execute the various processes and/or steps corresponding to the mobile management network element in the above-mentioned method embodiments; or, the device 10 can be specifically the terminal device in the above-mentioned embodiment, and can be used to execute the various processes and/or steps corresponding to the terminal device in the above-mentioned method embodiments. To avoid repetition, it will not be repeated here.

上述各个方案的装置10具有实现上述方法中的设备(如第一通信装置)所执行的相应步骤的功能。该功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。该硬件或软件包括一个或多个与上述功能相对应的模块;例如收发模块可以由收发机替代(例如,收发模块中的发送单元可以由发送机替代,收发模块中的接收单元可以由接收机替代),其它单元,如处理模块等可以由处理器替代,分别执行各个方法实施例中的收发操作以及相关的处理操作。The device 10 in each of the above solutions has the function of implementing the corresponding steps performed by a device (such as the first communication device) in the above method. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions; for example, the transceiver module may be replaced by a transceiver (for example, the sending unit in the transceiver module may be replaced by a transmitter, and the receiving unit in the transceiver module may be replaced by a receiver), and other units, such as the processing module, may be replaced by a processor to respectively perform the transceiver operations and related processing operations in each method embodiment.

此外,上述收发模块11还可以是收发电路(例如可以包括接收电路和发送电路),处理模块可以是处理电路。In addition, the transceiver module 11 may also be a transceiver circuit (for example, may include a receiving circuit and a sending circuit), and the processing module may be a processing circuit.

图12是本申请实施例提供另一种通信装置20的示意图。该装置20包括处理器21,处理器21用于执行存储器22存储的计算机程序或指令,或读取存储器22存储的数据/信令,以执行上文各方法实施例中的方法。可选地,处理器21为一个或多个。FIG12 is a schematic diagram of another communication device 20 provided in an embodiment of the present application. The device 20 includes a processor 21, and the processor 21 is used to execute a computer program or instruction stored in a memory 22, or read data/signaling stored in the memory 22 to execute the method in each method embodiment above. Optionally, there are one or more processors 21.

可选地,如图12所示,该装置20还包括存储器22,存储器22用于存储计算机程序或指令和/或数据。该存储器22可以与处理器21集成在一起,或者也可以分离设置。可选地,存储器22为一个或多个。Optionally, as shown in FIG. 12 , the device 20 further includes a memory 22, and the memory 22 is used to store computer programs or instructions and/or data. The memory 22 may be integrated with the processor 21, or may be disposed separately. Optionally, there are one or more memories 22.

可选地,如图12所示,该装置20还包括收发器23,收发器23用于信号的接收和/或发送。例如,处理器21用于控制收发器23进行信号的接收和/或发送。Optionally, as shown in Fig. 12, the device 20 further includes a transceiver 23, and the transceiver 23 is used for receiving and/or sending signals. For example, the processor 21 is used to control the transceiver 23 to receive and/or send signals.

作为一种方案,该装置20用于实现上文各个方法实施例中由第一通信装置或第二通信装置执行的操作。As a solution, the device 20 is used to implement the operations performed by the first communication device or the second communication device in the above various method embodiments.

应理解,本申请实施例中提及的处理器可以是中央处理单元(central processing unit,CPU),还可以是其他通用处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现成可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。It should be understood that the processor mentioned in the embodiments of the present application may be a central processing unit (CPU), or other general-purpose processors, digital signal processors (DSP), application-specific integrated circuits (ASIC), field programmable gate arrays (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc. A general-purpose processor may be a microprocessor or the processor may also be any conventional processor, etc.

还应理解,本申请实施例中提及的存储器可以是易失性存储器和/或非易失性存储器。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM)。例如,RAM可以用作外部高速缓存。作为示例而非限定,RAM包括如下多种形式:静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。It should also be understood that the memory mentioned in the embodiments of the present application may be a volatile memory and/or a non-volatile memory. Among them, the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM). For example, a RAM may be used as an external cache. By way of example and not limitation, RAM includes the following forms: static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct rambus RAM (DR RAM).

需要说明的是,当处理器为通用处理器、DSP、ASIC、FPGA或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件时,存储器(存储模块)可以集成在处理器中。It should be noted that when the processor is a general-purpose processor, DSP, ASIC, FPGA or other programmable logic device, discrete gate or transistor logic device, discrete hardware component, the memory (storage module) can be integrated into the processor.

还需要说明的是,本文描述的存储器旨在包括但不限于这些和任意其它适合类型的存储器。It should also be noted that the memory described herein is intended to include, but is not limited to, these and any other suitable types of memory.

图13是本申请实施例提供的一种芯片系统30的示意图。该芯片系统30（或者也可以称为处理系统）包括逻辑电路31以及输入/输出接口（input/output interface）32。FIG. 13 is a schematic diagram of a chip system 30 provided in an embodiment of the present application. The chip system 30 (which may also be referred to as a processing system) includes a logic circuit 31 and an input/output interface 32.

其中，逻辑电路31可以为芯片系统30中的处理电路。逻辑电路31可以耦合连接存储单元，调用存储单元中的指令，使得芯片系统30可以实现本申请各实施例的方法和功能。输入/输出接口32，可以为芯片系统30中的输入输出电路，将芯片系统30处理好的信息输出，或将待处理的数据或信令信息输入芯片系统30进行处理。The logic circuit 31 may be a processing circuit in the chip system 30. The logic circuit 31 may be coupled to a storage unit and invoke instructions in the storage unit, so that the chip system 30 can implement the methods and functions of the embodiments of the present application. The input/output interface 32 may be an input/output circuit in the chip system 30, which outputs information processed by the chip system 30, or inputs data or signaling information to be processed into the chip system 30 for processing.

作为一种方案,该芯片系统30用于实现上文各个方法实施例中由第一通信装置或第二通信装置执行的操作。As a solution, the chip system 30 is used to implement the operations performed by the first communication device or the second communication device in the above method embodiments.

例如,逻辑电路31用于实现上文方法实施例中由第一通信装置或第二通信装置执行的处理相关的操作;输入/输出接口32用于实现上文方法实施例中由终端设备执行的发送和/或接收相关的操作。For example, the logic circuit 31 is used to implement the processing-related operations performed by the first communication device or the second communication device in the above method embodiment; the input/output interface 32 is used to implement the sending and/or receiving-related operations performed by the terminal device in the above method embodiment.

本申请实施例还提供一种计算机可读存储介质,其上存储有用于实现上述各方法实施例中由设备执行的方法的计算机指令。An embodiment of the present application also provides a computer-readable storage medium on which computer instructions for implementing the methods executed by the device in the above-mentioned method embodiments are stored.

例如,该计算机程序被计算机执行时,使得该计算机可以实现上述方法各实施例中由第一通信装置或第二通信装置执行的方法。For example, when the computer program is executed by a computer, the computer can implement the method performed by the first communication device or the second communication device in each embodiment of the above method.

本申请实施例还提供一种计算机程序产品,包含指令,该指令被计算机执行时以实现上述各方法实施例中由第一通信装置或第二通信装置执行的方法。An embodiment of the present application further provides a computer program product, comprising instructions, which, when executed by a computer, implement the methods performed by the first communication device or the second communication device in the above-mentioned method embodiments.

本申请实施例还提供了一种通信系统,包括前述的第一通信装置和第二通信装置。An embodiment of the present application also provides a communication system, including the aforementioned first communication device and second communication device.

上述提供的任一种装置中相关内容的解释及有益效果均可参考上文提供的对应的方法实施例,此处不再赘述。The explanation of the relevant contents and beneficial effects of any of the above-mentioned devices can be referred to the corresponding method embodiments provided above, which will not be repeated here.

本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。Those of ordinary skill in the art will appreciate that the units and algorithm steps of each example described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Professional and technical personnel can use different methods to implement the described functions for each specific application, but such implementation should not be considered to be beyond the scope of this application.

所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that, for the convenience and brevity of description, the specific working processes of the systems, devices and units described above can refer to the corresponding processes in the aforementioned method embodiments and will not be repeated here.

在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices and methods can be implemented in other ways. For example, the device embodiments described above are only schematic. For example, the division of the units is only a logical function division. There may be other division methods in actual implementation, such as multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. Another point is that the mutual coupling or direct coupling or communication connection shown or discussed can be through some interfaces, indirect coupling or communication connection of devices or units, which can be electrical, mechanical or other forms.

所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。In addition, each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.

所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时，可以存储在一个计算机可读取存储介质中。基于这样的理解，本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质中，包括若干指令用以使得一台计算机设备（可以是个人计算机，服务器，或者网络设备等）执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括：U盘、移动硬盘、只读存储器（Read-Only Memory，ROM）、随机存取存储器（Random Access Memory，RAM）、磁碟或者光盘等各种可以存储程序代码的介质。If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or a part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。 The above is only a specific implementation of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art who is familiar with the present technical field can easily think of changes or substitutions within the technical scope disclosed in the present application, which should be included in the protection scope of the present application. Therefore, the protection scope of the present application should be based on the protection scope of the claims.

Claims (24)

1. 一种通信方法，其特征在于，包括：第一通信装置获取第一神经网络块，所述第一神经网络块用于信道信息反馈，包含不同数量的所述第一神经网络块的神经网络的功能不同；所述第一通信装置向第二通信装置发送N个所述第一神经网络块分别对应的参数，所述N个第一神经网络块包含于第一神经网络，所述第一神经网络用于所述信道信息的编码处理，其中，N为正整数。A communication method, characterized by comprising: obtaining, by a first communication device, a first neural network block, wherein the first neural network block is used for channel information feedback, and neural networks containing different numbers of the first neural network blocks have different functions; and sending, by the first communication device to a second communication device, parameters respectively corresponding to N first neural network blocks, wherein the N first neural network blocks are included in a first neural network, the first neural network is used for encoding processing of the channel information, and N is a positive integer.

2. 根据权利要求1所述的方法，其特征在于，在所述第一通信装置向第二通信装置发送N个第一神经网络块分别对应的参数之前，所述方法还包括：所述第一通信装置接收来自所述第二通信装置的第一指示信息，所述第一指示信息用于指示所述N。The method according to claim 1, wherein before the first communication device sends the parameters respectively corresponding to the N first neural network blocks to the second communication device, the method further comprises: receiving, by the first communication device, first indication information from the second communication device, wherein the first indication information is used to indicate N.

3. 根据权利要求1或2所述的方法，其特征在于，所述方法还包括：所述第一通信装置接收来自所述第二通信装置的第二指示信息，所述第二指示信息用于指示M，所述M为正整数；所述第一通信装置向第二通信装置发送M个所述第一神经网络块分别对应的参数，其中，所述M个所述第一神经网络块和所述N个所述第一神经网络块包含于第二神经网络。The method according to claim 1 or 2, wherein the method further comprises: receiving, by the first communication device, second indication information from the second communication device, wherein the second indication information is used to indicate M, and M is a positive integer; and sending, by the first communication device to the second communication device, parameters respectively corresponding to M first neural network blocks, wherein the M first neural network blocks and the N first neural network blocks are included in a second neural network.

4. 根据权利要求1至3中任一项所述的方法，其特征在于，所述方法还包括：所述第一通信装置确定神经网络能够包括的第一神经网络块个数的最大值P，P为大于或者等于N的正整数；所述第一通信装置向所述第二通信装置发送N个所述第一神经网络块分别对应的参数，包括：所述第一通信装置向所述第二通信装置发送所述P个所述第一神经网络块分别对应的参数。The method according to any one of claims 1 to 3, wherein the method further comprises: determining, by the first communication device, a maximum value P of the number of first neural network blocks that a neural network can include, where P is a positive integer greater than or equal to N; and the sending, by the first communication device to the second communication device, parameters respectively corresponding to the N first neural network blocks comprises: sending, by the first communication device to the second communication device, parameters respectively corresponding to the P first neural network blocks.

5. 根据权利要求1至4中任一项所述的方法，其特征在于，所述方法还包括：所述第一通信装置接收来自所述第二通信装置的第一信息，所述第一信息中包括第三指示信息和第一向量，所述第三指示信息用于指示所述N，所述第一向量为所述信道信息经过所述第一神经网络编码处理得到的结果。The method according to any one of claims 1 to 4, wherein the method further comprises: receiving, by the first communication device, first information from the second communication device, wherein the first information includes third indication information and a first vector, the third indication information is used to indicate N, and the first vector is a result obtained by encoding the channel information with the first neural network.

6. 根据权利要求1至5中任一项所述的方法，其特征在于，所述第一神经网络块的参数包括以下至少一项：所述第一神经网络块对应的权重信息、偏置信息、或激活函数信息。The method according to any one of claims 1 to 5, wherein the parameters of the first neural network block include at least one of the following: weight information, bias information, or activation function information corresponding to the first neural network block.

7. 根据权利要求1至6中任一项所述的方法，其特征在于，所述第一神经网络块支持以下至少一种神经网络结构：卷积神经网络CNN、多层感知机MLP、或转换器Transformer。The method according to any one of claims 1 to 6, wherein the first neural network block supports at least one of the following neural network structures: a convolutional neural network (CNN), a multi-layer perceptron (MLP), or a Transformer.

8. 根据权利要求1至7中任一项所述的方法，其特征在于，所述第一神经网络包括多个所述第一神经网络块时，所述多个第一神经网络块之间的连接方式包括深度连接和/或宽度连接。The method according to any one of claims 1 to 7, wherein, when the first neural network includes a plurality of the first neural network blocks, the connection manner between the plurality of first neural network blocks includes depth connection and/or width connection.

9. 根据权利要求1至8中任一项所述的方法，其特征在于，所述方法还包括：所述第一通信装置向第二通信装置发送第一神经网络的输出层的参数。The method according to any one of claims 1 to 8, wherein the method further comprises: sending, by the first communication device, parameters of an output layer of the first neural network to the second communication device.

10. 一种通信方法，其特征在于，包括：第二通信装置获取N个第一神经网络块分别对应的参数，所述第一神经网络块用于信道信息反馈，包含不同数量的所述第一神经网络块的神经网络的功能不同；所述第二通信装置基于所述N个所述第一神经网络块确定第一神经网络，所述第一神经网络用于所述第二通信装置对所述信道信息进行编码处理，其中，N为正整数。A communication method, characterized by comprising: obtaining, by a second communication device, parameters respectively corresponding to N first neural network blocks, wherein the first neural network blocks are used for channel information feedback, and neural networks containing different numbers of the first neural network blocks have different functions; and determining, by the second communication device, a first neural network based on the N first neural network blocks, wherein the first neural network is used by the second communication device to encode the channel information, and N is a positive integer.

11. 根据权利要求10所述的方法，其特征在于，所述第二通信装置获取N个第一神经网络块分别对应的参数，包括：所述第二通信装置接收来自第一通信装置的所述N个所述第一神经网络块分别对应的参数。The method according to claim 10, wherein the obtaining, by the second communication device, parameters respectively corresponding to the N first neural network blocks comprises: receiving, by the second communication device, the parameters respectively corresponding to the N first neural network blocks from a first communication device.

12. 根据权利要求11所述的方法，其特征在于，在所述第二通信装置接收来自第一通信装置的所述N个所述第一神经网络块分别对应的参数之前，所述方法还包括：所述第二通信装置根据第二信息确定所需的第一神经网络包括的第一神经网络块个数N，所述第二信息包括所述第二通信装置的能力和/或对所述信道信息进行处理的需求；所述第二通信装置向所述第一通信装置发送第一指示信息，所述第一指示信息用于指示所述N。The method according to claim 11, wherein before the second communication device receives the parameters respectively corresponding to the N first neural network blocks from the first communication device, the method further comprises: determining, by the second communication device according to second information, the number N of first neural network blocks included in a required first neural network, wherein the second information includes a capability of the second communication device and/or a requirement for processing the channel information; and sending, by the second communication device, first indication information to the first communication device, wherein the first indication information is used to indicate N.

13. 根据权利要求10至12中任一项所述的方法，其特征在于，在所述第二通信装置获取N个所述第一神经网络块分别对应的参数之后，所述方法还包括：所述第二通信装置根据第二信息确定所需的第二神经网络包括Q个所述第一神经网络块，所述Q为大于N的正整数，且所述Q和所述N的差值为M，所述第二信息包括所述第二通信装置的能力和/或对所述信道信息进行处理的需求；所述第二通信装置向所述第一通信装置发送第二指示信息，所述第二指示信息用于指示所述M；所述第二通信装置接收来自第一通信装置的M个所述第一神经网络块分别对应的参数。The method according to any one of claims 10 to 12, wherein after the second communication device obtains the parameters respectively corresponding to the N first neural network blocks, the method further comprises: determining, by the second communication device according to second information, that a required second neural network includes Q first neural network blocks, where Q is a positive integer greater than N and the difference between Q and N is M, and the second information includes a capability of the second communication device and/or a requirement for processing the channel information; sending, by the second communication device, second indication information to the first communication device, wherein the second indication information is used to indicate M; and receiving, by the second communication device, parameters respectively corresponding to M first neural network blocks from the first communication device.

14. 根据权利要求10至13中任一项所述的方法，其特征在于，所述方法还包括：所述第二通信装置接收来自第一通信装置的P个所述第一神经网络块分别对应的参数，P为大于或者等于N的正整数；所述第二通信装置根据第二信息从所述P个所述第一神经网络块中确定所述N个所述第一神经网络块，所述第二信息包括所述第二通信装置的能力和/或对所述信道信息进行处理的需求。The method according to any one of claims 10 to 13, wherein the method further comprises: receiving, by the second communication device, parameters respectively corresponding to P first neural network blocks from the first communication device, where P is a positive integer greater than or equal to N; and determining, by the second communication device according to second information, the N first neural network blocks from among the P first neural network blocks, wherein the second information includes a capability of the second communication device and/or a requirement for processing the channel information.

15. 根据权利要求10至14中任一项所述的方法，其特征在于，所述方法还包括：所述第二通信装置向所述第一通信装置发送第一信息，所述第一信息中包括第三指示信息和第一向量，所述第三指示信息用于指示所述N，所述第一向量为所述信道信息经过所述第一神经网络编码得到的结果。The method according to any one of claims 10 to 14, wherein the method further comprises: sending, by the second communication device, first information to the first communication device, wherein the first information includes third indication information and a first vector, the third indication information is used to indicate N, and the first vector is a result of encoding the channel information with the first neural network.

16. 根据权利要求10至15中任一项所述的方法，其特征在于，所述第一神经网络块的参数包括以下至少一项：所述第一神经网络块对应的权重信息、偏置信息、或激活函数信息。The method according to any one of claims 10 to 15, wherein the parameters of the first neural network block include at least one of the following: weight information, bias information, or activation function information corresponding to the first neural network block.

17. 根据权利要求10至16中任一项所述的方法，其特征在于，所述第一神经网络块支持以下至少一种神经网络结构：卷积神经网络CNN、多层感知机MLP、或转换器Transformer。The method according to any one of claims 10 to 16, wherein the first neural network block supports at least one of the following neural network structures: a convolutional neural network (CNN), a multi-layer perceptron (MLP), or a Transformer.

18. 根据权利要求10至17中任一项所述的方法，其特征在于，所述第一神经网络包括多个所述第一神经网络块时，所述多个第一神经网络块之间的连接方式包括深度连接和/或宽度连接。The method according to any one of claims 10 to 17, wherein, when the first neural network includes a plurality of the first neural network blocks, the connection manner between the plurality of first neural network blocks includes depth connection and/or width connection.

19. 根据权利要求10至18中任一项所述的方法，其特征在于，所述方法还包括：所述第二通信装置接收来自第一通信装置的所述第一神经网络的输出层的参数。The method according to any one of claims 10 to 18, wherein the method further comprises: receiving, by the second communication device, parameters of the output layer of the first neural network from the first communication device.

20. 一种通信装置，其特征在于，包括：处理器，用于执行存储器中存储的计算机程序，以使得所述装置执行如权利要求1至9中任一项所述的方法。A communication device, characterized by comprising: a processor, configured to execute a computer program stored in a memory, so that the device performs the method according to any one of claims 1 to 9.

21. 一种通信装置，其特征在于，包括：处理器，用于执行存储器中存储的计算机程序，以使得所述装置执行如权利要求10至19中任一项所述的方法。A communication device, characterized by comprising: a processor, configured to execute a computer program stored in a memory, so that the device performs the method according to any one of claims 10 to 19.

22. 一种通信系统，其特征在于，包括至少一个如权利要求20所述的通信装置和至少一个如权利要求21所述的通信装置。A communication system, characterized by comprising at least one communication device according to claim 20 and at least one communication device according to claim 21.

23. 一种芯片，其特征在于，包括：处理器和接口，用于从存储器中调用并运行所述存储器中存储的计算机程序，以执行如权利要求1至19中任一项所述的方法。A chip, characterized by comprising: a processor and an interface, configured to call from a memory and run a computer program stored in the memory, so as to perform the method according to any one of claims 1 to 19.

24. 一种计算机可读存储介质，其特征在于，用于存储计算机程序，所述计算机程序包括用于实现如权利要求1至19中任一项所述的方法的指令。A computer-readable storage medium, characterized by being used to store a computer program, wherein the computer program includes instructions for implementing the method according to any one of claims 1 to 19.
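The claims above describe a signaling flow in which one side transmits the parameters of N neural network blocks (weights, biases, activation information) and the other side chains those blocks into a "first neural network" that encodes channel information into a "first vector". The toy sketch below illustrates that flow under stated assumptions: every name (`Block`, `assemble_first_network`, the tanh activation, the dimension 8, N = 3) is an invention of this note for illustration, not an API or parameter choice taken from the application.

```python
import math
import random

class Block:
    """One 'first neural network block': weight, bias and activation information
    (cf. claims 6 and 16). The dense layout and tanh activation are assumptions."""
    def __init__(self, weight, bias):
        self.weight = weight          # weight information: dim x dim matrix
        self.bias = bias              # bias information: length-dim vector
        self.activation = math.tanh   # activation function information

    def apply(self, x):
        # y_j = act(sum_i w[j][i] * x[i] + b[j])
        return [self.activation(sum(w * xi for w, xi in zip(row, x)) + b)
                for row, b in zip(self.weight, self.bias)]

def make_block_params(dim, rng):
    """Parameter set the first communication device would transmit for one block."""
    return {"weight": [[rng.gauss(0.0, 0.1) for _ in range(dim)] for _ in range(dim)],
            "bias": [0.0] * dim}

def assemble_first_network(param_list):
    """Second communication device: chain the received blocks in series
    (a 'depth connection') into the encoder for the channel information."""
    blocks = [Block(p["weight"], p["bias"]) for p in param_list]
    def encode(channel_info):
        x = list(channel_info)
        for blk in blocks:            # blocks applied one after another
            x = blk.apply(x)
        return x                      # the 'first vector' reported back
    return encode

rng = random.Random(0)
dim, n = 8, 3                         # suppose the second device indicated N = 3
sent = [make_block_params(dim, rng) for _ in range(n)]   # N parameter sets sent
encoder = assemble_first_network(sent)
channel_info = [rng.gauss(0.0, 1.0) for _ in range(dim)] # toy channel information
first_vector = encoder(channel_info)
```

Chaining a different number of the same blocks yields a different mapping, which matches the statement that neural networks containing different numbers of the first neural network blocks have different functions; a width connection would instead run blocks in parallel on the same input and merge their outputs.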
PCT/CN2024/129364 2023-11-10 2024-11-01 Communication method and communication device Pending WO2025098261A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202311503718.8 2023-11-10
CN202311503718.8A CN119995653A (en) 2023-11-10 2023-11-10 Communication method and communication device

Publications (1)

Publication Number Publication Date
WO2025098261A1 true WO2025098261A1 (en) 2025-05-15

Family

ID=95630606

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/129364 Pending WO2025098261A1 (en) 2023-11-10 2024-11-01 Communication method and communication device

Country Status (2)

Country Link
CN (1) CN119995653A (en)
WO (1) WO2025098261A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210185515A1 (en) * 2019-12-16 2021-06-17 Qualcomm Incorporated Neural network configuration for wireless communication system assistance
CN116939715A (en) * 2022-04-06 2023-10-24 华为技术有限公司 Information interaction method and related device
CN116939716A (en) * 2022-04-06 2023-10-24 华为技术有限公司 Communication device and method
US20230359872A1 (en) * 2022-05-05 2023-11-09 Nvidia Corporation Neural network capability indication

Also Published As

Publication number Publication date
CN119995653A (en) 2025-05-13

Similar Documents

Publication Publication Date Title
US12328167B2 (en) Method and device for transmitting channel state information
WO2022012257A1 (en) Communication method and communication apparatus
WO2023186010A1 (en) Channel state information report transmission method and apparatus, and terminal device and network device
CN113992309A (en) Method and device for obtaining channel parameters
US20240162957A1 (en) Method and device for reporting csi based on ai model in wireless communication system
CN116418424A (en) Channel quality indication calculation or acquisition method and device, terminal and network equipment
WO2025098261A1 (en) Communication method and communication device
WO2024251184A1 (en) Communication method and communication apparatus
WO2024046288A1 (en) Communication method and apparatus
CN118509014A (en) Communication method and communication device
WO2023092310A1 (en) Information processing method, model generation method, and devices
US20250337467A1 (en) Method and device for detecting channel variation, based on ai model in wireless communication system
WO2025232001A1 (en) Methods and systems for csi compression using machine learning with regularization
WO2025167701A1 (en) Communication method and communication apparatus
WO2025139843A1 (en) Communication method and communication apparatus
WO2025016247A1 (en) Channel state information report determination method and apparatus, terminal device, and network device
WO2025024995A1 (en) Information compression method and apparatus, and terminal device and network device
WO2025139762A1 (en) Channel state information reporting method and related product
WO2024140409A1 (en) Channel state information (csi) report configuration method and related apparatus
WO2023231933A1 (en) Communication method and apparatus
WO2025218595A1 (en) Communication method and apparatus
WO2024131900A1 (en) Communication method and communication apparatus
WO2025195293A1 (en) Channel state information feedback method and related product
WO2025011342A1 (en) Communication method and apparatus
WO2025167989A1 (en) Communication method and communication apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24887873

Country of ref document: EP

Kind code of ref document: A1