
WO2025098020A1 - Communication method and related device - Google Patents


Info

Publication number
WO2025098020A1
Authority
WO
WIPO (PCT)
Prior art keywords
channel information
channel
reference signal
neural network
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2024/119383
Other languages
English (en)
Chinese (zh)
Inventor
张朝阳
陈子瑞
李榕
皇甫幼睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of WO2025098020A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00: Baseband systems
    • H04L25/02: Details; arrangements for supplying electrical power along data transmission lines

Definitions

  • the present application relates to the field of wireless technology, and in particular to a communication method and related equipment.
  • multiple-input multiple-output (MIMO)
  • the network device can use the channel information obtained in the channel measurement process to calculate the precoding information between the network device and the terminal device. Subsequently, the network device and the terminal device can realize MIMO communication through the precoding information.
  • the downlink reference signal sent by the network device may include a channel state information reference signal (CSI-RS), and the network device may receive feedback of the CSI-RS, so that the network device can obtain channel information based on the feedback of the CSI-RS.
  • CSI-RS is sent through the port of the network device, that is, the overhead of the CSI-RS is related to the number of ports of the network device.
  • the number of ports of network devices may gradually increase, which will increase the overhead of reference signals (such as CSI-RS) used for channel measurement and occupy more transmission resources, thereby increasing the power consumption of terminal devices.
  • the present application provides a communication method and related equipment, which use neural network processing to obtain channel information of a higher channel dimension from channel information of a lower channel dimension determined based on a reference signal, thereby reducing reference signal overhead, improving resource utilization, and reducing the power consumption of terminal equipment.
  • the present application provides a communication method, which is executed by a network device, or the method is executed by some components (such as a processor, a chip or a chip system, etc.) in the network device, or the method can also be implemented by a logic module or software that can implement all or part of the network device functions.
  • the communication method is described by taking execution by the network device as an example.
  • a network device sends a first reference signal; the network device receives first channel information, the first channel information is determined based on the first reference signal, and the channel dimension represented by the first channel information is [T1, R1, F1], where T1, R1, and F1 are all positive numbers; the network device processes the first channel information and Q second channel information based on a first neural network to obtain third channel information, wherein the Q second channel information are determined by Q second reference signals, and Q is a positive integer; the position of the time domain resources carrying the Q second reference signals is located before the position of the time domain resource carrying the first reference signal, and the channel dimension represented by the third channel information is [T, R, F], where T, R, and F are all positive numbers; the parameters in the channel dimension represented by the first channel information and the parameters in the channel dimension represented by the third channel information satisfy at least one of the following: T is greater than T1, R is greater than R1, and F is greater than F1.
  • the network device can receive the first channel information determined based on the first reference signal. Thereafter, the network device can process the first channel information and Q second channel information based on the first neural network to obtain the third channel information.
  • the channel dimension represented by the first channel information is [T1, R1, F1], and the channel dimension represented by the third channel information is [T, R, F], satisfying at least one of the following: T is greater than T1, R is greater than R1, and F is greater than F1.
  • the network device can obtain the third channel information of a higher channel dimension based on the first channel information of a lower channel dimension, and the first channel information of the lower channel dimension is determined by the first reference signal.
  • the network device can obtain the channel information of a higher dimension through the processing of the neural network based on the channel information of the lower channel dimension determined by the reference signal, which can reduce the reference signal overhead, thereby improving resource utilization and reducing the power consumption of the terminal device.
  • the input data of the first neural network may include the lower-dimensional first channel information and the Q second channel information, and the position of the time domain resources carrying the Q second reference signals is located before the position of the time domain resource carrying the first reference signal, that is, the Q second channel information is historical channel information relative to the first channel information. Therefore, the first neural network can exploit the time domain correlation between the Q second channel information and the first channel information, and can also exploit the spatial domain correlation and/or frequency domain correlation between the first channel information and the third channel information, thereby improving the accuracy with which the neural network obtains higher-dimensional channel information from lower-dimensional channel information.
  • the channel dimensions represented by the channel information may include multiple dimensions such as dimensions in the spatial domain, dimensions in the frequency domain, and dimensions in the time domain.
  • the dimensions in the spatial domain may include dimensions of the antenna port of the signal transmitting end (i.e., transmitting antenna dimensions) and dimensions of the antenna port of the signal receiving end (i.e., receiving antenna dimensions).
  • the channel dimensions represented by the channel information are explained by taking three channel dimensions as an example: one channel dimension is represented as T or T1 (or other parameters with a T subscript that may appear later, such as T2/T3/T4/T5), another channel dimension is represented as R or R1 (or R2/R3/R4/R5, etc.), and another channel dimension is represented as F or F1 (or F2/F3/F4/F5, etc.).
  • the transmission process of the reference signal in space includes the process of the antenna port of the signal transmitting end sending the reference signal and the process of the antenna port of the signal receiving end receiving the reference signal. That is, the dimension of the channel information in the spatial domain can correspond to the transceiver port of the reference signal.
  • the dimension of the reference signal in the spatial domain can be expressed as [T, R] in [T, R, F], where T (or T1, or other parameters with a T subscript such as T2/T3/T4/T5) represents the antenna port of the signal transmitting end, and R (or R1, or other parameters with an R subscript such as R2/R3/R4/R5) represents the antenna port of the signal receiving end.
  • the dimension of the reference signal in the frequency domain may correspond to the frequency domain resource carrying the reference signal.
  • the dimension of the reference signal in the frequency domain may be expressed as [F] in [T, R, F], where F (or F1, or other parameters with an F subscript such as F2/F3/F4/F5) represents the frequency domain resource carrying the reference signal (e.g., one or more frequency domain units contained in the frequency domain resource).
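As an illustrative aside (not part of the patent text), a channel dimension [T, R, F] can be pictured as the shape of a complex-valued tensor. The sketch below uses hypothetical values for T, R, F and T1, R1, F1 to show how lower-dimensional first channel information relates to higher-dimensional third channel information.

```python
import numpy as np

# Illustrative sketch: channel information with channel dimension [T, R, F]
# represented as a complex-valued tensor, where T indexes transmit antenna
# ports, R indexes receive antenna ports, and F indexes frequency-domain units
# (e.g., subcarriers or subbands). All values below are hypothetical.
T, R, F = 32, 4, 12      # hypothetical higher channel dimension
T1, R1, F1 = 8, 4, 12    # hypothetical lower dimension measured via CSI-RS

rng = np.random.default_rng(0)
third_channel_info = rng.standard_normal((T, R, F)) + 1j * rng.standard_normal((T, R, F))
first_channel_info = third_channel_info[:T1, :R1, :F1]  # lower-dimensional view

print(first_channel_info.shape)  # (8, 4, 12)
print(third_channel_info.shape)  # (32, 4, 12)
```

Here T is greater than T1 while R equals R1 and F equals F1, which satisfies the "at least one of" condition stated above.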
  • the position of the time domain resources carrying the Q second reference signals is located before the position of the time domain resources carrying the first reference signal, which can be understood as at least one of the following: the Q second reference signals are sent by the network device before the network device sends the first reference signal, the Q second reference signals are received by the terminal device before the terminal device receives the first reference signal, and the index number of the time domain resources carrying the Q second reference signals is smaller than the index number of the time domain resources carrying the first reference signal.
  • the index number of the time domain resource can be any one of the following: the index number of the system frame, the index number of the subframe, the index number of the time slot, the index number of the orthogonal frequency division multiplexing (OFDM) symbol, etc.
  • the first neural network is used to process lower dimensional channel information (such as first channel information) to obtain higher dimensional channel information (such as third channel information). Accordingly, the first neural network can be understood as a neural network that performs channel prediction.
  • channel prediction can be replaced by other terms, such as channel estimation, channel pre-estimation, channel simulation recovery, channel reconstruction, channel acquisition, channel inference, etc.
  • neural network can be replaced by other terms, such as AI network, machine learning network, AI learning network, AI neural network, etc.
  • the channel dimension represented by at least one second channel information among the Q second channel information is [T2, R2, F2], where T2, R2, and F2 are all positive numbers; the parameters in the channel dimension represented by the second channel information and the parameters in the channel dimension represented by the third channel information satisfy at least one of the following: T is greater than T2, R is greater than R2, and F is greater than F2.
  • the Q second channel information are determined by Q second reference signals, wherein the channel dimension represented by at least one of the Q second channel information satisfies at least one of the above items, that is, the channel dimension represented by the at least one second channel information is lower than the channel dimension represented by the third channel information.
  • the Q second channel information is one of the input data of the first neural network. Therefore, in this way, the reference signal overhead in the process of acquiring the input data of the neural network can be reduced.
  • the parameters in the channel dimension represented by the first channel information and the parameters in the channel dimension represented by the second channel information satisfy at least one of the following: the values of T1 and T2 are the same, the values of R1 and R2 are the same, and the values of F1 and F2 are the same.
  • the position of the time domain resource carrying the Q second reference signals is located before the position of the time domain resource carrying the first reference signal, that is, the Q second channel information is the historical channel information of the first channel information, and the parameters in the two channel information satisfy at least one of the above items, which enables the network device to obtain the two channel information based on at least one identical dimension, thereby reducing the implementation complexity.
  • the first neural network includes a first sub-neural network and a second sub-neural network;
  • the network device processes the first channel information and the Q second channel information based on the first neural network to obtain the third channel information, including: the network device processes the first channel information based on the first sub-neural network to obtain the fourth channel information; wherein the channel dimension represented by the fourth channel information is [T, R, F]; the network device processes the fourth channel information and the Q second channel information based on the second sub-neural network to obtain the third channel information.
  • in the process of obtaining the third channel information based on the first neural network, the network device can process the first channel information based on the first sub-neural network to obtain the fourth channel information, and then process the fourth channel information and the Q second channel information based on the second sub-neural network to obtain the third channel information.
  • the network device can obtain the higher-dimensional fourth channel information by processing, based on the first sub-neural network, the lower-dimensional first channel information obtained via the reference signal, and then further process the fourth channel information and the historically obtained Q second channel information based on the second sub-neural network to obtain the third channel information.
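The two-stage processing described above can be sketched with stand-in functions. The first sub-neural network and second sub-neural network below are placeholders (simple repetition and weighted averaging), not the patent's actual learned models, and all dimensions and data are hypothetical.

```python
import numpy as np

def first_sub_network(h_low, T, R, F):
    """Placeholder for the first sub-neural network: lifts lower-dimensional
    channel information [T1, R1, F1] to the target dimension [T, R, F].
    Nearest-neighbour repetition stands in for the learned mapping."""
    t1, r1, f1 = h_low.shape
    ti = (np.arange(T) * t1) // T
    ri = (np.arange(R) * r1) // R
    fi = (np.arange(F) * f1) // F
    return h_low[np.ix_(ti, ri, fi)]

def second_sub_network(h4, history):
    """Placeholder for the second sub-neural network: fuses the lifted estimate
    with Q historical channel tensors (all of shape [T, R, F]). A learned
    temporal model would replace this simple weighted average."""
    stacked = np.stack([h4] + list(history))
    weights = np.linspace(1.0, 0.2, len(stacked))[:, None, None, None]
    return (weights * stacked).sum(axis=0) / weights.sum()

# Hypothetical dimensions and data, for illustration only.
T, R, F = 16, 4, 8
h1 = np.ones((4, 4, 8))                           # first channel information [T1, R1, F1]
history = [np.ones((T, R, F)) for _ in range(3)]  # Q = 3 historical tensors

h4 = first_sub_network(h1, T, R, F)   # fourth channel information [T, R, F]
h3 = second_sub_network(h4, history)  # third channel information [T, R, F]
print(h3.shape)  # (16, 4, 8)
```

The point of the split is visible in the signatures: the first stage only changes dimension, while the second stage only fuses same-dimension tensors over time.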
  • the method further includes: the network device processes the third channel information based on the second neural network to obtain fifth channel information, and the channel dimension represented by the fifth channel information is [T, R, F].
  • after the network device obtains the higher-dimensional third channel information based on the first neural network, it can also process the third channel information based on the second neural network to obtain the fifth channel information of the same channel dimension.
  • the accuracy of the higher dimensional channel information obtained based on the neural network can be further improved.
  • the second neural network is used to process the input channel information (such as the third channel information) to obtain the output channel information (such as the fifth channel information), and compared with the input channel information, the output channel information is closer to the channel information of the transmission channel in the actual space.
  • the second neural network can be a neural network for optimizing the channel information.
  • a deviation between the fifth channel information and the sixth channel information is less than or equal to a deviation between the third channel information and the sixth channel information, where the channel dimension represented by the sixth channel information is [T, R, F] and the sixth channel information is obtained by measuring a reference signal.
  • the sixth channel information is obtained by measuring the reference signal, while the third channel information and the fifth channel information are obtained through neural network processing. Therefore, compared with the third channel information and the fifth channel information, the sixth channel information is closer to the channel information of the transmission channel in the actual space.
  • the second neural network is used to optimize the third channel information to obtain the fifth channel information, so that the deviation between the fifth channel information and the sixth channel information is less than or equal to the deviation between the third channel information and the sixth channel information, that is, compared with the third channel information, the fifth channel information is closer to the channel information of the transmission channel in the actual space.
  • the deviation between two channel information can be reflected by the mathematical operation results of the two channel information.
  • the deviation can be a normalized mean square error (NMSE), a mean square error (MSE), etc.
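For illustration, a minimal NMSE computation between two channel tensors of equal channel dimension might look as follows; the function name and toy values are assumptions, not taken from the patent.

```python
import numpy as np

def nmse(h_est, h_ref):
    """Normalized mean square error between two channel tensors of equal
    channel dimension [T, R, F]; one of the deviation metrics mentioned above."""
    err = np.abs(h_est - h_ref) ** 2
    return err.sum() / (np.abs(h_ref) ** 2).sum()

# Toy example with hypothetical values.
h_ref = np.ones((2, 2, 4))   # e.g., measured sixth channel information
h_est = h_ref * 0.9          # e.g., predicted fifth channel information
print(nmse(h_ref, h_ref))          # 0.0
print(round(nmse(h_est, h_ref), 4))  # 0.01
```

A smaller NMSE against the measured reference indicates channel information closer to the actual transmission channel, matching how the deviation is used above.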
  • the method also includes: the network device sends a third reference signal; the network device receives channel information corresponding to the third reference signal, where the channel dimension represented by the channel information corresponding to the third reference signal is [T3, R3, F3], and T3, R3, and F3 are all positive numbers; the channel dimension represented by the channel information corresponding to the third reference signal satisfies at least one of the following: T is greater than T3, R is greater than R3, and F is greater than F3; the network device processes the channel information corresponding to the third reference signal, part or all of the Q second channel information, and the third channel information based on the first neural network to obtain seventh channel information, wherein the channel dimension represented by the seventh channel information is [T, R, F].
  • the network device may also send a third reference signal and receive the channel information corresponding to the third reference signal, and subsequently the network device may obtain the seventh channel information based on the channel information corresponding to the third reference signal.
  • the channel dimension represented by the seventh channel information is [T, R, F], and the channel dimension represented by the channel information corresponding to the third reference signal satisfies at least one of the following: T is greater than T 3 , R is greater than R 3 , and F is greater than F 3 .
  • the network device may obtain higher-dimensional channel information based on lower-dimensional channel information multiple times by iteratively running the first neural network based on multiple reference signal transmissions.
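The iterative use of the first neural network can be sketched as a feedback loop over rounds. The predictor below is a toy stand-in (tiling plus averaging with history), and the history buffer size, dimensions, and data are all hypothetical.

```python
from collections import deque
import numpy as np

def predictor(h_low, history, T, R, F):
    """Stand-in for the first neural network: tiles the low-dimensional input
    up to [T, R, F] and averages it with the stored historical tensors."""
    t1, r1, f1 = h_low.shape
    lifted = np.tile(h_low, (T // t1, R // r1, F // f1))
    if history:
        return 0.5 * lifted + 0.5 * np.mean(list(history), axis=0)
    return lifted

T, R, F = 8, 2, 4
history = deque(maxlen=4)  # Q historical channel tensors, oldest dropped first

# Each round models one reference-signal transmission and feedback cycle.
for round_idx in range(3):
    h_low = np.full((4, 2, 4), float(round_idx + 1))  # hypothetical feedback
    h_high = predictor(h_low, history, T, R, F)       # higher-dimensional output
    history.append(h_high)                            # becomes history next round

print(h_high.shape)  # (8, 2, 4)
```

Each round's high-dimensional output joins the history used by later rounds, which is the iterative structure described above.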
  • the channel dimension represented by the first channel information and the channel dimension represented by the channel information corresponding to the third reference signal satisfy at least one of the following: the values of T1 and T3 are the same, the values of R1 and R3 are the same, and the values of F1 and F3 are the same.
  • the channel dimensions of different channel information obtained based on different reference signals may satisfy at least one of the above items, that is, the different channel information includes at least one identical dimension, which can reduce the implementation complexity.
  • the method also includes: the network device sends a fourth reference signal; the network device receives first information, the first information being used to indicate a deviation between channel information corresponding to the fourth reference signal and eighth channel information; wherein the channel dimension represented by the eighth channel information is [T, R, F], and the eighth channel information is obtained by processing the channel information corresponding to the fourth reference signal and P ninth channel information based on a third neural network, the P ninth channel information being determined by P reference signals, and the position of the time domain resources carrying the P reference signals is located before the position of the time domain resources carrying the fourth reference signal, and P is a positive integer.
  • the network device may also send a fourth reference signal; the eighth channel information is obtained when the receiver of the fourth reference signal (i.e., the terminal device) processes the channel information corresponding to the fourth reference signal based on the third neural network.
  • the network device may also receive first information indicating a deviation between the channel information corresponding to the fourth reference signal and the eighth channel information, so that the network device can determine the performance of the channel prediction of the third neural network based on the first information.
  • the network device can determine the performance of the channel prediction of the neural network through the first information. Since the transmission overhead of the first information is less than the transmission overhead of the channel information corresponding to the fourth reference signal, the transmission overhead can be reduced.
  • the first information is used to indicate the deviation between the channel information corresponding to the fourth reference signal and the eighth channel information. The eighth channel information is obtained by processing based on the third neural network; accordingly, it can be understood as the result of the third neural network performing channel prediction, while the channel information corresponding to the fourth reference signal is the channel information of the transmission channel in the actual space. For this reason, the deviation indicated by the first information can be understood as indicating a performance test result of the channel prediction performed by the third neural network.
  • the network structure and/or network parameters of the first neural network deployed on the network device and the third neural network deployed on the terminal device are the same.
  • the channel dimension represented by the channel information corresponding to the fourth reference signal is [T 4 , R 4 , F 4 ], and T 4 , R 4 , and F 4 are all positive numbers;
  • the channel dimension represented by the channel information corresponding to the fourth reference signal and the channel dimension represented by the channel information corresponding to the first reference signal (i.e., the first channel information) satisfy at least one of the following: T is greater than or equal to T 4 and T 4 is greater than T 1 , R is greater than or equal to R 4 and R 4 is greater than R 1 , and F is greater than or equal to F 4 and F 4 is greater than F 1 .
  • the channel dimension represented by the channel information corresponding to the fourth reference signal and the channel dimension represented by the channel information corresponding to the first reference signal satisfy at least one of the above items, that is, the channel information corresponding to the fourth reference signal is channel information of higher dimension than the channel information corresponding to the first reference signal (i.e., the fourth reference signal is a reference signal of higher density than the first reference signal), so as to enable the terminal device to detect the channel prediction performance of the third neural network through the reference signal of higher density.
  • the channel information corresponding to the fourth reference signal is of lower dimension than the eighth channel information, which can reduce the overhead in the process of detecting the channel prediction performance of the neural network.
  • the method further includes: the network device sends second information, where the second information is used to indicate whether to adjust the input parameters of the third neural network; wherein the second information is determined based on the first information.
  • after receiving the first information, the network device can also send second information indicating whether to adjust the input parameters of the third neural network, so that the network device can adjust the input parameters of the third neural network deployed on the terminal device based on the performance detection results of the third neural network.
  • the method further includes: the network device sends third information, the third information indicating that the channel information corresponding to the fourth reference signal is to be processed based on the third neural network; wherein the third information is determined based on the first information.
  • the network device may also send third information for indicating processing of the channel information corresponding to the fourth reference signal based on the third neural network, so that the terminal device can start performance detection of the channel information corresponding to the fourth reference signal based on the third information.
  • the method further includes: the network device sends a fourth reference signal; the network device receives channel information corresponding to the fourth reference signal; the channel dimension represented by the channel information corresponding to the fourth reference signal is [T4, R4, F4], and T4, R4, and F4 are all positive numbers; at least one of the following is satisfied: T is greater than or equal to T4 and T4 is greater than T1, R is greater than or equal to R4 and R4 is greater than R1, and F is greater than or equal to F4 and F4 is greater than F1; the network device processes, based on the first neural network, the channel information corresponding to the fourth reference signal and part or all of the Q second channel information to obtain tenth channel information, wherein the channel dimension represented by the tenth channel information is [T, R, F]; and the network device determines the deviation between the channel information corresponding to the fourth reference signal and the tenth channel information.
  • the network device may also send a fourth reference signal and receive channel information corresponding to the fourth reference signal, and the network device may process the channel information corresponding to the fourth reference signal based on the first neural network to obtain tenth channel information.
  • the network device may also determine the deviation between the channel information corresponding to the fourth reference signal and the tenth channel information, so that the network device can determine the performance of the channel prediction of the first neural network based on the deviation.
  • the method also includes: the network device sends a fifth reference signal, and the channel dimension represented by the channel information corresponding to the fifth reference signal is determined based on the deviation between the channel information corresponding to the fourth reference signal and the tenth channel information.
  • the network device can also send a fifth reference signal based on the deviation between the channel information corresponding to the fourth reference signal and the tenth channel information, that is, the network device can update or adjust the sending of the reference signal based on the performance of the channel prediction of the first neural network.
  • the channel dimension represented by the channel information corresponding to the fifth reference signal is [T 5 , R 5 , F 5 ], and T 5 , R 5 , and F 5 are all positive numbers;
  • the channel dimension represented by the channel information corresponding to the fifth reference signal is determined based on the deviation between the channel information corresponding to the fourth reference signal and the tenth channel information, including: when the deviation indicates that the channel quality corresponding to the tenth channel information is good, at least one of the following is satisfied: T5 is less than or equal to T4, R5 is less than or equal to R4, and F5 is less than or equal to F4; when the deviation indicates that the channel quality corresponding to the tenth channel information is poor, at least one of the following is satisfied: T5 is greater than or equal to T4, R5 is greater than or equal to R4, and F5 is greater than or equal to F4.
  • when the deviation indicates that the channel quality corresponding to the tenth channel information is good, the network device determines that the channel prediction performance of the first neural network is good. For this reason, after sending the fourth reference signal, the network device can send the fifth reference signal with the same or a smaller pilot density to reduce overhead.
  • when the deviation indicates that the channel quality corresponding to the tenth channel information is poor, the network device determines that the channel prediction performance of the first neural network is poor. In this case, the network device can send the fifth reference signal with the same or a larger pilot density, so that the network device can obtain channel information with a higher channel dimension and optimize the input parameters of the first neural network based on that channel information to improve the channel prediction performance of the first neural network.
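The pilot-density adaptation described above can be sketched as a simple policy. The function, the threshold, and the halving/doubling rule are illustrative assumptions consistent with the constraints T5 <= T4 (good prediction) and T5 >= T4 (poor prediction), not the patent's actual procedure.

```python
def next_reference_signal_dims(prev_dims, deviation, threshold=0.05):
    """Illustrative policy (names and threshold are assumptions): if the
    deviation (e.g., NMSE) shows good prediction quality, keep or reduce the
    pilot density; if it shows poor quality, keep or increase it."""
    t4, r4, f4 = prev_dims
    if deviation <= threshold:         # prediction good: T5 <= T4, etc.
        return (max(1, t4 // 2), r4, f4)
    return (t4 * 2, r4, f4)            # prediction poor: T5 >= T4, etc.

print(next_reference_signal_dims((8, 4, 12), 0.01))  # (4, 4, 12)
print(next_reference_signal_dims((8, 4, 12), 0.2))   # (16, 4, 12)
```

Only the transmit-port dimension is varied here for simplicity; the same rule could apply to R or F per the "at least one of the following" wording.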
  • the method further includes: the network device adjusting an input parameter of the first neural network based on a deviation between the channel information corresponding to the fourth reference signal and the tenth channel information.
  • the network device can also adjust the input parameters of the first neural network based on the deviation, and can optimize the input parameters of the first neural network to improve the performance of the channel prediction of the first neural network.
  • the method further includes: the network device sends a sixth reference signal; the network device receives channel information corresponding to the sixth reference signal; the network device processes the channel information corresponding to the sixth reference signal and K eleventh channel information based on the fourth neural network to obtain twelfth channel information; wherein the K eleventh channel information is determined by K reference signals, and K is a positive integer; the time domain resources carrying the K reference signals are located before the time domain resource carrying the sixth reference signal; the channel dimension represented by the twelfth channel information is [T, R, F]; wherein the network structure of the first neural network is different from the network structure of the fourth neural network, or the network structure of the first neural network is the same as the network structure of the fourth neural network and the parameters in the network structure of the first neural network are different from the parameters in the network structure of the fourth neural network; the sixth reference signal satisfies at least one of the following:
  • the resource carrying the sixth reference signal is different from the resource carrying the first reference signal;
  • the first channel information and the channel information corresponding to the sixth reference signal come from different terminal devices;
  • the first channel information and the channel information corresponding to the sixth reference signal come from terminal devices with different mobile information.
  • in addition to sending the first reference signal, the network device can also send a sixth reference signal, and perform channel prediction on the channel information corresponding to the sixth reference signal based on the fourth neural network to obtain twelfth channel information of a higher dimension.
  • the transmission of the sixth reference signal satisfies at least one of the above items to enhance the flexibility of the solution implementation.
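The prediction step described in the aspects above can be sketched as follows. This is a hedged illustration only: the function name, the list-based channel representation, the toy numeric values, and the repetition-based stand-in for the neural network are assumptions for exposition, not the disclosed model.

```python
# Illustrative sketch: channel information is modeled as a list of time
# snapshots, each snapshot a flat list of R*F complex channel gains.
def predict_channel(current, history, t_out):
    """current: T1 snapshots measured for the current reference signal.
    history: Q earlier snapshot lists (from earlier reference signals).
    Returns t_out snapshots, i.e. channel information of dimension [t_out, R, F]."""
    observed = [snap for past in history for snap in past] + current
    # Stand-in for the neural network: extrapolate by repeating the most
    # recent observed snapshot until the higher time dimension is reached.
    predicted = list(current)
    while len(predicted) < t_out:
        predicted.append(observed[-1])
    return predicted

# Example: T1 = 2 observed snapshots, Q = 1 historical sample, predict T = 4.
current = [[0.9 + 0.1j, 0.8 - 0.2j], [0.85 + 0.05j, 0.82 - 0.1j]]
history = [[[1.0 + 0.0j, 0.7 - 0.3j]]]
out = predict_channel(current, history, t_out=4)
print(len(out))  # 4: the time dimension is raised from T1 = 2 to T = 4
```

The point of the sketch is only the shape of the interface: low-dimension current channel information plus historical channel information in, higher-dimension channel information out.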
  • the second aspect of the present application provides a communication method, which is executed by a network device, or the method is executed by some components in the network device (such as a processor, a chip or a chip system, etc.), or the method can also be implemented by a logic module or software that can realize all or part of the network device functions.
  • the communication method is described as an example of being executed by a network device.
  • the network device sends a fourth reference signal; the network device receives first information, and the first information is used to indicate the deviation between the channel information corresponding to the fourth reference signal and the eighth channel information; wherein the channel dimension represented by the eighth channel information is [T, R, F], and T, R, and F are all positive numbers; wherein the eighth channel information is obtained by processing the channel information corresponding to the fourth reference signal and P ninth channel information based on the third neural network, and the P ninth channel information is determined by P reference signals, and the position of the time domain resource carrying the P reference signals is located before the position of the time domain resource carrying the fourth reference signal, and P is a positive integer.
  • the network device can send a fourth reference signal, and the eighth channel information is channel information obtained by the receiver of the fourth reference signal (i.e., the terminal device) by processing the channel information corresponding to the fourth reference signal based on the third neural network.
  • the network device can also receive first information indicating the deviation between the channel information corresponding to the fourth reference signal and the eighth channel information, so that the network device can determine the performance of the channel prediction of the third neural network based on the first information.
  • the network device can determine the performance of the channel prediction of the neural network through the first information. Since the transmission overhead of the first information is less than the transmission overhead of the channel information corresponding to the fourth reference signal, the transmission overhead can be reduced.
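The overhead argument above can be made concrete with a toy sketch. Hedged: the normalized mean-squared error (NMSE) metric, the variable names, and the numeric values are illustrative assumptions; the text does not fix a specific deviation metric for the first information.

```python
# The terminal computes a scalar deviation between the measured channel
# information for the fourth reference signal and the eighth (predicted)
# channel information, and reports only that scalar as the first information.
def nmse(measured, predicted):
    """Both inputs: flat lists of complex channel gains of equal length."""
    err = sum(abs(m - p) ** 2 for m, p in zip(measured, predicted))
    ref = sum(abs(m) ** 2 for m in measured)
    return err / ref

measured = [1.0 + 0.0j, 0.8 - 0.2j, 0.9 + 0.1j]
predicted = [0.95 + 0.02j, 0.82 - 0.18j, 0.88 + 0.12j]
deviation = nmse(measured, predicted)

# Reporting one scalar instead of len(measured) complex coefficients is
# what makes the first-information overhead smaller than feeding back the
# full channel information.
print(deviation < 0.01)  # True for this example
```

The network device then judges the channel prediction performance from this single reported value rather than from the full channel information.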
  • the channel dimension represented by the channel information corresponding to the fourth reference signal is [T4, R4, F4], and T4, R4, and F4 are all positive numbers;
  • the channel dimension represented by the channel information corresponding to the fourth reference signal and the channel dimension represented by the eighth channel information satisfy at least one of the following: T is greater than or equal to T4, R is greater than or equal to R4, and F is greater than or equal to F4.
  • the channel information corresponding to the fourth reference signal has a lower dimension than the eighth channel information, which can reduce the overhead in the process of detecting the channel prediction performance of the neural network.
  • the method further includes: the network device sends second information, where the second information is used to indicate whether to adjust the input parameters of the third neural network; wherein the second information is determined based on the first information.
  • the network device after receiving the first information, can also send second information for indicating whether to adjust the input parameters of the third neural network, so that the network device can adjust the input parameters of the third neural network deployed on the terminal device based on the performance detection results of the third neural network.
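How the network device might derive the second information from the reported first information can be sketched as a threshold test. Hedged: the threshold value, the function name, and the boolean encoding are illustrative assumptions; the text only states that the second information is determined based on the first information.

```python
# If the reported deviation exceeds an acceptance level, the network
# device signals the terminal to adjust the input parameters of the
# third neural network; otherwise the parameters are kept.
DEVIATION_THRESHOLD = 0.05  # hypothetical acceptance level

def second_information(first_information):
    """Returns True if the input parameters should be adjusted."""
    return first_information > DEVIATION_THRESHOLD

print(second_information(0.10))  # deviation too large -> adjust
print(second_information(0.01))  # prediction acceptable -> keep parameters
```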
  • the method further includes: the network device sends third information, the third information indicating that the channel information corresponding to the fourth reference signal is to be processed based on the third neural network; wherein the third information is determined based on the first information.
  • the network device may also send third information for indicating processing of the channel information corresponding to the fourth reference signal based on the third neural network, so that the terminal device can start performance detection of the channel information corresponding to the fourth reference signal based on the third information.
  • the third aspect of the present application provides a communication method, which is executed by a terminal device, or the method is executed by some components in the terminal device (such as a processor, a chip or a chip system, etc.), or the method can also be implemented by a logic module or software that can realize all or part of the terminal device functions.
  • the communication method is described as an example of being executed by a terminal device.
  • the terminal device receives a fourth reference signal; the terminal device sends a first information, and the first information is used to indicate the deviation between the channel information corresponding to the fourth reference signal and the eighth channel information; wherein the channel dimension represented by the eighth channel information is [T, R, F], and T, R, and F are all positive numbers; wherein the eighth channel information is obtained by processing the channel information corresponding to the fourth reference signal and P ninth channel information based on the third neural network, and the P ninth channel information is determined by P reference signals, and the position of the time domain resource carrying the P reference signals is located before the position of the time domain resource carrying the fourth reference signal, and P is a positive integer.
  • the terminal device can receive the fourth reference signal, and the eighth channel information is the channel information obtained by the receiver of the fourth reference signal (i.e., the terminal device) processing the channel information corresponding to the fourth reference signal based on the third neural network.
  • the terminal device can also send first information indicating the deviation between the channel information corresponding to the fourth reference signal and the eighth channel information, so that the network device can determine the channel prediction performance of the third neural network based on the first information.
  • compared with the terminal device sending the channel information corresponding to the fourth reference signal itself, the terminal device processes the channel information corresponding to the fourth reference signal based on the third neural network and reports only the deviation, which is then used to determine the channel prediction performance of the neural network.
  • the network device can determine the performance of the channel prediction of the neural network through the first information. Since the transmission overhead of the first information is less than the transmission overhead of the channel information corresponding to the fourth reference signal, the transmission overhead can be reduced.
  • the channel dimension represented by the channel information corresponding to the fourth reference signal is [T4, R4, F4], and T4, R4, and F4 are all positive numbers;
  • the channel dimension represented by the channel information corresponding to the fourth reference signal and the channel dimension represented by the eighth channel information satisfy at least one of the following: T is greater than or equal to T4, R is greater than or equal to R4, and F is greater than or equal to F4.
  • the channel information corresponding to the fourth reference signal has a lower dimension than the eighth channel information, which can reduce the overhead in the process of detecting the channel prediction performance of the neural network.
  • the method further includes: the terminal device receives second information, the second information being used to indicate whether to adjust an input parameter of the third neural network; wherein the second information is determined based on the first information.
  • after the terminal device sends the first information to the network device, the terminal device can also receive second information for indicating whether to adjust the input parameters of the third neural network, so that the network device can adjust the input parameters of the third neural network deployed on the terminal device based on the performance detection results of the third neural network.
  • the method also includes: the terminal device receives third information, the third information indicating that the channel information corresponding to the fourth reference signal is to be processed based on the third neural network; wherein the third information is determined based on the first information.
  • before the terminal device sends the first information to the network device, the terminal device can also receive third information for indicating processing of the channel information corresponding to the fourth reference signal based on the third neural network, so that the terminal device can start performance detection of the channel information corresponding to the fourth reference signal based on the third information.
  • the fourth aspect of the present application provides a communication device, which is a network device, or the device is a partial component in the network device (such as a processor, a chip or a chip system, etc.), or the device can also be a logic module or software that can implement all or part of the network device functions.
  • the communication device is described as an example of a network device.
  • the device includes a processing unit and a transceiver unit; the transceiver unit is used to send a first reference signal; the transceiver unit is also used to receive first channel information, the first channel information is determined based on the first reference signal, the channel dimension represented by the first channel information is [T1, R1, F1], T1, R1, F1 are all positive numbers; the processing unit is used to process the first channel information and Q second channel information based on a first neural network to obtain third channel information; wherein the Q second channel information are determined by Q second reference signals, Q is a positive integer; the position of the time domain resource carrying the Q second reference signals is located before the position of the time domain resource carrying the first reference signal, the channel dimension represented by the third channel information is [T, R, F], T, R, F are all positive numbers; wherein the parameters in the channel dimension represented by the first channel information and the parameters in the channel dimension represented by the third channel information satisfy at least one of the following: T is greater than T1, R is greater than R1, and F is greater than F1.
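The dimension relationship stated above can be expressed as a simple predicate. Hedged sketch: the tuple representation and the function name are assumptions for illustration only.

```python
# The predicted (third) channel information must exceed the observed
# (first) channel information in at least one of the time (T),
# receive-antenna (R) or frequency (F) dimensions.
def is_higher_dimension(dim_in, dim_out):
    """dim_in = (T1, R1, F1), dim_out = (T, R, F)."""
    t1, r1, f1 = dim_in
    t, r, f = dim_out
    return t > t1 or r > r1 or f > f1

print(is_higher_dimension((2, 4, 8), (6, 4, 8)))  # time dimension raised
print(is_higher_dimension((2, 4, 8), (2, 4, 8)))  # no dimension raised
```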
  • the constituent modules of the communication device can also be used to execute the steps performed in each possible implementation method of the first aspect and achieve corresponding technical effects.
  • the fifth aspect of the present application provides a communication device, which is a network device, or the device is a partial component in the network device (such as a processor, a chip or a chip system, etc.), or the device can also be a logic module or software that can implement all or part of the network device functions.
  • the communication device is described as an example of a network device.
  • the device includes a processing unit and a transceiver unit; the transceiver unit is used to send a fourth reference signal; the transceiver unit is also used to receive first information, and the processing unit is used to determine the deviation between channel information corresponding to the fourth reference signal and eighth channel information based on the first information; wherein the channel dimension represented by the eighth channel information is [T, R, F], and T, R, and F are all positive numbers; wherein the eighth channel information is obtained by processing the channel information corresponding to the fourth reference signal and P ninth channel information based on a third neural network, the P ninth channel information are determined by P reference signals, the position of the time domain resources carrying the P reference signals is located before the position of the time domain resources carrying the fourth reference signal, and P is a positive integer.
  • the constituent modules of the communication device can also be used to execute the steps performed in each possible implementation method of the second aspect and achieve corresponding technical effects.
  • a sixth aspect of the present application provides a communication device, which is a terminal device, or the device is a partial component in the terminal device. (e.g., a processor, a chip, or a chip system, etc.), or the device may also be a logic module or software that can implement all or part of the terminal device functions.
  • the communication device is described as an example of a terminal device.
  • the device includes a processing unit and a transceiver unit; the transceiver unit is used to receive a fourth reference signal; the processing unit is used to determine the first information; the transceiver unit is also used to send the first information, the first information is used to indicate the deviation between the channel information corresponding to the fourth reference signal and the eighth channel information; wherein the channel dimension represented by the eighth channel information is [T, R, F], and T, R, and F are all positive numbers; wherein the eighth channel information is obtained by processing the channel information corresponding to the fourth reference signal and P ninth channel information based on a third neural network, the P ninth channel information is determined by P reference signals, the position of the time domain resources carrying the P reference signals is located before the position of the time domain resources carrying the fourth reference signal, and P is a positive integer.
  • the constituent modules of the communication device can also be used to execute the steps performed in each possible implementation method of the third aspect and achieve corresponding technical effects.
  • the seventh aspect of the present application provides a communication device, comprising at least one processor, wherein the at least one processor is coupled to a memory; the memory is used to store programs or instructions; wherein the at least one processor is used to execute the program or instructions so that the device implements the method described in any one of the first to third aspects and any possible implementation method thereof.
  • the eighth aspect of the present application provides a communication device, comprising at least one logic circuit and an input/output interface; the logic circuit is used to execute the method described in any one of the first to third aspects and any possible implementation method thereof.
  • the ninth aspect of the present application provides a computer-readable storage medium storing instructions; when the instructions are run by a processor, the processor executes the method described in any one of the first to third aspects above and any possible implementation thereof.
  • the tenth aspect of the present application provides a computer program product (or computer program), which includes computer program code; when the computer program code is executed by a processor, the processor executes the method described in any one of the first to third aspects above and any possible implementation method thereof.
  • the eleventh aspect of the present application provides a chip system, which includes at least one processor for supporting a communication device to implement the functions involved in any one of the first to third aspects and any possible implementation methods thereof.
  • the chip system may also include a memory for storing program instructions and data necessary for the communication device.
  • the chip system may be composed of a chip, or may include a chip and other discrete devices.
  • the chip system also includes an interface circuit, which provides program instructions and/or data for the at least one processor.
  • the twelfth aspect of the present application provides a communication system, which includes the communication device of the third aspect and the communication device of the fourth aspect.
  • the communication system includes the terminal device of any of the above aspects and any of its implementations, and the network device of any of the above aspects and any of its implementations.
  • FIG. 1a is a schematic diagram of a communication system provided by the present application.
  • FIG. 1b is another schematic diagram of a communication system provided by the present application.
  • FIG. 1c is another schematic diagram of a communication system provided by the present application.
  • FIG. 1d is a schematic diagram of the AI processing process involved in the present application.
  • FIG. 1e is another schematic diagram of the AI processing process involved in the present application.
  • FIG. 2a is another schematic diagram of the AI processing process involved in the present application.
  • FIG. 2b is another schematic diagram of the AI processing process involved in the present application.
  • FIG. 2c is another schematic diagram of the AI processing process involved in the present application.
  • FIG. 2d is another schematic diagram of the AI processing process involved in the present application.
  • FIG. 2e is another schematic diagram of the AI processing process involved in the present application.
  • FIG. 3 is an interactive schematic diagram of the communication method provided by the present application.
  • FIG. 4a is another schematic diagram of the AI processing process provided by the present application.
  • FIG. 4b is another schematic diagram of the AI processing process provided by the present application.
  • FIG. 4c is another schematic diagram of the AI processing process provided by the present application.
  • FIG. 4d is another schematic diagram of the AI processing process provided by the present application.
  • FIG. 5a is another schematic diagram of the AI processing process provided by the present application.
  • FIG. 5b is another schematic diagram of the AI processing process provided by the present application.
  • FIG. 5c is another schematic diagram of the AI processing process provided by the present application.
  • FIG. 5d is another schematic diagram of the AI processing process provided by the present application.
  • FIG. 6a is another schematic diagram of the AI processing process provided by the present application.
  • FIG. 6b is another schematic diagram of the AI processing process provided by the present application.
  • FIG. 7 is another schematic diagram of the AI processing process provided by the present application.
  • FIG. 8 is a schematic diagram of a communication device provided by the present application.
  • FIG. 9 is another schematic diagram of a communication device provided by the present application.
  • FIG. 10 is another schematic diagram of a communication device provided by the present application.
  • FIG. 11 is another schematic diagram of a communication device provided by the present application.
  • FIG. 12 is another schematic diagram of a communication device provided by the present application.
  • a terminal device can be a wireless terminal device that can receive scheduling and instruction information from a network device.
  • the wireless terminal device can be a device that provides voice and/or data connectivity to users, or a handheld device with wireless connection function, or other processing devices connected to a wireless modem.
  • the terminal equipment can communicate with one or more core networks or the Internet via the radio access network (RAN).
  • the terminal equipment can be a mobile terminal equipment, such as a mobile phone (or "cellular" phone) or a computer with a data card.
  • it can be a portable, pocket-sized, handheld, computer-built-in or vehicle-mounted mobile device that exchanges voice and/or data with the radio access network.
  • for example, a personal communication service (PCS) phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), or a tablet computer (Pad) with wireless transceiver functions, and other devices.
  • Wireless terminal equipment can also be called system, subscriber unit, subscriber station, mobile station, mobile station (MS), remote station, access point (AP), remote terminal equipment (remote terminal), access terminal equipment (access terminal), user terminal equipment (user terminal), user agent (user agent), subscriber station (SS), customer premises equipment (CPE), terminal, user equipment (UE), mobile terminal (MT), etc.
  • the terminal device may also be a wearable device.
  • Wearable devices may also be referred to as wearable smart devices or smart wearable devices, etc., which are a general term for the application of wearable technology to intelligently design and develop wearable devices for daily wear, such as glasses, gloves, watches, clothing and shoes.
  • a wearable device is a portable device that is worn directly on the body or integrated into the user's clothes or accessories. Wearable devices are not only hardware devices, but also powerful functions achieved through software support, data interaction, and cloud interaction.
  • wearable smart devices include full-featured, large-size, and independent of smartphones to achieve complete or partial functions, such as smart watches or smart glasses, etc., as well as those that only focus on a certain type of application function and need to be used in conjunction with other devices such as smartphones, such as various types of smart bracelets, smart helmets, and smart jewelry for vital sign monitoring.
  • the terminal may also be a drone, a robot, a terminal in device-to-device (D2D) communication, a terminal in vehicle to everything (V2X), a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self driving, a wireless terminal in remote medical, a wireless terminal in smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, etc.
  • the terminal device may also be a terminal device in a communication system that evolves after the fifth generation (5th generation, 5G) communication system (e.g., a sixth generation (6th generation, 6G) communication system, etc.) or a terminal device in a public land mobile network (PLMN) that evolves in the future, etc.
  • the 6G network can further expand the form and function of the 5G communication terminal
  • the 6G terminal includes but is not limited to a car, a cellular network terminal (with integrated satellite terminal function), a drone, and an Internet of Things (IoT) device.
  • the terminal device may also obtain AI services provided by the network device.
  • the terminal device may also have AI processing capabilities.
  • the network equipment can be a RAN node (or device) that connects a terminal device to a wireless network, which can also be called a base station.
  • RAN equipment are: base station, evolved NodeB (eNodeB), gNB (gNodeB) in a 5G communication system, transmission reception point (TRP), evolved Node B (eNB), radio network controller (RNC), Node B (NB), home base station (e.g., home evolved Node B, or home Node B, HNB), baseband unit (BBU), or wireless fidelity (Wi-Fi) access point AP, etc.
  • the network equipment may include a centralized unit (CU) node, a distributed unit (DU) node, or a RAN device including a CU node and a DU node.
  • the RAN node can also be a macro base station, a micro base station or an indoor station, a relay node or a donor node, or a wireless controller in a cloud radio access network (CRAN) scenario.
  • the RAN node can also be a server, a wearable device, a vehicle or an onboard device, etc.
  • the access network device in the vehicle to everything (V2X) technology can be a road side unit (RSU).
  • the RAN node can be a central unit (CU), a distributed unit (DU), a CU-control plane (CP), a CU-user plane (UP), or a radio unit (RU).
  • the CU and DU can be set separately, or can also be included in the same network element, such as a baseband unit (BBU).
  • the RU can be included in a radio frequency device or a radio frequency unit, such as a remote radio unit (RRU), an active antenna unit (AAU) or a remote radio head (RRH).
  • in different systems, the CU (or CU-CP and CU-UP), DU or RU may also have different names, but those skilled in the art can understand their meanings.
  • for example, the CU may also be called O-CU (open CU), the DU may also be called O-DU, the CU-CP may also be called O-CU-CP, the CU-UP may also be called O-CU-UP, and the RU may also be called O-RU.
  • CU, CU-CP, CU-UP, DU and RU are used as examples for description in this application.
  • Any unit of CU (or CU-CP, CU-UP), DU and RU in this application may be implemented by a software module, a hardware module, or a combination of a software module and a hardware module.
  • the protocol layer may include a control plane protocol layer and a user plane protocol layer.
  • the control plane protocol layer may include at least one of the following: a radio resource control (RRC) layer, a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, a media access control (MAC) layer, or a physical (PHY) layer.
  • the user plane protocol layer may include at least one of the following: a service data adaptation protocol (SDAP) layer, a PDCP layer, an RLC layer, a MAC layer, or a physical layer.
  • the network device may be any other device that provides wireless communication functions for the terminal device.
  • the embodiments of the present application do not limit the specific technology or the specific device form used by the network device.
  • the network equipment may also include core network equipment, such as mobility management entity (MME), home subscriber server (HSS), serving gateway (S-GW), policy and charging rules function (PCRF), public data network gateway (PDN gateway, P-GW) in the fourth generation (4G) network; access and mobility management function (AMF), user plane function (UPF) or session management function (SMF) in the 5G network.
  • the core network equipment may also include other core network equipment in the 5G network and in the next generation network of the 5G network.
  • the above-mentioned network device may also be a network node with AI capabilities, which can provide AI services for terminals or other network devices.
  • it may be an AI node on the network side (access network or core network), a computing node, a RAN node with AI capabilities, a core network element with AI capabilities, etc.
  • the device for realizing the function of the network device may be a network device, or may be a device capable of supporting the network device to realize the function, such as a chip system, which may be installed in the network device.
  • the technical solution provided in the embodiments of the present application is described by taking, as an example, the case where the device for realizing the function of the network device is the network device itself.
  • in the embodiments of the present application, "configuration" and "pre-configuration" may both be used.
  • Configuration refers to the network device/server sending some parameter configuration information or parameter values to the terminal through messages or signaling, so that the terminal can determine the communication parameters or resources during transmission based on these values or information.
  • Pre-configuration is similar to configuration, and can be parameter information or parameter values pre-negotiated between the network device/server and the terminal device, or parameter information or parameter values used by the base station/network device or terminal device specified by the standard protocol, or parameter information or parameter values pre-stored in the base station/server or terminal device. This application does not limit this.
  • "system" and "network" in the embodiments of the present application can be used interchangeably.
  • “Multiple” refers to two or more.
  • “And/or” describes the association relationship of associated objects, indicating that three relationships may exist.
  • "A and/or B" can represent: A exists alone, A and B exist at the same time, or B exists alone, where A and B can be singular or plural.
  • the character “/” generally indicates that the objects associated with each other are in an "or” relationship.
  • "at least one of the following" or similar expressions refers to any combination of these items, including any combination of single items or plural items.
  • “at least one of A, B and C” includes A, B, C, AB, AC, BC or ABC.
  • the ordinal numbers such as “first” and “second” mentioned in the embodiments of the present application are used to distinguish multiple objects, and are not used to limit the order, timing, priority or importance of multiple objects.
• "Send" and "receive" in the embodiments of the present application indicate the direction of signal transmission.
• "Send information to XX" can be understood as the destination of the information being XX, which can include direct sending through the air interface, and also indirect sending through the air interface by other units or modules.
• "Receive information from YY" can be understood as the source of the information being YY, which can include direct receiving from YY through the air interface, and also indirect receiving, through other units or modules, of information from YY sent through the air interface.
• "Send" can also be understood as the "output" of a chip interface, and "receive" can also be understood as the "input" of a chip interface.
  • sending and receiving can be performed between devices, for example, between a network device and a terminal device, or can be performed within a device, for example, sending or receiving between components, modules, chips, software modules, or hardware modules within the device through a bus, wiring, or interface.
  • information may be processed between the source and destination of information transmission, such as coding, modulation, etc., but the destination can understand the valid information from the source. Similar expressions in this application can be understood similarly and will not be repeated.
  • indication may include direct indication and indirect indication, and may also include explicit indication and implicit indication.
  • the information indicated by a certain information is called the information to be indicated.
• There are many ways to indicate the information to be indicated, such as, but not limited to, directly indicating the information to be indicated, for example the information to be indicated itself or an index of the information to be indicated.
  • the information to be indicated may also be indirectly indicated by indicating other information, wherein the other information has an association with the information to be indicated; or only a part of the information to be indicated may be indicated, while the other part of the information to be indicated is known or agreed in advance.
  • the arrangement order of each piece of information agreed in advance may be used to implement the indication of specific information, thereby reducing the indication overhead to a certain extent.
• The present application does not limit the specific method of indication. It is understandable that, for the sender of the indication information, the indication information can be used to indicate the information to be indicated; for the receiver of the indication information, the indication information can be used to determine the information to be indicated.
  • the present application can be applied to a long term evolution (LTE) system, a new radio (NR) system, or a communication system evolved after 5G (such as 6G, etc.), wherein the communication system includes at least one network device and/or at least one terminal device.
  • FIG. 1a is a schematic diagram of a communication system in the present application.
  • FIG. 1a shows a network device and six terminal devices, which are terminal device 1, terminal device 2, terminal device 3, terminal device 4, terminal device 5, and terminal device 6.
  • terminal device 1 is a smart tea cup
  • terminal device 2 is a smart air conditioner
  • terminal device 3 is a smart gas station
  • terminal device 4 is a means of transportation
  • terminal device 5 is a mobile phone
  • terminal device 6 is a printer.
  • the AI configuration information sending entity may be a network device.
  • the AI configuration information receiving entity may be a terminal device 1-terminal device 6.
  • the network device and the terminal device 1-terminal device 6 form a communication system.
  • the terminal device 1-terminal device 6 may send data to the network device, and the network device needs to receive the data sent by the terminal device 1-terminal device 6.
  • the network device may send configuration information to the terminal device 1-terminal device 6.
  • terminal device 4-terminal device 6 can also form a communication system.
  • terminal device 5 serves as a network device, that is, an AI configuration information sending entity
  • terminal device 4 and terminal device 6 serve as terminal devices, that is, AI configuration information receiving entities.
  • terminal device 5 sends AI configuration information to terminal device 4 and terminal device 6 respectively, and receives data sent by terminal device 4 and terminal device 6; correspondingly, terminal device 4 and terminal device 6 receive AI configuration information sent by terminal device 5, and send data to terminal device 5.
  • different devices may also execute AI-related services.
  • the base station can perform communication-related services and AI-related services with one or more terminal devices, and communication-related services and AI-related services can also be performed between different terminal devices.
  • communication-related services and AI-related services can also be performed between the TV and the mobile phone.
  • an AI network element can be introduced into the communication system provided in the present application to implement some or all AI-related operations.
  • the AI network element may also be referred to as an AI node, an AI device, an AI entity, an AI module, an AI model, or an AI unit, etc.
  • the AI network element may be a network element built into a communication system.
  • the AI network element may be an AI module built into: an access network device, a core network device, a cloud server, or a network management (operation, administration and maintenance, OAM) to implement AI-related functions.
  • the OAM may be a network management device for a core network device and/or a network management device for an access network device.
  • the AI network element may also be a network element independently set up in the communication system.
  • the terminal or the chip built into the terminal may also include an AI entity to implement AI-related functions.
• Artificial intelligence (AI) enables machines to exhibit human-like intelligence; for example, it can allow machines to use computer hardware and software to simulate certain intelligent behaviors of humans.
  • machine learning methods can be used.
  • machines use training data to learn (or train) a model.
  • the model represents the mapping from input to output.
  • the learned model can be used for reasoning (or prediction), that is, the model can be used to predict the output corresponding to a given input. Among them, the output can also be called the reasoning result (or prediction result).
• Machine learning can include supervised learning, unsupervised learning, and reinforcement learning.
  • Supervised learning uses machine learning algorithms to learn the mapping relationship between sample values and sample labels based on the collected sample values and sample labels, and uses AI models to express the learned mapping relationship.
  • the process of training a machine learning model is the process of learning this mapping relationship.
  • the sample values are input into the model to obtain the model's predicted values, and the model parameters are optimized by calculating the error between the model's predicted values and the sample labels (ideal values).
  • the learned mapping can be used to predict new sample labels.
  • the mapping relationship learned by supervised learning can include linear mapping or nonlinear mapping. According to the type of label, the learning task can be divided into classification task and regression task.
  • Unsupervised learning uses algorithms to discover the inherent patterns of samples based on the collected sample values.
  • One type of algorithm in unsupervised learning uses the samples themselves as supervisory signals, that is, the model learns the mapping relationship from sample to sample, which is called self-supervised learning.
  • the model parameters are optimized by calculating the error between the model's predicted value and the sample itself.
  • Self-supervised learning can be used in applications such as signal compression and decompression recovery.
• Common algorithms include autoencoders and generative adversarial networks.
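The self-supervised objective described above, in which the sample itself serves as the supervisory signal, can be sketched as follows. This is an illustrative toy, not the method of this application: the `encode`/`decode` functions and the data are invented for the example.

```python
def self_supervised_loss(encode, decode, samples):
    """Mean squared reconstruction error: the sample itself is the label,
    so no additional label information is required."""
    total = 0.0
    for x in samples:
        x_hat = decode(encode(x))  # compress, then decompress/recover
        total += sum((a - b) ** 2 for a, b in zip(x_hat, x)) / len(x)
    return total / len(samples)

# Toy "autoencoder": keep the first two of three values (compression),
# reconstruct the third as the mean of the kept values.
encode = lambda x: x[:2]
decode = lambda z: z + [sum(z) / len(z)]
err = self_supervised_loss(encode, decode, [[1.0, 1.0, 1.0], [2.0, 0.0, 3.0]])
```

In a real autoencoder, `encode` and `decode` would be trained neural networks optimized to drive this reconstruction error down.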
• Reinforcement learning is a type of algorithm that learns problem-solving strategies by interacting with the environment. Unlike supervised and unsupervised learning, reinforcement learning problems do not have clear "correct" action label data.
  • the algorithm needs to interact with the environment to obtain reward signals from the environment, and then adjust the decision-making actions to obtain a larger reward signal value. For example, in downlink power control, the reinforcement learning model adjusts the downlink transmission power of each user according to the total system throughput fed back by the wireless network, and then expects to obtain a higher system throughput.
  • the goal of reinforcement learning is also to learn the mapping relationship between the state of the environment and the better (e.g., optimal) decision action.
• The network cannot be optimized by calculating the error between the action and the "correct action"; reinforcement learning training is achieved through iterative interaction with the environment.
  • Neural network is a specific model in machine learning technology. According to the universal approximation theorem, neural network can theoretically approximate any continuous function, so that neural network has the ability to learn any mapping.
  • Traditional communication systems require rich expert knowledge to design communication modules, while deep learning communication systems based on neural networks can automatically discover implicit pattern structures from a large number of data sets, establish mapping relationships between data, and obtain performance that is superior to traditional modeling methods.
  • each neuron performs a weighted sum operation on its input values and outputs the operation result through an activation function.
• FIG. 1d is a schematic diagram of a neuron structure, in which the neuron receives inputs x_i.
• w_i is the weight used to weight the input x_i.
• The bias added to the weighted sum of the input values according to the weights is, for example, b.
• The activation function f can take many forms.
• The output of the neuron is: y = f(∑_i w_i·x_i + b).
• b can be a decimal, an integer (e.g., 0, a positive integer or a negative integer), or a complex number, etc.
  • the activation functions of different neurons in a neural network can be the same or different.
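The neuron computation above (a weighted sum of the inputs plus a bias, passed through an activation function) can be sketched as follows; the sigmoid is used here only as one example of an activation function, and the numeric inputs are illustrative:

```python
import math

def neuron_output(x, w, b):
    """Output of a single neuron: the activation function applied to the
    weighted sum of the inputs x (with weights w) plus the bias b."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-s))  # sigmoid activation

y = neuron_output(x=[0.5, -1.0, 2.0], w=[0.4, 0.3, 0.1], b=0.2)
```

Different neurons could swap in a different activation (tanh, ReLU, etc.) without changing the weighted-sum structure.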
  • a neural network generally includes multiple layers, each of which may include one or more neurons.
  • the expressive power of the neural network can be improved, providing a more powerful information extraction and abstract modeling capability for complex systems.
  • the depth of a neural network may refer to the number of layers included in the neural network, and the number of neurons included in each layer may be referred to as the width of the layer.
  • the neural network includes an input layer and an output layer. The input layer of the neural network processes the received input information through neurons, passes the processing results to the output layer, and the output layer obtains the output result of the neural network.
  • the neural network includes an input layer, a hidden layer, and an output layer.
  • the input layer of the neural network processes the received input information through neurons, passes the processing results to the middle hidden layer, the hidden layer calculates the received processing results, obtains the calculation results, and the hidden layer passes the calculation results to the output layer or the next adjacent hidden layer, and finally the output layer obtains the output result of the neural network.
  • a neural network may include one hidden layer, or include multiple hidden layers connected in sequence, without limitation.
  • a neural network is, for example, a deep neural network (DNN).
  • DNNs can include feedforward neural networks (FNN), convolutional neural networks (CNN), and recurrent neural networks (RNN).
  • Figure 1e is a schematic diagram of a FNN network.
  • the characteristic of the FNN network is that the neurons in adjacent layers are fully connected to each other. This characteristic makes FNN usually require a large amount of storage space and leads to high computational complexity.
• CNN is a neural network that is specifically designed to process data with a grid-like structure. For example, time series data (discrete samples on the time axis) and image data (two-dimensional discrete samples) can be considered as data with a grid-like structure.
  • CNN does not use all the input information for calculation at one time, but uses a fixed-size window to intercept part of the information for convolution operation, which greatly reduces the amount of calculation of model parameters.
  • each window can use different convolution kernel operations, which enables CNN to better extract the features of the input data.
  • RNN is a type of DNN network that uses feedback time series information. Its input includes the new input value at the current moment and its own output value at the previous moment. RNN is suitable for obtaining sequence features that are correlated in time, and is particularly suitable for applications such as speech recognition and channel coding.
  • a loss function can be defined.
  • the loss function describes the gap or difference between the output value of the model and the ideal target value.
  • the loss function can be expressed in many forms, and there is no restriction on the specific form of the loss function.
  • the model training process can be regarded as the following process: by adjusting some or all parameters of the model, the value of the loss function is less than the threshold value or meets the target requirements.
  • Models can also be referred to as AI models, rules or other names.
  • AI models can be considered as specific methods for implementing AI functions.
  • AI models characterize the mapping relationship or function between the input and output of a model.
  • AI functions may include one or more of the following: data collection, model training (or model learning), model information publishing, model inference (or model reasoning, inference, or prediction, etc.), model monitoring or model verification, or reasoning result publishing, etc.
  • AI functions can also be referred to as AI (related) operations, or AI-related functions.
• A fully connected neural network is also called a multilayer perceptron (MLP).
  • an MLP consists of an input layer (left), an output layer (right), and multiple hidden layers (middle).
  • Each layer of the MLP contains several nodes, called neurons. The neurons in two adjacent layers are connected to each other.
• The output h of the neurons in the next layer is the weighted sum of all the neurons x in the previous layer connected to it, passed through the activation function, which can be expressed as: h = f(w·x + b),
  • w is the weight matrix
  • b is the bias vector
  • f is the activation function
• The output of the neural network can be recursively expressed as: y = f_n(w_n·f_{n−1}(w_{n−1}·…f_1(w_1·x + b_1)…+ b_{n−1}) + b_n), where n is the number of layers.
  • a neural network can be understood as a mapping relationship from an input data set to an output data set.
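The recursive layer-by-layer mapping described above can be sketched in plain Python. The tiny 3→2→1 network and its fixed parameters below are invented purely for illustration:

```python
import math

def layer(h, w, b):
    """One fully connected layer: h' = f(w·h + b), with tanh as the activation."""
    return [math.tanh(sum(wij * hj for wij, hj in zip(row, h)) + bi)
            for row, bi in zip(w, b)]

def mlp_forward(x, params):
    """Apply the layer mapping recursively: the output of each layer
    becomes the input of the next layer."""
    h = x
    for w, b in params:
        h = layer(h, w, b)
    return h

# A tiny 3 -> 2 -> 1 network with fixed illustrative parameters.
params = [
    ([[0.2, -0.5, 0.1], [0.7, 0.0, -0.3]], [0.1, -0.2]),  # layer 1: w is 2x3, b has 2 entries
    ([[0.6, -0.4]], [0.05]),                               # layer 2: w is 1x2, b has 1 entry
]
y = mlp_forward([1.0, 0.5, -1.0], params)
```

Because tanh is bounded, every layer output lies in (−1, 1); the depth and width of the network are controlled entirely by the shape of `params`.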
  • neural networks are randomly initialized, and the process of obtaining this mapping relationship from random w and b using existing data is called neural network training.
  • the specific method of training is to use a loss function to evaluate the output results of the neural network.
• The error can be back-propagated, and the neural network parameters (including w and b) can be iteratively optimized by the gradient descent method until the loss function reaches a minimum value, that is, the "better point (e.g., optimal point)" in FIG. 2b.
• The neural network parameters corresponding to the "better point (e.g., optimal point)" in FIG. 2b can be used as the neural network parameters in the trained AI model information.
• The gradient descent process can be expressed as: θ ← θ − η·(∂L/∂θ),
• where θ is the parameter to be optimized (including w and b),
• L is the loss function,
• and η is the learning rate, which controls the step size of gradient descent.
  • the back-propagation process utilizes the chain rule for partial derivatives.
• The gradient of the previous layer's parameters can be recursively calculated from the gradient of the next layer's parameters, which can be expressed as: ∂L/∂s_i = f′(s_i)·∑_j w_ij·(∂L/∂s_j),
• where w_ij is the weight connecting node i to node j,
• and s_i is the weighted sum of the inputs on node i.
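The gradient descent update rule above can be illustrated on a one-dimensional loss whose gradient is known in closed form; the quadratic loss here is an arbitrary example chosen for the sketch, not from this application:

```python
def gradient_descent(grad, theta0, lr=0.1, steps=100):
    """Iteratively apply theta <- theta - lr * dL/dtheta, where lr is the
    learning rate controlling the step size of gradient descent."""
    theta = theta0
    for _ in range(steps):
        theta = theta - lr * grad(theta)
    return theta

# Minimise L(theta) = (theta - 3)^2, whose gradient is 2 * (theta - 3);
# the iterates converge toward the minimum at theta = 3.
theta_star = gradient_descent(lambda t: 2.0 * (t - 3.0), theta0=0.0)
```

In a real neural network, `grad` would be supplied by back-propagation over all the weights and biases rather than by a closed-form derivative.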
• The federated learning (FL) architecture with a central server is the most widely used training architecture in the current FL field.
  • the FedAvg algorithm is the basic algorithm of FL. Its algorithm flow is as follows:
• The center initializes the model to be trained, w^(0), and broadcasts it to all client devices.
• The central node aggregates and collects the local training results from all (or some) clients. Assume that the set of clients that upload local models in round t is S_t. The center uses the number of training samples n_k of each client k as the weight to perform weighted averaging and obtain a new global model; the specific update rule is: w^(t+1) = ∑_{k∈S_t} (n_k / ∑_{j∈S_t} n_j)·w_k^(t), where w_k^(t) is the local model reported by client k in round t. The center then broadcasts the latest version of the global model w^(t+1) to all client devices for a new round of training.
• In addition to reporting local models, the clients can also report the local gradients obtained from training; the central node then averages the local gradients and updates the global model along the direction of the average gradient.
  • the data set exists in the distributed nodes, that is, the distributed nodes collect local data sets, perform local training, and report the local results (models or gradients) obtained from the training to the central node.
  • the central node itself does not have a data set, and is only responsible for fusing the training results of the distributed nodes to obtain the global model and send it to the distributed nodes.
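The sample-count-weighted aggregation step of FedAvg described above can be sketched as follows; plain Python lists stand in for model parameter vectors, and the function name and data are illustrative:

```python
def fedavg(local_models, sample_counts):
    """Sample-count-weighted average of client model parameters (FedAvg).

    local_models: one parameter vector (list of floats) per client.
    sample_counts: number of local training samples n_k for each client.
    """
    total = sum(sample_counts)
    dim = len(local_models[0])
    return [sum(n_k * m[d] for m, n_k in zip(local_models, sample_counts)) / total
            for d in range(dim)]

# Two clients: the client with more data pulls the global model toward its own.
global_model = fedavg([[1.0, 0.0], [3.0, 2.0]], sample_counts=[1, 3])
```

The same helper would be called once per round on whichever subset of clients uploaded models in that round.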
• Different from federated learning, there is another distributed learning architecture: decentralized learning.
• The design goal f(x) of a decentralized learning system is generally the mean of the goals f_i(x) of each node, that is, f(x) = (1/n)·∑_{i=1}^{n} f_i(x), where n is the number of distributed nodes and x is the parameter to be optimized. In machine learning, x is the parameter of the machine learning (such as neural network) model.
• Each node uses local data and the local target f_i(x) to calculate the local gradient g_i = ∇f_i(x), which is then sent to the neighboring nodes it can communicate with. After any node i receives the gradient information sent by its neighbors, it can update the parameter x of the local model according to the following formula: x ← x − α_k·(1/|N_i|)·∑_{j∈N_i} g_j,
• where α_k represents the tuning coefficient,
• N_i is the set of neighbor nodes of node i,
• and |N_i| represents the number of elements in the set of neighbor nodes of node i, that is, the number of neighbor nodes of node i.
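The neighbor-gradient update above can be sketched as follows; the two-dimensional parameter, the gradient values, and the function name are invented for the example:

```python
def decentralized_step(x, neighbor_grads, alpha=0.1):
    """Update the local parameter x using the mean of the gradients
    received from neighbor nodes: x <- x - alpha * mean(neighbor_grads)."""
    n = len(neighbor_grads)  # |N_i|, the number of neighbor nodes
    mean_grad = [sum(g[d] for g in neighbor_grads) / n for d in range(len(x))]
    return [xd - alpha * gd for xd, gd in zip(x, mean_grad)]

# A node with two neighbors (|N_i| = 2) and tuning coefficient alpha = 0.1.
x_new = decentralized_step([1.0, -1.0], neighbor_grads=[[2.0, 0.0], [4.0, 2.0]])
```

Each node would run this step independently after every exchange of gradients with its neighbors; no central node is involved.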
  • the technical solution provided in the present application can be applied to a wireless communication system (for example, the system shown in FIG. 1a , 1b , or 1c ).
• The multiple-input multiple-output (MIMO) technology can be used to meet high-speed transmission requirements.
  • the network device can use the channel information obtained in the channel measurement process to calculate the precoding information between the network device and the terminal device, and subsequently the network device and the terminal device can achieve MIMO communication through the precoding information.
  • the downlink reference signal sent by the network device may include a channel state information reference signal (CSI-RS), and the network device may receive feedback of the CSI-RS, so that the network device can obtain channel information based on the feedback of the CSI-RS.
  • CSI-RS is sent through the port of the network device, that is, the overhead of the CSI-RS is related to the number of ports of the network device.
  • the number of ports of network devices may gradually increase, which will increase the overhead of reference signals (such as CSI-RS) used for channel measurement and occupy more transmission resources, thereby increasing the power consumption of terminal devices.
  • CSI describes some channel characteristics about a wireless link. It can characterize the joint characteristics of path loss, scattering, diffraction, fading, shadowing, etc. in the process of signal propagation from the transmitter to its corresponding receiver. Therefore, obtaining accurate CSI plays a vital role in the communication performance of mobile users in wireless communication systems. More specifically, CSI highly determines the physical layer parameters and schemes deployed for radio communications in wireless communication systems. Obtaining CSI is a crucial issue in wireless communications, which determines whether technologies such as MIMO can release their potential transmission gain. In the new generation of wireless communication systems, with the use of MIMO technology, wider communication frequency bands and support for high mobility, timely and accurate acquisition of CSI faces huge challenges.
  • the spatiotemporal correlation of the channel can be utilized through a neural network processing method to reduce the reference signal overhead, and some implementation examples are provided below for description.
• Implementation method A: The network device can implement channel estimation with channel mapping enhancement through space-frequency correlation based on neural network processing.
  • the channels on some antennas and subcarriers are estimated, and the channels on all antennas and subcarriers are obtained through interpolation and extension, so as to reduce the required pilot overhead.
  • This method essentially utilizes the spatial-frequency domain correlation of the channel.
  • the network device can receive the measurement results of the reference signal by the terminal device to obtain channel information on some antennas and/or some subcarriers. After that, the network device can obtain channel information on other antennas and/or other subcarriers through neural network processing (such as interpolation processing, extension processing, etc.).
• In this way, the channels that need to occupy pilot resources for estimation are reduced from the channels on all antennas and subcarriers to those on some antennas and/or subcarriers, which can reduce the overhead of the reference signal. Due to the high nonlinearity of the space-frequency correlation, the deep learning-based method is significantly ahead in performance.
• The space-frequency correlation of the channel is initially utilized, but the channel acquisition result of this solution (i.e., the processing result obtained by the neural network) depends on the channel information of some antennas and/or some subcarriers obtained by measuring the reference signal. If the path information of the wireless channel is too complex or the noise interference in the estimation is large, this solution will face serious performance degradation.
• Implementation method B: Network equipment can achieve channel prediction through time domain correlation based on neural network processing.
  • the user channel is calculated based on the current user's location tag by learning the relationship between the user's location coordinates and the channel.
• Channel prediction based on sequence relationships: based on the user's past channel states, it is assumed that the user's movement obeys a certain motion model in order to "predict" the user's current channel information.
• For example, the network device sends a reference signal in a historical time slot and receives the terminal device's measurement results of the reference signal to obtain the channel information of the historical time slot. After that, the network device can use the channel state information of past time slots, together with neural network processing, to infer the channel state of the current or future time slots. With this technology, it is no longer necessary to spend communication resources to obtain the channel in the current time slot, which greatly reduces the cost of channel acquisition. Since the spatiotemporal relationship that needs to be modeled for prediction is highly complex, the channel prediction method is also inseparable from the support of deep learning technology.
  • the scheme is often not accurate enough in inferring and utilizing the time correlation of the channel, and the channel acquisition quality is often very poor.
  • the quality of channel prediction will continue to deteriorate, making it difficult to meet the needs of long-term communication, and therefore it is still difficult to be actually deployed and applied.
• In implementation method A, by mining and utilizing the space-frequency correlation within MIMO-OFDM, the pilot overhead required for channel estimation in the current time slot can be reduced to a certain extent. However, it ignores the potential reference information in the channels of several past time slots, and its channel acquisition result depends entirely on the quality of some sub-channels obtained by pilot estimation; therefore, the application of this channel acquisition method is often very limited.
• In implementation method B, it is undoubtedly valuable to mine and utilize historical channel information.
• However, pure prediction abandons the communication information of the current time slot, making the channel results obtained based on implementation method B vulnerable to the randomness of the communication in this time slot and severely degraded.
  • the information obtained in the communication process itself is used as a label, and no additional label information is required, but the utilization of time domain correlation is ignored, making the saving of communication resources and the utilization of channel correlation not thorough enough. It can be seen that the above implementation methods A and B still have some unresolved issues.
  • FIG3 is a schematic diagram of an implementation of the communication method provided in the present application.
  • the method includes the following steps.
  • the method executed by the network device can also be executed by a module of the network device (such as a chip, a chip system, or a processor), and can also be implemented by a logical node, a logical module, or software that can implement all or part of the network device.
  • the method executed by the terminal device can also be executed by a module of the terminal device (such as a chip, a chip system, or a processor), and can also be implemented by a logical node, a logical module, or software that can implement all or part of the terminal device functions.
  • the method shown in FIG3 includes steps S301 to S303 , and each step will be described below.
• S301: The network device sends a first reference signal, and correspondingly, the terminal device receives the first reference signal.
• S302: The terminal device sends first channel information, and correspondingly, the network device receives the first channel information, wherein the first channel information is determined based on the first reference signal, the channel dimension represented by the first channel information is [T1, R1, F1], and T1, R1, and F1 are all positive numbers.
• S303: The network device processes the first channel information and Q second channel information based on the first neural network to obtain third channel information.
• The Q second channel information are determined by Q second reference signals, where Q is a positive integer; the position of the time domain resources carrying the Q second reference signals is located before the position of the time domain resources carrying the first reference signal; the channel dimension represented by the third channel information is [T, R, F], where T, R, and F are all positive numbers; and the parameters in the channel dimension represented by the first channel information and the parameters in the channel dimension represented by the third channel information satisfy at least one of the following: T is greater than T1, R is greater than R1, and F is greater than F1.
  • the channel dimensions represented by the channel information may include multiple dimensions such as dimensions in the spatial domain, dimensions in the frequency domain, and dimensions in the time domain.
  • the dimensions in the spatial domain may include dimensions of the antenna port of the signal transmitting end (i.e., transmitting antenna dimensions) and dimensions of the antenna port of the signal receiving end (i.e., receiving antenna dimensions).
• The channel dimensions represented by the channel information include three channel dimensions, taken as an example for explanation: one channel dimension is represented as T or T1 (or other parameters represented by T that may appear later, such as T2/T3/T4/T5, etc.), another is represented as R or R1 (or other parameters represented by R that may appear later, such as R2/R3/R4/R5, etc.), and another is represented as F or F1 (or other parameters represented by F that may appear later, such as F2/F3/F4/F5, etc.).
  • the transmission process of the reference signal in space includes the process of the antenna port of the signal transmitting end sending the reference signal and the process of the antenna port of the signal receiving end receiving the reference signal. That is, the dimension of the channel information in the spatial domain can correspond to the transceiver port of the reference signal.
• The dimension of the channel information in the spatial domain can be expressed as [T, R] in [T, R, F], where T (or T1, or other parameters represented by T that may appear later, such as T2/T3/T4/T5, etc.) represents the antenna port of the signal transmitting end, and R (or R1, or other parameters represented by R that may appear later, such as R2/R3/R4/R5, etc.) represents the antenna port of the signal receiving end.
• The dimension of the channel information in the frequency domain may correspond to the frequency domain resource carrying the reference signal.
• For example, the dimension in the frequency domain can be expressed as [F] in [T, R, F], where F (or F1, or other parameters represented by F that may appear later, such as F2/F3/F4/F5, etc.) represents the frequency domain resource carrying the reference signal (e.g., one or more frequency domain units contained in the frequency domain resource).
  • the position of the time domain resources carrying the Q second reference signals is located before the position of the time domain resources carrying the first reference signal, which can be understood as at least one of the following: the Q second reference signals are sent by the network device before the network device sends the first reference signal, the Q second reference signals are received by the terminal device before the terminal device receives the first reference signal, and the index number of the time domain resources carrying the Q second reference signals is smaller than the index number of the time domain resources carrying the first reference signal.
  • the index number of the time domain resource can be any one of the following: the index number of the system frame, the index number of the subframe, the index number of the time slot, the index number of the orthogonal frequency division multiplexing (OFDM) symbol, etc.
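As a purely hypothetical illustration (the tensor shapes, the uniform subsampling pattern, and all numeric values below are assumptions, not part of the described method), channel information of dimension [T, R, F] can be pictured as a complex-valued tensor indexed by transmit port, receive port, and frequency domain unit, from which a lower-dimensional measurement is a sub-sampled view:

```python
import numpy as np

# Hypothetical channel tensor with dimension [T, R, F]:
# T transmit antenna ports, R receive antenna ports, F frequency domain units.
T, R, F = 32, 4, 48
rng = np.random.default_rng(0)
H = (rng.standard_normal((T, R, F)) + 1j * rng.standard_normal((T, R, F))) / np.sqrt(2)

# A lower-dimensional measurement [T1, R1, F1] can be obtained by sampling
# a subset of ports and frequency domain units (uniform subsampling here).
T1, R1, F1 = 8, 4, 12
H_low = H[::T // T1, :, ::F // F1]   # R1 == R in this example

print(H.shape, H_low.shape)          # (32, 4, 48) (8, 4, 12)
```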
  • the first neural network is used to process lower dimensional channel information (such as first channel information) to obtain higher dimensional channel information (such as third channel information). Accordingly, the first neural network can be understood as a neural network that performs channel prediction.
  • channel prediction can be replaced by other terms, such as channel estimation, channel pre-estimation, channel simulation recovery, channel reconstruction, channel acquisition, channel inference, etc.
  • neural network can be replaced by other terms, such as AI network, machine learning network, AI learning network, AI neural network, etc.
  • the channel dimension represented by at least one of the Q second channel information is [T 2 , R 2 , F 2 ], T 2 , R 2 , and F 2 are all positive numbers; the parameters [T, R, F] of the channel dimension represented by the third channel information and the parameters of the channel dimension represented by the second channel information satisfy at least one of the following: T is greater than T 2 , R is greater than R 2 , and F is greater than F 2 .
  • the Q second channel information are determined by Q second reference signals, wherein the channel dimension represented by at least one of the Q second channel information satisfies at least one of the above items, that is, the channel dimension represented by the at least one second channel information is lower than the channel dimension represented by the third channel information.
  • the Q second channel information is one of the input data of the first neural network.
  • the parameters in the channel dimension represented by the first channel information and the parameters in the channel dimension represented by the second channel information satisfy at least one of the following: T 1 and T 2 have the same value, R 1 and R 2 have the same value, and F 1 and F 2 have the same value.
  • the position of the time domain resource carrying the Q second reference signals is located before the position of the time domain resource carrying the first reference signal, that is, the Q second channel information is the historical channel information of the first channel information.
  • the parameters in the two channel information satisfy at least one of the above items, which enables the network device to obtain the two channel information based on at least one identical dimension, thereby reducing the implementation complexity.
  • the network device may receive the first channel information determined based on the first reference signal in step S302. Thereafter, in step S303, the network device may process the first channel information and Q second channel information based on the first neural network to obtain the third channel information.
  • the channel dimension represented by the first channel information is [T 1 , R 1 , F 1 ]
  • the channel dimension represented by the third channel information is [T, R, F], satisfying at least one of the following: T is greater than the T 1 , R is greater than R 1 , and F is greater than F 1 .
  • the network device can obtain the third channel information of a higher channel dimension based on the first channel information of a lower channel dimension, and the first channel information of the lower channel dimension is determined by the first reference signal.
  • the network device can obtain the channel information of a higher dimension through the processing of the neural network based on the channel information of the lower channel dimension determined by the reference signal, which can reduce the reference signal overhead, thereby improving resource utilization and reducing the power consumption of the terminal device.
  • the input data of the first neural network may include the first channel information of lower dimension and Q second channel information, and the position of the time domain resource carrying the Q second reference signals is located before the position of the time domain resource carrying the first reference signal, that is, the Q second channel information is the historical channel information of the first channel information.
  • the first neural network can utilize the time domain correlation between the Q second channel information and the first channel information, and can also utilize the spatial domain correlation and/or frequency domain correlation between the first channel information and the third channel information, thereby improving the accuracy of the neural network in obtaining the channel information of higher dimension based on the channel information of lower channel dimension.
  • the input of the first neural network includes first channel information and second channel information
  • the output of the first neural network includes third channel information.
  • the first channel information is channel information of a lower dimension and the third channel information is channel information of a higher dimension.
  • the second channel information may also be channel information of a lower dimension and the third channel information may be channel information of a higher dimension.
  • at least one channel dimension of the second channel information may be the same as at least one channel dimension of the first channel information to reduce implementation complexity.
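The input/output relationship described above can be sketched as follows. This is only an interface-level illustration: the shapes are assumed, and an untrained random linear map stands in for the first neural network purely to show the data flow from the lower-dimensional first channel information plus Q second channel information to the higher-dimensional third channel information:

```python
import numpy as np

# Sketch of the first neural network's interface (assumed shapes; a random
# linear map stands in for the trained network purely to show data flow).
rng = np.random.default_rng(1)
T, R, F = 8, 2, 16        # higher channel dimension [T, R, F]
T1, R1, F1 = 2, 2, 4      # lower channel dimension [T1, R1, F1]
Q = 3                     # number of historical (second) channel information

# Here each second channel information shares the lower dimension [T1, R1, F1]
# with the first channel information, matching the complexity-reduction remark.
in_dim = (Q + 1) * T1 * R1 * F1   # first + Q second channel information
out_dim = T * R * F               # third channel information
W = rng.standard_normal((out_dim, in_dim)) * 0.01

def first_neural_network(h_first, h_second_list):
    """Map lower-dimensional h_first plus Q historical inputs to [T, R, F]."""
    x = np.concatenate([h_first.ravel()] + [h.ravel() for h in h_second_list])
    return (W @ x).reshape(T, R, F)   # third channel information

h_first = rng.standard_normal((T1, R1, F1))
h_hist = [rng.standard_normal((T1, R1, F1)) for _ in range(Q)]
h_third = first_neural_network(h_first, h_hist)
print(h_third.shape)   # (8, 2, 16)
```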
  • the respective advantages of implementation method A and implementation method B can be combined, and the communication reference signal of the current time slot can be appropriately introduced to resist the randomness of the current time slot, and the information in the channel of the past time slot can be used to realize a channel acquisition scheme that occupies less communication resources and can resist communication randomness.
  • the first neural network used to determine the third channel information includes a first sub-neural network and a second sub-neural network; the network device processes the first channel information and Q second channel information based on the first neural network to obtain the third channel information, including: the network device processes the first channel information based on the first sub-neural network to obtain fourth channel information; wherein the channel dimension represented by the fourth channel information is [T, R, F]; the network device processes the fourth channel information and the Q second channel information based on the second sub-neural network to obtain the third channel information.
  • the network device can process the first channel information based on the first sub-neural network to obtain the fourth channel information, and then process the fourth channel information and the Q second channel information based on the second sub-neural network to obtain the third channel information.
  • the network device can obtain the fourth channel information of higher dimension by processing the first sub-neural network based on the first channel information of lower dimension obtained by the reference signal, and then further process the fourth channel information and the Q second channel information obtained historically based on the second sub-neural network to obtain the third channel information.
  • the first neural network includes a first sub-neural network and a second sub-neural network, wherein the input of the first sub-neural network includes first channel information, the output of the first sub-neural network includes fourth channel information, and the input of the second sub-neural network includes Q second channel information and the fourth channel information.
  • the first channel information is a lower dimensional channel information and the third channel information is a higher dimensional channel information.
  • the second channel information may also be channel information of a lower dimension and the third channel information may be channel information of a higher dimension.
  • at least one channel dimension of the second channel information may be the same as at least one channel dimension of the first channel information to reduce implementation complexity.
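A minimal sketch of the two-stage structure, under the assumption that the Q second channel information here already has the full dimension [T, R, F] (as with the historical processing results discussed later) and with untrained random weights standing in for the two sub-neural networks:

```python
import numpy as np

# Two-stage sketch (assumed shapes, random weights in place of trained ones):
# sub-network 1 lifts [T1,R1,F1] -> [T,R,F] (fourth channel information);
# sub-network 2 fuses it with Q historical inputs -> third channel information.
rng = np.random.default_rng(2)
T, R, F = 4, 2, 8
T1, R1, F1 = 2, 2, 2
Q = 2
d = T * R * F

W1 = rng.standard_normal((d, T1 * R1 * F1)) * 0.1   # first sub-neural network
W2 = rng.standard_normal((d, (Q + 1) * d)) * 0.1    # second sub-neural network

def first_sub_network(h_first):
    return (W1 @ h_first.ravel()).reshape(T, R, F)   # fourth channel info

def second_sub_network(h_fourth, h_second_list):
    x = np.concatenate([h.ravel() for h in h_second_list] + [h_fourth.ravel()])
    return (W2 @ x).reshape(T, R, F)                 # third channel info

h_first = rng.standard_normal((T1, R1, F1))
h_hist = [rng.standard_normal((T, R, F)) for _ in range(Q)]
h_fourth = first_sub_network(h_first)
h_third = second_sub_network(h_fourth, h_hist)
print(h_fourth.shape, h_third.shape)
```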
  • in step S303 shown in Figure 3, after the network device processes the first channel information and Q second channel information based on the first neural network to obtain the third channel information, the method further includes: the network device processes the third channel information based on the second neural network to obtain the fifth channel information, and the channel dimension represented by the fifth channel information is [T, R, F]. Specifically, after the network device obtains the third channel information of a higher dimension based on the first neural network, the network device can also process the third channel information based on the second neural network to obtain the fifth channel information of the same channel dimension. Thus, by optimizing the second neural network, the accuracy of the higher dimensional channel information obtained based on the neural network can be further improved.
  • the second neural network is used to process the input channel information (such as the third channel information) to obtain the output channel information (such as the fifth channel information), and compared with the input channel information, the output channel information is closer to the channel information of the transmission channel in the actual space.
  • the second neural network can be a neural network for optimizing the channel information.
  • the deviation between the fifth channel information and the sixth channel information is less than or equal to the deviation between the third channel information and the sixth channel information
  • the channel dimension represented by the sixth channel information is [T, R, F]
  • the sixth channel information is obtained by measuring the reference signal.
  • the sixth channel information is obtained by measuring the reference signal, while the third channel information and the fifth channel information are obtained based on the neural networks (the first neural network and the second neural network, respectively). For this reason, compared with the third channel information and the fifth channel information, the sixth channel information is closer to the channel information of the transmission channel in the actual space.
  • the second neural network is used to optimize the third channel information to obtain the fifth channel information, so that the deviation between the fifth channel information and the sixth channel information is less than or equal to the deviation between the third channel information and the sixth channel information, that is, compared with the third channel information, the fifth channel information is closer to the channel information of the transmission channel in the actual space.
  • Figure 4c is an implementation example of the processing process of the first neural network and the second neural network.
  • the third channel information can also be used as the input of the second neural network, and the fifth channel information is obtained after being processed by the second neural network.
  • Figure 4d is an implementation example of the processing process of the first neural network and the second neural network.
  • the third channel information can also be used as the input of the second neural network, and the fifth channel information is obtained after being processed by the second neural network.
  • the channel dimension of the third channel information and the channel dimension of the fifth channel information may be the same.
  • the first sub-neural network can be a preliminary mapping network
  • the second sub-neural network can be a past-present information interaction network
  • the third neural network can be a detailed representation network.
  • the first channel information of the lower channel dimension can represent the channel information of the t-th time unit (the time unit can be a frame, a subframe, a time slot, a symbol, etc.) as H′ t , that is, the channel information obtained based on a small amount of communication resources (e.g., fewer transmitting port resources, fewer receiving port resources, fewer frequency domain resources, etc.);
  • the Q second channel information can be represented as the channel information on the Q time units before the t-th time unit, that is, H′ t-Q to H′ t-1 ; these can be the second channel information on the Q time units, and the second channel information can be the channel information obtained based on a small amount of communication resources, or it can be the historical processing result of multiple executions of neural network processing (multiple neural network processing can refer to the implementation process of Figure 6a or Figure 6b later).
  • the Q time units may be continuous time units or discontinuous time units, which is not limited here.
  • the time intervals between two adjacent time units may be the same or different, which is not limited here.
  • the scheme shown in Figure 5a can be described as jointly inferring the complete high-dimensional channel information of the current time slot based on the channel information of several past time units and the estimation of the partial channel information of the current time unit (i.e., the t-th time unit), so as to fully utilize the time-space-frequency correlation.
  • the scheme shown in Figure 5a can include the following three steps.
  • in the first step, the sparse channel measurement result H′ t at the current moment is converted, after passing through the preliminary mapping network, into information with the complete channel information dimension; at this step, spatial-frequency correlation is exploited.
  • in the second step, the past-present information interaction network extracts temporal correlation using an attention-based network (such as a Transformer structure) and outputs intermediate information.
  • in the third step, after the detailed representation network, the final channel information is obtained.
  • the implementation examples of the functions and inputs and outputs of the three network modules shown in Figure 5a are as described in Figures 5b to 5c below.
  • the preliminary mapping network is recorded as module 1
  • the past-present information interaction network is recorded as module 2
  • the detailed representation network is recorded as module 3.
  • the module 1 can expand the sparse channel measurement to the same dimension as the complete channel information through neural network processing.
  • the input of this module 1 is the sparse channel measurement result H′ t at the current moment.
  • the output of this module 1 is channel information with the complete channel information dimension.
  • the module 1 may adopt one or more network architectures such as a multi-layer perceptron hybrid (MLPmixer) network structure, an MLP-based network structure, a CNN-based network structure, and a Transformer-based network structure.
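A sketch of module 1 as a one-hidden-layer MLP (the hidden size, the ReLU activation, and all shapes are assumptions; the text only requires some MLPmixer/MLP/CNN/Transformer-style mapping network):

```python
import numpy as np

# Module 1 sketch: a small MLP (one hidden layer, random untrained weights,
# all sizes assumed) that expands a sparse measurement H'_t of dimension
# [T1, R1, F1] to the complete channel information dimension [T, R, F].
rng = np.random.default_rng(3)
T, R, F = 8, 2, 16
T1, R1, F1 = 2, 2, 4
hidden = 64
W1 = rng.standard_normal((hidden, T1 * R1 * F1)) * 0.1
W2 = rng.standard_normal((T * R * F, hidden)) * 0.1

def module1(h_sparse):
    x = h_sparse.ravel()
    x = np.maximum(W1 @ x, 0.0)         # ReLU hidden layer
    return (W2 @ x).reshape(T, R, F)    # complete channel information dimension

out = module1(rng.standard_normal((T1, R1, F1)))
print(out.shape)   # (8, 2, 16)
```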
  • the example shown in Figure 5c is an implementation example of module 2.
  • the channel at the current moment and the channel at the past moment have the same dimension, that is, the dimension of the complete channel information. At this time, they are input into module 2 as a sequence.
  • the channel at the past moment is at the front of the sequence, and the channel at the current moment is at the end of the sequence.
  • the solution of the present application does not limit the order of the channel information at the past time (ie, the Q second channel information) in the sequence.
  • the order in which they are placed can be arbitrary, i.e., maintaining permutation invariance, and the number of past moments Q is arbitrary.
  • the terminal device moves freely in a small range for 1 hour, and collects channels at 5 moments during this period.
  • if you want to predict the user's channel at the next moment, then due to the large time span and the unknown current position of the terminal device, even knowing the timestamps of the previous 5 moments may not be helpful (or only slightly helpful) for this prediction.
  • the assumption under which time domain correlation brings benefits is that the closer a historical moment is to the current moment, the greater its correlation with the current moment.
  • the first layer of module 2 can be a reshape & fully-connected layer, which first transforms the channel matrix into a vector (assuming that the dimension of the channel information is n (number of antennas) * m (number of carriers), and the dimension of the vector is 2*n*m (real part and imaginary part)), and then passes the vector through a fully connected layer to obtain a vector with a dimension of the embedding size, which can transform the sizes of different inputs into a unified low dimension, facilitating subsequent network processing.
  • this vector can be added (Add) to the position embedding vector (position embedding) (for example, the position embedding vector can include the "past/current embedding (Past/present embedding) vector").
  • the result of the addition (or a backup/copy of that result) can be used as the input of the first normalization layer (Layer Norm); after being processed by the multi-head attention layer (Multi-head attention), the processing result can be used as one input of the second addition processing, and the result of the first addition (or a backup/copy of that result) can also be used as an input of the second addition processing.
  • the processing result of the second addition processing (or a backup/copy of that result) can be used as the input of the second normalization layer; after being processed by the feed-forward layer (Feed Forward), the processing result can be used as one input of the third addition processing, and the processing result of the second addition processing (or a backup/copy of that result) can also be used as an input of the third addition processing. Thereafter, the output of the third addition processing can be used as the input of the third-layer network.
  • the third layer of the network can be represented as (Last output) Fully-connected & Reshape, and the output of the third layer of the network is channel information with the complete channel information dimension.
  • there may be one or more networks in the second layer.
  • different networks can be executed in cascade (or series).
  • the position vector in the above example includes two types, namely, vectors of the past moment and the current moment, that is, the same position embedding vector can be used for multiple past moments.
  • the scheme is basically consistent with the encoder of the Transformer network. After being processed by the multi-head attention layer and the fully connected layer (feed forward), the channel information of the past moment and the current moment is fused and feature-extracted, and finally a vector is output. The vector is processed by the fully connected layer and then reshaped to obtain a vector with the complete channel information dimension.
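The attention-based fusion of module 2 can be sketched as below. This is a deliberately reduced version: a single attention head, no layer normalization, and no feed-forward sub-layer (all of which the text describes), with untrained random weights; it only illustrates the sequence layout (past channels first, current channel last), the reshape & fully-connected embedding of real and imaginary parts, the position embeddings, and the last-output fully-connected & reshape step. All sizes are assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Module 2 sketch: single-head attention over a sequence of Q past channels
# plus the current channel (random untrained weights; the text describes a
# multi-head Transformer-style encoder with normalization and feed-forward).
rng = np.random.default_rng(4)
n, m = 4, 8                  # antennas x carriers
d_in = 2 * n * m             # real and imaginary parts, flattened
emb = 32                     # embedding size
Q = 3

W_in = rng.standard_normal((emb, d_in)) * 0.1      # reshape & fully-connected
pos_past = rng.standard_normal(emb) * 0.1          # past position embedding
pos_now = rng.standard_normal(emb) * 0.1           # present position embedding
Wq, Wk, Wv = (rng.standard_normal((emb, emb)) * 0.1 for _ in range(3))
W_out = rng.standard_normal((d_in, emb)) * 0.1     # fully-connected & reshape

def module2(past_channels, current_channel):
    seq = past_channels + [current_channel]        # past first, current last
    X = np.stack([W_in @ np.concatenate([h.real.ravel(), h.imag.ravel()])
                  for h in seq])                   # (Q+1, emb) token embeddings
    X += np.stack([pos_past] * len(past_channels) + [pos_now])
    Qm, Km, Vm = X @ Wq.T, X @ Wk.T, X @ Wv.T
    A = softmax(Qm @ Km.T / np.sqrt(emb))          # attention weights
    Y = A @ Vm + X                                 # attention + residual
    y = W_out @ Y[-1]                              # last output -> FC & reshape
    return (y[:n * m] + 1j * y[n * m:]).reshape(n, m)

hist = [rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
        for _ in range(Q)]
cur = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
print(module2(hist, cur).shape)   # (4, 8)
```

Note that because all past moments share one position embedding, the output of this sketch does not change when the past channels are reordered, which matches the permutation-invariance remark above.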
  • in order to further improve the accuracy, the output of module 2 can be processed by module 3, which outputs the final channel information of the same dimension. Exemplarily, module 3 may adopt one or more network architectures such as an MLPmixer network structure, an MLP-based network structure, a CNN-based network structure, and a Transformer-based network structure.
  • module 3 is optional.
  • module 3, i.e., the detail representation network, can also be the second neural network in Figure 4c/Figure 4d above.
  • the purpose of module 3 is to further improve the accuracy of channel acquisition, and its input and output dimensions are consistent, so only module 1 and module 2 may be retained (i.e., module 3 may be omitted).
  • when the accuracy of channel acquisition is already high (e.g., the error is less than a preset threshold), or the communication system has strict requirements on computational complexity (for example, the computing power of IoT devices is weak, or the number of operations is limited when the terminal device is low on power), the method of this embodiment can be used to reduce computational complexity and improve channel acquisition efficiency.
  • the method may further include: the network device sends a third reference signal; the network device receives channel information corresponding to the third reference signal, the channel dimension represented by the channel information corresponding to the third reference signal is [T 3 , R 3 , F 3 ], T 3 , R 3 , F 3 are all positive numbers; the channel dimension represented by the channel information corresponding to the third reference signal satisfies at least one of the following: T is greater than T 3 , R is greater than R 3 , and F is greater than F 3 ; the network device processes the channel information corresponding to the third reference signal, part or all of the Q second channel information, and the third channel information based on the first neural network to obtain seventh channel information; wherein the channel dimension represented by the seventh channel information is [T, R, F].
  • the network device may also send a third reference signal and receive channel information corresponding to the third reference signal, and subsequently the network device may obtain seventh channel information based on the channel information corresponding to the third reference signal.
  • the channel dimension represented by the seventh channel information is [T, R, F], and the channel dimension represented by the channel information corresponding to the third reference signal satisfies at least one of the following: T is greater than T 3 , R is greater than R 3 , and F is greater than F 3 .
  • the network device may iteratively run the first neural network, based on multiple reference signal transmissions, to obtain higher-dimensional channel information multiple times based on lower-dimensional channel information.
  • the third channel information can also be used as the input of the next first neural network, and the seventh channel information is obtained after the processing of the next first neural network.
  • the input of the next first neural network also includes the channel information corresponding to the third reference signal, that is, by iteratively running the first neural network, based on multiple reference signals (such as the first reference signal and the third reference signal), multiple acquisitions of higher-dimensional channel information based on lower-dimensional channel information are achieved.
  • Figure 6b shows another implementation example of the multiple processing process of the first neural network.
  • the third channel information can also be used as the input of the next first neural network, and the first sub-neural network and the second sub-neural network in the next first neural network are processed to obtain the seventh channel information.
  • the input of the next first neural network also includes the channel information corresponding to the third reference signal, that is, by iteratively running the first neural network, based on multiple reference signals (such as the first reference signal and the third reference signal), multiple acquisitions of higher-dimensional channel information based on lower-dimensional channel information are achieved.
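The iterative use of the first neural network can be sketched as a loop in which each output joins the history consumed by the next run (shapes and the random stand-in network are assumptions):

```python
import numpy as np

# Iterative sketch: the first neural network (a fixed random linear map here,
# shapes assumed) is run once per reference-signal transmission; each output
# is appended to the history used by the next run.
rng = np.random.default_rng(5)
T, R, F = 4, 2, 8
T1, R1, F1 = 2, 2, 2
Q = 2
d = T * R * F
W = rng.standard_normal((d, T1 * R1 * F1 + Q * d)) * 0.1

def first_nn(h_low, history):
    x = np.concatenate([h_low.ravel()] + [h.ravel() for h in history[-Q:]])
    return (W @ x).reshape(T, R, F)

history = [np.zeros((T, R, F)) for _ in range(Q)]   # initial history
outputs = []
for _ in range(3):                                  # e.g. first, third, ... RS
    h_low = rng.standard_normal((T1, R1, F1))       # new low-dim measurement
    h_high = first_nn(h_low, history)               # third / seventh / ... info
    history.append(h_high)                          # output feeds the next run
    outputs.append(h_high)
print(len(outputs), outputs[-1].shape)
```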
  • the channel dimension represented by the first channel information and the channel dimension represented by the channel information corresponding to the third reference signal satisfy at least one of the following: T 1 and T 3 have the same value, R 1 and R 3 have the same value, and F 1 and F 3 have the same value.
  • the channel dimensions of different channel information obtained based on different reference signals can satisfy at least one of the above items, that is, the different channel information includes at least one same dimension, which can reduce the implementation complexity.
  • the neural network (such as the first neural network, the second neural network, etc.) used by the network device can realize the process of determining the channel information of a higher dimension with high accuracy.
  • ideally, if the accuracy of the prediction performed by the neural network is very high, the channel information determined by the neural network is the same as the channel information of the transmission channel in the actual space.
  • the performance of the neural network can also be detected and/or optimized.
  • Implementation method 1: the network device detects and/or optimizes the performance of the neural network based on the neural network deployed on the terminal device.
  • the method shown in Figure 3 may also include: the network device sends a fourth reference signal; the network device receives first information, the first information being used to indicate a deviation between channel information corresponding to the fourth reference signal and eighth channel information; wherein the channel dimension represented by the eighth channel information is [T, R, F], and the eighth channel information is obtained by processing the channel information corresponding to the fourth reference signal and P ninth channel information based on the third neural network, the P ninth channel information is determined by P reference signals, and the position of the time domain resources carrying the P reference signals is located before the position of the time domain resources carrying the fourth reference signal, and P is a positive integer.
  • the network device may also send a fourth reference signal, and the eighth channel information is channel information obtained by the receiver (i.e., the terminal device) of the fourth reference signal processing the channel information corresponding to the fourth reference signal based on the third neural network.
  • the network device may also receive first information indicating a deviation between the channel information corresponding to the fourth reference signal and the eighth channel information, so that the network device can determine the performance of the channel prediction of the third neural network based on the first information.
  • the network device can determine the performance of the channel prediction of the neural network through the first information. Since the transmission overhead of the first information is less than the transmission overhead of the channel information corresponding to the fourth reference signal, the transmission overhead can be reduced.
  • the first information is used to indicate the deviation between the channel information corresponding to the fourth reference signal and the eighth channel information.
  • the eighth channel information is channel information obtained by processing based on the third neural network. Accordingly, the eighth channel information can be understood as the result obtained by the third neural network performing channel prediction, and the channel information corresponding to the fourth reference signal is the channel information of the transmission channel in the actual space. For this reason, the deviation indicated by the first information can be understood as a performance test result of the channel prediction performed by the third neural network.
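The text does not fix how the deviation is quantified; one common choice (an assumption here) is the normalized mean squared error between the measured channel information and the neural network output:

```python
import numpy as np

# Hypothetical deviation metric for the first information: normalized mean
# squared error (NMSE) between measured and predicted channel information.
def nmse(h_measured, h_predicted):
    err = np.abs(h_measured - h_predicted) ** 2
    return err.sum() / (np.abs(h_measured) ** 2).sum()

rng = np.random.default_rng(6)
h_meas = rng.standard_normal((4, 2, 8)) + 1j * rng.standard_normal((4, 2, 8))
h_pred = h_meas + 0.1 * (rng.standard_normal((4, 2, 8))
                         + 1j * rng.standard_normal((4, 2, 8)))
dev = nmse(h_meas, h_pred)
print(float(dev))
```

Reporting only such a scalar, rather than the full channel information corresponding to the fourth reference signal, is consistent with the transmission-overhead reduction noted below.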
  • the network structure and/or network parameters of the first neural network deployed on the network device and the third neural network deployed on the terminal device are the same.
  • the implementation process of the third neural network deployed in the terminal device can refer to the implementation process of the first neural network mentioned above, including but not limited to the examples shown in Figures 4a to 4d, the examples shown in Figures 5a to 5d, and the examples shown in Figures 6a to 6b.
  • the channel information obtained historically and the channel information of lower dimensions can be used as the input of the third neural network, and after being processed by the third neural network, the third neural network can output the channel information of higher dimensions.
  • the channel dimension represented by the channel information corresponding to the fourth reference signal is [T 4 , R 4 , F 4 ], where T 4 , R 4 , and F 4 are all positive numbers;
  • the channel dimension represented by the channel information corresponding to the fourth reference signal and the channel dimension represented by the channel information corresponding to the first reference signal (i.e., the first channel information) satisfy at least one of the following: T is greater than or equal to T 4 and T 4 is greater than T 1 , R is greater than or equal to R 4 and R 4 is greater than R 1 , and F is greater than or equal to F 4 and F 4 is greater than F 1 .
  • the channel dimension represented by the channel information corresponding to the fourth reference signal and the channel dimension represented by the channel information corresponding to the first reference signal satisfy at least one of the above items, that is, the channel information corresponding to the fourth reference signal is channel information of higher dimension than the channel information corresponding to the first reference signal (i.e., the fourth reference signal is a reference signal of higher density than the first reference signal), so as to enable the terminal device to detect the channel prediction performance of the third neural network through the reference signal of higher density.
  • the channel information corresponding to the fourth reference signal is channel information of lower dimension than the eighth channel information, which can reduce the overhead in the detection process of the channel prediction performance of the neural network.
  • the method may further include: the network device sends second information, where the second information is used to indicate whether to adjust the input parameters of the third neural network; wherein the second information is determined based on the first information.
  • the network device may also send second information for indicating whether to adjust the input parameters of the third neural network, so that the network device can adjust the input parameters of the third neural network deployed on the terminal device based on the performance detection results of the third neural network.
  • the input parameters of the third neural network may include the amount of historical channel information input to the third neural network, the channel dimension of the historical channel information input, the channel dimension of channel information with lower channel dimension, etc.
  • the method may further include: the network device sends third information, the third information indicating that the channel information corresponding to the fourth reference signal is to be processed based on the third neural network; wherein the third information is determined based on the first information.
  • the network device may also send third information for instructing the processing of the channel information corresponding to the fourth reference signal based on the third neural network, so that the terminal device can initiate performance detection of the channel information corresponding to the fourth reference signal based on the third information.
  • the network device detects and/or optimizes the performance of the neural network based on a locally deployed neural network (e.g., the first neural network).
  • the method shown in FIG. 3 may further include: the network device sends a fourth reference signal; the network device receives channel information corresponding to the fourth reference signal; the channel dimension represented by the channel information corresponding to the fourth reference signal is [T4, R4, F4], and T4, R4, and F4 are all positive numbers; at least one of the following conditions is satisfied: T is greater than or equal to T4 and T4 is greater than T1, R is greater than or equal to R4 and R4 is greater than R1, and F is greater than or equal to F4 and F4 is greater than F1; the network device processes part or all of the channel information corresponding to the fourth reference signal and the Q second channel information based on the first neural network to obtain tenth channel information; wherein the channel dimension represented by the tenth channel information is [T, R, F]; the network device determines the deviation between the channel information corresponding to the fourth reference signal and the tenth channel information.
  • the network device may also send a fourth reference signal and receive channel information corresponding to the fourth reference signal, and the network device may process the channel information corresponding to the fourth reference signal based on the first neural network to obtain tenth channel information.
  • the network device may also determine the deviation between the channel information corresponding to the fourth reference signal and the tenth channel information, so that the network device can determine the channel prediction performance of the first neural network based on the deviation.
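As an illustrative sketch (not part of the application), the processing flow above can be outlined in Python; `predict_full_channel` is a hypothetical placeholder for the first neural network, and all dimension values are example assumptions:

```python
import numpy as np

# Illustrative dimensions: a low-density measurement of shape [T4, R4, F4]
# and a full-dimension prediction of shape [T, R, F], matching the text's
# condition that T >= T4, R >= R4, F >= F4.
T4, R4, F4 = 2, 2, 4
T, R, F = 4, 4, 8
Q = 3  # number of historical (second) channel information samples

def predict_full_channel(low_dim, history):
    """Hypothetical placeholder for the first neural network: maps the
    low-dimension measurement plus Q historical samples to a [T, R, F]
    tensor. A real implementation would be a trained model."""
    reps = (T // low_dim.shape[0], R // low_dim.shape[1], F // low_dim.shape[2])
    return np.tile(low_dim, reps)  # simple upsampling stand-in

measurement = np.random.randn(T4, R4, F4)            # channel info of the 4th reference signal
history = [np.random.randn(T4, R4, F4) for _ in range(Q)]
tenth = predict_full_channel(measurement, history)   # "tenth channel information"
assert tenth.shape == (T, R, F)
```

The deviation between the measured channel and `tenth` would then be computed, for example as an NMSE, to judge the prediction quality.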
  • the method may also include: the network device sends a fifth reference signal, and the channel dimension represented by the channel information corresponding to the fifth reference signal is determined based on the deviation between the channel information corresponding to the fourth reference signal and the tenth channel information.
  • the network device may also send the fifth reference signal based on the deviation between the channel information corresponding to the fourth reference signal and the tenth channel information, that is, the network device may update or adjust the sending of the reference signal based on the performance of the channel prediction of the first neural network.
  • the channel dimension represented by the channel information corresponding to the fifth reference signal is [T5, R5, F5], and T5, R5, and F5 are all positive numbers; the channel dimension represented by the channel information corresponding to the fifth reference signal is determined based on the deviation between the channel information corresponding to the fourth reference signal and the tenth channel information, including: when the deviation indicates that the channel quality corresponding to the tenth channel information is better, at least one of the following is satisfied: T5 is less than or equal to T4, R5 is less than or equal to R4, and F5 is less than or equal to F4; when the deviation indicates that the channel quality corresponding to the tenth channel information is poor, at least one of the following is satisfied: T5 is greater than or equal to T4, R5 is greater than or equal to R4, and F5 is greater than or equal to F4.
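The density selection rule above can be sketched as follows. The function name `next_pilot_density`, the scalar `deviation`, and the `threshold` parameter are illustrative assumptions; a real system would derive the decision from the channel information deviation rather than a single scalar:

```python
def next_pilot_density(t4, r4, f4, deviation, threshold):
    """Sketch of the rule above: good prediction quality -> same or lower
    pilot density for the fifth reference signal; poor quality -> same or
    higher density. `threshold` is an assumed design parameter."""
    if deviation <= threshold:           # channel quality "better"
        return (t4, r4, max(1, f4 - 1))  # e.g. reduce frequency-domain density
    return (t4, r4, f4 + 1)              # e.g. increase frequency-domain density

assert next_pilot_density(2, 2, 4, deviation=0.01, threshold=0.05) == (2, 2, 3)
assert next_pilot_density(2, 2, 4, deviation=0.10, threshold=0.05) == (2, 2, 5)
```

Here only the frequency-domain density is varied; per the text, any subset of the time, space, and frequency dimensions could be adjusted instead.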
  • the network device determines that the channel prediction performance of the first neural network is better. For this reason, after sending the fourth reference signal, the network device may send a fifth reference signal with the same or a smaller pilot density (in this application, "pilot" can be replaced by "reference signal") to reduce overhead.
  • the network device determines that the channel prediction performance of the first neural network is poor.
  • the network device may send a fifth reference signal with the same or larger pilot density to facilitate the network device to obtain channel information with a higher channel dimension, and optimize the input parameters of the first neural network based on the channel information with a higher channel dimension to improve the channel prediction performance of the first neural network.
  • the method may also include: the network device adjusts the input parameters of the first neural network based on the deviation between the channel information corresponding to the fourth reference signal and the tenth channel information.
  • the network device can also adjust the input parameters of the first neural network based on the deviation, and can optimize the input parameters of the first neural network to improve the channel prediction performance of the first neural network.
  • the input parameters of the first neural network may include the amount of historical channel information input to the first neural network (i.e., the value of Q), the channel dimension of the historical channel information input (i.e., the channel dimension of the second channel information), the channel dimension of channel information with lower channel dimension (i.e., the channel dimension of the first channel information), etc.
  • the deviation between two pieces of channel information can be reflected by the result of a mathematical operation on the two pieces of channel information.
  • the deviation can be a normalized mean square error (NMSE), a mean square error (MSE), etc.
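For illustration, the NMSE mentioned above can be computed as follows; this is a sketch, and the example channel values are assumptions:

```python
import numpy as np

def nmse(h_true, h_pred):
    """Normalized mean square error between two channel tensors."""
    return np.sum(np.abs(h_true - h_pred) ** 2) / np.sum(np.abs(h_true) ** 2)

h_true = np.ones((4, 4, 8))        # example "true" channel
h_pred = 1.1 * np.ones((4, 4, 8))  # example predicted channel, 10% off
err = nmse(h_true, h_pred)
assert abs(err - 0.01) < 1e-6      # (0.1)^2 / (1.0)^2 per element
```

The MSE variant simply omits the normalization by the true channel's energy.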
  • NMSE_A denotes the error (NMSE) of the current channel, and NMSE_B denotes the error (NMSE) of the channel at the past moment.
  • If the error NMSE_A of the current channel is less than the error NMSE_B of the channel at the past moment, the channel acquisition can continue to run, and the estimation performance will improve (the estimation error will decrease).
  • If the error NMSE_A of the current channel is greater than the error NMSE_B of the channel at the past moment, due to error accumulation, the performance of channel acquisition will become worse and worse until it fails to work.
  • If the error NMSE_A of the current channel is equal to the error NMSE_B of the channel at the past moment, the channel acquisition will maintain stable performance and can continue to run.
  • In this case, the pilot placement density on the air interface resources can be increased, or the number of input samples n at the past moment can be increased, to make the currently measured channel more accurate, that is, to reduce NMSE_A and ensure the stability of channel acquisition. Therefore, the communication system needs to monitor the error and performance of the current channel acquisition.
  • the network device may place a higher density of pilots on the air interface resources at regular intervals (such as the pilots of the fourth reference signal, whose frequency-domain density is higher than the pilot density of the first reference signal), and notify the terminal device to start the performance monitoring of channel acquisition (for example, the notification is realized by the third information in implementation method 1 above).
  • the extreme scenario of high-density pilots is to fill the air interface resources allocated to the terminal device with pilots.
  • the channel matrix with complete channel information dimensions can be directly obtained by the channel measurement method.
  • After that, the terminal device performs channel measurement of the high-density pilots, and then uses traditional channel estimation methods (such as least squares (LS) estimation, Wiener filtering, etc.) to obtain the complete channel matrix, and regards this channel as the channel true value.
  • Meanwhile, the low-density part of the high-density pilots is still used for channel measurement, and the measurement results are input into the channel acquisition module to obtain the predicted channel matrix. The channel true value is then compared with the predicted channel matrix to obtain the error NMSE_A at this time, and NMSE_A is compared with NMSE_B at the past moment to determine the performance trend of the current channel acquisition.
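As a hedged illustration of the least squares step mentioned above, the following sketch estimates a per-subcarrier channel by dividing the received pilot symbols by the known transmitted pilots; the QPSK pilot values, channel model, and noise level are assumptions, not values from the application:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pilots = 64

# Known QPSK pilot symbols and an assumed random per-subcarrier channel.
x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n_pilots) / np.sqrt(2)
h = (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots)) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots))
y = h * x + noise  # received pilot symbols

# LS estimate: divide the received symbol by the known transmitted pilot.
h_ls = y / x

ls_nmse = np.sum(np.abs(h - h_ls) ** 2) / np.sum(np.abs(h) ** 2)
assert ls_nmse < 0.1  # close to the true channel at this low noise level
```

With sufficiently dense pilots and low noise, such an estimate can serve as the "channel true value" against which the predicted channel matrix is compared.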
  • If NMSE_A exceeds NMSE_B by more than a preset threshold, the terminal device considers that the performance of the current channel acquisition will be unsustainable and that a way to reduce the error needs to be found.
  • the terminal device notifies the network device of the performance monitoring results, and the network device changes the pilot pattern to increase the pilot placement density, or increases the number of input samples n at the past moment, thereby improving the accuracy of channel acquisition.
  • the terminal device continues to monitor the channel error and performance at regular intervals.
  • Conversely, the terminal device can notify the network device to reduce the pilot density, or to reduce the number of input samples n at the past moment, to avoid wasting air interface resources, because it only needs to ensure that NMSE_A is close to NMSE_B to keep the neural network channel acquisition running continuously.
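The monitoring decisions described above can be sketched as a simple rule; the returned action names are illustrative, not patent terminology:

```python
def monitoring_decision(nmse_a, nmse_b, threshold):
    """Sketch of the terminal-side decision described above: compare the
    current error NMSE_A against the past error NMSE_B with a preset
    threshold and choose a configuration action."""
    if nmse_a > nmse_b + threshold:
        # Error is growing: request denser pilots or more input samples n.
        return "increase_density_or_samples"
    if nmse_a < nmse_b - threshold:
        # Error is shrinking: pilots can be sparser to save air interface resources.
        return "decrease_density_or_samples"
    return "keep_configuration"  # NMSE_A close to NMSE_B: stable operation

assert monitoring_decision(0.20, 0.10, 0.05) == "increase_density_or_samples"
assert monitoring_decision(0.02, 0.10, 0.05) == "decrease_density_or_samples"
assert monitoring_decision(0.11, 0.10, 0.05) == "keep_configuration"
```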
  • performance monitoring of channel acquisition can also be enabled.
  • the main purpose at this time is to obtain an accurate channel matrix with a sufficient number of samples from the past for subsequent channel acquisition.
  • In implementation mode 1, the first information fed back by the terminal device based on the performance monitoring result can include the channel acquisition error value at the current moment, the error change amount of the channel acquisition, and the error threshold of the channel acquisition.
  • After the network device obtains this information, it can make a decision based on it, change the subsequent communication configuration for channel acquisition (such as the pilot density or the number of samples Q of historical channel information), and notify the terminal device.
  • the network device records the channel acquisition error value during each performance monitoring. When the network device receives a new channel acquisition error value, it can be compared with the channel acquisition error value at the past moment.
  • the network device changes the communication configuration for channel acquisition accordingly.
  • the error change amount of the channel acquisition can be calculated by the network device or sent to the network device after the terminal device calculates it.
  • The calculation of the error change of channel acquisition needs to consider the channel acquisition error value at the current moment and the error value at one or more past moments. When multiple past error values are used, the error change can be calculated by fitting and interpolation methods.
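One possible fitting method for the error change is a linear fit over past error values; this sketch uses `np.polyfit`, which is an assumed choice, since the text does not prescribe a specific fitting or interpolation method:

```python
import numpy as np

def error_trend(error_history):
    """Estimate the error change of channel acquisition by fitting a line
    to several past error values (slope > 0 means the error is accumulating)."""
    t = np.arange(len(error_history))
    slope, _intercept = np.polyfit(t, np.asarray(error_history), deg=1)
    return slope

assert error_trend([0.10, 0.12, 0.14, 0.16]) > 0  # error growing
assert error_trend([0.16, 0.14, 0.12, 0.10]) < 0  # error shrinking
```

Either side (terminal device or network device) could run such a computation, as the text allows.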
  • performance feedback and configuration instructions of channel acquisition are sent in the air interface signaling interaction, so that network equipment can flexibly control the communication configuration, reduce resources and computing overhead when the channel acquisition performance is good, and make timely adjustments when the channel acquisition performance is poor to ensure the sustainability of channel acquisition.
  • the method also includes: the network device sends a sixth reference signal; the network device receives channel information corresponding to the sixth reference signal; the network device processes the channel information corresponding to the sixth reference signal and K eleventh channel information based on the fourth neural network to obtain twelfth channel information; wherein the K eleventh channel information are determined by K reference signals, and K is a positive integer; the time domain resources carrying the K reference signals are located before the time domain resource carrying the sixth reference signal; the channel dimension represented by the twelfth channel information is [T, R, F]; wherein the network structure of the first neural network is different from the network structure of the fourth neural network; or, the network structure of the first neural network is the same as the network structure of the fourth neural network, and the parameters in the network structure of the first neural network are different from the parameters in the network structure of the fourth neural network; the sixth reference signal satisfies at least one of the following conditions A to C.
  • Condition A: the resources carrying the sixth reference signal are different from the resources carrying the first reference signal.
  • Condition B: the first channel information and the channel information corresponding to the sixth reference signal come from different terminal devices.
  • Condition C: the first channel information and the channel information corresponding to the sixth reference signal come from terminal devices with different mobility information.
  • the network device may also send a sixth reference signal, and perform channel prediction on the channel information corresponding to the sixth reference signal based on the fourth neural network to obtain twelfth channel information of a higher dimension.
  • the transmission of the sixth reference signal satisfies at least one of the above items to enhance the flexibility of the implementation of the solution.
  • In one case, the fourth neural network differs in structure from the first neural network used in step S303, that is, the network structure of the first neural network is different from the network structure of the fourth neural network.
  • In another case, the network structure of the fourth neural network is the same as the network structure of the first neural network used in step S303, but the parameters in the network structure of the first neural network are different from the parameters in the network structure of the fourth neural network.
  • the sixth reference signal is different from the first reference signal sent in step S301.
  • the above conditions A to C are described exemplarily below.
  • the network device can use different neural networks for channel information at different time-frequency resource locations and group them.
  • the grouping method can be based on at least one of conditions A to C.
  • resources of different reference signals may be grouped.
  • different reference signal resources may include reference signal sending port resources and/or reference signal receiving port resources.
  • different reference signal resources may correspond to different pilot densities.
  • the channel acquisition requirements of the same pilot density can be grouped into the same group.
  • For example, as shown in Table 2 below, reference signals can be divided into different groups based on pilot density. Accordingly, in the above technical solution, the pilot density of the first reference signal and the pilot density of the sixth reference signal can be any two different reference signal pilot densities in Table 2.
  • The resources of different reference signals may include the number of frequency domain resources, for example, the number of resource blocks (RBs) that carry the reference signal.
  • the network device may allocate 10 RBs to some terminal devices, and 20 RBs to other terminal devices.
  • A different number of resources changes the number of pilot signals and the dimension of the complete channel information, that is, the input and output dimensions of the neural network. Therefore, resources can be grouped according to the number of resources allocated to the user. For example, as shown in Table 3 below, resources can be divided into different groups based on the number of resources.
  • the number of resources of the first reference signal and the number of resources of the sixth reference signal can be any two different numbers of resources in Table 3.
  • For condition B, different terminal devices may be grouped.
  • the number of signal transmission paths (i.e., multipath) between different terminal devices and network devices is likely to be different.
  • the channel dimensions of the channel information corresponding to the reference signals fed back by different terminal devices input into the neural network may be different. Therefore, different terminal devices can be divided into different groups.
  • For example, as shown in Table 4 below, terminal devices can be divided into different groups. Accordingly, in the above technical solution, the terminal device that feeds back the channel information of the first reference signal (i.e., the first channel information) and the terminal device that feeds back the channel information of the sixth reference signal can be any two different terminal devices in Table 4.
  • terminal devices with different mobility information may be grouped.
  • the moving speed and moving range of the terminal device affect the accuracy of the neural network output results.
  • the effect of this solution is better when the moving range is small and the moving speed is low. Therefore, users can be grouped according to their mobility.
  • For example, the moving speed of the terminal device that feeds back the channel information of the first reference signal (i.e., the first channel information) and the moving speed of the terminal device that feeds back the channel information of the sixth reference signal can be any two different moving speeds in Table 5.
  • Similarly, the moving range of the terminal device that feeds back the channel information of the first reference signal (i.e., the first channel information) and the moving range of the terminal device that feeds back the channel information of the sixth reference signal can be any two different moving ranges in Table 6.
  • The grouping can also be performed according to a combination of the above conditions. Using different neural networks for different scenarios, such as different pilot densities, different numbers of resources, or different user mobility information, increases the flexibility and scalability of the solution.
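The grouping by conditions A to C can be illustrated by a lookup key that selects a neural network per group; the speed buckets, key structure, and function name are illustrative assumptions, not values from the application:

```python
def model_group_key(pilot_density, num_rbs, speed_kmh):
    """Sketch of grouping channel acquisition by conditions A to C above:
    pilot density and allocated resource blocks (condition A) and terminal
    mobility (condition C)."""
    if speed_kmh < 10:
        mobility = "low"
    elif speed_kmh < 60:
        mobility = "medium"
    else:
        mobility = "high"
    return (pilot_density, num_rbs, mobility)

# Each distinct key would select a separately trained neural network.
models = {}
models[model_group_key(0.25, 10, 3)] = "nn_for_group_a"
models[model_group_key(0.25, 20, 3)] = "nn_for_group_b"
assert model_group_key(0.25, 10, 5) == (0.25, 10, "low")
assert len(models) == 2
```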
  • FIG. 7 is a schematic diagram of an implementation of the communication method provided in the present application. The method includes the following steps.
  • the method shown in FIG. 7 includes steps S701 to S702 , and each step will be described below.
  • the network device sends a fourth reference signal, and correspondingly, the terminal device receives the fourth reference signal.
  • the terminal device sends the first information, and the network device receives the first information accordingly.
  • the first information is used to indicate the deviation between the channel information corresponding to the fourth reference signal and the eighth channel information;
  • the channel dimension represented by the eighth channel information is [T, R, F], and T, R, and F are all positive numbers;
  • the eighth channel information is obtained by processing the channel information corresponding to the fourth reference signal and P ninth channel information based on the third neural network, and the P ninth channel information is determined by P reference signals, and the position of the time domain resources carrying the P reference signals is located before the position of the time domain resources carrying the fourth reference signal, and P is a positive integer.
  • the network device may send a fourth reference signal in step S701, and the eighth channel information is obtained by the receiver of the fourth reference signal (i.e., the terminal device) processing the channel information corresponding to the fourth reference signal based on the third neural network.
  • the network device may also receive first information indicating a deviation between the channel information corresponding to the fourth reference signal and the eighth channel information in step S702, so that the network device can determine the performance of the channel prediction of the third neural network based on the first information.
  • the network device can determine the performance of the channel prediction of the neural network through the first information. Since the transmission overhead of the first information is less than the transmission overhead of the channel information corresponding to the fourth reference signal, the transmission overhead can be reduced.
  • the channel dimension represented by the channel information corresponding to the fourth reference signal is [T4, R4, F4], where T4, R4, and F4 are all positive numbers; the channel dimension represented by the channel information corresponding to the fourth reference signal and the channel dimension represented by the channel information corresponding to the eighth reference signal satisfy at least one of the following: T is greater than or equal to T4, R is greater than or equal to R4, and F is greater than or equal to F4.
  • the channel information corresponding to the fourth reference signal is channel information of a lower dimension than the channel information corresponding to the eighth reference signal, which can reduce the overhead in the detection process of the channel prediction performance of the neural network.
  • the method further includes: the network device sends second information, the second information is used to indicate whether to adjust the input parameters of the third neural network; wherein the second information is determined based on the first information. Specifically, after receiving the first information, the network device may also send second information for indicating whether to adjust the input parameters of the third neural network, so that the network device adjusts the input parameters of the third neural network deployed on the terminal device based on the performance detection result of the third neural network.
  • the method further includes: the network device sends third information, the third information indicating that the channel information corresponding to the fourth reference signal is processed based on the third neural network; wherein the third information is determined based on the first information.
  • the network device may also send third information indicating that the channel information corresponding to the fourth reference signal is processed based on the third neural network, so that the terminal device can start performance detection of the channel information corresponding to the fourth reference signal based on the third information.
  • For the implementation of FIG. 7, reference may also be made to the description of FIG. 3 and related implementations (e.g., the implementation process of implementation method 1 above).
  • an embodiment of the present application provides a communication device 800, which can implement the functions of the communication device (the communication device is a terminal device or a network device) in the above method embodiment, and thus can also achieve the beneficial effects of the above method embodiment.
  • the communication device 800 can be a communication device, or an integrated circuit or component inside the communication device, such as a chip.
  • the transceiver unit 802 may include a sending unit and a receiving unit, which are respectively used to perform sending and receiving.
  • When the apparatus 800 is used to execute the method executed by the network device in the aforementioned FIG. 3 and related embodiments, the apparatus 800 includes a processing unit 801 and a transceiver unit 802; the transceiver unit 802 is used to send a first reference signal; the transceiver unit 802 is further used to receive first channel information, the first channel information is determined based on the first reference signal, and the channel dimension represented by the first channel information is [T1, R1, F1], where T1, R1, and F1 are all positive numbers; the processing unit 801 is used to process the first channel information and Q second channel information based on the first neural network to obtain third channel information; wherein the Q second channel information are determined by Q second reference signals, and Q is a positive integer; the position of the time domain resource carrying the Q second reference signals is located before the position of the time domain resource carrying the first reference signal, and the channel dimension represented by the third channel information is [T, R, F], where T, R, and F are all positive numbers; wherein the parameters in the channel dimension represented by the first channel information and the parameters in the channel dimension represented by the third channel information satisfy at least one of the following: T is greater than T1, R is greater than R1, and F is greater than F1.
  • When the device 800 is used to execute the method executed by the network device in the aforementioned Figure 7 and related embodiments, the device 800 includes a processing unit 801 and a transceiver unit 802; the transceiver unit 802 is used to send a fourth reference signal; the transceiver unit 802 is also used to receive first information, and the processing unit 801 is used to determine the deviation between the channel information corresponding to the fourth reference signal and the eighth channel information based on the first information; wherein the channel dimension represented by the eighth channel information is [T, R, F], and T, R, and F are all positive numbers; wherein the eighth channel information is obtained by processing the channel information corresponding to the fourth reference signal and P ninth channel information based on the third neural network, and the P ninth channel information are determined by P reference signals, and the position of the time domain resources carrying the P reference signals is located before the position of the time domain resources carrying the fourth reference signal, and P is a positive integer.
  • When the device 800 is used to execute the method executed by the terminal device in the aforementioned Figure 7 and related embodiments, the device 800 includes a processing unit 801 and a transceiver unit 802; the transceiver unit 802 is used to receive a fourth reference signal; the processing unit 801 is used to determine the first information; the transceiver unit 802 is also used to send the first information, and the first information is used to indicate the deviation between the channel information corresponding to the fourth reference signal and the eighth channel information; wherein the channel dimension represented by the eighth channel information is [T, R, F], and T, R, and F are all positive numbers; wherein the eighth channel information is obtained by processing the channel information corresponding to the fourth reference signal and P ninth channel information based on the third neural network, and the P ninth channel information are determined by P reference signals, and the position of the time domain resources carrying the P reference signals is located before the position of the time domain resources carrying the fourth reference signal, and P is a positive integer.
  • Fig. 9 is another schematic structural diagram of a communication device 900 provided in the present application.
  • the communication device 900 includes a logic circuit 901 and an input/output interface 902.
  • the communication device 900 may be a chip or an integrated circuit.
  • the transceiver unit 802 shown in Fig. 8 may be a communication interface, which may be the input/output interface 902 in Fig. 9, which may include an input interface and an output interface.
  • the communication interface may be a transceiver circuit, which may include an input interface circuit and an output interface circuit.
  • The input-output interface 902 is used to send a first reference signal; the input-output interface 902 is also used to receive first channel information, the first channel information is determined based on the first reference signal, the channel dimension represented by the first channel information is [T1, R1, F1], and T1, R1, and F1 are all positive numbers; the logic circuit 901 is used to process the first channel information and Q second channel information based on a first neural network to obtain third channel information; wherein the Q second channel information are determined by Q second reference signals, and Q is a positive integer; the position of the time domain resource carrying the Q second reference signals is located before the position of the time domain resource carrying the first reference signal, and the channel dimension represented by the third channel information is [T, R, F], and T, R, and F are all positive numbers; wherein the parameters in the channel dimension represented by the first channel information and the parameters in the channel dimension represented by the third channel information satisfy at least one of the following: T is greater than T1, R is greater than R1, and F is greater than F1.
  • the input-output interface 902 is used to send a fourth reference signal; the input-output interface 902 is also used to receive first information, and the logic circuit 901 is used to determine the deviation between the channel information corresponding to the fourth reference signal and the eighth channel information based on the first information; wherein the channel dimension represented by the eighth channel information is [T, R, F], and T, R, and F are all positive numbers; wherein the eighth channel information is obtained by processing the channel information corresponding to the fourth reference signal and P ninth channel information based on the third neural network, the P ninth channel information are determined by P reference signals, the position of the time domain resources carrying the P reference signals is located before the position of the time domain resources carrying the fourth reference signal, and P is a positive integer.
  • the input-output interface 902 is used to receive a fourth reference signal; the logic circuit 901 is used to determine the first information; the input-output interface 902 is also used to send the first information, the first information being used to indicate the deviation between the channel information corresponding to the fourth reference signal and the eighth channel information; wherein the channel dimension represented by the eighth channel information is [T, R, F], and T, R, and F are all positive numbers; wherein the eighth channel information is obtained by processing the channel information corresponding to the fourth reference signal and P ninth channel information based on the third neural network, the P ninth channel information are determined by P reference signals, the position of the time domain resources carrying the P reference signals is located before the position of the time domain resources carrying the fourth reference signal, and P is a positive integer.
  • the logic circuit 901 and the input/output interface 902 may also execute other steps executed by the first communication device or the second communication device in any embodiment and achieve corresponding beneficial effects, which will not be described in detail here.
  • the processing unit 801 shown in FIG. 8 may be the logic circuit 901 in FIG. 9 .
  • the logic circuit 901 may be a processing device, and the functions of the processing device may be partially or completely implemented by software.
  • the processing device may include a memory and a processor, wherein the memory is used to store a computer program, and the processor reads and executes the computer program stored in the memory to perform corresponding processing and/or steps in any one of the method embodiments.
  • the processing device may include only a processor.
  • a memory for storing a computer program is located outside the processing device, and the processor is connected to the memory via a circuit/wire to read and execute the computer program stored in the memory.
  • the memory and the processor may be integrated together, or may be physically independent of each other.
  • the processing device may be one or more chips, or one or more integrated circuits.
  • the processing device may be one or more field-programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), systems on chip (SoC), central processing units (CPU), network processors (NP), digital signal processors (DSP), microcontroller units (MCU), programmable logic devices (PLD) or other integrated chips, or any combination of the above chips or processors.
  • FIG. 10 shows a communication device 1000, provided in an embodiment of the present application, that is involved in the above embodiments.
  • the communication device 1000 may specifically be the terminal device in the above embodiments.
  • the example shown in FIG. 10 is one in which the method is implemented by the terminal device (or a component in the terminal device).
  • the communication device 1000 may include but is not limited to at least one processor 1001 and a communication port 1002 .
  • the transceiver unit 802 shown in FIG. 8 may be a communication interface, and the communication interface may be the communication port 1002 in FIG. 10.
  • the communication port 1002 may include an input interface and an output interface.
  • the communication port 1002 may also be a transceiver circuit, which may include an input interface circuit and an output interface circuit.
  • the device may also include at least one of the memory 1003 and the bus 1004.
  • the at least one processor 1001 is used to control and process the actions of the communication device 1000.
  • the processor 1001 can be a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a transistor logic device, a hardware component or any combination thereof. It can implement or execute various exemplary logic blocks, modules and circuits described in conjunction with the disclosure of this application.
  • the processor can also be a combination that implements a computing function, such as a combination of one or more microprocessors, a combination of a digital signal processor and a microprocessor, and the like.
  • the communication device 1000 shown in Figure 10 can be specifically used to implement the steps implemented by the terminal device in the aforementioned method embodiment, and to achieve the corresponding technical effects of the terminal device.
  • the specific implementation methods of the communication device shown in Figure 10 can refer to the description in the aforementioned method embodiment, and will not be repeated here one by one.
  • FIG. 11 is a structural diagram of the communication device 1100, provided in an embodiment of the present application, that is involved in the above-mentioned embodiments.
  • the communication device 1100 may specifically be the network device in the above-mentioned embodiments.
  • the example shown in FIG. 11 is one in which the method is implemented by the network device (or a component in the network device), and the structure of the communication device can refer to the structure shown in FIG. 11.
  • the communication device 1100 includes at least one processor 1111 and at least one network interface 1114. Further optionally, the communication device also includes at least one memory 1112, at least one transceiver 1113 and one or more antennas 1115.
  • the processor 1111, the memory 1112, the transceiver 1113 and the network interface 1114 are connected, for example, through a bus. In an embodiment of the present application, the connection may include various interfaces, transmission lines or buses, etc., which are not limited in this embodiment.
  • the antenna 1115 is connected to the transceiver 1113.
  • the network interface 1114 is used to enable the communication device to communicate with other communication devices through a communication link.
  • the network interface 1114 may include a network interface between the communication device and the core network device, such as an S1 interface, and the network interface may include a network interface between the communication device and other communication devices (such as other network devices or core network devices), such as an X2 or Xn interface.
  • the transceiver unit 802 shown in Fig. 8 may be a communication interface, which may be the network interface 1114 in Fig. 11, and the network interface 1114 may include an input interface and an output interface.
  • the network interface 1114 may also be a transceiver circuit, and the transceiver circuit may include an input interface circuit and an output interface circuit.
  • the processor 1111 is mainly used to process the communication protocol and communication data, and to control the entire communication device, execute the software program, and process the data of the software program, for example, to support the communication device to perform the actions described in the embodiment.
  • the communication device may include a baseband processor and a central processor, the baseband processor is mainly used to process the communication protocol and communication data, and the central processor is mainly used to control the entire terminal device, execute the software program, and process the data of the software program.
  • the processor 1111 in Figure 11 can integrate the functions of the baseband processor and the central processor. It can be understood by those skilled in the art that the baseband processor and the central processor can also be independent processors, interconnected by technologies such as buses.
  • the terminal device can include multiple baseband processors to adapt to different network formats, the terminal device can include multiple central processors to enhance its processing capabilities, and the various components of the terminal device can be connected through various buses.
  • the baseband processor can also be described as a baseband processing circuit or a baseband processing chip.
  • the central processor can also be described as a central processing circuit or a central processing chip.
  • the function of processing the communication protocol and communication data can be built into the processor, or it can be stored in the memory in the form of a software program, and the processor executes the software program to realize the baseband processing function.
  • the memory is mainly used to store software programs and data.
  • the memory 1112 can exist independently and be connected to the processor 1111.
  • the memory 1112 can be integrated with the processor 1111, for example, integrated into a chip.
  • the memory 1112 can store program codes for executing the technical solutions of the embodiments of the present application, and the execution is controlled by the processor 1111.
  • the various types of computer program codes executed can also be regarded as drivers of the processor 1111.
  • FIG. 11 shows only one memory and one processor.
  • the memory may also be referred to as a storage medium or a storage device, etc.
  • the memory may be a storage element on the same chip as the processor, i.e., an on-chip storage element, or an independent storage element, which is not limited in the embodiments of the present application.
  • the transceiver 1113 can be used to support the reception or transmission of radio frequency signals between the communication device and the terminal, and the transceiver 1113 can be connected to the antenna 1115.
  • the transceiver 1113 includes a transmitter Tx and a receiver Rx.
  • one or more antennas 1115 can receive radio frequency signals
  • the receiver Rx of the transceiver 1113 is used to receive the radio frequency signals from the antenna, convert the radio frequency signals into digital baseband signals or digital intermediate frequency signals, and provide the digital baseband signal or digital intermediate frequency signal to the processor 1111, so that the processor 1111 further processes the digital baseband signal or digital intermediate frequency signal, for example by demodulation processing and decoding processing.
  • the transmitter Tx in the transceiver 1113 is also used to receive the modulated digital baseband signal or digital intermediate frequency signal from the processor 1111, and convert the modulated digital baseband signal or digital intermediate frequency signal into a radio frequency signal, and send the radio frequency signal through one or more antennas 1115.
  • the receiver Rx can selectively perform one or more stages of down-mixing processing and analog-to-digital conversion processing on the radio frequency signal to obtain a digital baseband signal or a digital intermediate frequency signal, and the order of the down-mixing processing and analog-to-digital conversion processing is adjustable.
  • the transmitter Tx can selectively perform one or more stages of up-mixing processing and digital-to-analog conversion processing on the modulated digital baseband signal or digital intermediate frequency signal to obtain a radio frequency signal, and the order of the up-mixing processing and digital-to-analog conversion processing is adjustable.
  • the digital baseband signal and the digital intermediate frequency signal can be collectively referred to as digital signals.
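The staged down-mixing and conversion described in the last few bullets can be illustrated with a one-stage numeric sketch. The sample rate, carrier frequency, and crude block-average low-pass below are all assumptions chosen for illustration; a real receiver would use proper filtering and possibly several mixing stages:

```python
import numpy as np

fs = 1_000_000          # sample rate in Hz (assumed)
f_c = 100_000           # carrier frequency in Hz (assumed)
n = np.arange(4096)
t = n / fs

# A real passband tone at the carrier frequency: after down-mixing,
# its content should sit at (or near) DC.
rf = np.cos(2 * np.pi * f_c * t)

# One stage of down-mixing: multiply by a complex exponential at -f_c.
baseband = rf * np.exp(-2j * np.pi * f_c * t)

# Crude low-pass plus decimation: average non-overlapping blocks of 8
# samples (stand-in for a proper anti-alias filter and downsampler).
decimated = baseband.reshape(-1, 8).mean(axis=1)

# The positive-frequency image of the cosine lands at DC with amplitude
# 0.5; the 2*f_c image averages out over this window.
dc = np.abs(decimated.mean())
print(round(dc, 2))  # 0.5
```

Analog-to-digital conversion would then quantize such samples; as the bullet above notes, the order of mixing and conversion stages is adjustable.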
  • the transceiver 1113 may also be referred to as a transceiver unit, a transceiver, a transceiver device, etc.
  • a device in the transceiver unit for implementing a receiving function may be regarded as a receiving unit
  • a device in the transceiver unit for implementing a sending function may be regarded as a sending unit, that is, the transceiver unit includes a receiving unit and a sending unit
  • the receiving unit may also be referred to as a receiver, an input port, a receiving circuit, etc.
  • the sending unit may be referred to as a transmitter, a transmitting device, or a transmitting circuit, etc.
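The decomposition in the last few bullets — a transceiver unit made of a receiving unit and a sending unit — can be expressed as a minimal interface sketch. The class and method names are illustrative, not taken from the application:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReceivingUnit:
    """Stands in for the receiver Rx / input port / receiving circuit."""
    inbox: List[bytes] = field(default_factory=list)

    def receive(self, frame: bytes) -> None:
        self.inbox.append(frame)

@dataclass
class SendingUnit:
    """Stands in for the transmitter / transmitting device / circuit."""
    outbox: List[bytes] = field(default_factory=list)

    def send(self, frame: bytes) -> None:
        self.outbox.append(frame)

@dataclass
class TransceiverUnit:
    """The transceiver unit includes a receiving unit and a sending unit."""
    rx: ReceivingUnit = field(default_factory=ReceivingUnit)
    tx: SendingUnit = field(default_factory=SendingUnit)

trx = TransceiverUnit()
trx.tx.send(b"reference signal")
trx.rx.receive(b"first information")
print(len(trx.tx.outbox), len(trx.rx.inbox))  # 1 1
```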
  • the communication device 1100 shown in Figure 11 can be specifically used to implement the steps implemented by the network device in the aforementioned method embodiment, and to achieve the corresponding technical effects of the network device.
  • the specific implementation methods of the communication device 1100 shown in Figure 11 can refer to the description in the aforementioned method embodiment, and will not be repeated here.
  • FIG. 12 is a schematic diagram of the structure of the communication device involved in the above-mentioned embodiment provided in an embodiment of the present application.
  • the communication device 120 includes, for example, modules, units, elements, circuits, or interfaces, etc., which are appropriately configured together to perform the technical solutions provided in this application.
  • the communication device 120 may be the terminal device or network device described above, or a component (such as a chip) in these devices, to implement the method described in the following method embodiment.
  • the communication device 120 includes one or more processors 121.
  • the processor 121 may be a general-purpose processor or a dedicated processor, etc.
  • it may be a baseband processor or a central processing unit.
  • the baseband processor may be used to process communication protocols and communication data
  • the central processing unit may be used to control the communication device (such as a RAN node, a terminal, or a chip, etc.), execute software programs, and process data of software programs.
  • the processor 121 may include a program 123 (sometimes also referred to as code or instruction), and the program 123 may be executed on the processor 121 so that the communication device 120 performs the method described in the following embodiments.
  • the communication device 120 includes a circuit (not shown in FIG. 12 ).
  • the communication device 120 may include one or more memories 122 on which a program 124 (sometimes also referred to as code or instructions) is stored.
  • the program 124 can be run on the processor 121 so that the communication device 120 executes the method described in the above method embodiment.
  • the processor 121 and/or the memory 122 may include an AI module (127 and 128, respectively), and the AI module is used to implement AI-related functions.
  • the AI module may be implemented by software, hardware, or a combination of software and hardware.
  • the AI module may include a radio intelligence control (RIC) module.
  • the AI module may be a near real-time RIC or a non-real-time RIC.
  • data may also be stored in the processor 121 and/or the memory 122.
  • the processor and the memory may be provided separately or integrated together.
  • the communication device 120 may further include a transceiver 125 and/or an antenna 126.
  • the processor 121 may also be sometimes referred to as a processing unit, which controls the communication device (e.g., a RAN node or a terminal).
  • the transceiver 125 may also be sometimes referred to as a transceiver unit, a transceiver, a transceiver circuit, or a transceiver, etc., which is used to implement the transceiver function of the communication device through the antenna 126.
  • the transceiver unit 802 shown in Fig. 8 may be a communication interface, which may be the transceiver 125 in Fig. 12, and the transceiver 125 may include an input interface and an output interface.
  • the transceiver 125 may also be a transceiver circuit, which may include an input interface circuit and an output interface circuit.
  • An embodiment of the present application further provides a computer-readable storage medium for storing one or more computer-executable instructions; when the computer-executable instructions are executed by a processor, the processor performs the method described in the possible implementations of the first communication device or the second communication device in the aforementioned embodiments.
  • An embodiment of the present application also provides a computer program product (or computer program); when the computer program product runs on a processor, the processor performs the method that may be implemented by the above-mentioned first communication device or second communication device.
  • the embodiment of the present application also provides a chip system, which includes at least one processor for supporting a communication device to implement the functions involved in the possible implementation of the above communication device.
  • the chip system also includes an interface circuit, and the interface circuit is used to provide program instructions and/or data to the at least one processor.
  • the chip system may also include a memory, which is used to store program instructions and data necessary for the communication device.
  • the chip system may be composed of a chip, or may include a chip and other discrete devices, wherein the communication device may specifically be the first communication device or the second communication device in the aforementioned method embodiment.
  • An embodiment of the present application also provides a communication system, whose architecture includes the first communication device and the second communication device in any of the above embodiments.
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical functional division; in actual implementation there may be other division methods, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • in addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into a processing unit, or each unit can exist physically separately, or two or more units can be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or in the form of a software functional unit. If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which can be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in each embodiment of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.

Abstract

The present application relates to a communication method and a related device. In the method, when a network device determines third channel information of a higher dimension by means of a first neural network, the input data of the first neural network may comprise first channel information of a lower dimension and Q pieces of second channel information, where the time domain resources carrying the Q second reference signals precede the time domain resources carrying a first reference signal. The first neural network can therefore exploit the time-domain correlation between the Q pieces of second channel information and the first channel information, as well as the spatial-domain correlation and/or frequency-domain correlation between the first channel information and the third channel information, thereby improving the accuracy with which the neural network obtains the higher-dimension channel information on the basis of the lower-dimension channel information.
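The scheme summarized in the abstract — feeding a neural network both the current lower-dimension channel information and Q earlier channel observations so it can exploit time-domain correlation — can be sketched with a toy model. The single random linear layer and the dimensions below are deliberately minimal stand-ins, not the first neural network of the application:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy dimensions: lower-dimension channel info has 4 entries,
# higher-dimension channel info has 16, and Q = 3 earlier observations.
LOW, HIGH, Q = 4, 16, 3

# Input: the current lower-dimension channel info plus Q earlier channel
# infos, flattened into one feature vector; the Q history entries are
# what carry the time-domain correlation into the network.
h_low = rng.standard_normal(LOW)
history = rng.standard_normal((Q, LOW))
x = np.concatenate([h_low, history.ravel()])   # shape (LOW * (Q + 1),)

# A one-layer "network" with random weights stands in for a trained model
# that would map the combined input to higher-dimension channel info.
W = rng.standard_normal((HIGH, x.size)) / np.sqrt(x.size)
h_high = np.tanh(W @ x)                        # estimated higher-dim info

print(h_high.shape)  # (16,)
```

A trained model would learn W (and typically many more layers) so that h_high approximates the true higher-dimension channel; the sketch only shows the input/output shapes implied by the abstract.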
PCT/CN2024/119383 2023-11-09 2024-09-18 Communication method and related device Pending WO2025098020A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202311494481.1 2023-11-09
CN202311494481.1A CN119966767A (zh) 2023-11-09 2023-11-09 Communication method and related device

Publications (1)

Publication Number Publication Date
WO2025098020A1 (fr) 2025-05-15

Family

ID=95588611

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/119383 Pending WO2025098020A1 (fr) Communication method and related device

Country Status (2)

Country Link
CN (1) CN119966767A (fr)
WO (1) WO2025098020A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160294457A1 (en) * 2015-04-01 2016-10-06 Samsung Electronics Co., Ltd. Apparatus and method for feeding back channel information in wireless communication system
WO2017000258A1 (fr) * 2015-06-30 2017-01-05 华为技术有限公司 Procédé et appareil pour l'acquisition d'état de canal
CN114764610A (zh) * 2021-01-15 2022-07-19 华为技术有限公司 一种基于神经网络的信道估计方法及通信装置
WO2022253023A1 (fr) * 2021-06-01 2022-12-08 华为技术有限公司 Procédé et appareil de communication

Also Published As

Publication number Publication date
CN119966767A (zh) 2025-05-09

Similar Documents

Publication Publication Date Title
WO2025098020A1 (fr) Communication method and related device
WO2025103115A1 (fr) Communication method and related device
WO2025139534A1 (fr) Communication method and related device
WO2025179920A1 (fr) Communication method and related apparatus
WO2025092160A1 (fr) Communication method and related device
WO2025092159A1 (fr) Communication method and related device
WO2025086262A1 (fr) Communication method and related apparatus
WO2025179919A1 (fr) Communication method and related apparatus
WO2025175756A1 (fr) Communication method and related device
WO2025019990A1 (fr) Communication method and related device
WO2025118980A1 (fr) Communication method and related device
WO2025059907A1 (fr) Communication method and related device
WO2025190244A1 (fr) Communication method and related apparatus
WO2025208880A1 (fr) Communication method and related apparatus
WO2025189861A1 (fr) Communication method and related apparatus
WO2025190248A1 (fr) Communication method and related apparatus
WO2025190252A1 (fr) Communication method and related apparatus
WO2025059908A1 (fr) Communication method and related device
CN121125513A (zh) Communication method and related apparatus
WO2025190246A1 (fr) Communication method and related apparatus
WO2025107835A1 (fr) Communication method and related device
WO2025118759A1 (fr) Communication method and related device
WO2025167443A1 (fr) Communication method and related device
CN121126441A (zh) Communication method and related apparatus
CN120934716A (zh) Communication method and related apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24887638

Country of ref document: EP

Kind code of ref document: A1