
WO2025086262A1 - Communication method and related apparatus - Google Patents

Communication method and related apparatus

Info

Publication number
WO2025086262A1
Authority
WO
WIPO (PCT)
Prior art keywords
processing
data
communication device
transform domain
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2023/127224
Other languages
English (en)
Chinese (zh)
Inventor
王坚
徐晨
李榕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to PCT/CN2023/127224 priority Critical patent/WO2025086262A1/fr
Publication of WO2025086262A1 publication Critical patent/WO2025086262A1/fr
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00: Supervisory, monitoring or testing arrangements
    • H04W 24/08: Testing, supervising or monitoring using real traffic

Definitions

  • the present application relates to the field of communications, and in particular to a communication method and related equipment.
  • Wireless communication is communication between two or more communication nodes that is transmitted without propagating through conductors or cables.
  • the communication nodes generally include network equipment and terminal equipment.
  • communication nodes generally have signal transceiving capabilities and computing capabilities.
  • the computing capability of a network device mainly provides computing power support for its signal transceiving capability (for example, sending and receiving signals), so as to achieve communication between the network device and other communication nodes.
  • in addition to supporting such communication tasks, the computing power of a communication node may have a surplus. How to utilize this surplus computing power is therefore a technical problem that urgently needs to be solved.
  • the present application provides a communication method and related equipment, which are used to enable the computing power of communication nodes to be applied to the processing of artificial intelligence (AI) tasks while reducing processing delays.
  • the present application provides a communication method, which is executed by a first communication device, or the method is executed by some components in the first communication device (such as a processor, a chip or a chip system, etc.), or the method can also be implemented by a logic module or software that can realize all or part of the functions of the first communication device.
  • the method is described as being executed by a first communication device.
  • the first communication device can be a communication node such as a terminal device or a network device in a communication system.
  • the first communication device performs first processing on first data to obtain second data, where the first data is obtained based on artificial intelligence (AI) data and the first processing includes first neural network processing; the first communication device then sends third data, which is obtained based on the second data after first transform domain processing.
  • after the first communication device performs the first processing on the first data to obtain the second data, it obtains the third data based on the second data through the first transform domain processing and sends the third data.
  • the first data is obtained based on artificial intelligence AI data
  • the first processing includes a first neural network processing.
  • the second data is the processing result obtained by the first communication device performing at least the first neural network processing on the first data
  • the third data sent by the first communication device is the processing result obtained by the second data through the first transform domain processing, wherein the first transform domain processing is one of the processes of the physical layer processing.
  • the first communication device can use the processing result obtained by the neural network processing as the input of the transform domain processing in the physical layer, so that the first communication device can realize AI data processing without performing quantization processing, which can enable the computing power of the communication device to be applied to AI tasks while reducing processing delays.
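The transmit-side chain just described (first data → first neural network processing → first transform domain processing → third data) could be sketched as follows. A random linear layer stands in for the first neural network, and all shapes, weights, and the choice of IFFT size are illustrative assumptions, not details fixed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subcarriers = 64

# first data, assumed here to be real-valued samples derived from AI data
first_data = rng.standard_normal(n_subcarriers)
weights = rng.standard_normal((n_subcarriers, n_subcarriers)) / np.sqrt(n_subcarriers)

# first neural network processing (a single illustrative layer)
second_data = np.tanh(weights @ first_data)

# first transform domain processing (IFFT) applied directly to the
# network output -- no quantization step in between
third_data = np.fft.ifft(second_data)

print(third_data.shape)  # (64,)
```

The point of the sketch is the direct hand-off: the neural network output feeds the physical-layer transform without an intermediate quantization stage.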
  • the data involved in the present application can be replaced by information, signals, etc.
  • the first data is obtained based on AI data, including: the first data is obtained by subjecting the AI data to resource mapping processing and/or second transform domain processing.
  • the first data can be obtained by processing the AI data through resource mapping and/or second transform domain processing, wherein the first communication device can process the AI data through resource mapping and/or second transform domain processing to obtain the first data, and then use the first data as the input of the first processing including at least the first neural network to obtain the second data.
  • the AI data can be used as the input of the resource mapping processing and/or the second transform domain processing, so that during transmission link processing of the communication signal the first communication device does not need to perform other physical layer processing (such as encoding, rate matching, scrambling, or modulation) before the resource mapping processing, which can further reduce the processing delay.
  • the resource mapping processing can map the AI data to the physical layer transmission resources, so that the subsequent transmission of AI data can be adapted to the physical layer transmission resources.
  • the second transform domain processing can sample and transform the AI data to obtain data suitable for the input of the first neural network.
  • the second data is the result of processing the first data by at least performing the first neural network processing, that is, the first data can be used as the input of the first neural network, and the first data may not be obtained by processing the AI data through resource mapping processing and/or second transform domain processing.
  • the function of the first neural network processing includes resource mapping processing and/or second transform domain processing; that is, no dedicated module for resource mapping processing and/or second transform domain processing is needed, and the same or a similar processing effect can be achieved through the first neural network processing.
  • the resource mapping process includes at least one of the following: dimensionality conversion processing, translation processing, rotation processing, and interleaving processing.
  • when AI data is used as the input of the resource mapping processing, the AI data can be processed by at least one of the above items, which enhances the flexibility of the solution implementation.
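The four listed resource mapping operations could be sketched as below. The concrete definitions chosen here (reshape for dimensionality conversion, cyclic shift for translation, phase rotation, block interleaving) are assumptions, since the text does not fix them:

```python
import numpy as np

x = np.arange(8, dtype=float)  # illustrative AI data samples

# dimensionality conversion: reshape a vector into a resource grid
reshaped = x.reshape(2, 4)

# translation: cyclic shift of the samples by two positions
translated = np.roll(x, 2)

# rotation: a common interpretation is a phase rotation of each sample
rotated = x * np.exp(1j * np.pi / 4)

# interleaving: reorder samples by a fixed permutation
perm = np.array([0, 4, 1, 5, 2, 6, 3, 7])
interleaved = x[perm]
```

Each operation is invertible, so the receiving side's resource de-mapping (mentioned later in the text) can undo it with the corresponding inverse.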
  • the first processing also includes filtering processing, such as root raised cosine (RRC) filtering; alternatively, the filtering processing may include Bessel filtering, Butterworth filtering, Chebyshev filtering, elliptic filtering, or other filtering processing.
  • the first processing may include filtering processing in addition to the first neural network processing, so as to filter the communication signal through the filtering processing and reduce the peak to average power ratio (PAPR) of the communication signal.
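As a rough illustration of the metric this filtering step targets, the peak-to-average power ratio of a complex baseband signal can be computed as follows; the QPSK symbol alphabet and signal length are illustrative assumptions, not taken from the text:

```python
import numpy as np

def papr_db(signal: np.ndarray) -> float:
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(signal) ** 2
    return 10 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(1)
# illustrative QPSK symbols mapped onto 64 subcarriers
symbols = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=64)
time_signal = np.fft.ifft(symbols)  # OFDM-like time-domain signal

print(papr_db(time_signal))
```

A constant-envelope signal gives 0 dB; multicarrier signals such as the one above exhibit a higher PAPR, which is what filtering (or the neural network absorbing the filter's role) aims to bring down.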
  • the second data is used as a processing result obtained by performing at least the first neural network processing in the first processing on the first data, that is, the first data can be used as the input of the first neural network, and the first processing may not include RRC filtering processing.
  • the function of the first neural network processing includes filtering processing; that is, no dedicated filtering module is needed, and the same or a similar filtering effect can be achieved through the first neural network processing.
  • the method further includes: the first communication device receives fourth data, wherein the AI data is obtained based on the fourth data.
  • the AI data of the first communication device can be obtained by receiving the fourth data, so that when the first communication device participates in the AI task, the first communication device can serve as an intermediate node for multiple nodes participating in the AI task.
  • when the AI data is not obtained based on data received by the first communication device, the AI data may be data generated locally by the first communication device (or preconfigured data); accordingly, the first communication device may serve as the head node (or starting node) among multiple nodes participating in the AI task.
  • the method further includes: the first communication device performs a second processing on the fourth data to obtain the AI data; wherein the second processing includes a second neural network processing.
  • when the AI data of the first communication device is obtained by receiving the fourth data, the first communication device can perform second processing, including at least second neural network processing, on the fourth data to obtain the AI data.
  • the first communication device after receiving the fourth data, the first communication device can at least perform a second neural network process on the fourth data to obtain the AI data on the transmission link.
  • the second neural network process can be used to participate in the AI task, so that the first communication device can participate in the AI task through the first neural network and the second neural network.
  • the first communication device performs the second processing on the fourth data to obtain the AI data, including: the first communication device performs the second processing on the fourth data to obtain fifth data; wherein the AI data is obtained by subjecting the fifth data to the first transform domain processing and/or resource de-mapping processing.
  • the first communication device can perform a second processing on the received fourth data to obtain the fifth data. Moreover, the first communication device can obtain the AI data by subjecting the fifth data to the first transform domain processing and/or resource demapping processing. In other words, the fifth data, as the processing result of the second neural network, can be used as an input for the first transform domain processing and/or resource demapping processing.
  • the first communication device can use the processing result obtained by neural network processing of the received data as the input of the transform domain processing and/or resource de-mapping processing in the physical layer, so that the first communication device can implement neural network processing without performing dequantization on the received data, which enables the computing power of the communication device to be applied to AI tasks while reducing processing latency.
  • the AI data may be data on the transmission link in the first communication device (for example, the AI data may be input for resource mapping processing and/or second transform domain processing on the transmission link).
  • the AI data is obtained by subjecting the fifth data to the first transform domain processing and/or resource de-mapping processing; that is, the processing result of the first transform domain processing and/or resource de-mapping processing on the receiving link in the first communication device can be used as the input of a certain processing on the transmission link.
  • the first communication device can avoid performing other physical layer processing (for example, demodulation, descrambling, de-rate matching, or channel decoding) after the resource de-mapping processing during receiving link processing of the communication signal, thereby further reducing the processing delay.
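The intermediate-node flow described above (fourth data → second transform domain processing → second neural network processing → transmit chain) could be sketched end to end as follows. Both "neural networks" are stood in for by random linear layers, the resource mapping/de-mapping steps are omitted for brevity, and all shapes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
w_rx = rng.standard_normal((n, n)) / np.sqrt(n)  # stand-in: second neural network
w_tx = rng.standard_normal((n, n)) / np.sqrt(n)  # stand-in: first neural network

# received time-domain signal (fourth data)
fourth_data = np.fft.ifft(rng.standard_normal(n))

# receiving link: second transform domain processing (FFT), then second
# neural network processing -- no demodulation, descrambling, de-rate
# matching, channel decoding, or dequantization
sixth_data = np.fft.fft(fourth_data)
fifth_data = np.tanh(w_rx @ sixth_data.real)
ai_data = fifth_data  # de-mapping omitted in this sketch

# transmission link: first neural network processing, then first
# transform domain processing (IFFT) to produce the forwarded third data
second_data = np.tanh(w_tx @ ai_data)
third_data = np.fft.ifft(second_data)
```

The sketch shows why the delay saving claimed above is plausible: the node never leaves the sample domain between reception and retransmission.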
  • the first communication device performs a second processing on the fourth data to obtain fifth data, including: the first communication device performs a second transform domain processing on the fourth data to obtain sixth data; and the first communication device performs the second processing on the sixth data to obtain the fifth data.
  • the first communication device can perform second transform domain processing on the fourth data to obtain sixth data; thereafter, the first communication device can perform second processing on the sixth data to obtain fifth data.
  • the sixth data is used as the input of the second processing, which includes at least the second neural network; the second transform domain processing can sample and transform the fourth data to obtain the sixth data adapted to the input of the second neural network.
  • the resource de-mapping process includes at least one of the following: dimensionality conversion processing, translation processing, rotation processing, and interleaving processing.
  • when AI data is used as the output of the resource de-mapping processing, the AI data may be obtained by processing the fifth data through at least one of the above items, which enhances the flexibility of the solution implementation.
  • the second processing also includes filtering processing, such as RRC filtering processing, or the filtering processing may include Bessel filtering, Butterworth filtering, Chebyshev filtering, elliptical filtering and other filtering processing.
  • the second processing may include filtering processing in addition to the second neural network processing, so as to filter the communication signal through the filtering processing and reduce the PAPR of the communication signal.
  • the fifth data is a processing result obtained by the first communication device performing at least the second neural network processing in the second processing on the fourth data, that is, the fourth data can be used as the input of the second neural network, and the second processing may not include filtering processing.
  • the function of the second neural network processing includes filtering processing; that is, no dedicated filtering module is needed, and the same or a similar filtering effect can be achieved through the second neural network processing.
  • the second transform domain processing includes any one of the following: Fourier transform processing, wavelet transform processing.
  • the Fourier transform processing may include fast Fourier transform (FFT) processing, discrete Fourier transform (DFT) processing, etc.
  • the first transform domain processing includes any one of the following: inverse Fourier transform processing, inverse wavelet transform processing.
  • the inverse Fourier transform may include inverse fast Fourier transform (IFFT), inverse discrete Fourier transform (IDFT), etc.
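Since the second transform domain processing (FFT) and the first transform domain processing (IFFT) are inverse operations, chaining them recovers the original samples up to numerical precision, which is what lets one device's transmit-side transform be undone at the other device's receive side:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

freq = np.fft.fft(x)      # second transform domain processing
back = np.fft.ifft(freq)  # first transform domain processing

# round trip recovers the original real samples
assert np.allclose(back.real, x)
assert np.allclose(back.imag, 0.0)
```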
  • the AI data includes original data of the AI task and/or feature data of the AI task.
  • the first communication device can participate in the AI task, and accordingly, the AI data processed by the first communication device may include the original data of the AI task and/or the characteristic data of the AI task to enhance the flexibility of the solution implementation.
  • the third data is obtained based on the second data after the first transform domain processing, including: the third data is obtained based on the processing result of the first transform domain processing of the second data and on the fourth data; or, the third data is obtained based on the processing result of the first transform domain processing of the second data alone.
  • the third data sent by the first communication device may include any of the above implementations, so that the data sent by the first communication device can be adapted to the processing requirements of different AI network architectures.
  • the second aspect of the present application provides a communication method, which is performed by a second communication device, or the method is performed by some components in the second communication device (such as a processor, a chip or a chip system, etc.), or the method can also be implemented by a logic module or software that can realize all or part of the functions of the second communication device.
  • the method is described as being performed by the second communication device.
  • the second communication device can be a communication node such as a terminal device or a network device in a communication system.
  • the second communication device receives third data; the second communication device performs second transform domain processing on the third data to obtain seventh data; the second communication device performs second processing on the seventh data to obtain AI data, where the second processing includes second neural network processing.
  • the second communication device performs a second transform domain processing on the received third data to obtain the seventh data, and then performs a second processing on the seventh data to obtain AI data; wherein the second processing includes a second neural network processing.
  • the AI data is the processing result obtained by the second communication device performing at least a second neural network processing on the seventh data
  • the seventh data is the processing result obtained after the third data received by the second communication device is processed by the second transform domain processing, where the second transform domain processing is part of the physical layer processing.
  • the second communication device can process the received data through the second transform domain in the physical layer, and use the processing result of the second transform domain processing as the input of the neural network processing, so that the second communication device can implement neural network processing without performing dequantization processing on the received data, which can enable the computing power of the communication device to be applied to AI tasks while reducing processing delays.
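The receiver-side flow for the second communication device (third data → second transform domain processing → seventh data → second neural network processing → AI data) could be sketched as below; the network weights and dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64

# received time-domain signal (third data)
third_data = np.fft.ifft(rng.standard_normal(n))

# second transform domain processing (FFT) applied directly to the
# received samples -- no dequantization beforehand
seventh_data = np.fft.fft(third_data)

# second neural network processing (a single illustrative layer)
w = rng.standard_normal((n, n)) / np.sqrt(n)
ai_data = np.tanh(w @ seventh_data.real)

print(ai_data.shape)  # (64,)
```

The transform output is handed straight to the network, mirroring on the receive side the quantization-free hand-off described for the transmit side.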
  • the AI data determined by the second communication device can be obtained by receiving the third data, so that when the second communication device participates in the AI task, the second communication device can serve as an intermediate node or tail node (or termination node) of multiple nodes participating in the AI task.
  • the second communication device performs a second processing on the seventh data to obtain AI data, including: the second communication device performs the second processing on the seventh data to obtain eighth data, wherein the AI data is obtained by subjecting the eighth data to first transform domain processing and/or resource de-mapping processing.
  • the second communication device can perform a second processing on the seventh data to obtain the eighth data; thereafter, the second communication device can perform a first transform domain processing and/or a de-resource mapping processing on the eighth data to obtain AI data.
  • the seventh data is used as the input of the second processing, which includes at least the second neural network; the second transform domain processing can sample and transform the received third data to obtain the seventh data adapted to the input of the second neural network.
  • the AI data is obtained by subjecting the eighth data to the first transform domain processing and/or de-resource mapping processing, that is, the first transform domain processing and/or de-resource mapping processing on the receiving link in the second communication device can obtain the AI data.
  • the second communication device can avoid performing other physical layer processing (such as demodulation, descrambling, de-rate matching, or channel decoding) after the resource de-mapping processing during receiving link processing of the communication signal, thereby further reducing the processing delay.
  • the resource de-mapping process includes at least one of the following: dimensionality conversion processing, translation processing, rotation processing, and interleaving processing.
  • when AI data is used as the output of the resource de-mapping processing, the AI data may be obtained by processing the eighth data through at least one of the above items, which enhances the flexibility of the solution implementation.
  • the second processing also includes filtering processing, such as RRC filtering processing.
  • the second processing may include filtering processing in addition to the second neural network processing, so as to filter the communication signal through the filtering processing and reduce the PAPR of the communication signal.
  • the AI data is the processing result obtained by the second communication device performing at least the second neural network processing in the second processing on the seventh data, that is, the seventh data can be used as the input of the second neural network, and the second processing may not include filtering processing.
  • the function of the second neural network processing includes filtering processing; that is, no dedicated filtering module is needed, and the same or a similar filtering effect can be achieved through the second neural network processing.
  • the second transform domain processing includes any one of the following: Fourier transform processing, wavelet transform processing.
  • the first transform domain processing includes any one of the following: inverse Fourier transform processing, inverse wavelet transform processing.
  • the AI data includes original data of the AI task and/or feature data of the AI task.
  • the second communication device can participate in the AI task.
  • the AI data processed by the second communication device may include the original data of the AI task and/or the characteristic data of the AI task to enhance the flexibility of the solution implementation.
  • a communication device which is a first communication device, or the device is a partial component (such as a processor, a chip or a chip system, etc.) in the first communication device, or the device can also be a logic module or software that can implement all or part of the functions of the first communication device.
  • the communication device is described as the first communication device, and the first communication device can be a terminal device or a network device.
  • the device includes a processing unit and a transceiver unit; the processing unit is used to perform a first processing on the first data to obtain the second data; wherein the first data is obtained based on artificial intelligence AI data, and the first processing includes a first neural network processing; the transceiver unit is used to send the third data, and the third data is obtained based on the second data after the first transform domain processing.
  • the first data is obtained based on AI data, including: the first data is obtained by subjecting the AI data to resource mapping processing and/or second transform domain processing.
  • the resource mapping process includes at least one of the following: dimension conversion processing, translation processing, rotation processing, and interleaving processing.
  • the first processing further includes filtering processing, such as root raised cosine RRC filtering processing.
  • the transceiver unit is further used to receive fourth data, wherein the AI data is obtained based on the fourth data.
  • the processing unit is further used to perform a second processing on the fourth data to obtain the AI data; wherein the second processing includes a second neural network processing.
  • the processing unit is used to perform a second processing on the fourth data to obtain the AI data, including: the processing unit performs the second processing on the fourth data to obtain fifth data; wherein the AI data is obtained by subjecting the fifth data to the first transform domain processing and/or de-resource mapping processing.
  • the processing unit performs the second processing on the fourth data to obtain fifth data, including: the processing unit performs the second transform domain processing on the fourth data to obtain sixth data; and the processing unit performs the second processing on the sixth data to obtain the fifth data.
  • the resource de-mapping process includes at least one of the following: dimensionality conversion processing, translation processing, rotation processing, and interleaving processing.
  • the second processing also includes filtering processing, such as RRC filtering processing.
  • the second transform domain processing includes any one of the following: Fourier transform processing and wavelet transform processing.
  • the first transform domain processing includes any one of the following: inverse Fourier transform processing and inverse wavelet transform processing.
  • the AI data includes original data of the AI task and/or feature data of the AI task.
  • the third data is obtained based on the second data processed by the first transform domain, including: the third data is obtained based on the processing result obtained by processing the second data by the first transform domain and the fourth data; or, the third data is obtained based on the processing result obtained by processing the second data by the first transform domain.
  • a communication device which is a second communication device, or the device is a partial component (such as a processor, a chip or a chip system, etc.) in the second communication device, or the device can also be a logic module or software that can implement all or part of the functions of the second communication device.
  • the communication device is described as an example of the second communication device, and the second communication device can be a terminal device or a network device.
  • the device includes a processing unit and a transceiver unit; the transceiver unit is used to receive third data; the processing unit is used to perform a second transform domain processing on the third data to obtain seventh data; the processing unit is also used to perform a second processing on the seventh data to obtain AI data; wherein the second processing includes a second neural network processing.
  • the processing unit is used to perform a second processing on the seventh data to obtain AI data, including: the processing unit performs the second processing on the seventh data to obtain eighth data, wherein the AI data is obtained by subjecting the eighth data to first transform domain processing and/or de-resource mapping processing.
  • the resource de-mapping process includes at least one of the following: dimensionality conversion processing, translation processing, rotation processing, and interleaving processing.
  • the second processing also includes filtering processing, such as RRC filtering processing.
  • the second transform domain processing includes any one of the following: Fourier transform processing and wavelet transform processing.
  • the first transform domain processing includes any one of the following: inverse Fourier transform processing and inverse wavelet transform processing.
  • the AI data includes original data of the AI task and/or feature data of the AI task.
  • the present application provides a communication device, comprising at least one processor, wherein the at least one processor is coupled to a memory; the memory is used to store programs or instructions; the at least one processor is used to execute the program or instructions so that the device implements the method described in any possible implementation method of any one of the first to second aspects.
  • the present application provides a communication device, comprising at least one logic circuit and an input/output interface; the logic circuit is used to execute the method described in any possible implementation method of any one of the first to second aspects above.
  • a seventh aspect of the present application provides a communication system, which includes the above-mentioned first communication device and second communication device.
  • the communication system further includes a first data sending device and a first data receiving device, wherein the first data sending device may be the first communication device or the second communication device or other communication device, and the first data receiving device may also be the first communication device or the second communication device or other communication device.
  • the present application provides a computer-readable storage medium, which is used to store one or more computer-executable instructions.
  • the processor executes a method as described in any possible implementation of any one of the first to second aspects above.
  • the present application provides a chip system, which includes at least one processor for supporting a communication device to implement the method described in any possible implementation of any one of the first to second aspects.
  • the chip system may also include a memory for storing program instructions and data necessary for the communication device.
  • the chip system may be composed of a chip, or may include a chip and other discrete devices.
  • the chip system also includes an interface circuit, which provides program instructions and/or data for the at least one processor.
  • the technical effects brought about by any design method in the third aspect to the tenth aspect can refer to the technical effects brought about by the different design methods in the above-mentioned first aspect to the second aspect, and will not be repeated here.
  • FIGS. 1a to 1c are schematic diagrams of a communication system provided by the present application.
  • FIG3 is an interactive schematic diagram of the communication method provided by the present application.
  • FIGS. 4a to 4c are schematic diagrams of the AI processing process provided by the present application.
  • FIG. 5 is another schematic diagram of the AI processing process provided by the present application.
  • FIG. 6 is another interactive schematic diagram of the communication method provided by the present application.
  • FIG. 7 is another schematic diagram of the AI processing process provided by the present application.
  • FIG. 8 is a schematic diagram of a communication device provided by the present application.
  • FIG. 9 is another schematic diagram of a communication device provided by the present application.
  • FIG. 10 is another schematic diagram of a communication device provided by the present application.
  • FIG. 11 is another schematic diagram of a communication device provided by the present application.
  • FIG. 12 is another schematic diagram of the communication device provided in the present application.
  • Terminal device: it can be a wireless terminal device that can receive scheduling and instruction information from a network device.
  • the wireless terminal device can be a device that provides voice and/or data connectivity to users, or a handheld device with wireless connection function, or other processing devices connected to a wireless modem.
  • the terminal device can communicate with one or more core networks or the Internet via a radio access network (RAN).
  • the terminal device can be a mobile terminal device, such as a mobile phone (also called a "cellular" phone), a computer, or a data card; for example, it can be a portable, pocket-sized, handheld, computer-built-in, or vehicle-mounted mobile device that exchanges voice and/or data with the radio access network, such as a personal communication service (PCS) phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a tablet computer (Pad), or a computer with wireless transceiver functions.
  • the wireless terminal device can also be called a system, a subscriber unit, a subscriber station, a mobile station (MS), a remote station, an access point (AP), a remote terminal device (remote terminal), an access terminal device (access terminal), a user terminal device (user terminal), a user agent (user agent), a subscriber station (SS), a customer premises equipment (CPE), a terminal, a user equipment (UE), a mobile terminal (MT), etc.
  • the terminal device may also be a wearable device.
  • Wearable devices may also be referred to as wearable smart devices or smart wearable devices, etc., which are a general term for the application of wearable technology to intelligently design and develop wearable devices for daily wear, such as glasses, gloves, watches, clothing and shoes.
  • a wearable device is a portable device that is worn directly on the body or integrated into the user's clothes or accessories. A wearable device is not only a hardware device, but also achieves powerful functions through software support, data interaction, and cloud interaction.
  • generalized wearable smart devices include devices that are full-featured and large-sized and can achieve complete or partial functions without relying on smartphones, such as smart watches or smart glasses, as well as devices that focus on only a certain type of application function and need to be used in conjunction with other devices such as smartphones, such as various smart bracelets, smart helmets, and smart jewelry for vital sign monitoring.
  • the terminal can also be a drone, a robot, a terminal in device-to-device (D2D) communication, a terminal in vehicle to everything (V2X), a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self driving, a wireless terminal in remote medical, a wireless terminal in smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, etc.
  • the terminal device may also be a terminal device in a communication system that evolves after the fifth generation (5th generation, 5G) communication system (e.g., a sixth generation (6th generation, 6G) communication system, etc.) or a terminal device in a public land mobile network (PLMN) that evolves in the future, etc.
  • the 6G network can further expand the form and function of the 5G communication terminal
  • the 6G terminal includes but is not limited to a car, a cellular network terminal (with integrated satellite terminal function), a drone, and an Internet of Things (IoT) device.
  • the terminal device may also obtain AI services provided by the network device.
  • the terminal device may also have AI processing capabilities.
  • the network equipment can be a RAN node (or device) that connects a terminal device to a wireless network, which can also be called a base station.
  • Examples of RAN equipment include: a base station, an evolved NodeB (eNodeB), a gNB (gNodeB) in a 5G communication system, a transmission reception point (TRP), an evolved Node B (eNB), a radio network controller (RNC), a Node B (NB), a home base station (e.g., a home evolved Node B or home Node B, HNB), a baseband unit (BBU), or a wireless fidelity (Wi-Fi) access point (AP), etc.
  • the network equipment may include a centralized unit (CU) node, a distributed unit (DU) node, or a RAN device including a CU node and a DU node.
  • the RAN node can also be a macro base station, a micro base station or an indoor station, a relay node or a donor node, or a wireless controller in a cloud radio access network (CRAN) scenario.
  • the RAN node can also be a server, a wearable device, a vehicle or an onboard device, etc.
  • the access network device in the vehicle to everything (V2X) technology can be a road side unit (RSU).
  • the RAN node can be a central unit (CU), a distributed unit (DU), a CU-control plane (CP), a CU-user plane (UP), or a radio unit (RU).
  • the CU and DU can be set separately, or can also be included in the same network element, such as a baseband unit (BBU).
  • the RU may be included in a radio frequency device or a radio frequency unit, for example, in a remote radio unit (RRU), an active antenna unit (AAU) or a remote radio head (RRH).
  • In different systems, the CU (or the CU-CP and the CU-UP), the DU, or the RU may also have different names, but those skilled in the art can understand their meanings.
  • For example, the CU may also be called an O-CU (open CU), the DU may also be called an O-DU, the CU-CP may also be called an O-CU-CP, the CU-UP may also be called an O-CU-UP, and the RU may also be called an O-RU.
  • CU, CU-CP, CU-UP, DU and RU are used as examples for description in this application.
  • Any unit of CU (or CU-CP, CU-UP), DU and RU in this application may be implemented by a software module, a hardware module, or a combination of a software module and a hardware module.
  • the protocol layer may include a control plane protocol layer and a user plane protocol layer.
  • the control plane protocol layer may include at least one of the following: a radio resource control (RRC) layer, a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, a media access control (MAC) layer, or a physical (PHY) layer.
  • the user plane protocol layer may include at least one of the following: a service data adaptation protocol (SDAP) layer, a PDCP layer, an RLC layer, a MAC layer, or a physical layer.
  • the network device may be any other device that provides wireless communication functions for the terminal device.
  • The embodiments of the present application do not limit the specific technology or the specific device form used by the network device.
  • the network equipment may also include core network equipment, such as mobility management entity (MME), home subscriber server (HSS), serving gateway (S-GW), policy and charging rules function (PCRF), public data network gateway (PDN gateway, P-GW) in the fourth generation (4G) network; access and mobility management function (AMF), user plane function (UPF) or session management function (SMF) in the 5G network.
  • The network equipment may also include other core network equipment in the 5G network and in the next-generation network of the 5G network.
  • the above-mentioned network device may also be a network node with AI capabilities, which can provide AI services for terminals or other network devices.
  • for example, it may be an AI node on the network side (access network or core network), a computing node, a RAN node with AI capabilities, or a core network element with AI capabilities, etc.
  • the device for realizing the function of the network device may be a network device, or may be a device capable of supporting the network device to realize the function, such as a chip system, which may be installed in the network device.
  • the technical solution provided in the embodiments of the present application is described by taking the case where the device for realizing the function of the network device is the network device itself as an example.
  • In the present application, "configuration" and "pre-configuration" may both be used.
  • Configuration refers to the network device/server sending some parameter configuration information or parameter values to the terminal through messages or signaling, so that the terminal can determine the communication parameters or resources during transmission based on these values or information.
  • Pre-configuration is similar to configuration, and can be parameter information or parameter values pre-negotiated between the network device/server and the terminal device, or parameter information or parameter values used by the base station/network device or terminal device specified by the standard protocol, or parameter information or parameter values pre-stored in the base station/server or terminal device. This application does not limit this.
  • "Send" and "receive" in the embodiments of the present application indicate the direction of signal transmission.
  • "Send information to XX" can be understood as meaning that the destination of the information is XX, which can include sending directly to XX through the air interface, and also sending indirectly to XX through the air interface via other units or modules.
  • "Receive information from YY" can be understood as meaning that the source of the information is YY, which can include receiving directly from YY through the air interface, and also receiving indirectly from YY through the air interface via other units or modules.
  • "Send" can also be understood as the "output" of a chip interface, and "receive" can also be understood as the "input" of a chip interface.
  • sending and receiving can be performed between devices, for example, between a network device and a terminal device, or can be performed within a device, for example, sending or receiving between components, modules, chips, software modules, or hardware modules within the device through a bus, wiring, or interface.
  • information may be processed between the source and destination of information transmission, such as coding, modulation, etc., but the destination can understand the valid information from the source. Similar expressions in this application can be understood similarly and will not be repeated.
  • indication may include direct indication and indirect indication, and may also include explicit indication and implicit indication.
  • the information indicated by a certain piece of information is called the information to be indicated. In the specific implementation process, there are many ways to indicate the information to be indicated, such as, but not limited to, directly indicating the information to be indicated, for example, the information to be indicated itself or an index of the information to be indicated.
  • the information to be indicated may also be indirectly indicated by indicating other information, wherein the other information is associated with the information to be indicated; or only a part of the information to be indicated may be indicated, while the other part of the information to be indicated is known or agreed in advance.
  • the indication of specific information may be realized by means of the arrangement order of each information agreed in advance (such as predefined by the protocol), thereby reducing the indication overhead to a certain extent.
  • the present application does not limit the specific method of indication. It is understandable that, for the sender of the indication information, the indication information may be used to indicate the information to be indicated, and for the receiver of the indication information, the indication information may be used to determine the information to be indicated.
  • the present application can be applied to a long term evolution (LTE) system, a new radio (NR) system, or a communication system evolved after 5G (such as 6G, etc.), wherein the communication system includes at least one network device and/or at least one terminal device.
  • FIG. 1a is a schematic diagram of a communication system in the present application.
  • FIG. 1a shows a network device and six terminal devices, which are terminal device 1, terminal device 2, terminal device 3, terminal device 4, terminal device 5, and terminal device 6.
  • terminal device 1 is a smart tea cup
  • terminal device 2 is a smart air conditioner
  • terminal device 3 is a smart gas station
  • terminal device 4 is a means of transportation
  • terminal device 5 is a mobile phone
  • terminal device 6 is a printer.
  • the AI configuration information sending entity may be a network device.
  • the AI configuration information receiving entity may be a terminal device 1-terminal device 6.
  • the network device and the terminal device 1-terminal device 6 form a communication system.
  • the terminal device 1-terminal device 6 may send data to the network device, and the network device needs to receive the data sent by the terminal device 1-terminal device 6.
  • the network device may send configuration information to the terminal device 1-terminal device 6.
  • terminal device 4-terminal device 6 can also form a communication system.
  • terminal device 5 serves as a network device, that is, an AI configuration information sending entity
  • terminal device 4 and terminal device 6 serve as terminal devices, that is, AI configuration information receiving entities.
  • terminal device 5 sends AI configuration information to terminal device 4 and terminal device 6 respectively, and receives data sent by terminal device 4 and terminal device 6; correspondingly, terminal device 4 and terminal device 6 receive AI configuration information sent by terminal device 5, and send data to terminal device 5.
  • different devices may also execute AI-related services.
  • the base station can perform communication-related services and AI-related services with one or more terminal devices, and communication-related services and AI-related services can also be performed between different terminal devices.
  • communication-related services and AI-related services can also be performed between the TV and the mobile phone.
  • an AI network element can be introduced into the communication system provided in the present application to implement some or all AI-related operations.
  • the AI network element may also be referred to as an AI node, an AI device, an AI entity, an AI module, an AI model, or an AI unit, etc.
  • the AI network element may be a network element built into a communication system.
  • the AI network element may be an AI module built into: an access network device, a core network device, a cloud server, or a network management (operation, administration and maintenance, OAM) to implement AI-related functions.
  • the OAM may be a network management device for a core network device and/or a network management device for an access network device.
  • the AI network element may also be a network element independently set up in the communication system.
  • the terminal or the chip built into the terminal may also include an AI entity to implement AI-related functions.
  • Artificial intelligence (AI) allows machines to have human-like intelligence; for example, it can allow machines to use computer hardware and software to simulate certain intelligent behaviors of humans.
  • machine learning methods can be used.
  • machines use training data to learn (or train) a model.
  • the model represents the mapping from input to output.
  • the learned model can be used for reasoning (or prediction), that is, the model can be used to predict the output corresponding to a given input. Among them, the output can also be called the reasoning result (or prediction result).
  • Machine learning can include supervised learning, unsupervised learning, and reinforcement learning.
  • Supervised learning uses machine learning algorithms to learn the mapping relationship from sample values to sample labels based on the collected sample values and sample labels, and uses AI models to express the learned mapping relationship.
  • the process of training a machine learning model is the process of learning this mapping relationship.
  • the sample values are input into the model to obtain the model's predicted values, and the model parameters are optimized by calculating the error between the model's predicted values and the sample labels (ideal values).
  • the learned mapping can be used to predict new sample labels.
  • the mapping relationship learned by supervised learning can include linear mapping or nonlinear mapping. According to the type of label, the learning task can be divided into classification task and regression task.
  • Unsupervised learning uses algorithms to discover the inherent patterns of samples based on the collected sample values.
  • One type of algorithm in unsupervised learning uses the samples themselves as supervisory signals, that is, the model learns the mapping relationship from sample to sample, which is called self-supervised learning.
  • the model parameters are optimized by calculating the error between the model's predicted value and the sample itself.
  • Self-supervised learning can be used in applications such as signal compression and decompression recovery.
  • Common algorithms include autoencoders and adversarial generative networks.
  • Reinforcement learning is different from supervised learning. It is a type of algorithm that learns problem-solving strategies by interacting with the environment. Unlike supervised and unsupervised learning, reinforcement learning problems do not have clear "correct" action label data.
  • the algorithm needs to interact with the environment to obtain reward signals from the environment, and then adjust the decision-making actions to obtain a larger reward signal value. For example, in downlink power control, the reinforcement learning model adjusts the downlink transmission power of each user according to the total system throughput fed back by the wireless network, and then expects to obtain a higher system throughput.
  • the goal of reinforcement learning is also to learn the mapping relationship between the state of the environment and the better (e.g., optimal) decision action.
  • the network cannot be optimized by calculating the error between the action and the "correct action”. Reinforcement learning training is achieved through iterative interaction with the environment.
  • A neural network is a specific model in machine learning technology. According to the universal approximation theorem, a neural network can theoretically approximate any continuous function, which gives the neural network the ability to learn any mapping.
  • Traditional communication systems require rich expert knowledge to design communication modules, while deep learning communication systems based on neural networks can automatically discover implicit pattern structures from a large number of data sets, establish mapping relationships between data, and obtain performance that is superior to traditional modeling methods.
  • each neuron performs a weighted sum operation on its input values and outputs the operation result through an activation function.
  • FIG. 1d is a schematic diagram of a neuron structure.
  • Assume that the inputs of the neuron are x1, x2, …, xn, that the weights by which the inputs are weighted are w1, w2, …, wn, and that the bias of the weighted sum is b. The activation function can take many forms. Assuming that the activation function of the neuron is f, the output of the neuron is: y = f(w1·x1 + w2·x2 + … + wn·xn + b).
  • b can be a decimal, an integer (eg, 0, a positive integer or a negative integer), or a complex number, etc.
  • the activation functions of different neurons in a neural network can be the same or different.
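As an illustrative sketch (not part of the patent text), the weighted-sum-plus-activation behavior of a single neuron described above can be written as follows; the input values, weights, bias, and the choice of a sigmoid activation are all arbitrary examples:

```python
import math

def neuron_output(inputs, weights, bias, activation=None):
    """Compute f(sum_i(w_i * x_i) + b): a weighted sum of the inputs
    plus a bias, passed through an activation function."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    if activation is None:
        # Sigmoid is just one common choice of activation function.
        activation = lambda v: 1.0 / (1.0 + math.exp(-v))
    return activation(z)

# A neuron with two inputs; the numbers are illustrative only.
y = neuron_output([1.0, 2.0], [0.5, -0.25], 0.1)
```

With these example values the weighted sum is 0.5 − 0.5 + 0.1 = 0.1, so the output is sigmoid(0.1), a value slightly above 0.5.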
  • a neural network generally includes multiple layers, each of which may include one or more neurons.
  • the expressive power of the neural network can be improved, providing a more powerful information extraction and abstract modeling capability for complex systems.
  • the depth of a neural network may refer to the number of layers included in the neural network, and the number of neurons included in each layer may be referred to as the width of the layer.
  • the neural network includes an input layer and an output layer. The input layer of the neural network processes the received input information through neurons, passes the processing results to the output layer, and the output layer obtains the output result of the neural network.
  • the neural network includes an input layer, a hidden layer, and an output layer.
  • the input layer of the neural network processes the received input information through neurons, passes the processing results to the middle hidden layer, the hidden layer calculates the received processing results, obtains the calculation results, and the hidden layer passes the calculation results to the output layer or the next adjacent hidden layer, and finally the output layer obtains the output result of the neural network.
  • a neural network may include one hidden layer, or include multiple hidden layers connected in sequence, without limitation.
  • a neural network is, for example, a deep neural network (DNN).
  • DNNs can include feedforward neural networks (FNN), convolutional neural networks (CNN), and recurrent neural networks (RNN).
  • FNN feedforward neural networks
  • CNN convolutional neural networks
  • RNN recurrent neural networks
  • FIG. 1e is a schematic diagram of an FNN network.
  • the characteristic of the FNN network is that the neurons in adjacent layers are fully connected to each other. This characteristic makes FNN usually require a large amount of storage space and leads to high computational complexity.
  • CNN is a neural network that is specifically designed to process data with a grid-like structure. For example, time series data (discrete sampling on the time axis) and image data (discrete sampling on two dimensions) can be considered to be data with a grid-like structure.
  • CNN does not use all the input information for calculations at once, but uses a fixed-size window to intercept part of the information for convolution operations, which greatly reduces the amount of calculation of model parameters.
  • each window can use different convolution kernel operations, which enables CNN to better extract the features of the input data.
  • RNN is a type of DNN network that uses feedback time series information. Its input includes the new input value at the current moment and its own output value at the previous moment. RNN is suitable for obtaining sequence features that are correlated in time, and is particularly suitable for applications such as speech recognition and channel coding.
  • a loss function can be defined.
  • the loss function describes the gap or difference between the output value of the model and the ideal target value.
  • the loss function can be expressed in many forms, and there is no restriction on the specific form of the loss function.
  • the model training process can be regarded as the following process: by adjusting some or all parameters of the model, the value of the loss function is less than the threshold value or meets the target requirements.
  • Models can also be referred to as AI models, rules or other names.
  • AI models can be considered as specific methods for implementing AI functions.
  • AI models characterize the mapping relationship or function between the input and output of a model.
  • AI functions may include one or more of the following: data collection, model training (or model learning), model information publishing, model inference (or model reasoning, inference, or prediction, etc.), model monitoring or model verification, or reasoning result publishing, etc.
  • AI functions can also be referred to as AI (related) operations, or AI-related functions.
  • A fully connected neural network is also called a multilayer perceptron (MLP).
  • an MLP consists of an input layer (left), an output layer (right), and multiple hidden layers (middle).
  • Each layer of the MLP contains several nodes, called neurons. The neurons in two adjacent layers are connected to each other.
  • the output of one layer can be expressed as y = f(w·x + b), where x is the input of the layer, w is the weight matrix, b is the bias vector, and f is the activation function.
  • a neural network can be understood as a mapping relationship from an input data set to an output data set.
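As an illustrative sketch (not from the patent), the layer-by-layer mapping of an MLP, where each layer applies its weight matrix w, bias vector b, and activation function f, can be coded as follows; the layer sizes, random weights, and tanh activation are arbitrary example choices:

```python
import numpy as np

def mlp_forward(x, layers, f=np.tanh):
    """Propagate x through each (w, b) layer: x <- f(w @ x + b)."""
    for w, b in layers:
        x = f(w @ x + b)
    return x

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),   # hidden layer: 3 inputs -> 4 neurons
          (rng.standard_normal((2, 4)), np.zeros(2))]   # output layer: 4 inputs -> 2 neurons
y = mlp_forward(np.ones(3), layers)
```

Because tanh is used here, every output component lies in (−1, 1); the same loop works for any depth of network, illustrating the "mapping from input data set to output data set" view.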
  • neural networks are randomly initialized, and the process of obtaining this mapping relationship from random w and b using existing data is called neural network training.
  • the specific method of training is to use a loss function to evaluate the output results of the neural network.
  • the error can be back-propagated, and the neural network parameters (including w and b) can be iteratively optimized by the gradient descent method until the loss function reaches a minimum value, that is, the "better point (e.g., optimal point)" in FIG. 2b.
  • the neural network parameters corresponding to the "better point (e.g., optimal point)" in FIG. 2b can be used as the neural network parameters in the trained AI model information.
  • the gradient descent process can be expressed as: θ ← θ − η·∂L/∂θ, where θ is the parameter to be optimized (including w and b), L is the loss function, and η is the learning rate, which controls the step size of gradient descent.
  • the back-propagation process utilizes the chain rule for partial derivatives.
  • the gradient of the previous layer's parameters can be recursively calculated from the gradient of the next layer's parameters, which can be expressed as: ∂L/∂w_ij = (∂L/∂s_i)·(∂s_i/∂w_ij), where w_ij is the weight with which node j is connected to node i, and s_i is the weighted sum of the inputs on node i.
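The gradient descent update described above can be illustrated with a toy one-parameter example (a sketch only; the quadratic loss and the learning rate are illustrative choices, not from the patent):

```python
def gradient_descent(grad, theta, lr=0.1, steps=100):
    """Repeat the update rule theta <- theta - lr * dL/dtheta."""
    for _ in range(steps):
        theta = theta - lr * grad(theta)
    return theta

# Example loss L(theta) = (theta - 3)^2, whose gradient is 2 * (theta - 3).
# The minimum is at theta = 3, so the iteration should converge there.
theta_star = gradient_descent(lambda t: 2.0 * (t - 3.0), theta=0.0)
```

Each step multiplies the distance to the minimum by (1 − 2·lr), so with lr = 0.1 the error shrinks by a factor of 0.8 per step and theta_star ends up very close to 3.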
  • Federated learning (FL): the centralized (server-client) architecture is the most widely used training architecture in the current FL field.
  • the FedAvg algorithm is the basic algorithm of FL. Its algorithm flow is as follows:
  • the center initializes the model to be trained, denoted w(0), and broadcasts it to all client devices.
  • in each round t, the central node aggregates and collects local training results from all (or some) clients. Assume that the set of clients that upload local models in round t is St; the center uses the number of samples nk of each corresponding client k as the weight to perform a weighted average and obtain a new global model, with the specific update rule: w(t+1) = Σk∈St (nk / Σj∈St nj)·wk(t). The center then broadcasts the latest version of the global model w(t+1) to all client devices for a new round of training.
  • in addition to reporting local models, the clients can also report the local gradients obtained from training; the central node then averages the local gradients and updates the global model along the direction of the average gradient.
  • the data set exists in the distributed nodes, that is, the distributed nodes collect local data sets, perform local training, and report the local results (models or gradients) obtained from the training to the central node.
  • the central node itself does not have a data set, and is only responsible for fusing the training results of the distributed nodes to obtain the global model and send it to the distributed nodes.
  • Decentralized learning: different from federated learning, there is another distributed learning architecture, namely decentralized learning.
  • the design goal f(x) of a decentralized learning system is generally the mean of the goals fi(x) of each node, that is, f(x) = (1/n)·Σi fi(x), where n is the number of distributed nodes and x is the parameter to be optimized. In machine learning, x is the parameter of the machine learning (such as neural network) model.
  • Each node i uses local data and the local target fi(x) to calculate the local gradient ∇fi(x), and then sends it to the neighboring nodes with which it can communicate. After any node receives the gradient information sent by its neighbors, it can update the parameter x of the local model, for example as: x ← x − (1/|Ni|)·Σk∈Ni αk·∇fk(x), where αk represents the tuning coefficient, Ni is the set of neighbor nodes of node i, and |Ni| represents the number of elements in the set of neighbor nodes of node i, that is, the number of neighbor nodes of node i.
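The neighbor-gradient update of decentralized learning can be sketched as follows (illustrative only; uniform averaging of the received neighbor gradients and the learning rate are example choices standing in for the tuning coefficients):

```python
import numpy as np

def decentralized_step(x, neighbor_grads, lr=0.1):
    """One local update: move x against the average of the gradient
    vectors received from this node's neighbors."""
    avg_grad = sum(neighbor_grads) / len(neighbor_grads)
    return x - lr * avg_grad

# A node with two neighbors; gradients are made-up example values.
x = np.array([1.0, -2.0])
grads = [np.array([0.5, 0.5]), np.array([1.5, -0.5])]
x_new = decentralized_step(x, grads)
```

Here the averaged gradient is [1.0, 0.0], so the node moves its first parameter by −0.1 and leaves the second unchanged; there is no central node, only exchanges with neighbors.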
  • In a wireless communication system (e.g., the system shown in FIG. 1a or FIG. 1b), a communication node generally has signal transceiving capability and computing capability.
  • the computing capability of the network device is mainly to provide computing power support for the signal transceiving capability (e.g., sending and receiving signals) to realize the communication task between the network device and other communication nodes.
  • beyond providing computing power support for the above communication tasks, a communication node may have surplus computing power. Therefore, how to utilize this surplus computing power is a technical problem that needs to be solved urgently.
  • the communication node can be used as a participating node in the AI learning system, and the computing power of the communication node can be applied to a certain link of the AI learning system.
  • For deep learning models with massive parameters, such as bidirectional encoder representations from transformers (BERT) and generative pre-trained transformers (GPT-2), the reasoning process of the model is limited by device capacity, so large models are generally stored on cloud central servers.
  • each device in the network generates a huge amount of raw data every day, which requires multiple calls to the large model for reasoning.
  • the device (such as a communication node) can send data to the central server, the central server uses the data for reasoning, and then the central server returns the reasoning result to the device.
  • This process will consume a lot of communication resources for transmitting data, and the privacy of device data will also be at risk.
  • both Node 1 and Node 2 can be communication nodes, such as terminal devices or network devices.
  • the neural network used by the AI learning system can include at least two sub-neural networks, namely, sub-neural network 1 deployed at Node 1 and sub-neural network 2 deployed at Node 2.
  • the processing result of sub-neural network 1 in Node 1 will be used as the processing input of sub-neural network 2 of Node 2.
  • the neural network can be split into more parts, deployed on more devices, and let each device sequentially complete the reasoning of the entire neural network.
  • the neural network is split into at least two sub-neural networks as an example.
  • the neural network is split and deployed on different devices.
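The split described above can be sketched as a minimal numerical example: one network is partitioned into two sequentially executed sub-networks, and node 1's intermediate result feeds node 2. The layer shapes, ReLU activations, and random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(v):
    return np.maximum(v, 0.0)

# A 3-layer network split into two sub-networks deployed on two nodes
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 8))
W3 = rng.standard_normal((8, 2))

def full_network(x):
    return relu(relu(x @ W1) @ W2) @ W3

def sub_network_1(x):             # runs on node 1
    return relu(relu(x @ W1) @ W2)

def sub_network_2(h):             # runs on node 2, consumes node 1's result
    return h @ W3

x = rng.standard_normal(4)
intermediate = sub_network_1(x)   # this is what node 1 transmits to node 2
assert np.allclose(full_network(x), sub_network_2(intermediate))
```

The same pattern extends to more than two parts: each device computes its sub-network and forwards the intermediate activations to the next device in the chain.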
  • the processing of the sub-neural network is usually completed at the application layer of the protocol stack, and the intermediate results (real number symbols) are converted into bit sequences after processing (quantization, source coding, etc.), and sent to other devices via the physical layer communication link.
  • node 1 obtains the processing result of the neural network through the processing of the sub-neural network 1 deployed in the application layer; thereafter, the node 1 can obtain a bit stream by quantizing the processing result of the neural network in the application layer, and subsequently send the bit stream after encoding, modulation, FFT, filtering, IFFT and other processing at the physical layer.
  • the data received by node 2 through the wireless channel is processed by the physical layer's FFT, filtering, IFFT, demodulation, decoding, etc., and then transformed into data that can be recognized by the application layer through dequantization; thereafter, the sub-neural network 2 deployed at the application layer performs the next step of neural network processing.
  • the physical layer processing involved in node 1 is only an implementation example, and other physical layer processing may be involved in the data transmission process, including but not limited to one or more of rate matching, scrambling, layer mapping, precoding, resource element (RE) mapping, digital beam mapping (beamforming, BF), and adding a cyclic prefix (CP).
  • the physical layer processing involved in node 2 may also involve other physical layer processing, including but not limited to one or more of rate matching, descrambling, layer mapping, channel equalization, RE mapping, digital beam mapping (beamforming, BF), and removing a cyclic prefix (CP).
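The transmit and receive transform-domain chains described above can be sketched as follows. The random QPSK symbols standing in for encoded and modulated bits, the all-pass frequency-domain filter, and the noiseless channel are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64

# Node 1 transmit chain (encoding/modulation abbreviated to random QPSK symbols)
symbols = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
freq = np.fft.fft(symbols)                 # FFT (DFT spreading)
freq_filtered = freq * np.ones(N)          # frequency-domain filtering (all-pass here)
tx = np.fft.ifft(freq_filtered)            # IFFT to the time-domain waveform

rx = tx                                    # ideal, noiseless channel (assumption)

# Node 2 receive chain mirrors the transmit chain
freq_rx = np.fft.fft(rx)                   # FFT
freq_rx_filtered = freq_rx * np.ones(N)    # receive-side filtering (all-pass here)
recovered = np.fft.ifft(freq_rx_filtered)  # IFFT (de-spreading)

assert np.allclose(recovered, symbols)
```

With an all-pass filter and an ideal channel, the receive FFT/IFFT pair exactly inverts the transmit chain, which is why the intermediate transform-domain results can serve as natural interfaces for neural network processing.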
  • computing e.g., neural network processing, including training, reasoning, etc.
  • communication e.g., transmission of neural network processing results, including transmission of training results, reasoning results and other data
  • the present application provides a communication method and related equipment, which are used to enable the computing power of communication nodes to be applied to the processing of artificial intelligence (AI) tasks while also reducing processing delays.
  • AI artificial intelligence
  • FIG3 is a schematic diagram of an implementation of the communication method provided in the present application.
  • the method includes the following steps.
  • the method is illustrated by taking the first communication device and the second communication device as the execution subject of the interaction diagram as an example, but the present application does not limit the execution subject of the interaction diagram.
  • the execution subject of the method can be replaced by a chip, a chip system, a processor, a logic module or software in a communication device.
  • the first communication device can be a terminal device and the second communication device can be a network device, or the first communication device can be a network device and the second communication device can be a terminal device, or the first communication device and the second communication device are both terminal devices (for example, the method can be applied to the communication process of different terminal devices in a side link communication scenario).
  • the first communication device performs a first process on the first data to obtain second data, wherein the first data is obtained based on artificial intelligence AI data, and the first process includes a first neural network process.
  • the first communication device sends third data, and correspondingly, the second communication device receives the third data, wherein the third data is obtained based on the second data after being processed in the first transform domain.
  • AI neural network
  • AI neural network machine learning
  • AI processing AI neural network processing
  • the data involved in the present application can be replaced by information, signals, etc.
  • the first transform domain processing performed on the second data includes any of the following: inverse Fourier transform processing, inverse wavelet transform processing.
  • the inverse Fourier transform may include inverse fast Fourier transform (IFFT), inverse discrete Fourier transform (IDFT), etc.
  • IFFT inverse fast Fourier transform
  • IDFT inverse discrete Fourier transform
  • the first transform domain processing may perform one-dimensional IFFT processing.
  • the first data is obtained based on the AI data, including: the first data may be obtained by subjecting the AI data to resource mapping processing and/or second transform domain processing.
  • the first data may be obtained by subjecting AI data to resource mapping processing and/or second transform domain processing, wherein the first communication device may subject AI data to resource mapping processing and/or second transform domain processing to obtain the first data, and then use the first data as the input of the first processing including at least the first neural network to obtain the second data.
  • AI data may be used as the input of resource mapping processing and/or the second transform domain, so that the first communication device does not need to perform other physical layer processing (such as encoding, rate matching, scrambling, modulation, etc.) before the resource mapping processing during the transmission link processing of the communication signal, which can further reduce the processing delay.
  • the second transform domain processing includes any one of the following: Fourier transform processing, wavelet transform processing.
  • the Fourier transform processing may include fast Fourier transform (FFT), discrete Fourier transform (DFT), etc.
  • the second transform domain processing may perform one-dimensional FFT processing.
  • the first data may be obtained by processing the AI data through resource mapping and the second transform domain.
  • the first communication device may use the AI data as the input of the resource mapping process and obtain the first data through the second transform domain processing.
  • the first communication device does not need to perform other physical layer processing (such as encoding, rate matching, scrambling, modulation, etc.) before the resource mapping process during the transmission link processing of the communication signal, which can further reduce the processing delay.
  • the resource mapping processing can map the AI data to the physical layer transmission resources, so that the subsequent transmission of AI data can be adapted to the physical layer transmission resources.
  • the resource mapping process includes at least one of the following: dimension conversion process, translation process, rotation process, and interleaving process.
  • when AI data is used as the input of the resource mapping process, the AI data may be processed by at least one of the above processes to enhance the flexibility of the solution implementation.
  • k is the frequency domain resource index (such as the subcarrier index in the OFDM system)
  • l is the time domain symbol index (such as the OFDM symbol index in the orthogonal frequency division multiplexing (OFDM) system)
  • c is the channel index of the image data symbol (such as the channel index in the red, green, blue (RGB) three-channel image)
  • h is the height index of the image data symbol in the image
  • w is the width index of the image data symbol in the image.
  • the resource mapping function f may be implemented by any one or more processes such as dimension conversion, translation, rotation, and interleaving.
  • the following takes image data as an example of the AI data input to the resource mapping process:
  • M_c is the translation in the channel dimension
  • M_h is the translation in the height dimension
  • M_w is the translation in the width dimension. That is, symbol translation is performed first, and then resource mapping is performed.
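A minimal sketch of a resource mapping f built from translation and dimension conversion, as described above. The cyclic shifts via `np.roll`, the grid dimensions, and the reshape-based mapping from (c, h, w) to (k, l) are illustrative assumptions.

```python
import numpy as np

def resource_map(image, Mc, Mh, Mw, n_subcarriers):
    """Translate image symbols cyclically by (Mc, Mh, Mw), then map the
    flattened symbols onto a (subcarrier k, time symbol l) resource grid."""
    shifted = np.roll(image, shift=(Mc, Mh, Mw), axis=(0, 1, 2))
    flat = shifted.reshape(-1)                       # dimension conversion
    n_symbols = flat.size // n_subcarriers
    return flat.reshape(n_symbols, n_subcarriers).T  # grid[k, l]

img = np.arange(3 * 4 * 4).reshape(3, 4, 4)          # toy 3-channel 4x4 "image"
grid = resource_map(img, Mc=1, Mh=2, Mw=1, n_subcarriers=12)
assert grid.shape == (12, 4)                          # 12 subcarriers, 4 time symbols

# Resource demapping undoes the dimension conversion and the translation
recovered = np.roll(grid.T.reshape(3, 4, 4), shift=(-1, -2, -1), axis=(0, 1, 2))
assert np.array_equal(recovered, img)
```

Because each step (translation, dimension conversion) is invertible, the receiving side can recover the original symbol layout with the corresponding resource demapping.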
  • filtering can be used to reduce the PAPR of the wireless signal.
  • the first communication device obtains the third data based on the second data through a first transform domain process, and sends the third data in step S302.
  • the first data is obtained based on artificial intelligence AI data, and the first process includes a first neural network process.
  • the second data is the processing result obtained by the first communication device performing at least a first neural network process on the first data
  • the third data sent by the first communication device is the processing result obtained by the first transform domain process on the second data, wherein the first transform domain process is one of the processes of the physical layer process.
  • the first communication device can use the processing result obtained by the neural network processing as the input of the transform domain processing in the physical layer, so that the first communication device can realize AI data processing without performing quantization processing, which can enable the computing power of the communication device to be applied to AI tasks while reducing processing delays.
  • the first communication device can participate in the AI learning system as a communication node. From the implementations shown in Figures 2d, 2e and 2f above, it can be seen that the AI learning system may require multiple communication nodes, including the first communication device, to participate.
  • the first communication device may be a head node (or starting node) of the multiple communication nodes.
  • the AI data in step S301 (such as the AI data in FIG. 4a and FIG. 4b) may be data generated locally by the first communication device (or preconfigured data).
  • the first communication device may be an intermediate node of the multiple communication nodes. Accordingly, the AI data in step S301 (e.g., the AI data in FIG. 4a and FIG. 4b) may be obtained by the data of the previous hop node received by the first communication device, that is, after the first communication device receives the data of the previous hop node, the first communication device may obtain the AI data based on the data of the previous hop node.
  • the AI data in step S301 e.g., the AI data in FIG. 4a and FIG. 4b
  • the first communication device may obtain the AI data based on the data of the previous hop node.
  • the first communication device may also include processing on the reception link.
  • the processing process that may be involved in the reception link is introduced below.
  • the AI data obtained by the first communication device in step S301 may be data obtained by processing the received fourth data through at least the second neural network (i.e., the implementation process of Figure 4c).
  • the AI data obtained by the first communication device in step S301 may also be data obtained by processing the received fourth data through a traditional receiving link (e.g., demodulation, descrambling, rate matching, channel decoding, etc.).
  • the implementation process of the former is taken as an example for introduction.
  • the method further includes: the first communication device receives fourth data, wherein the AI data is obtained based on the fourth data.
  • the AI data used by the first communication device may be obtained from the received fourth data, so that when the first communication device participates in the AI task, it can serve as an intermediate node among the multiple nodes participating in the AI task.
  • the method further includes: the first communication device performs a second processing on the fourth data to obtain the AI data; wherein the second processing includes a second neural network processing.
  • the first communication device may perform a second process on the fourth data including at least a second neural network process to obtain the AI data.
  • the first communication device may at least perform a second neural network process on the fourth data to obtain the AI data on the transmission link.
  • the second neural network process may be used to participate in the AI task, so that the first communication device can participate in the AI task through the first neural network and the second neural network.
  • the first communication device performs a second processing on the fourth data
  • the process of obtaining the AI data may include: the first communication device performs the second processing on the fourth data to obtain fifth data; wherein the AI data is obtained by subjecting the fifth data to the first transform domain processing and/or resource demapping processing.
  • the first communication device may perform a second processing on the received fourth data to obtain fifth data.
  • the first communication device may perform the first transform domain processing and/or resource demapping processing on the fifth data to obtain the AI data.
  • the fifth data as the processing result of the second neural network, may be used as the input of the first transform domain processing and/or resource demapping processing.
  • the first communication device can use the processing results obtained by neural network processing of the received data as input for transform domain processing and/or resource demapping processing in the physical layer, so that the first communication device can implement neural network processing without performing dequantization processing on the received data, which can enable the computing power of the communication device to be applied to AI tasks while reducing processing delays.
  • the AI data may be data on the transmission link in the first communication device (for example, the AI data may be input for resource mapping processing and/or second transform domain processing on the transmission link).
  • the AI data is obtained by subjecting the fifth data to the first transform domain processing and/or resource demapping processing; that is, the processing result of the first transform domain processing and/or resource demapping processing on the receiving link in the first communication device can be used as the input of a certain processing on the transmission link.
  • in this way, the first communication device can avoid performing other physical layer processing (for example, demodulation, descrambling, de-rate matching, channel decoding, etc.) after the resource demapping processing during the receiving-link processing of the communication signal, thereby further reducing the processing delay.
  • the first communication device performs a second processing on the fourth data to obtain the fifth data, including: the first communication device performs a second transform domain processing on the fourth data to obtain the sixth data; the first communication device performs the second processing on the sixth data to obtain the fifth data.
  • the first communication device may perform a second transform domain processing on the fourth data to obtain the sixth data; thereafter, the first communication device may perform a second processing on the sixth data to obtain the fifth data.
  • the sixth data is used as the input of the second processing, which includes at least the second neural network; the second transform domain processing can sample and transform the fourth data to obtain the sixth data adapted to the input of the second neural network.
  • the resource demapping process includes at least one of the following: dimension conversion process, translation process, rotation process, and interleaving process.
  • the AI data may be obtained from the fifth data by at least one of the above processes, enhancing the flexibility of the solution implementation.
  • the second processing also includes filtering processing, such as RRC filtering processing.
  • filtering processing such as RRC filtering processing.
  • the second processing may also include filtering processing to filter the communication signal and reduce the PAPR of the communication signal through the filtering processing.
  • the fifth data is a processing result obtained by the first communication device performing at least the second neural network processing in the second processing on the fourth data, that is, the fourth data can be used as the input of the second neural network, and the second processing may not include filtering processing.
  • the function of the second neural network processing includes the filtering processing; that is, no dedicated filtering module is needed, and the same or similar processing effect as the filtering processing can be achieved through the second neural network processing.
  • the third data is obtained based on the second data processed by the first transform domain, including: the third data is obtained based on the processing result obtained by processing the second data by the first transform domain and the fourth data; or, the third data is the processing result obtained by processing the second data by the first transform domain.
  • the third data sent by the first communication device may include any of the above implementations, so that the data sent by the first communication device can adapt to the processing requirements of different AI network architectures.
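The two alternatives for obtaining the third data can be sketched as follows. The IFFT as the first transform domain processing follows the earlier description, while the element-wise residual combination with the fourth data and the vector sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
second_data = rng.standard_normal(64) + 1j * rng.standard_normal(64)
fourth_data = rng.standard_normal(64) + 1j * rng.standard_normal(64)

transformed = np.fft.ifft(second_data)     # first transform domain processing

# Option 1: combine the transform result with the received fourth data
# (e.g., a residual-style connection across nodes)
third_data_option1 = transformed + fourth_data

# Option 2: send the transform result alone
third_data_option2 = transformed

assert np.allclose(third_data_option1 - fourth_data, third_data_option2)
```

Which option applies depends on the AI network architecture; an architecture with skip connections across nodes would use the combined form, while a purely sequential split would use the transform result directly.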
  • the transceiver unit 802 shown in Fig. 8 may be a communication interface, which may be the transceiver 125 in Fig. 12, and the transceiver 125 may include an input interface and an output interface.
  • the transceiver 125 may also be a transceiver circuit, which may include an input interface circuit and an output interface circuit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present application relates to a communication method and a related apparatus, used to enable the computing power of a communication node to be applied to the processing of artificial intelligence (AI) tasks while also reducing processing latency. In the method, a first communication device performs first processing on first data to obtain second data, the first data being obtained based on AI data, and the first processing comprising first neural network processing; the first communication device sends third data, the third data being obtained based on the second data by means of first transform domain processing.
PCT/CN2023/127224 2023-10-27 2023-10-27 Procédé de communication et appareil associé Pending WO2025086262A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2023/127224 WO2025086262A1 (fr) 2023-10-27 2023-10-27 Procédé de communication et appareil associé


Publications (1)

Publication Number Publication Date
WO2025086262A1 true WO2025086262A1 (fr) 2025-05-01

Family

ID=95514893

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/127224 Pending WO2025086262A1 (fr) 2023-10-27 2023-10-27 Procédé de communication et appareil associé

Country Status (1)

Country Link
WO (1) WO2025086262A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109995692A (zh) * 2017-12-30 2019-07-09 华为技术有限公司 发送数据的方法及装置
CN114531694A (zh) * 2020-11-23 2022-05-24 维沃移动通信有限公司 通信数据处理方法、装置及通信设备
CN115115020A (zh) * 2021-03-22 2022-09-27 华为技术有限公司 数据处理方法及装置
CN116208976A (zh) * 2021-11-30 2023-06-02 华为技术有限公司 任务处理方法及装置
CN116938658A (zh) * 2022-03-31 2023-10-24 华为技术有限公司 数据处理方法、装置及系统



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23956505

Country of ref document: EP

Kind code of ref document: A1