
WO2025179919A1 - Communication method and related apparatus - Google Patents

Communication method and related apparatus

Info

Publication number
WO2025179919A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
sample data
communication device
inference
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2024/127224
Other languages
English (en)
Chinese (zh)
Inventor
张公正
徐晨
李榕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of WO2025179919A1

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B17/00 Monitoring; Testing
    • H04B17/30 Monitoring; Testing of propagation channels
    • H04B17/391 Modelling the propagation channel
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/149 Network analysis or design for prediction of maintenance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 Supervisory, monitoring or testing arrangements
    • H04W24/02 Arrangements for optimising operational condition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 Supervisory, monitoring or testing arrangements
    • H04W24/06 Testing, supervising or monitoring using simulated traffic

Definitions

  • the present application relates to the field of communications, and in particular to a communication method and related devices.
  • wireless communication can be communication in which signals are transmitted between two or more communication nodes without propagating through conductors or cables.
  • the communication nodes generally include network devices and terminal devices.
  • communication nodes generally possess both signal transceiver and computing capabilities.
  • network devices with computing capabilities primarily provide computing power to support signal transceiver capabilities (for example, calculating the time and frequency domain resources required to carry signals), enabling communication between the network device and other communication nodes.
  • the computing power of communication nodes not only supports the aforementioned communication tasks but also potentially handles the processing of neural network models.
  • reducing the complexity of model management remains a pressing technical challenge.
  • the present application provides a communication method and related devices for reducing the complexity of model management.
  • the present application provides a communication method, which is performed by a first communication device.
  • the first communication device may be a communication device (such as a terminal device or a network device), or the first communication device may be a component of the communication device (such as a processor, a chip, or a chip system, etc.), or the first communication device may also be a logic module or software that can implement all or part of the functions of the communication device.
  • the first communication device obtains one or more sample data; the first communication device processes the one or more sample data and inference data based on a first neural network model to obtain an inference result corresponding to the inference data, wherein the inference result has at least one data feature identical to that of the sample data.
  • the first communication device can process the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data.
  • the inference result obtained by the first communication device based on the first neural network model has at least one data feature identical to that of the sample data; that is, guided by the example of the sample data, the first communication device can obtain an inference result that shares at least one data feature with the sample data, and can adapt the model inference to a specific scenario based on the sample data, thereby reducing the complexity of model management.
  • in this application, the neural network model may also be referred to by other names, such as an artificial intelligence (AI) model, an AI neural network model, a machine learning model, or an AI processing model.
  • sample data can be used as part of the input of a neural network model to ensure that the inference results output by the neural network model have at least one data feature in common with the sample data.
  • sample data can be replaced by other terms such as reference data, anchor data, example data, or guidance data.
  • the data feature includes at least one of the following: data dimension, parameter quantity, data content, data type, or physical quantity.
  • in a possible implementation, when at least one of the following items is satisfied, the first communication device processes the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data:
  • the inference performance of the first neural network model is below a threshold;
  • the difference between the data distribution of the inference data and the data distribution of the inference data input to the first neural network model in the previous k inferences is greater than a threshold, where k is a positive integer;
  • the communication state (of the first communication device) changes.
  • the first communication device can add sample data to the input of the first neural network model, that is, the first communication device can process the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data.
  • the first communication device can process the inference data based on the first neural network model to obtain an inference result corresponding to the inference data, that is, the input of the first neural network model may not include sample data to reduce overhead.
  • the first communication device can locally determine whether at least one of the above items is satisfied, that is, the first communication device can trigger the sending of one or more sample data based on the result of the local determination.
  • the first communication device can determine whether at least one of the above items is satisfied based on the instructions of other communication devices, that is, the first communication device can determine whether the triggering condition for triggering the sending of one or more sample data is satisfied based on the instructions of other communication devices.
  • the first communication device can trigger the sending of one or more sample data based on the instructions of other communication devices.
  • the other communication device may include the management module and/or data storage module described later.
  • the first neural network model is updated to obtain a second neural network model.
  • the first neural network model can be updated to obtain a second neural network model. In other words, through neural network model training, a second neural network model with better performance is obtained.
  • when the update frequency of the one or more sample data is less than or equal to a threshold, or when the performance corresponding to the inference result of the inference data is greater than or equal to a threshold, it can be determined that the performance of the current first neural network model is already good. Accordingly, there is no need to update the first neural network model, which avoids unnecessary overhead.
  • the communication device that updates the first neural network model to obtain the second neural network model can be the first communication device, or other communication devices (for example, the other communication device can include the model training module described later).
  • the first communication device obtains one or more sample data, including: the first communication device receives the one or more sample data.
  • the first communication device may acquire the one or more sample data by receiving the one or more sample data.
  • the first communication device may include a module for storing/caching sample data. Accordingly, the first communication device may locally obtain the one or more sample data based on the module.
  • the method further includes: the first communication device sending request information for requesting the one or more sample data.
  • the first communication device may also send request information for requesting the one or more sample data, so that the recipient of the request information provides the one or more sample data to the first communication device based on the request information.
  • the second aspect of the present application provides a communication method, which is performed by a second communication device, which can be a communication device (such as a terminal device or a network device), or the second communication device can be a component of the communication device (such as a processor, a chip or a chip system, etc.), or the second communication device can also be a logic module or software that can implement all or part of the functions of the communication device.
  • the second communication device obtains one or more sample data, wherein the one or more sample data satisfy: the inference result corresponding to the inference data, obtained by processing the one or more sample data and the inference data through the first neural network model, has at least one data feature identical to that of the sample data; and the second communication device sends the one or more sample data.
  • the first communication device can process the one or more sample data and inference data based on the first neural network model to obtain an inference result corresponding to the inference data.
  • the inference result obtained by the first communication device based on the first neural network model has at least one data feature identical to that of the sample data; that is, guided by the example of the sample data, the first communication device can obtain an inference result that shares at least one data feature with the sample data, and can adapt the model inference to a specific scenario based on the sample data, thereby reducing the complexity of model management.
  • the data feature includes at least one of the following: data dimension, parameter quantity, data content, data type, or physical quantity.
  • the method is applied to a communication device that caches data (that is, the second communication device can be used to cache sample data); the second communication device sends the one or more sample data, including: the second communication device sends the one or more sample data to a communication device that deploys the first neural network model or a communication device for storing data.
  • the second communication device can send the one or more sample data to the communication device that deploys the first neural network model, so that the recipient of the sample data can implement inference of the neural network model based on the sample data.
  • the second communication device can also be used to send the one or more sample data to a communication device that stores data, so that the recipient of the sample data can implement storage of the sample data.
  • the second communication device obtains one or more sample data, including: the second communication device receives the one or more sample data, wherein the one or more sample data come from a communication device for collecting data, or the one or more sample data come from a communication device for storing data.
  • the second communication device can obtain one or more sample data by receiving one or more sample data.
  • the method before the second communication device receives the one or more sample data, the method further includes: the second communication device sending request information for requesting the one or more sample data.
  • the second communication device may further send request information for requesting the one or more sample data, so that the recipient of the request information can provide the one or more sample data to the second communication device based on the request information.
  • the method is applied to a communication device for storing data or a communication device for collecting data (that is, the second communication device can be used to store sample data or collect sample data); the second communication device sends the one or more sample data, including: the second communication device sends the one or more sample data to a communication device for caching data.
  • the second communication device can send the one or more sample data to the communication device used to cache data, so that the recipient of the sample data can cache the sample data.
  • the recipient of the sample data can send the one or more sample data to the communication device that deploys the first neural network model to implement inference of the neural network model.
  • the method before the second communication device sends the one or more sample data, the method further includes: the second communication device receiving request information for requesting the one or more sample data.
  • the second communication device may also receive request information for requesting the one or more sample data, so that the second communication device can provide the one or more sample data based on the request information.
  • in a possible implementation, the second communication device sends the one or more sample data when at least one of the following items is met:
  • the inference performance of the first neural network model is below a threshold
  • the difference between the data distribution of the inference data and the data distribution of the inference data input to the first neural network model in the previous k inferences is greater than a threshold, where k is a positive integer;
  • the communication state of the communication device deploying the first neural network model changes.
  • the second communication device can send the one or more sample data so that the recipient of the one or more sample data can add the sample data to the input of the first neural network model.
  • the first communication device can process the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data.
  • the second communication device may not send the one or more sample data to the first communication device, that is, the input of the first neural network model may not include the sample data, to reduce overhead.
  • the second communication device can locally determine whether at least one of the above items is satisfied, that is, the second communication device can trigger the transmission of one or more sample data based on the result of the local determination.
  • the second communication device can determine whether at least one of the above items is satisfied based on the instructions of other communication devices, that is, the second communication device can determine whether the triggering condition for sending one or more sample data is satisfied based on the instructions of other communication devices.
  • the second communication device can trigger the transmission of one or more sample data based on the instructions of other communication devices.
  • the other communication device may include the management module and/or data storage module described later.
  • the method further includes: the second communication device updating the one or more sample data.
  • the second communication device can also update the one or more sample data in order to improve the processing performance of the neural network model through the updated sample data.
  • in a possible implementation, the second communication device updates the one or more sample data when at least one of the following items is met:
  • the inference performance corresponding to the inference result of the inference data satisfies a first condition;
  • the data distribution of the inference data satisfies the second condition
  • the communication state of the communication device deploying the first neural network model changes.
  • the second communication device can update the one or more sample data in order to improve the processing performance of the neural network model through the updated sample data.
  • updating one or more sample data may include adding sample data, reducing sample data, or replacing sample data.
  • the third aspect of the present application provides a communication device, which is a first communication device and includes a processing unit; the processing unit is used to obtain one or more sample data; the processing unit is also used to process the one or more sample data and inference data based on a first neural network model to obtain an inference result corresponding to the inference data; wherein the inference result has at least one data feature identical to that of the sample data.
  • the constituent modules of the communication device can also be used to execute the steps performed in each possible implementation method of the first aspect and achieve corresponding technical effects.
  • the fourth aspect of the present application provides a communication device, which is a second communication device, and includes a transceiver unit and a processing unit, the processing unit being used to obtain one or more sample data; wherein the one or more sample data and inference data are used to be processed by a first neural network model to obtain an inference result corresponding to the inference data, and the inference result has at least one data feature identical to that of the sample data; the transceiver unit is used to send the one or more sample data.
  • the constituent modules of the communication device can also be used to execute the steps performed in each possible implementation method of the second aspect and achieve corresponding technical effects.
  • the present application provides a communication device, comprising at least one processor, wherein the at least one processor is coupled to a memory; the memory is used to store programs or instructions; the at least one processor is used to execute the program or instructions so that the device implements the method described in any possible implementation method of any one of the first to second aspects.
  • the present application provides a communication device comprising at least one logic circuit and an input/output interface; the logic circuit is used to execute the method described in any possible implementation of any one of the first to second aspects.
  • the present application provides a communication system, which includes the above-mentioned first communication device and second communication device.
  • the present application provides a computer-readable storage medium for storing one or more computer-executable instructions; when the one or more computer-executable instructions are executed by a processor, the processor executes the method described in any possible implementation of any one of the first to second aspects above.
  • the present application provides a computer program product (or computer program); when the computer program product is executed by a processor, the processor executes the method described in any possible implementation of any one of the first to second aspects above.
  • the present application provides a chip system comprising at least one processor for supporting a communication device to implement the method described in any possible implementation of any one of the first to second aspects.
  • the chip system may further include a memory for storing program instructions and data necessary for the communication device.
  • the chip system may be composed of a chip or may include a chip and other discrete components.
  • the chip system may further include an interface circuit for providing program instructions and/or data to the at least one processor.
  • the technical effects brought about by any design method in the third to tenth aspects can refer to the technical effects brought about by the different design methods in the above-mentioned first to second aspects, and will not be repeated here.
  • FIGS. 1a to 1c are schematic diagrams of a communication system provided by this application.
  • FIGS. 1d, 1e, and 2a to 2e are schematic diagrams of the AI processing process involved in this application.
  • FIG3 is an interactive schematic diagram of the communication method provided by this application.
  • FIGS. 4a to 4d are schematic diagrams of the processing process of the neural network model provided by this application.
  • FIG5 is a schematic diagram of an application scenario of the communication method provided in this application.
  • FIGS. 6a to 6d are schematic diagrams of application scenarios of the communication method provided by this application.
  • Terminal device: it can be a wireless terminal device that can receive scheduling and indication information from a network device.
  • the wireless terminal device can be a device that provides voice and/or data connectivity to the user, or a handheld device with wireless connection function, or other processing device connected to a wireless modem.
  • Terminal devices can communicate with one or more core networks or the Internet via a radio access network (RAN).
  • Terminal devices can be mobile terminal devices, such as mobile phones (also known as "cellular" phones, mobile phones), computers, and data cards.
  • they can be portable, pocket-sized, handheld, computer-built-in, or vehicle-mounted mobile devices that exchange voice and/or data with the radio access network.
  • For example, the terminal device may be a personal communication service (PCS) phone, a Session Initiation Protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a tablet computer, a computer with wireless transceiver capabilities, or another such device.
  • Wireless terminal equipment can also be called system, subscriber unit, subscriber station, mobile station, mobile station (MS), remote station, access point (AP), remote terminal equipment (remote terminal), access terminal equipment (access terminal), user terminal equipment (user terminal), user agent (user agent), subscriber station (SS), customer premises equipment (CPE), terminal, user equipment (UE), mobile terminal (MT), etc.
  • the terminal device may also be a wearable device.
  • Wearable devices may also be referred to as wearable smart devices or smart wearable devices, etc., which are a general term for wearable devices that are intelligently designed and developed using wearable technology for daily wear, such as glasses, gloves, watches, clothing, and shoes.
  • a wearable device is a portable device that is worn directly on the body or integrated into the user's clothes or accessories. Wearable devices are not only hardware devices, but also achieve powerful functions through software support, data interaction, and cloud interaction.
  • wearable smart devices include those that are fully functional, large in size, and can achieve complete or partial functions without relying on smartphones, such as smart watches or smart glasses, etc., as well as those that only focus on a certain type of application function and need to be used in conjunction with other devices such as smartphones, such as various smart bracelets, smart helmets, and smart jewelry for vital sign monitoring.
  • the terminal can also be a drone, a robot, a terminal in device-to-device (D2D) communication, a terminal in vehicle to everything (V2X), a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical, a wireless terminal in smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, etc.
  • the terminal device may also be a terminal device in a communication system that evolves beyond the fifth-generation (5G) communication system (e.g., a sixth-generation (6G) communication system) or a terminal device in a future-evolved public land mobile network (PLMN).
  • a 6G network may further extend the form and functionality of 5G communication terminals.
  • 6G terminals include, but are not limited to, vehicles, cellular network terminals (with integrated satellite terminal functionality), drones, and Internet of Things (IoT) devices.
  • the terminal device may also obtain AI services provided by the network device.
  • the terminal device may also have AI processing capabilities.
  • a network device can be a RAN node (or device) that connects a terminal device to a wireless network, which can also be called a base station.
  • Examples of RAN devices include: a base station, an evolved NodeB (eNodeB or eNB), a gNodeB (gNB) in a 5G communication system, a transmission reception point (TRP), a radio network controller (RNC), a NodeB (NB), a home base station (e.g., home evolved NodeB or home NodeB, HNB), a baseband unit (BBU), or a wireless fidelity (Wi-Fi) access point (AP), etc.
  • the network equipment can include a centralized unit (CU) node, a distributed unit (DU) node, or a RAN device including a CU node and a DU node.
  • a RAN node can be a macro base station, micro base station, indoor base station, relay node, donor node, or wireless controller in a cloud radio access network (CRAN) scenario.
  • a RAN node can also be a server, wearable device, vehicle, or onboard device.
  • the access network device in vehicle-to-everything (V2X) technology can be a roadside unit (RSU).
  • the RAN node can be a centralized unit (CU), a distributed unit (DU), a CU-control plane (CP), a CU-user plane (UP), or a radio unit (RU).
  • the CU and DU can be set up separately, or they can be included in the same network element, such as the baseband unit.
  • the RU may be included in a radio frequency device or a radio frequency unit, for example, a remote radio unit (RRU), an active antenna unit (AAU) or a remote radio head (RRH).
  • in different systems, the CU (or the CU-CP and CU-UP), the DU, or the RU may have different names, but those skilled in the art can understand their meanings. For example, the CU may also be called an O-CU (open CU), the DU may also be called an O-DU, the CU-CP may also be called an O-CU-CP, the CU-UP may also be called an O-CU-UP, and the RU may also be called an O-RU.
  • this application uses CU, CU-CP, CU-UP, DU and RU as examples for description.
  • Any unit among the CU (or CU-CP, CU-UP), DU and RU in this application can be implemented by a software module, a hardware module, or a combination of a software module and a hardware module.
  • the protocol layers may include a control plane protocol layer and a user plane protocol layer.
  • the control plane protocol layer may include at least one of the following: radio resource control (RRC) layer, packet data convergence protocol (PDCP) layer, radio link control (RLC) layer, media access control (MAC) layer, or physical (PHY) layer.
  • the user plane protocol layer may include at least one of the following: service data adaptation protocol (SDAP) layer, PDCP layer, RLC layer, MAC layer, or physical layer.
  • the network device may be any other device that provides wireless communication functionality to the terminal device.
  • the embodiments of this application do not limit the specific technology or device form used by the network device.
  • the network equipment may also include core network equipment, such as the mobility management entity (MME), home subscriber server (HSS), serving gateway (S-GW), policy and charging rules function (PCRF), and public data network gateway (PDN gateway, P-GW) in the fourth generation (4G) network; and the access and mobility management function (AMF), user plane function (UPF), or session management function (SMF) in the 5G network.
  • the core network equipment may also include other core network equipment in the 5G network and the next generation network of the 5G network.
  • the above-mentioned network device may also be a network node with AI capabilities, which can provide AI services for terminals or other network devices.
  • a network node with AI capabilities can be an AI node on the network side (access network or core network), a computing power node, a RAN node with AI capabilities, a core network element with AI capabilities, etc.
  • the apparatus for implementing the function of the network device may be the network device, or may be a device capable of supporting the network device in implementing the function, such as a chip system, which may be installed in the network device.
  • the technical solutions provided in the embodiments of the present application are described by taking the network device as an example.
  • in this application, “configuration” and “pre-configuration” may both be used.
  • Configuration refers to the network device/server sending some parameter configuration information or parameter values to the terminal through messages or signaling, so that the terminal can determine the communication parameters or resources during transmission based on these values or information.
  • Pre-configuration is similar to configuration, and can be parameter information or parameter values pre-negotiated between the network device/server and the terminal device, or parameter information or parameter values used by the base station/network device or terminal device as specified in the standard protocol, or parameter information or parameter values pre-stored in the base station/server or terminal device. This application does not limit this.
  • “system” and “network” in the embodiments of the present application can be used interchangeably.
  • “Multiple” refers to two or more.
  • “And/or” describes an association relationship between associated objects and indicates that three relationships may exist.
  • For example, A and/or B can represent: A exists alone, both A and B exist, and B exists alone, where A and B can be singular or plural.
  • the character “/” generally indicates that the previous and next associated objects are in an “or” relationship.
  • “At least one of the following” or similar expressions refers to any combination of these items, including a single item (a) or any combination of multiple items (a).
  • For example, at least one of A, B, and C includes A, B, C, AB, AC, BC, or ABC.
  • ordinal numbers such as “first” and “second” mentioned in the embodiments of the present application are used to distinguish multiple objects and are not used to limit the order, timing, priority, or importance of multiple objects.
  • “Sending” and “receiving” in the embodiments of the present application indicate the direction of signal transmission.
  • sending information to XX can be understood as the destination of the information being XX, which can include direct sending through the air interface, as well as indirect sending through the air interface by other units or modules.
  • Receiving information from YY can be understood as the source of the information being YY, which can include direct receiving from YY through the air interface, as well as indirect receiving from YY through the air interface from other units or modules.
  • “Sending” can also be understood as the “output” of the chip interface, and “receiving” can also be understood as the “input” of the chip interface.
  • sending and receiving can be performed between devices, for example, between a network device and a terminal device, or can be performed within a device, for example, sending or receiving between components, modules, chips, software modules or hardware modules within the device through a bus, wiring or interface.
  • information may be processed between the source and destination of information transmission, such as coding, modulation, etc., but the destination can understand the valid information from the source. Similar expressions in this application can be understood similarly and will not be repeated.
  • indication may include direct indication and indirect indication, and may also include explicit indication and implicit indication.
  • the information indicated by a certain information is called information to be indicated.
  • information to be indicated In the specific implementation process, there are many ways to indicate the information to be indicated, such as but not limited to, directly indicating the information to be indicated, such as the information to be indicated itself or the index of the information to be indicated.
  • the information to be indicated may also be indirectly indicated by indicating other information, wherein the other information is associated with the information to be indicated; or only a part of the information to be indicated may be indicated, while the other part of the information to be indicated is known or agreed in advance.
  • the indication of specific information may be achieved by means of the arrangement order of each information agreed in advance (such as predefined by the protocol), thereby reducing the indication overhead to a certain extent.
  • the present application does not limit the specific method of indication. It is understandable that for the sender of the indication information, the indication information can be used to indicate the information to be indicated, and for the receiver of the indication information, the indication information can be used to determine the information to be indicated.
  • the communication system includes at least one network device and/or at least one terminal device.
  • Figure 1a is a schematic diagram of a communication system in this application.
  • Figure 1a exemplarily illustrates a network device and six terminal devices, namely terminal device 1, terminal device 2, terminal device 3, terminal device 4, terminal device 5, and terminal device 6.
  • terminal device 1 is a smart teacup
  • terminal device 2 is a smart air conditioner
  • terminal device 3 is a smart gas pump
  • terminal device 4 is a vehicle
  • terminal device 5 is a mobile phone
  • terminal device 6 is a printer.
  • the AI configuration information sending entity can be a network device.
  • the AI configuration information receiving entity can be terminal devices 1-6.
  • the network device and terminal devices 1-6 form a communication system.
  • terminal devices 1-6 can send data to the network device, and the network device needs to receive data sent by terminal devices 1-6.
  • the network device can send configuration information to terminal devices 1-6.
  • terminal devices 4 and 6 can also form a communication system.
  • Terminal device 5 serves as a network device, i.e., the AI configuration information sending entity;
  • terminal devices 4 and 6 serve as terminal devices, i.e., the AI configuration information receiving entities.
  • terminal device 5 sends AI configuration information to terminal devices 4 and 6, respectively, and receives data from them.
  • terminal devices 4 and 6 receive AI configuration information from terminal device 5 and send data to terminal device 5.
  • different devices may also execute AI-related services.
  • the base station can perform communication-related services and AI-related services with one or more terminal devices, and different terminal devices can also perform communication-related services and AI-related services.
  • an AI network element can be introduced into the communication system provided in this application to implement some or all AI-related operations.
  • the AI network element can also be called an AI node, AI device, AI entity, AI module, AI model, or AI unit, etc.
  • the AI network element can be a network element built into the communication system.
  • the AI network element can be an AI module built into: an access network device, a core network device, a cloud server, or a network management (OAM) to implement AI-related functions.
  • the OAM can be a network management for a core network device and/or a network management for an access network device.
  • the AI network element can also be an independently set network element in the communication system.
  • the terminal or the chip built into the terminal can also include an AI entity to implement AI-related functions.
  • Artificial intelligence (AI)
  • Machine learning methods can be used to implement AI.
  • a machine uses training data to learn (or train) a model. This model represents the mapping from input to output.
  • the learned model can be used for inference (or prediction), meaning that the model can be used to predict the output corresponding to a given input. This output can also be called an inference result (or prediction result).
  • Machine learning can include supervised learning, unsupervised learning, and reinforcement learning.
  • Supervised learning uses machine learning algorithms to learn the mapping relationship between sample values and sample labels based on collected sample values and sample labels, and then expresses this learned mapping relationship using an AI model.
  • the process of training a machine learning model is the process of learning this mapping relationship.
  • sample values are input into the model to obtain the model's predicted values.
  • the model parameters are optimized by calculating the error between the model's predicted values and the sample labels (ideal values).
  • the learned mapping can be used to predict new sample labels.
  • the mapping relationship learned by supervised learning can include linear mappings or nonlinear mappings. Based on the type of label, the learning task can be divided into classification tasks and regression tasks.
  • Unsupervised learning uses algorithms to discover inherent patterns in collected sample values.
  • One type of unsupervised learning algorithm uses the samples themselves as supervisory signals, meaning the model learns the mapping from one sample to another. This is called self-supervised learning.
  • the model parameters are optimized by calculating the error between the model's predictions and the samples themselves.
  • Self-supervised learning can be used in signal compression and decompression recovery applications. Common algorithms include autoencoders and generative adversarial networks.
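  • As a hedged illustration of the self-supervised idea above (not taken from this application), the following NumPy sketch trains a tiny linear autoencoder that compresses samples and reconstructs them, using the samples themselves as the supervisory signal; the layer sizes, learning rate, and toy data are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (assumed sizes): a linear autoencoder that learns to compress
# 16-dim samples to 4 dims and back, trained against the samples themselves.
rng = np.random.default_rng(0)
dim, code_dim, lr = 16, 4, 0.01
E = rng.normal(scale=0.1, size=(code_dim, dim))    # encoder weights
D = rng.normal(scale=0.1, size=(dim, code_dim))    # decoder weights
data = rng.normal(size=(1000, 4)) @ rng.normal(size=(4, 16))  # low-rank toy samples

for x in data:
    z = E @ x                        # compressed representation
    x_hat = D @ z                    # reconstruction
    r = x - x_hat                    # reconstruction error (self-supervision signal)
    D += lr * np.outer(r, z)         # gradient step on the decoder
    E += lr * np.outer(D.T @ r, x)   # gradient step on the encoder

print("final reconstruction MSE:", np.mean((data[-1] - D @ E @ data[-1]) ** 2))
```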
  • Reinforcement learning is a type of algorithm that learns problem-solving strategies through interaction with the environment. Unlike supervised and unsupervised learning, reinforcement learning problems lack explicit label data for "correct" actions. Instead, the algorithm must interact with the environment to obtain reward signals from the environment, and then adjust its decision-making actions to maximize the reward signal value. For example, in downlink power control, the reinforcement learning model adjusts the downlink transmit power of each user based on the overall system throughput fed back by the wireless network, aiming to achieve higher system throughput. The goal of reinforcement learning is also to learn the mapping between environmental states and better (e.g., optimal) decision-making actions. However, because the labels for "correct actions" cannot be obtained in advance, network optimization cannot be achieved by calculating the error between actions and "correct actions"; reinforcement learning training is instead achieved through iterative interaction with the environment.
  • Neural network (NN)
  • Traditional communication systems require extensive expert knowledge to design communication modules.
  • deep learning communication systems based on neural networks can automatically discover implicit patterns in massive data sets and establish mapping relationships between data, achieving performance superior to traditional modeling methods.
  • each neuron performs a weighted sum operation on its input values and outputs the result through an activation function.
  • FIG. 1d is a schematic diagram of the neuron structure.
  • for an input x_i, w_i is the weight used to weight x_i.
  • the bias added to the weighted sum of the input values according to the weights is, for example, b.
  • the activation function can take many forms.
  • the output of the neuron is, for example, y = f(∑_i w_i·x_i + b), where f is the activation function.
  • b can be a decimal, an integer (e.g., 0, a positive integer, or a negative integer), or a complex number, etc.
  • the activation functions of different neurons in a neural network can be the same or different.
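  • To make the neuron computation above concrete, the following minimal sketch evaluates y = f(∑_i w_i·x_i + b); the example inputs, weights, bias, and the tanh activation are illustrative assumptions, not values from this application.

```python
import numpy as np

def neuron_output(x, w, b, activation=np.tanh):
    """y = f(sum_i w_i * x_i + b): weighted sum of inputs plus a bias,
    passed through an activation function f."""
    return activation(np.dot(w, x) + b)

x = np.array([0.5, -1.2, 3.0])   # example inputs x_i
w = np.array([0.8, 0.1, -0.4])   # example weights w_i
b = 0.2                          # example bias
print(neuron_output(x, w, b))    # scalar output of one neuron
```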
  • neural networks generally include multiple layers, each of which may include one or more neurons. Increasing the depth and/or width of a neural network can improve its expressive power, providing more powerful information extraction and abstract modeling capabilities for complex systems.
  • the depth of a neural network can refer to the number of layers it comprises, and the number of neurons in each layer can be referred to as the width of that layer.
  • a neural network includes an input layer and an output layer. The input layer processes the input information received by the neural network through neurons, passing the processing results to the output layer, which then obtains the output of the neural network.
  • a neural network includes an input layer, a hidden layer, and an output layer. The input layer processes the input information received by the neural network through neurons, passing the processing results to an intermediate hidden layer. The hidden layer performs calculations on the received processing results to obtain a calculation result, which is then passed to the output layer or the next adjacent hidden layer, which ultimately obtains the output of the neural network.
  • a neural network can include one hidden layer or multiple hidden layers connected in sequence, without limitation.
  • Deep neural networks (DNNs) may include, for example, feedforward neural networks (FNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs).
  • Figure 1e is a schematic diagram of an FNN network.
  • a characteristic of FNN networks is that neurons in adjacent layers are fully connected. This characteristic typically requires a large amount of storage space and results in high computational complexity.
  • CNN is a neural network specifically designed to process data with a grid-like structure. For example, time series data (discrete sampling along the time axis) and image data (discrete sampling along two dimensions) can both be considered grid-like data.
  • CNNs do not utilize all input information at once for computation. Instead, they use a fixed-size window to intercept a portion of the information for convolution operations, significantly reducing the computational complexity of model parameters.
  • each window can use a different convolution kernel, enabling CNNs to better extract features from the input data.
  • RNNs are a type of DNN that utilizes feedback time series information. Their input consists of a new input value at the current moment and their own output value at the previous moment. RNNs are suitable for capturing temporally correlated sequence features and are particularly well-suited for applications such as speech recognition and channel coding.
  • a loss function can be defined. This function describes the gap or discrepancy between the model's output and the ideal target value. Loss functions can be expressed in various forms, and there are no restrictions on their specific form. The model training process can be viewed as adjusting some or all of the model's parameters to keep the loss function below a threshold or meet the target.
  • a model may also be referred to as an AI model, rule, or other name.
  • An AI model can be considered a specific method for implementing an AI function.
  • An AI model represents a mapping relationship or function between the input and output of a model.
  • AI functions may include one or more of the following: data collection, model training (or model learning), model information release, model inference (or model reasoning, inference, or prediction, etc.), model monitoring or model verification, or inference result release, etc.
  • AI functions may also be referred to as AI (related) operations, or AI-related functions.
  • A fully connected neural network is also called a multilayer perceptron (MLP).
  • an MLP consists of an input layer (left), an output layer (right), and multiple hidden layers (center).
  • Each layer of the MLP contains several nodes, called neurons. Neurons in adjacent layers are connected to each other.
  • the output of each layer can be expressed as z_n = f(w_n·z_{n−1} + b_n), where w is the weight matrix, b is the bias vector, f is the activation function, and n is the index of the neural network layer; a minimal sketch of this layer-by-layer computation is given below.
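  • A minimal sketch of the layer-by-layer computation z_n = f(w_n·z_{n−1} + b_n) described above; the tanh activation and the layer widths are illustrative assumptions.

```python
import numpy as np

def mlp_forward(x, weights, biases, f=np.tanh):
    """Apply z_n = f(w_n @ z_{n-1} + b_n) for each layer n in turn."""
    z = x
    for w_n, b_n in zip(weights, biases):
        z = f(w_n @ z + b_n)
    return z

rng = np.random.default_rng(0)
sizes = [8, 16, 16, 4]  # input layer, two hidden layers, output layer (assumed)
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(mlp_forward(rng.normal(size=8), weights, biases))
```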
  • a neural network can be understood as a mapping from an input data set to an output data set.
  • Neural networks are typically initialized randomly, and the process of obtaining this mapping from random w and b using existing data is called neural network training.
  • the specific training method is to use a loss function to evaluate the output results of the neural network.
  • the error can be backpropagated, and the neural network parameters (including w and b) can be iteratively optimized using gradient descent until the loss function reaches a minimum, which is the "better point (e.g., optimal point)" in Figure 2b. It is understood that the neural network parameters corresponding to the "better point (e.g., optimal point)" in Figure 2b can be used as the neural network parameters in the trained AI model information.
  • the gradient descent process can be expressed as θ ← θ − η·∂L/∂θ, where θ is the parameter to be optimized (including w and b), L is the loss function, and η is the learning rate, which controls the step size of gradient descent.
  • the backpropagation process utilizes the chain rule for partial derivatives: the gradient of the parameters of a previous layer can be recursively calculated from the gradient of the parameters of the next layer. For example, for the weight w_ij connecting node j to node i, ∂L/∂w_ij = (∂L/∂s_i)·(∂s_i/∂w_ij), where s_i is the weighted sum of the inputs on node i; a numerical sketch of such a gradient-descent update follows below.
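  • A minimal numerical sketch of the gradient-descent update θ ← θ − η·∂L/∂θ described above, applied to a one-layer model with a mean-squared-error loss; the toy data, learning rate, and number of iterations are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 3))                 # training inputs (sample values)
true_w, true_b = np.array([1.5, -2.0, 0.5]), 0.3
y = x @ true_w + true_b                       # sample labels (ideal values)

w, b, eta = np.zeros(3), 0.0, 0.1             # parameters to optimise and learning rate
for _ in range(500):
    y_hat = x @ w + b                         # model prediction
    err = y_hat - y
    grad_w = 2 * x.T @ err / len(x)           # dL/dw for L = mean((y_hat - y)^2)
    grad_b = 2 * err.mean()                   # dL/db
    w -= eta * grad_w                         # theta <- theta - eta * dL/dtheta
    b -= eta * grad_b
print(w, b)                                   # approaches true_w, true_b
```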
  • Federated learning (FL): the FL architecture is the most widely used training architecture in the current FL field.
  • the FedAvg algorithm is the basic algorithm of FL. Its algorithm flow is roughly as follows:
  • first, the central node initializes the model to be trained and broadcasts it to all client devices.
  • then, the central node aggregates and collects the local training results from all (or some) clients. Assume that the set of clients that upload their local models in round t is S_t. The central node uses the number of samples of each corresponding client as the weight to perform a weighted average and obtain a new global model; a typical update rule is w_{t+1} = Σ_{k∈S_t} (n_k / Σ_{j∈S_t} n_j)·w_t^k, where n_k is the number of samples of client k and w_t^k is the local model reported by client k in round t. The central node then broadcasts the latest version of the global model to all client devices for a new round of training (a sketch of this weighted aggregation is given below).
  • in addition to reporting their local models, clients can also report the local gradients obtained from training; the central node then averages the local gradients and updates the global model along the direction of the average gradient.
  • Distributed nodes collect local datasets, perform local training, and report the local training results (models or gradients) to the central node.
  • the central node itself does not have a dataset; it is only responsible for fusing the training results of distributed nodes to obtain a global model and send it to the distributed nodes.
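  • A minimal sketch of the FedAvg-style aggregation described above, in which local models are averaged with weights proportional to each client's number of samples; the toy local-training step and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
global_w = np.zeros(4)                          # model broadcast by the central node

def local_train(w, n_samples):
    """Toy local training: one gradient step on a client-specific least-squares task."""
    x = rng.normal(size=(n_samples, 4))
    y = x @ np.array([1.0, -1.0, 0.5, 2.0])
    grad = 2 * x.T @ (x @ w - y) / n_samples
    return w - 0.1 * grad

for round_t in range(10):
    client_sizes = [50, 120, 30]                # n_k: samples held by each client
    local_models = [local_train(global_w.copy(), n) for n in client_sizes]
    weights = np.array(client_sizes) / sum(client_sizes)
    # w_{t+1} = sum_k (n_k / sum_j n_j) * w_t^k
    global_w = sum(a * w_k for a, w_k in zip(weights, local_models))
print(global_w)
```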
  • Decentralized learning: different from federated learning, decentralized learning is another distributed learning architecture.
  • the design goal f(x) of a decentralized learning system is generally the mean of the goals f_i(x) of each node, that is, f(x) = (1/n)·Σ_{i=1}^{n} f_i(x), where n is the number of distributed nodes and x is the parameter to be optimized. In machine learning, x is the parameter of the machine learning (such as neural network) model.
  • each node uses its local data and local target f_i(x) to calculate the local gradient, which is then sent to the neighbouring nodes with which it can communicate. After any node receives the gradient information sent by its neighbours, it can update the parameter x of its local model accordingly, for example by taking a gradient step along the average of its own and the received local gradients (a sketch of this update follows below).
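  • A minimal sketch of the decentralized update described above, under the assumption (not stated explicitly in this application) that each node takes a gradient step along the average of its own and its neighbours' local gradients; the ring topology and the quadratic local objectives are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, eta = 4, 3, 0.05
# ring topology: each node can communicate with its two neighbours
neighbours = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}
targets = [rng.normal(size=dim) for _ in range(n_nodes)]  # node-specific optima of f_i
x = [np.zeros(dim) for _ in range(n_nodes)]               # local copies of the parameter

def local_grad(i):
    # f_i(x) = ||x - target_i||^2, so grad f_i(x) = 2 (x - target_i)
    return 2 * (x[i] - targets[i])

for _ in range(200):
    grads = [local_grad(i) for i in range(n_nodes)]       # computed and exchanged
    for i in range(n_nodes):
        received = [grads[j] for j in neighbours[i]]
        avg = (grads[i] + sum(received)) / (1 + len(received))
        x[i] = x[i] - eta * avg                           # step along the average gradient

print(x[0], np.mean(targets, axis=0))  # local copies drift toward the mean objective
```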
  • in wireless communication systems (e.g., the systems shown in Figures 1a and 1b), communication nodes generally have both signal transceiver capabilities and computing capabilities.
  • network devices with computing capabilities primarily provide computing power to support signal transceiver capabilities (e.g., performing signal transmission and reception processing) to enable communication between the network device and other communication nodes.
  • communication nodes may have excess computing power beyond supporting the aforementioned communication tasks. Therefore, how to utilize this computing power is a pressing technical issue.
  • a communication node can serve as a participating node in an AI learning system, applying its computing power to a specific part of the AI learning system (e.g., the AI learning system described in FIG2d or FIG2e ).
  • BERT: bidirectional encoder representations from transformers.
  • GPT: generative pre-trained transformers.
  • during model inference, it is necessary to switch between multiple models based on conditions; and to obtain a new model, it is necessary to retrain the model (or fine-tune the model).
  • These processes require operations such as model registration/identification and retraining, which increases the complexity of model management.
  • FIG3 is a schematic diagram of an implementation of the communication method provided in this application.
  • the method includes the following steps.
  • the method is illustrated by taking the first communication device and the second communication device as the execution subjects of the interaction diagram as an example, but the present application does not limit the execution subjects of the interaction diagram.
  • the execution subject of the method can be replaced by a chip, a chip system, a processor, a logic module or software in the communication device.
  • the first communication device can be a terminal device and the second communication device can be a network device, or the first communication device and the second communication device are both network devices, or the first communication device and the second communication device are both terminal devices.
  • S301: the second communication device sends one or more sample data, and correspondingly, the first communication device receives the one or more sample data.
  • S302: the first communication device processes the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data, wherein the inference result has at least one data feature identical to that of the sample data.
  • in this application, the neural network model may also be referred to by other names, such as an artificial intelligence (AI) model, an AI neural network model, a machine learning model, or an AI processing model.
  • sample data can be used as part of the input of a neural network model to ensure that the inference results output by the neural network model have at least one data feature in common with the sample data.
  • sample data can be replaced by other terms such as reference data, anchor data, example data, or guidance data.
  • the data feature includes at least one of the following: data dimension, parameter quantity, data content, data type, or physical quantity.
  • FIG4a shows an example of the implementation process of the above-mentioned step S302.
  • the first neural network model can be deployed in the first communication device, and the input of the first neural network model can include the one or more sample data and the inference data, and the output of the first neural network model can include the inference result corresponding to the inference data.
  • the first neural network model may be a model for time domain channel prediction, that is, the first neural network model may predict the channel information of the next time unit based on the channel information of the past k (k is a positive integer) time units (such as frames, subframes, time slots, symbols, etc.).
  • each sample data in the one or more sample data includes the channel information of the first p (p is a positive integer) time units and the channel information of the p+1th time unit
  • the inference data may include the channel information of the past k time units
  • the inference result corresponding to the inference data may include the channel information of the k+1th time unit after the k time units.
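  • As a hedged illustration (the channel values, p, and k below are illustrative assumptions), sample data and inference data for the time-domain channel prediction example above could be organised as follows.

```python
import numpy as np

rng = np.random.default_rng(0)
channel = rng.normal(size=64) + 1j * rng.normal(size=64)  # toy per-time-unit channel values

p = k = 8  # past time units used per sample datum / per inference input (assumed)

# Each sample datum: channel info of the first p time units plus that of time unit p+1.
sample_data = [(channel[i:i + p], channel[i + p]) for i in range(0, 32, p + 1)]

# Inference data: channel info of the most recent k time units;
# the inference result should be the channel info of the (k+1)-th time unit.
inference_data = channel[-k:]
print(len(sample_data), inference_data.shape)
```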
  • the first neural network may be a model for frequency domain channel prediction, that is, the first neural network model may predict the channel information of all frequency domain units based on the channel information of some frequency domain units (such as subcarriers, part of the bandwidth, etc.).
  • for example, the first neural network model may predict the channel information of K frequency domain units based on the channel information of k frequency domain units, where k is a positive integer and K is an integer greater than k.
  • each sample data in one or more sample data includes channel information of p subcarriers and channel information of P subcarriers
  • the inference data may include channel information of k subcarriers
  • the inference result corresponding to the inference data may include channel information of K subcarriers.
  • p is equal to k and P is equal to K.
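To make the channel prediction examples above concrete, the following is a shape-level sketch of how the one or more sample data and the inference data could be assembled for the time domain case (the frequency domain case is analogous, with subcarriers in place of time units). The use of NumPy, the tensor sizes, and the hypothetical model call are illustrative assumptions, not details taken from this application.

```python
import numpy as np

# Time domain channel prediction (illustrative shapes only):
# each sample datum pairs the channel information of p past time units with the
# channel information of the (p+1)-th time unit; the inference data carry the
# past k time units, and the inference result is the (k+1)-th time unit.
p, k, n_antennas = 4, 4, 8                            # assumed sizes; here p == k

sample_inputs  = np.random.randn(10, p, n_antennas)   # 10 sample data: p past units each
sample_labels  = np.random.randn(10, 1, n_antennas)   # channel info of the (p+1)-th unit
inference_data = np.random.randn(1, k, n_antennas)    # past k time units to predict from

# inference_result = first_neural_network_model(sample_inputs, sample_labels, inference_data)
# expected shape of inference_result: (1, 1, n_antennas), sharing the data features
# (dimension, physical quantity, etc.) of the sample labels
```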
  • sample data can participate in the processing of the first neural network model in a variety of ways, and some implementation examples will be provided below for description.
  • one or more sample data may participate in the processing of the first neural network model based on a cross-attention approach.
  • the first neural network model may include a first module and a second module.
  • the first neural network model may be a transformer model
  • the first module may be a transformer encoder
  • the second module may be a transformer decoder.
  • the one or more sample data are used as input to the first module;
  • after the first module processes the one or more sample data, a query (Q) vector and a key (K) vector may be obtained;
  • the inference data may be used as a value (V) vector;
  • Q, K, and V are processed by the cross-attention layer of the second module to achieve information fusion and obtain the final output; a minimal sketch of this arrangement is given below.
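The following is a minimal PyTorch sketch of the cross-attention arrangement just described: the first module (a transformer encoder layer here) processes the sample data, the query and key are derived from that encoding, the inference data serve as the value, and a cross-attention layer in the second module fuses them. The class name, dimensions, and the assumption that the sample sequence and the inference sequence have the same length are illustrative choices, not details from this application.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        # "first module": encodes the one or more sample data
        self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        # cross-attention layer of the "second module"
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.out = nn.Linear(dim, dim)

    def forward(self, samples: torch.Tensor, inference: torch.Tensor) -> torch.Tensor:
        # samples, inference: (batch, seq_len, dim); equal seq_len assumed so that
        # the value tensor is compatible with the attention weights
        enc = self.encoder(samples)          # first module processes the sample data
        q = self.q_proj(enc)                 # query obtained from the encoded samples
        k = self.k_proj(enc)                 # key obtained from the encoded samples
        v = inference                        # inference data used as the value
        fused, _ = self.cross_attn(q, k, v)  # information fusion via cross-attention
        return self.out(fused)               # final output (inference result)

fusion = CrossAttentionFusion()
samples = torch.randn(2, 16, 64)             # one or more sample data
inference = torch.randn(2, 16, 64)           # inference data
result = fusion(samples, inference)          # (2, 16, 64)
```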
  • one or more sample data may be used as pre-inputs to participate in the processing of the first neural network model.
  • the one or more sample data and the inference data are concatenated and input into the first neural network model, i.e., the one or more sample data serve as a pre-input; after being processed by the first neural network model, the final output is obtained (see the sketch below).
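A minimal sketch of this pre-input (concatenation) variant is shown below; the shapes and the hypothetical model call are assumptions for illustration.

```python
import torch

samples   = torch.randn(2, 4, 64)    # (batch, sample tokens, features): one or more sample data
inference = torch.randn(2, 16, 64)   # (batch, inference tokens, features): inference data

# the sample data are concatenated in front of the inference data along the
# sequence dimension and act as a pre-input to the first neural network model
model_input = torch.cat([samples, inference], dim=1)          # shape (2, 20, 64)
# inference_result = first_neural_network_model(model_input)  # hypothetical call
```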
  • one or more sample data may be used to determine neural network model parameters and participate in the processing of the first neural network model.
  • the one or more sample data can be used to determine some neural network model parameters, and these partial parameters can be used to process (e.g., generate, adjust, or modify) one or more neural network layers in the first neural network model to obtain a processing result; thereafter, the inference data is processed based on the processing result to obtain an inference result corresponding to the inference data (see the sketch below).
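Below is a minimal, hypernetwork-style sketch of this variant: a summary of the sample data is used to generate the parameters of one neural network layer, and the generated layer then processes the inference data. The pooling choice, layer shapes, and names are illustrative assumptions rather than details from this application.

```python
import torch
import torch.nn as nn

class SampleConditionedLayer(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.dim = dim
        # maps a pooled summary of the sample data to a (dim x dim) weight matrix
        self.weight_generator = nn.Linear(dim, dim * dim)

    def forward(self, samples: torch.Tensor, inference: torch.Tensor) -> torch.Tensor:
        # samples: (batch, n_samples, dim); inference: (batch, n_tokens, dim)
        summary = samples.mean(dim=1)               # summarize the one or more sample data
        w = self.weight_generator(summary)          # determine layer parameters from the samples
        w = w.view(-1, self.dim, self.dim)          # (batch, dim, dim)
        return torch.bmm(inference, w)              # process the inference data with the generated layer

layer = SampleConditionedLayer()
out = layer(torch.randn(2, 4, 64), torch.randn(2, 16, 64))   # (2, 16, 64)
```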
  • the manner in which the one or more sample data participate in the processing is not limited to the implementation examples provided in Figures 4b to 4d above.
  • the one or more sample data may also participate in the processing of the first neural network model in other ways, which is not limited here.
  • when at least one of the following conditions is met, the first communication device processes the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data:
  • the inference performance of the first neural network model is lower than (or equal to) a threshold;
  • a difference between the data distribution of the inference data and the data distribution of the inference data input into the first neural network model for the previous k times is greater than (or equal to) a threshold value, where k is a positive integer;
  • the communication state (of the first communication device) changes.
  • the first communication device can add sample data to the input of the first neural network model, that is, the first communication device can process the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data.
  • otherwise, the first communication device can process only the inference data based on the first neural network model to obtain the inference result corresponding to the inference data, that is, the input of the first neural network model may not include sample data, so as to reduce overhead (a minimal sketch of this trigger logic is given below).
  • the first communication device may locally determine whether at least one of the above conditions is met, i.e., the first communication device may trigger the execution of step S302 based on the result of the local determination.
  • the first communication device may determine whether at least one of the above conditions is met based on an instruction from another communication device, i.e., the first communication device may determine whether the triggering condition of step S302 is met based on the instruction from the other communication device.
  • the first communication device may trigger the execution of step S302 based on the instruction from the other communication device.
  • the other communication device may include the management module and/or data storage module described below.
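A minimal sketch of the local determination described above is given below: sample data are added to the model input only when at least one trigger condition holds; otherwise only the inference data are processed to save overhead. The threshold names and values are assumptions for illustration.

```python
def should_use_sample_data(inference_performance: float,
                           distribution_difference: float,
                           state_changed: bool,
                           performance_threshold: float = 0.9,
                           difference_threshold: float = 0.1) -> bool:
    """Return True if at least one of the trigger conditions is met."""
    return (inference_performance <= performance_threshold      # performance too low
            or distribution_difference >= difference_threshold  # data distribution drifted
            or state_changed)                                    # communication state changed

# if should_use_sample_data(...):
#     inference_result = model(sample_data, inference_data)   # sample data added to the input
# else:
#     inference_result = model(inference_data)                # sample data omitted to reduce overhead
```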
  • the first communication device can process the one or more sample data and the inference data based on the first neural network model in step S302 to obtain an inference result corresponding to the inference data.
  • the inference result obtained by the first communication device based on the first neural network model has at least one data feature identical to that of the sample data; that is, based on the example/guidance provided by the sample data, the first communication device can obtain an inference result sharing at least one data feature with the sample data, and can adjust model inference for a specific scenario based on the sample data, thereby reducing the complexity of model management.
  • the first neural network model is updated to obtain a second neural network model.
  • when the update frequency of the one or more sample data is greater than (or equal to) a threshold, or when the performance corresponding to the inference result corresponding to the inference data is lower than (or equal to) a threshold, it can be determined that the performance of the current first neural network model is poor. Accordingly, the first neural network model can be updated to obtain a second neural network model, that is, a second neural network model with better performance is obtained by training the neural network model.
  • the update frequency of the one or more sample data is less than (or equal to) a threshold, or when the performance corresponding to the inference result corresponding to the inference data is higher than (or equal to) a threshold, it can be determined that the performance of the current first neural network model is superior. Accordingly, there is no need to update the first neural network model to avoid unnecessary overhead.
  • the communication device that updates the first neural network model to obtain the second neural network model can be the first communication device, or other communication devices (for example, the other communication device can include the model training module described later).
  • before step S301, the method further includes: the first communication device sending request information for requesting the one or more sample data, so that the recipient of the request information provides the one or more sample data to the first communication device based on the request information.
  • the first communication device may include a module for storing/caching sample data. Accordingly, the first communication device may locally obtain the one or more sample data based on the module.
  • step S301 of the method shown in FIG3 is an optional step, i.e., the first communication device can obtain the one or more sample data without receiving the sample data.
  • the method shown in FIG3 can be applied to the communication scenario shown in FIG5.
  • the communication scenario includes the following multiple modules.
  • Data collection module: collects inference data or sample data.
  • Model training module: trains or fine-tunes the neural network model.
  • Model inference module: performs inference based on the inference data and the sample data to obtain inference results.
  • Data cache module: caches the sample data used for model inference.
  • Data storage module: stores sample data, provides sample data to the data cache module (for example, the sample data can be obtained through retrieval: the data storage module stores a large amount of sample data, the data cache module holds a small amount of sample data for direct use, and retrieval selects a small amount of relevant sample data from the large amount of stored sample data for model inference), provides training data to the model training module, and so on.
  • the communication scenario shown in Figure 5 may further include a management module.
  • the management module is used to monitor model performance, manage sample data usage of the model inference module, manage model training or fine-tuning of the model training module, etc.
  • the data caching module can be used to obtain, delete, and update sample data.
  • the data caching module can be triggered by the management module, such as when model inference performance degrades, data distribution changes, or the status of the first communication device changes.
  • the data caching module can obtain sample data in two ways: one is to obtain it from the data collection module, and the other is to retrieve it from the data storage module.
  • the data caching module can obtain sample data from the data collection module. For example, the data caching module initiates sample data collection from the data collection module, configuring the type and quantity of sample data to be collected; the data collection module then initiates data collection from other communication devices (e.g., the first communication device) based on the sample data configuration. For example, for a channel estimation task, the collected sample data may be channel information obtained by estimating a reference signal.
  • the data cache module can obtain sample data from the data storage module.
  • the data cache module can retrieve sample data from the sample data storage module based on the inference data for model inference.
  • the data storage module can store multiple sample data from different scenarios, retrieve sample data similar to the inference data to serve as examples, and send the retrieved sample data to the data cache module (a minimal retrieval sketch is given below).
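The retrieval step can be illustrated with a minimal sketch: given the inference data, the stored sample data closest to it under some similarity measure are selected and handed to the data cache module. The Euclidean-distance metric, shapes, and function name are assumptions for illustration.

```python
import numpy as np

def retrieve_similar_samples(stored_samples: np.ndarray,
                             inference_data: np.ndarray,
                             num_samples: int = 4) -> np.ndarray:
    """Select the stored sample data most similar to the inference data."""
    # stored_samples: (num_stored, feature_dim); inference_data: (feature_dim,)
    distances = np.linalg.norm(stored_samples - inference_data, axis=1)
    nearest = np.argsort(distances)[:num_samples]
    return stored_samples[nearest]          # sent to the data cache module as examples

retrieved = retrieve_similar_samples(np.random.randn(1000, 32), np.random.randn(32))
```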
  • the data cache module can cache the acquired sample data, and the cached sample data can be used for model inference, that is, as part of the input for real-time model inference.
  • sample data in the data cache module may also be stored in the data storage module.
  • the data storage module can be used to store (or long-term store) sample data and provide sample data retrieval and other functions to other modules (e.g., data cache module, model training module, etc.).
  • Specific functions include one or more of the following:
  • Data addition: add sample data from the data collection module or the data cache module to the storage.
  • Data deletion: delete specific sample data from the storage.
  • Data update: delete specific sample data and add new sample data.
  • Data monitoring: periodic or event-triggered monitoring of the stored data to drive addition/deletion/update operations, for example monitoring the relevance of long-term stored sample data to the currently collected data.
  • Data retrieval: retrieve sample data from the storage and provide it to the data cache module.
  • Provide training data: provide sample data to the model training module as training data. (A minimal class-level sketch of these functions is given below.)
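The functions listed above can be pictured with a minimal class-level sketch of a data storage module; the keying scheme, distance-based retrieval, and method names are illustrative assumptions rather than details of this application.

```python
import numpy as np

class DataStorageModule:
    """Minimal sketch of the storage functions: add, delete, update, retrieval,
    and providing training data."""

    def __init__(self):
        self._store = {}                                   # key -> sample data (np.ndarray)

    def add(self, key, sample):                            # data addition
        self._store[key] = sample

    def delete(self, key):                                 # data deletion
        self._store.pop(key, None)

    def update(self, key, new_sample):                     # data update = delete + add
        self.delete(key)
        self.add(key, new_sample)

    def retrieve(self, query, num=4):                      # provide retrieved samples to the cache module
        keys = list(self._store)
        dists = [float(np.linalg.norm(self._store[k] - query)) for k in keys]
        ranked = [k for _, k in sorted(zip(dists, keys))]
        return [self._store[k] for k in ranked[:num]]

    def training_data(self):                               # provide training data to the model training module
        return list(self._store.values())
```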
  • the first communication device can perform model inference based on one or more sample data. Accordingly, the first communication device at least includes the model inference module in FIG. 5 .
  • the second communication device can be used to transmit sample data.
  • the second communication device can include one or more of the data collection module, data cache module, and data storage module shown in FIG5 .
  • the second communication device can be implemented in a variety of ways, which will be described below with reference to some examples.
  • Implementation method 1: in the method shown in FIG. 3, the second communication device may be used to cache sample data, that is, the second communication device at least includes the data cache module shown in FIG. 5.
  • the process of the second communication device sending the one or more sample data includes: the second communication device sending the one or more sample data to the communication device that deploys the first neural network model or the communication device used to store data.
  • the second communication device can send the one or more sample data to the communication device that deploys the first neural network model, so that the recipient of the sample data can implement inference of the neural network model based on the sample data.
  • the second communication device can also send the one or more sample data to the communication device used to store data, so that the recipient of the sample data can implement storage of the sample data.
  • the second communication device receives the one or more sample data to acquire the one or more sample data, wherein the one or more sample data come from a communication device for collecting data (e.g., a communication device including the data collection module in FIG. 5 ), or the one or more sample data come from a communication device for storing data (e.g., a communication device including the data storage module in FIG. 5 ).
  • before the second communication device receives the one or more sample data, the method further includes: the second communication device sending request information for requesting the one or more sample data, so that a recipient of the request information can provide the one or more sample data to the second communication device based on the request information.
  • Implementation method 2: in the method shown in FIG. 3, the second communication device may be used to store sample data, that is, the second communication device at least includes the data storage module shown in FIG. 5.
  • Implementation method 3: in the method shown in FIG. 3, the second communication device may be used to collect sample data, that is, the second communication device at least includes the data collection module shown in FIG. 5.
  • the process of the second communication device sending the one or more sample data may include: the second communication device sending the one or more sample data to a communication device for caching data.
  • the second communication device may send the one or more sample data to a communication device for caching data (e.g., a communication device including the data caching module in FIG. 5 ), so that a recipient of the sample data can cache the sample data based on the sample data.
  • the recipient of the sample data can send the one or more sample data to the communication device that deploys the first neural network model to implement inference of the neural network model.
  • the method further includes: the second communication device receiving request information for requesting the one or more sample data.
  • the second communication device may further receive request information for requesting the one or more sample data, so that the second communication device can provide the one or more sample data based on the request information.
  • when at least one of the following conditions is met, the second communication device sends the one or more sample data:
  • the inference performance of the first neural network model is lower than (or equal to) a threshold;
  • a difference between the data distribution of the inference data and the data distribution of the inference data inputted into the first neural network model for the previous k times is greater than (or equal to) a threshold value, where k is a positive integer;
  • the communication state of the communication device deploying the first neural network model changes.
  • the second communication device can send the one or more sample data so that a recipient of the one or more sample data can add the sample data to the input of the first neural network model.
  • the first communication device can process the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data.
  • the second communication device may not send the one or more sample data to the first communication device, that is, the input of the first neural network model may not include the sample data, to reduce overhead.
  • the second communication device can locally determine whether at least one of the above items is satisfied, that is, the second communication device can trigger the transmission of one or more sample data based on the result of the local determination.
  • the second communication device can determine whether at least one of the above items is satisfied based on the instructions of other communication devices, that is, the second communication device can determine whether the triggering condition for sending one or more sample data is satisfied based on the instructions of other communication devices.
  • the second communication device can trigger the transmission of one or more sample data based on the instructions of other communication devices.
  • the other communication device may include the management module and/or data storage module described above, etc.
  • the method further includes: updating the one or more sample data.
  • the one or more sample data may be updated to improve the processing performance of the neural network model through the updated sample data.
  • the one or more sample data are updated when at least one of the following conditions is met:
  • the inference performance corresponding to the inference result corresponding to the inference data satisfies the first condition;
  • the data distribution of the inference data satisfies the second condition
  • the communication state of the communication device deploying the first neural network model changes.
  • the one or more sample data can be updated to improve the processing performance of the neural network model through the updated sample data, or to reduce the complexity of model processing through the updated sample data (for example, when the update reduces the amount of sample data).
  • updating the one or more sample data may include adding sample data, removing sample data, or replacing sample data, as in the sketch below.
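A minimal sketch of such an update of the cached sample data follows; the list-based cache and argument names are assumptions for illustration.

```python
def update_sample_cache(cache, new_samples=None, drop_indices=None):
    """Add, remove, or (by combining both) replace entries of the sample data cache."""
    if drop_indices:                                             # remove sample data
        drop = set(drop_indices)
        cache = [s for i, s in enumerate(cache) if i not in drop]
    if new_samples:                                              # add sample data
        cache = cache + list(new_samples)
    return cache

# replace the oldest cached sample with a newly collected one:
# cache = update_sample_cache(cache, new_samples=[new_sample], drop_indices=[0])
```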
  • the different modules of the communication scenario shown in FIG5 may be independently configured, or some modules may be integrated into the same device/communication apparatus. Some implementation examples are provided below for introduction.
  • the data cache module and the data storage module can be set in the same device/communication apparatus.
  • the same device/communication apparatus can provide the functions of the data cache module and the data storage module.
  • the functions of these two modules can be referred to the relevant description of Figure 5 above.
  • the data collection module, data cache module, and data storage module can be provided in the same device/communication apparatus.
  • the same device/communication apparatus can provide the functions of the data collection module, the data cache module, and the data storage module.
  • the functions of these three modules can be referred to the relevant description of Figure 5 above.
  • the data collection module and the data cache module can be set in the same device/communication device.
  • the same device/communication device can provide the functions of the data collection module and the data cache module.
  • the functions of these two modules can be referred to the relevant description of Figure 5 above.
  • the data cache module and the model inference module can be set in the same device/communication device.
  • the same device/communication device can provide the functions of the data cache module and the model inference module.
  • the functions of these two modules can be referred to the relevant description of Figure 5 above.
  • an embodiment of the present application provides a communication device 700.
  • This communication device 700 can implement the functions of the second communication device or the first communication device in the above-mentioned method embodiment, thereby also achieving the beneficial effects of the above-mentioned method embodiment.
  • the communication device 700 can be the first communication device (or the second communication device), or it can be an integrated circuit or component, such as a chip, within the first communication device (or the second communication device).
  • the transceiver unit 702 may include a sending unit and a receiving unit, which are respectively used to perform sending and receiving.
  • when the device 700 is used to execute the method executed by the first communication device in the aforementioned embodiment, the device 700 includes a processing unit 701; the processing unit 701 is used to obtain one or more sample data; the processing unit 701 is also used to process the one or more sample data and inference data based on the first neural network model to obtain an inference result corresponding to the inference data; wherein the inference result has at least one data feature identical to that of the sample data.
  • when the device 700 is used to execute the method executed by the second communication device in the aforementioned embodiment, the device 700 includes a processing unit 701 and a transceiver unit 702; the processing unit 701 is used to obtain one or more sample data, wherein the one or more sample data and inference data are to be processed by the first neural network model to obtain an inference result corresponding to the inference data, and the inference result has at least one data feature identical to that of the sample data; the transceiver unit 702 is used to send the one or more sample data.
  • Fig. 8 is another schematic structural diagram of a communication device 800 provided in this application.
  • the communication device 800 includes a logic circuit 801 and an input/output interface 802.
  • the communication device 800 may be a chip or an integrated circuit.
  • the transceiver unit 702 shown in FIG. 7 may be a communication interface, which may be the input/output interface 802 in FIG. 8; the input/output interface 802 may include an input interface and an output interface.
  • the communication interface may be a transceiver circuit, which may include an input interface circuit and an output interface circuit.
  • the input/output interface 802 is used to obtain one or more sample data; the logic circuit 801 is used to process the one or more sample data and inference data based on the first neural network model to obtain an inference result corresponding to the inference data; wherein the inference result has at least one data feature identical to that of the sample data.
  • the logic circuit 801 is used to obtain one or more sample data, wherein the one or more sample data and inference data are to be processed by the first neural network model to obtain an inference result corresponding to the inference data, and the inference result has at least one data feature identical to that of the sample data; the input/output interface 802 is used to send the one or more sample data.
  • the logic circuit 801 and the input/output interface 802 may also execute other steps executed by the first communication device or the second communication device in any embodiment and achieve corresponding beneficial effects, which will not be described in detail here.
  • the processing unit 701 shown in FIG. 7 may be the logic circuit 801 in FIG. 8 .
  • the logic circuit 801 may be a processing device, and the functions of the processing device may be partially or entirely implemented by software.
  • the processing device may include a memory and a processor, wherein the memory is used to store a computer program, and the processor reads and executes the computer program stored in the memory to perform corresponding processing and/or steps in any one of the method embodiments.
  • the processing device may include only a processor.
  • a memory for storing the computer program is located outside the processing device, and the processor is connected to the memory via circuits/wires to read and execute the computer program stored in the memory.
  • the memory and processor may be integrated or physically separate.
  • the processing device may be one or more chips, or one or more integrated circuits.
  • the processing device may be one or more field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), system-on-chips (SoCs), central processing units (CPUs), network processors (NPs), digital signal processing circuits (DSPs), microcontrollers (MCUs), programmable logic devices (PLDs), or other integrated chips, or any combination of the above chips or processors.
  • FIG 9 shows a communication device 900 involved in the above-mentioned embodiments provided in an embodiment of the present application.
  • the communication device 900 can specifically be a communication device serving as a terminal device in the above-mentioned embodiments.
  • the example shown in FIG. 9 is one in which the communication device is implemented by the terminal device (or a component in the terminal device).
  • the communication device 900 may include but is not limited to at least one processor 901 and a communication port 902 .
  • the transceiver unit 702 shown in FIG. 7 may be a communication interface, which may be the communication port 902 in FIG. 9; the communication port 902 may include an input interface and an output interface.
  • the communication port 902 may be a transceiver circuit, which may include an input interface circuit and an output interface circuit.
  • the device may also include at least one of a memory 903 and a bus 904.
  • the at least one processor 901 is used to control and process the actions of the communication device 900.
  • the processor 901 can be a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It can implement or execute the various exemplary logic blocks, modules, and circuits described in conjunction with the disclosure of this application.
  • the processor can also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of a digital signal processor and a microprocessor, and so on.
  • the communication device 900 shown in Figure 9 can be specifically used to implement the steps implemented by the terminal device in the aforementioned method embodiment and achieve the corresponding technical effects of the terminal device.
  • the specific implementation methods of the communication device shown in Figure 9 can refer to the description in the aforementioned method embodiment and will not be repeated here.
  • FIG10 is a schematic diagram of the structure of the communication device 1000 involved in the above embodiment provided in the embodiment of the present application.
  • Device 1000 can specifically be a communication device as a network device in the above embodiment.
  • the example shown in FIG. 10 is one in which the communication device is implemented by the network device (or a component in the network device); the structure of the communication device may refer to the structure shown in FIG. 10.
  • the communication device 1000 includes at least one processor 1011 and at least one network interface 1014. Further optionally, the communication device also includes at least one memory 1012, at least one transceiver 1013 and one or more antennas 1015.
  • the processor 1011, the memory 1012, the transceiver 1013 and the network interface 1014 are connected, for example, via a bus. In an embodiment of the present application, the connection may include various interfaces, transmission lines or buses, etc., which are not limited in this embodiment.
  • the antenna 1015 is connected to the transceiver 1013.
  • the network interface 1014 is used to enable the communication device to communicate with other communication devices through a communication link.
  • the network interface 1014 may include a network interface between the communication device and the core network device, such as an S1 interface, and the network interface may include a network interface between the communication device and other communication devices (such as other network devices or core network devices), such as an X2 or Xn interface.
  • the transceiver unit 702 shown in FIG. 7 may be a communication interface, which may be the network interface 1014 in FIG. 10; the network interface 1014 may include an input interface and an output interface.
  • the network interface 1014 may be a transceiver circuit, which may include an input interface circuit and an output interface circuit.
  • Processor 1011 is primarily used to process communication protocols and communication data, control the entire communication device, execute software programs, and process software program data, for example, to support the communication device in performing the actions described in the embodiments.
  • the communication device may include a baseband processor and a central processing unit.
  • the baseband processor is primarily used to process communication protocols and communication data, while the central processing unit is primarily used to control the entire terminal device, execute software programs, and process software program data.
  • Processor 1011 in Figure 10 may integrate the functions of both a baseband processor and a central processing unit. Those skilled in the art will appreciate that the baseband processor and the central processing unit may also be independent processors interconnected via a bus or other technology.
  • a terminal device may include multiple baseband processors to accommodate different network standards, multiple central processing units to enhance its processing capabilities, and various components of the terminal device may be connected via various buses.
  • the baseband processor may also be referred to as a baseband processing circuit or a baseband processing chip.
  • the central processing unit may also be referred to as a central processing circuit or a central processing chip.
  • the functionality for processing communication protocols and communication data may be built into the processor or stored in memory as a software program, which is executed by the processor to implement the baseband processing functionality.
  • the memory is primarily used to store software programs and data.
  • Memory 1012 can exist independently and be connected to processor 1011. Alternatively, memory 1012 and processor 1011 can be integrated together, for example, within a single chip.
  • Memory 1012 can store program code for executing the technical solutions of the embodiments of the present application, and execution is controlled by processor 1011. The various computer program codes executed can also be considered drivers for processor 1011.
  • Figure 10 shows only one memory and one processor. In an actual terminal device, there may be multiple processors and multiple memories.
  • the memory may also be referred to as a storage medium or a storage device.
  • the memory may be a storage element on the same chip as the processor, i.e., an on-chip storage element, or an independent storage element, which is not limited in the present embodiment.
  • the transceiver 1013 can be used to support the reception or transmission of radio frequency signals between the communication device and the terminal.
  • the transceiver 1013 can be connected to the antenna 1015.
  • the transceiver 1013 includes a transmitter Tx and a receiver Rx. Specifically, one or more antennas 1015 can receive radio frequency signals.
  • the receiver Rx of the transceiver 1013 is used to receive the radio frequency signal from the antenna, convert the radio frequency signal into a digital baseband signal or a digital intermediate frequency signal, and provide the digital baseband signal or digital intermediate frequency signal to the processor 1011 so that the processor 1011 can further process the digital baseband signal or digital intermediate frequency signal, such as demodulation and decoding.
  • the transmitter Tx in the transceiver 1013 is also used to receive a modulated digital baseband signal or digital intermediate frequency signal from the processor 1011, convert the modulated digital baseband signal or digital intermediate frequency signal into a radio frequency signal, and transmit the radio frequency signal through one or more antennas 1015.
  • the receiver Rx can selectively perform one or more stages of down-mixing and analog-to-digital conversion on the RF signal to obtain a digital baseband signal or a digital intermediate frequency signal.
  • the order of the down-mixing and analog-to-digital conversion processes is adjustable.
  • the transmitter Tx can selectively perform one or more stages of up-mixing and digital-to-analog conversion on the modulated digital baseband signal or digital intermediate frequency signal to obtain a RF signal.
  • the order of the up-mixing and digital-to-analog conversion processes is adjustable.
  • the digital baseband signal and the digital intermediate frequency signal may be collectively referred to as digital signals.
  • the transceiver 1013 may also be referred to as a transceiver unit, a transceiver, a transceiver device, etc.
  • a device in the transceiver unit that implements a receiving function may be referred to as a receiving unit
  • a device in the transceiver unit that implements a transmitting function may be referred to as a transmitting unit. That is, the transceiver unit includes a receiving unit and a transmitting unit.
  • the receiving unit may also be referred to as a receiver, an input port, a receiving circuit, etc.
  • the transmitting unit may be referred to as a transmitter, a transmitting circuit, etc.
  • the communication device 1000 shown in Figure 10 can be specifically used to implement the steps implemented by the network device in the aforementioned method embodiment, and to achieve the corresponding technical effects of the network device.
  • the specific implementation methods of the communication device 1000 shown in Figure 10 can refer to the description in the aforementioned method embodiment, and will not be repeated here one by one.
  • FIG11 is a schematic structural diagram of the communication device involved in the above-mentioned embodiment provided in an embodiment of the present application.
  • the communication device 110 includes, for example, modules, units, elements, circuits, or interfaces, which are appropriately configured together to implement the technical solutions provided in this application.
  • the communication device 110 can be the terminal device or network device described above, or a component (such as a chip) in these devices, used to implement the method described in the foregoing method embodiments.
  • the communication device 110 includes one or more processors 111.
  • the processor 111 can be a general-purpose processor or a dedicated processor. For example, it can be a baseband processor or a central processing unit.
  • the baseband processor can be used to process communication protocols and communication data
  • the central processing unit can be used to control the communication device (such as a RAN node, terminal, or chip, etc.), execute software programs, and process data of software programs.
  • the processor 111 may include a program 113 (sometimes also referred to as code or instructions), which may be executed on the processor 111 to cause the communication device 110 to perform the methods described in the foregoing embodiments.
  • the communication device 110 includes circuitry (not shown in FIG11 ).
  • the communication device 110 may include one or more memories 112 on which a program 114 (sometimes also referred to as code or instructions) is stored.
  • the program 114 can be run on the processor 111, so that the communication device 110 executes the method described in the above method embodiment.
  • the processor 111 and/or the memory 112 may include AI modules 117 and 118, which are used to implement AI-related functions.
  • the AI module can be implemented through software, hardware, or a combination of software and hardware.
  • the AI module may include a radio access network (RAN) intelligent controller (RIC) module.
  • the AI module may be a near-real-time RIC or a non-real-time RIC.
  • data may be stored in the processor 111 and/or the memory 112.
  • the processor and the memory may be provided separately or integrated together.
  • the communication device 110 may further include a transceiver 115 and/or an antenna 116.
  • the processor 111 may also be referred to as a processing unit, and controls the communication device (e.g., a RAN node or terminal).
  • the transceiver 115 may also be referred to as a transceiver unit, a transceiver, a transceiver circuit, or a transceiver, and is configured to implement the transceiver functions of the communication device through the antenna 116.
  • the processing unit 701 shown in FIG7 may be the processor 111.
  • the transceiver unit 702 shown in FIG7 may be a communication interface, which may be the transceiver 115 shown in FIG11 .
  • the transceiver 115 may include an input interface and an output interface.
  • the transceiver 115 may be a transceiver circuit, which may include an input interface circuit and an output interface circuit.
  • An embodiment of the present application further provides a computer-readable storage medium, which is used to store one or more computer-executable instructions.
  • when the computer-executable instructions are executed by a processor, the processor performs the method described in the possible implementations of the first communication device or the second communication device in the aforementioned embodiment.
  • An embodiment of the present application also provides a computer program product (or computer program).
  • when the computer program product is executed by a processor, the processor performs the method that may be implemented by the above-mentioned first communication device or second communication device.
  • An embodiment of the present application also provides a chip system, which includes at least one processor for supporting a communication device to implement the functions involved in the possible implementation methods of the above-mentioned communication device.
  • the chip system also includes an interface circuit, which provides program instructions and/or data to the at least one processor.
  • the chip system may also include a memory, which is used to store the necessary program instructions and data for the communication device.
  • the chip system can be composed of chips, or it can include chips and other discrete devices, wherein the communication device can specifically be the first communication device or the second communication device in the aforementioned method embodiment.
  • An embodiment of the present application further provides a communication system, wherein the network system architecture includes the first communication device and the second communication device in any of the above embodiments.
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the device embodiments described above are merely schematic.
  • the division of the units is merely a logical function division.
  • Another point is that the mutual coupling or direct coupling or communication connection shown or discussed can be an indirect coupling or communication connection through some interfaces, devices or units, which can be electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of these units may be selected to achieve the purpose of this embodiment according to actual needs.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of a software functional unit. If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solutions of the present application in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the method described in each embodiment of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a mobile hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, etc., various media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present application relates to a communication method and a related apparatus. In the method, after acquiring one or more sample data, a first communication apparatus can process the one or more sample data and inference data based on a first neural network model so as to obtain an inference result corresponding to the inference data. In this way, when a communication apparatus in a communication system serves as a model processing node, the computing power of the communication apparatus can be applied to the processing of neural network models. In addition, the inference result obtained by the first communication apparatus based on the first neural network model has at least one data feature identical to that of the sample data; that is, based on examples or guidance from the sample data, the first communication apparatus can obtain an inference result having at least one data feature that is the same as that of the sample data, and can adjust model inference in a specific scenario based on the sample data, thereby reducing the complexity of model management.
PCT/CN2024/127224 2024-02-29 2024-10-25 Procédé de communication et appareil associé Pending WO2025179919A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202410235482.2A CN120568384A (zh) 2024-02-29 2024-02-29 一种通信方法及相关装置
CN202410235482.2 2024-02-29

Publications (1)

Publication Number Publication Date
WO2025179919A1 true WO2025179919A1 (fr) 2025-09-04

Family

ID=96831749

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/127224 Pending WO2025179919A1 (fr) 2024-02-29 2024-10-25 Procédé de communication et appareil associé

Country Status (2)

Country Link
CN (1) CN120568384A (fr)
WO (1) WO2025179919A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113642727A (zh) * 2021-08-06 2021-11-12 北京百度网讯科技有限公司 神经网络模型的训练方法和多媒体信息的处理方法、装置
WO2023115254A1 (fr) * 2021-12-20 2023-06-29 Oppo广东移动通信有限公司 Procédé et dispositif de traitement de données
CN116933847A (zh) * 2022-04-02 2023-10-24 华为技术有限公司 神经网络模型的调整方法、电子设备及可读存储介质
CN117010454A (zh) * 2022-11-02 2023-11-07 腾讯科技(深圳)有限公司 神经网络训练方法、装置、电子设备以及存储介质

Also Published As

Publication number Publication date
CN120568384A (zh) 2025-08-29

Similar Documents

Publication Publication Date Title
WO2025179919A1 (fr) Procédé de communication et appareil associé
WO2025118980A1 (fr) Procédé de communication et dispositif associé
WO2025190252A1 (fr) Procédé de communication et appareil associé
WO2025092159A1 (fr) Procédé de communication et dispositif associé
WO2025175756A1 (fr) Procédé de communication et dispositif associé
WO2025092160A1 (fr) Procédé de communication et dispositif associé
WO2025190244A1 (fr) Procédé de communication et appareil associé
WO2025019990A1 (fr) Procédé de communication et dispositif associé
WO2025179920A1 (fr) Procédé de communication et appareil associé
WO2025190246A1 (fr) Procédé de communication et appareil associé
WO2025189861A1 (fr) Procédé de communication et appareil associé
WO2025025193A1 (fr) Procédé de communication et dispositif associé
WO2025208880A1 (fr) Procédé de communication et appareil associé
WO2025190248A1 (fr) Procédé de communication et appareil associé
WO2025189860A1 (fr) Procédé de communication et appareil associé
WO2025059907A1 (fr) Procédé de communication et dispositif associé
WO2025189831A1 (fr) Procédé de communication et appareil associé
WO2025103115A1 (fr) Procédé de communication et dispositif associé
WO2025139534A1 (fr) Procédé de communication et dispositif associé
WO2025167443A1 (fr) Procédé de communication et dispositif associé
WO2025086262A1 (fr) Procédé de communication et appareil associé
WO2025059908A1 (fr) Procédé de communication et dispositif associé
WO2025118759A1 (fr) Procédé de communication et dispositif associé
WO2025107835A1 (fr) Procédé de communication et dispositif associé
WO2025140282A1 (fr) Procédé de communication et dispositif associé

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24926780

Country of ref document: EP

Kind code of ref document: A1