
WO2025179919A1 - Communication method and related apparatus - Google Patents

Communication method and related apparatus

Info

Publication number
WO2025179919A1
WO2025179919A1 (PCT/CN2024/127224; CN2024127224W)
Authority
WO
WIPO (PCT)
Prior art keywords
data
sample data
communication device
inference
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2024/127224
Other languages
French (fr)
Chinese (zh)
Inventor
张公正
徐晨
李榕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of WO2025179919A1 publication Critical patent/WO2025179919A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B17/00 Monitoring; Testing
    • H04B17/30 Monitoring; Testing of propagation channels
    • H04B17/391 Modelling the propagation channel
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/149 Network analysis or design for prediction of maintenance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 Supervisory, monitoring or testing arrangements
    • H04W24/02 Arrangements for optimising operational condition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 Supervisory, monitoring or testing arrangements
    • H04W24/06 Testing, supervising or monitoring using simulated traffic

Definitions

  • the present application relates to the field of communications, and in particular to a communication method and related devices.
  • Wireless communication refers to communication in which information is transmitted between two or more communication nodes without propagating through conductors or cables.
  • the communication nodes generally include network devices and terminal devices.
  • communication nodes generally possess both signal transceiver and computing capabilities.
  • network devices with computing capabilities primarily provide computing power to support signal transceiver capabilities (for example, calculating the time and frequency domain resources required to carry signals), enabling communication between the network device and other communication nodes.
  • the computing power of communication nodes not only supports the aforementioned communication tasks but also potentially handles the processing of neural network models.
  • reducing the complexity of model management remains a pressing technical challenge.
  • the present application provides a communication method and related devices for reducing the complexity of model management.
  • the present application provides a communication method, which is performed by a first communication device.
  • the first communication device may be a communication device (such as a terminal device or a network device), or the first communication device may be a component of the communication device (such as a processor, a chip, or a chip system, etc.), or the first communication device may also be a logic module or software that can implement all or part of the functions of the communication device.
  • The first communication device obtains one or more sample data; the first communication device processes the one or more sample data and inference data based on a first neural network model to obtain an inference result corresponding to the inference data; wherein the inference result has at least one data feature that is the same as a data feature of the sample data.
  • the first communication device can process the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data.
  • The inference result obtained by the first communication device based on the first neural network model shares at least one data feature with the sample data; that is, guided by the example of the sample data, the first communication device can obtain an inference result whose data features match at least one data feature of the sample data, and can adapt the model inference to a specific scenario based on the sample data, thereby reducing the complexity of model management.
  • A neural network model may also be referred to as an artificial intelligence (AI) model, an AI neural network model, a machine learning model, or an AI processing model.
  • sample data can be used as part of the input of a neural network model to ensure that the inference results output by the neural network model have at least one data feature in common with the sample data.
  • sample data can be replaced by other terms such as reference data, anchor data, example data, or guidance data.
  • the data feature includes at least one of the following: data dimension, parameter quantity, data content, data type, or physical quantity.
  • The first communication device processes the one or more sample data and the inference data based on the first neural network model to obtain the inference result corresponding to the inference data when at least one of the following conditions is satisfied (a check of these conditions is sketched after this list):
  • the inference performance of the first neural network model is below a threshold;
  • the difference between the data distribution of the inference data and the data distribution of the inference data input to the first neural network model in the previous k inferences is greater than a threshold, where k is a positive integer;
  • the communication state of the first communication device changes.
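  • As a hedged illustration of the trigger check above (not part of the claimed method), the following Python sketch evaluates the three conditions; the thresholds, the use of a KL divergence as the distribution-difference measure, and all function names are assumptions introduced here for illustration.

```python
# Hypothetical sketch of the trigger check described above; thresholds, metric
# names, and the KL-divergence choice are illustrative assumptions only.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two discrete distributions (histograms)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def should_use_sample_data(inference_perf, perf_threshold,
                           current_dist, previous_k_dists, dist_threshold,
                           communication_state_changed):
    """Return True if at least one trigger condition is met, i.e. the first
    communication device should feed sample data into the model input."""
    # Condition 1: inference performance of the first neural network model is below a threshold.
    if inference_perf < perf_threshold:
        return True
    # Condition 2: the distribution of the current inference data differs from the
    # distribution of the inference data of the previous k inferences by more than a threshold.
    avg_prev = np.mean(np.stack(previous_k_dists), axis=0)
    if kl_divergence(current_dist, avg_prev) > dist_threshold:
        return True
    # Condition 3: the communication state of the first communication device changes.
    return communication_state_changed
```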
  • the first communication device can add sample data to the input of the first neural network model, that is, the first communication device can process the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data.
  • the first communication device can process the inference data based on the first neural network model to obtain an inference result corresponding to the inference data, that is, the input of the first neural network model may not include sample data to reduce overhead.
  • the first communication device can locally determine whether at least one of the above items is satisfied, that is, the first communication device can trigger the sending of one or more sample data based on the result of the local determination.
  • the first communication device can determine whether at least one of the above items is satisfied based on the instructions of other communication devices, that is, the first communication device can determine whether the triggering condition for triggering the sending of one or more sample data is satisfied based on the instructions of other communication devices.
  • the first communication device can trigger the sending of one or more sample data based on the instructions of other communication devices.
  • the other communication device may include the management module and/or data storage module described later.
  • the first neural network model is updated to obtain a second neural network model.
  • the first neural network model can be updated to obtain a second neural network model. In other words, through neural network model training, a second neural network model with better performance is obtained.
  • When the update frequency of the one or more sample data is less than or equal to a threshold, or when the performance corresponding to the inference result of the inference data is greater than or equal to a threshold, it can be determined that the performance of the current first neural network model is good; accordingly, there is no need to update the first neural network model, which avoids unnecessary overhead.
  • the communication device that updates the first neural network model to obtain the second neural network model can be the first communication device, or other communication devices (for example, the other communication device can include the model training module described later).
  • the first communication device obtains one or more sample data, including: the first communication device receives the one or more sample data.
  • the first communication device may acquire the one or more sample data by receiving the one or more sample data.
  • the first communication device may include a module for storing/caching sample data. Accordingly, the first communication device may locally obtain the one or more sample data based on the module.
  • the method further includes: the first communication device sending request information for requesting the one or more sample data.
  • the first communication device may also send request information for requesting the one or more sample data, so that the recipient of the request information provides the one or more sample data to the first communication device based on the request information.
  • the second aspect of the present application provides a communication method, which is performed by a second communication device, which can be a communication device (such as a terminal device or a network device), or the second communication device can be a component of the communication device (such as a processor, a chip or a chip system, etc.), or the second communication device can also be a logic module or software that can implement all or part of the functions of the communication device.
  • The second communication device obtains one or more sample data; wherein the one or more sample data satisfy: the inference result corresponding to the inference data, obtained by processing the one or more sample data and the inference data through the first neural network model, has at least one data feature that is the same as a data feature of the sample data; and the second communication device sends the one or more sample data.
  • the first communication device can process the one or more sample data and inference data based on the first neural network model to obtain an inference result corresponding to the inference data.
  • The inference result obtained by the first communication device based on the first neural network model shares at least one data feature with the sample data; that is, guided by the example of the sample data, the first communication device can obtain an inference result whose data features match at least one data feature of the sample data, and can adapt the model inference to a specific scenario based on the sample data, thereby reducing the complexity of model management.
  • the data feature includes at least one of the following: data dimension, parameter quantity, data content, data type, or physical quantity.
  • the method is applied to a communication device that caches data (that is, the second communication device can be used to cache sample data); the second communication device sends the one or more sample data, including: the second communication device sends the one or more sample data to a communication device that deploys the first neural network model or a communication device for storing data.
  • the second communication device can send the one or more sample data to the communication device that deploys the first neural network model, so that the recipient of the sample data can implement inference of the neural network model based on the sample data.
  • the second communication device can also be used to send the one or more sample data to a communication device that stores data, so that the recipient of the sample data can implement storage of the sample data.
  • the second communication device obtains one or more sample data, including: the second communication device receives the one or more sample data, wherein the one or more sample data come from a communication device for collecting data, or the one or more sample data come from a communication device for storing data.
  • the second communication device can obtain one or more sample data by receiving one or more sample data.
  • Before the second communication device receives the one or more sample data, the method further includes: the second communication device sending request information for requesting the one or more sample data.
  • the second communication device may further send request information for requesting the one or more sample data, so that the recipient of the request information can provide the one or more sample data to the second communication device based on the request information.
  • the method is applied to a communication device for storing data or a communication device for collecting data (that is, the second communication device can be used to store sample data or collect sample data); the second communication device sends the one or more sample data, including: the second communication device sends the one or more sample data to a communication device for caching data.
  • The second communication device can send the one or more sample data to the communication device used to cache data, so that the recipient can cache the sample data.
  • The recipient of the sample data can then send the one or more sample data to the communication device that deploys the first neural network model to implement inference of the neural network model.
  • Before the second communication device sends the one or more sample data, the method further includes: the second communication device receiving request information for requesting the one or more sample data.
  • the second communication device may also receive request information for requesting the one or more sample data, so that the second communication device can provide the one or more sample data based on the request information.
  • When at least one of the following items is met, the second communication device sends the one or more sample data:
  • the inference performance of the first neural network model is below a threshold;
  • the difference between the data distribution of the inference data and the data distribution of the inference data input to the first neural network model in the previous k inferences is greater than a threshold, where k is a positive integer;
  • the communication state of the communication device deploying the first neural network model changes.
  • the second communication device can send the one or more sample data so that the recipient of the one or more sample data can add the sample data to the input of the first neural network model.
  • the first communication device can process the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data.
  • the second communication device may not send the one or more sample data to the first communication device, that is, the input of the first neural network model may not include the sample data, to reduce overhead.
  • the second communication device can locally determine whether at least one of the above items is satisfied, that is, the second communication device can trigger the transmission of one or more sample data based on the result of the local determination.
  • the second communication device can determine whether at least one of the above items is satisfied based on the instructions of other communication devices, that is, the second communication device can determine whether the triggering condition for sending one or more sample data is satisfied based on the instructions of other communication devices.
  • the second communication device can trigger the transmission of one or more sample data based on the instructions of other communication devices.
  • the other communication device may include the management module and/or data storage module described later.
  • the method further includes: the second communication device updating the one or more sample data.
  • the second communication device can also update the one or more sample data in order to improve the processing performance of the neural network model through the updated sample data.
  • The second communication device updates the one or more sample data when at least one of the following is satisfied:
  • the inference performance corresponding to the inference result of the inference data satisfies a first condition;
  • the data distribution of the inference data satisfies a second condition;
  • the communication state of the communication device deploying the first neural network model changes.
  • the second communication device can update the one or more sample data in order to improve the processing performance of the neural network model through the updated sample data.
  • updating one or more sample data may include adding sample data, reducing sample data, or replacing sample data.
  • The third aspect of the present application provides a communication device, which is a first communication device and includes a processing unit; the processing unit is used to obtain one or more sample data; the processing unit is also used to process the one or more sample data and inference data based on a first neural network model to obtain an inference result corresponding to the inference data; wherein the inference result has at least one data feature that is the same as a data feature of the sample data.
  • the constituent modules of the communication device can also be used to execute the steps performed in each possible implementation method of the first aspect and achieve corresponding technical effects.
  • The fourth aspect of the present application provides a communication device, which is a second communication device, and includes a transceiver unit and a processing unit, the processing unit being used to obtain one or more sample data; wherein the one or more sample data and inference data are used to be processed by a first neural network model to obtain an inference result corresponding to the inference data, and the inference result has at least one data feature that is the same as a data feature of the sample data; the transceiver unit is used to send the one or more sample data.
  • the constituent modules of the communication device can also be used to execute the steps performed in each possible implementation method of the second aspect and achieve corresponding technical effects.
  • the present application provides a communication device, comprising at least one processor, wherein the at least one processor is coupled to a memory; the memory is used to store programs or instructions; the at least one processor is used to execute the program or instructions so that the device implements the method described in any possible implementation method of any one of the first to second aspects.
  • the present application provides a communication device comprising at least one logic circuit and an input/output interface; the logic circuit is used to execute the method described in any possible implementation of any one of the first to second aspects.
  • the present application provides a communication system, which includes the above-mentioned first communication device and second communication device.
  • The present application provides a computer-readable storage medium for storing one or more computer-executable instructions; when the computer-executable instructions are executed by a processor, the processor performs the method described in any possible implementation of any one of the first to second aspects above.
  • The present application provides a computer program product (or computer program); when the computer program product is executed by a processor, the processor performs the method described in any possible implementation of any one of the first to second aspects above.
  • the present application provides a chip system comprising at least one processor for supporting a communication device to implement the method described in any possible implementation of any one of the first to second aspects.
  • the chip system may further include a memory for storing program instructions and data necessary for the communication device.
  • the chip system may be composed of a chip or may include a chip and other discrete components.
  • the chip system may further include an interface circuit for providing program instructions and/or data to the at least one processor.
  • the technical effects brought about by any design method in the third to tenth aspects can refer to the technical effects brought about by the different design methods in the above-mentioned first to second aspects, and will not be repeated here.
  • FIGS. 1a to 1c are schematic diagrams of a communication system provided by this application.
  • FIGS 1d, 1e, and 2a to 2e are schematic diagrams of the AI processing process involved in this application;
  • FIG3 is an interactive schematic diagram of the communication method provided by this application.
  • FIGS. 4a to 4d are schematic diagrams of the processing process of the neural network model provided by this application.
  • FIG5 is a schematic diagram of an application scenario of the communication method provided in this application.
  • FIGS. 6a to 6d are schematic diagrams of application scenarios of the communication method provided by this application.
  • Terminal device: it can be a wireless terminal device that can receive scheduling and instruction information from a network device.
  • the wireless terminal device can be a device that provides voice and/or data connectivity to the user, or a handheld device with wireless connection function, or other processing device connected to a wireless modem.
  • Terminal devices can communicate with one or more core networks or the Internet via a radio access network (RAN).
  • Terminal devices can be mobile terminal devices, such as mobile phones (also known as "cellular" phones, mobile phones), computers, and data cards.
  • For example, they can be portable, pocket-sized, handheld, computer-built-in, or vehicle-mounted mobile devices that exchange voice and/or data with the radio access network, such as personal communication service (PCS) phones, Session Initiation Protocol (SIP) phones, wireless local loop (WLL) stations, personal digital assistants (PDAs), tablet computers, computers with wireless transceiver capabilities, and other devices.
  • Wireless terminal equipment can also be called system, subscriber unit, subscriber station, mobile station, mobile station (MS), remote station, access point (AP), remote terminal equipment (remote terminal), access terminal equipment (access terminal), user terminal equipment (user terminal), user agent (user agent), subscriber station (SS), customer premises equipment (CPE), terminal, user equipment (UE), mobile terminal (MT), etc.
  • the terminal device may also be a wearable device.
  • Wearable devices may also be referred to as wearable smart devices or smart wearable devices, etc., which are a general term for wearable devices that are intelligently designed and developed using wearable technology for daily wear, such as glasses, gloves, watches, clothing, and shoes.
  • a wearable device is a portable device that is worn directly on the body or integrated into the user's clothes or accessories. Wearable devices are not only hardware devices, but also achieve powerful functions through software support, data interaction, and cloud interaction.
  • wearable smart devices include those that are fully functional, large in size, and can achieve complete or partial functions without relying on smartphones, such as smart watches or smart glasses, etc., as well as those that only focus on a certain type of application function and need to be used in conjunction with other devices such as smartphones, such as various smart bracelets, smart helmets, and smart jewelry for vital sign monitoring.
  • the terminal can also be a drone, a robot, a terminal in device-to-device (D2D) communication, a terminal in vehicle to everything (V2X), a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical, a wireless terminal in smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, etc.
  • the terminal device may also be a terminal device in a communication system that evolves beyond the fifth-generation (5G) communication system (e.g., a sixth-generation (6G) communication system) or a terminal device in a future-evolved public land mobile network (PLMN).
  • a 6G network may further extend the form and functionality of 5G communication terminals.
  • 6G terminals include, but are not limited to, vehicles, cellular network terminals (with integrated satellite terminal functionality), drones, and Internet of Things (IoT) devices.
  • the terminal device may also obtain AI services provided by the network device.
  • the terminal device may also have AI processing capabilities.
  • a network device can be a RAN node (or device) that connects a terminal device to a wireless network, which can also be called a base station.
  • Examples of RAN equipment include: a base station, an evolved NodeB (eNodeB), a gNodeB (gNB) in a 5G communication system, a transmission reception point (TRP), an evolved NodeB (eNB), a radio network controller (RNC), a NodeB (NB), a home base station (e.g., a home evolved NodeB or home NodeB, HNB), a baseband unit (BBU), or a wireless fidelity (Wi-Fi) access point (AP), etc.
  • the network equipment can include a centralized unit (CU) node, a distributed unit (DU) node, or a RAN device including a CU node and a DU node.
  • a RAN node can be a macro base station, micro base station, indoor base station, relay node, donor node, or wireless controller in a cloud radio access network (CRAN) scenario.
  • a RAN node can also be a server, wearable device, vehicle, or onboard device.
  • the access network device in vehicle-to-everything (V2X) technology can be a roadside unit (RSU).
  • the RAN node can be a centralized unit (CU), a distributed unit (DU), a CU-control plane (CP), a CU-user plane (UP), or a radio unit (RU).
  • the CU and DU can be set up separately, or they can be included in the same network element, such as the baseband unit.
  • the RU may be included in a radio frequency device or a radio frequency unit, for example, a remote radio unit (RRU), an active antenna unit (AAU) or a remote radio head (RRH).
  • The CU (or CU-CP and CU-UP), DU, or RU may have different names, but those skilled in the art can understand their meanings.
  • For example, the CU may also be called an open CU (O-CU), the DU may also be called an O-DU, the CU-CP may also be called an O-CU-CP, the CU-UP may also be called an O-CU-UP, and the RU may also be called an O-RU.
  • this application uses CU, CU-CP, CU-UP, DU and RU as examples for description.
  • Any unit among the CU (or CU-CP, CU-UP), DU and RU in this application can be implemented by a software module, a hardware module, or a combination of a software module and a hardware module.
  • The protocol layers may include a control plane protocol layer and a user plane protocol layer.
  • the control plane protocol layer may include at least one of the following: radio resource control (RRC) layer, packet data convergence protocol (PDCP) layer, radio link control (RLC) layer, media access control (MAC) layer, or physical (PHY) layer.
  • the user plane protocol layer may include at least one of the following: service data adaptation protocol (SDAP) layer, PDCP layer, RLC layer, MAC layer, or physical layer.
  • the network device may be any other device that provides wireless communication functionality to the terminal device.
  • The embodiments of this application do not limit the specific technology or device form used by the network device.
  • the network equipment may also include core network equipment, such as the mobility management entity (MME), home subscriber server (HSS), serving gateway (S-GW), policy and charging rules function (PCRF), and public data network gateway (PDN gateway, P-GW) in the fourth generation (4G) network; and the access and mobility management function (AMF), user plane function (UPF), or session management function (SMF) in the 5G network.
  • the core network equipment may also include other core network equipment in the 5G network and the next generation network of the 5G network.
  • the above-mentioned network device may also have a network node with AI capabilities, which can provide AI services for terminals or other network devices.
  • a network node with AI capabilities can be an AI node on the network side (access network or core network), a computing power node, a RAN node with AI capabilities, a core network element with AI capabilities, etc.
  • the apparatus for implementing the function of the network device may be the network device, or may be a device capable of supporting the network device in implementing the function, such as a chip system, which may be installed in the network device.
  • the technical solutions provided in the embodiments of the present application are described by taking the network device as an example.
  • In the embodiments of this application, both configuration and pre-configuration may be used.
  • Configuration refers to the network device/server sending some parameter configuration information or parameter values to the terminal through messages or signaling, so that the terminal can determine the communication parameters or resources during transmission based on these values or information.
  • Pre-configuration is similar to configuration, and can be parameter information or parameter values pre-negotiated between the network device/server and the terminal device, or parameter information or parameter values used by the base station/network device or terminal device as specified in the standard protocol, or parameter information or parameter values pre-stored in the base station/server or terminal device. This application does not limit this.
  • “System” and “network” in the embodiments of the present application can be used interchangeably.
  • “Multiple” refers to two or more.
  • “And/or” describes the association relationship of associated objects, indicating that three relationships may exist.
  • For example, A and/or B can represent: the existence of A alone, the existence of both A and B at the same time, and the existence of B alone, where A and B can be singular or plural.
  • the character “/” generally indicates that the previous and next associated objects are in an “or” relationship.
  • “At least one of the following” or similar expressions refers to any combination of these items, including a single item (a) or any combination of multiple items (a).
  • At least one of A, B, and C includes A, B, C, AB, AC, BC, or ABC.
  • ordinal numbers such as “first” and “second” mentioned in the embodiments of the present application are used to distinguish multiple objects and are not used to limit the order, timing, priority, or importance of multiple objects.
  • “Sending” and “receiving” in the embodiments of the present application indicate the direction of signal transmission.
  • sending information to XX can be understood as the destination of the information being XX, which can include direct sending through the air interface, as well as indirect sending through the air interface by other units or modules.
  • Receiving information from YY can be understood as the source of the information being YY, which can include direct receiving from YY through the air interface, as well as indirect receiving from YY through the air interface via other units or modules.
  • “Sending” can also be understood as the “output” of the chip interface, and “receiving” can also be understood as the “input” of the chip interface.
  • sending and receiving can be performed between devices, for example, between a network device and a terminal device, or can be performed within a device, for example, sending or receiving between components, modules, chips, software modules or hardware modules within the device through a bus, wiring or interface.
  • information may be processed between the source and destination of information transmission, such as coding, modulation, etc., but the destination can understand the valid information from the source. Similar expressions in this application can be understood similarly and will not be repeated.
  • indication may include direct indication and indirect indication, and may also include explicit indication and implicit indication.
  • the information indicated by a certain information is called information to be indicated.
  • In a specific implementation process, there are many ways to indicate the information to be indicated, such as, but not limited to, directly indicating the information to be indicated, for example, the information to be indicated itself or an index of the information to be indicated.
  • the information to be indicated may also be indirectly indicated by indicating other information, wherein the other information is associated with the information to be indicated; or only a part of the information to be indicated may be indicated, while the other part of the information to be indicated is known or agreed in advance.
  • the indication of specific information may be achieved by means of the arrangement order of each information agreed in advance (such as predefined by the protocol), thereby reducing the indication overhead to a certain extent.
  • the present application does not limit the specific method of indication. It is understandable that for the sender of the indication information, the indication information can be used to indicate the information to be indicated, and for the receiver of the indication information, the indication information can be used to determine the information to be indicated.
  • the communication system includes at least one network device and/or at least one terminal device.
  • Figure 1a is a schematic diagram of a communication system in this application.
  • Figure 1a exemplarily illustrates a network device and six terminal devices, namely terminal device 1, terminal device 2, terminal device 3, terminal device 4, terminal device 5, and terminal device 6.
  • terminal device 1 is a smart teacup
  • terminal device 2 is a smart air conditioner
  • terminal device 3 is a smart gas pump
  • terminal device 4 is a vehicle
  • terminal device 5 is a mobile phone
  • terminal device 6 is a printer.
  • the AI configuration information sending entity can be a network device.
  • the AI configuration information receiving entity can be terminal devices 1-6.
  • the network device and terminal devices 1-6 form a communication system.
  • terminal devices 1-6 can send data to the network device, and the network device needs to receive data sent by terminal devices 1-6.
  • the network device can send configuration information to terminal devices 1-6.
  • terminal devices 4 and 6 can also form a communication system.
  • Terminal device 5 serves as a network device, i.e., the AI configuration information sending entity;
  • terminal devices 4 and 6 serve as terminal devices, i.e., the AI configuration information receiving entities.
  • terminal device 5 sends AI configuration information to terminal devices 4 and 6, respectively, and receives data from them.
  • terminal devices 4 and 6 receive AI configuration information from terminal device 5 and send data to terminal device 5.
  • different devices may also execute AI-related services.
  • the base station can perform communication-related services and AI-related services with one or more terminal devices, and different terminal devices can also perform communication-related services and AI-related services.
  • an AI network element can be introduced into the communication system provided in this application to implement some or all AI-related operations.
  • the AI network element can also be called an AI node, AI device, AI entity, AI module, AI model, or AI unit, etc.
  • the AI network element can be a network element built into the communication system.
  • the AI network element can be an AI module built into: an access network device, a core network device, a cloud server, or a network management (OAM) to implement AI-related functions.
  • the OAM can be a network management for a core network device and/or a network management for an access network device.
  • the AI network element can also be an independently set network element in the communication system.
  • the terminal or the chip built into the terminal can also include an AI entity to implement AI-related functions.
  • Machine learning methods can be used to implement AI.
  • a machine uses training data to learn (or train) a model. This model represents the mapping from input to output.
  • the learned model can be used for inference (or prediction), meaning that the model can be used to predict the output corresponding to a given input. This output can also be called an inference result (or prediction result).
  • Machine learning can include supervised learning, unsupervised learning, and reinforcement learning.
  • Supervised learning uses machine learning algorithms to learn the mapping relationship between sample values and sample labels based on collected sample values and sample labels, and then expresses this learned mapping relationship using an AI model.
  • the process of training a machine learning model is the process of learning this mapping relationship.
  • sample values are input into the model to obtain the model's predicted values.
  • the model parameters are optimized by calculating the error between the model's predicted values and the sample labels (ideal values).
  • the learned mapping can be used to predict new sample labels.
  • the mapping relationship learned by supervised learning can include linear mappings or nonlinear mappings. Based on the type of label, the learning task can be divided into classification tasks and regression tasks.
  • Unsupervised learning uses algorithms to discover inherent patterns in collected sample values.
  • One type of unsupervised learning algorithm uses the samples themselves as supervisory signals, meaning the model learns the mapping from one sample to another. This is called self-supervised learning.
  • the model parameters are optimized by calculating the error between the model's predictions and the samples themselves.
  • Self-supervised learning can be used in signal compression and decompression recovery applications. Common algorithms include autoencoders and generative adversarial networks.
  • Reinforcement learning, unlike supervised learning, is a type of algorithm that learns problem-solving strategies through interaction with the environment. Unlike supervised and unsupervised learning, reinforcement learning problems lack explicit label data for "correct" actions. Instead, the algorithm must interact with the environment to obtain reward signals from the environment, and then adjust its decision-making actions to maximize the reward signal value. For example, in downlink power control, the reinforcement learning model adjusts the downlink transmit power of each user based on the overall system throughput fed back by the wireless network, hoping to achieve higher system throughput. The goal of reinforcement learning is also to learn the mapping between environmental states and better (e.g., optimal) decision-making actions. However, because the labels for "correct actions" cannot be obtained in advance, network optimization cannot be achieved by calculating the error between actions and "correct actions". Reinforcement learning training is achieved through iterative interaction with the environment.
  • Neural network (NN)
  • Traditional communication systems require extensive expert knowledge to design communication modules.
  • deep learning communication systems based on neural networks can automatically discover implicit patterns in massive data sets and establish mapping relationships between data, achieving performance superior to traditional modeling methods.
  • each neuron performs a weighted sum operation on its input values and outputs the result through an activation function.
  • FIG. 1d is a schematic diagram of the neuron structure.
  • $w_i$ is used as the weight of $x_i$ to weight $x_i$, and the bias for the weighted summation of the input values according to the weights is, for example, $b$.
  • The activation function can take many forms; denoting the activation function by $f(\cdot)$, the output of the neuron is: $y = f\left(\sum_i w_i x_i + b\right)$.
  • $b$ can be a decimal, an integer (e.g., 0, a positive integer, or a negative integer), or a complex number, etc.
  • the activation functions of different neurons in a neural network can be the same or different.
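  • As a minimal numerical illustration of the neuron described above (not taken from the patent text), the following Python sketch computes the weighted sum of the inputs plus a bias and passes it through an activation function; the sigmoid activation and the concrete numbers are assumptions chosen for the example.

```python
# Minimal numerical sketch of a neuron: weighted sum of inputs plus bias,
# passed through an activation function. Sigmoid and values are assumptions.
import numpy as np

def neuron_output(x, w, b, activation=lambda z: 1.0 / (1.0 + np.exp(-z))):
    """y = f(sum_i w_i * x_i + b)."""
    z = np.dot(w, x) + b          # weighted sum of the inputs plus bias
    return activation(z)          # output through the activation function

x = np.array([0.5, -1.2, 3.0])    # inputs x_i
w = np.array([0.8, 0.1, -0.4])    # weights w_i
b = 0.2                           # bias
print(neuron_output(x, w, b))
```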
  • neural networks generally include multiple layers, each of which may include one or more neurons. Increasing the depth and/or width of a neural network can improve its expressive power, providing more powerful information extraction and abstract modeling capabilities for complex systems.
  • the depth of a neural network can refer to the number of layers it comprises, and the number of neurons in each layer can be referred to as the width of that layer.
  • a neural network includes an input layer and an output layer. The input layer processes the input information received by the neural network through neurons, passing the processing results to the output layer, which then obtains the output of the neural network.
  • a neural network includes an input layer, a hidden layer, and an output layer. The input layer processes the input information received by the neural network through neurons, passing the processing results to an intermediate hidden layer. The hidden layer performs calculations on the received processing results to obtain a calculation result, which is then passed to the output layer or the next adjacent hidden layer, which ultimately obtains the output of the neural network.
  • a neural network can include one hidden layer or multiple hidden layers connected in sequence, without limitation.
  • Deep neural networks (DNNs) include, for example, feedforward neural networks (FNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs).
  • Figure 1e is a schematic diagram of a FNN network.
  • a characteristic of FNN networks is that neurons in adjacent layers are fully connected. This characteristic typically requires a large amount of storage space and results in high computational complexity.
  • CNN is a neural network specifically designed to process data with a grid-like structure. For example, time series data (discrete sampling along the time axis) and image data (discrete sampling along two dimensions) can both be considered grid-like data.
  • CNNs do not utilize all input information at once for computation. Instead, they use a fixed-size window to intercept a portion of the information for convolution operations, significantly reducing the computational complexity of model parameters.
  • each window can use a different convolution kernel, enabling CNNs to better extract features from the input data.
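  • The fixed-size-window operation described above can be illustrated with a simple one-dimensional convolution; the following Python sketch is only an illustration, and the kernel values and the input time series are assumptions.

```python
# Illustrative sketch of the fixed-size-window (convolution) operation for
# grid-like data such as a time series; kernel values are assumptions.
import numpy as np

def conv1d_valid(signal, kernel):
    """Slide a fixed-size window over the input and take a weighted sum
    (convolution without padding), instead of using all inputs at once."""
    k = len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel)
                     for i in range(len(signal) - k + 1)])

time_series = np.array([1.0, 2.0, 0.5, -1.0, 3.0, 2.5])
kernel = np.array([0.25, 0.5, 0.25])   # one fixed-size convolution kernel
print(conv1d_valid(time_series, kernel))
```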
  • RNNs are a type of DNN that utilizes feedback time series information. Their input consists of a new input value at the current moment and their own output value at the previous moment. RNNs are suitable for capturing temporally correlated sequence features and are particularly well-suited for applications such as speech recognition and channel coding.
  • a loss function can be defined. This function describes the gap or discrepancy between the model's output and the ideal target value. Loss functions can be expressed in various forms, and there are no restrictions on their specific form. The model training process can be viewed as adjusting some or all of the model's parameters to keep the loss function below a threshold or meet the target.
  • a model may also be referred to as an AI model, rule, or other name.
  • An AI model can be considered a specific method for implementing an AI function.
  • An AI model represents a mapping relationship or function between the input and output of a model.
  • AI functions may include one or more of the following: data collection, model training (or model learning), model information release, model inference (or model reasoning, inference, or prediction, etc.), model monitoring or model verification, or inference result release, etc.
  • AI functions may also be referred to as AI (related) operations, or AI-related functions.
  • A fully connected neural network is also called a multilayer perceptron (MLP).
  • an MLP consists of an input layer (left), an output layer (right), and multiple hidden layers (center).
  • Each layer of the MLP contains several nodes, called neurons. Neurons in adjacent layers are connected to each other.
  • The output of layer $n$ can be expressed as $\mathbf{z}^{(n)} = f\left(\mathbf{W}^{(n)} \mathbf{z}^{(n-1)} + \mathbf{b}^{(n)}\right)$, where $\mathbf{W}$ is the weight matrix, $\mathbf{b}$ is the bias vector, $f$ is the activation function, and $n$ is the index of the neural network layer.
  • a neural network can be understood as a mapping from an input data set to an output data set.
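  • To make the layer-by-layer mapping concrete, the following Python sketch implements a small fully connected forward pass of the form $\mathbf{z}^{(n)} = f(\mathbf{W}^{(n)} \mathbf{z}^{(n-1)} + \mathbf{b}^{(n)})$; the layer sizes, the ReLU activation, and the random initialization are assumptions for illustration.

```python
# Sketch of an MLP forward pass z^(n) = f(W^(n) z^(n-1) + b^(n)); layer sizes
# and random initialization are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 2]            # input layer, two hidden layers, output layer
weights = [rng.standard_normal((m, n)) * 0.1
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(m) for m in layer_sizes[1:]]

def relu(z):
    return np.maximum(z, 0.0)

def mlp_forward(x):
    """Map an input vector to an output vector through all layers."""
    z = x
    for n, (W, b) in enumerate(zip(weights, biases)):
        z = W @ z + b
        if n < len(weights) - 1:      # activation on hidden layers
            z = relu(z)
    return z

print(mlp_forward(np.array([0.1, -0.2, 0.3, 0.4])))
```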
  • Neural networks are typically initialized randomly, and the process of obtaining this mapping from random w and b using existing data is called neural network training.
  • the specific training method is to use a loss function to evaluate the output results of the neural network.
  • the error can be backpropagated, and the neural network parameters (including w and b) can be iteratively optimized using gradient descent until the loss function reaches a minimum, which is the "better point (e.g., optimal point)" in Figure 2b. It is understood that the neural network parameters corresponding to the "better point (e.g., optimal point)" in Figure 2b can be used as the neural network parameters in the trained AI model information.
  • The gradient descent process can be expressed as: $\theta \leftarrow \theta - \eta \, \frac{\partial L}{\partial \theta}$, where $\theta$ is the parameter to be optimized (including $w$ and $b$), $L$ is the loss function, and $\eta$ is the learning rate, which controls the step size of gradient descent.
  • The backpropagation process utilizes the chain rule for partial derivatives: the gradient of the parameters of the previous layer can be recursively calculated from the gradient of the parameters of the next layer, which can be expressed as $\frac{\partial L}{\partial w_{ij}} = \frac{\partial L}{\partial s_i} \cdot \frac{\partial s_i}{\partial w_{ij}}$, where $w_{ij}$ is the weight with which node $j$ is connected to node $i$, and $s_i$ is the weighted sum of the inputs at node $i$.
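  • As a hedged illustration of the training procedure above, the following Python sketch performs one gradient-descent update $\theta \leftarrow \theta - \eta \, \partial L / \partial \theta$ for a tiny one-hidden-layer network, with the gradients obtained by backpropagation (the chain rule); the network sizes, squared-error loss, data, and learning rate are all assumptions.

```python
# One gradient-descent update via backpropagation for a tiny one-hidden-layer
# network with squared-error loss; all sizes and values are assumptions.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((5, 3)) * 0.1, np.zeros(5)
W2, b2 = rng.standard_normal((2, 5)) * 0.1, np.zeros(2)
eta = 0.05                                  # learning rate (step size)

x = np.array([0.2, -0.1, 0.4])              # sample value (input)
t = np.array([1.0, 0.0])                    # sample label (ideal value)

# Forward pass.
z1 = W1 @ x + b1
h = np.maximum(z1, 0.0)                     # ReLU hidden layer
y = W2 @ h + b2                             # model output (prediction)
loss = 0.5 * np.sum((y - t) ** 2)           # gap between output and target

# Backward pass (chain rule): propagate the gradient from the output layer back.
dy = y - t                                  # dL/dy
dW2 = np.outer(dy, h)
db2 = dy
dh = W2.T @ dy
dz1 = dh * (z1 > 0)                         # ReLU derivative
dW1 = np.outer(dz1, x)
db1 = dz1

# Gradient-descent update of all parameters.
W1 -= eta * dW1; b1 -= eta * db1
W2 -= eta * dW2; b2 -= eta * db2
print(f"loss before update: {loss:.4f}")
```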
  • The federated learning (FL) architecture is the most widely used training architecture in the current FL field.
  • The FedAvg algorithm is the basic algorithm of FL. Its flow is roughly as follows: the center initializes the model to be trained, $w^{0}$, and broadcasts it to all client devices; the clients then train the model on their local data and report the local training results.
  • In round $t$, the central node aggregates and collects the local training results from all (or some) clients. Assume that the set of clients that upload local models in round $t$ is $S_t$; the center uses the number of samples $n_k$ of each corresponding client $k$ as the weight to perform a weighted average and obtain a new global model. The specific update rule is: $w^{t+1} = \sum_{k \in S_t} \frac{n_k}{\sum_{j \in S_t} n_j} \, w_k^{t}$. The center then broadcasts the latest version of the global model $w^{t+1}$ to all client devices for a new round of training.
  • In addition to reporting their local models $w_k^{t}$, the clients can also report the local gradients obtained from training; the central node then averages the local gradients and updates the global model according to the direction of the average gradient.
  • Distributed nodes collect local datasets, perform local training, and report the local training results (models or gradients) to the central node.
  • the central node itself does not have a dataset; it is only responsible for fusing the training results of distributed nodes to obtain a global model and send it to the distributed nodes.
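  • The FedAvg aggregation step described above can be sketched as a sample-count-weighted average of the reported client models; in the following Python illustration, each client model is represented simply as a parameter vector, and the models and sample counts are assumed values.

```python
# Sketch of the FedAvg aggregation: weighted average of client models using
# each client's number of samples as the weight. Values are assumptions.
import numpy as np

def fedavg_aggregate(client_models, client_sample_counts):
    """w^{t+1} = sum_k (n_k / sum_j n_j) * w_k^t over the reporting clients."""
    counts = np.asarray(client_sample_counts, dtype=float)
    weights = counts / counts.sum()
    stacked = np.stack(client_models)            # shape: (num_clients, num_params)
    return np.average(stacked, axis=0, weights=weights)

local_models = [np.array([0.9, 1.1]), np.array([1.2, 0.8]), np.array([1.0, 1.0])]
sample_counts = [100, 300, 600]                  # n_k for each reporting client
global_model = fedavg_aggregate(local_models, sample_counts)
print(global_model)                              # new global model broadcast to clients
```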
  • Decentralized learning: different from federated learning, decentralized learning is another distributed learning architecture.
  • The design goal $f(x)$ of a decentralized learning system is generally the mean of the goals $f_i(x)$ of the individual nodes, that is, $f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x)$, where $n$ is the number of distributed nodes and $x$ is the parameter to be optimized. In machine learning, $x$ is the parameter of the machine learning (such as neural network) model.
  • Each node uses its local data and local target $f_i(x)$ to calculate the local gradient $\nabla f_i(x)$, which is then sent to the neighboring nodes it can communicate with. After any node receives the gradient information sent by its neighbors, it can update the parameter $x$ of its local model based on the received gradients.
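  • As a hedged sketch of a decentralized update (the patent text does not fix the exact formula), the following Python illustration lets a node combine its own gradient with the gradients received from neighbors by simple averaging and then take a gradient-descent step; the averaging rule, step size, and values are assumptions.

```python
# Hedged sketch of a decentralized update: average own and received gradients,
# then take a gradient step. Combination rule and values are assumptions.
import numpy as np

def decentralized_step(x, local_grad, neighbour_grads, eta=0.1):
    """Update the local model parameter x from the local and received gradients."""
    all_grads = [local_grad] + list(neighbour_grads)
    avg_grad = np.mean(np.stack(all_grads), axis=0)   # combine gradient information
    return x - eta * avg_grad                         # gradient-descent step

x = np.array([0.5, -0.3])
local_grad = np.array([0.2, 0.1])
neighbour_grads = [np.array([0.1, 0.0]), np.array([0.3, 0.2])]
print(decentralized_step(x, local_grad, neighbour_grads))
```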
  • In wireless communication systems (e.g., the systems shown in Figures 1a and 1b), communication nodes generally have both signal transceiver capabilities and computing capabilities.
  • network devices with computing capabilities primarily provide computing power to support signal transceiver capabilities (e.g., performing signal transmission and reception processing) to enable communication between the network device and other communication nodes.
  • communication nodes may have excess computing power beyond supporting the aforementioned communication tasks. Therefore, how to utilize this computing power is a pressing technical issue.
  • a communication node can serve as a participating node in an AI learning system, applying its computing power to a specific part of the AI learning system (e.g., the AI learning system described in FIG2d or FIG2e ).
  • For example, for large models such as bidirectional encoder representations from transformers (BERT) and generative pre-trained transformers (GPT), during model inference it is necessary to switch between multiple models based on conditions; and during the training of a new model, it is necessary to retrain the model (or fine-tune the model).
  • These processes require operations such as model registration/identification and retraining, which increases the complexity of model management.
  • FIG3 is a schematic diagram of an implementation of the communication method provided in this application.
  • the method includes the following steps.
  • the method is illustrated by taking the first communication device and the second communication device as the execution subjects of the interaction diagram as an example, but the present application does not limit the execution subjects of the interaction diagram.
  • the execution subject of the method can be replaced by a chip, a chip system, a processor, a logic module or software in the communication device.
  • the first communication device can be a terminal device and the second communication device can be a network device, or the first communication device and the second communication device are both network devices, or the first communication device and the second communication device are both terminal devices.
  • the second communication device sends one or more sample data, and correspondingly, the first communication device receives the one or more sample data.
  • the first communication device processes the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data, wherein the inference result has at least one data feature identical to that of the sample data.
  • A neural network model may also be referred to as an artificial intelligence (AI) model, an AI neural network model, a machine learning model, or an AI processing model.
  • sample data can be used as part of the input of a neural network model to ensure that the inference results output by the neural network model have at least one data feature in common with the sample data.
  • sample data can be replaced by other terms such as reference data, anchor data, example data, or guidance data.
  • the data feature includes at least one of the following: data dimension, parameter quantity, data content, data type, or physical quantity.
  • FIG4a shows an example of the implementation process of the above-mentioned step S302.
  • the first neural network model can be deployed in the first communication device, and the input of the first neural network model can include the one or more sample data and the inference data, and the output of the first neural network model can include the inference result corresponding to the inference data.
  • the first neural network model may be a model for time domain channel prediction, that is, the first neural network model may predict the channel information of the next time unit based on the channel information of the past k (k is a positive integer) time units (such as frames, subframes, time slots, symbols, etc.).
  • each sample data in the one or more sample data includes the channel information of the first p (p is a positive integer) time units and the channel information of the p+1th time unit
  • the inference data may include the channel information of the past k time units
  • the inference result corresponding to the inference data may include the channel information of the k+1th time unit after the k time units.
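  • As an illustration of how the inputs in the time domain channel prediction example above could be organized (the concrete layout is an assumption, not specified in the patent text), the following Python sketch builds sample data as pairs of the channel of p past time units and the channel of the (p+1)-th time unit, together with inference data consisting of the channel of the past k time units.

```python
# Assumed data layout for the time-domain channel prediction example; shapes
# and random channel values are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
p, k, num_samples, num_antennas = 4, 4, 3, 8

# One sample datum: (channel of p past units, channel of the (p+1)-th unit).
sample_data = [
    (rng.standard_normal((p, num_antennas)),          # past p time units
     rng.standard_normal((num_antennas,)))            # (p+1)-th time unit (example output)
    for _ in range(num_samples)
]

# Inference data: channel information of the past k time units.
inference_data = rng.standard_normal((k, num_antennas))

# The first neural network model would take both as input and output the
# predicted channel of the (k+1)-th time unit (same shape as one time unit).
predicted_shape = (num_antennas,)
print(len(sample_data), inference_data.shape, predicted_shape)
```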
  • The first neural network may alternatively be a model for frequency domain channel prediction, that is, the first neural network model may predict the channel information of all frequency domain units based on the channel information of some frequency domain units (such as subcarriers or part of the bandwidth), for example, predicting the channel information of K frequency domain units based on the channel information of k frequency domain units, where k is a positive integer and K is an integer greater than k.
  • each sample data in one or more sample data includes channel information of p subcarriers and channel information of P subcarriers
  • the inference data may include channel information of k subcarriers
  • the inference result corresponding to the inference data may include channel information of K subcarriers.
  • p is equal to k and P is equal to K.
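Under the same caveats, a frequency-domain predictor can be viewed as a mapping from the channel information of k observed subcarriers to that of all K subcarriers. The layer sizes and the real/imaginary feature layout in the sketch below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Illustrative sketch: map channel info on k observed subcarriers to all K subcarriers.
k, K, feat = 16, 64, 2   # feat = 2 for real/imaginary parts (an assumption)

freq_predictor = nn.Sequential(
    nn.Flatten(),                 # [batch, k, feat] -> [batch, k * feat]
    nn.Linear(k * feat, 256),
    nn.ReLU(),
    nn.Linear(256, K * feat),
    nn.Unflatten(1, (K, feat)),   # -> [batch, K, feat]
)

partial_channel = torch.randn(4, k, feat)   # channel info of k subcarriers (inference data)
full_channel = freq_predictor(partial_channel)
print(full_channel.shape)                   # torch.Size([4, 64, 2])
```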
  • sample data can participate in the processing of the first neural network model in a variety of ways, and some implementation examples will be provided below for description.
  • one or more sample data may participate in the processing of the first neural network model based on a cross-attention approach.
  • the first neural network model may include a first module and a second module.
  • the first neural network model may be a transformer model
  • the first module may be a transformer encoder
  • the second module may be a transformer decoder.
  • the one or more sample data are used as the input of the first module, from which a query (Q) vector and a key (K) vector may be obtained.
  • the inference data may be used as a value (V) vector.
  • after Q, K, and V are processed by the cross-attention layer of the second module, information fusion is achieved and the final output is obtained.
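A minimal sketch of this cross-attention style fusion follows. It tracks the description literally: the encoder output of the sample data supplies Q and K, while the inference data acts as V. For the attention product to be well defined, the sketch assumes the inference data is arranged into the same number of tokens as the encoded sample data; the dimensions and the single-head attention are further assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossAttentionFusion(nn.Module):
    """Sketch of cross-attention fusion of sample data and inference data (assumptions noted above)."""
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        # First module: an encoder that processes the one or more sample data.
        self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, samples: torch.Tensor, inference: torch.Tensor) -> torch.Tensor:
        # samples, inference: [batch, n_tokens, dim] (same token count assumed)
        enc = self.encoder(samples)                 # encoder output of the sample data
        q, k = self.q_proj(enc), self.k_proj(enc)   # Q and K derived from the encoded sample data
        attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ inference                     # inference data acts as V (single-head fusion)

fusion = CrossAttentionFusion()
samples = torch.randn(2, 8, 64)     # one or more sample data, tokenized
inference = torch.randn(2, 8, 64)   # inference data, tokenized to the same length
fused = fusion(samples, inference)  # fused representation, shape [2, 8, 64]
```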
  • one or more sample data may be used as pre-inputs to participate in the processing of the first neural network model.
  • one or more sample data and inference data are concatenated and input into the first neural network model, i.e., the one or more sample data serve as pre-input. After being processed by the first neural network model, the final output is obtained.
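The pre-input variant reduces to a simple concatenation along the token dimension, as in the following sketch; the sequence lengths and the transformer backbone are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

# Illustrative sketch: the sample data are placed in front of the inference data along the
# token (sequence) dimension and the combined sequence is fed through the model.
dim = 64
backbone = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)

samples = torch.randn(2, 3, dim)      # 3 sample-data tokens (the "pre-input")
inference = torch.randn(2, 10, dim)   # 10 inference-data tokens
combined = torch.cat([samples, inference], dim=1)    # [2, 13, dim]

output = backbone(combined)
inference_result = output[:, samples.shape[1]:, :]   # keep only the inference-data positions
print(inference_result.shape)                        # torch.Size([2, 10, 64])
```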
  • one or more sample data may be used to determine neural network model parameters and participate in the processing of the first neural network model.
  • one or more sample data can be used to determine some neural network model parameters, and these partial neural network model parameters can be used to process (e.g., generate/adjust/modify) one or more neural network layers in the first neural network model to obtain a processing result. Thereafter, the inference data is processed based on the processing result to obtain an inference result corresponding to the inference data.
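One hedged way to realize this is a small hypernetwork that pools the sample data and generates a scale and shift for one hidden layer of the main model (a FiLM-style modulation). The modulation choice and all layer sizes below are assumptions, not something fixed by the application.

```python
import torch
import torch.nn as nn

class SampleConditionedModel(nn.Module):
    """Sketch of using sample data to determine part of the model parameters."""
    def __init__(self, dim: int = 64, hidden: int = 128):
        super().__init__()
        self.in_layer = nn.Linear(dim, hidden)
        self.out_layer = nn.Linear(hidden, dim)
        # Hypernetwork: sample data -> (scale, shift) applied to the hidden layer.
        self.hyper = nn.Linear(dim, 2 * hidden)

    def forward(self, samples: torch.Tensor, inference: torch.Tensor) -> torch.Tensor:
        # samples: [batch, n_samples, dim]; inference: [batch, dim]
        pooled = samples.mean(dim=1)                        # summarize the sample data
        scale, shift = self.hyper(pooled).chunk(2, dim=-1)  # generated "partial parameters"
        h = torch.relu(self.in_layer(inference))
        h = h * (1 + scale) + shift                         # adjust one neural network layer
        return self.out_layer(h)

model = SampleConditionedModel()
result = model(torch.randn(2, 4, 64), torch.randn(2, 64))
print(result.shape)  # torch.Size([2, 64])
```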
  • the manner in which the one or more sample data participate in the processing is not limited to the implementation examples provided in Figures 4b to 4d above.
  • the one or more sample data may also participate in the processing of the first neural network model in other ways, which is not limited here.
  • the first communication device processes the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data, including:
  • the inference performance of the first neural network model is lower than (or equal to) a threshold;
  • a difference between the data distribution of the inference data and the data distribution of the inference data inputted into the first neural network model for the previous k times is greater than (or equal to) a threshold value, where k is a positive integer;
  • the communication state (of the first communication device) changes.
  • the first communication device can add sample data to the input of the first neural network model, that is, the first communication device can process the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data.
  • the first communication device can process the inference data based on the first neural network model to obtain an inference result corresponding to the inference data, that is, the input of the first neural network model may not include sample data to reduce overhead.
  • the first communication device may locally determine whether at least one of the above conditions is met, i.e., the first communication device may trigger the execution of step S302 based on the result of the local determination.
  • the first communication device may determine whether at least one of the above conditions is met based on an instruction from another communication device, i.e., the first communication device may determine whether the triggering condition of step S302 is met based on the instruction from the other communication device.
  • the first communication device may trigger the execution of step S302 based on the instruction from the other communication device.
  • the other communication device may include the management module and/or data storage module described below.
  • the first communication device can process the one or more sample data and the inference data based on the first neural network model in step S302 to obtain an inference result corresponding to the inference data.
  • the inference result obtained by the first communication device based on the first neural network model is the same as at least one data feature of the sample data, that is, the first communication device can obtain an inference result that has at least one data feature that is the same as the data feature of the sample data based on the example/guidance of the sample data, and can adjust the model inference in a specific scenario based on the sample data to reduce the complexity of model management.
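A toy check of the triggering conditions listed above might look as follows; the performance metric, the mean-based distribution-shift proxy, and the thresholds are all assumptions introduced only for illustration.

```python
import numpy as np

def should_use_sample_data(perf, perf_threshold,
                           current_input, recent_inputs,
                           shift_threshold, state_changed):
    """Sketch of the triggering check: add sample data to the model input if the inference
    performance is low, the input distribution has drifted away from the previous k
    inference inputs, or the communication state has changed."""
    if recent_inputs:
        # Simple distribution-shift proxy: distance between the mean of the current input
        # and the mean of the previous k inference inputs (an assumption, not prescribed).
        past_mean = np.mean(np.stack(recent_inputs), axis=0).mean()
        shift = abs(current_input.mean() - past_mean)
    else:
        shift = 0.0
    return perf < perf_threshold or shift > shift_threshold or state_changed

# Usage with made-up numbers: low inference performance triggers the use of sample data.
trigger = should_use_sample_data(perf=0.62, perf_threshold=0.8,
                                 current_input=np.random.randn(64),
                                 recent_inputs=[np.random.randn(64) for _ in range(4)],
                                 shift_threshold=0.5, state_changed=False)
print(trigger)  # True
```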
  • the first neural network model is updated to obtain a second neural network model.
  • when the update frequency of the one or more sample data is greater than (or equal to) a threshold, or when the performance corresponding to the inference result of the inference data is lower than (or equal to) a threshold, it can be determined that the performance of the current first neural network model is poor. Accordingly, the first neural network model can be updated to obtain a second neural network model, that is, a second neural network model with better performance is obtained by training.
  • the update frequency of the one or more sample data is less than (or equal to) a threshold, or when the performance corresponding to the inference result corresponding to the inference data is higher than (or equal to) a threshold, it can be determined that the performance of the current first neural network model is superior. Accordingly, there is no need to update the first neural network model to avoid unnecessary overhead.
  • the communication device that updates the first neural network model to obtain the second neural network model can be the first communication device, or other communication devices (for example, the other communication device can include the model training module described later).
  • before step S301, the method also includes: the first communication device sends request information for requesting the one or more sample data, so that the recipient of the request information provides the one or more sample data to the first communication device based on the request information.
  • the first communication device may include a module for storing/caching sample data. Accordingly, the first communication device may locally obtain the one or more sample data based on the module.
  • step S301 of the method shown in FIG3 is an optional step, i.e., the first communication device can obtain the one or more sample data without receiving the sample data.
  • the method shown in FIG3 can be applied to the communication scenario shown in FIG5.
  • the communication scenario includes the following multiple modules.
  • Data collection module: collects inference data or sample data.
  • Model training module: trains or fine-tunes the neural network model.
  • Model inference module: performs inference based on inference data and sample data to obtain inference results.
  • Data cache module: used to cache sample data used for model inference.
  • Data storage module: used to store sample data, provide sample data to the data cache module (for example, the sample data can be obtained by retrieval, that is, the data storage module stores a large amount of sample data, the data cache module stores a small amount of sample data for direct use, and retrieval selects a small amount of relevant sample data from the large amount of stored sample data for model inference), provide training data to the model training module, and so on.
  • the communication scenario shown in Figure 5 may further include a management module.
  • the management module is used to monitor model performance, manage sample data usage of the model inference module, manage model training or fine-tuning of the model training module, etc.
  • the data caching module can be used to obtain, delete, and update sample data.
  • the data caching module can be triggered by the management module, such as when model inference performance degrades, data distribution changes, or the status of the first communication device changes.
  • the data caching module can obtain sample data in two ways: one is to obtain it from the data collection module, and the other is to retrieve it from the data storage module.
  • the data caching module can obtain sample data from the data collection module. For example, the data caching module initiates sample data collection from the data collection module, configuring the type and number of sample data to be collected. The data collection module then initiates data collection from other communication devices (e.g., the first communication device) based on the sample data configuration. For example, for a channel estimation task, the collected sample data may be channel information obtained by estimation on a reference signal.
  • the data cache module can obtain sample data from the data storage module.
  • the data cache module can retrieve sample data from the data storage module based on the inference data, for use in model inference.
  • the data storage module can store multiple sample data from different scenarios, retrieve sample data similar to the inference data as an example, and send the retrieved sample data to the data cache module.
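A minimal retrieval sketch under these assumptions (cosine similarity over flattened samples is one possible, not prescribed, similarity measure):

```python
import numpy as np

def retrieve_similar_samples(inference_data, stored_samples, top_n=2):
    """Sketch of retrieval in the data storage module: select the stored sample data most
    similar to the inference data and return them to the data cache module."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    scores = [cosine(inference_data.ravel(), s.ravel()) for s in stored_samples]
    best = np.argsort(scores)[::-1][:top_n]     # indices of the most similar samples
    return [stored_samples[i] for i in best]

# Usage: pick the 2 stored samples closest to the current inference data.
store = [np.random.randn(64) for _ in range(10)]
selected = retrieve_similar_samples(np.random.randn(64), store, top_n=2)
print(len(selected))  # 2
```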
  • the data cache module can cache the acquired sample data, and the cached sample data can be used for model reasoning, that is, as a part of the input of real-time model reasoning.
  • sample data in the data cache module may also be stored in the data storage module.
  • the data storage module can be used to store (or long-term store) sample data and provide sample data retrieval and other functions to other modules (e.g., data cache module, model training module, etc.).
  • Specific functions include one or more of the following:
  • Data addition: add sample data from the data collection module or data cache module to the storage module.
  • Data deletion: delete specific sample data from storage.
  • Data update: delete specific sample data and add new sample data.
  • Data monitoring: periodic or event-triggered monitoring of the stored data to drive addition/deletion/update operations, for example based on the relevance of long-term stored sample data to the currently collected data.
  • Data retrieval: retrieve sample data from storage and provide it to the data cache module.
  • Provide training data: provide sample data to the model training module as training data.
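The bookkeeping of such a storage module can be pictured with the following sketch; the dictionary layout, the timestamp-based monitoring rule, and the method names are assumptions made for illustration.

```python
import time

class SampleDataStore:
    """Sketch of the data storage module's bookkeeping (structure and fields are assumptions)."""

    def __init__(self):
        self._store = {}   # sample_id -> (sample, timestamp)

    def add(self, sample_id, sample):
        """Data addition: add sample data from the data collection or data cache module."""
        self._store[sample_id] = (sample, time.time())

    def delete(self, sample_id):
        """Data deletion: delete specific sample data from storage."""
        self._store.pop(sample_id, None)

    def update(self, sample_id, new_sample):
        """Data update: delete specific sample data and add new sample data."""
        self.delete(sample_id)
        self.add(sample_id, new_sample)

    def monitor(self, max_age_seconds):
        """Data monitoring: drop sample data stored for longer than max_age_seconds."""
        now = time.time()
        stale = [sid for sid, (_, ts) in self._store.items() if now - ts > max_age_seconds]
        for sid in stale:
            self.delete(sid)
        return stale

    def provide_training_data(self):
        """Provide training data: hand all stored samples to the model training module."""
        return [sample for sample, _ in self._store.values()]
```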
  • the first communication device can perform model inference based on one or more sample data. Accordingly, the first communication device at least includes the model inference module in FIG. 5 .
  • the second communication device can be used to transmit sample data.
  • the second communication device can include one or more of the data collection module, data cache module, and data storage module shown in FIG5 .
  • the second communication device can be implemented in a variety of ways, which will be described below with reference to some examples.
  • Implementation method 1: In the method shown in FIG. 3, the second communication device may be used to cache sample data, that is, the second communication device at least includes the data cache module shown in FIG. 5.
  • the process of the second communication device sending the one or more sample data includes: the second communication device sending the one or more sample data to the communication device that deploys the first neural network model or the communication device used to store data.
  • the second communication device can send the one or more sample data to the communication device that deploys the first neural network model, so that the recipient of the sample data can implement inference of the neural network model based on the sample data.
  • the second communication device can also send the one or more sample data to the communication device used to store data, so that the recipient of the sample data can implement storage of the sample data.
  • the second communication device receives the one or more sample data to acquire the one or more sample data, wherein the one or more sample data come from a communication device for collecting data (e.g., a communication device including the data collection module in FIG. 5 ), or the one or more sample data come from a communication device for storing data (e.g., a communication device including the data storage module in FIG. 5 ).
  • the method before the second communication device receives the one or more sample data, the method further includes: the second communication device sending request information for requesting the one or more sample data. Specifically, the second communication device may further send request information for requesting the one or more sample data, so that a recipient of the request information can provide the one or more sample data to the second communication device based on the request information.
  • Implementation method 2: In the method shown in FIG. 3, the second communication device may be used to store sample data, that is, the second communication device at least includes the data storage module shown in FIG. 5.
  • Implementation method 3: In the method shown in FIG. 3, the second communication device can be used to collect sample data, that is, the second communication device at least includes the data collection module shown in FIG. 5.
  • the process of the second communication device sending the one or more sample data may include: the second communication device sending the one or more sample data to a communication device for caching data.
  • the second communication device may send the one or more sample data to a communication device for caching data (e.g., a communication device including the data caching module in FIG. 5), so that the recipient can cache the sample data.
  • the recipient of the sample data can send the one or more sample data to the communication device that deploys the first neural network model to implement inference of the neural network model.
  • the method further includes: the second communication device receiving request information for requesting the one or more sample data.
  • the second communication device may further receive request information for requesting the one or more sample data, so that the second communication device can provide the one or more sample data based on the request information.
  • when at least one of the following conditions is met, the second communication device sends the one or more sample data:
  • the inference performance of the first neural network model is lower than (or equal to) a threshold;
  • a difference between the data distribution of the inference data and the data distribution of the inference data inputted into the first neural network model for the previous k times is greater than (or equal to) a threshold value, where k is a positive integer;
  • the communication state of the communication device deploying the first neural network model changes.
  • the second communication device can send the one or more sample data so that a recipient of the one or more sample data can add the sample data to the input of the first neural network model.
  • the first communication device can process the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data.
  • the second communication device may not send the one or more sample data to the first communication device, that is, the input of the first neural network model may not include the sample data, to reduce overhead.
  • the second communication device can locally determine whether at least one of the above items is satisfied, that is, the second communication device can trigger the transmission of one or more sample data based on the result of the local determination.
  • the second communication device can determine whether at least one of the above items is satisfied based on the instructions of other communication devices, that is, the second communication device can determine whether the triggering condition for sending one or more sample data is satisfied based on the instructions of other communication devices.
  • the second communication device can trigger the transmission of one or more sample data based on the instructions of other communication devices.
  • the other communication device may include the management module and/or data storage module described above, etc.
  • the method further includes: updating the one or more sample data.
  • the one or more sample data may be updated to improve the processing performance of the neural network model through the updated sample data.
  • the one or more sample data are updated when at least one of the following conditions is met:
  • the inference performance corresponding to the inference result of the inference data satisfies the first condition;
  • the data distribution of the inference data satisfies the second condition
  • the communication state of the communication device deploying the first neural network model changes.
  • the one or more sample data can be updated to improve the processing performance of the neural network model through the updated sample data, or to reduce the complexity of model processing through the updated sample data (for example, when the process of updating the sample data is to reduce the sample data).
  • updating one or more sample data may include adding sample data, reducing sample data, or replacing sample data.
  • the different modules of the communication scenario shown in FIG5 may be independently configured, or some modules may be integrated into the same device/communication apparatus. Some implementation examples are provided below for introduction.
  • the data cache module and the data storage module can be set in the same device/communication apparatus.
  • the same device/communication apparatus can provide the functions of the data cache module and the data storage module.
  • the functions of these two modules can be referred to the relevant description of Figure 5 above.
  • the data collection module, data cache module, and data storage module can be provided in the same device/communication apparatus.
  • the same device/communication apparatus can provide the functions of the data collection module, the data cache module, and the data storage module.
  • the functions of these three modules can be referred to the relevant description of Figure 5 above.
  • the data collection module and the data cache module can be set in the same device/communication device.
  • the same device/communication device can provide the functions of the data collection module and the data cache module.
  • the functions of these two modules can be referred to the relevant description of Figure 5 above.
  • the data cache module and the model inference module can be set in the same device/communication device.
  • the same device/communication device can provide the functions of the data cache module and the model inference module.
  • the functions of these two modules can be referred to the relevant description of Figure 5 above.
  • an embodiment of the present application provides a communication device 700.
  • This communication device 700 can implement the functions of the second communication device or the first communication device in the above-mentioned method embodiment, thereby also achieving the beneficial effects of the above-mentioned method embodiment.
  • the communication device 700 can be the first communication device (or the second communication device), or it can be an integrated circuit or component, such as a chip, within the first communication device (or the second communication device).
  • the transceiver unit 702 may include a sending unit and a receiving unit, which are respectively used to perform sending and receiving.
  • the device 700 when the device 700 is used to execute the method executed by the first communication device in the aforementioned embodiment, the device 700 includes a processing unit 701; the processing unit 701 is used to obtain one or more sample data; the processing unit 701 is also used to process the one or more sample data and inference data based on the first neural network model to obtain an inference result corresponding to the inference data; wherein the inference result is the same as at least one data feature of the sample data.
  • the device 700 when the device 700 is used to execute the method executed by the second communication device in the aforementioned embodiment, the device 700 includes a processing unit 701 and a transceiver unit 702; the processing unit 701 obtains one or more sample data; wherein the one or more sample data and inference data are used to be processed by the first neural network model to obtain an inference result corresponding to the inference data, and the inference result is the same as at least one data feature of the sample data; the transceiver unit 702 is used to send the one or more sample data.
  • Fig. 8 is another schematic structural diagram of a communication device 800 provided in this application.
  • the communication device 800 includes a logic circuit 801 and an input/output interface 802.
  • the communication device 800 may be a chip or an integrated circuit.
  • the transceiver unit 702 shown in FIG7 may be a communication interface, which may be the input/output interface 802 in FIG8 , which may include an input interface and an output interface.
  • the communication interface may be a transceiver circuit, which may include an input interface circuit and an output interface circuit.
  • the input-output interface 802 is used to obtain one or more sample data; the logic circuit 801 is used to process the one or more sample data and inference data based on the first neural network model to obtain an inference result corresponding to the inference data; wherein the inference result is the same as at least one data feature of the sample data.
  • the logic circuit 801 is used to obtain one or more sample data; wherein, the one or more sample data and inference data are used to be processed by the first neural network model to obtain an inference result corresponding to the inference data, and the inference result is the same as at least one data feature of the sample data; the input and output interface 802 is used to send the one or more sample data.
  • the logic circuit 801 and the input/output interface 802 may also execute other steps executed by the first communication device or the second communication device in any embodiment and achieve corresponding beneficial effects, which will not be described in detail here.
  • the processing unit 701 shown in FIG. 7 may be the logic circuit 801 in FIG. 8 .
  • the logic circuit 801 may be a processing device, and the functions of the processing device may be partially or entirely implemented by software.
  • the processing device may include a memory and a processor, wherein the memory is used to store a computer program, and the processor reads and executes the computer program stored in the memory to perform corresponding processing and/or steps in any one of the method embodiments.
  • the processing device may include only a processor.
  • a memory for storing the computer program is located outside the processing device, and the processor is connected to the memory via circuits/wires to read and execute the computer program stored in the memory.
  • the memory and processor may be integrated or physically separate.
  • the processing device may be one or more chips, or one or more integrated circuits.
  • the processing device may be one or more field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), system-on-chips (SoCs), central processing units (CPUs), network processors (NPs), digital signal processing circuits (DSPs), microcontrollers (MCUs), programmable logic devices (PLDs), or other integrated chips, or any combination of the above chips or processors.
  • FIG 9 shows a communication device 900 involved in the above-mentioned embodiments provided in an embodiment of the present application.
  • the communication device 900 can specifically be a communication device serving as a terminal device in the above-mentioned embodiments.
  • in the example shown in Figure 9, the communication device is implemented by a terminal device (or a component in the terminal device).
  • the communication device 900 may include but is not limited to at least one processor 901 and a communication port 902 .
  • the transceiver unit 702 shown in FIG7 may be a communication interface, which may be the communication port 902 in FIG9 , which may include an input interface and an output interface.
  • the communication port 902 may be a transceiver circuit, which may include an input interface circuit and an output interface circuit.
  • the device may also include at least one of a memory 903 and a bus 904.
  • the at least one processor 901 is used to control and process the actions of the communication device 900.
  • the processor 901 can be a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It can implement or execute the various exemplary logic blocks, modules, and circuits described in conjunction with the disclosure of this application.
  • the processor can also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of a digital signal processor and a microprocessor, and so on.
  • the communication device 900 shown in Figure 9 can be specifically used to implement the steps implemented by the terminal device in the aforementioned method embodiment and achieve the corresponding technical effects of the terminal device.
  • the specific implementation methods of the communication device shown in Figure 9 can refer to the description in the aforementioned method embodiment and will not be repeated here.
  • FIG10 is a schematic diagram of the structure of the communication device 1000 involved in the above embodiment provided in the embodiment of the present application.
  • Device 1000 can specifically be a communication device as a network device in the above embodiment.
  • in the example shown in Figure 10, the communication device is implemented by a network device (or a component in the network device); the structure of the communication device can refer to the structure shown in Figure 10.
  • the communication device 1000 includes at least one processor 1011 and at least one network interface 1014. Further optionally, the communication device also includes at least one memory 1012, at least one transceiver 1013 and one or more antennas 1015.
  • the processor 1011, the memory 1012, the transceiver 1013 and the network interface 1014 are connected, for example, via a bus. In an embodiment of the present application, the connection may include various interfaces, transmission lines or buses, etc., which are not limited in this embodiment.
  • the antenna 1015 is connected to the transceiver 1013.
  • the network interface 1014 is used to enable the communication device to communicate with other communication devices through a communication link.
  • the network interface 1014 may include a network interface between the communication device and the core network device, such as an S1 interface, and the network interface may include a network interface between the communication device and other communication devices (such as other network devices or core network devices), such as an X2 or Xn interface.
  • the transceiver unit 702 shown in FIG7 may be a communication interface, which may be the network interface 1014 in FIG10 , which may include an input interface and an output interface.
  • the network interface 1014 may be a transceiver circuit, which may include an input interface circuit and an output interface circuit.
  • Processor 1011 is primarily used to process communication protocols and communication data, control the entire communication device, execute software programs, and process software program data, for example, to support the communication device in performing the actions described in the embodiments.
  • the communication device may include a baseband processor and a central processing unit.
  • the baseband processor is primarily used to process communication protocols and communication data, while the central processing unit is primarily used to control the entire terminal device, execute software programs, and process software program data.
  • Processor 1011 in Figure 10 may integrate the functions of both a baseband processor and a central processing unit. Those skilled in the art will appreciate that the baseband processor and the central processing unit may also be independent processors interconnected via a bus or other technology.
  • a terminal device may include multiple baseband processors to accommodate different network standards, multiple central processing units to enhance its processing capabilities, and various components of the terminal device may be connected via various buses.
  • the baseband processor may also be referred to as a baseband processing circuit or a baseband processing chip.
  • the central processing unit may also be referred to as a central processing circuit or a central processing chip.
  • the functionality for processing communication protocols and communication data may be built into the processor or stored in memory as a software program, which is executed by the processor to implement the baseband processing functionality.
  • the memory is primarily used to store software programs and data.
  • Memory 1012 can exist independently and be connected to processor 1011. Alternatively, memory 1012 and processor 1011 can be integrated together, for example, within a single chip.
  • Memory 1012 can store program code for executing the technical solutions of the embodiments of the present application, and execution is controlled by processor 1011. The various computer program codes executed can also be considered drivers for processor 1011.
  • Figure 10 shows only one memory and one processor. In an actual terminal device, there may be multiple processors and multiple memories.
  • the memory may also be referred to as a storage medium or a storage device.
  • the memory may be a storage element on the same chip as the processor, i.e., an on-chip storage element, or an independent storage element, which is not limited in the present embodiment.
  • the transceiver 1013 can be used to support the reception or transmission of radio frequency signals between the communication device and the terminal.
  • the transceiver 1013 can be connected to the antenna 1015.
  • the transceiver 1013 includes a transmitter Tx and a receiver Rx. Specifically, one or more antennas 1015 can receive radio frequency signals.
  • the receiver Rx of the transceiver 1013 is used to receive the radio frequency signal from the antenna, convert the radio frequency signal into a digital baseband signal or a digital intermediate frequency signal, and provide the digital baseband signal or digital intermediate frequency signal to the processor 1011 so that the processor 1011 can further process the digital baseband signal or digital intermediate frequency signal, such as demodulation and decoding.
  • the transmitter Tx in the transceiver 1013 is also used to receive a modulated digital baseband signal or digital intermediate frequency signal from the processor 1011, convert the modulated digital baseband signal or digital intermediate frequency signal into a radio frequency signal, and transmit the radio frequency signal through one or more antennas 1015.
  • the receiver Rx can selectively perform one or more stages of down-mixing and analog-to-digital conversion on the RF signal to obtain a digital baseband signal or a digital intermediate frequency signal.
  • the order of the down-mixing and analog-to-digital conversion processes is adjustable.
  • the transmitter Tx can selectively perform one or more stages of up-mixing and digital-to-analog conversion on the modulated digital baseband signal or digital intermediate frequency signal to obtain a RF signal.
  • the order of the up-mixing and digital-to-analog conversion processes is adjustable.
  • the digital baseband signal and the digital intermediate frequency signal may be collectively referred to as digital signals.
  • the transceiver 1013 may also be referred to as a transceiver unit, a transceiver, a transceiver device, etc.
  • a device in the transceiver unit that implements a receiving function may be referred to as a receiving unit
  • a device in the transceiver unit that implements a transmitting function may be referred to as a transmitting unit. That is, the transceiver unit includes a receiving unit and a transmitting unit.
  • the receiving unit may also be referred to as a receiver, an input port, a receiving circuit, etc.
  • the transmitting unit may be referred to as a transmitter, a transmitting device, or a transmitting circuit, etc.
  • the communication device 1000 shown in Figure 10 can be specifically used to implement the steps implemented by the network device in the aforementioned method embodiment, and to achieve the corresponding technical effects of the network device.
  • the specific implementation methods of the communication device 1000 shown in Figure 10 can refer to the description in the aforementioned method embodiment, and will not be repeated here one by one.
  • FIG11 is a schematic structural diagram of the communication device involved in the above-mentioned embodiment provided in an embodiment of the present application.
  • the communication device 110 includes, for example, modules, units, elements, circuits, or interfaces, which are appropriately configured together to implement the technical solutions provided in this application.
  • the communication device 110 can be the terminal device or network device described above, or a component (such as a chip) in these devices, used to implement the method described in the foregoing method embodiments.
  • the communication device 110 includes one or more processors 111.
  • the processor 111 can be a general-purpose processor or a dedicated processor. For example, it can be a baseband processor or a central processing unit.
  • the baseband processor can be used to process communication protocols and communication data
  • the central processing unit can be used to control the communication device (such as a RAN node, terminal, or chip, etc.), execute software programs, and process data of software programs.
  • the processor 111 may include a program 113 (sometimes also referred to as code or instructions), which may be executed on the processor 111 to cause the communication device 110 to perform the methods described in the foregoing embodiments.
  • the communication device 110 includes circuitry (not shown in FIG11 ).
  • the communication device 110 may include one or more memories 112 on which a program 114 (sometimes also referred to as code or instructions) is stored.
  • the program 114 can be run on the processor 111, so that the communication device 110 executes the method described in the above method embodiment.
  • the processor 111 and/or the memory 112 may include AI modules 117 and 118, which are used to implement AI-related functions.
  • the AI module can be implemented through software, hardware, or a combination of software and hardware.
  • the AI module may include a radio access network (RAN) intelligent controller (RIC) module.
  • the AI module may be a near-real-time RIC or a non-real-time RIC.
  • data may be stored in the processor 111 and/or the memory 112.
  • the processor and the memory may be provided separately or integrated together.
  • the communication device 110 may further include a transceiver 115 and/or an antenna 116.
  • the processor 111 may also be referred to as a processing unit, and controls the communication device (e.g., a RAN node or terminal).
  • the transceiver 115 may also be referred to as a transceiver unit, a transceiver, a transceiver circuit, or a transceiver, and is configured to implement the transceiver functions of the communication device through the antenna 116.
  • the processing unit 701 shown in FIG7 may be the processor 111.
  • the transceiver unit 702 shown in FIG7 may be a communication interface, which may be the transceiver 115 shown in FIG11 .
  • the transceiver 115 may include an input interface and an output interface.
  • the transceiver 115 may be a transceiver circuit, which may include an input interface circuit and an output interface circuit.
  • An embodiment of the present application further provides a computer-readable storage medium, which is used to store one or more computer-executable instructions. When the computer-executable instructions are executed by a processor, the processor performs the method described in the possible implementations of the first communication device or the second communication device in the foregoing embodiments.
  • An embodiment of the present application also provides a computer program product (or computer program). When the computer program product is executed by a processor, the processor performs the methods that may be implemented by the above-mentioned first communication device or second communication device.
  • An embodiment of the present application also provides a chip system, which includes at least one processor for supporting a communication device to implement the functions involved in the possible implementation methods of the above-mentioned communication device.
  • the chip system also includes an interface circuit, which provides program instructions and/or data to the at least one processor.
  • the chip system may also include a memory, which is used to store the necessary program instructions and data for the communication device.
  • the chip system can be composed of chips, or it can include chips and other discrete devices, wherein the communication device can specifically be the first communication device or the second communication device in the aforementioned method embodiment.
  • An embodiment of the present application further provides a communication system, which includes the first communication device and the second communication device in any of the above embodiments.
  • the disclosed systems, devices and methods can be implemented in other ways.
  • the device embodiments described above are merely schematic.
  • the division of the units is merely a logical function division.
  • Another point is that the mutual coupling or direct coupling or communication connection shown or discussed can be an indirect coupling or communication connection through some interfaces, devices or units, which can be electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of these units may be selected to achieve the purpose of this embodiment according to actual needs.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware or in the form of a software functional unit. If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a mobile hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, etc., various media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Provided in the present application are a communication method and a related apparatus. In the method, after acquiring one or more pieces of sample data, a first communication apparatus can process the one or more pieces of sample data and reasoning data on the basis of a first neural network model, so as to obtain a reasoning result that corresponds to the reasoning data. In this way, when a communication apparatus in a communication system serves as a model processing node, the computing power of the communication apparatus can be applied to the processing of neural network models. Furthermore, the reasoning result obtained by the first communication apparatus on the basis of the first neural network model is the same as at least one data feature of the sample data, that is, on the basis of examples/guidance of the sample data, the first communication apparatus can obtain the reasoning result having at least one data feature that is the same as that of the sample data, and can adjust model reasoning in a specific scenario on the basis of the sample data, thereby reducing the complexity in terms of model management.

Description

A communication method and related device

This application claims priority to the Chinese patent application filed with the State Intellectual Property Office on February 29, 2024, with application number 202410235482.2 and application name "A communication method and related device", the entire contents of which are incorporated by reference into this application.

Technical Field

The present application relates to the field of communications, and in particular to a communication method and related devices.

Background Art

Wireless communication is communication between two or more communication nodes that is carried out without propagation through conductors or cables; such communication nodes generally include network devices and terminal devices.

Currently, in wireless communication systems, communication nodes generally possess both signal transceiving and computing capabilities. Taking a network device with computing capability as an example, the computing capability of the network device mainly provides computing support for its signal transceiving capability (for example, computing the time domain resources, frequency domain resources, etc. that carry signals), so as to realize communication between the network device and other communication nodes.

Furthermore, in communication networks, the computing power of communication nodes not only supports the aforementioned communication tasks but may also handle the processing of neural network models. However, how to reduce the complexity of model management remains a pressing technical challenge.

Summary of the Invention

The present application provides a communication method and related devices for reducing the complexity of model management.

In a first aspect, the present application provides a communication method, which is performed by a first communication device. The first communication device may be a communication device (such as a terminal device or a network device), or the first communication device may be a component of the communication device (such as a processor, a chip, or a chip system, etc.), or the first communication device may also be a logic module or software that can implement all or part of the functions of the communication device. In this method, the first communication device obtains one or more sample data; the first communication device processes the one or more sample data and inference data based on a first neural network model to obtain an inference result corresponding to the inference data; wherein the inference result is the same as at least one data feature of the sample data.

Based on the above solution, after the first communication device obtains one or more sample data, the first communication device can process the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data. In this way, the inference result obtained by the first communication device based on the first neural network model is the same as at least one data feature of the sample data, that is, the first communication device can obtain an inference result that has at least one data feature that is the same as the data feature of the sample data based on the example/guidance of the sample data, and can adjust the model inference in a specific scenario based on the sample data to reduce the complexity of model management.

In this application, terms such as neural network model, artificial intelligence (AI) model, AI neural network model, machine learning model, and AI processing model can be used interchangeably.

It should be understood that sample data can be used as part of the input of a neural network model to ensure that the inference results output by the neural network model have at least one data feature in common with the sample data. The term "sample data" can be replaced by other terms such as reference data, anchor data, example data, or guidance data.

Optionally, the data feature includes at least one of the following: data dimension, parameter quantity, data content, data type, or physical quantity.

In a possible implementation of the first aspect, when at least one of the following conditions is met, the first communication device processes the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data, including:

The inference performance of the first neural network model is below a threshold;

The difference between the data distribution of the inference data and the data distribution of the inference data input into the first neural network model in the previous k times is greater than a threshold, where k is a positive integer;

The communication state (of the first communication device) changes.

Based on the above solution, if at least one of the above items is satisfied, it can be determined that the current first neural network model has poor performance in obtaining an inference result based on the inference data. To this end, the first communication device can add sample data to the input of the first neural network model, that is, the first communication device can process the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data.

Optionally, when at least one of the above items is not satisfied, it can be determined that the current first neural network model has a better performance in obtaining an inference result based on the inference data. Accordingly, the first communication device can process the inference data based on the first neural network model to obtain an inference result corresponding to the inference data, that is, the input of the first neural network model may not include sample data, to reduce overhead.

It should be noted that the first communication device can locally determine whether at least one of the above items is satisfied, that is, the first communication device can trigger the sending of one or more sample data based on the result of the local determination. Alternatively, the first communication device can determine whether at least one of the above items is satisfied based on the instructions of other communication devices, that is, the first communication device can determine whether the triggering condition for sending one or more sample data is satisfied based on the instructions of other communication devices. Alternatively, the first communication device can trigger the sending of one or more sample data based on the instructions of other communication devices. Optionally, the other communication device may include the management module and/or data storage module described later.

In a possible implementation of the first aspect, when the update frequency of the one or more sample data is greater than a threshold, or when the performance corresponding to the inference result corresponding to the inference data is lower than a threshold, the first neural network model is updated to obtain a second neural network model.

Based on the above solution, when the update frequency of the one or more sample data is greater than a threshold, or the performance of the inference result corresponding to the inference data is lower than a threshold, it can be determined that the performance of the current first neural network model is poor. Accordingly, the first neural network model can be updated to obtain a second neural network model. In other words, through neural network model training, a second neural network model with better performance is obtained.

Optionally, when the update frequency of the one or more sample data is less than or equal to a threshold, or when the performance corresponding to the inference result corresponding to the inference data is greater than or equal to a threshold, it can be determined that the performance of the current first neural network model is superior. Accordingly, there is no need to update the first neural network model, to avoid unnecessary overhead.

It should be noted that the communication device that updates the first neural network model to obtain the second neural network model can be the first communication device, or another communication device (for example, the other communication device can include the model training module described later).

In a possible implementation manner of the first aspect, the first communication device obtains one or more sample data, including: the first communication device receives the one or more sample data.

Based on the above solution, the first communication device may acquire the one or more sample data by receiving the one or more sample data.

Optionally, the first communication device may include a module for storing/caching sample data. Accordingly, the first communication device may locally obtain the one or more sample data based on the module.

In a possible implementation manner of the first aspect, the method further includes: the first communication device sending request information for requesting the one or more sample data.

Based on the above solution, the first communication device may also send request information for requesting the one or more sample data, so that the recipient of the request information provides the one or more sample data to the first communication device based on the request information.

本申请第二方面提供了一种通信方法,该方法由第二通信装置执行,该第二通信装置可以是通信设备(如,终端设备或网络设备),或者,该第二通信装置可以是通信设备中的部分组件(例如处理器、芯片或芯片系统等),或者该第二通信装置还可以是能实现全部或部分通信设备功能的逻辑模块或软件。在该方法中,第二通信装置获取一个或多个样本数据;其中,该一个或多个样本数据满足:经过第一神经网络模型处理该一个或多个样本数据和推理数据得到的该推理数据对应的推理结果,与该样本数据的至少一个数据特征是相同的;该第二通信装置发送该一个或多个样本数据。The second aspect of the present application provides a communication method, which is performed by a second communication device, which can be a communication device (such as a terminal device or a network device), or the second communication device can be a component of the communication device (such as a processor, a chip or a chip system, etc.), or the second communication device can also be a logic module or software that can implement all or part of the functions of the communication device. In this method, the second communication device obtains one or more sample data; wherein the one or more sample data satisfy: the inference result corresponding to the inference data obtained by processing the one or more sample data and the inference data through the first neural network model is the same as at least one data feature of the sample data; and the second communication device sends the one or more sample data.

基于上述方案,第二通信装置在向第一通信装置发送一个或多个样本数据之后,该第一通信装置可以基于第一神经网络模型处理该一个或多个样本数据和推理数据,得到该推理数据对应的推理结果。通过这种方式,第一通信装置基于第一神经网络模型得到的推理结果与样本数据的至少一个数据特征是相同的,即该第一通信装置能够基于样本数据的示例/指导,获得与该样本数据的数据特征具备至少一个相同的数据特征的推理结果,能够基于该样本数据实现特定场景下模型推理的调整,以降低模型管理的复杂度。Based on the above solution, after the second communication device sends one or more sample data to the first communication device, the first communication device can process the one or more sample data and inference data based on the first neural network model to obtain an inference result corresponding to the inference data. In this way, the inference result obtained by the first communication device based on the first neural network model is the same as at least one data feature of the sample data, that is, the first communication device can obtain an inference result that has at least one data feature that is the same as the data feature of the sample data based on the example/guidance of the sample data, and can adjust the model inference in a specific scenario based on the sample data to reduce the complexity of model management.

可选地,该数据特征包括以下至少一项:数据维度、参数量、数据内容、数据类型、或物理量。Optionally, the data feature includes at least one of the following: data dimension, parameter quantity, data content, data type, or physical quantity.

在第二方面的一种可能的实现方式中,该方法应用于缓存数据的通信装置(即该第二通信装置可以用于缓存样本数据);该第二通信装置发送该一个或多个样本数据,包括:该第二通信装置向部署该第一神经网络模型的通信装置或用于存储数据的通信装置发送该一个或多个样本数据。In a possible implementation of the second aspect, the method is applied to a communication device that caches data (that is, the second communication device can be used to cache sample data); the second communication device sends the one or more sample data, including: the second communication device sends the one or more sample data to a communication device that deploys the first neural network model or a communication device for storing data.

基于上述方案，第二通信装置可以向部署该第一神经网络模型的通信装置发送该一个或多个样本数据，以使得样本数据的接收方能够基于该样本数据实现神经网络模型的推理。或，第二通信装置也可以向用于存储数据的通信装置发送该一个或多个样本数据，以使得样本数据的接收方能够实现样本数据的存储。Based on the above solution, the second communication device can send the one or more sample data to the communication device on which the first neural network model is deployed, so that the recipient of the sample data can perform inference of the neural network model based on the sample data. Alternatively, the second communication device can also send the one or more sample data to a communication device for storing data, so that the recipient of the sample data can store the sample data.

在第二方面的一种可能的实现方式中,该第二通信装置获取一个或多个样本数据,包括:该第二通信装置接收该一个或多个样本数据,其中,该一个或多个样本数据来自用于收集数据的通信装置,或该一个或多个样本数据来自用于存储数据的通信装置。In a possible implementation of the second aspect, the second communication device obtains one or more sample data, including: the second communication device receives the one or more sample data, wherein the one or more sample data come from a communication device for collecting data, or the one or more sample data come from a communication device for storing data.

基于上述方案,第二通信装置可以通过接收一个或多个样本数据,以实现一个或多个样本数据的获取。Based on the above solution, the second communication device can obtain one or more sample data by receiving one or more sample data.

在第二方面的一种可能的实现方式中,该第二通信装置接收该一个或多个样本数据之前,该方法还包括:该第二通信装置发送用于请求该一个或多个样本数据的请求信息。In a possible implementation manner of the second aspect, before the second communication device receives the one or more sample data, the method further includes: the second communication device sending request information for requesting the one or more sample data.

基于上述方案,第二通信装置还可以发送用于请求该一个或多个样本数据的请求信息,以便于该请求信息的接收方能够基于该请求信息向第二通信装置提供该一个或多个样本数据。Based on the above solution, the second communication device may further send request information for requesting the one or more sample data, so that the recipient of the request information can provide the one or more sample data to the second communication device based on the request information.

在第二方面的一种可能的实现方式中，该方法应用于存储数据的通信装置或收集数据的通信装置(即该第二通信装置可以用于存储样本数据或收集样本数据)；该第二通信装置发送该一个或多个样本数据，包括：该第二通信装置向用于缓存数据的通信装置发送该一个或多个样本数据。In a possible implementation of the second aspect, the method is applied to a communication device for storing data or a communication device for collecting data (that is, the second communication device can be used to store sample data or collect sample data); the second communication device sending the one or more sample data includes: the second communication device sending the one or more sample data to a communication device for caching data.

基于上述方案，第二通信装置可以向用于缓存数据的通信装置发送该一个或多个样本数据，以使得样本数据的接收方能够实现样本数据的缓存；之后，该样本数据的接收方能够向部署该第一神经网络模型的通信装置发送该一个或多个样本数据，以实现神经网络模型的推理。Based on the above solution, the second communication device can send the one or more sample data to the communication device used for caching data, so that the recipient of the sample data can cache the sample data; afterwards, the recipient of the sample data can send the one or more sample data to the communication device on which the first neural network model is deployed, to enable inference of the neural network model.

在第二方面的一种可能的实现方式中,该第二通信装置发送该一个或多个样本数据之前,该方法还包括:该第二通信装置接收用于请求该一个或多个样本数据的请求信息。In a possible implementation manner of the second aspect, before the second communication device sends the one or more sample data, the method further includes: the second communication device receiving request information for requesting the one or more sample data.

基于上述方案,第二通信装置还可以接收用于请求该一个或多个样本数据的请求信息,以便于该第二通信装置能够基于该请求信息提供该一个或多个样本数据。Based on the above solution, the second communication device may also receive request information for requesting the one or more sample data, so that the second communication device can provide the one or more sample data based on the request information.

在第二方面的一种可能的实现方式中,满足以下至少一项时,该第二通信装置发送一个或多个样本数据,包括:In a possible implementation manner of the second aspect, when at least one of the following items is met, the second communication device sends one or more sample data, including:

该第一神经网络模型的推理性能低于阈值;The inference performance of the first neural network model is below a threshold;

该推理数据的数据分布与该第一神经网络模型的前k次输入的推理数据的数据分布的变化大于阈值，k为正整数；The change between the data distribution of the inference data and the data distribution of the inference data input in the previous k inputs of the first neural network model is greater than a threshold, where k is a positive integer;

部署该第一神经网络模型的通信装置的通信状态发生变化。The communication state of the communication device deploying the first neural network model changes.

基于上述方案,在满足上述至少一项的情况下,可以确定当前第一神经网络模型基于推理数据得到推理结果的性能较差。为此,该第二通信装置可以发送该一个或多个样本数据,使得该一个或多个样本数据的接收方能够在第一神经网络模型的输入中增加样本数据,例如第一通信装置可以基于该第一神经网络模型处理该一个或多个样本数据和推理数据,以得到该推理数据对应的推理结果。Based on the above solution, when at least one of the above items is met, it can be determined that the current first neural network model has poor performance in obtaining an inference result based on the inference data. To this end, the second communication device can send the one or more sample data so that the recipient of the one or more sample data can add the sample data to the input of the first neural network model. For example, the first communication device can process the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data.
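As an illustrative aid only (not a limitation of the claimed method), the following minimal Python sketch shows one possible way a device could evaluate the three trigger conditions listed above before sending sample data; the function name, threshold values, and the use of a simple mean-shift statistic for the data-distribution change are all assumptions made for illustration.

```python
def should_send_samples(perf, perf_threshold,
                        current_dist_stat, recent_dist_stats,
                        dist_threshold, state_changed):
    """Return True if at least one trigger condition holds (illustrative sketch only).

    perf: measured inference performance of the first neural network model.
    current_dist_stat: a scalar statistic (e.g. mean) of the current inference data.
    recent_dist_stats: statistics of the inference data of the previous k inputs.
    state_changed: whether the communication state of the deploying device changed.
    """
    # Condition 1: inference performance of the first neural network model below a threshold.
    if perf < perf_threshold:
        return True
    # Condition 2: distribution change versus the previous k inputs exceeds a threshold
    # (approximated here by the shift of a simple summary statistic).
    if recent_dist_stats:
        avg_recent = sum(recent_dist_stats) / len(recent_dist_stats)
        if abs(current_dist_stat - avg_recent) > dist_threshold:
            return True
    # Condition 3: the communication state of the deploying device has changed.
    return state_changed
```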

可选地,在不满足上述至少一项时,可以确定当前第一神经网络模型基于推理数据得到推理结果的性能较优。相应的,该第二通信装置可以不向第一通信装置发送该一个或多个样本数据,即第一神经网络模型的输入可以不包括样本数据,以降低开销。Optionally, when at least one of the above conditions is not satisfied, it may be determined that the current first neural network model has a better performance in obtaining an inference result based on the inference data. Accordingly, the second communication device may not send the one or more sample data to the first communication device, that is, the input of the first neural network model may not include the sample data, to reduce overhead.

需要说明的是,第二通信装置可以在本地确定上述至少一项是否满足,即该第二通信装置可以基于本地确定的结果触发一个或多个样本数据的发送。或者,第二通信装置可以基于其它通信装置的指示,以确定上述至少一项是否满足,即该第二通信装置可以基于其它通信装置的指示确定发送一个或多个样本数据的触发条件是否满足。或者,第二通信装置可以基于其它通信装置的指示触发一个或多个样本数据的发送。可选地,该其它通信装置可以包括后文描述的管理模块和/或数据存储模块等。It should be noted that the second communication device can locally determine whether at least one of the above items is satisfied, that is, the second communication device can trigger the transmission of one or more sample data based on the result of the local determination. Alternatively, the second communication device can determine whether at least one of the above items is satisfied based on the instructions of other communication devices, that is, the second communication device can determine whether the triggering condition for sending one or more sample data is satisfied based on the instructions of other communication devices. Alternatively, the second communication device can trigger the transmission of one or more sample data based on the instructions of other communication devices. Optionally, the other communication device may include the management module and/or data storage module described later.

在第二方面的一种可能的实现方式中,该方法还包括:该第二通信装置更新该一个或多个样本数据。In a possible implementation manner of the second aspect, the method further includes: the second communication device updating the one or more sample data.

基于上述方案,第二通信装置还可以更新该一个或多个样本数据,以期通过更新后的样本数据提升神经网络模型的处理性能。Based on the above solution, the second communication device can also update the one or more sample data in order to improve the processing performance of the neural network model through the updated sample data.

在第二方面的一种可能的实现方式中,满足以下至少一项时,该第二通信装置更新该一个或多个样本数据,包括:In a possible implementation manner of the second aspect, when at least one of the following conditions is met, the second communication apparatus updates the one or more sample data, including:

该推理数据对应的推理结果对应的推理性能满足第一条件;The reasoning performance corresponding to the reasoning result corresponding to the reasoning data satisfies the first condition;

该推理数据的数据分布满足第二条件;The data distribution of the inference data satisfies the second condition;

部署该第一神经网络模型的通信装置的通信状态发生变化。The communication state of the communication device deploying the first neural network model changes.

基于上述方案,在满足上述至少一项的情况下,可以确定当前的一个或多个样本数据实现的性能较差。为此,该第二通信装置可以更新该一个或多个样本数据,以期通过更新后的样本数据提升神经网络模型的处理性能。 Based on the above solution, when at least one of the above items is met, it can be determined that the performance achieved by the current one or more sample data is poor. To this end, the second communication device can update the one or more sample data in order to improve the processing performance of the neural network model through the updated sample data.

可选地,更新一个或多个样本数据,可以包括,增加样本数据、减少样本数据或替换样本数据等。Optionally, updating one or more sample data may include adding sample data, reducing sample data, or replacing sample data.

本申请第三方面提供了一种通信装置,该装置为第一通信装置,该装置包括处理单元;该处理单元用于获取一个或多个样本数据;该处理单元还用于基于第一神经网络模型处理该一个或多个样本数据和推理数据,得到该推理数据对应的推理结果;其中,该推理结果与该样本数据的至少一个数据特征是相同的。The third aspect of the present application provides a communication device, which is a first communication device and includes a processing unit; the processing unit is used to obtain one or more sample data; the processing unit is also used to process the one or more sample data and inference data based on a first neural network model to obtain an inference result corresponding to the inference data; wherein the inference result is the same as at least one data feature of the sample data.

本申请第三方面中,通信装置的组成模块还可以用于执行第一方面的各个可能实现方式中所执行的步骤,并实现相应的技术效果,具体均可以参阅第一方面,此处不再赘述。In the third aspect of the present application, the constituent modules of the communication device can also be used to execute the steps performed in each possible implementation method of the first aspect and achieve corresponding technical effects. For details, please refer to the first aspect and will not be repeated here.

本申请第四方面提供了一种通信装置,该装置为第二通信装置,该装置包括收发单元和处理单元,该处理单元用于获取一个或多个样本数据;其中,该一个或多个样本数据和推理数据用于经过第一神经网络模型的处理,得到该推理数据对应的推理结果,该推理结果与该样本数据的至少一个数据特征是相同的;该收发单元用于发送该一个或多个样本数据。The fourth aspect of the present application provides a communication device, which is a second communication device, and includes a transceiver unit and a processing unit, the processing unit being used to obtain one or more sample data; wherein the one or more sample data and inference data are used to be processed by a first neural network model to obtain an inference result corresponding to the inference data, and the inference result is the same as at least one data feature of the sample data; the transceiver unit is used to send the one or more sample data.

本申请第四方面中,通信装置的组成模块还可以用于执行第二方面的各个可能实现方式中所执行的步骤,并实现相应的技术效果,具体均可以参阅第二方面,此处不再赘述。In the fourth aspect of the present application, the constituent modules of the communication device can also be used to execute the steps performed in each possible implementation method of the second aspect and achieve corresponding technical effects. For details, please refer to the second aspect and will not be repeated here.

本申请第五方面提供了一种通信装置,包括至少一个处理器,所述至少一个处理器与存储器耦合;该存储器用于存储程序或指令;该至少一个处理器用于执行该程序或指令,以使该装置实现前述第一方面至第二方面任一方面中的任意一种可能的实现方式所述的方法。In a fifth aspect, the present application provides a communication device, comprising at least one processor, wherein the at least one processor is coupled to a memory; the memory is used to store programs or instructions; the at least one processor is used to execute the program or instructions so that the device implements the method described in any possible implementation method of any one of the first to second aspects.

本申请第六方面提供了一种通信装置,包括至少一个逻辑电路和输入输出接口;该逻辑电路用于执行如前述第一方面至第二方面任一方面中的任意一种可能的实现方式所述的方法。In a sixth aspect, the present application provides a communication device comprising at least one logic circuit and an input/output interface; the logic circuit is used to execute the method described in any possible implementation of any one of the first to second aspects.

本申请第七方面提供了一种通信系统,该通信系统包括上述第一通信装置以及第二通信装置。In a seventh aspect, the present application provides a communication system, which includes the above-mentioned first communication device and second communication device.

本申请第八方面提供一种计算机可读存储介质,该存储介质用于存储一个或多个计算机执行指令,当计算机执行指令被处理器执行时,该处理器执行如上述第一方面至第二方面中任一方面的任意一种可能的实现方式所述的方法。In an eighth aspect, the present application provides a computer-readable storage medium for storing one or more computer-executable instructions. When the computer-executable instructions are executed by a processor, the processor executes the method described in any possible implementation of any one of the first to second aspects above.

本申请第九方面提供一种计算机程序产品(或称计算机程序),当计算机程序产品中的计算机程序被该处理器执行时,该处理器执行上述第一方面至第二方面中任一方面的任意一种可能的实现方式所述的方法。In a ninth aspect, the present application provides a computer program product (or computer program). When the computer program in the computer program product is executed by the processor, the processor executes the method described in any possible implementation of any one of the first to second aspects above.

本申请第十方面提供了一种芯片系统,该芯片系统包括至少一个处理器,用于支持通信装置实现上述第一方面至第二方面中任一方面的任意一种可能的实现方式所述的方法。In a tenth aspect, the present application provides a chip system comprising at least one processor for supporting a communication device to implement the method described in any possible implementation of any one of the first to second aspects.

在一种可能的设计中，该芯片系统还可以包括存储器，该存储器用于保存该通信装置必要的程序指令和数据。该芯片系统，可以由芯片构成，也可以包含芯片和其他分立器件。可选的，所述芯片系统还包括接口电路，所述接口电路为所述至少一个处理器提供程序指令和/或数据。In one possible design, the chip system may further include a memory for storing the program instructions and data necessary for the communication device. The chip system may be composed of a chip, or may include a chip and other discrete components. Optionally, the chip system further includes an interface circuit that provides program instructions and/or data to the at least one processor.

其中,第三方面至第十方面中任一种设计方式所带来的技术效果可参见上述第一方面至第二方面中不同设计方式所带来的技术效果,在此不再赘述。Among them, the technical effects brought about by any design method in the third to tenth aspects can refer to the technical effects brought about by the different design methods in the above-mentioned first to second aspects, and will not be repeated here.

附图说明BRIEF DESCRIPTION OF THE DRAWINGS

图1a至图1c为本申请提供的通信系统的示意图;Figures 1a to 1c are schematic diagrams of a communication system provided by this application;

图1d、图1e以及图2a至图2e为本申请涉及的AI处理过程的示意图;Figures 1d, 1e, and 2a to 2e are schematic diagrams of the AI processing process involved in this application;

图3为本申请提供的通信方法的一个交互示意图;FIG3 is an interactive schematic diagram of the communication method provided by this application;

图4a至图4d为本申请提供的神经网络模型的处理过程的示意图;Figures 4a to 4d are schematic diagrams of the processing process of the neural network model provided by this application;

图5为本申请提供的通信方法的应用场景示意图;FIG5 is a schematic diagram of an application scenario of the communication method provided in this application;

图6a至图6d为本申请提供的通信方法的应用场景示意图;Figures 6a to 6d are schematic diagrams of application scenarios of the communication method provided by this application;

图7至图11为本申请提供的通信装置的示意图。7 to 11 are schematic diagrams of the communication device provided in this application.

具体实施方式DETAILED DESCRIPTION

首先,对本申请实施例中的部分用语进行解释说明,以便于本领域技术人员理解。First, some of the terms used in the embodiments of the present application are explained to facilitate understanding by those skilled in the art.

(1)终端设备:可以是能够接收网络设备调度和指示信息的无线终端设备,无线终端设备可以是指向用户提供语音和/或数据连通性的设备,或具有无线连接功能的手持式设备,或连接到无线调制解调器的其他处理设备。 (1) Terminal device: It can be a wireless terminal device that can receive network device scheduling and instruction information. The wireless terminal device can be a device that provides voice and/or data connectivity to the user, or a handheld device with wireless connection function, or other processing device connected to a wireless modem.

终端设备可以经无线接入网(radio access network,RAN)与一个或多个核心网或者互联网进行通信,终端设备可以是移动终端设备,如移动电话(或称为“蜂窝”电话,手机(mobile phone))、计算机和数据卡,例如,可以是便携式、袖珍式、手持式、计算机内置的或者车载的移动装置,它们与无线接入网交换语音和/或数据。例如,个人通信业务(personal communication service,PCS)电话、无绳电话、会话发起协议(SIP)话机、无线本地环路(wireless local loop,WLL)站、个人数字助理(personal digital assistant,PDA)、平板电脑(Pad)、带无线收发功能的电脑等设备。无线终端设备也可以称为系统、订户单元(subscriber unit)、订户站(subscriber station),移动站(mobile station)、移动台(mobile station,MS)、远程站(remote station)、接入点(access point,AP)、远程终端设备(remote terminal)、接入终端设备(access terminal)、用户终端设备(user terminal)、用户代理(user agent)、用户站(subscriber station,SS)、用户端设备(customer premises equipment,CPE)、终端(terminal)、用户设备(user equipment,UE)、移动终端(mobile terminal,MT)等。Terminal devices can communicate with one or more core networks or the Internet via a radio access network (RAN). Terminal devices can be mobile terminal devices, such as mobile phones (also known as "cellular" phones, mobile phones), computers, and data cards. For example, they can be portable, pocket-sized, handheld, computer-built-in, or vehicle-mounted mobile devices that exchange voice and/or data with the radio access network. For example, personal communication service (PCS) phones, cordless phones, Session Initiation Protocol (SIP) phones, wireless local loop (WLL) stations, personal digital assistants (PDAs), tablet computers, computers with wireless transceiver capabilities, and other devices. Wireless terminal equipment can also be called system, subscriber unit, subscriber station, mobile station, mobile station (MS), remote station, access point (AP), remote terminal equipment (remote terminal), access terminal equipment (access terminal), user terminal equipment (user terminal), user agent (user agent), subscriber station (SS), customer premises equipment (CPE), terminal, user equipment (UE), mobile terminal (MT), etc.

作为示例而非限定,在本申请实施例中,该终端设备还可以是可穿戴设备。可穿戴设备也可以称为穿戴式智能设备或智能穿戴式设备等,是应用穿戴式技术对日常穿戴进行智能化设计、开发出可以穿戴的设备的总称,如眼镜、手套、手表、服饰及鞋等。可穿戴设备即直接穿在身上,或是整合到用户的衣服或配件的一种便携式设备。可穿戴设备不仅仅是一种硬件设备,更是通过软件支持以及数据交互、云端交互来实现强大的功能。广义穿戴式智能设备包括功能全、尺寸大、可不依赖智能手机实现完整或者部分的功能,例如:智能手表或智能眼镜等,以及只专注于某一类应用功能,需要和其它设备如智能手机配合使用,如各类进行体征监测的智能手环、智能头盔、智能首饰等。As an example and not a limitation, in the embodiments of the present application, the terminal device may also be a wearable device. Wearable devices may also be referred to as wearable smart devices or smart wearable devices, etc., which are a general term for wearable devices that are intelligently designed and developed using wearable technology for daily wear, such as glasses, gloves, watches, clothing, and shoes. A wearable device is a portable device that is worn directly on the body or integrated into the user's clothes or accessories. Wearable devices are not only hardware devices, but also achieve powerful functions through software support, data interaction, and cloud interaction. Broadly speaking, wearable smart devices include those that are fully functional, large in size, and can achieve complete or partial functions without relying on smartphones, such as smart watches or smart glasses, etc., as well as those that only focus on a certain type of application function and need to be used in conjunction with other devices such as smartphones, such as various smart bracelets, smart helmets, and smart jewelry for vital sign monitoring.

终端还可以是无人机、机器人、设备到设备通信(device-to-device,D2D)中的终端、车到一切(vehicle to everything,V2X)中的终端、虚拟现实(virtual reality,VR)终端设备、增强现实(augmented reality,AR)终端设备、工业控制(industrial control)中的无线终端、无人驾驶(self driving)中的无线终端、远程医疗(remote medical)中的无线终端、智能电网(smart grid)中的无线终端、运输安全(transportation safety)中的无线终端、智慧城市(smart city)中的无线终端、智慧家庭(smart home)中的无线终端等。The terminal can also be a drone, a robot, a terminal in device-to-device (D2D) communication, a terminal in vehicle to everything (V2X), a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical, a wireless terminal in smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, etc.

此外,终端设备也可以是第五代(5th generation,5G)通信系统之后演进的通信系统(例如第六代(6th generation,6G)通信系统等)中的终端设备或者未来演进的公共陆地移动网络(public land mobile network,PLMN)中的终端设备等。示例性的,6G网络可以进一步扩展5G通信终端的形态和功能,6G终端包括但不限于车、蜂窝网络终端(融合卫星终端功能)、无人机、物联网(internet of things,IoT)设备。Furthermore, the terminal device may also be a terminal device in a communication system that evolves beyond the fifth-generation (5G) communication system (e.g., a sixth-generation (6G) communication system) or a terminal device in a future-evolved public land mobile network (PLMN). For example, a 6G network may further extend the form and functionality of 5G communication terminals. 6G terminals include, but are not limited to, vehicles, cellular network terminals (with integrated satellite terminal functionality), drones, and Internet of Things (IoT) devices.

在本申请实施例中,上述终端设备还可以获得网络设备提供的AI服务。可选地,终端设备还可以具有AI处理能力。In an embodiment of the present application, the terminal device may also obtain AI services provided by the network device. Optionally, the terminal device may also have AI processing capabilities.

(2)网络设备:可以是无线网络中的设备,例如网络设备可以为将终端设备接入到无线网络的RAN节点(或设备),又可以称为基站。目前,一些RAN设备的举例为:基站(base station)、演进型基站(evolved NodeB,eNodeB)、5G通信系统中的基站gNB(gNodeB)、传输接收点(transmission reception point,TRP)、演进型节点B(evolved Node B,eNB)、无线网络控制器(radio network controller,RNC)、节点B(Node B,NB)、家庭基站(例如,home evolved Node B,或home Node B,HNB)、基带单元(base band unit,BBU),或无线保真(wireless fidelity,Wi-Fi)接入点AP等。另外,在一种网络结构中,网络设备可以包括集中单元(centralized unit,CU)节点、或分布单元(distributed unit,DU)节点、或包括CU节点和DU节点的RAN设备。(2) Network equipment: It can be a device in a wireless network. For example, a network device can be a RAN node (or device) that connects a terminal device to a wireless network, which can also be called a base station. Currently, some examples of RAN equipment are: base station, evolved NodeB (eNodeB), gNB (gNodeB) in a 5G communication system, transmission reception point (TRP), evolved NodeB (eNB), radio network controller (RNC), NodeB (NB), home base station (e.g., home evolved NodeB, or home NodeB, HNB), baseband unit (BBU), or wireless fidelity (Wi-Fi) access point AP, etc. In addition, in a network structure, the network equipment can include a centralized unit (CU) node, a distributed unit (DU) node, or a RAN device including a CU node and a DU node.

可选的,RAN节点还可以是宏基站、微基站或室内站、中继节点或施主节点、或者是云无线接入网络(cloud radio access network,CRAN)场景下的无线控制器。RAN节点还可以是服务器,可穿戴设备,车辆或车载设备等。例如,车辆外联(vehicle to everything,V2X)技术中的接入网设备可以为路侧单元(road side unit,RSU)。Alternatively, a RAN node can be a macro base station, micro base station, indoor base station, relay node, donor node, or wireless controller in a cloud radio access network (CRAN) scenario. A RAN node can also be a server, wearable device, vehicle, or onboard device. For example, the access network device in vehicle-to-everything (V2X) technology can be a roadside unit (RSU).

在另一种可能的场景中,由多个RAN节点协作协助终端实现无线接入,不同RAN节点分别实现基站的部分功能。例如,RAN节点可以是集中式单元(central unit,CU),分布式单元(distributed unit,DU),CU-控制面(control plane,CP),CU-用户面(user plane,UP),或者无线单元(radio unit,RU)等。CU和DU可以是单独设置,或者也可以包括在同一个网元中,例如基带单元(baseband  unit,BBU)中。RU可以包括在射频设备或者射频单元中,例如包括在射频拉远单元(remote radio unit,RRU)、有源天线处理单元(active antenna unit,AAU)或远程射频头(remote radio head,RRH)中。In another possible scenario, multiple RAN nodes collaborate to assist the terminal in achieving wireless access, and different RAN nodes implement part of the functions of the base station. For example, the RAN node can be a centralized unit (CU), a distributed unit (DU), a CU-control plane (CP), a CU-user plane (UP), or a radio unit (RU). The CU and DU can be set up separately, or they can be included in the same network element, such as the baseband unit. The RU may be included in a radio frequency device or a radio frequency unit, for example, a remote radio unit (RRU), an active antenna unit (AAU) or a remote radio head (RRH).

在不同系统中,CU(或CU-CP和CU-UP)、DU或RU也可以有不同的名称,但是本领域的技术人员可以理解其含义。例如,在开放式接入网(open RAN,O-RAN或ORAN)系统中,CU也可以称为O-CU(开放式CU),DU也可以称为O-DU,CU-CP也可以称为O-CU-CP,CU-UP也可以称为O-CU-UP,RU也可以称为O-RU。为描述方便,本申请中以CU,CU-CP,CU-UP、DU和RU为例进行描述。本申请中的CU(或CU-CP、CU-UP)、DU和RU中的任一单元,可以是通过软件模块、硬件模块、或者软件模块与硬件模块结合来实现。In different systems, CU (or CU-CP and CU-UP), DU or RU may have different names, but those skilled in the art can understand their meanings. For example, in an open access network (open RAN, O-RAN or ORAN) system, CU may also be called O-CU (open CU), DU may also be called O-DU, CU-CP may also be called O-CU-CP, CU-UP may also be called O-CU-UP, and RU may also be called O-RU. For the convenience of description, this application uses CU, CU-CP, CU-UP, DU and RU as examples for description. Any unit among the CU (or CU-CP, CU-UP), DU and RU in this application can be implemented by a software module, a hardware module, or a combination of a software module and a hardware module.

接入网设备和终端设备之间的通信遵循一定的协议层结构。该协议层可以包括控制面协议层和用户面协议层。控制面协议层可以包括以下至少一项:无线资源控制(radio resource control,RRC)层、分组数据汇聚层协议(packet data convergence protocol,PDCP)层、无线链路控制(radio link control,RLC)层、媒体接入控制(media access control,MAC)层、或物理(physical,PHY)层等。用户面协议层可以包括以下至少一项:业务数据适配协议(service data adaptation protocol,SDAP)层、PDCP层、RLC层、MAC层、或物理层等。Communication between access network equipment and terminal devices follows a specific protocol layer structure. This protocol layer may include a control plane protocol layer and a user plane protocol layer. The control plane protocol layer may include at least one of the following: radio resource control (RRC) layer, packet data convergence protocol (PDCP) layer, radio link control (RLC) layer, media access control (MAC) layer, or physical (PHY) layer. The user plane protocol layer may include at least one of the following: service data adaptation protocol (SDAP) layer, PDCP layer, RLC layer, MAC layer, or physical layer.

对于ORAN系统中的网元及其可实现的协议层功能对应关系,可参照下表1。For the correspondence between network elements in the ORAN system and their achievable protocol layer functions, please refer to Table 1 below.

表1
Table 1

网络设备可以是其它为终端设备提供无线通信功能的装置。为方便描述，本申请的实施例对网络设备所采用的具体技术和具体设备形态不做限定。The network device may be any other apparatus that provides a wireless communication function for the terminal device. For ease of description, the embodiments of this application do not limit the specific technology and specific device form used by the network device.

网络设备还可以包括核心网设备,核心网设备例如包括第四代(4th generation,4G)网络中的移动性管理实体(mobility management entity,MME),归属用户服务器(home subscriber server,HSS),服务网关(serving gateway,S-GW),策略和计费规则功能(policy and charging rules function,PCRF),公共数据网网关(public data network gateway,PDN gateway,P-GW);5G网络中的访问和移动管理功能(access and mobility management function,AMF)、用户面功能(user plane function,UPF)或会话管理功能(session management function,SMF)等网元。此外,该核心网设备还可以包括5G网络以及5G网络的下一代网络中的其他核心网设备。The network equipment may also include core network equipment, such as the mobility management entity (MME), home subscriber server (HSS), serving gateway (S-GW), policy and charging rules function (PCRF), and public data network gateway (PDN gateway, P-GW) in the fourth generation (4G) network; and the access and mobility management function (AMF), user plane function (UPF), or session management function (SMF) in the 5G network. In addition, the core network equipment may also include other core network equipment in the 5G network and the next generation network of the 5G network.

本申请实施例中,上述网络设备还可以具有AI能力的网络节点,可以为终端或其他网络设备提供AI服务,例如,可以为网络侧(接入网或核心网)的AI节点、算力节点、具有AI能力的RAN节点、具有AI能力的核心网网元等。In an embodiment of the present application, the above-mentioned network device may also have a network node with AI capabilities, which can provide AI services for terminals or other network devices. For example, it can be an AI node on the network side (access network or core network), a computing power node, a RAN node with AI capabilities, a core network element with AI capabilities, etc.

本申请实施例中,用于实现网络设备的功能的装置可以是网络设备,也可以是能够支持网络设备实现该功能的装置,例如芯片系统,该装置可以被安装在网络设备中。在本申请实施例提供的技术方案中,以用于实现网络设备的功能的装置是网络设备为例,描述本申请实施例提供的技术方案。In the embodiments of the present application, the apparatus for implementing the function of the network device may be the network device, or may be a device capable of supporting the network device in implementing the function, such as a chip system, which may be installed in the network device. In the technical solutions provided in the embodiments of the present application, the technical solutions provided in the embodiments of the present application are described by taking the network device as an example.

(3)配置与预配置:在本申请中,会同时用到配置与预配置。其中,配置是指网络设备/服务器通过消息或信令将一些参数的配置信息或参数的取值发送给终端,以便终端根据这些取值或信息来确定通信的参数或传输时的资源。预配置与配置类似,可以是网络设备/服务器预先与终端设备协商好的参数信息或参数值,也可以是标准协议规定的基站/网络设备或终端设备采用的参数信息或参数值,还可以是预先存储在基站/服务器或终端设备的参数信息或参数值。本申请对此不做限定。(3) Configuration and pre-configuration: In this application, configuration and pre-configuration are used simultaneously. Configuration refers to the network device/server sending some parameter configuration information or parameter values to the terminal through messages or signaling, so that the terminal can determine the communication parameters or resources during transmission based on these values or information. Pre-configuration is similar to configuration, and can be parameter information or parameter values pre-negotiated between the network device/server and the terminal device, or parameter information or parameter values used by the base station/network device or terminal device as specified in the standard protocol, or parameter information or parameter values pre-stored in the base station/server or terminal device. This application does not limit this.

进一步地,这些取值和参数,是可以变化或更新的。Furthermore, these values and parameters can be changed or updated.

(4)本申请实施例中的术语“系统”和“网络”可被互换使用。“多个”是指两个或两个以上。“和/或”，描述关联对象的关联关系，表示可以存在三种关系，例如，A和/或B，可以表示：单独存在A、同时存在A和B、单独存在B的情况，其中A,B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。“以下至少一项(个)”或其类似表达，是指这些项中的任意组合，包括单项(个)或复数项(个)的任意组合。例如“A,B和C中的至少一项”包括A,B,C,AB,AC,BC或ABC。以及，除非有特别说明，本申请实施例提及“第一”、“第二”等序数词是用于对多个对象进行区分，不用于限定多个对象的顺序、时序、优先级或者重要程度。(4) The terms "system" and "network" in the embodiments of the present application can be used interchangeably. "Multiple" refers to two or more. "And/or" describes the association relationship of associated objects, indicating that three relationships may exist. For example, A and/or B can represent: the existence of A alone, the existence of A and B at the same time, and the existence of B alone, where A and B can be singular or plural. The character "/" generally indicates that the previous and next associated objects are in an "or" relationship. "At least one of the following items" or a similar expression refers to any combination of these items, including any combination of a single item or a plurality of items. For example, "at least one of A, B, and C" includes A, B, C, AB, AC, BC, or ABC. And, unless otherwise specified, ordinal numbers such as "first" and "second" mentioned in the embodiments of the present application are used to distinguish multiple objects and are not used to limit the order, timing, priority, or importance of multiple objects.

(5)本申请实施例中的“发送”和“接收”,表示信号传递的走向。例如,“向XX发送信息”可以理解为该信息的目的端是XX,可以包括通过空口直接发送,也包括其他单元或模块通过空口间接发送。“接收来自YY的信息”可以理解为该信息的源端是YY,可以包括通过空口直接从YY接收,也可以包括通过空口从其他单元或模块间接地从YY接收。“发送”也可以理解为芯片接口的“输出”,“接收”也可以理解为芯片接口的“输入”。(5) “Sending” and “receiving” in the embodiments of the present application indicate the direction of signal transmission. For example, “sending information to XX” can be understood as the destination of the information being XX, which can include direct sending through the air interface, as well as indirect sending through the air interface by other units or modules. “Receiving information from YY” can be understood as the source of the information being YY, which can include direct receiving from YY through the air interface, as well as indirect receiving from YY through the air interface from other units or modules. “Sending” can also be understood as the “output” of the chip interface, and “receiving” can also be understood as the “input” of the chip interface.

换言之,发送和接收可以是在设备之间进行的,例如,网络设备和终端设备之间进行的,也可以是在设备内进行的,例如,通过总线、走线或接口在设备内的部件之间、模组之间、芯片之间、软件模块或者硬件模块之间发送或接收。In other words, sending and receiving can be performed between devices, for example, between a network device and a terminal device, or can be performed within a device, for example, sending or receiving between components, modules, chips, software modules or hardware modules within the device through a bus, wiring or interface.

可以理解的是,信息在信息发送的源端和目的端之间可能会被进行必要的处理,比如编码、调制等,但目的端可以理解来自源端的有效信息。本申请中类似的表述可以做相似的理解,不再赘述。It is understandable that information may be processed between the source and destination of information transmission, such as coding, modulation, etc., but the destination can understand the valid information from the source. Similar expressions in this application can be understood similarly and will not be repeated.

(6)在本申请实施例中,“指示”可以包括直接指示和间接指示,也可以包括显式指示和隐式指示。将某一信息(如下文所述的指示信息)所指示的信息称为待指示信息,则具体实现过程中,对待指示信息进行指示的方式有很多种,例如但不限于,可以直接指示待指示信息,如待指示信息本身或者该待指示信息的索引等。也可以通过指示其他信息来间接指示待指示信息,其中该其他信息与待指示信息之间存在关联关系;还可以仅仅指示待指示信息的一部分,而待指示信息的其他部分则是已知的或者提前约定的,例如可以借助预先约定(例如协议预定义)的各个信息的排列顺序来实现对特定信息的指示,从而在一定程度上降低指示开销。本申请对于指示的具体方式不作限定。可以理解的是,对于该指示信息的发送方来说,该指示信息可用于指示待指示信息,对于指示信息的接收方来说,该指示信息可用于确定待指示信息。(6) In the embodiments of the present application, "indication" may include direct indication and indirect indication, and may also include explicit indication and implicit indication. The information indicated by a certain information (such as the indication information described below) is called information to be indicated. In the specific implementation process, there are many ways to indicate the information to be indicated, such as but not limited to, directly indicating the information to be indicated, such as the information to be indicated itself or the index of the information to be indicated. The information to be indicated may also be indirectly indicated by indicating other information, wherein the other information is associated with the information to be indicated; or only a part of the information to be indicated may be indicated, while the other part of the information to be indicated is known or agreed in advance. For example, the indication of specific information may be achieved by means of the arrangement order of each information agreed in advance (such as predefined by the protocol), thereby reducing the indication overhead to a certain extent. The present application does not limit the specific method of indication. It is understandable that for the sender of the indication information, the indication information can be used to indicate the information to be indicated, and for the receiver of the indication information, the indication information can be used to determine the information to be indicated.

本申请中,除特殊说明外,各个实施例之间相同或相似的部分可以互相参考。在本申请中各个实施例、以及各实施例中的各个方法/设计/实现方式中,如果没有特殊说明以及逻辑冲突,不同的实施例之间、以及各实施例中的各个方法/设计/实现方式之间的术语和/或描述具有一致性、且可以相互引用,不同的实施例、以及各实施例中的各个方法/设计/实现方式中的技术特征根据其内在的逻辑关系可以组合形成新的实施例、方法、或实现方式。以下所述的本申请实施方式并不构成对本申请保护范围的限定。In this application, unless otherwise specified, the same or similar parts between the various embodiments can refer to each other. In the various embodiments of this application, and the various methods/designs/implementations in each embodiment, if there is no special explanation and logical conflict, the terms and/or descriptions between different embodiments and the various methods/designs/implementations in each embodiment are consistent and can be referenced to each other. The technical features in different embodiments and the various methods/designs/implementations in each embodiment can be combined to form new embodiments, methods, or implementations according to their inherent logical relationships. The following description of the implementation methods of this application does not constitute a limitation on the scope of protection of this application.

本申请可以应用于长期演进(long term evolution,LTE)系统、新无线(new radio,NR)系统,或者是5G之后演进的通信系统(例如6G等)。其中,该通信系统中包括至少一个网络设备和/或至少一个终端设备。This application can be applied to a long-term evolution (LTE) system, a new radio (NR) system, or a communication system evolved after 5G (e.g., 6G). The communication system includes at least one network device and/or at least one terminal device.

请参阅图1a,为本申请中通信系统的一种示意图。图1a中,示例性的示出了一个网络设备和6个终端设备,6个终端设备分别为终端设备1、终端设备2、终端设备3、终端设备4、终端设备5以及终端设备6等。在图1a所示的示例中,是以终端设备1为智能茶杯,终端设备2为智能空调,终端设备3为智能加油机,终端设备4为交通工具,终端设备5为手机,终端设备6为打印机进行举例说明的。Please refer to Figure 1a, which is a schematic diagram of a communication system in this application. Figure 1a exemplarily illustrates a network device and six terminal devices, namely terminal device 1, terminal device 2, terminal device 3, terminal device 4, terminal device 5, and terminal device 6. In the example shown in Figure 1a, terminal device 1 is a smart teacup, terminal device 2 is a smart air conditioner, terminal device 3 is a smart gas pump, terminal device 4 is a vehicle, terminal device 5 is a mobile phone, and terminal device 6 is a printer.

如图1a所示,AI配置信息发送实体可以为网络设备。AI配置信息接收实体可以为终端设备1-终端设备6,此时,网络设备和终端设备1-终端设备6组成一个通信系统,在该通信系统中,终端设备1-终端设备6可以发送数据给网络设备,网络设备需要接收终端设备1-终端设备6发送的数据。同时,网络设备可以向终端设备1-终端设备6发送配置信息。As shown in Figure 1a, the AI configuration information sending entity can be a network device. The AI configuration information receiving entity can be terminal devices 1-6. In this case, the network device and terminal devices 1-6 form a communication system. In this communication system, terminal devices 1-6 can send data to the network device, and the network device needs to receive data sent by terminal devices 1-6. At the same time, the network device can send configuration information to terminal devices 1-6.

示例性的,在图1a中,终端设备4-终端设备6也可以组成一个通信系统。其中,终端设备5作为网络设备,即AI配置信息发送实体;终端设备4和终端设备6作为终端设备,即AI配置信息接收实体。例如车联网系统中,终端设备5分别向终端设备4和终端设备6发送AI配置信息,并且接收终端设备4和终端设备6发送的数据;相应的,终端设备4和终端设备6接收终端设备5发送的AI配置信息,并向终端设备5发送数据。For example, in Figure 1a, terminal devices 4 and 6 can also form a communication system. Terminal device 5 serves as a network device, i.e., the AI configuration information sending entity; terminal devices 4 and 6 serve as terminal devices, i.e., the AI configuration information receiving entities. For example, in a connected vehicle system, terminal device 5 sends AI configuration information to terminal devices 4 and 6, respectively, and receives data from them. Correspondingly, terminal devices 4 and 6 receive AI configuration information from terminal device 5 and send data to terminal device 5.

以图1a所示通信系统为例,不同的设备之间(包括网络设备与网络设备之间,网络设备与终端设备之间,和/或,终端设备和终端设备之间)除了执行通信相关业务之外,还有可能执行AI相关业务。Taking the communication system shown in Figure 1a as an example, in addition to executing communication-related services, different devices (including between network devices, between network devices and terminal devices, and/or between terminal devices) may also execute AI-related services.

如图1b所示,以网络设备为基站为例,基站可以与一个或多个终端设备之间可以执行通信相关业务和AI相关业务,不同终端设备之间也可以执行通信相关业务和AI相关业务。 As shown in Figure 1b, taking the network device as a base station as an example, the base station can perform communication-related services and AI-related services with one or more terminal devices, and different terminal devices can also perform communication-related services and AI-related services.

如图1c所示,以终端设备包括电视和手机为例,电视和手机之间也可以执行通信相关业务和AI相关业务。As shown in Figure 1c, taking the terminal devices including a TV and a mobile phone as an example, communication-related services and AI-related services can also be performed between the TV and the mobile phone.

本申请提供的技术方案可以应用于无线通信系统(例如图1a、图1b或图1c所示系统),例如本申请提供的通信系统中可以引入AI网元来实现部分或全部AI相关的操作。AI网元也可以称为AI节点、AI设备、AI实体、AI模块、AI模型、或AI单元等。所述AI网元可以是内置在通信系统的网元中。例如,AI网元可以是内置在:接入网设备、核心网设备、云服务器、或网管(operation,administration and maintenance,OAM)中的AI模块,用以实现AI相关的功能。所述OAM可以是作为核心网设备网管和/或作为接入网设备的网管。或者,所述AI网元也可以是通信系统中独立设置的网元。可选的,终端或终端内置的芯片中也可以包括AI实体,用于实现AI相关的功能。The technical solution provided in this application can be applied to wireless communication systems (such as the systems shown in Figures 1a, 1b, or 1c). For example, an AI network element can be introduced into the communication system provided in this application to implement some or all AI-related operations. The AI network element can also be called an AI node, AI device, AI entity, AI module, AI model, or AI unit, etc. The AI network element can be a network element built into the communication system. For example, the AI network element can be an AI module built into: an access network device, a core network device, a cloud server, or a network management (OAM) to implement AI-related functions. The OAM can be a network management for a core network device and/or a network management for an access network device. Alternatively, the AI network element can also be an independently set network element in the communication system. Optionally, the terminal or the chip built into the terminal can also include an AI entity to implement AI-related functions.

下面将本申请中可能涉及到的人工智能(artificial intelligence,AI)进行简要介绍。The following is a brief introduction to artificial intelligence (AI) that may be involved in this application.

人工智能(artificial intelligence,AI),可以让机器具有人类的智能,例如可以让机器应用计算机的软硬件来模拟人类某些智能行为。为了实现人工智能,可以采用机器学习方法。机器学习方法中,机器利用训练数据学习(或训练)得到模型。该模型表征了从输入到输出之间的映射。学习得到的模型可以用于进行推理(或预测),即可以利用该模型预测出给定输入所对应的输出。其中,该输出还可以称为推理结果(或预测结果)。Artificial intelligence (AI) can imbue machines with human intelligence. For example, it can enable machines to simulate certain intelligent human behaviors using computer hardware and software. Machine learning methods can be used to implement AI. In machine learning, a machine uses training data to learn (or train) a model. This model represents the mapping from input to output. The learned model can be used for inference (or prediction), meaning that the model can be used to predict the output corresponding to a given input. This output can also be called an inference result (or prediction result).

机器学习可以包括监督学习、无监督学习、和强化学习。其中，无监督学习还可以称为非监督学习。Machine learning can include supervised learning, unsupervised learning, and reinforcement learning. Among them, unsupervised learning may also be referred to as non-supervised learning.

监督学习依据已采集到的样本值和样本标签,利用机器学习算法学习样本值到样本标签的映射关系,并用AI模型来表达学到的映射关系。训练机器学习模型的过程就是学习这种映射关系的过程。在训练过程中,将样本值输入模型得到模型的预测值,通过计算模型的预测值与样本标签(理想值)之间的误差来优化模型参数。映射关系学习完成后,就可以利用学到的映射来预测新的样本标签。监督学习学到的映射关系可以包括线性映射或非线性映射。根据标签的类型可将学习的任务分为分类任务和回归任务。Supervised learning uses machine learning algorithms to learn the mapping relationship between sample values and sample labels based on collected sample values and sample labels, and then expresses this learned mapping relationship using an AI model. The process of training a machine learning model is the process of learning this mapping relationship. During training, sample values are input into the model to obtain the model's predicted values. The model parameters are optimized by calculating the error between the model's predicted values and the sample labels (ideal values). Once the mapping relationship is learned, the learned mapping can be used to predict new sample labels. The mapping relationship learned by supervised learning can include linear mappings or nonlinear mappings. Based on the type of label, the learning task can be divided into classification tasks and regression tasks.
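As a minimal sketch of the supervised-learning procedure described above (sample values in, predictions compared against sample labels, parameters adjusted to reduce the error), the following Python fragment fits a one-parameter linear mapping by gradient descent; the data, learning rate, and model form are illustrative assumptions only.

```python
# Toy supervised learning: learn w in y = w * x from (sample value, sample label) pairs.
samples = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # assumed (x, label) pairs
w = 0.0          # model parameter, arbitrarily initialized
lr = 0.01        # learning rate (step size)

for epoch in range(200):
    for x, label in samples:
        pred = w * x                 # model prediction for the sample value
        error = pred - label         # difference between prediction and ideal label
        w -= lr * error * x          # gradient step on the squared-error loss
print(w)  # approaches roughly 2, the mapping implied by the samples
```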

无监督学习依据采集到的样本值,利用算法自行发掘样本的内在模式。无监督学习中有一类算法将样本自身作为监督信号,即模型学习从样本到样本的映射关系,称为自监督学习。训练时,通过计算模型的预测值与样本本身之间的误差来优化模型参数。自监督学习可用于信号压缩及解压恢复的应用,常见的算法包括自编码器和对抗生成型网络等。Unsupervised learning uses algorithms to discover inherent patterns in collected sample values. One type of unsupervised learning algorithm uses the samples themselves as supervisory signals, meaning the model learns the mapping from one sample to another. This is called self-supervised learning. During training, the model parameters are optimized by calculating the error between the model's predictions and the samples themselves. Self-supervised learning can be used in signal compression and decompression recovery applications. Common algorithms include autoencoders and generative adversarial networks.

强化学习不同于监督学习,是一类通过与环境进行交互来学习解决问题的策略的算法。与监督、无监督学习不同,强化学习问题并没有明确的“正确的”动作标签数据,算法需要与环境进行交互,获取环境反馈的奖励信号,进而调整决策动作以获得更大的奖励信号数值。如下行功率控制中,强化学习模型根据无线网络反馈的系统总吞吐率,调整各个用户的下行发送功率,进而期望获得更高的系统吞吐率。强化学习的目标也是学习环境状态与较优(例如最优)决策动作之间的映射关系。但因为无法事先获得“正确动作”的标签,所以不能通过计算动作与“正确动作”之间的误差来优化网络。强化学习的训练是通过与环境的迭代交互而实现的。Reinforcement learning, unlike supervised learning, is a type of algorithm that learns problem-solving strategies through interaction with the environment. Unlike supervised and unsupervised learning, reinforcement learning problems lack explicit label data for "correct" actions. Instead, the algorithm must interact with the environment to obtain reward signals from the environment, and then adjust its decision-making actions to maximize the reward signal value. For example, in downlink power control, the reinforcement learning model adjusts the downlink transmit power of each user based on the overall system throughput fed back by the wireless network, hoping to achieve higher system throughput. The goal of reinforcement learning is also to learn the mapping between environmental states and optimal (e.g., optimal) decision-making actions. However, because the labels for "correct actions" cannot be obtained in advance, network optimization cannot be achieved by calculating the error between actions and "correct actions." Reinforcement learning training is achieved through iterative interaction with the environment.
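To illustrate the interaction loop described above (take an action, receive a reward signal from the environment, adjust the action to obtain a larger reward), here is a toy Python sketch that uses a simple reward-driven hill-climbing rule on a made-up reward function; it is not an actual reinforcement-learning or power-control algorithm, and the reward shape and step rule are assumptions.

```python
def reward(power):
    """Made-up environment feedback: a throughput-like reward peaking at power = 5."""
    return -(power - 5.0) ** 2

power, step = 0.0, 0.5
for _ in range(50):
    # Probe a slightly higher and a slightly lower action, keep the better-rewarded one.
    if reward(power + step) >= reward(power - step):
        power += step
    else:
        power -= step
print(power)  # drifts towards the action with the larger reward signal
```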

神经网络(neural network,NN)是机器学习技术中的一种具体的模型。根据通用近似定理,神经网络在理论上可以逼近任意连续函数,从而使得神经网络具备学习任意映射的能力。传统的通信系统需要借助丰富的专家知识来设计通信模块,而基于神经网络的深度学习通信系统可以从大量的数据集中自动发现隐含的模式结构,建立数据之间的映射关系,获得优于传统建模方法的性能。A neural network (NN) is a specific model in machine learning technology. According to the universal approximation theorem, NNs can theoretically approximate any continuous function, enabling them to learn arbitrary mappings. Traditional communication systems require extensive expert knowledge to design communication modules. However, deep learning communication systems based on neural networks can automatically discover implicit patterns in massive data sets and establish mapping relationships between data, achieving performance superior to traditional modeling methods.

神经网络的思想来源于大脑组织的神经元结构。例如,每个神经元都对其输入值进行加权求和运算,通过一个激活函数输出运算结果。The idea of a neural network is derived from the neuronal structure of the brain. For example, each neuron performs a weighted sum operation on its input values and outputs the result through an activation function.

如图1d所示，为神经元结构的一种示意图。假设神经元的输入为x=[x0,x1,…,xn]，与各个输入对应的权值分别为w=[w0,w1,…,wn]，其中，n为正整数，wi和xi可以是小数、整数(例如0、正整数或负整数等)、或复数等各种可能的类型。wi作为xi的权值，用于对xi进行加权。根据权值对输入值进行加权求和的偏置例如为b。激活函数的形式可以有多种，假设一个神经元的激活函数为：y=f(z)=max(0,z)，则该神经元的输出为：y=max(0, w0x0+w1x1+…+wnxn+b)。再例如，一个神经元的激活函数为：y=f(z)=z，则该神经元的输出为：y=w0x0+w1x1+…+wnxn+b。其中，b可以是小数、整数(例如0、正整数或负整数)、或复数等各种可能的类型。神经网络中不同神经元的激活函数可以相同或不同。As shown in Figure 1d, which is a schematic diagram of a neuron structure, assume that the input of the neuron is x=[x0,x1,…,xn] and the weights corresponding to the inputs are w=[w0,w1,…,wn], where n is a positive integer, and wi and xi can be of various possible types such as decimals, integers (for example, 0, positive integers, or negative integers), or complex numbers. wi, as the weight of xi, is used to weight xi. The bias used in the weighted summation of the input values according to the weights is, for example, b. The activation function can take many forms. Assuming the activation function of a neuron is y=f(z)=max(0,z), the output of the neuron is y=max(0, w0x0+w1x1+…+wnxn+b). For another example, if the activation function of a neuron is y=f(z)=z, the output of the neuron is y=w0x0+w1x1+…+wnxn+b. Here, b can be a decimal, an integer (for example, 0, a positive integer, or a negative integer), or a complex number, etc. The activation functions of different neurons in a neural network can be the same or different.
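The weighted-sum-plus-activation computation of a single neuron described above can be sketched in a few lines of Python; the input values, weights, and bias below are arbitrary illustrative numbers.

```python
def neuron_output(x, w, b, activation=lambda z: max(0.0, z)):
    """Weighted sum of the inputs plus bias, passed through an activation function."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return activation(z)

# Example with the activation y = f(z) = max(0, z):
print(neuron_output(x=[1.0, -2.0, 0.5], w=[0.3, 0.8, -0.1], b=0.05))
# With the identity activation y = f(z) = z, the output is simply the weighted sum plus b:
print(neuron_output(x=[1.0, -2.0, 0.5], w=[0.3, 0.8, -0.1], b=0.05,
                    activation=lambda z: z))
```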

此外,神经网络一般包括多个层,每层可包括一个或多个神经元。通过增加神经网络的深度和/或宽度,能够提高该神经网络的表达能力,为复杂系统提供更强大的信息提取和抽象建模能力。其中,神经网络的深度可以是指神经网络包括的层数,每层包括的神经元个数可以称为该层的宽度。在一种实现方式中,神经网络包括输入层和输出层。神经网络的输入层将接收到的输入信息经过神经元处理,将处理结果传递给输出层,由输出层得到神经网络的输出结果。在另一种实现方式中,神经网络包括输入层、隐藏层和输出层。神经网络的输入层将接收到的输入信息经过神经元处理,将处理结果传递给中间的隐藏层,隐藏层对接收的处理结果进行计算,得到计算结果,隐藏层将计算结果传递给输出层或者下一个相邻的隐藏层,最终由输出层得到神经网络的输出结果。其中,一个神经网络可以包括一个隐藏层,或者包括多个依次连接的隐藏层,不予限制。Furthermore, neural networks generally include multiple layers, each of which may include one or more neurons. Increasing the depth and/or width of a neural network can improve its expressive power, providing more powerful information extraction and abstract modeling capabilities for complex systems. The depth of a neural network can refer to the number of layers it comprises, and the number of neurons in each layer can be referred to as the width of that layer. In one implementation, a neural network includes an input layer and an output layer. The input layer processes the input information received by the neural network through neurons, passing the processing results to the output layer, which then obtains the output of the neural network. In another implementation, a neural network includes an input layer, a hidden layer, and an output layer. The input layer processes the input information received by the neural network through neurons, passing the processing results to an intermediate hidden layer. The hidden layer performs calculations on the received processing results to obtain a calculation result, which is then passed to the output layer or the next adjacent hidden layer, which ultimately obtains the output of the neural network. A neural network can include one hidden layer or multiple hidden layers connected in sequence, without limitation.

神经网络例如为深度神经网络(deep neural network,DNN)。根据网络的构建方式,DNN可以包括前馈神经网络(feedforward neural network,FNN)、卷积神经网络(convolutional neural networks,CNN)和递归神经网络(recurrent neural network,RNN)。An example of a neural network is a deep neural network (DNN). Depending on how the network is constructed, DNNs can include feedforward neural networks (FNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs).

图1e为一种FNN网络示意图。FNN网络的特点为相邻层的神经元之间两两完全相连。该特点使得FNN通常需要大量的存储空间、导致较高的计算复杂度。Figure 1e is a schematic diagram of a FNN network. A characteristic of FNN networks is that neurons in adjacent layers are fully connected. This characteristic typically requires a large amount of storage space and results in high computational complexity.

CNN是一种专门来处理具有类似网格结构的数据的神经网络。例如,时间序列数据(时间轴离散采样)和图像数据(二维离散采样)都可以认为是类似网格结构的数据。CNN并不一次性利用全部的输入信息做运算,而是采用一个固定大小的窗截取部分信息做卷积运算,这就大大降低了模型参数的计算量。另外根据窗截取的信息类型的不同(如同一副图中的人和物为不同类型信息),每个窗可以采用不同的卷积核运算,这使得CNN能更好的提取输入数据的特征。CNN is a neural network specifically designed to process data with a grid-like structure. For example, time series data (discrete sampling along the time axis) and image data (discrete sampling along two dimensions) can both be considered grid-like data. CNNs do not utilize all input information at once for computation. Instead, they use a fixed-size window to intercept a portion of the information for convolution operations, significantly reducing the computational complexity of model parameters. Furthermore, depending on the type of information intercepted by the window (e.g., people and objects in an image represent different types of information), each window can use a different convolution kernel, enabling CNNs to better extract features from the input data.
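To make the fixed-size window and convolution kernel idea described above concrete, the following sketch computes a one-dimensional convolution over a sequence; the kernel values and the input sequence are assumed for illustration.

```python
def conv1d(signal, kernel):
    """Slide a fixed-size window over the signal and take the dot product
    with the kernel at each valid position."""
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

print(conv1d([1, 2, 3, 4, 5], [0.5, 0.5]))  # a simple averaging kernel over a window of 2
```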

RNN是一类利用反馈时间序列信息的DNN网络。它的输入包括当前时刻的新的输入值和自身在前一时刻的输出值。RNN适合获取在时间上具有相关性的序列特征,特别适用于语音识别、信道编译码等应用。RNNs are a type of DNN that utilizes feedback time series information. Their input consists of a new input value at the current moment and their own output value at the previous moment. RNNs are suitable for capturing temporally correlated sequence features and are particularly well-suited for applications such as speech recognition and channel coding.
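A minimal sketch of the recurrent computation described above, in which the input at each time step is combined with the network's own output from the previous time step; the weight values and the tanh nonlinearity are illustrative assumptions.

```python
import math

def rnn_step(x_t, h_prev, w_x=0.5, w_h=0.8, b=0.0):
    """One recurrent step: combine the new input with the previous output."""
    return math.tanh(w_x * x_t + w_h * h_prev + b)

h = 0.0                       # initial state
for x_t in [1.0, 0.5, -0.2]:  # a short input sequence
    h = rnn_step(x_t, h)      # the output feeds back as input to the next step
print(h)
```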

在上述机器学习的模型训练过程中,可以定义损失函数。损失函数描述了模型的输出值和理想目标值之间的差距或差异。损失函数可以通过多种形式体现,对于损失函数的具体形式不予限制。模型训练过程可以看作以下过程:通过调整模型的部分或全部参数,使得损失函数的值小于门限值或者满足目标需求。During the machine learning model training process, a loss function can be defined. This function describes the gap or discrepancy between the model's output and the ideal target value. Loss functions can be expressed in various forms, and there are no restrictions on their specific form. The model training process can be viewed as adjusting some or all of the model's parameters to keep the loss function below a threshold or meet the target.

模型还可以被称为AI模型、规则或者其他名称等。AI模型可以认为是实现AI功能的具体方法。AI模型表征了模型的输入和输出之间的映射关系或者函数。AI功能可以包括以下一项或多项:数据收集、模型训练(或模型学习)、模型信息发布、模型推断(或称为模型推理、推理、或预测等)、模型监控或模型校验、或推理结果发布等。AI功能还可以称为AI(相关的)操作、或AI相关的功能。A model may also be referred to as an AI model, rule, or other name. An AI model can be considered a specific method for implementing an AI function. An AI model represents a mapping relationship or function between the input and output of a model. AI functions may include one or more of the following: data collection, model training (or model learning), model information release, model inference (or model reasoning, inference, or prediction, etc.), model monitoring or model verification, or inference result release, etc. AI functions may also be referred to as AI (related) operations, or AI-related functions.

下面将结合附图,对神经网络的实现过程进行示例性描述。The following is an exemplary description of the implementation process of the neural network with reference to the accompanying drawings.

1.全连接神经网络,又叫多层感知机(multilayer perceptron,MLP)。1. Fully connected neural network, also called multilayer perceptron (MLP).

如图2a所示,一个MLP包含一个输入层(左侧),一个输出层(右侧),及多个隐藏层(中间)。其中,MLP的每层包含若干个节点,称为神经元。其中,相邻两层的神经元间两两相连。As shown in Figure 2a, an MLP consists of an input layer (left), an output layer (right), and multiple hidden layers (center). Each layer of the MLP contains several nodes, called neurons. Neurons in adjacent layers are connected to each other.

可选的,考虑相邻两层的神经元,下一层的神经元的输出h为所有与之相连的上一层神经元x的加权和并经过激活函数,可以表示为:
h=f(wx+b)。
Alternatively, considering neurons in two adjacent layers, the output h of the neurons in the next layer is the weighted sum of all neurons x in the previous layer connected to it and passes through the activation function, which can be expressed as:
h=f(wx+b).

其中,w为权重矩阵,b为偏置向量,f为激活函数。Among them, w is the weight matrix, b is the bias vector, and f is the activation function.

进一步可选的，神经网络的输出可以递归表达为：
y = f_n(w_n·f_{n-1}(…) + b_n)。
Alternatively, the output of the neural network can be recursively expressed as:
y = f_n(w_n·f_{n-1}(…) + b_n).

其中,n是神经网络层的索引,1<=n<=N,其中N为神经网络的总层数。Where n is the index of the neural network layer, 1<=n<=N, where N is the total number of neural network layers.

换言之,可以将神经网络理解为一个从输入数据集合到输出数据集合的映射关系。而通常神经网络都是随机初始化的,用已有数据从随机的w和b得到这个映射关系的过程被称为神经网络的训练。In other words, a neural network can be understood as a mapping from an input data set to an output data set. Neural networks are typically initialized randomly, and the process of obtaining this mapping from random w and b using existing data is called neural network training.
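To make the layer computation h = f(wx + b) and its recursive composition described above concrete, here is a minimal forward-pass sketch for a fully connected network using numpy; the layer sizes, random initialization, and ReLU activation are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    """Randomly initialize a weight matrix w and bias vector b for one layer."""
    return rng.standard_normal((n_out, n_in)) * 0.1, np.zeros(n_out)

def forward(x, layers, f=lambda z: np.maximum(0.0, z)):
    """Recursively apply h = f(w @ h + b) for each layer n = 1..N."""
    h = x
    for w, b in layers:
        h = f(w @ h + b)
    return h

layers = [init_layer(4, 8), init_layer(8, 8), init_layer(8, 2)]  # input, hidden, output
print(forward(np.ones(4), layers))
```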

可选的,训练的具体方式为采用损失函数(loss function)对神经网络的输出结果进行评价。Optionally, the specific training method is to use a loss function to evaluate the output results of the neural network.

如图2b所示,可以将误差反向传播,通过梯度下降的方法即能迭代优化神经网络参数(包括w和b),直到损失函数达到最小值,即图2b中的“较优点(例如最优点)”。可以理解的是,图2b中的“较优点(例如最优点)”对应的神经网络参数可以作为训练好的AI模型信息中的神经网络参数。As shown in Figure 2b, the error can be backpropagated, and the neural network parameters (including w and b) can be iteratively optimized using gradient descent until the loss function reaches a minimum, which is the "better point (e.g., optimal point)" in Figure 2b. It is understood that the neural network parameters corresponding to the "better point (e.g., optimal point)" in Figure 2b can be used as the neural network parameters in the trained AI model information.

Further optionally, the gradient descent process can be expressed as:
θ ← θ − η∇_θ L,
where θ is the parameter to be optimized (including w and b), L is the loss function, η is the learning rate that controls the step size of gradient descent, ∇ denotes the derivative (gradient) operator, and ∇_θ L denotes the derivative of L with respect to θ.
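
A minimal numerical sketch of the update θ ← θ − η∇_θ L is given below; the quadratic loss and its closed-form gradient are assumptions used only to make the example runnable.

```python
import numpy as np

def gradient_descent(theta, grad_fn, lr=0.1, steps=100):
    # theta <- theta - lr * dL/dtheta, repeated for a fixed number of steps
    for _ in range(steps):
        theta = theta - lr * grad_fn(theta)
    return theta

# Illustrative loss L(theta) = ||theta - t||^2 with known gradient 2*(theta - t).
target = np.array([1.0, -2.0, 0.5])
theta0 = np.zeros(3)
theta_opt = gradient_descent(theta0, lambda th: 2.0 * (th - target))
print(theta_opt)   # moves towards [1.0, -2.0, 0.5]
```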

进一步可选的,反向传播的过程利用到求偏导的链式法则。Optionally, the backpropagation process utilizes the chain rule for partial derivatives.

As shown in Figure 2c, the gradient of the parameters of an earlier layer can be computed recursively from the gradient of the parameters of the later layer, which can be expressed by the chain rule as:
∂L/∂w_ij = (∂L/∂s_i)·(∂s_i/∂w_ij),
where w_ij is the weight with which node j is connected to node i, and s_i is the weighted sum of the inputs at node i.
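
The following sketch illustrates how, by the chain rule, the gradient of an earlier layer's weights is obtained from quantities propagated back from the later layer; the two-layer network, the ReLU activation, and the squared-error loss are assumptions for illustration.

```python
import numpy as np

# Forward pass: s1 = W1 x, h = relu(s1), s2 = W2 h, loss = 0.5 * ||s2 - t||^2
rng = np.random.default_rng(1)
x, t = rng.standard_normal(4), rng.standard_normal(2)
W1, W2 = rng.standard_normal((3, 4)), rng.standard_normal((2, 3))

s1 = W1 @ x
h = np.maximum(0.0, s1)
s2 = W2 @ h

# Backward pass: gradients of the later layer first, then the earlier layer.
dL_ds2 = s2 - t                      # dL/ds2
dL_dW2 = np.outer(dL_ds2, h)         # dL/dW2 = dL/ds2 * ds2/dW2
dL_dh  = W2.T @ dL_ds2               # propagate back through W2
dL_ds1 = dL_dh * (s1 > 0)            # through the ReLU
dL_dW1 = np.outer(dL_ds1, x)         # dL/dW1 = dL/ds1 * ds1/dW1
print(dL_dW1.shape, dL_dW2.shape)    # (3, 4) (2, 3)
```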

2.联邦学习(Federated Learning,FL)。2. Federated Learning (FL).

联邦学习这一概念的提出有效地解决了当前人工智能发展所面临的困境,其在充分保障用户数据隐私和安全的前提下,通过促使各个边缘设备和中心端服务器协同合作来高效地完成模型的学习任务。The concept of federated learning effectively solves the current difficulties faced by the development of artificial intelligence. On the premise of fully protecting user data privacy and security, it efficiently completes the model learning task by promoting the collaboration between various edge devices and central servers.

如图2d所示,FL架构是当前FL领域最为广泛的训练架构,FedAvg算法是FL的基础算法,其算法流程大致如下:As shown in Figure 2d, the FL architecture is the most widely used training architecture in the current FL field. The FedAvg algorithm is the basic algorithm of FL. Its algorithm flow is roughly as follows:

(1) The central server initializes the model to be trained, w^0, and broadcasts it to all client devices.

(2) In round t ∈ [1, T], each client k ∈ [1, K] trains the received global model w^{t-1} on its local dataset D_k for E epochs to obtain the local training result w_k^t, and reports it to the central node.

(3) The central node collects and aggregates the local training results from all (or some of) the clients. Let S_t denote the set of clients that upload local models in round t. The central server obtains a new global model by weighted averaging, using each client's number of samples as its weight; the specific update rule is w^t = Σ_{k∈S_t} (|D_k| / Σ_{j∈S_t} |D_j|)·w_k^t. The central server then broadcasts the latest version of the global model w^t to all client devices for a new round of training.

(4)重复步骤(2)和(3)直至模型最终收敛或训练轮数达到上限。(4) Repeat steps (2) and (3) until the model finally converges or the number of training rounds reaches the upper limit.

In addition to reporting the local model w_k^t, a client may instead report the local gradient g_k^t obtained from training; the central node then averages the local gradients and updates the global model along the direction of this average gradient.

可以看到,在FL框架中,数据集存在于分布式节点处,即分布式节点收集本地的数据集,并进行本地训练,将训练得到的本地结果(模型或梯度)上报给中心节点。中心节点本身没有数据集,只负责将分布式节点的训练结果进行融合处理,得到全局模型,并下发给分布式节点。As you can see, in the FL framework, datasets exist on distributed nodes. Distributed nodes collect local datasets, perform local training, and report the local training results (models or gradients) to the central node. The central node itself does not have a dataset; it is only responsible for fusing the training results of distributed nodes to obtain a global model and send it to the distributed nodes.
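
A minimal sketch of the aggregation step described above is given below; the local-training routine is only a placeholder, and the function names and toy datasets are illustrative assumptions rather than part of any standardized procedure.

```python
import numpy as np

def fedavg_aggregate(local_models, sample_counts):
    # w^t = sum_k (n_k / n_total) * w_k^t, weighted by each client's number of samples
    total = float(sum(sample_counts))
    return sum((n / total) * w for w, n in zip(local_models, sample_counts))

def local_train(global_model, local_data, epochs=1, lr=0.01):
    # Placeholder for E epochs of local training; a dummy gradient step is used here.
    w = global_model.copy()
    for _ in range(epochs):
        w -= lr * (w - local_data.mean(axis=0))   # illustrative update only
    return w

# One training round with K = 3 clients holding datasets of different sizes.
rng = np.random.default_rng(2)
global_model = np.zeros(5)
datasets = [rng.standard_normal((n, 5)) for n in (100, 50, 200)]
local_models = [local_train(global_model, d, epochs=5) for d in datasets]
global_model = fedavg_aggregate(local_models, [len(d) for d in datasets])
print(global_model)
```

If clients report local gradients instead of local models, the same weighted (or plain) averaging can be applied to the gradients before the central node takes one update step on the global model.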

3.去中心式学习。与联邦学习不同,另一种分布式学习架构——去中心式学习。3. Decentralized learning: Different from federated learning, decentralized learning is another distributed learning architecture.

As shown in Figure 2e, consider a fully distributed system without a central node. The design objective f(x) of a decentralized learning system is generally the mean of the objectives f_i(x) of the individual nodes, that is, f(x) = (1/n)Σ_{i=1}^{n} f_i(x), where n is the number of distributed nodes and x is the parameter to be optimized; in machine learning, x is the parameter of the machine learning (e.g., neural network) model. Each node uses its local data and its local objective f_i(x) to compute the local gradient ∇f_i(x), and then sends it to the neighboring nodes with which it can communicate. After a node receives the gradient information sent by its neighbors, it can update the parameter x of its local model according to the following formula:

x_i^{k+1} = x_i^k − α_k·(1/|N_i|)·Σ_{j∈N_i} ∇f_j(x_j^k),

where x_i^{k+1} denotes the parameters of the local model of the i-th node after the (k+1)-th update (k is a natural number), x_i^k denotes the parameters of the local model of the i-th node after the k-th update (if k is 0, x_i^0 denotes the parameters of the local model of the i-th node before it participates in any update), α_k denotes a tuning coefficient, N_i is the set of neighboring nodes of node i, and |N_i| denotes the number of elements in the set of neighboring nodes of node i, that is, the number of neighbors of node i. Through information exchange between nodes, the decentralized learning system will eventually learn a unified model.
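
A minimal sketch of one such local update at a single node, under the update rule as written above; the averaging over the neighbors' gradients and the random values are assumptions for illustration, and in practice the step size α_k and the network topology determine convergence behaviour.

```python
import numpy as np

def decentralized_update(x_i, neighbor_grads, alpha):
    # x_i^{k+1} = x_i^k - alpha_k * (1/|N_i|) * sum of gradients received from node i's neighbors
    return x_i - alpha * np.mean(neighbor_grads, axis=0)

# Node i holds local parameters x_i and receives gradients from its 3 neighbors.
rng = np.random.default_rng(3)
x_i = rng.standard_normal(5)
neighbor_grads = [rng.standard_normal(5) for _ in range(3)]   # grad f_j(x_j) from each neighbor j
x_i_next = decentralized_update(x_i, neighbor_grads, alpha=0.1)
print(x_i_next)
```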

本申请提供的技术方案可以应用于无线通信系统(例如图1a或图1b所示系统),在无线通信系统中,通信节点一般具备信号收发能力和计算能力。以具备计算能力的网络设备为例,网络设备的计算能力主要是为信号收发能力提供算力支持(例如:对信号进行发送处理和接收处理),以实现网络设备与其它通信节点的通信任务。The technical solutions provided in this application can be applied to wireless communication systems (e.g., the systems shown in Figures 1a and 1b). In wireless communication systems, communication nodes generally have both signal transceiver capabilities and computing capabilities. For example, network devices with computing capabilities primarily provide computing power to support signal transceiver capabilities (e.g., performing signal transmission and reception processing) to enable communication between the network device and other communication nodes.

在通信网络中,通信节点的计算能力除了为上述通信任务提供算力支持之外,还可能具备富余的计算能力。为此,如何利用这些计算能力,是一个亟待解决的技术问题。In communication networks, communication nodes may have excess computing power beyond supporting the aforementioned communication tasks. Therefore, how to utilize this computing power is a pressing technical issue.

示例性的,通信节点可以作为AI学习系统的参与节点,将该通信节点的算力应用于AI学习系统(例如图2d或图2e所述AI学习系统)的某一个环节。随着大模型时代的到来,拥有海量参数的深度学习模型如基于变换器的双向编码器表示(bidirectional encoder representations from transformers,BERT),基于变换器的生成式预训练(generative pre-trained transformer,GPT)等能完成越来越复杂的任务,并能达到较好的性能。For example, a communication node can serve as a participating node in an AI learning system, applying its computing power to a specific part of the AI learning system (e.g., the AI learning system described in FIG2d or FIG2e ). With the advent of the era of large models, deep learning models with massive parameters, such as bidirectional encoder representations from transformers (BERT) and generative pre-trained transformers (GPT), can accomplish increasingly complex tasks and achieve superior performance.

一般地,针对不同的模型功能,可以提前训练多个模型,每个模型可能对应于不同的条件。在模型管理的过程中需要执行比较复杂的操作。例如,在模型推理的过程中,需要在多个模型之间基于条件进行切换;在新模型的训练过程中,需要重新进行模型训练(或者模型微调)。在这些过程中,需要对模型进行注册/标识、重新训练等操作,导致模型管理的复杂度高。Generally, multiple models can be pre-trained for different model functions, each potentially corresponding to different conditions. This requires relatively complex operations during model management. For example, during model inference, it is necessary to switch between multiple models based on conditions; and during the training of a new model, it is necessary to retrain the model (or fine-tune the model). These processes require operations such as model registration/identification and retraining, which increases the complexity of model management.

为了解决上述问题,本申请提供了一种通信方法及相关装置,下面将结合附图进行详细介绍。In order to solve the above problems, the present application provides a communication method and related devices, which will be described in detail below with reference to the accompanying drawings.

请参阅图3,为本申请提供的通信方法的一个实现示意图,该方法包括如下步骤。Please refer to FIG3 , which is a schematic diagram of an implementation of the communication method provided in this application. The method includes the following steps.

需要说明的是,在图3中以第一通信装置和第二通信装置作为该交互示意的执行主体为例来示意该方法,但本申请并不限制该交互示意的执行主体。例如,在图3中,方法的执行主体可以替换为通信装置中的芯片、芯片系统、处理器、逻辑模块或软件等。其中,该第一通信装置可以为终端设备且第二通信装置可以为网络设备,或者,该第一通信装置和第二通信装置均为网络设备,或者,该第一通信装置和第二通信装置均为终端设备。It should be noted that in Figure 3, the method is illustrated by taking the first communication device and the second communication device as the execution subjects of the interaction diagram as an example, but the present application does not limit the execution subjects of the interaction diagram. For example, in Figure 3, the execution subject of the method can be replaced by a chip, a chip system, a processor, a logic module or software in the communication device. The first communication device can be a terminal device and the second communication device can be a network device, or the first communication device and the second communication device are both network devices, or the first communication device and the second communication device are both terminal devices.

S301.第二通信装置发送一个或多个样本数据,相应的,第一通信装置接收该一个或多个样本数据。S301. The second communication device sends one or more sample data, and correspondingly, the first communication device receives the one or more sample data.

S302.第一通信装置基于第一神经网络模型处理该一个或多个样本数据和推理数据,得到该推理数据对应的推理结果。其中,该推理结果与该样本数据的至少一个数据特征是相同的。S302. The first communication device processes the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data, wherein the inference result has at least one data feature identical to that of the sample data.

本申请中,神经网络模型、人工智能(artificial intelligence,AI)模型、AI神经网络模型、机器学习模型、AI处理模型等术语可以相互替换。In this application, terms such as neural network model, artificial intelligence (AI) model, AI neural network model, machine learning model, and AI processing model can be used interchangeably.

应理解,样本数据可以作为神经网络模型的输入的一部分,用于使得该神经网络模型输出的推理结果与该样本数据具备至少一个相同的数据特征。其中,该样本数据可以替换为其它术语,例如,参考数据、锚点数据、示例数据、或指导数据等。It should be understood that sample data can be used as part of the input of a neural network model to ensure that the inference results output by the neural network model have at least one data feature in common with the sample data. The term "sample data" can be replaced by other terms such as reference data, anchor data, example data, or guidance data.

可选地,该数据特征包括以下至少一项:数据维度、参数量、数据内容、数据类型、或物理量。Optionally, the data feature includes at least one of the following: data dimension, parameter quantity, data content, data type, or physical quantity.

作为一种实现示例,如图4a所示,为上述步骤S302的实现过程的一种示例。在图4a中,第一神经网络模型可以部署于第一通信装置中,并且,该第一神经网络模型的输入可以包括该一个或多个样本数据和推理数据,该第一神经网络模型的输出可以包括该推理数据对应的推理结果。下面将提供一些实现示例进行描述。As an implementation example, FIG4a shows an example of the implementation process of the above-mentioned step S302. In FIG4a, the first neural network model can be deployed in the first communication device, and the input of the first neural network model can include the one or more sample data and the inference data, and the output of the first neural network model can include the inference result corresponding to the inference data. Some implementation examples are provided below for description.

例如,第一神经网络模型可以为用于时域信道预测的模型,即该第一神经网络模型可以根据过去k(k为正整数)个时间单元(如帧、子帧、时隙、符号等)的信道信息,预测下一时间单元的信道信息。在该示例中,在一个或多个样本数据中的每个样本数据包括前p(p为正整数)个时间单元的信道信息和第p+1时间单元的信道信息,推理数据可以包括过去k个时间单元的信道信息,该推理数据对应的推理结果可以包括该k个时间单元之后的第k+1个时间单元的信道信息。For example, the first neural network model may be a model for time domain channel prediction, that is, the first neural network model may predict the channel information of the next time unit based on the channel information of the past k (k is a positive integer) time units (such as frames, subframes, time slots, symbols, etc.). In this example, each sample data in the one or more sample data includes the channel information of the first p (p is a positive integer) time units and the channel information of the p+1th time unit, the inference data may include the channel information of the past k time units, and the inference result corresponding to the inference data may include the channel information of the k+1th time unit after the k time units.
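
To make the data format in this example concrete, the following sketch builds sample data (channel information of p consecutive time units paired with that of the next time unit) and inference data (channel information of the most recent k time units) from a simulated channel sequence; the synthetic channel, the tensor shapes, and p = k = 8 are assumptions for illustration only.

```python
import numpy as np

def build_time_domain_data(channel_seq, p, k):
    # channel_seq: complex channel coefficients per time unit, shape (T, n_ant)
    # Each sample pairs the channel of p consecutive time units with that of the (p+1)-th.
    samples = [(channel_seq[t:t + p], channel_seq[t + p]) for t in range(len(channel_seq) - p)]
    # Inference data: the channel of the most recent k time units; the model should output
    # the channel of the (k+1)-th time unit, matching the data features of the samples.
    inference_data = channel_seq[-k:]
    return samples, inference_data

rng = np.random.default_rng(4)
T, n_ant = 64, 4
channel_seq = (rng.standard_normal((T, n_ant)) + 1j * rng.standard_normal((T, n_ant))) / np.sqrt(2)
samples, inference_data = build_time_domain_data(channel_seq, p=8, k=8)
print(len(samples), samples[0][0].shape, inference_data.shape)   # 56 (8, 4) (8, 4)
```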

又如,第一神经网络可以为用于频域信道预测的模型,即该第一神经网络模型可以根据部分频域单元(例如子载波、部分带宽等)的信道信息,预测全部频域单元的信道信息。以该部分频域单元为k个子载波,该全部频域单元为K个子载波为例,k为正整数,K为大于k的整数。在该示例中,在一个或多个样本数据中的每个样本数据包括p个子载波的信道信息和P个子载波的信道信息,推理数据可以包括k个子载波的信道信息,该推理数据对应的推理结果可以包括K个子载波的信道信息。 For another example, the first neural network may be a model for frequency domain channel prediction, that is, the first neural network model may predict the channel information of all frequency domain units based on the channel information of some frequency domain units (such as subcarriers, part of the bandwidth, etc.). Taking the example where the part of the frequency domain units is k subcarriers and the total number of frequency domain units is K subcarriers, k is a positive integer, and K is an integer greater than k. In this example, each sample data in one or more sample data includes channel information of p subcarriers and channel information of P subcarriers, the inference data may include channel information of k subcarriers, and the inference result corresponding to the inference data may include channel information of K subcarriers.

可选地,p等于k,P等于K。Optionally, p is equal to k and P is equal to K.

需要说明的是,一个或多个样本数据可以通过多种方式参与第一神经网络模型的处理,下面将提供一些实现示例进行描述。It should be noted that one or more sample data can participate in the processing of the first neural network model in a variety of ways, and some implementation examples will be provided below for description.

实现示例一,一个或多个样本数据可以基于交叉注意力的方式,参与第一神经网络模型的处理。In implementation example one, one or more sample data may participate in the processing of the first neural network model based on a cross-attention approach.

示例性的,以图4b为例,第一神经网络模型可以包括第一模块和第二模块。其中,该第一神经网络模型可以为变换器(Transformer)模型,该第一模块可以为变换器编码器(Transformer Encoder),第二模块可以为变换器解码器(Transformer decoder)。其中,该一个或多个样本数据作为第一模块的输入,经过该第一模块的处理可以得到查询(query,Q)向量和键(key,K)向量,推理数据可以作为值(value,V)向量,并且,Q、K、V经过第二模块的交叉注意力层处理后,实现信息融合,以获得最终的输出。Exemplarily, taking FIG4b as an example, the first neural network model may include a first module and a second module. The first neural network model may be a transformer model, the first module may be a transformer encoder, and the second module may be a transformer decoder. The one or more sample data are used as input to the first module. After processing by the first module, a query (Q) vector and a key (K) vector may be obtained. The inference data may be used as a value (V) vector. Furthermore, after Q, K, and V are processed by the cross-attention layer of the second module, information fusion is achieved to obtain the final output.
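
The following sketch illustrates the fusion described in this example: the encoded sample data provides the query Q and key K vectors, the inference data is used as the value V, and the three are combined by scaled dot-product cross-attention. The dimensions, the linear projections, and the assumption that the inference data is arranged into the same number of tokens as the samples are made only so that the sketch runs; they are not taken from this application.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # softmax(Q K^T / sqrt(d)) V
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(5)
m, d = 4, 16
encoded_samples = rng.standard_normal((m, d))        # output of the first module (encoder) for the samples
W_q, W_k = rng.standard_normal((d, d)), rng.standard_normal((d, d))
Q, K = encoded_samples @ W_q, encoded_samples @ W_k  # Q and K derived from the sample data
V = rng.standard_normal((m, d))                      # inference data (projected) used as the value vectors
fused = scaled_dot_product_attention(Q, K, V)        # cross-attention fusion in the second module
print(fused.shape)                                   # (4, 16)
```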

实现示例二,一个或多个样本数据可以作为前置输入,参与第一神经网络模型的处理。In the second implementation example, one or more sample data may be used as pre-inputs to participate in the processing of the first neural network model.

示例性的,以图4c为例,一个或多个样本数据和推理数据(可选地,还包括“空”数据,用于指示输入的“推理结果”为空)级联后,输入第一神经网络模型,即该一个或多个样本数据作为前置输入。经过第一神经网络模型的处理,以获得最终的输出。For example, using FIG4c as an example, one or more sample data and inference data (optionally, including "empty" data to indicate that the input "inference result" is empty) are concatenated and input into the first neural network model, i.e., the one or more sample data serve as pre-input. After being processed by the first neural network model, the final output is obtained.
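
A minimal sketch of the prepended-input arrangement of this example: the sample data, the inference data, and an "empty" placeholder are concatenated into a single model input; the token dimensions are assumed for illustration.

```python
import numpy as np

def build_model_input(sample_data, inference_data, placeholder_dim):
    # Concatenate the sample data, the inference data, and an "empty" placeholder
    # (indicating that the inference-result position is not yet filled) into one input sequence.
    empty = np.zeros((1, placeholder_dim))
    return np.concatenate([sample_data, inference_data, empty], axis=0)

rng = np.random.default_rng(6)
sample_data = rng.standard_normal((4, 16))      # 4 sample tokens
inference_data = rng.standard_normal((1, 16))   # 1 inference token
model_input = build_model_input(sample_data, inference_data, placeholder_dim=16)
print(model_input.shape)                        # (6, 16)
```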

实现示例三,一个或多个样本数据可以用于确定神经网络模型参数,参与第一神经网络模型的处理。In implementation example three, one or more sample data may be used to determine neural network model parameters and participate in the processing of the first neural network model.

示例性的,以图4d为例,一个或多个样本数据可以用于确定部分神经网络模型参数,并且,该部分神经网络模型参数能够用于处理(例如生成/调整/修改)第一神经网络模型中的一个或多个神经网络层,得到处理结果。此后,基于该处理结果对推理数据进行处理,能够得到该推理数据对应的推理结果。For example, using FIG. 4d as an example, one or more sample data can be used to determine some neural network model parameters, and these partial neural network model parameters can be used to process (e.g., generate/adjust/modify) one or more neural network layers in the first neural network model to obtain a processing result. Thereafter, the inference data is processed based on the processing result to obtain an inference result corresponding to the inference data.
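
A minimal sketch of the idea in this example, in which the sample data determines part of the model parameters that then process the inference data; the summary-by-averaging, the generator matrix, and the tanh layer are hypothetical choices for illustration only.

```python
import numpy as np

def generate_layer_params(sample_data, d_in, d_out, rng):
    # A (hypothetical) parameter-generation step: the sample data is summarized into a
    # context vector that determines the weights of one layer of the first model.
    context = sample_data.mean(axis=0)                        # summarize the samples
    G = rng.standard_normal((d_out * d_in, context.shape[0])) * 0.1
    W = (G @ context).reshape(d_out, d_in)                    # generated layer parameters
    return W

rng = np.random.default_rng(7)
sample_data = rng.standard_normal((4, 16))
inference_data = rng.standard_normal(16)
W_generated = generate_layer_params(sample_data, d_in=16, d_out=8, rng=rng)
inference_result = np.tanh(W_generated @ inference_data)      # the generated layer processes the inference data
print(inference_result.shape)                                  # (8,)
```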

可以理解的是,一个或多个样本数据的使用不局限于上述图4b至图4d提供的实现示例,在实际应用中,该一个或多个样本数据还可能通过其它方式参与第一神经网络模型的处理,此处不做限定。It is understandable that the use of one or more sample data is not limited to the implementation examples provided in Figures 4b to 4d above. In actual applications, the one or more sample data may also participate in the processing of the first neural network model in other ways, which is not limited here.

在一种可能的实现方式中,在步骤S302的实现过程中,满足以下至少一项时,该第一通信装置基于第一神经网络模型处理该一个或多个样本数据和推理数据,得到该推理数据对应的推理结果,包括:In one possible implementation, during the implementation of step S302, when at least one of the following conditions is satisfied, the first communication device processes the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data, including:

该第一神经网络模型的推理性能低于(或等于)阈值;The reasoning performance of the first neural network model is lower than (or equal to) a threshold;

该推理数据的数据分布与该第一神经网络模型的前k次输入的推理数据的数据分布的变化大于(或等于)阈值,k为正整数;A difference between the data distribution of the inference data and the data distribution of the inference data inputted into the first neural network model for the previous k times is greater than (or equal to) a threshold value, where k is a positive integer;

(第一通信装置的)通信状态发生变化。The communication state (of the first communication device) changes.

具体地,在满足上述至少一项的情况下,可以确定当前第一神经网络模型基于推理数据得到推理结果的性能较差。为此,该第一通信装置可以在第一神经网络模型的输入中增加样本数据,即该第一通信装置可以基于该第一神经网络模型处理该一个或多个样本数据和推理数据,以得到该推理数据对应的推理结果。Specifically, if at least one of the above items is satisfied, it can be determined that the current first neural network model has poor performance in obtaining an inference result based on the inference data. To this end, the first communication device can add sample data to the input of the first neural network model, that is, the first communication device can process the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data.

可选地,在不满足上述至少一项时,可以确定当前第一神经网络模型基于推理数据得到推理结果的性能较优。相应的,该第一通信装置可以基于第一神经网络模型处理推理数据,以得到该推理数据对应的推理结果,即第一神经网络模型的输入可以不包括样本数据,以降低开销。Optionally, when at least one of the above items is not satisfied, it can be determined that the current first neural network model has a better performance in obtaining an inference result based on the inference data. Accordingly, the first communication device can process the inference data based on the first neural network model to obtain an inference result corresponding to the inference data, that is, the input of the first neural network model may not include sample data to reduce overhead.
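
A minimal sketch of the trigger check described above, which decides whether the sample data should be added to the model input; the representation of the data distribution as a vector and the specific thresholds are assumptions for illustration.

```python
import numpy as np

def should_use_sample_data(inference_perf, perf_threshold,
                           current_dist, recent_dists, dist_threshold,
                           state_changed):
    # Returns True if at least one trigger condition holds, in which case the
    # sample data is added to the input of the first neural network model.
    if inference_perf is not None and inference_perf <= perf_threshold:
        return True                                   # inference performance too low
    if recent_dists:
        drift = np.linalg.norm(current_dist - np.mean(recent_dists, axis=0))
        if drift >= dist_threshold:
            return True                               # distribution changed vs. the previous k inputs
    if state_changed:
        return True                                   # communication state of the device changed
    return False

print(should_use_sample_data(0.62, 0.7, np.array([0.1, 0.9]),
                             [np.array([0.5, 0.5])], 0.3, False))   # True
```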

需要说明的是,第一通信装置可以在本地确定上述至少一项是否满足,即该第一通信装置可以基于本地确定的结果触发步骤S302的执行。或者,第一通信装置可以基于其它通信装置的指示,以确定上述至少一项是否满足,即该第一通信装置可以基于其它通信装置的指示确定步骤S302的触发条件是否满足。或者,第一通信装置可以基于其它通信装置的指示触发步骤S302的执行。可选地,该其它通信装置可以包括后文描述的管理模块和/或数据存储模块。It should be noted that the first communication device may locally determine whether at least one of the above conditions is met, i.e., the first communication device may trigger the execution of step S302 based on the result of the local determination. Alternatively, the first communication device may determine whether at least one of the above conditions is met based on an instruction from another communication device, i.e., the first communication device may determine whether the triggering condition of step S302 is met based on the instruction from the other communication device. Alternatively, the first communication device may trigger the execution of step S302 based on the instruction from the other communication device. Optionally, the other communication device may include the management module and/or data storage module described below.

基于图3所示方案,第一通信装置在步骤S301中获取一个或多个样本数据之后,该第一通信装置可以在步骤S302中基于第一神经网络模型处理该一个或多个样本数据和推理数据,得到该推理数据对应的推理结果。通过这种方式,第一通信装置基于第一神经网络模型得到的推理结果与样本数据的至少一个数据特征是相同的,即该第一通信装置能够基于样本数据的示例/指导,获得与该样本数据的数据特征具备至少一个相同的数据特征的推理结果,能够基于该样本数据实现特定场景下模型推理的调整,以降低模型管理的复杂度。Based on the scheme shown in FIG3 , after the first communication device obtains one or more sample data in step S301, the first communication device can process the one or more sample data and the inference data based on the first neural network model in step S302 to obtain an inference result corresponding to the inference data. In this way, the inference result obtained by the first communication device based on the first neural network model is the same as at least one data feature of the sample data, that is, the first communication device can obtain an inference result that has at least one data feature that is the same as the data feature of the sample data based on the example/guidance of the sample data, and can adjust the model inference in a specific scenario based on the sample data to reduce the complexity of model management.

In a possible implementation of the method shown in FIG. 3, when the update frequency of the one or more sample data is greater than (or equal to) a threshold, or when the performance corresponding to the inference result of the inference data is lower than (or equal to) a threshold, the first neural network model is updated to obtain a second neural network model. Specifically, in either of these cases it can be determined that the performance of the current first neural network model is relatively poor. Accordingly, the first neural network model can be updated to obtain a second neural network model; that is, a second neural network model with better performance is obtained through neural network model training.

可选地,在该一个或多个样本数据的更新频率小于(或等于)阈值,或,在该推理数据对应的推理结果对应的性能高于(或等于)阈值时,可以确定当前第一神经网络模型的性能较优。相应的,可以无需对该第一神经网络模型进行更新,以避免不必要的开销。Optionally, when the update frequency of the one or more sample data is less than (or equal to) a threshold, or when the performance corresponding to the inference result corresponding to the inference data is higher than (or equal to) a threshold, it can be determined that the performance of the current first neural network model is superior. Accordingly, there is no need to update the first neural network model to avoid unnecessary overhead.

需要说明的是,上述对第一神经网络模型进行更新得到第二神经网络模型的通信装置,可以为第一通信装置,或者是其他通信装置(例如该其它通信装置可以包括后文描述的模型训练模块)。It should be noted that the communication device that updates the first neural network model to obtain the second neural network model can be the first communication device, or other communication devices (for example, the other communication device can include the model training module described later).

在图3所示方法的一种可能的实现方式中,在步骤S301之前,该方法还包括:该第一通信装置发送用于请求该一个或多个样本数据的请求信息,使得该请求信息的接收方基于该请求信息向该第一通信装置提供该一个或多个样本数据。In a possible implementation of the method shown in Figure 3, before step S301, the method also includes: the first communication device sends a request information for requesting the one or more sample data, so that the recipient of the request information provides the one or more sample data to the first communication device based on the request information.

可选地,该第一通信装置可以包括用于存储/缓存样本数据的模块,相应的,该第一通信装置可以基于该模块在本地获取该一个或多个样本数据。换言之,在这种情况下,图3所示方法的步骤S301为可选步骤,即第一通信装置无需通过接收的方式,即可获得该一个或多个样本数据。Optionally, the first communication device may include a module for storing/caching sample data. Accordingly, the first communication device may locally obtain the one or more sample data based on the module. In other words, in this case, step S301 of the method shown in FIG3 is an optional step, i.e., the first communication device can obtain the one or more sample data without receiving the sample data.

需要说明的是,图3所示方法可以应用于图5所示通信场景,如图5所示,该通信场景包括如下多个模块。It should be noted that the method shown in FIG3 can be applied to the communication scenario shown in FIG5. As shown in FIG5, the communication scenario includes the following multiple modules.

数据收集模块:收集推理数据或样本数据。Data collection module: collects inference data or sample data.

模型训练模块:对神经网络模型进行训练或微调。Model training module: trains or fine-tunes the neural network model.

模型推理模块:基于推理数据和样本数据进行推理,获取推理结果。Model inference module: performs inference based on inference data and sample data to obtain inference results.

数据缓存模块:用于缓存模型推理使用的样本数据。Data cache module: used to cache sample data used for model inference.

数据存储模块:用于存储样本数据、向数据缓存模块提供样本数据(例如,该样本数据可基于检索获得,即数据存储模块存储大量的样本数据,缓存模块存储直接使用的少量样本数据,检索是从大量的样本数据中检索得到相关的少量样本数据,以用于模型推理)、向模型训练模块提供训练数据等。Data storage module: used to store sample data, provide sample data to the data cache module (for example, the sample data can be obtained based on retrieval, that is, the data storage module stores a large amount of sample data, the cache module stores a small amount of sample data for direct use, and retrieval is to retrieve a small amount of relevant sample data from a large amount of sample data for model inference), provide training data to the model training module, etc.

可选地,图5所示通信场景还可以包括管理模块。该管理模块用于监测模型性能、管理模型推理模块的样本数据使用、管理模型训练模块的模型训练或微调等。Optionally, the communication scenario shown in Figure 5 may further include a management module. The management module is used to monitor model performance, manage sample data usage of the model inference module, manage model training or fine-tuning of the model training module, etc.

作为一种实现示例,在图5所示场景中,数据缓存模块可以用于获取、删除和更新样本数据等。可选地,该数据缓存模块可以由管理模块触发,如模型推理性能下降、数据分布变化或者第一通信装置状态变化等。对于数据缓存模块而言,样本数据的获取包括两种方式,一是从数据收集模块获取,一是从数据存储模块中检索得到。As an implementation example, in the scenario shown in Figure 5, the data caching module can be used to obtain, delete, and update sample data. Optionally, the data caching module can be triggered by the management module, such as when model inference performance degrades, data distribution changes, or the status of the first communication device changes. The data caching module can obtain sample data in two ways: one is to obtain it from the data collection module, and the other is to retrieve it from the data storage module.

例如,数据缓存模块可以从数据收集模块获取样本数据。例如,数据缓存模块向数据收集模块发起样本数据收集,配置收集样本数据的类型、样本个数等;数据收集模块根据样本数据配置向其它通信装置(例如第一通信装置)发起数据收集,如信道估计任务收集参考信号估计得到的信道信息。For example, the data caching module can obtain sample data from the data collection module. For example, the data caching module initiates sample data collection from the data collection module, configuring the type and number of sample data to be collected. The data collection module then initiates data collection from other communication devices (e.g., the first communication device) based on the sample data configuration. For example, a channel estimation task collects channel information obtained by estimating a reference signal.

又如,在图5所示场景中,数据缓存模块可以从数据存储模块获取样本数据、例如,数据缓存模块可以根据推理数据从样本数据存储模块中检索得到样本数据用于模型推理。数据存储模块可以存储不同场景下的多个样本数据,通过检索得到与推理数据相似的样本数据作为示例,并向数据缓存模块发送检索到的样本数据。For another example, in the scenario shown in Figure 5, the data cache module can obtain sample data from the data storage module. For example, the data cache module can retrieve sample data from the sample data storage module based on the inference data for model inference. The data storage module can store multiple sample data from different scenarios, retrieve sample data similar to the inference data as an example, and send the retrieved sample data to the data cache module.

可以理解的是,数据缓存模块可以对获取的样本数据进行缓存,该缓存的样本数据可用于模型推理,即作为实时模型推理的输入的一部分。It is understandable that the data cache module can cache the acquired sample data, and the cached sample data can be used for model reasoning, that is, as a part of the input of real-time model reasoning.

可选地,数据缓存模块中的样本数据也可以存储到数据存储模块。Optionally, the sample data in the data cache module may also be stored in the data storage module.

作为一种实现示例,在图5所示场景中,数据存储模块可以用于存储(或长期存储)样本数据,并向其它模块(例如数据缓存模块、模型训练模块等)提供样本数据的检索等功能,具体功能包括下述一种或多种:As an implementation example, in the scenario shown in FIG5 , the data storage module can be used to store (or long-term store) sample data and provide sample data retrieval and other functions to other modules (e.g., data cache module, model training module, etc.). Specific functions include one or more of the following:

数据添加:将数据收集模块或数据缓存模块中的样本数据添加到存储模块中。Data addition: Add sample data from the data collection module or data cache module to the storage module.

数据删除:从存储中删除特定样本数据。Data deletion: Delete specific sample data from storage.

数据更新:删除特定样本数据,并添加新的样本数据。Data update: Delete specific sample data and add new sample data.

Data monitoring: periodic or event-triggered monitoring of the data, used for operations such as data addition/deletion/update, for example based on the correlation between the long-term sample data and the data collected at that time.

数据检索:根据推理数据,从存储中检索样本数据,并提供给数据缓存模块。Data retrieval: Based on the inference data, sample data is retrieved from the storage and provided to the data cache module.

提供训练数据:向模型训练模块提供样本数据作为训练数据。Provide training data: Provide sample data to the model training module as training data.
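
A minimal sketch of a data storage module with the functions listed above (addition, deletion, update, and similarity-based retrieval of sample data for the data cache module); the cosine-similarity retrieval and the toy sample format are assumptions for illustration.

```python
import numpy as np

class SampleDataStore:
    """Illustrative data storage module: add, delete, update and retrieve sample data."""

    def __init__(self):
        self.samples = []                                  # list of (key_vector, sample_data)

    def add(self, key, sample):
        self.samples.append((np.asarray(key), sample))

    def delete(self, index):
        del self.samples[index]

    def update(self, index, key, sample):
        self.samples[index] = (np.asarray(key), sample)

    def retrieve(self, query, top_n=2):
        # Retrieve the sample data most similar to the inference data (cosine similarity),
        # to be provided to the data cache module for model inference.
        q = np.asarray(query)
        sims = [float(k @ q / (np.linalg.norm(k) * np.linalg.norm(q) + 1e-12))
                for k, _ in self.samples]
        order = np.argsort(sims)[::-1][:top_n]
        return [self.samples[i][1] for i in order]

store = SampleDataStore()
for i in range(5):
    key = np.random.default_rng(i).standard_normal(8)
    store.add(key, {"scenario": i, "channel": key})        # toy sample data per scenario
closest = store.retrieve(np.random.default_rng(1).standard_normal(8), top_n=2)
print([s["scenario"] for s in closest])                    # scenario 1 ranks first
```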

如前文所示,第一通信装置可以基于一个或多个样本数据进行模型推理,相应的,第一通信装置至少包括图5中的模型推理模块。As described above, the first communication device can perform model inference based on one or more sample data. Accordingly, the first communication device at least includes the model inference module in FIG. 5 .

如前文所示,第二通信装置可以用于发送样本数据,相应的,第二通信装置可以包括图5中的数据收集模块、数据缓存模块、数据存储模块中的一个或多个模块。换言之,该第二通信装置通过多种方式实现,下面将结合一些示例进行描述。As previously described, the second communication device can be used to transmit sample data. Accordingly, the second communication device can include one or more of the data collection module, data cache module, and data storage module shown in FIG5 . In other words, the second communication device can be implemented in a variety of ways, which will be described below with reference to some examples.

实现方式一、在图3所示方法中,第二通信装置可以用于缓存样本数据,即该第二通信装置至少包括图5中的数据缓存模块。Implementation method 1: In the method shown in FIG. 3 , the second communication device may be used to cache sample data, that is, the second communication device at least includes the data cache module shown in FIG. 5 .

在实现方式一中,第二通信装置发送该一个或多个样本数据的过程,包括:该第二通信装置向部署该第一神经网络模型的通信装置或用于存储数据的通信装置发送该一个或多个样本数据。具体地,第二通信装置可以向部署该第一神经网络模型的通信装置发送该一个或多个样本数据,以使得样本数据的接收方能够基于该样本数据实现神经网络模型的推理。或,第二通信装置也可以向用于存储数据的通信装置发送该一个或多个样本数据,以使得样本数据的接收方能够实现样本数据的存储。In implementation method 1, the process of the second communication device sending the one or more sample data includes: the second communication device sending the one or more sample data to the communication device that deploys the first neural network model or the communication device used to store data. Specifically, the second communication device can send the one or more sample data to the communication device that deploys the first neural network model, so that the recipient of the sample data can implement inference of the neural network model based on the sample data. Alternatively, the second communication device can also send the one or more sample data to the communication device used to store data, so that the recipient of the sample data can implement storage of the sample data.

在实现方式一的一种可能的实现方式中,该第二通信装置接收该一个或多个样本数据,以实现一个或多个样本数据的获取。其中,该一个或多个样本数据来自用于收集数据的通信装置(例如包含图5中数据收集模块的通信装置),或该一个或多个样本数据来自用于存储数据的通信装置(例如包含图5中数据存储模块的通信装置)。In one possible implementation of Implementation Mode 1, the second communication device receives the one or more sample data to acquire the one or more sample data, wherein the one or more sample data come from a communication device for collecting data (e.g., a communication device including the data collection module in FIG. 5 ), or the one or more sample data come from a communication device for storing data (e.g., a communication device including the data storage module in FIG. 5 ).

在实现方式一的一种可能的实现方式中,该第二通信装置接收该一个或多个样本数据之前,该方法还包括:该第二通信装置发送用于请求该一个或多个样本数据的请求信息。具体地,第二通信装置还可以发送用于请求该一个或多个样本数据的请求信息,以便于该请求信息的接收方能够基于该请求信息向第二通信装置提供该一个或多个样本数据。In one possible implementation of the first implementation, before the second communication device receives the one or more sample data, the method further includes: the second communication device sending request information for requesting the one or more sample data. Specifically, the second communication device may further send request information for requesting the one or more sample data, so that a recipient of the request information can provide the one or more sample data to the second communication device based on the request information.

实现方式二、在图3所示方法中,第二通信装置可以用于存储样本数据,即该第二通信装置至少包括图5中的数据存储模块。Implementation method 2: In the method shown in FIG. 3 , the second communication device may be used to store sample data, that is, the second communication device at least includes the data storage module shown in FIG. 5 .

实现方式三、在图3所示方法中,第二通信装置可以用于收集样本数据,即该第二通信装置至少包括图5中的数据收集模块。Implementation method three: in the method shown in FIG3 , the second communication device can be used to collect sample data, that is, the second communication device at least includes the data collection module shown in FIG5 .

在实现方式二或实现方式三中,第二通信装置发送该一个或多个样本数据的过程,可以包括:该第二通信装置向用于缓存数据的通信装置发送该一个或多个样本数据。具体地,第二通信装置可以向用于缓存数据的通信装置(例如包含图5中数据缓存模块的通信装置)发送该一个或多个样本数据,以使得样本数据的接收方能够基于该样本数据实现样本数据的缓存之后,该样本数据的接收方能够向部署该第一神经网络模型的通信装置发送该一个或多个样本数据,以实现神经网络模型的推理。In implementation manner 2 or implementation manner 3, the process of the second communication device sending the one or more sample data may include: the second communication device sending the one or more sample data to a communication device for caching data. Specifically, the second communication device may send the one or more sample data to a communication device for caching data (e.g., a communication device including the data caching module in FIG. 5 ), so that a recipient of the sample data can cache the sample data based on the sample data. After that, the recipient of the sample data can send the one or more sample data to the communication device that deploys the first neural network model to implement inference of the neural network model.

在实现方式二或实现方式三的一种可能的实现方式中,该第二通信装置发送该一个或多个样本数据之前,该方法还包括:该第二通信装置接收用于请求该一个或多个样本数据的请求信息。具体地,第二通信装置还可以接收用于请求该一个或多个样本数据的请求信息,以便于该第二通信装置能够基于该请求信息提供该一个或多个样本数据。In one possible implementation of Implementation Mode 2 or Implementation Mode 3, before the second communication device sends the one or more sample data, the method further includes: the second communication device receiving request information for requesting the one or more sample data. Specifically, the second communication device may further receive request information for requesting the one or more sample data, so that the second communication device can provide the one or more sample data based on the request information.

在实现方式二或实现方式三的一种可能的实现方式中,满足以下至少一项时,该第二通信装置发送一个或多个样本数据,包括:In a possible implementation of Implementation Mode 2 or Implementation Mode 3, when at least one of the following conditions is met, the second communication device sends one or more sample data, including:

该第一神经网络模型的推理性能低于(或等于)阈值;The reasoning performance of the first neural network model is lower than (or equal to) a threshold;

该推理数据的数据分布与该第一神经网络模型的前k次输入的推理数据的数据分布的变化大于(或等于)阈值,k为正整数;A difference between the data distribution of the inference data and the data distribution of the inference data inputted into the first neural network model for the previous k times is greater than (or equal to) a threshold value, where k is a positive integer;

部署该第一神经网络模型的通信装置的通信状态发生变化。The communication state of the communication device deploying the first neural network model changes.

具体地,在满足上述至少一项的情况下,可以确定当前第一神经网络模型基于推理数据得到推理结果的性能较差。为此,该第二通信装置可以发送该一个或多个样本数据,使得该一个或多个样本数据的接收方能够在第一神经网络模型的输入中增加样本数据,例如第一通信装置可以基于该第一神经网络模型处理该一个或多个样本数据和推理数据,以得到该推理数据对应的推理结果。 Specifically, when at least one of the above items is satisfied, it can be determined that the current first neural network model has poor performance in obtaining an inference result based on the inference data. To this end, the second communication device can send the one or more sample data so that a recipient of the one or more sample data can add the sample data to the input of the first neural network model. For example, the first communication device can process the one or more sample data and the inference data based on the first neural network model to obtain an inference result corresponding to the inference data.

可选地,在不满足上述至少一项时,可以确定当前第一神经网络模型基于推理数据得到推理结果的性能较优。相应的,该第二通信装置可以不向第一通信装置发送该一个或多个样本数据,即第一神经网络模型的输入可以不包括样本数据,以降低开销。Optionally, when at least one of the above conditions is not satisfied, it may be determined that the current first neural network model has a better performance in obtaining an inference result based on the inference data. Accordingly, the second communication device may not send the one or more sample data to the first communication device, that is, the input of the first neural network model may not include the sample data, to reduce overhead.

需要说明的是,第二通信装置可以在本地确定上述至少一项是否满足,即该第二通信装置可以基于本地确定的结果触发一个或多个样本数据的发送。或者,第二通信装置可以基于其它通信装置的指示,以确定上述至少一项是否满足,即该第二通信装置可以基于其它通信装置的指示确定发送一个或多个样本数据的触发条件是否满足。或者,第二通信装置可以基于其它通信装置的指示触发一个或多个样本数据的发送。可选地,该其它通信装置可以包括前文描述的管理模块和/或数据存储模块等。It should be noted that the second communication device can locally determine whether at least one of the above items is satisfied, that is, the second communication device can trigger the transmission of one or more sample data based on the result of the local determination. Alternatively, the second communication device can determine whether at least one of the above items is satisfied based on the instructions of other communication devices, that is, the second communication device can determine whether the triggering condition for sending one or more sample data is satisfied based on the instructions of other communication devices. Alternatively, the second communication device can trigger the transmission of one or more sample data based on the instructions of other communication devices. Optionally, the other communication device may include the management module and/or data storage module described above, etc.

在实现方式一至实现方式三的任一方式的中,该方法还包括:更新该一个或多个样本数据。具体地,还可以更新该一个或多个样本数据,以期通过更新后的样本数据提升神经网络模型的处理性能。In any one of implementations 1 to 3, the method further includes: updating the one or more sample data. Specifically, the one or more sample data may be updated to improve the processing performance of the neural network model through the updated sample data.

可选地,满足以下至少一项时,更新该一个或多个样本数据,包括:Optionally, the one or more sample data are updated when at least one of the following conditions is met:

该推理数据对应的推理结果对应的推理性能满足第一条件;The reasoning performance corresponding to the reasoning result corresponding to the reasoning data satisfies the first condition;

该推理数据的数据分布满足第二条件;The data distribution of the inference data satisfies the second condition;

部署该第一神经网络模型的通信装置的通信状态发生变化。The communication state of the communication device deploying the first neural network model changes.

具体地,在满足上述至少一项的情况下,可以确定当前的一个或多个样本数据实现的性能较差。为此,可以更新该一个或多个样本数据,以期通过更新后的样本数据提升神经网络模型的处理性能,或,通过更新后的样本数据降低模型处理的复杂度(例如,在更新样本数据的过程为减少样本数据的情况下)。Specifically, if at least one of the above conditions is met, it can be determined that the performance achieved by the current one or more sample data is poor. To this end, the one or more sample data can be updated to improve the processing performance of the neural network model through the updated sample data, or to reduce the complexity of model processing through the updated sample data (for example, when the process of updating the sample data is to reduce the sample data).

可选地,更新一个或多个样本数据,可以包括,增加样本数据、减少样本数据或替换样本数据等。Optionally, updating one or more sample data may include adding sample data, reducing sample data, or replacing sample data.

在一种可能的实现方式中,图5所示通信场景的不同模块可以是相互独立设置的,也可以是将部分模块集成在同一设备/通信装置中的,下面将提供一些实现示例进行介绍。In one possible implementation, the different modules of the communication scenario shown in FIG5 may be independently configured, or some modules may be integrated into the same device/communication apparatus. Some implementation examples are provided below for introduction.

如图6a所示,数据缓存模块和数据存储模块可以设置于同一设备/通信装置中。换言之,该同一设备/通信装置可以提供数据缓存模块和数据存储模块的功能,这两个模块的功能可以参考前文图5的相关描述。As shown in Figure 6a, the data cache module and the data storage module can be set in the same device/communication apparatus. In other words, the same device/communication apparatus can provide the functions of the data cache module and the data storage module. The functions of these two modules can be referred to the relevant description of Figure 5 above.

如图6b所示,数据收集模块、数据缓存模块和数据存储模块可以设置于同一设备/通信装置中。换言之,该同一设备/通信装置可以提供数据收集模块、数据缓存模块和数据存储模块的功能,这三个模块的功能可以参考前文图5的相关描述。As shown in Figure 6b, the data collection module, data cache module, and data storage module can be provided in the same device/communication apparatus. In other words, the same device/communication apparatus can provide the functions of the data collection module, the data cache module, and the data storage module. The functions of these three modules can be referred to the relevant description of Figure 5 above.

如图6c所示,数据收集模块和数据缓存模块可以设置于同一设备/通信装置中。换言之,该同一设备/通信装置可以提供数据收集模块和数据缓存模块的功能,这两个模块的功能可以参考前文图5的相关描述。As shown in Figure 6c, the data collection module and the data cache module can be set in the same device/communication device. In other words, the same device/communication device can provide the functions of the data collection module and the data cache module. The functions of these two modules can be referred to the relevant description of Figure 5 above.

如图6d所示,数据缓存模块和模型推理模块可以设置于同一设备/通信装置中。换言之,该同一设备/通信装置可以提供数据缓存模块和模型推理模块的功能,这两个模块的功能可以参考前文图5的相关描述。As shown in Figure 6d, the data cache module and the model inference module can be set in the same device/communication device. In other words, the same device/communication device can provide the functions of the data cache module and the model inference module. The functions of these two modules can be referred to the relevant description of Figure 5 above.

请参阅图7,本申请实施例提供了一种通信装置700,该通信装置700可以实现上述方法实施例中第二通信装置或第一通信装置的功能,因此也能实现上述方法实施例所具备的有益效果。在本申请实施例中,该通信装置700可以是第一通信装置(或第二通信装置),也可以是第一通信装置(或第二通信装置)内部的集成电路或者元件等,例如芯片。Referring to Figure 7, an embodiment of the present application provides a communication device 700. This communication device 700 can implement the functions of the second communication device or the first communication device in the above-mentioned method embodiment, thereby also achieving the beneficial effects of the above-mentioned method embodiment. In this embodiment of the present application, the communication device 700 can be the first communication device (or the second communication device), or it can be an integrated circuit or component, such as a chip, within the first communication device (or the second communication device).

需要说明的是,收发单元702可以包括发送单元和接收单元,分别用于执行发送和接收。It should be noted that the transceiver unit 702 may include a sending unit and a receiving unit, which are respectively used to perform sending and receiving.

一种可能的实现方式中,当该装置700为用于执行前述实施例中第一通信装置所执行的方法时,该装置700包括处理单元701;该处理单元701用于获取一个或多个样本数据;该处理单元701还用于基于第一神经网络模型处理该一个或多个样本数据和推理数据,得到该推理数据对应的推理结果;其中,该推理结果与该样本数据的至少一个数据特征是相同的。In one possible implementation, when the device 700 is used to execute the method executed by the first communication device in the aforementioned embodiment, the device 700 includes a processing unit 701; the processing unit 701 is used to obtain one or more sample data; the processing unit 701 is also used to process the one or more sample data and inference data based on the first neural network model to obtain an inference result corresponding to the inference data; wherein the inference result is the same as at least one data feature of the sample data.

一种可能的实现方式中,当该装置700为用于执行前述实施例中第二通信装置所执行的方法时,该装置700包括处理单元701和收发单元702;该处理单元701获取一个或多个样本数据;其中,该一个或多个样本数据和推理数据用于经过第一神经网络模型的处理,得到该推理数据对应的推理结果,该推理结果与该样本数据的至少一个数据特征是相同的;该收发单元702用于发送该一个或多个样本数据。In one possible implementation, when the device 700 is used to execute the method executed by the second communication device in the aforementioned embodiment, the device 700 includes a processing unit 701 and a transceiver unit 702; the processing unit 701 obtains one or more sample data; wherein the one or more sample data and inference data are used to be processed by the first neural network model to obtain an inference result corresponding to the inference data, and the inference result is the same as at least one data feature of the sample data; the transceiver unit 702 is used to send the one or more sample data.

It should be noted that, for details such as the information execution processes of the units of the above communication apparatus 700, reference may be made to the descriptions in the foregoing method embodiments of this application, which are not repeated here.

请参阅图8,为本申请提供的通信装置800的另一种示意性结构图,通信装置800包括逻辑电路801和输入输出接口802。其中,通信装置800可以为芯片或集成电路。Please refer to Fig. 8, which is another schematic structural diagram of a communication device 800 provided in this application. The communication device 800 includes a logic circuit 801 and an input/output interface 802. The communication device 800 may be a chip or an integrated circuit.

其中,图7所示收发单元702可以为通信接口,该通信接口可以是图8中的输入输出接口802,该输入输出接口802可以包括输入接口和输出接口。或者,该通信接口也可以是收发电路,该收发电路可以包括输入接口电路和输出接口电路。The transceiver unit 702 shown in FIG7 may be a communication interface, which may be the input/output interface 802 in FIG8 , which may include an input interface and an output interface. Alternatively, the communication interface may be a transceiver circuit, which may include an input interface circuit and an output interface circuit.

可选的,该输入输出接口802用于获取一个或多个样本数据;该逻辑电路801用于基于第一神经网络模型处理该一个或多个样本数据和推理数据,得到该推理数据对应的推理结果;其中,该推理结果与该样本数据的至少一个数据特征是相同的。Optionally, the input-output interface 802 is used to obtain one or more sample data; the logic circuit 801 is used to process the one or more sample data and inference data based on the first neural network model to obtain an inference result corresponding to the inference data; wherein the inference result is the same as at least one data feature of the sample data.

可选地,该逻辑电路801用于获取一个或多个样本数据;其中,该一个或多个样本数据和推理数据用于经过第一神经网络模型的处理,得到该推理数据对应的推理结果,该推理结果与该样本数据的至少一个数据特征是相同的;该输入输出接口802用于发送该一个或多个样本数据。Optionally, the logic circuit 801 is used to obtain one or more sample data; wherein, the one or more sample data and inference data are used to be processed by the first neural network model to obtain an inference result corresponding to the inference data, and the inference result is the same as at least one data feature of the sample data; the input and output interface 802 is used to send the one or more sample data.

其中,逻辑电路801和输入输出接口802还可以执行任一实施例中第一通信装置或第二通信装置执行的其他步骤并实现对应的有益效果,此处不再赘述。The logic circuit 801 and the input/output interface 802 may also execute other steps executed by the first communication device or the second communication device in any embodiment and achieve corresponding beneficial effects, which will not be described in detail here.

在一种可能的实现方式中,图7所示处理单元701可以为图8中的逻辑电路801。In a possible implementation, the processing unit 701 shown in FIG. 7 may be the logic circuit 801 in FIG. 8 .

可选的,逻辑电路801可以是一个处理装置,处理装置的功能可以部分或全部通过软件实现。其中,处理装置的功能可以部分或全部通过软件实现。Optionally, the logic circuit 801 may be a processing device, and the functions of the processing device may be partially or entirely implemented by software. The functions of the processing device may be partially or entirely implemented by software.

可选的,处理装置可以包括存储器和处理器,其中,存储器用于存储计算机程序,处理器读取并执行存储器中存储的计算机程序,以执行任意一个方法实施例中的相应处理和/或步骤。Optionally, the processing device may include a memory and a processor, wherein the memory is used to store a computer program, and the processor reads and executes the computer program stored in the memory to perform corresponding processing and/or steps in any one of the method embodiments.

可选地,处理装置可以仅包括处理器。用于存储计算机程序的存储器位于处理装置之外,处理器通过电路/电线与存储器连接,以读取并执行存储器中存储的计算机程序。其中,存储器和处理器可以集成在一起,或者也可以是物理上互相独立的。Alternatively, the processing device may include only a processor. A memory for storing the computer program is located outside the processing device, and the processor is connected to the memory via circuits/wires to read and execute the computer program stored in the memory. The memory and processor may be integrated or physically separate.

可选地,该处理装置可以是一个或多个芯片,或一个或多个集成电路。例如,处理装置可以是一个或多个现场可编程门阵列(field-programmable gate array,FPGA)、专用集成芯片(application specific integrated circuit,ASIC)、系统芯片(system on chip,SoC)、中央处理器(central processor unit,CPU)、网络处理器(network processor,NP)、数字信号处理电路(digital signal processor,DSP)、微控制器(micro controller unit,MCU),可编程控制器(programmable logic device,PLD)或其它集成芯片,或者上述芯片或者处理器的任意组合等。Optionally, the processing device may be one or more chips, or one or more integrated circuits. For example, the processing device may be one or more field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), system-on-chips (SoCs), central processing units (CPUs), network processors (NPs), digital signal processing circuits (DSPs), microcontrollers (MCUs), programmable logic devices (PLDs), or other integrated chips, or any combination of the above chips or processors.

请参阅图9,为本申请的实施例提供的上述实施例中所涉及的通信装置900,该通信装置900具体可以为上述实施例中的作为终端设备的通信装置,图9所示示例为终端设备通过终端设备(或者终端设备中的部件)实现。Please refer to Figure 9, which shows a communication device 900 involved in the above-mentioned embodiments provided in an embodiment of the present application. The communication device 900 can specifically be a communication device serving as a terminal device in the above-mentioned embodiments. The example shown in Figure 9 is that the terminal device is implemented through the terminal device (or a component in the terminal device).

其中,该通信装置900的一种可能的逻辑结构示意图,该通信装置900可以包括但不限于至少一个处理器901以及通信端口902。Herein, a possible logical structure diagram of the communication device 900 is shown. The communication device 900 may include but is not limited to at least one processor 901 and a communication port 902 .

其中,图7所示收发单元702可以为通信接口,该通信接口可以是图9中的通信端口902,该通信端口902可以包括输入接口和输出接口。或者,该通信端口902也可以是收发电路,该收发电路可以包括输入接口电路和输出接口电路。The transceiver unit 702 shown in FIG7 may be a communication interface, which may be the communication port 902 in FIG9 , which may include an input interface and an output interface. Alternatively, the communication port 902 may be a transceiver circuit, which may include an input interface circuit and an output interface circuit.

进一步可选的,该装置还可以包括存储器903、总线904中的至少一个,在本申请的实施例中,该至少一个处理器901用于对通信装置900的动作进行控制处理。Further optionally, the device may also include at least one of a memory 903 and a bus 904. In an embodiment of the present application, the at least one processor 901 is used to control and process the actions of the communication device 900.

此外,处理器901可以是中央处理器单元,通用处理器,数字信号处理器,专用集成电路,现场可编程门阵列或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。该处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,数字信号处理器和微处理器的组合等等。所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。In addition, the processor 901 can be a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It can implement or execute the various exemplary logic blocks, modules, and circuits described in conjunction with the disclosure of this application. The processor can also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of a digital signal processor and a microprocessor, and so on. Those skilled in the art will clearly understand that for the convenience and brevity of description, the specific working processes of the systems, devices, and units described above can refer to the corresponding processes in the aforementioned method embodiments and will not be repeated here.

需要说明的是,图9所示通信装置900具体可以用于实现前述方法实施例中终端设备所实现的步骤,并实现终端设备对应的技术效果,图9所示通信装置的具体实现方式,均可以参考前述方法实施例中的叙述,此处不再一一赘述。It should be noted that the communication device 900 shown in Figure 9 can be specifically used to implement the steps implemented by the terminal device in the aforementioned method embodiment and achieve the corresponding technical effects of the terminal device. The specific implementation methods of the communication device shown in Figure 9 can refer to the description in the aforementioned method embodiment and will not be repeated here.

Please refer to FIG. 10, which is a schematic structural diagram of the communication apparatus 1000 involved in the foregoing embodiments and provided in an embodiment of this application. The communication apparatus 1000 may specifically be the communication apparatus serving as a network device in the foregoing embodiments; the example shown in FIG. 10 is a case in which it is implemented by a network device (or a component in the network device), and the structure of the communication apparatus may refer to the structure shown in FIG. 10.

其中,图7所示收发单元702可以为通信接口,该通信接口可以是图10中的网络接口1014,该网络接口1014可以包括输入接口和输出接口。或者,该网络接口1014也可以是收发电路,该收发电路可以包括输入接口电路和输出接口电路。The transceiver unit 702 shown in FIG7 may be a communication interface, which may be the network interface 1014 in FIG10 , which may include an input interface and an output interface. Alternatively, the network interface 1014 may be a transceiver circuit, which may include an input interface circuit and an output interface circuit.

处理器1011主要用于对通信协议以及通信数据进行处理,以及对整个通信装置进行控制,执行软件程序,处理软件程序的数据,例如用于支持通信装置执行实施例中所描述的动作。通信装置可以包括基带处理器和中央处理器,基带处理器主要用于对通信协议以及通信数据进行处理,中央处理器主要用于对整个终端设备进行控制,执行软件程序,处理软件程序的数据。图10中的处理器1011可以集成基带处理器和中央处理器的功能,本领域技术人员可以理解,基带处理器和中央处理器也可以是各自独立的处理器,通过总线等技术互联。本领域技术人员可以理解,终端设备可以包括多个基带处理器以适应不同的网络制式,终端设备可以包括多个中央处理器以增强其处理能力,终端设备的各个部件可以通过各种总线连接。该基带处理器也可以表述为基带处理电路或者基带处理芯片。该中央处理器也可以表述为中央处理电路或者中央处理芯片。对通信协议以及通信数据进行处理的功能可以内置在处理器中,也可以以软件程序的形式存储在存储器中,由处理器执行软件程序以实现基带处理功能。Processor 1011 is primarily used to process communication protocols and communication data, control the entire communication device, execute software programs, and process software program data, for example, to support the communication device in performing the actions described in the embodiments. The communication device may include a baseband processor and a central processing unit. The baseband processor is primarily used to process communication protocols and communication data, while the central processing unit is primarily used to control the entire terminal device, execute software programs, and process software program data. Processor 1011 in Figure 10 may integrate the functions of both a baseband processor and a central processing unit. Those skilled in the art will appreciate that the baseband processor and the central processing unit may also be independent processors interconnected via a bus or other technology. Those skilled in the art will appreciate that a terminal device may include multiple baseband processors to accommodate different network standards, multiple central processing units to enhance its processing capabilities, and various components of the terminal device may be connected via various buses. The baseband processor may also be referred to as a baseband processing circuit or a baseband processing chip. The central processing unit may also be referred to as a central processing circuit or a central processing chip. The functionality for processing communication protocols and communication data may be built into the processor or stored in memory as a software program, which is executed by the processor to implement the baseband processing functionality.

The memory is mainly used to store software programs and data. The memory 1012 may exist independently and be connected to the processor 1011. Optionally, the memory 1012 may be integrated with the processor 1011, for example, within a single chip. The memory 1012 can store program code for executing the technical solutions of the embodiments of this application, and the processor 1011 controls the execution; the executed computer program code may also be regarded as a driver of the processor 1011.

FIG. 10 shows only one memory and one processor. In an actual terminal device, there may be multiple processors and multiple memories. The memory may also be referred to as a storage medium, a storage device, or the like. The memory may be a storage element on the same chip as the processor, that is, an on-chip storage element, or an independent storage element, which is not limited in the embodiments of this application.

The transceiver 1013 may be used to support the reception or transmission of radio frequency signals between the communication device and a terminal, and the transceiver 1013 may be connected to the antenna 1015. The transceiver 1013 includes a transmitter Tx and a receiver Rx. Specifically, the one or more antennas 1015 may receive a radio frequency signal; the receiver Rx of the transceiver 1013 is configured to receive the radio frequency signal from the antenna, convert it into a digital baseband signal or a digital intermediate frequency signal, and provide the digital baseband signal or digital intermediate frequency signal to the processor 1011, so that the processor 1011 performs further processing on it, such as demodulation and decoding. In addition, the transmitter Tx in the transceiver 1013 is configured to receive a modulated digital baseband signal or digital intermediate frequency signal from the processor 1011, convert it into a radio frequency signal, and transmit the radio frequency signal through the one or more antennas 1015. Specifically, the receiver Rx may selectively perform one or more stages of down-mixing and analog-to-digital conversion on the radio frequency signal to obtain the digital baseband signal or digital intermediate frequency signal, and the order of the down-mixing and the analog-to-digital conversion is adjustable. The transmitter Tx may selectively perform one or more stages of up-mixing and digital-to-analog conversion on the modulated digital baseband signal or digital intermediate frequency signal to obtain the radio frequency signal, and the order of the up-mixing and the digital-to-analog conversion is adjustable. The digital baseband signal and the digital intermediate frequency signal may be collectively referred to as digital signals.
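A minimal sketch of these receive and transmit chains, assuming NumPy, a single mixing stage, and idealized data conversion; the function names, sample-rate handling, and quantization model are illustrative assumptions, not part of the application:

```python
import numpy as np

def rx_chain(rf_samples: np.ndarray, lo_freq: float, fs: float, adc_bits: int = 12) -> np.ndarray:
    """Receiver Rx: down-mix the RF signal and digitize it. The description allows
    the down-mixing and A/D conversion order to be swapped; one order is shown."""
    t = np.arange(len(rf_samples)) / fs
    baseband = rf_samples * np.exp(-2j * np.pi * lo_freq * t)   # one stage of down-mixing
    step = 2.0 / (2 ** adc_bits)
    digital = np.round(baseband / step) * step                  # analog-to-digital quantization
    return digital                                              # digital baseband/IF signal handed to the processor

def tx_chain(digital_signal: np.ndarray, lo_freq: float, fs: float) -> np.ndarray:
    """Transmitter Tx: up-mix the modulated digital baseband/IF signal to RF
    (the digital-to-analog conversion is idealized here)."""
    t = np.arange(len(digital_signal)) / fs
    rf = np.real(digital_signal * np.exp(2j * np.pi * lo_freq * t))  # one stage of up-mixing
    return rf                                                        # RF signal fed to the antenna(s)
```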

The transceiver 1013 may also be referred to as a transceiver unit, a transceiver, a transceiver device, or the like. Optionally, a component in the transceiver unit that implements the receiving function may be regarded as a receiving unit, and a component that implements the transmitting function may be regarded as a transmitting unit; that is, the transceiver unit includes a receiving unit and a transmitting unit. The receiving unit may also be referred to as a receiver, an input port, a receiving circuit, or the like, and the transmitting unit may be referred to as a transmitter, an emitter, a transmitting circuit, or the like.

It should be noted that the communication device 1000 shown in FIG. 10 may be specifically used to implement the steps implemented by the network device in the foregoing method embodiments and to achieve the corresponding technical effects of the network device. For the specific implementations of the communication device 1000 shown in FIG. 10, reference may be made to the descriptions in the foregoing method embodiments, and details are not repeated here.

Refer to FIG. 11, which is a schematic structural diagram of the communication device involved in the foregoing embodiments according to an embodiment of this application.

It can be understood that the communication device 110 includes, for example, modules, units, elements, circuits, or interfaces that are appropriately configured together to implement the technical solutions provided in this application. The communication device 110 may be the terminal device or network device described above, or a component (for example, a chip) of such a device, used to implement the methods described in the method embodiments. The communication device 110 includes one or more processors 111. The processor 111 may be a general-purpose processor, a dedicated processor, or the like, for example, a baseband processor or a central processing unit. The baseband processor may be used to process communication protocols and communication data, and the central processing unit may be used to control the communication device (for example, a RAN node, a terminal, or a chip), execute software programs, and process data of the software programs.

Optionally, in one design, the processor 111 may include a program 113 (sometimes also referred to as code or instructions), and the program 113 may run on the processor 111 so that the communication device 110 performs the methods described in the embodiments. In yet another possible design, the communication device 110 includes a circuit (not shown in FIG. 11).

Optionally, the communication device 110 may include one or more memories 112 storing a program 114 (sometimes also referred to as code or instructions), and the program 114 may run on the processor 111 so that the communication device 110 performs the methods described in the foregoing method embodiments.

Optionally, the processor 111 and/or the memory 112 may include AI modules 117 and 118, which are used to implement AI-related functions. The AI modules may be implemented in software, in hardware, or in a combination of software and hardware. For example, an AI module may include a radio intelligence control (RIC) module, such as a near-real-time RIC or a non-real-time RIC.

Optionally, data may also be stored in the processor 111 and/or the memory 112. The processor and the memory may be provided separately or integrated together.

Optionally, the communication device 110 may further include a transceiver 115 and/or an antenna 116. The processor 111, which may also be referred to as a processing unit, controls the communication device (for example, a RAN node or a terminal). The transceiver 115 may also be referred to as a transceiver unit, a transceiver, a transceiver circuit, or the like, and is configured to implement the transceiver functions of the communication device through the antenna 116.

The processing unit 701 shown in FIG. 7 may be the processor 111. The transceiver unit 702 shown in FIG. 7 may be a communication interface, and the communication interface may be the transceiver 115 in FIG. 11, which may include an input interface and an output interface. Alternatively, the transceiver 115 may be a transceiver circuit, and the transceiver circuit may include an input interface circuit and an output interface circuit.

An embodiment of this application further provides a computer-readable storage medium storing one or more computer-executable instructions. When the computer-executable instructions are executed by a processor, the processor performs the method described in the possible implementations of the first communication device or the second communication device in the foregoing embodiments.

An embodiment of this application further provides a computer program product (also referred to as a computer program). When the computer program product is executed by a processor, the processor performs the method of the possible implementations of the first communication device or the second communication device described above.

An embodiment of this application further provides a chip system. The chip system includes at least one processor, configured to support a communication device in implementing the functions involved in the possible implementations of the communication device described above. Optionally, the chip system further includes an interface circuit, and the interface circuit provides program instructions and/or data for the at least one processor. In one possible design, the chip system may further include a memory, configured to store the program instructions and data necessary for the communication device. The chip system may consist of chips, or may include chips and other discrete components. The communication device may specifically be the first communication device or the second communication device in the foregoing method embodiments.

An embodiment of this application further provides a communication system. The communication system includes the first communication device and the second communication device in any of the foregoing embodiments.

In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is merely a division by logical function, and in an actual implementation there may be other division methods; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Claims (21)

1. A communication method, comprising:
obtaining one or more sample data; and
processing the one or more sample data and inference data based on a first neural network model to obtain an inference result corresponding to the inference data, wherein the inference result and the sample data are the same in at least one data feature.

2. The method according to claim 1, wherein the data feature comprises at least one of the following: data dimension, parameter quantity, data content, data type, or physical quantity.

3. The method according to claim 1 or 2, wherein the processing the one or more sample data and the inference data based on the first neural network model to obtain the inference result corresponding to the inference data is performed when at least one of the following is satisfied:
the inference performance of the first neural network model is lower than a threshold;
the change between the data distribution of the inference data and the data distribution of the inference data of the previous k inputs of the first neural network model is greater than a threshold, k being a positive integer; or
the communication state changes.

4. The method according to any one of claims 1 to 3, further comprising:
when the update frequency of the one or more sample data is greater than a threshold, or when the performance corresponding to the inference result of the inference data is lower than a threshold, updating the first neural network model to obtain a second neural network model.

5. The method according to any one of claims 1 to 4, wherein the obtaining one or more sample data comprises: receiving the one or more sample data.

6. The method according to claim 5, further comprising: sending request information for requesting the one or more sample data.

7. A communication method, comprising:
obtaining one or more sample data, wherein the one or more sample data satisfy the following: an inference result, corresponding to inference data, obtained by processing the one or more sample data and the inference data by a first neural network model is the same as the sample data in at least one data feature; and
sending the one or more sample data.

8. The method according to claim 7, wherein the method is applied to a communication device that caches data, and the sending the one or more sample data comprises:
sending the one or more sample data to a communication device on which the first neural network model is deployed or to a communication device for storing data.

9. The method according to claim 8, wherein the obtaining one or more sample data comprises:
receiving the one or more sample data, wherein the one or more sample data come from a communication device for collecting data or from a communication device for storing data.

10. The method according to claim 9, wherein before the receiving the one or more sample data, the method further comprises:
sending request information for requesting the one or more sample data.

11. The method according to claim 7, wherein the method is applied to a communication device that stores data or a communication device that collects data, and the sending the one or more sample data comprises:
sending the one or more sample data to a communication device for caching data.

12. The method according to claim 11, wherein before the sending the one or more sample data, the method further comprises:
receiving request information for requesting the one or more sample data.

13. The method according to any one of claims 7 to 12, wherein the data feature comprises at least one of the following: data dimension, parameter quantity, data content, data type, or physical quantity.

14. The method according to any one of claims 7 to 13, wherein the one or more sample data are sent when at least one of the following is satisfied:
the inference performance of the first neural network model is lower than a threshold;
the change between the data distribution of the inference data and the data distribution of the inference data of the previous k inputs of the first neural network model is greater than a threshold, k being a positive integer; or
the communication state of the communication device on which the first neural network model is deployed changes.

15. The method according to any one of claims 7 to 14, further comprising: updating the one or more sample data.

16. The method according to claim 15, wherein the one or more sample data are updated when at least one of the following is satisfied:
the inference performance corresponding to the inference result of the inference data satisfies a first condition;
the data distribution of the inference data satisfies a second condition; or
the communication state of the communication device on which the first neural network model is deployed changes.

17. A communication apparatus, comprising modules for performing the method according to any one of claims 1 to 16.

18. A communication apparatus, comprising at least one processor coupled to a memory, wherein the at least one processor is configured to perform the method according to any one of claims 1 to 16.

19. The communication apparatus according to claim 18, wherein the communication apparatus is a chip or a chip system.

20. A readable storage medium storing a computer program or instructions which, when executed by a communication apparatus, implement the method according to any one of claims 1 to 16.

21. A computer program product, comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 16.
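Purely as a non-authoritative sketch of the flow recited in claims 1 to 4: the distribution-shift statistic, the threshold values, and every function, method, and parameter name below (`infer_with_samples`, `model.process`, `model.inference_performance`, `model.update`) are assumptions for illustration, not definitions from the claims.

```python
from typing import List
import numpy as np

def distribution_shift(x: np.ndarray, history: List[np.ndarray]) -> float:
    """Crude distribution-change measure against the previous k inference inputs
    (a mean-difference statistic is assumed; the claims do not fix a metric)."""
    if not history:
        return 0.0
    ref = np.mean(np.stack(history), axis=0)
    return float(np.abs(np.mean(x) - np.mean(ref)))

def infer_with_samples(model, samples: List[np.ndarray], inference_data: np.ndarray,
                       history: List[np.ndarray], perf_threshold: float = 0.9,
                       shift_threshold: float = 0.5, state_changed: bool = False):
    """Process the sample data together with the inference data when any trigger
    of claim 3 holds; otherwise run plain inference on the inference data alone."""
    triggered = (
        model.inference_performance() < perf_threshold                     # performance below threshold
        or distribution_shift(inference_data, history) > shift_threshold   # distribution change over last k inputs
        or state_changed                                                    # communication state changed
    )
    if triggered:
        result = model.process(samples, inference_data)   # result shares data feature(s) with the samples
    else:
        result = model.process([], inference_data)
    history.append(inference_data)
    return result

def maybe_update(model, sample_update_freq: float, result_perf: float,
                 freq_threshold: float = 10.0, perf_threshold: float = 0.8):
    """Claim 4: obtain a second model when the sample-data update frequency exceeds
    a threshold or the performance of the inference result falls below one."""
    if sample_update_freq > freq_threshold or result_perf < perf_threshold:
        return model.update()   # hypothetical hook producing the second neural network model
    return model
```

A concrete deployment would replace the mean-difference statistic and the placeholder thresholds with whatever distribution-change measure and performance metric the system actually monitors.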
PCT/CN2024/127224 2024-02-29 2024-10-25 Communication method and related apparatus Pending WO2025179919A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202410235482.2A CN120568384A (en) 2024-02-29 2024-02-29 Communication method and related device
CN202410235482.2 2024-02-29

Publications (1)

Publication Number Publication Date
WO2025179919A1 (en)

Family

ID=96831749

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/127224 Pending WO2025179919A1 (en) 2024-02-29 2024-10-25 Communication method and related apparatus

Country Status (2)

Country Link
CN (1) CN120568384A (en)
WO (1) WO2025179919A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113642727A (en) * 2021-08-06 2021-11-12 北京百度网讯科技有限公司 Training method of neural network model and multimedia information processing method and device
WO2023115254A1 (en) * 2021-12-20 2023-06-29 Oppo广东移动通信有限公司 Data processing method and device
CN116933847A (en) * 2022-04-02 2023-10-24 华为技术有限公司 Neural network model adjustment method, electronic device and readable storage medium
CN117010454A (en) * 2022-11-02 2023-11-07 腾讯科技(深圳)有限公司 Neural network training method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN120568384A (en) 2025-08-29

Similar Documents

Publication Publication Date Title
WO2025179919A1 (en) Communication method and related apparatus
WO2025118980A1 (en) Communication method and related device
WO2025190252A1 (en) Communication method and related apparatus
WO2025092159A1 (en) Communication method and related device
WO2025175756A1 (en) Communication method and related device
WO2025092160A1 (en) Communication method and related device
WO2025190244A1 (en) Communication method and related apparatus
WO2025019990A1 (en) Communication method and related device
WO2025179920A1 (en) Communication method and related apparatus
WO2025190246A1 (en) Communication method and related apparatus
WO2025189861A1 (en) Communication method and related apparatus
WO2025025193A1 (en) Communication method and related device
WO2025208880A1 (en) Communication method, and related apparatus
WO2025190248A1 (en) Communication method and related apparatus
WO2025189860A1 (en) Communication method and related apparatus
WO2025059907A1 (en) Communication method and related device
WO2025189831A1 (en) Communication method and related apparatus
WO2025103115A1 (en) Communication method and related device
WO2025139534A1 (en) Communication method and related device
WO2025167443A1 (en) Communication method and related device
WO2025086262A1 (en) Communication method and related apparatus
WO2025059908A1 (en) Communication method and related device
WO2025118759A1 (en) Communication method and related device
WO2025107835A1 (en) Communication method and related device
WO2025140282A1 (en) Communication method and related device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24926780

Country of ref document: EP

Kind code of ref document: A1