WO2025066789A1 - Data processing method and communication device - Google Patents
Data processing method and communication device
- Publication number
- WO2025066789A1 (PCT/CN2024/116032)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- message
- information
- identifier
- delay
- quantization resolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/098—Distributed learning, e.g. federated learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/16—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
Definitions
- the present application relates to the field of communications, and in particular to a data processing method and a communication device.
- Federated learning is a distributed machine learning paradigm that enables multiple devices to conduct joint training and build shared machine learning models without sharing data resources. Specifically, in the process of federated learning, the distributed nodes participating in federated learning can conduct joint training by sharing artificial intelligence (AI)-assisted data.
- the AI-assisted data can usually be compressed in a quantized manner.
- the AI-assisted data is represented by multiple bits, and the number of these bits can determine the quantization level (or quantization resolution), thereby affecting the performance of the gradient quantization algorithm.
- the fewer bits used by the quantization algorithm, the lower the quantization resolution, the greater the quantization error introduced during the AI-assisted data upload process, and the longer the model convergence time; the more bits used by the quantization algorithm, the higher the quantization resolution, and the longer the communication time for transmitting the AI-assisted data.
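- As a worked illustration of this tradeoff, the following Python sketch (a simple uniform quantizer; the function name and the bit widths are illustrative assumptions) shows how the quantization error shrinks while the payload grows as more bits are used:

```python
import numpy as np

def uniform_quantize(x, num_bits):
    """Uniformly quantize a float vector to num_bits per element, then dequantize it."""
    levels = 2 ** num_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / levels or 1.0        # avoid division by zero for constant input
    codes = np.round((x - x_min) / scale)          # integer codes in [0, levels]
    return codes * scale + x_min                   # dequantized approximation

rng = np.random.default_rng(0)
gradient = rng.normal(size=10_000)                 # stand-in for AI collaboration data

for bits in (4, 8, 16):
    mse = float(np.mean((gradient - uniform_quantize(gradient, bits)) ** 2))
    print(f"{bits:2d} bits -> payload {bits * gradient.size} bits, quantization MSE {mse:.2e}")
```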
- the embodiments of the present application provide a data processing method and a communication device, and the second device can quantize the AI collaborative data through dynamically changing quantization resolution, which is conducive to improving the training efficiency of collaborative learning.
- the present application provides a data processing method.
- the method includes: the second device sends a first message to the first device, and the first message includes one or more of communication condition information of the second device, computing resource information of the second device, or model training information of the second device; the second device receives a second message from the first device, and the second message includes indication information, and the indication information indicates a quantization resolution, and the quantization resolution is related to one or more of the communication condition information, computing resource information, or model training information; further, the second device quantizes the first artificial intelligence AI collaboration data based on the quantization resolution to obtain second AI collaboration data; and sends the second AI collaboration data to the first device.
- the quantization resolution of the second device is dynamically adjusted in combination with the communication conditions, computing resources or model training information of the current second device, which is beneficial to improving the compatibility of the quantization resolution with the current second device.
- the training efficiency of collaborative learning is improved.
- the communication condition information includes one or more of communication bandwidth information, transmission delay, reference signal received power RSRP, reference signal received quality RSRQ, signal to interference plus noise ratio SINR, bit error rate or throughput;
- the computing resource information includes one or more of the size of computing resources, the size of computing power or computing delay;
- the model training information includes one or more of the test set loss value, the training set loss value, the gradient norm size, the gradient size, and the model size.
- the second device receives a third message from the first device, the third message being used to request assistance in updating the quantization resolution; the aforementioned first message is a response message corresponding to the third message.
- the third message includes one or more of the following information: a measurement identifier, an identifier of the second device, an identifier of the first device, an identifier of the model, indication information indicating the reason for updating the quantization resolution, an identifier of a training round corresponding to the model, an identifier of information requested for feedback, or configuration information for transmitting the first message; wherein the information requested for feedback includes one or more of communication condition information, computing resource information, or model training information; and the first message is generated based on the identifier of the information requested for feedback.
- the indication information indicating the reason for updating the quantization resolution includes the time difference between the first delay and the average delay, or the ratio between the first delay and the average delay; wherein the first delay is the delay of the first device waiting for the second device to feed back the AI collaboration data, and the average delay is the average value of the delay of the first device waiting for multiple distributed nodes to feed back the AI collaboration data, and the multiple distributed nodes include the second device.
- the configuration information for transmitting the first message includes one or more of a transmission period for transmitting the first message, a trigger condition for transmitting the first message, or a number of times of transmitting the first message.
- the second device can automatically send the first message to the first device according to the configuration information to assist in updating the quantization resolution, which helps to save communication transmission resources.
- the first message further includes one or more of the following information: a measurement identifier, a second device identifier, a first device identifier, an identifier of a model, and an identifier of a training round corresponding to the model.
- the present application provides a data processing method.
- the method includes: the first device receives a first message from a second device, the first message including one or more of communication condition information of the second device, computing resource information of the second device, or model training information of the second device; further, based on one or more of the communication condition information, the computing resource information, or the model training information, a quantization resolution is determined; and a second message is sent to the second device, the second message including indication information, the indication information indicating the quantization resolution, and the quantization resolution is related to one or more of the communication condition information, the computing resource information, or the model training information; the first device receives second artificial intelligence AI collaboration data from the second device, and the second AI collaboration data is obtained by quantizing the first AI collaboration data based on the quantization resolution.
- the communication condition information includes one or more of communication bandwidth information, transmission delay, reference signal received power RSRP, reference signal received quality RSRQ, signal to interference plus noise ratio SINR, bit error rate or throughput;
- the computing resource information includes one or more of the size of computing resources, the size of computing power or computing delay;
- the model training information includes one or more of the test set loss value, the training set loss value, the gradient norm size, the gradient size, and the model size.
- the first device sends a third message to the second device, where the third message is used to request assistance in updating the quantization resolution; and the aforementioned first message is a response message corresponding to the third message.
- the third message includes one or more of the following information: a measurement identifier, an identifier of the second device, an identifier of the first device, a model identifier, indication information indicating the reason for updating the quantization resolution, an identifier of a training round corresponding to the model, an identifier of information requested for feedback, and/or configuration information for transmitting the first message; wherein the information requested for feedback includes one or more of communication condition information, computing resource information, or model training information; and the first message is generated based on the identifier of the information requested for feedback.
- the indication information indicating the reason for updating the quantization resolution includes the time difference between the first delay and the average delay, or the ratio between the first delay and the average delay; wherein the first delay is the delay of the first device waiting for the second device to feed back the AI collaboration data, and the average delay is the average value of the delay of the first device waiting for multiple distributed nodes to feed back the AI collaboration data, and the multiple distributed nodes include the second device.
- the configuration information for transmitting the first message includes one or more of a transmission period for transmitting the first message, a triggering condition for transmitting the first message, or a number of times of transmitting the first message.
- the first message further includes one or more of the following information: a measurement identifier, a second device identifier, a first device identifier, an identifier of a model, and an identifier of a training round corresponding to the model.
- the present application provides a communication device, which may be a second device, or a device in the second device, or a device that can be used in combination with the second device.
- the communication device may also be a chip system.
- the communication device may execute the method described in the first aspect.
- the functions of the communication device may be implemented by hardware, or by hardware executing corresponding software.
- the hardware or software includes one or more units or modules corresponding to the above functions.
- the unit or module may be software and/or hardware.
- the operations and beneficial effects performed by the communication device may refer to the method and beneficial effects described in the first aspect above.
- the present application provides a communication device, which may be a first device, or a device in the first device, or a device that can be used in combination with the first device.
- the communication device may also be a chip system.
- the communication device may execute the method described in the second aspect.
- the functions of the communication device may be implemented by hardware, or by hardware executing corresponding software.
- the hardware or software includes one or more units or modules corresponding to the above functions.
- the unit or module may be software and/or hardware.
- the operations and beneficial effects performed by the communication device may refer to the method and beneficial effects described in the second aspect above.
- the present application provides a communication device, comprising a processor and an interface circuit, wherein the interface circuit is used to receive signals from other communication devices outside the communication device and transmit them to the processor or send signals from the processor to other communication devices outside the communication device, and the processor is used to implement the method described in the first aspect through a logic circuit or by executing code instructions, or the processor is used to implement the method described in the second aspect through a logic circuit or by executing code instructions.
- the present application provides a computer-readable storage medium, in which a computer program or instructions are stored.
- when the computer program or instructions are executed, the method described in the first aspect is implemented, or the method described in the second aspect is implemented.
- the present application provides a computer program product comprising instructions, which, when a communication device reads and executes the instructions, causes the communication device to execute the method as described in the first aspect, or causes the communication device to execute the method as described in the second aspect.
- the present application provides a communication system, comprising a communication device for executing the method described in the first aspect above, and a communication device for executing the method described in the second aspect above.
- FIG1 is a schematic diagram of a communication system provided in an embodiment of the present application.
- FIG2 is a schematic diagram of an application framework of AI in an NR system provided in an embodiment of the present application.
- FIG3 is a schematic diagram of a process flow of federated learning provided in an embodiment of the present application.
- FIG4a is a schematic diagram of a flow chart of a data processing method provided in an embodiment of the present application.
- FIG4b is a schematic diagram of a collaborative learning process provided by an embodiment of the present application.
- FIG4c is a schematic diagram of another collaborative learning process provided by an embodiment of the present application.
- FIG5 is a schematic diagram of the structure of a communication device provided in an embodiment of the present application.
- FIG6 is a schematic diagram of the structure of another communication device provided in an embodiment of the present application.
- FIG. 1 is a schematic diagram of the architecture of a communication system 1000 used in an embodiment of the present application.
- the communication system includes a radio access network (RAN) 100 and a core network 200.
- the communication system 1000 may also include the Internet 300.
- the RAN 100 includes at least one RAN node (such as 110a and 110b in FIG. 1 , collectively referred to as 110), and may also include at least one terminal (such as 120a-120j in FIG. 1 , collectively referred to as 120).
- the RAN 100 may also include other RAN nodes, for example, a wireless relay device and/or a wireless backhaul device (not shown in FIG. 1 ).
- the terminal 120 is connected to the RAN node 110 wirelessly, and the RAN node 110 is connected to the core network 200 wirelessly or by wire.
- the core network device in the core network 200 and the RAN node 110 in the RAN 100 may be independent and different physical devices, or may be the same physical device that integrates the logical functions of the core network device and the logical functions of the RAN node. Terminals and RAN nodes may be connected to each other via wired or wireless means. It should be noted that the RAN node 110 may also be referred to as a network device 110 in the following text.
- RAN 100 may be an evolved universal terrestrial radio access (E-UTRA) system, a new radio (NR) system, or a future radio access system defined in the 3rd generation partnership project (3GPP). RAN 100 may also include two or more of the above different radio access systems. RAN 100 may also be an open RAN (O-RAN).
- RAN nodes, also known as radio access network equipment, RAN entities, or access nodes, are used to help terminals access the communication system wirelessly.
- RAN nodes can be base stations, evolved NodeBs (eNodeBs), transmission reception points (TRPs), next generation NodeBs (gNBs) in the fifth generation (5G) mobile communication system, next generation NodeBs in the sixth generation (6G) mobile communication system, or base stations in future mobile communication systems.
- RAN nodes can be macro base stations (such as 110a in FIG. 1 ), micro base stations or indoor stations (such as 110b in FIG. 1 ), or relay nodes or donor nodes.
- the cooperation of multiple RAN nodes can help the terminal achieve wireless access, and different RAN nodes respectively implement part of the functions of the base station.
- the RAN node can be a centralized unit (CU), a distributed unit (DU) or a radio unit (RU).
- the CU here completes the functions of the radio resource control protocol and the packet data convergence protocol (PDCP) of the base station, and can also complete the function of the service data adaptation protocol (SDAP);
- the DU completes the functions of the radio link control layer and the medium access control (MAC) layer of the base station, and can also complete the functions of part or all of the physical layer.
- RU can be used to implement the transceiver function of the radio frequency signal.
- CU and DU can be two independent RAN nodes, or they can be integrated in the same RAN node, such as integrated in the baseband unit (BBU).
- the RU may be included in a radio frequency device, such as a remote radio unit (RRU) or an active antenna unit (AAU).
- the CU may be further divided into two types of RAN nodes: CU-control plane and CU-user plane.
- RAN nodes may have different names.
- the CU may be called an open CU (O-CU)
- the DU may be called an open DU (O-DU)
- the RU may be called an open RU (O-RU).
- the RAN node in the embodiments of the present application may be implemented by a software module, a hardware module, or a combination of a software module and a hardware module.
- the RAN node may be a server loaded with a corresponding software module.
- the embodiments of the present application do not limit the specific technology and specific device form adopted by the RAN node. For ease of description, the following description takes a base station as an example of a RAN node.
- the quantization resolution will affect the training time of the federated learning model to a certain extent.
- the distributed nodes of federated learning will quantize AI collaborative data based on a fixed (or understood as predetermined) quantization resolution.
- the resources that the distributed nodes can use for federated learning may change.
- in this case, the quantization resolution may no longer match the current state of the distributed nodes, which may increase the model training time of federated learning and reduce the training efficiency of federated learning.
- the present application provides a data processing method and a communication device.
- the data processing method and the communication device provided by the embodiment of the present application are described in detail below in conjunction with the accompanying drawings.
- the collaborative learning mentioned in the present application can be understood as an AI model training method in which multiple devices collaborate (or are understood as collaborative or joint) to perform training.
- the collaborative learning can be distributed learning, federated learning, edge learning, or segmented learning.
- the first device and the second device are devices participating in collaborative learning.
- the first device can be a centralized node corresponding to collaborative learning
- the second device can be a distributed node in collaborative learning (or understood as a device that performs model training).
- the first device can be any one of the devices in RAN 100 (such as a RAN node or terminal, etc.), the functional network element in CN 200, or the server corresponding to the Internet 300 in the communication system of Figure 1
- the second device can also be any one of the devices in RAN 100, the functional network element in CN 200, or the server corresponding to the Internet 300 in the communication system of Figure 1; this application does not make specific limitations on this.
- the second device sends a first message to the first device, wherein the first message includes one or more of communication condition information of the second device, computing resource information of the second device, or model training information of the second device.
- the second device is a device that uploads AI collaborative data during the collaborative learning process
- the first device is a device that determines the quantization resolution of the second device during the collaborative learning process.
- the first device can be a service node (server) of the federated learning
- the second device can be one of the multiple client nodes (client) of the federated learning.
- the first device receives a first message from the second device, and the first message is used to assist the first device in updating the quantization resolution of the second device.
- the second device can indicate the communication environment in which the second device is located (or understood as the communication environment corresponding to the second device) through the communication condition information in the first message, indicate the computing resources that the second device can use for collaborative learning through the computing resource information in the first message, and indicate the relevant information of the second device during the model training process (such as the size of the AI collaboration data or quantization-resolution-related data, etc.) through the model training information in the first message.
- this application does not specifically limit the way in which the second device obtains communication condition information, computing resource information and model training information.
- the communication condition information includes one or more of communication bandwidth information, transmission delay, reference signal receiving power (RSRP), reference signal receiving quality (RSRQ), signal to interference plus noise ratio (SINR), bit error rate or throughput.
- the computing resource information includes one or more of the size of computing resources, the size of computing power or computing delay.
- the model training information includes one or more of the test set loss value, the training set loss value, the gradient norm size, the gradient size, and the model size.
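- For illustration, the first message can be thought of as a structure along the following lines (the field names are hypothetical; the embodiments do not prescribe a particular encoding):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FirstMessage:
    """Hypothetical container for the information the second device may report."""
    # communication condition information
    bandwidth_hz: Optional[float] = None
    transmission_delay_ms: Optional[float] = None
    rsrp_dbm: Optional[float] = None
    rsrq_db: Optional[float] = None
    sinr_db: Optional[float] = None
    bit_error_rate: Optional[float] = None
    throughput_bps: Optional[float] = None
    # computing resource information
    compute_resource_size: Optional[float] = None
    compute_power_flops: Optional[float] = None
    compute_delay_ms: Optional[float] = None
    # model training information
    test_set_loss: Optional[float] = None
    training_set_loss: Optional[float] = None
    gradient_norm: Optional[float] = None
    model_size_bytes: Optional[int] = None

# The second device reports only the subset it was asked for (or chooses to send).
msg = FirstMessage(transmission_delay_ms=35.0, compute_power_flops=2.0e12, gradient_norm=0.8)
```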
- in addition to one or more of the communication condition information of the second device, the computing resource information of the second device, or the model training information of the second device, the first message sent by the second device may also include one or more of the following information: a measurement identifier, used to identify the reporting of the first message and to associate it with the subsequent second message; a second device identifier, used to identify the device that sends the first message, or understood as the device whose quantization resolution is to be updated; a first device identifier, used to identify the device that receives the first message, or understood as the device that determines the quantization resolution; a model identifier, used to identify the AI model corresponding to the first message; and an identifier of the training round corresponding to the model, used to indicate for which training round of AI collaboration data feedback the first message is used to update the quantization resolution.
- the first device determines a quantization resolution according to one or more of the communication condition information, the computing resource information, or the model training information.
- the first device determines the quantization resolution according to the information included in the first message.
- the first device determines the quantization resolution according to one or more of the communication condition information, the computing resource information, or the model training information with the goal of minimizing the training time of the model.
- the first device will wait for multiple distributed nodes (including the second device) of collaborative learning to feed back AI collaboration data before executing subsequent steps (such as aggregating the AI collaboration data, etc.). It can be seen that the training duration of the model is affected by the delay with which the distributed nodes of collaborative learning feed back AI collaboration data.
- the first device can determine a larger quantization resolution (i.e., a larger number of bits corresponding to the quantization resolution) for distributed nodes with large computing power, small delay, or large gradient norm to improve the accuracy of the AI collaborative data fed back by the distributed nodes, and determine a smaller quantization resolution (i.e., a smaller number of bits corresponding to the quantization resolution) for distributed nodes with small computing power, large delay, or small gradient norm to reduce the communication delay between the first device and the distributed node, thereby reducing the training delay.
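- One possible selection rule consistent with this heuristic is sketched below (the thresholds and candidate bit widths are illustrative assumptions, not values fixed by the embodiments):

```python
def select_quantization_bits(compute_power_flops, delay_ms, avg_delay_ms, gradient_norm,
                             low_bits=4, high_bits=16,
                             power_threshold=1.0e12, grad_norm_threshold=0.1):
    """Assign more bits to fast, informative nodes and fewer bits to slow ones."""
    slow_or_uninformative = (compute_power_flops < power_threshold
                             or delay_ms > avg_delay_ms
                             or gradient_norm < grad_norm_threshold)
    return low_bits if slow_or_uninformative else high_bits

# A straggling client with a weak gradient gets the coarser resolution.
print(select_quantization_bits(5.0e11, delay_ms=120.0, avg_delay_ms=80.0, gradient_norm=0.05))
```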
- the first device sends a second message to the second device, where the second message includes indication information indicating a quantization resolution.
- the indication information may indicate the quantization resolution by directly indicating the quantization resolution (i.e., the number of bits used for quantization), for example, the second message indicates quantization with 4 bits or 16 bits, etc.; or it may indicate the quantization index corresponding to the quantization resolution, for example, if the quantization index included in the second message is 0, it means that the quantization is based on 4 bits, and if the quantization index included in the second message is 1, it means that the quantization is based on 16 bits.
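- Both indication modes can be sketched as follows (the index-to-resolution table is an assumed example configuration shared in advance by the two devices):

```python
# Assumed index-to-resolution table known to both devices (example values only).
QUANT_INDEX_TABLE = {0: 4, 1: 16}

def resolve_quantization_bits(indication):
    """Interpret the indication information carried in the second message."""
    if "bits" in indication:                       # mode 1: resolution indicated directly
        return indication["bits"]
    return QUANT_INDEX_TABLE[indication["index"]]  # mode 2: index into a configured table

assert resolve_quantization_bits({"bits": 4}) == 4
assert resolve_quantization_bits({"index": 1}) == 16
```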
- the second message may also include one or more of the following information: a measurement identifier, corresponding to the measurement identifier contained in the first message, used to identify the issuance of the second message; a second device identifier, used to identify the device receiving the second message, or understood as the device applying the quantization resolution; a first device identifier, used to identify the device sending the second message, or understood as the device determining the quantization resolution; a model identifier, used to identify the AI model corresponding to the quantization resolution; and an identifier of the training round corresponding to the model, used to indicate for which training round of AI collaboration data feedback the quantization resolution is updated.
- the second device quantizes the first AI collaboration data based on the quantization resolution to obtain second AI collaboration data.
- the second device trains the model to obtain the first AI collaboration data (i.e., the AI collaboration data before quantization). Further, the second device quantizes the first AI collaboration data based on the quantization resolution to obtain the second AI collaboration data (i.e., the AI collaboration data after quantization).
- the second device sends second AI collaboration data to the first device.
- the second device sends a fourth message to the first device, and the fourth message includes the second AI collaboration data.
- the fourth message may also include one or more of the following information: a second device identifier, used to identify the device that sends the fourth message; a first device identifier, used to identify the device that receives the fourth message; a model identifier, used to identify the AI model corresponding to the second AI collaboration data; and an identifier of the training round corresponding to the model, used to indicate which training round the second AI collaboration data belongs to.
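- A client-side sketch combining S404 and S405 is shown below; the field names are hypothetical, and the quantizer is passed in (for example, the uniform quantizer sketched earlier):

```python
def build_fourth_message(first_ai_data, bits, quantizer,
                         second_device_id, first_device_id, model_id, round_id):
    """Quantize the first AI collaboration data (S404) and package it as a fourth message (S405)."""
    second_ai_data = quantizer(first_ai_data, bits)   # e.g., the uniform quantizer sketched above
    return {
        "second_device_id": second_device_id,   # device that sends the fourth message
        "first_device_id": first_device_id,     # device that receives the fourth message
        "model_id": model_id,                   # AI model the data corresponds to
        "training_round_id": round_id,          # training round the data belongs to
        "ai_collaboration_data": second_ai_data,
    }
```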
- the quantization resolution of the second device can be dynamically adjusted in combination with the communication conditions, computing resources or model training information of the current second device, which is beneficial to improving the compatibility of the quantization resolution with the current second device.
- By quantizing and transmitting AI collaboration data at a quantization resolution adapted to the current second device, it is beneficial to improve the training efficiency of collaborative learning.
- the collaborative learning process is described in detail in the following two cases according to the triggering method of triggering the second device to send the first message to the first device.
- the devices participating in the collaborative learning include the first device, the second device and the third device as an example.
- the devices participating in the collaborative learning may also include other devices (not shown in the figure), and this application does not make specific limitations on this.
- Case 1: triggering the second device to send the first message to the first device through passive triggering.
- the second device receives a third message from another device (other devices except the second device, including the first device), and the third message is used to trigger the second device to send the aforementioned first message to the first device.
- the first message may be a response message corresponding to the third message.
- the following text uses the example that the third message is sent by the first device to the second device as an example for illustrative explanation, which should not be regarded as a specific limitation on the sending device corresponding to the third message.
- FIG. 4b is a schematic diagram of a collaborative learning process provided by the present application.
- the collaborative learning includes the following steps S411 to S419:
- the first device sends a third message to the second device, where the third message is used to request the second device to assist in updating a quantization resolution.
- the first device determines that there is a need to update the quantization resolution of the second device, the first device sends a third message to the second device, and the third message is used to request the second device to assist in updating the quantization resolution.
- the specific name of the third message is not specifically limited in this application.
- the third message can be a request message for assisting in updating the quantization resolution.
- delay 1 is the maximum value of delays 1 to 4, and the absolute value of the difference between delay 1 and the average of delays 1 to 4 is greater than the first threshold; the first threshold is a preset value greater than 0, and its specific value can be adjusted according to the specific application scenario.
- in order to reduce the time consumed by device 1 waiting for all clients to feed back AI collaboration data, device 1 can reduce the delay of waiting for device 2 to feed back AI collaboration data by adjusting the quantization resolution of device 2, which can be regarded as device 1 determining the need to update the quantization resolution of device 2 (that is, device 2 can be understood as the second device). Further, device 1 sends a third message to device 2, and the third message is used to request device 2 to assist device 1 in updating the quantization resolution, that is, to request device 2 to send the first message to device 1.
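- A sketch of how device 1 might detect such a straggler, using the delay difference (or ratio) that is later carried in the third message (the threshold value is an illustrative assumption):

```python
def check_straggler(delays_ms, first_threshold_ms=50.0):
    """Flag the slowest client if its delay deviates from the average by more than the threshold."""
    avg_delay = sum(delays_ms) / len(delays_ms)
    slowest = max(range(len(delays_ms)), key=lambda i: delays_ms[i])
    gap = abs(delays_ms[slowest] - avg_delay)     # time difference carried in the third message
    ratio = delays_ms[slowest] / avg_delay        # or, alternatively, the ratio
    return (slowest, gap, ratio) if gap > first_threshold_ms else None

# delays 1 to 4 reported for the four clients in the example above
print(check_straggler([210.0, 90.0, 80.0, 100.0]))   # -> (0, 90.0, 1.75)
```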
- Measurement identifier, also known as measurement ID, is used to indicate the request for assistance in updating the quantization resolution.
- the identifier of the second device, used to indicate the device whose quantization resolution is to be updated, or used to indicate the device that receives the third message, or used to indicate the device that sends the first message.
- the identifier of the first device, used to indicate the device that determines the quantization resolution, or used to indicate the device that sends the third message, or used to indicate the device that receives the first message.
- Model identifier, used to indicate the AI model (or understood as an AI collaboration model) for which the second device needs to update the quantization resolution.
- the model identifier can be the identifier of the AI model, or information associated with the AI model (such as the service identifier corresponding to the AI model), etc.
- the third message includes field A, and the value of field A is used to indicate the reason for this update of the quantization resolution.
- the third message indicates that the reason for updating the quantization resolution is: the latency of the second device (i.e., the latency of the first device waiting for the second device to feed back the AI collaboration data) is large, or it is understood that the efficiency with which the second device feeds back the AI collaboration data needs to be improved and the quantization resolution needs to be updated.
- the third message indicates that the reason for updating the quantization resolution is: the error of the AI collaborative data fed back by the second device is large, or it is understood that the accuracy of the AI collaborative data fed back by the second device needs to be improved and the quantization resolution needs to be updated.
- the indication information indicating the reason for updating the quantization resolution includes the time difference between the first delay and the average delay, or the ratio between the first delay and the average delay.
- the first delay is the delay of the first device waiting for the second device to feed back the AI collaboration data
- the average delay is the average value of the delay of the first device waiting for multiple distributed nodes to feed back the AI collaboration data
- the multiple distributed nodes include the second device. It can be understood that when the reason for updating the quantization resolution is that the delay of the second device is large (or it is understood that the quantization resolution needs to be updated to improve the efficiency with which the second device feeds back the AI collaboration data), the indication information included in the third message can be the time difference between the first delay and the average delay, or the ratio between the first delay and the average delay.
- the identifier of the training round corresponding to the model is used to indicate for which training round of AI collaboration data feedback the first device updates, based on the first message, the quantization resolution used by the second device.
- the identifier of the Nth training round of the model included in the third message indicates that the first device updates, based on the first message, the quantization resolution used when the second device feeds back the AI collaboration data of the Nth training round; that is, when the second device subsequently feeds back the AI collaboration data of the Nth training round, it can use the quantization resolution updated based on the first message for quantization.
- the training round may also be referred to as a round; the same applies throughout the text.
- the identifier of the information requested for feedback is used to indicate the information that the first device expects (or, understood as, requests) the second device to feed back.
- the information requested for feedback includes one or more of communication condition information, computing resource information, or model training information.
- the second device may subsequently generate a first message based on the identifier of the information requested for feedback, that is, the first message is generated based on the identifier of the information requested for feedback.
- the information included in the first message may be part or all of the information requested for feedback in the third message.
- the third message sent by the first device may include the information requested for feedback.
- the identifier of the information requested for feedback includes: the identifier of the communication condition information (such as the identifier of the transmission delay or the identifier of the RSRP, etc.), the identifier of the computing resource information, and the identifier of the model training information (such as the identifier of the test set loss value or the identifier of the gradient norm size).
- according to the third message, the second device can feed back only part of the information (that is, feed back one or two of the communication condition information, the computing resource information, or the model training information), or it can feed back all of it (that is, feed back the communication condition information, the computing resource information, and the model training information).
- the configuration information for transmitting the first message includes one or more of a transmission period for transmitting the first message, a trigger condition for transmitting the first message, or a number of times of transmitting the first message.
- the transmission period of transmitting the first message may be determined based on a time interval value/index or based on a training round. Taking the transmission period determined based on a training round as an example, when the configuration information indicates that the transmission period of the first message is 3 training rounds, the configuration information indicates that the second device transmits the first message once every 3 training rounds.
- the triggering condition for transmitting the first message may include one or more of the following conditions:
- 1. a triggering condition for triggering instant transmission of the first message (i.e., the second device transmits the first message once when the triggering condition is met), for example, the gradient norm of the second device is greater than or equal to the gradient norm threshold, the communication bandwidth is less than or equal to the bandwidth threshold, the channel quality (e.g., RSRP, RSRQ, SINR, bit error rate, throughput, etc.) is less than or equal to the channel quality threshold, or the computing power is less than or equal to the computing power threshold;
- 2. a triggering condition for triggering periodic transmission of the first message (i.e., when the triggering condition is met, the second device starts to transmit the first message periodically according to the transmission period of the first message), for example, the gradient norm of the second device is greater than or equal to the gradient norm threshold, the communication bandwidth is less than or equal to the bandwidth threshold, the channel quality is less than or equal to the channel quality threshold, or the computing power is less than or equal to the computing power threshold;
- 3. a triggering condition for ending the periodic transmission of the first message (i.e., when the second device is periodically transmitting the first message and the triggering condition is met, the second device stops periodically transmitting the first message), for example, the gradient norm of the second device is less than the gradient norm threshold, the communication bandwidth is greater than the bandwidth threshold, the channel quality is greater than the channel quality threshold, or the computing power is greater than the computing power threshold.
- the number of times the first message is transmitted is used to indicate the total number of times the second device needs to transmit the first message after receiving the configuration information for transmitting the first message.
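- The three kinds of triggering conditions and the transmission period described above could be evaluated on the second device roughly as follows (the field names and the combination logic are illustrative assumptions):

```python
def evaluate_first_message_triggers(state, cfg, periodic_active, round_idx):
    """Illustrative check of the instant / start-periodic / stop-periodic trigger conditions."""
    degraded = (state["gradient_norm"] >= cfg["grad_norm_threshold"]
                or state["bandwidth"] <= cfg["bandwidth_threshold"]
                or state["channel_quality"] <= cfg["channel_quality_threshold"]
                or state["compute_power"] <= cfg["compute_power_threshold"])
    if degraded and not periodic_active:
        periodic_active = True                    # condition 2: start periodic transmission
    elif not degraded and periodic_active:
        periodic_active = False                   # condition 3: stop periodic transmission
    instant = degraded                            # condition 1: transmit immediately
    periodic_due = periodic_active and round_idx % cfg["period_rounds"] == 0
    return instant or periodic_due, periodic_active
```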
- S412 The second device sends the first message to the first device according to the third message.
- the first message is understood as a response message of the third message.
- the measurement identifier in the first message corresponds to (or is understood to be the same as) the measurement identifier included in the third message, or the identifier of the model in the first message is the same as the identifier of the model included in the third message, or the identifier of the training round corresponding to the model in the first message is the same as the identifier of the training round corresponding to the model included in the third message.
- the present application does not specifically limit the specific name of the first message.
- the first message may be a response message for assisting in updating the quantization resolution.
- for the remaining contents of the first message, please refer to the specific description of the first message in S401 above, which will not be repeated here.
- the first device determines a quantization resolution according to one or more of the communication condition information, the computing resource information, or the model training information.
- S414 The first device sends a second message to the second device, where the second message includes indication information indicating a quantization resolution.
- the second device quantizes the first AI collaboration data based on the quantization resolution to obtain second AI collaboration data.
- the second device sends second AI collaboration data to the first device.
- the specific implementation process of S413 to S416 can refer to the description of the specific implementation process of S402 to S405 above, which will not be repeated here.
- the third device sends third AI collaboration data to the first device.
- the third device trains the model to obtain AI collaboration data of the third device; further, the third device quantizes the AI collaboration data of the third device based on its corresponding quantization resolution to obtain third AI collaboration data, and sends the third AI collaboration data to the first device.
- the specific content of the AI collaboration data of the third device can refer to the specific description of the aforementioned AI collaboration data.
- the AI collaboration data of the third device includes one or more of the following data: gradient data, AI model, AI sub-model, AI model output, AI model intermediate value, etc.
- the quantization resolution of the third device can also refer to the data processing method provided in the aforementioned FIG. 4a for updating the quantization resolution, and this application does not specifically limit this.
- the first device aggregates the second AI collaboration data and the third AI collaboration data to obtain aggregated AI collaboration data.
- the first device receives the AI collaboration data fed back from all (or part) of the distributed nodes (such as the second device and the third device) participating in the collaborative learning, it aggregates the AI collaboration data to obtain aggregated AI collaboration data.
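- A minimal sketch of this aggregation step is given below; weighted averaging is only one possible aggregation rule and is not mandated by the embodiments:

```python
import numpy as np

def aggregate_collaboration_data(client_data, weights=None):
    """Aggregate (de)quantized AI collaboration data received from the distributed nodes."""
    stacked = np.stack(client_data)                        # one row per distributed node
    if weights is None:
        weights = np.full(len(client_data), 1.0 / len(client_data))
    return np.average(stacked, axis=0, weights=weights)    # e.g., federated averaging

second_ai_data = np.zeros(10)   # placeholder for the second device's quantized data
third_ai_data = np.ones(10)     # placeholder for the third device's quantized data
aggregated = aggregate_collaboration_data([second_ai_data, third_ai_data])
```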
- the first device sends the aggregated AI collaboration data to the second device and the third device respectively.
- after the first device obtains the aggregated AI collaboration data, the first device sends the aggregated AI collaboration data to the distributed nodes participating in the collaborative learning, so that the distributed nodes participating in the collaborative learning (including the second device and the third device) can each perform model training based on the aggregated AI collaboration data.
- when the first device sends the aggregated AI collaboration data to the distributed nodes participating in collaborative learning, it can also indicate the identifier of the distributed node (for example, the identifier of the second device or the identifier of the third device), used to indicate the device receiving the aggregated AI collaboration data; the first device identifier, used to identify the device sending the aggregated AI collaboration data; the identifier of the model, used to indicate the AI model corresponding to the aggregated AI collaboration data; and the identifier of the training round corresponding to the model, used to indicate which training round of the model the aggregated AI collaboration data corresponds to.
- the aggregated AI collaboration data sent by the first device to the distributed nodes participating in collaborative learning can be unquantized data (that is, it can be understood as aggregated data obtained by aggregating the second AI collaboration data and the third AI collaboration data), or quantized data (that is, it can be understood as aggregated data obtained by aggregating the second AI collaboration data and the third AI collaboration data and then quantizing the result based on an aggregated quantization resolution).
- the aggregated quantization resolution can be understood as the quantization resolution of the quantized aggregated AI collaborative data, and the aggregated quantization resolution can also be determined based on one or more of the communication condition information of the first device, the communication condition information of the second device, or the computing resource information of the second device, and this application does not make specific restrictions.
- Case 2: triggering the second device to send the first message to the first device through active triggering by the second device.
- FIG. 4c is a schematic diagram of another collaborative learning process provided by the present application.
- the collaborative learning includes the following steps S421 to S428:
- the second device sends a first message to the first device, where the first message is used to request the first device to update a quantization resolution of the second device.
- when the second device determines that it has a need to update the quantization resolution, the second device actively sends the first message to the first device based on the need to update the quantization resolution.
- the first message can be a request message for updating the quantization resolution.
- for the remaining contents of the first message, please refer to the specific description of the first message in S401 above, which will not be repeated here.
- device 1 and device 2 are devices that perform federated learning tasks.
- device 1 is the server that performs the federated learning task
- device 2 is the client that performs the federated learning task.
- device 2 will feed back AI collaboration data to device 1. For example, the second device can determine that it has a need to update the quantization resolution when one or more of the following conditions are met:
- the gradient norm of the AI model is greater than or equal to the gradient norm threshold
- the communication bandwidth of the second device is less than or equal to the bandwidth threshold
- the channel quality of the second device is less than or equal to the channel quality threshold
- the computing power of the second device is less than or equal to the computing power threshold
- the first device determines a quantization resolution according to one or more of the communication condition information, the computing resource information, or the model training information.
- the first device sends a second message to the second device, where the second message includes indication information indicating a quantization resolution.
- the second device quantizes the first AI collaboration data based on the quantization resolution to obtain second AI collaboration data.
- the second device sends second AI collaboration data to the first device.
- the third device sends third AI collaboration data to the first device.
- the first device aggregates the second AI collaboration data and the third AI collaboration data to obtain aggregated AI collaboration data.
- the first device sends the aggregated AI collaboration data to the second device and the third device respectively.
- the specific implementation process of S422 to S428 can refer to the description of the specific implementation process of S413 to S419 mentioned above, which will not be repeated here.
- the device includes a hardware structure and/or software module corresponding to the execution of each function. It should be easily appreciated by those skilled in the art that, in combination with the units and method steps of each example described in the embodiments disclosed in this application, the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is executed in the form of hardware or in the form of computer software driving the hardware depends on the specific application scenario and design constraints of the technical solution.
- Figures 5 and 6 are schematic diagrams of the structures of possible communication devices provided by embodiments of the present application. These communication devices can be used to implement the functions of the devices in the above method embodiments, and thus can also achieve the beneficial effects possessed by the above method embodiments.
- the communication device can be the first device in Figures 4a to 4c, or a module (such as a chip) applied to the first device, or the communication device can be the second device in Figures 4a to 4c, or a module (such as a chip) applied to the second device.
- the communication device 500 includes a processing unit 510 and a transceiver unit 520.
- the communication device 500 is used to implement the function of the first device or the function of the second device in the method embodiments shown in Figs. 4a to 4c above.
- the transceiver unit 520 is used to send a first message to the first device, and the first message includes one or more of communication condition information of the second device, computing resource information of the second device, or model training information of the second device; the transceiver unit 520 is also used to receive a second message from the first device, and the second message includes indication information, which indicates a quantization resolution, and the quantization resolution is related to one or more of the communication condition information, computing resource information, or model training information; the processing unit 510 is used to quantize the first artificial intelligence AI collaboration data based on the quantization resolution to obtain second AI collaboration data; the transceiver unit 520 is also used to send the second AI collaboration data to the first device.
- the communication condition information includes one or more of communication bandwidth information, transmission delay, reference signal received power RSRP, reference signal received quality RSRQ, signal to interference plus noise ratio SINR, bit error rate or throughput;
- the computing resource information includes one or more of the size of computing resources, the size of computing power or computing delay;
- the model training information includes one or more of the test set loss value, the training set loss value, the gradient norm size, the gradient size, and the model size.
- the indication information indicating the reason for updating the quantization resolution includes the time difference between the first delay and the average delay, or the ratio between the first delay and the average delay; wherein the first delay is the delay of the first device waiting for the second device to feed back the AI collaboration data, and the average delay is the average value of the delay of the first device waiting for multiple distributed nodes to feed back the AI collaboration data, and the multiple distributed nodes include the second device.
- the configuration information for transmitting the first message includes one or more of a transmission period for transmitting the first message, a triggering condition for transmitting the first message, or a number of times of transmitting the first message.
- For a more detailed description of the transceiver unit 520 and the processing unit 510, reference may be made to the related description of the second device in the method embodiments shown in FIG. 4a to FIG. 4c.
- the transceiver unit 520 is used to receive a first message from the second device, and the first message includes one or more of the communication condition information of the second device, the computing resource information of the second device, or the model training information of the second device; the processing unit 510 is used to determine the quantization resolution based on one or more of the communication condition information, the computing resource information, or the model training information; the transceiver unit 520 is also used to send a second message to the second device, and the second message includes indication information, and the indication information indicates the quantization resolution, and the quantization resolution is related to one or more of the communication condition information, the computing resource information, or the model training information; the transceiver unit 520 is also used to receive second artificial intelligence AI collaboration data from the second device, and the second AI collaboration data is obtained by quantizing the first AI collaboration data based on the quantization resolution.
- the communication condition information includes one or more of communication bandwidth information, transmission delay, reference signal received power RSRP, reference signal received quality RSRQ, signal to interference plus noise ratio SINR, bit error rate or throughput;
- the computing resource information includes one or more of the size of computing resources, the size of computing power or computing delay;
- the model training information includes one or more of a test set loss value, a training set loss value, a gradient norm, a gradient magnitude, or a model size.
- the transceiver unit 520 is further used to send a third message to the second device, where the third message is used to request assistance in updating the quantization resolution; and the first message is a response message corresponding to the third message.
- the third message includes one or more of the following information: a measurement identifier, an identifier of the second device, an identifier of the first device, a model identifier, indication information indicating the reason for updating the quantization resolution, an identifier of a training round corresponding to the model, an identifier of the information requested for feedback, or configuration information for transmitting the first message; the information requested for feedback includes one or more of the communication condition information, the computing resource information, or the model training information; and the first message is generated based on the identifier of the information requested for feedback.
- the indication information indicating the reason for updating the quantization resolution includes the time difference between the first delay and the average delay, or the ratio of the first delay to the average delay; the first delay is the delay experienced by the first device while waiting for the second device to feed back AI collaboration data, and the average delay is the average of the delays experienced by the first device while waiting for multiple distributed nodes to feed back AI collaboration data, where the multiple distributed nodes include the second device.
- the first message further includes one or more of the following information: a measurement identifier, an identifier of the second device, an identifier of the first device, an identifier of the model, and an identifier of a training round corresponding to the model.
- for a more detailed description of the transceiver unit 520 and the processing unit 510, reference may be made to the related description of the first device in the method embodiment shown in FIG. 4a to FIG. 4c.
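Purely as an illustration of the first-device side summarized in the list above, the sketch below computes the first delay and the average delay, derives the time difference and ratio that can be carried as the reason for updating, and maps the ratio to a quantization resolution. The thresholds and the one-bit-per-step adjustment rule are assumptions of this sketch; this application does not prescribe a specific mapping policy.

```python
def choose_resolution(first_delay_s, all_delays_s, current_bits,
                      min_bits=2, max_bits=8):
    """Pick a quantization resolution for one distributed node.

    first_delay_s: delay waiting for this node's AI collaboration data.
    all_delays_s:  delays waiting for all distributed nodes in the round
                   (the list includes this node).
    The ratio-based rule below is an assumed example policy: a straggler
    (ratio well above 1) gets fewer bits, a fast node gets more bits.
    """
    average_delay_s = sum(all_delays_s) / len(all_delays_s)
    ratio = first_delay_s / average_delay_s
    if ratio > 1.2:        # slower than average: reduce the resolution
        new_bits = max(min_bits, current_bits - 1)
    elif ratio < 0.8:      # faster than average: allow a finer resolution
        new_bits = min(max_bits, current_bits + 1)
    else:
        new_bits = current_bits
    # Indication information carried in the second message; the reason for
    # updating can be expressed as a time difference or as a ratio.
    return {
        "quantization_resolution_bits": new_bits,
        "update_reason": {
            "delay_difference_s": first_delay_s - average_delay_s,
            "delay_ratio": ratio,
        },
    }

# Example: this node took 1.5 s while the round's delays average 1.0 s,
# so its resolution is lowered from 6 bits to 5 bits.
second_message = choose_resolution(first_delay_s=1.5,
                                   all_delays_s=[0.9, 1.0, 0.6, 1.5],
                                   current_bits=6)
```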
- the chip implements the function of the second device in the above-mentioned method embodiment.
- the chip receiving information from the first device can be understood as meaning that the information is first received by other modules in the second device (such as a radio frequency module or an antenna) and then forwarded to the chip by these modules.
- the chip sending information to the first device can be understood as meaning that the information is first sent by the chip to other modules in the second device (such as a radio frequency module or an antenna) and then sent to the first device by these modules.
- Entities A and B can be RAN nodes or terminals, or modules inside the RAN nodes or terminals.
- the sending and receiving of information can be information interaction between a RAN node and a terminal, for example, information interaction between a base station and a terminal; the sending and receiving of information can also be information interaction between two RAN nodes, for example, information interaction between a CU and a DU; the sending and receiving of information can also be information interaction between different modules inside a device, for example, information interaction between a terminal chip and other modules of the terminal, or information interaction between a base station chip and other modules in the base station.
- processors in the embodiments of the present application may be a central processing unit (CPU), or other general-purpose processors, digital signal processors (DSP), application specific integrated circuits (ASIC), field programmable gate arrays (FPGA) or other programmable logic devices, transistor logic devices, hardware components or any combination thereof.
- the general-purpose processor may be a microprocessor or any conventional processor.
- the method steps in the embodiments of the present application can be implemented in hardware or in software instructions that can be executed by a processor.
- the software instructions can be composed of corresponding software modules, and the software modules can be stored in random access memory, flash memory, read-only memory, programmable read-only memory, erasable programmable read-only memory, electrically erasable programmable read-only memory, register, hard disk, mobile hard disk, CD-ROM or any other form of storage medium known in the art.
- An exemplary storage medium is coupled to the processor so that the processor can read information from the storage medium and write information to the storage medium.
- the storage medium can also be a component of the processor.
- the processor and the storage medium can be located in an ASIC.
- the ASIC can be located in a base station or a terminal.
- the processor and the storage medium can also be present in a base station or a terminal as discrete components.
- the computer program product includes one or more computer programs or instructions.
- the computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, a user device or other programmable device.
- the computer program or instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
- the computer program or instructions may be transmitted from one website site, computer, server or data center to another website site, computer, server or data center by wire or wireless means.
- the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center that integrates one or more available media.
- the available medium may be a magnetic medium, such as a floppy disk, a hard disk, or a magnetic tape; it may also be an optical medium, such as a digital video disc; it may also be a semiconductor medium, such as a solid state drive.
- the computer readable storage medium may be a volatile or non-volatile storage medium, or may include both volatile and non-volatile storage media.
- “at least one” means one or more, and “more than one” means two or more.
- “And/or” describes the association relationship of associated objects, indicating that three relationships may exist.
- A and/or B can mean: A exists alone, both A and B exist, or B exists alone, where A and B may be singular or plural.
- the character “/” generally indicates that the previous and next associated objects are in an “or” relationship; in the formula of the present application, the character “/” indicates that the previous and next associated objects are in a “division” relationship.
- “Including at least one of A, B and C” can mean: including A; including B; including C; including A and B; including A and C; including B and C; including A, B and C.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
The present application provides a data processing method and a communication apparatus. The method includes the following steps: a second device sends a first message to a first device, the first message including one or more of communication condition information of the second device, computing resource information of the second device, or model training information of the second device; the second device receives a second message from the first device, the second message including indication information, the indication information indicating a quantization resolution, the quantization resolution being related to one or more of the communication condition information, the computing resource information, or the model training information; the second device then quantizes first artificial intelligence (AI) collaboration data based on the quantization resolution to obtain second AI collaboration data, and sends the second AI collaboration data to the first device. By means of this method, the second device can quantize AI collaboration data with a dynamically changing quantization resolution, so that the training efficiency of federated learning is improved.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311246336.1A CN119697040A (zh) | 2023-09-25 | 2023-09-25 | 一种数据处理方法及通信装置 |
| CN202311246336.1 | 2023-09-25 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025066789A1 true WO2025066789A1 (fr) | 2025-04-03 |
Family
ID=95041550
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2024/116032 Pending WO2025066789A1 (fr) | 2023-09-25 | 2024-08-30 | Procédé de traitement de données et appareil de communication |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN119697040A (fr) |
| WO (1) | WO2025066789A1 (fr) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111401552A (zh) * | 2020-03-11 | 2020-07-10 | 浙江大学 | 一种基于调整批量大小与梯度压缩率的联邦学习方法和系统 |
| US20220245527A1 (en) * | 2021-02-01 | 2022-08-04 | Qualcomm Incorporated | Techniques for adaptive quantization level selection in federated learning |
| CN115174397A (zh) * | 2022-07-28 | 2022-10-11 | 河海大学 | 联合梯度量化与带宽分配的联邦边缘学习训练方法及系统 |
| CN115280335A (zh) * | 2020-03-24 | 2022-11-01 | Oppo广东移动通信有限公司 | 一种机器学习模型训练方法、电子设备及存储介质 |
- 2023
  - 2023-09-25: CN application CN202311246336.1A, published as CN119697040A (zh), active, pending
- 2024
  - 2024-08-30: WO application PCT/CN2024/116032, published as WO2025066789A1 (fr), active, pending
Also Published As
| Publication number | Publication date |
|---|---|
| CN119697040A (zh) | 2025-03-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11838787B2 (en) | Functional architecture and interface for non-real-time ran intelligent controller | |
| CN113873538A (zh) | 一种模型数据传输方法及通信装置 | |
| EP4580230A1 (fr) | Procédé et appareil de communication | |
| WO2024169522A1 (fr) | Procédé et appareil de communication | |
| WO2024051789A1 (fr) | Procédé de gestion de faisceau | |
| WO2025066789A1 (fr) | Procédé de traitement de données et appareil de communication | |
| WO2024026846A1 (fr) | Procédé de traitement de modèle d'intelligence artificielle et dispositif associé | |
| WO2025060349A1 (fr) | Procédés, dispositifs et support lisible par ordinateur pour service d'intelligence artificielle (ia) | |
| US20250148370A1 (en) | Method and apparatus for intelligent operating of communication system | |
| EP4590017A1 (fr) | Procédé et appareil de communication | |
| EP4657815A1 (fr) | Procédé et appareil de communication | |
| WO2025209121A1 (fr) | Procédé d'indication de format de données et produit associé | |
| US20250373510A1 (en) | Communication method, communication apparatus, and communication system | |
| CN119922528A (zh) | 一种通信方法及装置 | |
| WO2025124135A1 (fr) | Procédé et appareil de communication | |
| TW202533603A (zh) | 一種通信方法、通信裝置及通信系統 | |
| WO2025228310A1 (fr) | Procédé de communication et appareil associé | |
| WO2025124143A1 (fr) | Procédé d'entraînement de modèle et appareil de communication | |
| EP4657907A1 (fr) | Procédé de communication, appareil de communication et système de communication | |
| WO2025140003A1 (fr) | Procédé de communication et appareil de communication | |
| WO2025039276A1 (fr) | Procédé de transmission de modèle et dispositif de communication | |
| CN119856416A (zh) | 用于无线通信的方法及装置 | |
| WO2024255785A1 (fr) | Procédé de communication et appareil de communication | |
| CN118193001A (zh) | 模型更新方法、装置、设备及存储介质 | |
| JP2025529936A (ja) | 通信方法および通信装置 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24870335; Country of ref document: EP; Kind code of ref document: A1 |