
WO2024067280A1 - Method and apparatus for updating AI model parameters, and communication device


Info

Publication number
WO2024067280A1
Authority
WO
WIPO (PCT)
Prior art keywords
parameters
neurons
model
row
order
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2023/119938
Other languages
English (en)
Chinese (zh)
Inventor
杨昂
孙鹏
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Publication of WO2024067280A1


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0499 Feedforward networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06N 3/098 Distributed learning, e.g. federated learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H04L 41/0813 Configuration setting characterised by the conditions triggering a change of settings
    • H04L 41/082 Configuration setting characterised by the conditions triggering a change of settings, the condition being updates or upgrades of network functionality

Definitions

  • The present application belongs to the field of communication technology, and specifically relates to a method, an apparatus, and a communication device for updating AI model parameters.
  • Artificial Intelligence (AI) can be implemented by various models, such as neural networks, decision trees, support vector machines, and Bayesian classifiers.
  • In related technologies, the model is updated by transmitting the entire AI model; this update method is inefficient and has a large network signaling overhead.
  • the embodiments of the present application provide a method, device and communication equipment for updating AI model parameters, which solve the problems of low efficiency of AI model update method and high network signaling overhead in related technologies.
  • a method for updating AI model parameters comprising:
  • the first node sends first information related to the AI model parameters to the second node;
  • the first information includes at least one of the following: an update mode of the AI model parameters, and an indication method for the update parameters of the AI model.
  • a method for updating AI model parameters comprising:
  • the second node receives first information related to AI model parameters sent by the first node
  • the second node updates the AI model parameters according to the first information
  • the first information includes at least one of the following: an update mode of the AI model parameters, and an indication method for the update parameters of the AI model.
  • a device for updating AI model parameters comprising:
  • a first sending module configured to send first information related to the AI model parameters to the second node
  • the first information includes at least one of the following: an update mode of the AI model parameters, and an indication method for the update parameters of the AI model.
  • a device for updating AI model parameters comprising:
  • a first receiving module configured to receive first information related to AI model parameters sent by the first node
  • An updating module configured to update the AI model parameters according to the first information
  • the first information includes at least one of the following: an update mode of the AI model parameters, and an indication method for the update parameters of the AI model.
  • a communication device comprising: a processor, a memory, and a program or instruction stored in the memory and executable on the processor, wherein the program or instruction, when executed by the processor, implements the steps of the method described in the first aspect or the second aspect.
  • a readable storage medium on which a program or instruction is stored.
  • When the program or instruction is executed by a processor, the steps of the method described in the first aspect or the second aspect are implemented.
  • a chip comprising a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is used to run a program or instruction to implement the steps of the method described in the first aspect or the second aspect.
  • a computer program/program product is provided, wherein the computer program/program product is stored in a non-volatile storage medium, and the program/program product is executed by at least one processor to implement the steps of the method described in the first aspect or the second aspect.
  • a communication system comprising a terminal and a network side device, the terminal being used to execute the steps of the method described in the first aspect, and the network side device being used to execute the steps of the method described in the second aspect.
  • the first node may send the update mode of the AI model parameters and/or the indication method of the update parameters of the AI model parameters to the second node.
  • the second node does not need to compile or recompile the AI model.
  • the second node updates the existing AI model parameters according to the received update mode of the AI model parameters and/or the indication method of the update parameters of the AI model parameters, which can effectively improve the efficiency of transmitting the AI model in the wireless communication system and reduce network signaling overhead.
  • FIG. 1 is a schematic diagram of a neural network;
  • FIG. 2 is a schematic diagram of a neuron;
  • FIG. 3 is a schematic diagram of the architecture of a wireless communication system according to an embodiment of the present application;
  • FIG. 4 is a first flowchart of a method for updating AI model parameters according to an embodiment of the present application;
  • FIG. 5 is a second flowchart of a method for updating AI model parameters according to an embodiment of the present application;
  • FIG. 6 is a first schematic diagram of the indication method of AI model parameters according to an embodiment of the present application;
  • FIG. 7 is a second schematic diagram of the indication method of AI model parameters according to an embodiment of the present application;
  • FIG. 8 is a third schematic diagram of the indication method of AI model parameters according to an embodiment of the present application;
  • FIG. 9 is a fourth schematic diagram of the indication method of AI model parameters according to an embodiment of the present application;
  • FIG. 10 is a first structural diagram of a device for updating AI model parameters according to an embodiment of the present application;
  • FIG. 11 is a second structural diagram of a device for updating AI model parameters according to an embodiment of the present application;
  • FIG. 12 is a schematic diagram of a terminal according to an embodiment of the present application;
  • FIG. 13 is a schematic diagram of a network side device according to an embodiment of the present application;
  • FIG. 14 is a schematic diagram of a communication device according to an embodiment of the present application.
  • first, second, etc. in the specification and claims of the present application are used to distinguish similar objects, and are not used to describe a specific order or sequence. It should be understood that the terms used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present application can be implemented in an order other than those illustrated or described here, and the objects distinguished by “first” and “second” are generally of the same type, and the number of objects is not limited.
  • the first object can be one or more.
  • "and/or" in the specification and claims represents at least one of the connected objects, and the character "/" generally represents an "or" relationship between the associated objects.
  • LTE: Long Term Evolution
  • LTE-A: LTE-Advanced
  • CDMA: Code Division Multiple Access
  • TDMA: Time Division Multiple Access
  • FDMA: Frequency Division Multiple Access
  • OFDMA: Orthogonal Frequency Division Multiple Access
  • SC-FDMA: Single-carrier Frequency Division Multiple Access
  • NR: New Radio
  • 6G: 6th Generation
  • This application uses a neural network as an example for illustration, but does not limit the specific type of the AI model.
  • The structure of the neural network is shown in FIG. 1.
  • The neural network is composed of neurons; a schematic diagram of a neuron is shown in FIG. 2.
  • a_1, a_2, ..., a_K are the inputs;
  • w is the weight (multiplicative coefficient);
  • b is the bias (additive coefficient);
  • σ(·) is the activation function;
  • z = a_1*w_1 + ... + a_k*w_k + ... + a_K*w_K + b, and the neuron output is σ(z).
  • Common activation functions include Sigmoid function, tanh function, Rectified Linear Unit (ReLU), etc.
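  • As an illustration of the neuron computation and activation functions above, the following Python sketch (function and variable names are illustrative, not from the application) computes z = a_1*w_1 + ... + a_K*w_K + b and applies an activation:

```python
import math

def neuron(a, w, b, activation=lambda z: z):
    # z = a_1*w_1 + ... + a_K*w_K + b, then the activation function is applied
    z = sum(ai * wi for ai, wi in zip(a, w)) + b
    return activation(z)

# Two of the common activation functions named above
relu = lambda z: max(0.0, z)
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

print(neuron([1.0, 2.0], [0.5, -0.25], 0.1, relu))  # 1*0.5 + 2*(-0.25) + 0.1 = 0.1
```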
  • the parameters of a neural network can be optimized using an optimization algorithm.
  • An optimization algorithm is a type of algorithm that can minimize or maximize an objective function (sometimes called a loss function).
  • The objective function is often a mathematical combination of the model parameters and the data. For example, given data X and its corresponding label Y, a neural network model f(·) is constructed. With the model in place, the predicted output f(x) can be obtained from an input x, and the difference between the predicted value and the true value, (f(x) - Y), gives the loss function. The goal is to find suitable W and b that minimize this loss: the smaller the loss value, the closer the model is to the actual situation.
  • The common optimization algorithms are basically based on the error back propagation (BP) algorithm.
  • the basic idea of the BP algorithm is that the learning process consists of two processes: the forward propagation of the signal and the back propagation of the error.
  • the input sample is transmitted from the input layer, processed by each hidden layer layer by layer, and then transmitted to the output layer. If the actual output of the output layer does not match the expected output, it will enter the error back propagation stage.
  • Error back propagation is to propagate the output error layer by layer through the hidden layer to the input layer in some form, and distribute the error to all units in each layer, so as to obtain the error signal of each layer unit, and this error signal is used as the basis for correcting the weights of each unit.
  • This process of signal forward propagation and error back propagation, in which the weights of each layer are adjusted, is repeated.
  • the process of continuous adjustment of weights is the learning and training process of the network. This process continues until the error of the network output is reduced to an acceptable level, or until the pre-set number of learning times is reached.
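  • The forward-propagation / error-back-propagation loop described above can be illustrated with a minimal single-neuron gradient-descent sketch (a generic illustration of the training idea, not the application's procedure; names and data are made up):

```python
def train(data, lr=0.1, epochs=200):
    """Fit f(x) = w*x + b by repeating forward propagation and error
    back propagation (here, plain gradient descent on squared error)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x + b   # forward propagation of the signal
            err = pred - y     # output error (f(x) - Y)
            w -= lr * err * x  # back propagation: adjust weight by its error gradient
            b -= lr * err      # adjust bias by its error gradient
    return w, b

# Training stops after a preset number of epochs, as described above
w, b = train([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])  # data follows y = 2x + 1
```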
  • the selected AI algorithms and models vary depending on the type of solution.
  • the main way to improve the network performance of the fifth generation mobile communication technology (5th Generation, 5G) with the help of AI is to enhance or replace existing algorithms or processing modules through algorithms and models based on neural networks.
  • algorithms and models based on neural networks can achieve better performance than those based on deterministic algorithms.
  • the more commonly used neural networks include deep neural networks, convolutional neural networks, and recurrent neural networks.
  • FIG3 shows a block diagram of a wireless communication system that can be applied to an embodiment of the present application.
  • the wireless communication system includes a terminal 31 and a network side device 32.
  • The terminal 31 may be a mobile phone, a tablet computer (Tablet Personal Computer), a laptop computer (Laptop Computer, also called a notebook computer), a personal digital assistant (PDA), a handheld computer, a netbook, an ultra-mobile personal computer (UMPC), a mobile Internet device (Mobile Internet Device, MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device (Wearable Device), a vehicle user equipment (VUE), a pedestrian terminal (Pedestrian User Equipment, PUE), a smart home device (a home appliance with wireless communication function, such as a refrigerator, television, washing machine, or furniture), a game console, a personal computer (PC), a teller machine, a self-service machine, or another terminal side device. The wearable devices include smart watches, smart bracelets, smart headphones, smart glasses, and smart jewelry (smart rings, smart necklaces, smart anklets, etc.).
  • the terminal involved in this application can also be a chip in the terminal, such as a modem chip, a system-on-chip (SoC). It should be noted that the specific type of the terminal 31 is not limited in the embodiment of this application.
  • the network side device 32 may include an access network device or a core network device, wherein the access network device may also be referred to as a wireless access network device, a wireless access network (Radio Access Network, RAN), a wireless access network function or a wireless access network unit.
  • the access network device may include a base station, a wireless local area network (Wireless Local Area Network, WLAN) access point or a WiFi node, etc.
  • the base station may be referred to as a node B, an evolved node B (eNB), an access point, a base transceiver station (Base Transceiver Station, BTS), a radio base station, a radio transceiver, a basic service set (Basic Service Set, BSS), an extended service set (Extended Service Set, ESS), a home B node, a home evolved B node, a transmitting and receiving point (Transmitting Receiving Point, TRP) or other appropriate terms in the field, as long as the same technical effect is achieved, the base station is not limited to a specific technical vocabulary, it should be noted that in the embodiment of the present application, only the base station in the NR system is used as an example for introduction, and the specific type of the base station is not limited.
  • The core network equipment may include but is not limited to at least one of the following: core network nodes, core network functions, mobility management entity (Mobility Management Entity, MME), access and mobility management function (Access and Mobility Management Function, AMF), session management function (Session Management Function, SMF), user plane function (User Plane Function, UPF), policy control function (Policy Control Function, PCF), policy and charging rules function unit (Policy and Charging Rules Function, PCRF), edge application service discovery function (Edge Application Server Discovery Function, EASDF), unified data management (Unified Data Management, UDM), etc.
  • an embodiment of the present application provides a method for updating AI model parameters, which is applied to a first node, which may be a network side device or a terminal.
  • The specific steps include step 401.
  • Step 401: The first node sends first information related to AI model parameters to the second node;
  • the first information includes at least one of the following: an update mode of the AI model parameters, and an indication method of the AI model parameters.
  • the first information is used by the second node to update the AI model parameters.
  • the update mode of the AI model parameters in the first information is used to indicate the method used to update the AI model parameters.
  • the indication method is used to indicate the location of the update parameter or to indicate the sending order of the AI model parameters.
  • The first node can indicate to the second node the sending order of all AI model parameters, and the second node updates the AI model using some or all of the received parameters as appropriate; alternatively, the first node can indicate the sending order of only some AI model parameters, and the second node updates the AI model using the received subset, where these parameters can also be called AI model update parameters.
  • the first node can send the update mode of the AI model parameters and/or the indication method of the AI model parameters to the second node.
  • the second node does not need to compile or recompile the AI model.
  • the second node updates the existing AI model parameters according to the received update mode of the AI model parameters and/or the indication method of the AI model parameters, which can effectively improve the efficiency of transmitting the AI model in the wireless communication system and reduce network signaling overhead.
  • the first node is a first network side device or a first terminal
  • the second node is a second network side device or a second terminal.
  • the first node is a first network side device and the second node is a second terminal
  • the first node is a first terminal and the second node is a second network side device
  • the first node is a first network side device and the second node is a second network side device
  • the first node is a first terminal and the second node is a second terminal.
  • the update mode includes one of the following:
  • the first mode includes: updating all parameters in a first parameter subset, where the first parameter subset is a subset of the AI model parameters, and the first parameter subset is pre-configured or agreed upon by a protocol or indicated by a network side;
  • the second mode includes: updating all parameters in a second parameter subset, where the second parameter subset is a subset of the first parameter subset;
  • the third mode includes: updating all parameters of the AI model;
  • the fourth mode includes: updating a third parameter subset, wherein the third parameter subset is a subset of the AI model parameters.
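  • A hedged sketch of how a second node could apply the four update modes follows; the dictionary-based parameter store, the plain integer mode numbering, and the function names are assumptions for illustration only:

```python
def apply_update(params, mode, update_values, first_subset=None, subset=None):
    """Update `params` (a dict of parameter-ID -> value) in one of four modes."""
    if mode == 1:    # first mode: update all parameters in the first parameter subset
        targets = set(first_subset)
    elif mode == 2:  # second mode: update a second subset of the first subset
        targets = set(subset) & set(first_subset)
    elif mode == 3:  # third mode: update all parameters of the AI model
        targets = set(params)
    elif mode == 4:  # fourth mode: update an arbitrary third parameter subset
        targets = set(subset)
    else:
        raise ValueError("unknown update mode")
    for pid in targets:
        params[pid] = update_values[pid]
    return params

params = {"w1": 0.1, "w2": 0.2, "b1": 0.0}
apply_update(params, 2, {"w1": 1.0, "w2": 2.0}, first_subset={"w1", "w2"}, subset={"w2"})
print(params)  # only "w2" was updated
```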
  • the indication method includes one of the following:
  • First information, where the first information is used to indicate the location of the update parameters in the AI model;
  • Second information, where the second information is used to indicate the order in which the updated parameters in the AI model are sent.
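  • One plausible realization of the first indication method is a bitmap over a fixed parameter ordering, marking which positions carry updated values; this encoding is an assumption for illustration, not something specified here:

```python
def encode(full_order, updates):
    # Bitmap marks which positions in the agreed parameter ordering are updated;
    # only the updated values themselves are then sent, in that same order.
    bitmap = [1 if pid in updates else 0 for pid in full_order]
    values = [updates[pid] for pid in full_order if pid in updates]
    return bitmap, values

def decode(full_order, bitmap, values):
    it = iter(values)
    return {pid: next(it) for pid, bit in zip(full_order, bitmap) if bit}

order = ["w1", "w2", "b1", "w3"]
bitmap, values = encode(order, {"w2": 0.7, "b1": -0.1})
print(bitmap)  # [0, 1, 1, 0]
```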
  • The structure of a neural network applicable to an embodiment of the present application is shown in FIG. 1, where one layer corresponds to one column: the input layer is the first column, the output layer is the last column, and the hidden layers are the middle columns.
  • The embodiment of the present application may also be applicable to neural networks of other structures.
  • For example, another structure of the neural network may be obtained by rotating the structure of FIG. 1, so that the indication methods of "indicating the position of the update parameters in the AI model" and "indicating the sending order of the updated parameters in the AI model" apply to such rotated structures in a manner similar to the structure shown in FIG. 1, and are not repeated here.
  • For example, another structure of the neural network can be obtained by rotating the schematic diagram of FIG. 1 by 90 degrees to the right, so that one layer corresponds to one row, the input layer is the first row, the output layer is the last row, and the hidden layers are the middle rows; as another example, the structure can be rotated 90 degrees to the left, so that one layer corresponds to one row, the input layer is the last row, the output layer is the first row, and the hidden layers are the middle rows. It can be understood that the structure of the neural network is not limited to the above three forms.
  • the AI model is a neural network, which includes an input layer, one or more hidden layers and an output layer.
  • the input layer, one or more hidden layers, the output layer, and the neurons in each layer form a structure with multiple rows and columns.
  • the order in which the parameters of the neurons in the neural network are sent is determined according to the position of the neurons in the neural network.
  • The order of sending the parameters of the neurons in the neural network includes one of the following: the parameters of all neurons in the first column, the parameters of all neurons in the second column, and so on, until the parameters of all neurons in the last column; or, the parameters of all neurons in the first row, the parameters of all neurons in the second row, and so on, until the parameters of all neurons in the last row.
  • When the parameters are sent column by column, one of the following also applies: the first column corresponds to the input layer and the last column corresponds to the output layer; or, the first column corresponds to the output layer and the last column corresponds to the input layer.
  • When the parameters are sent row by row, one of the following also applies: the first row corresponds to the input layer and the last row corresponds to the output layer; or, the first row corresponds to the output layer and the last row corresponds to the input layer; or, the first row is the top row in the neural network and the last row is the bottom row; or, the first row is the bottom row and the last row is the top row.
  • When sending row by row, the parameters of the neurons are sent in at least one of the following ways:
  • Each row sends parameters according to the row length of the layer with the largest number of neurons, and positions where a layer has no neuron in that row are padded with a preset value (e.g., 0). For example, if the first layer has 50 neurons, the second layer has 40 neurons, the third layer has 60 neurons, and the fourth layer has 30 neurons, then the positions of the second and fourth layers in the 50th row are filled with 0 when sending.
  • Each row sends the parameters of the neurons according to the actual number of neurons. In the same example, since there are no neurons of the second and fourth layers in the 50th row, those layers are skipped when sending, and only the parameters of the neurons of the first and third layers are sent for the 50th row.
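  • The two row-wise sending options above, using the example layer sizes (50, 40, 60, 30), can be sketched as follows; `get_param` is a hypothetical accessor returning the parameter of the neuron at a given layer and row:

```python
layer_sizes = [50, 40, 60, 30]  # example layer sizes from the text

def send_row_padded(row, get_param, pad=0.0):
    # Option 1: every layer contributes a slot; missing neurons are padded with 0
    return [get_param(layer, row) if row < n else pad
            for layer, n in enumerate(layer_sizes)]

def send_row_actual(row, get_param):
    # Option 2: layers with no neuron in this row are simply skipped
    return [get_param(layer, row)
            for layer, n in enumerate(layer_sizes) if row < n]

get_param = lambda layer, row: layer + 1  # dummy parameter values

# The 50th row (index 49) has neurons only in the first and third layers:
print(send_row_padded(49, get_param))  # [1, 0.0, 3, 0.0]
print(send_row_actual(49, get_param))  # [1, 3]
```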
  • the sending order of the AI model parameters is preconfigured or agreed upon by the protocol or indicated by the network side.
  • the AI model parameters include multiplicative coefficients and additive coefficients, and the sending priority of the multiplicative coefficients is higher than the sending priority of the additive coefficients; or, the sending priority of the additive coefficients is higher than the sending priority of the multiplicative coefficients.
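  • A minimal sketch of this priority rule, assuming a simple flat serialization of per-neuron weights and biases (the representation is an illustrative assumption, not from the application):

```python
def serialize(neurons, weights_first=True):
    # Multiplicative coefficients (weights w) and additive coefficients (biases b)
    # are grouped, and the higher-priority group is sent first.
    ws = [w for n in neurons for w in n["w"]]
    bs = [n["b"] for n in neurons]
    return ws + bs if weights_first else bs + ws

neurons = [{"w": [0.5, -0.25], "b": 0.1}, {"w": [1.5], "b": -0.2}]
print(serialize(neurons))                       # weights first: [0.5, -0.25, 1.5, 0.1, -0.2]
print(serialize(neurons, weights_first=False))  # biases first: [0.1, -0.2, 0.5, -0.25, 1.5]
```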
  • the quantization level and/or compression method of the AI model is pre-configured or agreed upon by the protocol or indicated by the network side.
  • the model identification (Identity, ID) of the updated AI model is the same as the model ID of the AI model before the update, and the AI model before the update is not saved;
  • the model ID of the updated AI model is different from the model ID of the AI model before the update, and the AI model before the update is saved, and the model ID of the AI model before the update remains unchanged.
  • the AI model includes a first functional module, and the first functional module is used for at least one of the following:
  • signal processing including but not limited to at least one of the following: signal detection, filtering, equalization, etc.
  • the signal includes but is not limited to at least one of the following: demodulation reference signal (DMRS), sounding reference signal (SRS), synchronization signal block (Synchronization Signal and PBCH block, SSB), tracking reference signal (TRS), phase tracking reference signals (PTRS), channel state information reference signal (CSI-RS), etc.;
  • Channel signal transmission, channel signal reception, or channel demodulation, wherein the channel includes but is not limited to at least one of the following: Physical Downlink Control Channel (PDCCH), Physical Downlink Shared Channel (PDSCH), Physical Uplink Control Channel (PUCCH), Physical Uplink Shared Channel (PUSCH), Physical Random Access Channel (PRACH), Physical Broadcast Channel (PBCH);
  • channel state information feedback includes but is not limited to at least one of the following: channel-related information, channel matrix-related information, channel characteristic information, channel matrix characteristic information, precoding matrix indicator (PMI), rank indicator (RI), CSI-RS resource indicator (CSI-RS Resource Indicator, CRI), channel quality indicator (CQI), layer indicator (LI), etc.
  • Another example is the partial reciprocity of uplink and downlink in Frequency Division Duplex (FDD).
  • the base station obtains angle and delay information based on the uplink channel, and can notify the terminal of the angle and delay information through CSI-RS precoding or direct indication.
  • the terminal reports according to the indication of the base station or selects and reports within the indication range of the base station, thereby reducing the terminal's calculation workload and the CSI reporting overhead.
  • beam management including but not limited to at least one of the following: beam measurement, beam reporting, beam prediction, beam failure detection, beam failure recovery, and new beam indication during beam failure recovery;
  • channel prediction including but not limited to at least one of the following: prediction of channel state information and beam prediction;
  • Interference suppression including but not limited to at least one of the following: intra-cell interference, inter-cell interference, out-of-band interference, and intermodulation interference;
  • Positioning such as estimating the specific position (including horizontal position and/or vertical position) or possible future trajectory of the terminal through a reference signal (such as SRS), or estimating information of auxiliary position estimation or trajectory estimation of the terminal;
  • predicting or managing high-level services and/or high-level parameters including but not limited to at least one of the following: throughput, required packet size, service demand, mobile speed, noise information, etc.;
  • control signaling including but not limited to at least one of the following: power control related signaling, beam management related signaling.
  • the efficiency of transmitting AI models in wireless communication systems can be effectively improved, and network signaling overhead can be reduced.
  • an embodiment of the present application provides a method for updating AI model parameters, which is applied to a second node, where the second node is a network side device or terminal.
  • the method includes: step 501 and step 502.
  • Step 501 The second node receives first information related to AI model parameters sent by the first node;
  • Step 502 The second node updates the AI model parameters according to the first information
  • the first information includes at least one of the following: an update mode of the AI model parameters, and an indication method of the AI model parameters.
  • the first node is a first network side device or a first terminal
  • the second node is a second network side device or a second terminal.
  • For example, the first node is a first network side device and the second node is a second terminal; or, the first node is a first terminal and the second node is a second network side device; or, the first node is a first network side device and the second node is a second network side device; or, the first node is a first terminal and the second node is a second terminal.
  • the update mode includes one of the following:
  • the first mode includes: updating all parameters in a first parameter subset, where the first parameter subset is a subset of the AI model parameters, and the first parameter subset is pre-configured or agreed upon by a protocol or indicated by a network side;
  • the second mode includes: updating all parameters in a second parameter subset, the second parameter subset being a subset of the first parameter subset;
  • the third mode includes: updating all parameters of the AI model;
  • the fourth mode includes: updating a third parameter subset, wherein the third parameter subset is a subset of the AI model parameters.
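The four update modes above can be illustrated with a minimal sketch. This is not the patent's signalling format; the function name, the flat parameter vector, and the way mode 4 carries its indices in the payload are all hypothetical assumptions made for illustration.

```python
# Hypothetical sketch of the four update modes, applied to a flat parameter
# vector. The message layout and names are illustrative only.

def apply_update(params, mode, payload, first_subset=None, second_subset=None):
    """Return a copy of `params` updated according to `mode`.

    params       : list of floats (all AI model parameters)
    mode         : 1..4, matching the first..fourth modes above
    payload      : new parameter values, in the agreed sending order
    first_subset : indices of the pre-configured first parameter subset (modes 1-2)
    second_subset: indices of the second subset (mode 2), a subset of first_subset
    """
    out = list(params)
    if mode == 1:                      # update all parameters in the first subset
        indices = first_subset
    elif mode == 2:                    # update all parameters in the second subset
        assert set(second_subset) <= set(first_subset)
        indices = second_subset
    elif mode == 3:                    # update every parameter of the model
        indices = range(len(out))
    elif mode == 4:                    # update an explicitly signalled third subset
        indices, payload = payload     # (indices, values) carried in the message
    else:
        raise ValueError("unknown update mode")
    for i, v in zip(indices, payload):
        out[i] = v
    return out
```

For example, mode 1 only needs the new values because the first subset's positions are already pre-configured or agreed upon, whereas mode 4 must also carry the positions of the third subset.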
  • the indication method includes one of the following:
  • first information, where the first information is used to indicate the position of the AI model parameters in the AI model; or,
  • second information, where the second information is used to indicate the order in which updated parameters in the AI model are sent.
  • the AI model is a neural network, which includes an input layer, one or more hidden layers and an output layer.
  • the input layer, one or more hidden layers, the output layer, and the neurons in each layer form a multi-row and multi-column structure, and the order in which the parameters of each neuron in the neural network are sent is determined according to the position of the neuron in the neural network.
  • the order in which the parameters of each neuron in the neural network are sent includes:
  • the parameters of all neurons in the first column, then the parameters of all neurons in the second column, and so on, until the parameters of all neurons in the last column; or,
  • the parameters of all neurons in the first row, then the parameters of all neurons in the second row, and so on, until the parameters of all neurons in the last row.
  • the first column corresponds to the input layer, and the last column corresponds to the output layer; or, the first column corresponds to the output layer, and the last column corresponds to the input layer;
  • the first row corresponds to the input layer, and the last row corresponds to the output layer; or, the first row corresponds to the output layer, and the last row corresponds to the input layer.
  • the parameters of the neurons in each column are sent in top-to-bottom order; or, the parameters of the neurons in each column are sent in bottom-to-top order; or, the parameters of the neurons in every column are sent in the same order; or, the parameters of the neurons in adjacent columns are sent in opposite orders.
  • the first row is the top row in the neural network, and the last row is the bottom row in the neural network; or, the first row is the bottom row in the neural network, and the last row is the top row in the neural network.
  • the parameters of the neurons in each row are sent in left-to-right order; or, the parameters of the neurons in each row are sent in right-to-left order; or, the parameters of the neurons in every row are sent in the same order; or, the parameters of the neurons in adjacent rows are sent in opposite orders.
  • when the layers do not all have the same number of neurons, each row sends the parameters of the neurons according to the row with the largest number of neurons, with positions lacking neurons supplemented by preset values; or, each row sends the parameters of the neurons according to the actual number of neurons in that row.
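The column-by-column sending order, the opposite order for adjacent columns, and the padding rule above can be sketched together. This is one possible reading of the scheme, not the patent's normative serialization; the function and parameter names are hypothetical.

```python
# Illustrative sketch: the network is viewed as columns (layers) of neurons,
# and parameters are serialized column by column. `zigzag` reverses every
# other column (adjacent columns sent in opposite orders); `pad_value` fills
# missing positions when layers differ in size. Names are hypothetical.

def column_order(layers, zigzag=False, pad_value=None):
    """layers: list of columns, each a list of per-neuron parameter values.
    Returns the flat sending order."""
    height = max(len(col) for col in layers)
    out = []
    for c, col in enumerate(layers):
        if pad_value is not None:
            col = col + [pad_value] * (height - len(col))  # pad short layers
        if zigzag and c % 2 == 1:
            col = col[::-1]            # adjacent columns sent in opposite order
        out.extend(col)
    return out
```

With padding, the receiver can rely on a fixed per-column length (that of the largest layer) and discard the preset values; without padding, it must know the actual size of each layer to delimit the stream.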
  • the sending order of the AI model parameters is preconfigured or agreed upon by the protocol or indicated by the network side.
  • the AI model parameters include multiplicative coefficients and additive coefficients, and the sending priority of the multiplicative coefficients in the AI model is higher than the sending priority of the additive coefficients; or, the sending priority of the additive coefficients in the AI model is higher than the sending priority of the multiplicative coefficients.
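The coefficient-priority rule above (multiplicative coefficients, i.e. weights, versus additive coefficients, i.e. biases) can be shown with a minimal sketch; the per-neuron tuple structure is an assumption made for illustration.

```python
# Minimal sketch of coefficient priority: all multiplicative coefficients
# (weights) are sent before all additive coefficients (biases), or the
# reverse. Neuron structure is illustrative, not the patent's format.

def serialize(neurons, multiplicative_first=True):
    """neurons: list of (weights, bias) tuples, already in sending order."""
    weights = [w for ws, _ in neurons for w in ws]
    biases = [b for _, b in neurons]
    return weights + biases if multiplicative_first else biases + weights
```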
  • the quantization level and/or compression method of the AI model is pre-configured or agreed upon by the protocol or indicated by the network side.
  • the model ID of the updated AI model is the same as the model ID of the AI model before the update, and the AI model before the update is not saved;
  • the model ID of the updated AI model is different from the model ID of the AI model before the update, and the AI model before the update is saved, and the model ID of the AI model before the update remains unchanged.
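The two model-ID policies above can be sketched as operations on a model registry. The dict-based registry and function name are hypothetical; the patent does not specify how models are stored.

```python
# Hypothetical sketch of the two model-ID policies: either the update
# replaces the model in place (same ID, old model not saved), or the updated
# model gets a new ID and the old model is kept under its unchanged old ID.

def register_update(registry, old_id, new_params, keep_old=False, new_id=None):
    """registry: dict mapping model ID -> parameters. Returns the ID that
    now holds the updated model."""
    if not keep_old:
        registry[old_id] = new_params          # same ID, old model not saved
        return old_id
    assert new_id is not None and new_id != old_id
    registry[new_id] = new_params              # old model kept, ID unchanged
    return new_id
```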
  • the AI model includes a first functional module, and the first functional module is used for at least one of the following:
  • signal processing including but not limited to at least one of the following: signal detection, filtering, equalization, etc., wherein the signal includes but is not limited to at least one of the following: DMRS, SRS, SSB, TRS, PTRS, CSI-RS, etc.;
  • channel signal transmission, channel signal reception, or channel demodulation, where the channel includes but is not limited to at least one of the following: PDCCH, PDSCH, PUCCH, PUSCH, PRACH, PBCH;
  • channel state information feedback, including but not limited to at least one of the following: channel-related information, channel matrix-related information, channel characteristic information, channel matrix characteristic information, PMI, RI, CRI, CQI, LI, etc.
  • Another example is the partial reciprocity of FDD uplink and downlink.
  • the base station obtains angle and delay information based on the uplink channel, and can notify the terminal of the angle and delay information through CSI-RS precoding or direct indication.
  • the terminal reports according to the indication of the base station or selects and reports within the indication range of the base station, thereby reducing the terminal's calculation workload and the CSI reporting overhead.
  • Beam management including but not limited to at least one of the following: beam measurement, beam reporting, beam prediction, Beam failure detection, beam failure recovery, new beam indication in beam failure recovery;
  • channel prediction including but not limited to at least one of the following: prediction of channel state information and beam prediction;
  • Interference suppression including but not limited to at least one of the following: intra-cell interference, inter-cell interference, out-of-band interference, and intermodulation interference;
  • Positioning such as estimating the specific position (including horizontal position and/or vertical position) or possible future trajectory of the terminal through a reference signal (such as SRS), or estimating information of auxiliary position estimation or trajectory estimation of the terminal;
  • predicting or managing high-level services and/or high-level parameters including but not limited to at least one of the following: throughput, required packet size, service demand, mobile speed, noise information, etc.;
  • control signaling including but not limited to at least one of the following: power control related signaling, beam management related signaling.
  • the efficiency of transmitting AI models in wireless communication systems can be effectively improved, and network signaling overhead can be reduced.
  • an embodiment of the present application provides a device for updating AI model parameters, which is applied to a first node.
  • the device 1000 includes:
  • the first information includes at least one of the following: an update mode of the AI model parameters, and an indication method of the AI model parameters.
  • the update mode includes one of the following: a first mode, a second mode, a third mode, and a fourth mode;
  • the first mode includes: updating all parameters in a first parameter subset, where the first parameter subset is a subset of the AI model parameters, and the first parameter subset is pre-configured or agreed upon by a protocol or indicated by a network side;
  • the second mode includes: updating all parameters in a second parameter subset, the second parameter subset being a subset of the first parameter subset;
  • the third mode includes: updating all parameters of the AI model
  • the fourth mode includes: updating a third parameter subset, where the third parameter subset is a subset of the AI model parameters.
  • the indication method includes one of the following:
  • first information, where the first information is used to indicate a position of the AI model parameters in the AI model; or,
  • second information, where the second information is used to indicate the order in which the AI model parameters are sent.
  • the AI model is a neural network, which includes an input layer, one or more hidden layers and an output layer.
  • the input layer, one or more hidden layers, the output layer, and the neurons in each layer form a multi-row and multi-column structure, and the order in which the parameters of each neuron in the neural network are sent is determined according to the position of the neuron in the neural network.
  • the order in which the parameters of each neuron in the neural network are sent includes:
  • the parameters of all neurons in the first column, then the parameters of all neurons in the second column, and so on, until the parameters of all neurons in the last column; or,
  • the parameters of all neurons in the first row, then the parameters of all neurons in the second row, and so on, until the parameters of all neurons in the last row.
  • the first column corresponds to the input layer
  • the last column corresponds to the output layer, or the first column corresponds to the output layer, and the last column corresponds to the input layer
  • the first row corresponds to the input layer and the last row corresponds to the output layer, or the first row corresponds to the output layer and the last row corresponds to the input layer.
  • in the case where the sending order is column by column (the parameters of all neurons in the first column, then the second column, and so on, until the last column), the sending order of the parameters of the neurons in the neural network further includes one of the following:
  • the parameters of the neurons in each column are sent in the order of the neurons from top to bottom;
  • the parameters of the neurons in each column are sent in the order of the neurons from bottom to top;
  • the parameters of the neurons in each column are sent in the same order;
  • the parameters of neurons in adjacent columns are sent in the opposite order.
  • the first row is the top row in the neural network, and the last row is the bottom row in the neural network; or, the first row is the bottom row in the neural network, and the last row is the top row in the neural network.
  • in the case where the sending order is row by row (the parameters of all neurons in the first row, then the second row, and so on, until the last row), the sending order of the parameters of the neurons in the neural network further includes one of the following:
  • the parameters of the neurons in each row are sent in the order of neurons from left to right;
  • the parameters of the neurons in each row are sent in the order of neurons from right to left;
  • the parameters of neurons in each row are sent in the same order;
  • the parameters of neurons in adjacent rows are sent in the opposite order.
  • the sending order of the AI model parameters is preconfigured or agreed upon by the protocol or indicated by the network side.
  • the AI model parameters include multiplicative coefficients and additive coefficients, and the sending priority of the multiplicative coefficients is higher than the sending priority of the additive coefficients; or, the sending priority of the additive coefficients is higher than the sending priority of the multiplicative coefficients.
  • the quantization level and/or compression method of the AI model is pre-configured or agreed upon by the protocol or indicated by the network side.
  • the model ID of the updated AI model is the same as the model ID of the AI model before the update, and the AI model before the update is not saved;
  • the model ID of the updated AI model is different from the model ID of the AI model before the update, and the AI model before the update is saved, and the model ID of the AI model before the update remains unchanged.
  • the AI model includes a first functional module, and the first functional module is used for one or more of the following:
  • predicting or managing high-level services and/or high-level parameters;
  • the first node is a first network-side device or a first terminal
  • the second node is a second network-side device or a second terminal.
  • the device provided in the embodiment of the present application can implement each process implemented by the method embodiment of Figure 4 and achieve the same technical effect. To avoid repetition, it will not be repeated here.
  • an embodiment of the present application provides a device for updating AI model parameters, which is applied to a second node.
  • the device 1100 includes:
  • a first receiving module 1101 is used to receive first information related to AI model parameters sent by a first node
  • An updating module 1102 configured to update the AI model parameters according to the first information
  • the first information includes at least one of the following: an update mode of the AI model parameters, and an indication method of the AI model parameters.
  • the update mode includes one of the following: a first mode, a second mode, a third mode, and a fourth mode;
  • the first mode includes: updating all parameters in a first parameter subset, where the first parameter subset is a subset of the AI model parameters, and the first parameter subset is pre-configured or agreed upon by a protocol or indicated by a network side;
  • the second mode includes: updating all parameters in a second parameter subset, where the second parameter subset is a subset of the first parameter subset, the first parameter subset is a subset of the AI model parameters, and the first parameter subset is pre-configured or agreed upon by a protocol or indicated by the network side;
  • the third mode includes: updating all parameters of the AI model
  • the fourth mode includes: updating a third parameter subset, where the third parameter subset is a subset of the AI model parameters.
  • the indication method includes one of the following:
  • first information, where the first information is used to indicate a position of the AI model parameters in the AI model; or,
  • second information, where the second information is used to indicate the order in which the AI model parameters are sent.
  • the AI model is a neural network, which includes an input layer, one or more hidden layers and an output layer.
  • the input layer, one or more hidden layers, the output layer, and the neurons in each layer form a multi-row and multi-column structure, and the order in which the parameters of each neuron in the neural network are sent is determined according to the position of the neuron in the neural network.
  • the order in which the parameters of each neuron in the neural network are sent includes:
  • the parameters of all neurons in the first column, then the parameters of all neurons in the second column, and so on, until the parameters of all neurons in the last column; or,
  • the parameters of all neurons in the first row, then the parameters of all neurons in the second row, and so on, until the parameters of all neurons in the last row.
  • the first column corresponds to the input layer
  • the last column corresponds to the output layer, or the first column corresponds to the output layer, and the last column corresponds to the input layer
  • the first row corresponds to the input layer and the last row corresponds to the output layer, or the first row corresponds to the output layer and the last row corresponds to the input layer.
  • in the case where the sending order is column by column (the parameters of all neurons in the first column, then the second column, and so on, until the last column), the sending order of the parameters of the neurons in the neural network further includes one of the following:
  • the parameters of the neurons in each column are sent in the order of the neurons from top to bottom;
  • the parameters of the neurons in each column are sent in the order of the neurons from bottom to top;
  • the parameters of the neurons in each column are sent in the same order;
  • the parameters of neurons in adjacent columns are sent in the opposite order.
  • the first row is the top row in the neural network, and the last row is the bottom row in the neural network; or, the first row is the bottom row in the neural network, and the last row is the top row in the neural network.
  • in the case where the sending order is row by row (the parameters of all neurons in the first row, then the second row, and so on, until the last row), the sending order of the parameters of the neurons in the neural network further includes one of the following:
  • the parameters of the neurons in each row are sent in the order of neurons from left to right;
  • the parameters of the neurons in each row are sent in the order of neurons from right to left;
  • the parameters of neurons in each row are sent in the same order;
  • the parameters of neurons in adjacent rows are sent in the opposite order.
  • when the number of neurons in each layer is not exactly the same, each row sends the parameters of the neurons according to the row with the largest number of neurons, with the layers lacking neurons supplemented by preset values; or, each row sends the parameters of the neurons according to the actual number of neurons.
  • the sending order of the AI model parameters is preconfigured or agreed upon by the protocol or indicated by the network side.
  • the AI model parameters include multiplicative coefficients and additive coefficients, and the sending priority of the multiplicative coefficients is higher than the sending priority of the additive coefficients; or, the sending priority of the additive coefficients is higher than the sending priority of the multiplicative coefficients.
  • the quantization level and/or compression method of the AI model is pre-configured or agreed upon by the protocol or indicated by the network side.
  • the model ID of the updated AI model is the same as the model ID of the AI model before the update, and the AI model before the update is not saved;
  • the model ID of the updated AI model is different from the model ID of the AI model before the update, and the AI model before the update is saved, and the model ID of the AI model before the update remains unchanged.
  • the AI model includes a first functional module, and the first functional module is used for one or more of the following:
  • predicting or managing high-level services and/or high-level parameters;
  • the second node is a second network-side device or a second terminal
  • the first node is a first network-side device or a first terminal
  • the device provided in the embodiment of the present application can implement each process implemented by the method embodiment of Figure 5 and achieve the same technical effect. To avoid repetition, it will not be repeated here.
  • Fig. 12 is a schematic diagram of the hardware structure of a terminal implementing an embodiment of the present application.
  • the terminal 1200 includes but is not limited to: a radio frequency unit 1201, a network module 1202, an audio output unit 1203, an input unit 1204, a sensor 1205, a display unit 1206, a user input unit 1207, an interface unit 1208, a memory 1209, and a processor 1210, among other components.
  • the terminal 1200 may also include a power source (such as a battery) for supplying power to various components, and the power source may be logically connected to the processor 1210 through a power management system, so that the power management system can manage charging, discharging, and power consumption.
  • the terminal structure shown in the figure does not constitute a limitation on the terminal; the terminal may include more or fewer components than those shown, or combine some components, or arrange the components differently, which will not be described in detail here.
  • the input unit 1204 may include a graphics processing unit (GPU) 12041 and a microphone 12042; the graphics processing unit 12041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
  • the display unit 1206 may include a display panel 12061, and the display panel 12061 may be configured in the form of a liquid crystal display, an organic light emitting diode, etc.
  • the user input unit 1207 includes a touch panel 12071 and at least one of other input devices 12072.
  • the touch panel 12071 is also called a touch screen.
  • the touch panel 12071 may include two parts: a touch detection device and a touch controller.
  • Other input devices 12072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control key, a switch key, etc.), a trackball, a mouse, and a joystick, which will not be repeated here.
  • after receiving downlink data from the network side device, the radio frequency unit 1201 can transmit the data to the processor 1210 for processing; in addition, the radio frequency unit 1201 can send uplink data to the network side device.
  • the RF unit 1201 includes but is not limited to an antenna, an amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, etc.
  • the memory 1209 can be used to store software programs or instructions and various data.
  • the memory 1209 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, wherein the first storage area may store an operating system, an application program or instruction required for at least one function (such as a sound playback function, an image playback function, etc.), etc.
  • the memory 1209 may include a volatile memory or a non-volatile memory, or the memory 1209 may include both volatile and non-volatile memories.
  • the non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory.
  • the volatile memory may be a random access memory (RAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDRSDRAM), an enhanced synchronous dynamic random access memory (ESDRAM), a synchronous link dynamic random access memory (SLDRAM) and a direct memory bus random access memory (DRRAM).
  • the memory 1209 in the embodiment of the present application includes but is not limited to these and any other suitable types of memory.
  • the processor 1210 may include one or more processing units; optionally, the processor 1210 integrates an application processor and a modem processor, wherein the application processor mainly processes operations related to an operating system, a user interface, and application programs, and the modem processor mainly processes wireless communication signals, such as a baseband processor. It is understandable that the modem processor may not be integrated into the processor 1210.
  • the terminal provided in the embodiment of the present application can implement each process implemented by the method embodiment of Figure 4 and achieve the same technical effect. To avoid repetition, it will not be repeated here.
  • FIG. 13 is a structural diagram of a network side device applied in an embodiment of the present application.
  • the network side device 1300 includes: a processor 1301, a transceiver 1302, a memory 1303 and a bus interface, wherein the processor 1301 may be responsible for managing the bus architecture and general processing.
  • the memory 1303 may store data used by the processor 1301 when performing operations.
  • the network side device 1300 further includes: a program stored in the memory 1303 and executable on the processor 1301 , and when the program is executed by the processor 1301 , the steps in the method shown in FIG. 4 above are implemented.
  • the bus architecture may include any number of interconnected buses and bridges, specifically linking together various circuits of one or more processors represented by processor 1301 and memory represented by memory 1303.
  • the bus architecture may also link together various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art and are therefore not further described herein.
  • the bus interface provides an interface.
  • the transceiver 1302 may be a plurality of components, namely, a transmitter and a receiver, providing a unit for communicating with various other devices over a transmission medium.
  • an embodiment of the present application further provides a communication device 1400, including a processor 1401 and a memory 1402, and the memory 1402 stores programs or instructions that can be run on the processor 1401.
  • the communication device 1400 is a first node
  • the program or instruction is executed by the processor 1401 to implement the various steps of the method embodiment of Figure 4 above.
  • the communication device 1400 is a second node
  • the program or instruction is executed by the processor 1401 to implement the various steps of the method embodiment of Figure 5 above and can achieve the same technical effect. To avoid repetition, it will not be repeated here.
  • An embodiment of the present application also provides a readable storage medium, on which a program or instruction is stored.
  • when the program or instruction is executed by a processor, the method of Figure 4 or Figure 5 and the various processes of the above-mentioned embodiments are implemented, and the same technical effect can be achieved; to avoid repetition, it will not be repeated here.
  • the processor is the processor in the terminal described in the above embodiment.
  • the readable storage medium includes a computer readable storage medium, such as a computer read-only memory ROM, a random access memory RAM, a magnetic disk or an optical disk.
  • An embodiment of the present application further provides a chip, which includes a processor and a communication interface, wherein the communication interface is coupled to the processor, and the processor is used to run programs or instructions to implement the various processes shown in Figure 4 or Figure 5 and the various method embodiments mentioned above, and can achieve the same technical effect. To avoid repetition, it will not be repeated here.
  • the chip mentioned in the embodiments of the present application can also be called a system-level chip, a system chip, a chip system or a system-on-chip chip, etc.
  • the embodiments of the present application further provide a computer program/program product, which is stored in a storage medium, and is executed by at least one processor to implement the various processes shown in Figure 4 or Figure 5 and the various method embodiments described above, and can achieve the same technical effect. To avoid repetition, it will not be repeated here.
  • An embodiment of the present application further provides a communication system, which includes a terminal and a network side device.
  • the terminal is used to execute the various processes as shown in Figure 4 and the various method embodiments described above
  • the network side device is used to execute the various processes as shown in Figure 5 and the various method embodiments described above, and can achieve the same technical effects. To avoid repetition, they are not repeated here.
  • the technical solution of the present application can be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk), and includes a number of instructions for enabling a terminal (which can be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to execute the methods described in each embodiment of the present application.
  • a storage medium such as ROM/RAM, a magnetic disk, or an optical disk
  • a terminal which can be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present application relates to a method and apparatus for updating an AI model parameter, and a communication device. The method includes the following steps: a first node sends, to a second node, first information related to an AI model parameter, the first information including at least one of the following: an update mode for the AI model parameter, and an indication mode for the AI model parameter.
PCT/CN2023/119938 2022-09-26 2023-09-20 Procédé et appareil de mise à jour de paramètre de modèle d'ia, et dispositif de communication Ceased WO2024067280A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211177497.5 2022-09-26
CN202211177497.5A CN117834427A (zh) 2022-09-26 2022-09-26 更新ai模型参数的方法、装置及通信设备

Publications (1)

Publication Number Publication Date
WO2024067280A1 true WO2024067280A1 (fr) 2024-04-04

Family

ID=90476175

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/119938 Ceased WO2024067280A1 (fr) 2022-09-26 2023-09-20 Procédé et appareil de mise à jour de paramètre de modèle d'ia, et dispositif de communication

Country Status (2)

Country Link
CN (1) CN117834427A (fr)
WO (1) WO2024067280A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108924910A (zh) * 2018-07-25 2018-11-30 Oppo广东移动通信有限公司 Ai模型的更新方法及相关产品
US20200151558A1 (en) * 2018-11-13 2020-05-14 Gyrfalcon Technology Inc. Systems and methods for updating an artificial intelligence model by a subset of parameters in a communication system
CN114091679A (zh) * 2020-08-24 2022-02-25 华为技术有限公司 一种更新机器学习模型的方法及通信装置
CN114363921A (zh) * 2020-10-13 2022-04-15 维沃移动通信有限公司 Ai网络参数的配置方法和设备
CN114519435A (zh) * 2022-02-14 2022-05-20 维沃移动通信有限公司 模型参数更新方法、模型参数更新装置和电子设备

Also Published As

Publication number Publication date
CN117834427A (zh) 2024-04-05

Similar Documents

Publication Publication Date Title
US20240224082A1 Parameter selection method, parameter configuration method, terminal, and network side device 2024-07-04
WO2023040887A1 Information reporting method and apparatus, terminal, and readable storage medium 2023-03-23
US20240323741A1 Measurement method and apparatus, device, and storage medium 2024-09-26
US20240357382A1 Communication model configuration method and apparatus, and communication device 2024-10-24
US20250227507A1 AI model processing method and apparatus, and communication device 2025-07-10
WO2023179540A1 Channel prediction method and apparatus, and wireless communication device 2023-09-28
US20250021841A1 Sample determining method and apparatus, and device 2025-01-16
WO2023186099A1 Information feedback method and apparatus, and device 2023-10-05
US20250310756A1 AI computing power reporting method, terminal, and network-side device 2025-10-02
WO2024032694A1 CSI prediction processing method and apparatus, communication device, and readable storage medium 2024-02-15
WO2024067280A1 Method and apparatus for updating AI model parameter, and communication device 2024-04-04
WO2024099091A1 Beam prediction method and apparatus, terminal, network-side device, and storage medium 2024-05-16
WO2024083004A1 AI model configuration method, terminal, and network-side device 2024-04-25
WO2025146039A1 Information processing method and apparatus, and communication device 2025-07-10
WO2024032695A1 CSI prediction processing method and apparatus, communication device, and readable storage medium 2024-02-15
WO2025140604A1 Information reporting methods and apparatuses, first devices, and second devices 2025-07-03
WO2025201273A1 Model processing method and apparatus, communication device, and storage medium 2025-10-02
WO2024067665A1 CSI prediction processing method and apparatus, communication device, and readable storage medium 2024-04-04
WO2024120409A1 AI network model determination method and apparatus, information transmission method and apparatus, and communication device 2024-06-13
CN120238893A (zh) Information indication method and apparatus, communication device, and readable storage medium 2025-07-01
CN120238974A (zh) Function switching method and apparatus, device, and readable storage medium 2025-07-01
WO2025140233A1 CSI prediction method and apparatus, CSI prediction result monitoring method and apparatus, device, and readable storage medium 2025-07-03
WO2024093713A1 Resource configuration method and apparatus, communication device, and readable storage medium 2024-05-10
WO2025092999A1 Model performance supervision method and apparatus, and device 2025-05-08
WO2024235043A1 Information acquisition method and apparatus, and communication device 2024-11-21

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23870512

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 23870512

Country of ref document: EP

Kind code of ref document: A1