WO2025170023A1 - Communication control method - Google Patents
Communication control method
- Publication number
- WO2025170023A1 (PCT/JP2025/004063)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- performance evaluation
- model
- fine-tuning
- trained
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/16—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/04—Arrangements for maintaining operational condition
Definitions
- This disclosure relates to a communication control method.
- the communication control method is a communication control method for a user device in a mobile communication system.
- the communication control method includes a step in which the user device performs a performance evaluation based on a performance evaluation execution time indicating the time required to perform a performance evaluation of a trained AI/ML model.
- the communication control method also includes a step in which the user device performs fine-tuning of the trained AI/ML model based on the results of the performance evaluation.
- the fine-tuning involves training the trained AI/ML model using training data with a data volume equal to or less than a data volume threshold.
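The UE-side procedure summarized above (performance evaluation followed by fine-tuning on a capped amount of training data) can be sketched in Python. Every function name, parameter, and default value below is an illustrative assumption, not part of the claimed method.

```python
def evaluate_performance(model, eval_data):
    """Return the fraction of evaluation samples the trained model gets right."""
    correct = sum(1 for x, y in eval_data if model(x) == y)
    return correct / len(eval_data)

def fine_tune(model, training_data, data_volume_threshold, train_step):
    """Fine-tune using training data with a volume at or below the threshold."""
    capped = training_data[:data_volume_threshold]  # enforce the data-volume cap
    for x, y in capped:
        model = train_step(model, x, y)
    return model

def ue_procedure(model, eval_data, training_data, train_step,
                 performance_threshold=0.9, data_volume_threshold=100):
    """Evaluate the trained model, then fine-tune it if performance is poor."""
    score = evaluate_performance(model, eval_data)
    if score <= performance_threshold:  # evaluation result triggers fine-tuning
        model = fine_tune(model, training_data, data_volume_threshold, train_step)
    return model
```

The data-volume threshold is modeled here simply as a maximum number of samples; the specification leaves the unit of "data volume" open.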
- a communication control method is a communication control method in a network node of a mobile communication system.
- the communication control method includes a step in which the network node determines performance evaluation execution conditions, which indicate conditions for executing performance evaluation of a trained AI/ML model in a user device, based on a performance evaluation execution time, which indicates the time required to execute performance evaluation of the trained AI/ML model.
- the communication control method also includes a step in which the network node transmits the performance evaluation execution conditions to the user device.
- the user device performs performance evaluation based on the performance evaluation execution conditions, and fine-tunes the trained AI/ML model based on the results of the performance evaluation.
- the fine-tuning involves training the trained AI/ML model using training data with a data volume equal to or less than a data volume threshold.
- a communication control method is a communication control method in a network node of a mobile communication system.
- the communication control method includes a step in which the network node decides to execute performance evaluation of a trained AI/ML model in a user device based on a performance evaluation execution time indicating the time required to execute performance evaluation of the trained AI/ML model.
- the communication control method also includes a step in which the network node transmits a performance evaluation execution instruction to the user device, instructing the user device to execute performance evaluation.
- performance evaluation is performed in the user device in accordance with the performance evaluation execution instruction, and fine-tuning of the trained AI/ML model is performed based on the results of the performance evaluation.
- the fine-tuning also involves training the trained AI/ML model using training data with a data volume equal to or less than a data volume threshold.
- a communication control method is a communication control method for a user device in a mobile communication system.
- the communication control method includes a step in which the user device receives, from a network node, performance evaluation execution conditions indicating conditions for executing a performance evaluation of a trained AI/ML model.
- the communication control method also includes a step in which the user device executes inference using the trained AI/ML model.
- the communication control method further includes a step in which the user device executes performance evaluation in response to detecting a predetermined number of times that the inference result of the trained AI/ML model is equal to or less than a performance threshold.
- the performance evaluation execution conditions include the performance threshold and the predetermined number of times.
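The trigger condition described above (a performance threshold plus a predetermined count, both delivered by the network node as execution conditions) might look like the following sketch. The class and attribute names are assumptions, and since the text does not say whether the detections must be consecutive, the reset on a good result is also an assumption.

```python
class EvaluationTrigger:
    """Counts low inference results and decides when to run a performance
    evaluation of the trained AI/ML model.

    `performance_threshold` and `predetermined_count` correspond to the
    performance evaluation execution conditions received from the network node.
    """

    def __init__(self, performance_threshold, predetermined_count):
        self.performance_threshold = performance_threshold
        self.predetermined_count = predetermined_count
        self.low_count = 0

    def on_inference_result(self, result):
        """Return True when a performance evaluation should be executed."""
        if result <= self.performance_threshold:
            self.low_count += 1
        else:
            # Resetting here assumes consecutive detections; the source does
            # not specify consecutive vs. cumulative counting.
            self.low_count = 0
        if self.low_count >= self.predetermined_count:
            self.low_count = 0
            return True
        return False
```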
- FIG. 1 is a diagram showing an example of the configuration of a mobile communication system according to the first embodiment.
- FIG. 2 is a diagram illustrating an example of the configuration of a UE (user equipment) according to the first embodiment.
- FIG. 3 is a diagram showing an example configuration of a gNB (base station) according to the first embodiment.
- FIG. 4 is a diagram illustrating an example of the configuration of a protocol stack according to the first embodiment.
- FIG. 5 is a diagram illustrating an example of the configuration of a protocol stack according to the first embodiment.
- FIG. 6 is a diagram illustrating an example of the configuration of functional blocks of the AI/ML technology according to the first embodiment.
- FIG. 7 is a diagram illustrating an example of operation in the AI/ML technique according to the first embodiment.
- FIG. 8 is a diagram illustrating an example of an arrangement of functional blocks of the AI/ML technology according to the first embodiment.
- FIG. 9 is a diagram illustrating an example of operation according to the first embodiment.
- FIG. 10 is a diagram illustrating an example of operation according to the first embodiment.
- FIG. 11 is a diagram illustrating an example of a setting message according to the first embodiment.
- FIG. 12 is a diagram illustrating an example of operation according to the first embodiment.
- FIG. 13 is a diagram illustrating an example of operation according to the first embodiment.
- FIG. 14 is a diagram illustrating an example of operation according to the first embodiment.
- FIG. 15 is a diagram illustrating an example of operation according to the first embodiment.
- FIG. 16 is a diagram illustrating an example of operation according to the second embodiment.
- FIG. 17 is a diagram illustrating an example of operation according to the second embodiment.
- FIG. 18 is a diagram illustrating an example of operation according to the second embodiment.
- FIG. 19 is a diagram illustrating an example of operation according to the second embodiment.
- FIG. 20 is a diagram illustrating an example of operation according to the second embodiment.
- FIG. 21 is a diagram illustrating an example of operation according to the third embodiment.
- FIG. 22 is a diagram illustrating an example of operation according to the third embodiment.
- FIG. 23 is a diagram illustrating an example of operation according to the fourth embodiment.
- FIG. 24 is a diagram illustrating an example of operation according to the fifth embodiment.
- FIG. 25 is a diagram illustrating an example of operation according to the fifth embodiment.
- FIG. 26 is a diagram illustrating an example of operation according to the fifth embodiment.
- the purpose of this disclosure is to appropriately perform fine-tuning on trained AI/ML models.
- UE100 is a mobile wireless communication device.
- UE100 may be any device that is used by a user.
- UE100 may be, for example, a mobile phone terminal (including a smartphone), a tablet terminal, a notebook PC, a communication module (including a communication card or chipset), a sensor or a device provided in a sensor, a vehicle or a device provided in a vehicle (Vehicle UE), or an aircraft or a device provided in an aircraft (Aerial UE).
- the NG-RAN 10 includes a base station (called a "gNB" in the 5G system) 200.
- the gNBs 200 are connected to each other via an Xn interface, which is an interface between base stations.
- the gNB 200 manages one or more cells.
- the gNB 200 performs wireless communication with a UE 100 that has established a connection with its own cell.
- the gNB 200 has a radio resource management (RRM) function, a routing function for user data (hereinafter simply referred to as "data”), and a measurement control function for mobility control and scheduling.
- the term "cell” is used to indicate the smallest unit of a wireless communication area.
- the term “cell” is also used to indicate a function or resource for wireless communication with a UE 100.
- One cell belongs to one carrier frequency (hereinafter simply referred to as "frequency").
- gNBs can also connect to the EPC (Evolved Packet Core), which is the LTE core network.
- LTE base stations can also connect to 5GC.
- LTE base stations and gNBs can also be connected via a base station-to-base station interface.
- the 5GC 20 includes an AMF (Access and Mobility Management Function) and a UPF (User Plane Function) 300.
- the AMF performs various mobility controls for UE100.
- the AMF manages the mobility of UE100 by communicating with UE100 using NAS (Non-Access Stratum) signaling.
- the UPF controls data forwarding.
- the AMF and UPF 300 are connected to gNB200 via an NG interface, which is an interface between a base station and a core network.
- the AMF and UPF 300 may be core network devices included in CN20.
- the core network device and gNB200 may collectively be referred to as a network device.
- FIG. 2 is a diagram showing an example configuration of UE100 (user equipment) according to the first embodiment.
- UE100 includes a receiver 110, a transmitter 120, and a controller 130.
- the receiver 110 and transmitter 120 constitute a communication unit that performs wireless communication with gNB200.
- UE100 is an example of a communication device.
- the receiving unit 110 performs various types of reception under the control of the control unit 130.
- the receiving unit 110 includes an antenna and a receiver.
- the receiver converts the radio signal received by the antenna into a baseband signal (received signal) and outputs it to the control unit 130.
- the transmitting unit 120 performs various transmissions under the control of the control unit 130.
- the transmitting unit 120 includes an antenna and a transmitter.
- the transmitter converts the baseband signal (transmission signal) output by the control unit 130 into a radio signal and transmits it from the antenna.
- the control unit 130 performs various types of control and processing in the UE 100. Such processing includes processing for each layer, which will be described later.
- the control unit 130 includes at least one processor and at least one memory.
- the memory stores programs executed by the processor and information used for processing by the processor.
- the processor may include a baseband processor and a CPU (Central Processing Unit).
- the baseband processor performs modulation/demodulation, encoding/decoding, etc. of baseband signals.
- the CPU executes programs stored in the memory to perform various types of processing. Note that the processing or operations performed by the UE 100 may be performed in the control unit 130.
- FIG. 3 is a diagram showing an example configuration of a gNB200 (base station) according to the first embodiment.
- the gNB200 comprises a transmitter 210, a receiver 220, a controller 230, and a backhaul communication unit 250.
- the transmitter 210 and receiver 220 constitute a communication unit that performs wireless communication with the UE100.
- the backhaul communication unit 250 constitutes a network communication unit that performs communication with the CN20.
- the gNB200 is another example of a communication device. Alternatively, the gNB200 may be an example of a network node.
- the transmitting unit 210 performs various transmissions under the control of the control unit 230.
- the transmitting unit 210 includes an antenna and a transmitter.
- the transmitter converts the baseband signal (transmission signal) output by the control unit 230 into a radio signal and transmits it from the antenna.
- the receiving unit 220 performs various types of reception under the control of the control unit 230.
- the receiving unit 220 includes an antenna and a receiver.
- the receiver converts the radio signal received by the antenna into a baseband signal (received signal) and outputs it to the control unit 230.
- the control unit 230 performs various types of control and processing in the gNB 200. Such processing includes processing for each layer, which will be described later.
- the control unit 230 includes at least one processor and at least one memory.
- the memory stores programs executed by the processor and information used in processing by the processor.
- the processor may include a baseband processor and a CPU.
- the baseband processor performs modulation/demodulation, encoding/decoding, etc. of baseband signals.
- the CPU executes programs stored in the memory to perform various types of processing. Note that processing or operations performed in the gNB 200 may be performed by the control unit 230.
- the backhaul communication unit 250 is connected to adjacent base stations via an Xn interface, which is an interface between base stations.
- the backhaul communication unit 250 is connected to the AMF/UPF 300 via an NG interface, which is an interface between a base station and a core network.
- the gNB 200 may be composed of a central unit (CU) and a distributed unit (DU) (i.e., functionally divided), with the two units connected via an F1 interface, which is a fronthaul interface.
- Figure 4 shows an example of the protocol stack configuration for the wireless interface of the user plane that handles data.
- the user plane radio interface protocol includes a physical (PHY) layer, a medium access control (MAC) layer, a radio link control (RLC) layer, a packet data convergence protocol (PDCP) layer, and a service data adaptation protocol (SDAP) layer.
- the PHY layer performs encoding/decoding, modulation/demodulation, antenna mapping/demapping, and resource mapping/demapping.
- Data and control information are transmitted between the PHY layer of UE100 and the PHY layer of gNB200 via a physical channel.
- the PHY layer of UE100 receives downlink control information (DCI) transmitted from gNB200 on the physical downlink control channel (PDCCH).
- UE100 performs blind decoding of the PDCCH using a radio network temporary identifier (RNTI) and acquires successfully decoded DCI as DCI addressed to the UE.
- the DCI transmitted from gNB200 has CRC (cyclic redundancy check) parity bits, scrambled by the RNTI, appended to it.
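The RNTI-scrambled CRC check behind blind decoding can be illustrated with a toy sketch. A real receiver uses the 24-bit CRC polynomials of 3GPP TS 38.212 and scrambles only part of the CRC bits; `zlib.crc32` and the whole-CRC XOR here are stand-ins for illustration only.

```python
import zlib

def attach_scrambled_crc(dci_payload: bytes, rnti: int) -> bytes:
    """Append a CRC whose bits are scrambled (XORed) with the RNTI."""
    crc = zlib.crc32(dci_payload) ^ rnti
    return dci_payload + crc.to_bytes(4, "big")

def blind_decode(candidate: bytes, rnti: int):
    """Return the DCI payload if the CRC check passes for this RNTI, else None."""
    payload, received = candidate[:-4], int.from_bytes(candidate[-4:], "big")
    if (zlib.crc32(payload) ^ rnti) == received:
        return payload  # successfully decoded: DCI addressed to this UE
    return None
```

The UE tries each PDCCH candidate with its own RNTI; only DCI whose scrambled CRC matches is treated as addressed to the UE, which is how blind decoding "acquires successfully decoded DCI as DCI addressed to the UE".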
- UE100 can use a bandwidth narrower than the system bandwidth (i.e., the cell bandwidth).
- gNB200 configures UE100 with a bandwidth part (BWP) consisting of contiguous PRBs (Physical Resource Blocks).
- UE100 transmits and receives data and control signals in the active BWP.
- UE100 may be configured with, for example, up to four BWPs. Each BWP may have a different subcarrier spacing. The BWPs may overlap in frequency.
- gNB200 can specify which BWP to apply via downlink control. This allows gNB200 to dynamically adjust the UE bandwidth according to the data traffic volume of UE100 and the like, thereby reducing UE power consumption.
- gNB200 can configure up to three control resource sets (CORESETs) for each of up to four BWPs on the serving cell.
- a CORESET is a radio resource for control information to be received by UE100. Up to 12 (or more) CORESETs may be configured for UE100 on the serving cell, each with an index from 0 to 11 (or higher).
- a CORESET may consist of six resource blocks (PRBs) in the frequency domain and one, two, or three consecutive orthogonal frequency division multiplexing (OFDM) symbols in the time domain.
- the RLC layer uses the functions of the MAC layer and PHY layer to transmit data to the RLC layer on the receiving side. Data and control information are transmitted between the RLC layer of UE100 and the RLC layer of gNB200 via logical channels.
- the PDCP layer performs header compression/decompression, encryption/decryption, etc.
- the SDAP layer maps IP flows, which are the units by which the core network controls QoS (Quality of Service), to radio bearers, which are the units by which the access stratum (AS) controls QoS. Note that if the RAN is connected to the EPC, SDAP is not necessary.
- Figure 5 shows the protocol stack configuration of the radio interface of the control plane, which handles signaling (control signals).
- the protocol stack for the radio interface of the control plane includes a Radio Resource Control (RRC) layer and a Non-Access Stratum (NAS) instead of the SDAP layer shown in Figure 4.
- RRC signaling for various settings is transmitted between the RRC layer of UE100 and the RRC layer of gNB200.
- the RRC layer controls logical channels, transport channels, and physical channels in accordance with the establishment, re-establishment, and release of radio bearers.
- when there is an RRC connection between the RRC of UE100 and the RRC of gNB200, UE100 is in the RRC connected state.
- when there is no RRC connection between the RRC of UE100 and the RRC of gNB200, UE100 is in the RRC idle state or the RRC inactive state.
- the NAS which is located above the RRC layer, performs session management, mobility management, etc.
- NAS signaling is transmitted between the NAS of UE100 and the NAS of AMF300.
- UE100 also has an application layer, etc.
- the layer below the NAS is called the AS (Access Stratum).
- FIG. 6 is a diagram showing an example of the configuration of functional blocks of the AI/ML technology in the mobile communication system 1 according to the first embodiment.
- the functional block configuration example shown in Figure 6 includes a data collection unit (Data Collection) A1, a model training unit (Model Training) A2, a model inference unit (Inference) A3, a management unit (Management) A5, and a model storage unit (Model Storage) A6.
- the example functional block configuration shown in Figure 6 represents the functional framework of general AI/ML technology. Depending on the assumed use case, some of the blocks (such as the model storage unit A6) may not be included. Furthermore, the functional blocks shown in Figure 6 may be distributed between the UE 100 and a network-side device. Alternatively, some of the blocks (such as the model training unit A2 or the model inference unit A3) may be located in both the UE 100 and the network-side device.
- the data collection unit A1 provides input data to the model learning unit A2, the model inference unit A3, and the management unit A5.
- the input data includes training data for the model learning unit A2, inference data for the model inference unit A3, and monitoring data for the management unit A5.
- Learning data is data that is required as input when an AI/ML model is learning.
- Inference data is data that is required as input when an AI/ML model is making inferences.
- Monitoring data is data that is required as input when managing an AI/ML model.
- data collection may refer to the process of collecting data in a network node, management entity, or UE 100, for example, to train an AI/ML model, manage an AI/ML model, and perform inference on an AI/ML model.
- the model training unit A2 performs AI/ML model training, AI/ML model validation, and AI/ML model testing.
- AI/ML model training is the process of learning an AI/ML model from input/output relationships to obtain a trained AI/ML model that can be used for inference.
- for example, given a linear model y = ax + b, the process of optimizing a (slope) and b (intercept) by providing inputs (x) and outputs (y) may constitute AI/ML model learning.
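The y = ax + b example corresponds to ordinary least squares; a minimal fit of a (slope) and b (intercept) from (x, y) pairs might look like:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = ax + b; returns (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x  # intercept follows from the means
    return a, b
```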
- machine learning is divided into supervised learning, unsupervised learning, and reinforcement learning.
- Supervised learning is a method that uses correct answer data for training data.
- Unsupervised learning is a method that does not use correct answer data for training data. For example, unsupervised learning memorizes feature points from large amounts of training data and determines the correct answer (estimates the range).
- Reinforcement learning is a method that assigns a score to the output result and learns how to maximize that score. Supervised learning is described below, but unsupervised learning or reinforcement learning can equally be applied to the machine learning.
- the model learning unit A2 outputs the trained AI/ML model (Trained Model) obtained by AI/ML model learning to the model recording unit A6.
- the model learning unit A2 also outputs the updated AI/ML model (Updated Model) obtained by relearning the trained AI/ML model to the model recording unit A6.
- AI/ML model learning may be referred to as “model learning” or “learning.”
- the model inference unit A3 outputs inference output data to the management unit A5.
- the model inference unit A3 also receives management instructions from the management unit A5.
- management instructions include selecting an AI/ML model, activating (deactivating) an AI/ML model, switching between AI/ML models, and fallback (performing inference without using an AI/ML model).
- the model inference unit A3 performs model inference in accordance with the management instructions.
- AI/ML model inference is, for example, the process of obtaining a set of outputs from a set of inputs using a trained AI/ML model (or an updated AI/ML model).
- model inference may be the process of obtaining inference output data from inference data using a trained AI/ML model (or an updated AI/ML model).
- AI/ML model inference may be referred to as "model inference” or "inference.”
- an AI/ML model that is currently being trained may be referred to as a training AI/ML model (or an updating AI/ML model).
- when there is no need to distinguish between a training (or updating) AI/ML model and a trained (or updated) AI/ML model, they may simply be referred to as an "AI/ML model."
- the management unit A5 supervises operations on the AI/ML model (selection, activation, deactivation, switching, fallback, etc.).
- the management unit A5 also supervises monitoring of the AI/ML model.
- the management unit A5 can also perform operations to ensure appropriate inference operations based on monitoring data and inference output data.
- the management unit A5 outputs a model transfer and/or model delivery request (Model Transfer/Delivery Request) to the model recording unit A6, and causes the trained (or updated) AI/ML model recorded in the model recording unit A6 to be output to the model inference unit A3.
- the management unit A5 also outputs management instructions to the model inference unit A3 and supervises operations on the AI/ML model.
- the management unit A5 can output performance feedback and a re-learning request to the model learning unit A2, causing the model learning unit A2 to re-learn the AI/ML model (i.e., update the learned AI/ML model).
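The management unit's decision logic described in the bullets above (operate the model, fall back, or request re-learning based on monitoring) can be sketched as follows; the thresholds and action names are assumptions for illustration, not taken from the specification.

```python
def supervise(monitored_performance, target, fallback_floor):
    """Return the action the management unit might take for one monitoring report."""
    if monitored_performance < fallback_floor:
        return "fallback"            # perform inference without the AI/ML model
    if monitored_performance < target:
        return "relearning_request"  # ask the model training unit to retrain
    return "keep"                    # continue using the current trained model
```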
- Figure 7 shows an example of the operation of the AI/ML technology according to the first embodiment.
- the transmitting entity TE is an entity capable of performing model inference and transmitting inference output data to the receiving entity RE.
- the receiving entity RE is an entity capable of receiving inference output data from the transmitting entity TE.
- Model training may be performed in the transmitting entity TE.
- the model training may also be performed in the receiving entity RE.
- the trained AI/ML model may be transmitted from the receiving entity RE to the transmitting entity TE.
- an entity may be, for example, a device.
- the entity may also be a functional block included in the device.
- the entity may also be a hardware block included in the device.
- the transmitting entity TE may be UE100, and the receiving entity RE may be gNB200 or a core network device.
- the transmitting entity TE may be gNB200 or a core network device, and the receiving entity RE may be UE100.
- the transmitting entity TE transmits control data related to AI/ML technology to the receiving entity RE and receives the control data from the receiving entity RE.
- the control data may be an RRC message, which is signaling of the RRC layer (i.e., layer 3).
- the control data may be a MAC Control Element (CE), which is signaling of the MAC layer (i.e., layer 2).
- the control data may be downlink control information (DCI), which is signaling of the PHY layer (i.e., layer 1).
- the downlink signaling may be UE-specific signaling.
- the downlink signaling may be broadcast signaling.
- the control data may be a control message in a control layer (e.g., the AI/ML layer) specialized for artificial intelligence or machine learning.
- the control data may be a NAS message in the NAS layer.
- the control data may include a performance feedback request and/or a re-learning request sent from the management unit A5 to the model learning unit A2.
- the control data may include a model transfer request and/or a model delivery request sent from the management unit A5 to the model recording unit A6.
- the control data may include a management instruction sent from the management unit A5 to the model inference unit A3.
- CSI feedback improvement represents a use case in which AI/ML technology is applied to CSI fed back from UE100 to gNB200, for example.
- CSI is information about the channel state in the downlink between UE100 and gNB200.
- the CSI includes at least one of a channel quality indicator (CQI), a precoding matrix indicator (PMI), and a rank indicator (RI).
- the gNB200 performs, for example, downlink scheduling based on the CSI feedback from UE100.
- Figure 8 is a diagram showing an example of the arrangement of each functional block in "CSI feedback improvement.”
- the data collection unit A1, model learning unit A2, and model inference unit A3 are included in the control unit 130 of the UE 100.
- the data processing unit A4 is included in the control unit 230 of the gNB 200.
- model learning and model inference are performed in the UE 100.
- Figure 8 shows an example in which the transmitting entity TE is the UE 100 and the receiving entity RE is the gNB 200.
- the gNB 200 transmits a reference signal for the UE 100 to estimate the downlink channel state.
- examples of such a reference signal include the CSI reference signal (CSI-RS) and the demodulation reference signal (DMRS).
- UE100 receives a first reference signal from gNB200 using a first resource. Then, UE100 (model learning unit A2) uses learning data including the first reference signal and CSI to derive a learned model for inferring CSI from the reference signal.
- a first reference signal is sometimes referred to as a full CSI-RS.
- the CSI generation unit 131 performs channel estimation using the received signal (CSI-RS) received by the receiving unit 110 to generate CSI.
- the transmitting unit 120 transmits the generated CSI to the gNB 200.
- the model learning unit A2 performs model learning using a set of the received signal (CSI-RS) and CSI as learning data, and derives a learned model for inferring CSI from the received signal (CSI-RS).
- the receiver 110 receives a second reference signal from the gNB 200 using second resources that are fewer than the first resources. Then, the model inference unit A3 uses the trained model to infer CSI as inference result data using the second reference signal as inference data.
- a second reference signal may be referred to as a partial CSI-RS or a punctured CSI-RS.
- the model inference unit A3 inputs the partial CSI-RS received by the receiver 110 as inference data into the trained model and infers CSI from the CSI-RS.
- the transmitter 120 transmits the inferred CSI to the gNB 200.
- this allows gNB200 to reduce (puncture) the CSI-RS when it intends to reduce overhead. It also enables UE100 to cope with situations where radio conditions deteriorate and some of the CSI-RS cannot be received normally.
- Figure 9 shows an example of operation for "CSI feedback improvement" according to the first embodiment.
- gNB200 may notify or configure UE100 with the CSI-RS transmission pattern (puncturing pattern) for inference mode as control data. For example, gNB200 notifies UE100 of the antenna ports and/or time-frequency resources on which CSI-RS will (or will not) be transmitted in inference mode.
- step S12 UE100 starts learning mode.
- step S13 gNB200 transmits full CSI-RS.
- UE100's receiver 110 receives the full CSI-RS, and CSI generator 131 generates (or estimates) CSI based on the full CSI-RS.
- data collector A1 collects full CSI-RS and CSI.
- Model learning unit A2 uses the full CSI-RS and CSI as learning data to create a learned AI/ML model.
- step S14 UE100 transmits the generated CSI to gNB200.
- step S15 when the model learning is completed, UE100 transmits a completion notification to gNB200 indicating that the model learning is completed. UE100 may also transmit a completion notification when the creation of the trained model is completed.
- step S16 in response to receiving the completion notification, gNB200 transmits a switching notification to UE100 to switch from learning mode to inference mode.
- step S17 in response to receiving the switching notification, UE100 switches from learning mode to inference mode.
- step S18 gNB200 transmits partial CSI-RS.
- Receiver 110 of UE100 receives the partial CSI-RS.
- data collector A1 collects the partial CSI-RS.
- Model inference unit A3 inputs the partial CSI-RS as inference data into the trained model, and obtains CSI as the inference result.
- step S19 UE100 feeds back (or transmits) the CSI, which is the inference result, to gNB200 as inference result data.
- UE100 can generate a trained model with a predetermined accuracy or higher by repeating model learning during learning mode. It is expected that the inference results using the trained model generated in this way will also have a predetermined accuracy or higher.
- step S20 if UE100 determines by itself that model learning is necessary, it may transmit a notification indicating that model learning is necessary to gNB200 as control data.
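The two modes of the Figure 9 flow (learning on full CSI-RS, inferring CSI from a partial CSI-RS) can be sketched with a deliberately simple learner; the class, the nearest-neighbour lookup, and the dict representation of punctured resource elements are all assumptions standing in for a real AI/ML model.

```python
class CsiPredictor:
    """Learns (full CSI-RS, CSI) pairs; infers CSI from a partial CSI-RS."""

    def __init__(self):
        self.examples = []  # (full_csi_rs, csi) pairs collected in learning mode

    def learn(self, full_csi_rs, csi):
        """Learning mode: store one (full CSI-RS, generated CSI) training pair."""
        self.examples.append((full_csi_rs, csi))

    def infer(self, partial_csi_rs):
        """Inference mode: partial_csi_rs maps resource-element index ->
        measurement for the non-punctured resource elements only."""
        def distance(full):
            return sum((full[i] - v) ** 2 for i, v in partial_csi_rs.items())
        _, best_csi = min(self.examples, key=lambda ex: distance(ex[0]))
        return best_csi
```

A trained neural model would replace the lookup, but the interface mirrors the flow: steps S13-S14 feed `learn`, and steps S18-S19 correspond to `infer` followed by feeding the result back to gNB200.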
- the training data is "(full) CSI-RS” and "CSI”
- the inference data is "(partial) CSI-RS.”
- the training data and/or the inference data may be referred to as a "dataset.”
- Bit error rate (BER) or block error rate (BLER) (BER (or BLER) may be measured based on CSI-RS, assuming the total number of transmitted bits (or total number of transmitted blocks) is known.)
- (Y3) Moving speed of UE 100 (may be measured by a speed sensor in UE 100)
- What is used as the dataset in machine learning may be configured. For example, the following processing may be performed. That is, UE100 transmits capability information indicating what type of input data it can handle in machine learning to gNB200 as control data.
- the capability information may represent, for example, any of the data or information shown in (Y1) to (Y3).
- the capability information may be information in which learning data and inference data are separately specified.
- gNB200 transmits data type information to be used as the dataset to UE100 as control data.
- the data type information may represent, for example, any of the data or information shown in (Y1) to (Y3).
- the data type information may separately specify data type information used as learning data and data type information used as inference data.
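The capability/data-type exchange above amounts to the gNB picking, separately for learning data and inference data, a data type the UE reported it supports. A minimal sketch with hypothetical type names:

```python
# UE-side capability information: data types it can handle, per role
# (the concrete type names are illustrative assumptions).
ue_capability = {"learning": ["CSI-RS", "BLER"], "inference": ["CSI-RS"]}

def choose_data_types(capability, gnb_preference):
    """gNB-side choice: for each role, the first preferred data type
    that the UE's capability information says it supports."""
    chosen = {}
    for role, preferred in gnb_preference.items():
        supported = [t for t in preferred if t in capability.get(role, [])]
        chosen[role] = supported[0] if supported else None
    return chosen

preference = {"learning": ["BLER", "CSI-RS"], "inference": ["BLER", "CSI-RS"]}
print(choose_data_types(ue_capability, preference))
# -> {'learning': 'BLER', 'inference': 'CSI-RS'}
```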
- FIG. 10 is a diagram showing an example of an operation of the first operation pattern related to model forwarding according to the first embodiment.
- the receiving entity RE will be described as mainly being the UE 100, but the receiving entity RE may be the gNB 200 or the AMF 300.
- the transmitting entity TE will be described as being the gNB 200, but the transmitting entity TE may be the UE 100 or the AMF 300.
- UE100 transmits a message to gNB200 that includes information elements indicating its execution capability for the learning process (or, from another perspective, the execution environment for the learning process).
- gNB200 receives the message.
- the message may be an RRC message (e.g., a "UE Capability” message or a newly defined message (e.g., a "UE AI Capability” message, etc.)).
- the transmitting entity TE may be AMF300, and the message may be a NAS message.
- the message may be a message of the new layer.
- the information element indicating the execution capability for the learning process may be an information element indicating the processor's capability for executing the learning process and/or an information element indicating the memory's capability for executing the learning process.
- the information element indicating the processor's capability may be an information element indicating the product number (or model number) of the AI processor.
- the information element indicating the memory's capability may be an information element indicating the memory capacity.
- the information element indicating the execution capability for the learning process may be an information element indicating the execution capability for the inference process (model inference).
- the information element indicating the execution capability for the inference process may be an information element indicating whether a deep neural network model is supported, or an information element indicating the time (or response time) required to execute the inference process.
- the information element indicating the execution capacity for the learning process may be an information element indicating the execution capacity of the learning process (model learning).
- the information element indicating the execution capacity of the learning process may be an information element indicating the number of learning processes being executed simultaneously, or an information element indicating the processing capacity of the learning process.
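The information elements listed above could be grouped as in this hypothetical container. Field names are illustrative; real RRC information elements would be ASN.1-defined.

```python
from dataclasses import dataclass

@dataclass
class LearningCapability:
    """Hypothetical grouping of the execution-capability information
    elements a UE might report in a "UE AI Capability" message."""
    ai_processor_model: str      # product/model number of the AI processor
    memory_capacity_mb: int      # memory capacity for the learning process
    supports_dnn: bool           # whether a deep neural network model is supported
    inference_time_ms: float     # time required to execute the inference process
    max_parallel_trainings: int  # learning processes executed simultaneously

cap = LearningCapability("npu-x1", 512, True, 4.0, 2)
print(cap.supports_dnn)  # -> True
```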
- step S27 gNB200 determines the model to be configured (or deployed) in UE100 based on the information elements included in the message received in step S26.
- the setting message includes three models (Model #1 to #3). Each model is included as a container in the setting message. However, the setting message may include only one model.
- the setting message further includes, as additional information, three individual additional information pieces (Info #1 to #3) that correspond to each of the three models (Model #1 to #3), and common additional information (Meta-Info) that is commonly associated with all three models (Model #1 to #3). Each of the individual additional information pieces (Info #1 to #3) includes information unique to the corresponding model.
- the common additional information (Meta-Info) includes information common to all models in the setting message.
- the individual additional information may be a model index that indicates an index (index number) assigned to each model.
- the individual additional information may also be model execution conditions that indicate the performance (e.g., processing delay) required to apply (execute) the model.
- the individual additional information or common additional information may be a model application that specifies the function to which the model is to be applied (e.g., "CSI feedback,” "beam management,” “positioning,” etc.).
- the individual additional information or common additional information may also be a model selection criterion that applies (executes) the corresponding model when a specified criterion (e.g., movement speed) is met.
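The setting message structure and criterion-based model selection described above might be sketched as follows. All field names and the speed-based criterion values are assumptions for illustration.

```python
# Setting message with three model containers (Model #1-#3), individual
# additional information per model (index, selection criterion) and
# common additional information (Meta-Info) shared by all models.
setting_message = {
    "models": [
        {"index": 1, "container": b"...", "max_speed_kmh": 30},   # Model #1
        {"index": 2, "container": b"...", "max_speed_kmh": 120},  # Model #2
        {"index": 3, "container": b"...", "max_speed_kmh": 500},  # Model #3
    ],
    "meta_info": {"application": "beam management"},  # common to all models
}

def select_model(message, ue_speed_kmh):
    """Apply the first model whose selection criterion (here, a movement
    speed limit) the UE currently satisfies."""
    for m in message["models"]:
        if ue_speed_kmh <= m["max_speed_kmh"]:
            return m["index"]
    return None

print(select_model(setting_message, 80))  # -> 2
```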
- transferring the trained AI/ML model to the UE 100 may be beneficial, even at the expense of overhead and delay. However, in general, it may not be appropriate in consideration of the storage capacity of the UE 100.
- fine-tuning and/or relearning may replace the transfer of the trained AI/ML model.
- in that case, transfer of the trained AI/ML model may not be necessary. This makes it possible, for example, in mobile communication system 1, to perform appropriate model inference operations using the updated AI/ML model, without having to consider the storage capacity of UE 100 or network overhead and delays.
- 3GPP defines life cycle management (LCM) for AI/ML models. From the perspective of LCM, processes are performed on an AI/ML model in the following order: model learning (i.e., "learning"), model inference (i.e., "inference"), model monitoring, and model update. Although each process is not strictly defined in 3GPP, it is assumed that re-learning is performed at the model update stage after model monitoring. In other words, it is assumed that model monitoring is performed on a trained AI/ML model, and that re-learning of the trained AI/ML model is performed based on the results of model monitoring.
- model learning (i.e., "learning")
- model inference (i.e., "inference")
- model monitoring (i.e., "monitoring")
- model update (i.e., "update")
- the management unit (Management) A5 issues a re-learning request to the model training unit (Model Training) A2, which causes re-learning to be performed in the model training unit A2. Therefore, from the perspective of the functional block configuration shown in Figure 6, after management in the management unit A5, re-learning is performed in the model training unit A2.
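The LCM ordering above (model learning, model inference, model monitoring, model update with re-learning) can be expressed as a minimal control loop. This is purely illustrative; the stub functions stand in for the functional blocks A2 (training), A3 (inference), and A5 (management).

```python
def lcm_cycle(train, infer, monitor, threshold):
    """One pass of the assumed LCM ordering: learning -> inference ->
    monitoring -> update (re-learning only when performance is poor)."""
    model = train(None)               # model learning (A2)
    result = infer(model)             # model inference (A3)
    performance = monitor(result)     # model monitoring
    if performance <= threshold:      # management (A5) issues re-learning request
        model = train(model)          # re-learning at the model update stage
    return model

train = lambda prev: 1 if prev is None else prev + 1  # model "version" counter
infer = lambda model: model
monitor = lambda result: 0.4 if result == 1 else 0.9  # version 1 performs poorly

print(lcm_cycle(train, infer, monitor, threshold=0.5))  # -> 2 (re-learned once)
```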
- Figures 12 and 13 are diagrams showing an example of operation according to the first embodiment.
- Figures 12 and 13 show an example of operation in a UE-side model (a model in which inference is performed on the UE 100 side) in the use case of "Beam management.”
- Figures 12 and 13 also show an example of operation from inference to relearning.
- (Z2) Monitoring is performed on the UE 100 side, and model control of the UE's trained AI/ML model is performed on the gNB 200 side.
- steps S41 and S42 represent the above-described case of (Z1). That is, in step S41, the control unit 130 of UE100 monitors the performance of the inference (step S30) performed by UE100. Then, in step S42, the control unit 130 performs model control based on the monitoring result.
- the control unit 130 may perform model control based on the monitoring result that the inference performance is equal to or less than a performance threshold.
- the control unit 130 may perform re-learning of the trained AI/ML model.
- the performance threshold is a threshold used to determine whether the inference performance (or accuracy) of the trained AI/ML model is good or bad.
- the control unit 130 may compare the inference results (or monitoring results, or the difference between the inference results and the monitoring results) of the trained AI/ML model with a performance threshold to determine the performance of the trained AI/ML model.
- Steps S43 to S45 represent the case of (Z2) described above. That is, in step S43, the control unit 130 of UE100 monitors the performance of the inference (step S30) performed by UE100.
- the transmission unit 120 of UE100 transmits the monitoring results to gNB200.
- the transmission unit 120 may transmit control data including the monitoring results to gNB200.
- the reception unit 220 of gNB200 receives the monitoring results.
- the control unit 230 of gNB200 decides to perform model control based on the monitoring results and instructs UE100 to perform model control.
- the transmission unit 210 of gNB200 transmits the instruction to UE100.
- the instruction may also be transmitted included in the control data.
- the reception unit 110 of UE100 receives the instruction.
- the control unit 130 of the UE 100 may decide to re-learn the trained AI/ML model in accordance with the instruction to execute model control.
- Steps S46 to S49 represent the case of (Z3) described above. That is, in step S46, the control unit 130 of UE100 calculates a performance metric (or performance metrics) related to the performance of the inference (step S30) performed by UE100.
- the performance metric represents, for example, an evaluation index of the trained AI/ML model when inference is performed.
- the performance metric is a value for analyzing the accuracy of the current inference, i.e., the confidence level, and may be squared generalized cosine similarity (SGCS), throughput, block error rate (BLER), etc.
- the performance metric may also include CPU usage, memory usage, inference time, and/or data usage during inference.
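The SGCS metric mentioned above is, for one channel-vector pair, |h·ĥ|² / (‖h‖²‖ĥ‖²). A minimal real-valued sketch (actual channel vectors are complex-valued):

```python
def sgcs(h, h_hat):
    """Squared generalized cosine similarity between the true channel h
    and the inferred channel h_hat; 1.0 means a perfect match."""
    dot = sum(a * b for a, b in zip(h, h_hat))
    norm_h = sum(a * a for a in h)
    norm_hat = sum(b * b for b in h_hat)
    return (dot * dot) / (norm_h * norm_hat)

print(sgcs([1.0, 0.0], [1.0, 0.0]))  # identical vectors -> 1.0
print(sgcs([1.0, 0.0], [0.0, 1.0]))  # orthogonal vectors -> 0.0
```

Note that SGCS is scale-invariant: an estimate that is a scaled copy of the true channel still scores 1.0.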
- the transmission unit 120 of UE100 transmits the performance metric to gNB200.
- the transmission unit 120 may also transmit control data including the performance metric to gNB200.
- the receiver 220 of the gNB 200 receives the performance metric.
- the control unit 230 of the gNB 200 uses the performance metric to monitor the performance of the inference (step S30). If the control unit 230 obtains a detection result based on the monitoring result that the inference performance of the trained AI/ML model on the UE 100 side is below the performance threshold, it instructs the UE 100 to perform model control.
- the model control instruction may be a re-learning instruction, as in step S45.
- the receiver 110 of the UE 100 receives the model control instruction.
- the control unit 130 of the UE 100 may decide to perform re-learning in accordance with the model control instruction.
- fine-tuning and relearning are defined as follows:
- fine-tuning is performed in UE 100 based on the fine-tuning execution time. For example, fine-tuning may be performed when the fine-tuning execution time is equal to or less than the fine-tuning execution time threshold, and not performed when the fine-tuning execution time exceeds that threshold. This allows UE 100 to perform fine-tuning based on a defined criterion, making it possible, for example, to appropriately fine-tune a trained AI/ML model.
- training the entire trained AI/ML model may not always be efficient. By performing fine-tuning, it is possible to train only a portion of the trained AI/ML model and efficiently obtain an updated AI/ML model.
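The two ideas above, fine-tuning only when the execution time is within the threshold and training only a portion of the model, can be sketched together as follows. The layer structure and the choice to keep only the last layer trainable are illustrative assumptions.

```python
def should_fine_tune(exec_time_ms, threshold_ms):
    """Decision rule from the text: fine-tune only when the execution
    time does not exceed the fine-tuning execution time threshold."""
    return exec_time_ms <= threshold_ms

def fine_tune_only_head(layers):
    """Freeze every layer except the last, so only a portion of the
    trained AI/ML model is trained; return the trainable layer names."""
    for layer in layers:
        layer["trainable"] = False
    layers[-1]["trainable"] = True
    return [l["name"] for l in layers if l["trainable"]]

model = [{"name": "encoder"}, {"name": "decoder"}, {"name": "head"}]
if should_fine_tune(exec_time_ms=40.0, threshold_ms=100.0):
    print(fine_tune_only_head(model))  # -> ['head']
```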
- FIGS. 14 and 15 are diagrams showing an example of operation according to the first embodiment.
- FIG. 14 and FIG. 15 show a gNB 200 and a network device 400
- gNB 200 indicates a case where processing is performed by gNB 200 alone among the network devices 400.
- gNB 200 may also be a network node.
- network device 400 indicates a case where processing is performed by any of the network devices 400, including gNB 200.
- FIG. 14 and FIG. 15 are a use case of beam management, and show an example of operation in the case of a UE-side model.
- information, data, messages, etc. are transmitted and received between UE 100 and network device 400.
- This information, etc. may be transmitted and received using a predetermined message according to a predetermined protocol set between UE 100 and network device 400.
- the predetermined message may be control data if network device 400 is a gNB 200, or a NAS message if network device 400 is an AMF.
- in the following description, explicit mention that transmission and reception are performed using a predetermined message may be omitted.
- step S60 the network device 400 transmits (transfers) the trained AI/ML model to the UE 100.
- the receiver 110 of the UE 100 receives the trained AI/ML model.
- step S61 the network device 400 transmits parameters for calculating the fine-tuning execution time (hereinafter, sometimes referred to as "fine-tuning execution time calculation parameters") to the UE 100.
- the parameters for calculating the fine-tuning execution time may include information representing the type of trained AI/ML model (step S60).
- the information representing the type may be, for example, ensemble learning, neural network, etc.
- Ensemble learning is, for example, a machine learning method that combines multiple trained AI/ML models to obtain output.
- Random forest which is one method of ensemble learning, may be indicated as information representing the type of model. Random forest is a machine learning method that uses ensemble learning with decision trees.
- the trained AI/ML model (step S60) and the parameters for calculating the fine-tuning execution time (step S61) may be sent in a single message, or may be sent in separate messages, as shown in Figure 14.
- in step S62, the OTT server 500 may transmit the trained AI/ML model to the UE 100. In step S63, the OTT server 500 may transmit the parameters for calculating the fine-tuning execution time to the UE 100, and may also transmit the above-mentioned sample. Taking into account the processing load on the network side, the OTT server may train the AI/ML model and generate the trained AI/ML model; steps S62 and S63 allow for such cases. The OTT server 500 may perform steps S62 and S63 using messages according to a protocol established between the OTT server 500 and the UE 100. The receiver 110 of the UE 100 receives the trained AI/ML model and the parameters for calculating the fine-tuning execution time.
- step S64 the control unit 130 of the UE 100 calculates the fine-tuning execution time for the trained AI/ML model using the parameters for calculating the fine-tuning execution time.
- the control unit 130 may perform fine-tuning on the trained AI/ML model and obtain the fine-tuning execution time by calculating (or measuring) it through actual measurement.
- the fine-tuning execution time may represent the time from the start to the end of the fine-tuning.
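One conceivable way for the UE to compute (rather than measure) the fine-tuning execution time from the calculation parameters is a per-sample cost table keyed by the model type carried in those parameters. The cost values here are assumed calibration data, not from the text.

```python
# Assumed per-sample fine-tuning cost by model type; "neural_network" and
# "random_forest" mirror the type examples given in the text.
PER_SAMPLE_MS = {"neural_network": 2.0, "random_forest": 0.5}

def fine_tuning_execution_time_ms(model_type, num_samples, num_epochs=1):
    """Estimated time from the start to the end of fine-tuning."""
    return PER_SAMPLE_MS[model_type] * num_samples * num_epochs

print(fine_tuning_execution_time_ms("neural_network", 100, 3))  # -> 600.0
```

The UE could then report this value to the gNB (step S65) without running an actual fine-tuning pass.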
- step S65 the transmitter 120 of the UE 100 transmits the fine-tuning execution time to the gNB 200.
- the receiver 220 of the gNB 200 receives the fine-tuning execution time.
- step S67 management is performed in the mobile communication system 1.
- step S68 the control unit 230 of the gNB 200 determines fine-tuning execution conditions indicating the conditions for performing fine-tuning of the trained AI/ML model in the UE 100, based on the fine-tuning execution time received in step S65.
- the fine-tuning execution conditions may include information regarding model control when fine-tuning is not performed.
- the information regarding model control may be, for example, information indicating re-learning, falling back to legacy (i.e., obtaining output without using a trained AI/ML model), or doing nothing.
- the fine-tuning execution conditions may include a request to transmit the dataset (learning data) used in the fine-tuning. If a transmission request is included, the transmitter 120 of the UE 100 will transmit the dataset used in the fine-tuning to the gNB 200.
- step S69 the transmitter 210 of the gNB 200 transmits the fine-tuning execution conditions to the UE 100.
- the receiver 110 of the UE 100 receives the fine-tuning execution conditions.
- step S80 the transmitter 120 of the UE 100 transmits the performance metric to the gNB 200.
- the receiver 220 of the gNB 200 receives the performance metric.
- fine-tuning may be performed on the UE 100 side. That is, in step S85, the control unit 130 of the UE 100 generates learning data to be used for fine-tuning. In step S86, the control unit 130 uses the learning data to perform fine-tuning on the trained AI/ML model.
- the fine-tuning may be performed on the OTT server 500 side.
- the OTT server 500 may store the trained AI/ML model to be fine-tuned.
- the control unit 130 of the UE 100 generates training data to be used for the fine-tuning.
- the transmission unit 120 of the UE 100 transmits the training data to the OTT server 500.
- the OTT server 500 receives the training data and uses the training data to perform fine-tuning on the trained AI/ML model.
- in the first embodiment, we have described performing fine-tuning while taking into account the time required to perform the fine-tuning.
- in the second embodiment, we will describe evaluating the performance of a trained AI/ML model and performing fine-tuning based on the evaluation results.
- the time required for the performance evaluation is calculated, and a decision is made as to whether or not to perform the performance evaluation based on the time required for the performance evaluation.
- UE100 can detect the case in which a trained AI/ML model obtains optimal inference result data at a specific time or location, but does not necessarily obtain optimal inference result data at other times or locations (this is generally referred to as "overlearning").
- overlearning arises because, when a trained AI/ML model is trained using training data obtained at a specific time or location, optimal inference results are expected only at that specific time or location.
- Performance evaluation can also capture the characteristics of such a trained AI/ML model.
- evaluating a trained AI/ML model is referred to as "performance evaluation."
- the evaluation method used in "performance evaluation” is described as using NMSE, but evaluation methods other than NMSE may also be used.
- the mobile communication system 1 determines whether to perform a performance evaluation based on the performance evaluation execution time required for the performance evaluation. If the mobile communication system 1 determines to perform a performance evaluation, it performs a performance evaluation on the trained AI/ML model and performs fine-tuning based on the evaluation results.
- step S91 the network device 400 transmits performance evaluation time calculation parameters to the UE 100.
- the performance evaluation calculation parameters represent parameters used to calculate the performance evaluation execution time.
- the dataset consists of input data and a correct value.
- the input data from the dataset may be fed into the trained AI/ML model to perform inference, and performance evaluation may be performed on the inference results and the correct values to calculate the performance evaluation execution time.
- the performance evaluation execution time may represent the time from the start of inference with the trained AI/ML model (or the start of evaluation of the trained AI/ML model) to the acquisition of the performance evaluation results for the trained AI/ML model.
- the performance evaluation calculation parameters may include an evaluation method (such as NMSE) used in performance evaluation.
- UE 100 uses this evaluation method to perform performance evaluation and calculate the performance evaluation execution time.
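NMSE, the evaluation method named above, is ‖x − x̂‖² / ‖x‖² over the dataset, and the time spent computing it can serve as the performance evaluation execution time. A minimal sketch:

```python
import time

def nmse(true_vals, predicted):
    """Normalized mean squared error between correct values and the
    trained model's inference results; lower is better."""
    err = sum((t - p) ** 2 for t, p in zip(true_vals, predicted))
    ref = sum(t ** 2 for t in true_vals)
    return err / ref

start = time.perf_counter()
score = nmse([1.0, 2.0, 2.0], [1.0, 2.0, 1.0])
# measured duration as the "performance evaluation execution time"
evaluation_time_s = time.perf_counter() - start
print(round(score, 4))  # -> 0.1111
```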
- network device 400 may transmit samples calculated using the performance evaluation time calculation parameters to UE 100.
- model transfer and transmission of performance evaluation calculation parameters and samples may be performed from the OTT server rather than from the network device 400 (steps S92 and S93).
- step S94 the network device 400 transmits a performance evaluation time calculation dataset to the UE 100.
- the performance evaluation time calculation dataset is a dataset used to calculate the performance evaluation execution time.
- the UE 100 may calculate a performance evaluation index using a mathematical method (e.g., NMSE) and use the calculation time as the performance evaluation execution time, or may perform inference to calculate the performance evaluation execution time.
- the receiving unit 110 of the UE 100 receives the performance evaluation time calculation dataset.
- UE100 may not be able to use the dataset to calculate the performance evaluation execution time.
- UE100 may find that the size of the trained AI/ML model received in step S90 is larger than a predetermined value and cannot be stored in memory. Therefore, in step S95, UE100 may transmit the received dataset to gNB200. This is to allow gNB200 to calculate the performance evaluation execution time.
- a network device 400 other than gNB200 may transmit the dataset to gNB200 to allow gNB200 to calculate the performance evaluation execution time.
- the data set may be transmitted from the OTT server 500 to the UE 100 (step S97).
- the UE 100 may transmit the received data set to the gNB 200 (step S98), causing the gNB 200 to calculate the performance evaluation execution time.
- step S99 the network device 400 transmits the current index to the UE 100.
- the network device 400 can transmit the acquired evaluation result as the current index to the UE 100.
- the UE 100 can use the current index in subsequent performance evaluation (step S117 in Figure 17).
- the performance evaluation execution time may be calculated in gNB200; taking such cases into consideration, the UE may transmit the current index received in step S99 to gNB200 (step S100).
- network devices 400 other than gNB200 may also transmit the current index to gNB200 (step S101).
- the current index may also be transmitted from the OTT server 500 to the UE 100 (step S102), and the UE 100 may transmit the current index received from the OTT server 500 to the gNB 200 in order to cause the gNB 200 to calculate the performance evaluation execution time (step S103).
- step S104 the control unit 130 of the UE 100 calculates the execution time of the performance evaluation using the parameters for calculating the performance evaluation time.
- the control unit 130 may calculate the performance evaluation index using a mathematical method (e.g., NMSE) and use the calculation time as the performance evaluation execution time.
- the control unit 130 may also perform inference to calculate the performance evaluation execution time.
- the control unit 130 may calculate (or measure) the time required to execute the performance evaluation and use the calculated time as the performance evaluation execution time.
- step S105 the transmitter 120 of the UE 100 transmits the performance evaluation execution time to the gNB 200.
- the receiver 220 of the gNB 200 receives the performance evaluation execution time.
- step S110 ( Figure 17) management is performed in the mobile communication system 1.
- the performance evaluation execution condition may include a performance evaluation time threshold.
- performance evaluation may be performed when the performance evaluation execution time is equal to or less than the performance evaluation time threshold, and performance evaluation may not be performed when the performance evaluation execution time is longer than the performance evaluation time threshold.
- the performance evaluation execution condition may include information indicating that performance evaluation is to be performed when the performance evaluation time is equal to or less than the performance evaluation time threshold (or information indicating that performance evaluation is not to be performed when the performance evaluation time is longer than the performance evaluation time threshold).
- step S112 the transmitter 210 of the gNB 200 transmits the performance evaluation execution conditions to the UE 100.
- the receiver 110 of the UE 100 receives the performance evaluation execution conditions.
- the fine-tuning execution conditions according to the second embodiment include a performance evaluation result threshold that indicates a threshold for the evaluation result of the performance evaluation.
- the UE 100 uses the performance evaluation result threshold to determine whether or not to perform fine-tuning.
- the fine-tuning execution conditions according to the second embodiment may include information indicating that fine-tuning is to be performed when the evaluation result of the performance evaluation is equal to or greater than the performance evaluation result threshold (or information indicating that fine-tuning is not to be performed when the evaluation result is less than the performance evaluation result threshold). Alternatively, they may include information instructing that relearning be performed if fine-tuning is not to be performed.
- fine-tuning execution conditions in the second embodiment do not necessarily have to include the fine-tuning execution time threshold described in the first embodiment.
- step S116 the control unit 130 of the UE 100 determines whether or not to perform performance evaluation based on the performance evaluation execution conditions received in step S112. Specifically, the control unit 130 determines whether or not to perform performance evaluation based on the performance evaluation execution time calculated in step S104 and the performance evaluation execution conditions received in step S112. For example, the control unit 130 performs performance evaluation when the performance evaluation execution time is equal to or less than the performance evaluation time threshold included in the performance evaluation execution conditions, and does not perform performance evaluation when the performance evaluation execution time exceeds the performance evaluation time threshold. In the following description, the control unit 130 is assumed to perform performance evaluation.
- step S117 the control unit 130 of the UE 100 performs a performance evaluation of the trained AI/ML model.
- the control unit 130 performs the performance evaluation using an evaluation method (e.g., NMSE) included in the performance evaluation conditions.
- step S118 the control unit 130 of the UE 100 determines whether to perform fine-tuning. Specifically, the control unit 130 determines whether to perform fine-tuning based on the evaluation result of the performance evaluation performed in step S117 and the fine-tuning execution conditions. For example, the control unit 130 performs fine-tuning when the evaluation result of the performance evaluation is equal to or greater than the performance evaluation result threshold (i.e., the evaluation result is good), and does not perform fine-tuning when the evaluation result is less than the performance evaluation result threshold (i.e., the evaluation result is poor).
- the control unit 130 performs fine-tuning when the evaluation result of the performance evaluation is equal to or greater than the performance evaluation result threshold (i.e., the evaluation result is good), and does not perform fine-tuning when the evaluation result is less than the performance evaluation result threshold (i.e., the evaluation result is poor).
- if the evaluation result of the performance evaluation is less than the performance evaluation result threshold, the impact of fine-tuning on the trained AI/ML model (step S115), whose performance is already below the performance threshold, will be greater than a certain level; it is predicted that fine-tuning will not improve the trained AI/ML model, or that over-training will occur, resulting in a decrease in the overall performance of the AI/ML model. In such cases, re-training may be preferable to fine-tuning.
- if the evaluation result is equal to or greater than the performance evaluation result threshold, then even if fine-tuning is performed on the trained AI/ML model, the impact can be kept below a certain level, and it is predicted that performance will be improved.
- the control unit 130 decides to perform fine-tuning in step S118.
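The step S118 decision can be reduced to one comparison. Following the text's convention that an evaluation result at or above the performance evaluation result threshold is "good" (so fine-tuning is expected to help, while re-learning is preferable otherwise), a sketch:

```python
def model_control(evaluation_result, result_threshold):
    """Step S118 decision: a good evaluation result leads to fine-tuning,
    a poor one to re-learning instead (higher result = better, as in the
    text; a metric like NMSE would need the comparison inverted)."""
    return "fine-tune" if evaluation_result >= result_threshold else "re-learn"

print(model_control(0.9, 0.7))  # -> fine-tune
print(model_control(0.4, 0.7))  # -> re-learn
```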
- step S120 the control unit 130 of the UE 100 monitors the inference of the trained AI/ML model.
- step S122 the control unit 230 of the gNB 200 detects, based on the monitoring results, that the performance of the trained AI/ML model is below a performance threshold (i.e., performance is poor). The control unit 230 then decides whether to cause the UE 100 to perform performance evaluation. Specifically, the control unit 230 decides whether to perform performance evaluation based on the performance evaluation execution time received in step S105 ( Figure 16). The decision on whether to perform performance evaluation may be the same as that in step S116 ( Figure 17). That is, the control unit 230 may decide to perform performance evaluation when the performance evaluation execution time received in step S105 is below the performance evaluation time threshold, and may decide not to perform performance evaluation when the performance evaluation execution time received in step S105 exceeds the performance evaluation time threshold. The following description is based on the assumption that the control unit 230 decides to perform performance evaluation based on the performance evaluation execution time.
- the transmitter 210 of the gNB 200 transmits a performance evaluation execution instruction to the UE 100, instructing the UE 100 to perform a performance evaluation.
- the performance evaluation execution instruction may include identification information (e.g., a model ID or function name) of the trained AI/ML model for which performance evaluation is to be performed.
- the performance evaluation execution instruction may also be instruction information instructing the UE 100 not to perform performance evaluation.
- the receiver 110 of the UE 100 receives the performance evaluation execution instruction.
- step S124 the control unit 130 of the UE 100 performs performance evaluation in response to receiving the performance evaluation execution instruction.
- the performance evaluation itself may be performed in the same manner as in step S117 ( Figure 17).
- step S126 the control unit 230 of the gNB 200 determines whether or not to perform fine-tuning in the UE 100 based on the evaluation result of the performance evaluation received in step S125.
- the decision on whether or not to perform fine-tuning may be the same as that in step S118 ( Figure 17). That is, the control unit 230 may decide to perform fine-tuning when the evaluation result received in step S125 is equal to or greater than the performance evaluation result threshold, and may decide not to perform fine-tuning when the evaluation result received in step S125 is less than the performance evaluation result threshold. Alternatively, the control unit 230 may decide to perform re-learning if fine-tuning is not to be performed. In the following, it is assumed that the control unit 230 decides to perform fine-tuning based on the evaluation result.
- step S127 the transmitter 210 of the gNB 200 transmits a fine-tuning execution instruction to the UE 100 to instruct the UE 100 to perform fine-tuning.
- the fine-tuning execution instruction may include identification information (e.g., a model ID or a function name) of the trained AI/ML model for which fine-tuning is to be performed.
- the fine-tuning execution instruction may also be instruction information instructing the UE 100 not to perform fine-tuning.
- the receiver 110 of the UE 100 receives the fine-tuning execution instruction.
- the controller 130 of the UE 100 may decide to perform fine-tuning in response to receiving the fine-tuning execution instruction.
- in step S130, the control unit 130 of the UE 100 calculates a performance metric for the inference of the trained AI/ML model (step S106).
- in step S131, the transmitter 120 of the UE 100 transmits the performance metric to the gNB 200.
- the receiver 220 of the gNB 200 receives the performance metric.
- in step S132, the control unit 230 of the gNB 200 uses the performance metric to monitor the inference being performed by the UE 100 (step S106).
- the control unit 230 detects that the monitoring results are below the performance threshold, i.e., that the performance of the trained AI/ML model is poor.
- in step S133, the control unit 230 of the gNB 200 determines whether or not to perform performance evaluation. This determination may be the same as that in step S122. That is, the control unit 230 may determine whether or not to perform performance evaluation using the performance evaluation execution time and the performance evaluation time threshold.
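As a minimal sketch of this time-based determination (the parameter names and the use of seconds as the unit are assumptions; the disclosure specifies only the comparison itself):

```python
def should_evaluate(execution_time_s, time_threshold_s):
    """Steps S122/S133 sketch: run the performance evaluation only when
    the time it would take is at or below the performance evaluation
    time threshold."""
    return execution_time_s <= time_threshold_s

# An evaluation expected to take 4 s against a 5 s budget is allowed to run
print(should_evaluate(4.0, 5.0))  # -> True
```

In other words, the network refrains from triggering an evaluation whose cost in time would itself be unacceptable.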
- in step S138, the control unit 130 of the UE 100 may decide to execute the fine-tuning.
- (Another Operation Example 1 According to the Second Embodiment)
- the model transfer (step S90), the transmission of the parameters for calculating the performance evaluation time (step S91), the transmission of the data set for calculating the performance evaluation time (step S94), and the transmission of the current index (step S99) need not each use a separate message; the model transfer and at least one of the parameters, the data set, and the index may be performed using a single message.
- the control unit 230 calculates the performance evaluation execution time for the trained AI/ML model on the gNB 200 side, performs monitoring, and when the monitoring result is equal to or less than the performance threshold, determines whether the performance evaluation execution condition is met (or whether the performance evaluation execution time is equal to or less than the performance evaluation time threshold).
- the control unit 130 of the UE 100 may store the performance evaluation execution time in a memory or the like without transmitting it, and after monitoring (step S120), may determine whether or not to perform performance evaluation based on the performance evaluation execution conditions (step S114) received from the gNB 200.
- the transmitter 120 of the UE 100 may transmit the performance evaluation execution time together with the monitoring result in step S121, instead of transmitting the performance evaluation execution time in step S105.
- the UE 100 can transmit the monitoring result and the performance evaluation execution time by one message.
- beam management is used as an example of a use case, but the use case is not limited to beam management.
- the first embodiment can also be applied to CSI feedback improvement (X1.1) or position accuracy improvement (X1.3).
- in steps S68 to S71, we described performing fine-tuning based on the fine-tuning execution time.
- in steps S68 to S71, we described an example in which whether to perform fine-tuning is determined based on the number of times it is detected that the inference results of the trained AI/ML model are below the performance threshold (i.e., the inference accuracy is poor).
- the user device receives fine-tuning execution conditions from a network node (e.g., gNB200) that indicate the conditions for performing fine-tuning of the trained AI/ML model.
- the user device performs inference using the trained AI/ML model.
- the user device performs fine-tuning.
- the fine-tuning execution conditions include a performance threshold and a predetermined number of times.
- fine-tuning is performed in response to detecting a predetermined number of times that the inference result of the trained AI/ML model is below the performance threshold.
- fine-tuning means training the trained AI/ML model using training data with a data volume below a data volume threshold.
- UE 100 performs fine-tuning when it detects, a predetermined number of times, that the inference result of the trained AI/ML model is below the performance threshold, i.e., that the performance of the trained AI/ML model is poor.
- UE100 can determine whether to perform fine-tuning in accordance with instructions from gNB200, making it possible to perform fine-tuning appropriately.
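Since fine-tuning is defined here as training with a data volume at or below a data volume threshold, one way to satisfy that definition is to cap the training set by volume before training. The following is a hypothetical sketch: the greedy selection policy, the function names, and the byte-count volume measure are all assumptions for illustration.

```python
def cap_training_data(samples, volume_of, volume_threshold):
    """Greedily keep training samples until adding one more would push
    the accumulated data volume above the threshold, so the selected
    subset always satisfies: total volume <= volume_threshold."""
    selected, total = [], 0
    for sample in samples:
        volume = volume_of(sample)
        if total + volume > volume_threshold:
            break
        selected.append(sample)
        total += volume
    return selected

# With a 7-byte budget, only the first two samples (2 + 4 bytes) fit
subset = cap_training_data([b"ab", b"cdef", b"ghij"], len, 7)
print(subset)  # -> [b'ab', b'cdef']
```

Any selection policy would do, as long as the total volume of the data actually used for training stays at or below the threshold.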
- FIG. 21 shows an example of operation according to the third embodiment.
- in step S150, the network device 400 transfers (or transmits) the trained AI/ML model to the UE 100.
- the receiver 110 of the UE 100 receives the trained AI/ML model.
- in step S151, the transmitter 210 of the gNB 200 transmits the fine-tuning execution conditions to the UE 100.
- the fine-tuning execution conditions include the above-mentioned performance threshold.
- the performance threshold is, for example, a threshold for determining the performance of the trained AI/ML model.
- UE 100 determines that the performance of the trained AI/ML model is poor (in the third embodiment, that the inference accuracy is poor) when the inference result of the trained AI/ML model is equal to or below the performance threshold, and determines that the performance is good (that the inference accuracy is good) when the inference result is greater than the performance threshold.
- the inference result may include, for example, the CPU usage rate or memory usage rate when the inference was performed, the amount of inference data used when the inference was performed, or the amount of inference result data.
- the performance threshold may be a threshold corresponding to each indicator of the inference result.
- the fine-tuning execution conditions include a predetermined number of times representing a threshold for the number of times.
- UE100 executes fine-tuning when it detects that the inference result of the trained AI/ML model is below the performance threshold a predetermined number of times.
- the predetermined number of times may be a consecutive number of times or a cumulative number of times. In the case of a cumulative number of times, the fine-tuning execution conditions may also include a period of time.
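The trigger condition above (a predetermined number of detections, counted either consecutively or cumulatively within a period) can be sketched as follows. The class shape, parameter names, and timestamp handling are assumptions for illustration, not part of the disclosure.

```python
class DetectionCounter:
    """Illustrative trigger from the third embodiment: fire once the
    inference result has been found at or below the performance
    threshold a predetermined number of times, counted either as a
    consecutive run or cumulatively within a time window."""

    def __init__(self, n_times, consecutive=True, period_s=None):
        self.n_times = n_times          # predetermined number of times
        self.consecutive = consecutive  # consecutive run vs. cumulative count
        self.period_s = period_s        # optional window for cumulative counting
        self.events = []                # timestamps of "poor" detections

    def observe(self, result, threshold, now_s):
        """Record one inference result; return True when the
        fine-tuning (or performance evaluation) trigger fires."""
        if result <= threshold:
            self.events.append(now_s)
        elif self.consecutive:
            self.events.clear()  # a good result breaks a consecutive run
        if self.period_s is not None:
            # cumulative counting only considers events inside the window
            self.events = [t for t in self.events if now_s - t <= self.period_s]
        return len(self.events) >= self.n_times

# Three consecutive poor results (<= 0.5) fire the trigger
c = DetectionCounter(3)
results = [c.observe(r, 0.5, t) for t, r in enumerate([0.2, 0.3, 0.1])]
print(results)  # -> [False, False, True]
```

A good result between poor ones resets a consecutive counter but not a cumulative one, which matches the distinction the text draws between the two counting modes.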
- the fine-tuning execution condition may include a report instruction (or report request) that instructs (or requests) a report on whether or not fine-tuning has been performed.
- UE 100 transmits information indicating that fine-tuning has been performed to network device 400 in accordance with the report instruction.
- the report instruction may include a destination indicating to which node the report should be sent.
- the report instruction may include information instructing transmission of the inference data set used for inference, along with whether or not fine-tuning has been performed.
- the fine-tuning execution conditions may include the same information as the fine-tuning execution conditions described in the first embodiment (or second embodiment).
- in step S152-1, the control unit 130 of the UE 100 performs inference using the trained AI/ML model received in step S150.
- in step S153-1, the control unit 130 of the UE 100 evaluates the inference result.
- step S153-1 may be performed in the same manner as the monitoring (step S70) in the first embodiment.
- in step S154-1, the control unit 130 of the UE 100 detects that the inference result is below the performance threshold.
- the control unit 130 of the UE 100 performs steps S152-1 through S154-1 n times (n is a natural number) (steps S152-n through S154-n).
- in step S156, the control unit 130 of the UE 100 performs fine-tuning on the trained AI/ML model received in step S150 because the fine-tuning execution conditions are met (the inference result has been detected to be below the performance threshold a predetermined number of times).
- in step S157, the transmitter 120 of the UE 100 transmits information indicating that fine-tuning has been performed to the gNB 200.
- the transmitter 120 may transmit the information in accordance with a reporting instruction included in the fine-tuning execution conditions.
- in step S158, the transmitter 120 of the UE 100 transmits to the network device 400 information indicating that fine-tuning has been performed and a data set of the inference data used in the inference.
- the transmitter 120 may transmit this information in accordance with a reporting instruction included in the fine-tuning execution conditions.
- the performance evaluation described in the second embodiment can also be applied to the third embodiment. That is, in the UE 100, whether or not to perform performance evaluation may be determined based on the number of times that the inference result of the trained AI/ML model is detected to be equal to or less than the performance threshold.
- the user equipment receives performance evaluation execution conditions from a network node (e.g., gNB200) that indicate the conditions for performing performance evaluation of the trained AI/ML model.
- the user equipment performs inference using the trained AI/ML model.
- the user equipment performs performance evaluation in response to detecting a predetermined number of times that the inference result of the trained AI/ML model is below the performance threshold.
- the performance evaluation execution conditions include the performance threshold and the predetermined number of times.
- UE 100 performs performance evaluation when it detects, a predetermined number of times, that the inference result of the trained AI/ML model is below the performance threshold, i.e., that the performance of the trained AI/ML model is poor.
- UE100 can determine whether or not to perform performance evaluation in accordance with instructions from gNB200, making it possible to appropriately perform performance evaluation.
- FIG. 22 shows another example of operation according to the third embodiment.
- the transmitter 210 of gNB 200 transmits the performance evaluation execution conditions to UE 100 in step S160.
- the receiver 110 of UE 100 receives the performance evaluation execution conditions.
- the performance evaluation execution conditions include a performance threshold.
- the performance threshold is, for example, a threshold for determining the inference performance of the trained AI/ML model.
- the performance determination method may be the same as in the third embodiment.
- the performance evaluation execution conditions include a predetermined number of times that represents a threshold for the number of times.
- UE100 executes performance evaluation, for example, when it detects that the inference result of the trained AI/ML model is below the performance threshold a predetermined number of times.
- the predetermined number of times may be a consecutive number of times or a cumulative number of times. In the case of a cumulative number of times, the performance evaluation conditions may also include a period of time.
- the performance evaluation execution conditions may include a reporting instruction to report whether or not performance evaluation has been performed.
- the reporting instruction may include information to report the results of the performance evaluation as the report content.
- the performance evaluation execution conditions may include the evaluation method (e.g., NMSE) to be used when performing the performance evaluation.
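NMSE is named only as an example evaluation method. The conventional definition (stated here for illustration, not quoted from this disclosure) normalizes the mean squared prediction error by the mean squared magnitude of the reference values, so that lower values indicate better performance:

```python
def nmse(predicted, actual):
    """Normalized mean squared error: the squared prediction error
    summed over all samples, divided by the summed squared magnitude
    of the actual values (lower is better)."""
    num = sum((p - a) ** 2 for p, a in zip(predicted, actual))
    den = sum(a ** 2 for a in actual)
    return num / den

# A perfect prediction gives an NMSE of 0.0
print(nmse([1.0, 2.0], [1.0, 2.0]))  # -> 0.0
```

The UE would compare such a metric against the performance evaluation result threshold when reporting back to the network.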
- the performance evaluation execution conditions may include identification information (e.g., model ID or function name) of the trained AI/ML model for which performance evaluation is to be performed.
- in step S164, the control unit 130 of the UE 100 detects that the inference result is below the performance threshold a predetermined number of times (e.g., n times).
- a user device (e.g., UE 100) performs fine-tuning of a first trained AI/ML model (e.g., a trained AI/ML model on the UE 100 side) based on a first time required to perform fine-tuning of the first trained AI/ML model, a second time required to perform a performance evaluation of the first trained AI/ML model, and the evaluation results of the performance evaluation.
- fine-tuning means training the first trained AI/ML model using training data with a data volume equal to or less than a data volume threshold.
- fine-tuning may be performed when the first time is equal to or less than the fine-tuning execution time threshold, the second time is equal to or less than the performance evaluation time threshold, and the evaluation result is equal to or greater than the performance threshold, and may be skipped otherwise. Therefore, in the mobile communication system 1, fine-tuning is performed only after these determinations, making it possible to appropriately fine-tune the trained AI/ML model.
- in step S215, the control unit 130 of the UE 100 determines whether or not to perform fine-tuning.
- the determination of whether or not to perform fine-tuning may itself be the same as in step S212. That is, the control unit 130 may perform fine-tuning when the performance evaluation execution time is equal to or less than the performance evaluation time threshold (i.e., performance evaluation is to be performed), the fine-tuning execution time is equal to or less than the fine-tuning execution time threshold, and the evaluation result from the performance evaluation is equal to or greater than the performance evaluation result threshold (i.e., the evaluation result is good).
- here, the control unit 130 determines that these conditions are met and decides to perform fine-tuning.
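The combined check of step S215 can be sketched as a single predicate. The parameter names are assumptions; the three comparisons follow the conditions stated above.

```python
def should_fine_tune(perf_eval_time, perf_eval_time_thr,
                     fine_tune_time, fine_tune_time_thr,
                     eval_result, eval_result_thr):
    """Step S215 sketch: fine-tune only when (1) the performance
    evaluation was cheap enough to run, (2) fine-tuning itself would
    complete within its time budget, and (3) the evaluation result is
    good (at or above the performance evaluation result threshold)."""
    return (perf_eval_time <= perf_eval_time_thr
            and fine_tune_time <= fine_tune_time_thr
            and eval_result >= eval_result_thr)

# All three conditions hold, so fine-tuning proceeds
print(should_fine_tune(1.0, 2.0, 3.0, 4.0, 0.9, 0.7))  # -> True
```

If any one of the three conditions fails, the UE refrains from fine-tuning (and, per the earlier embodiments, the network may instead fall back to re-learning).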
- the network-side model can also be implemented by replacing "UE 100" with "network side".
- the model transfer (step S180), the parameter transmission (steps S182 and S187), the data set transmission (step S188), and the current indicator transmission (step S189)
- the third embodiment can also be applied to the fifth embodiment. That is, in the mobile communication system 1, whether to perform fine-tuning may be determined based on the number of times that the inference result (or monitoring result) of the trained AI/ML model is detected to be equal to or lower than the performance threshold (i.e., the inference accuracy is poor). Specifically, steps S151 to S158 shown in FIG. 21 may be executed instead of steps S211 to S215 in FIG. 26.
- gNB200 may transmit reporting conditions to UE100, and UE100 may transmit the recorded log to gNB200 in accordance with the reporting conditions.
- step S171 (transmission of reporting conditions) in FIG. 23 may be executed between step S180 (model transfer) in FIG. 24 and step S200 (inference) in FIG. 25, and step S175 (log recording) and subsequent steps in FIG. 23 may be executed after step S216 (fine-tuning) in FIG. 26.
- the base station is an NR base station (gNB), but the base station may also be an LTE base station (eNB) or a 6G base station. Furthermore, the base station may also be a relay node such as an IAB (Integrated Access and Backhaul) node. The base station may also be a DU of an IAB node. Furthermore, UE100 may also be an MT (Mobile Termination) of an IAB node.
- UE100 may be a terminal function unit (a type of communication module) that allows a base station to control a repeater that relays signals.
- a terminal function unit is referred to as an MT.
- examples of MT include NCR (Network Controlled Repeater)-MT and RIS (Reconfigurable Intelligent Surface)-MT.
- network node primarily refers to a base station, but may also refer to a core network device or part of a base station (CU, DU, or RU).
- a network node may also be composed of a combination of at least part of a core network device and at least part of a base station.
- a program may be provided that causes a computer to execute each process performed by UE100, gNB200, or network device 400.
- the program may be recorded on a computer-readable medium.
- the program can be installed on a computer.
- the computer-readable medium on which the program is recorded may be a non-transitory recording medium.
- the non-transitory recording medium is not particularly limited, but may be, for example, a CD-ROM and/or DVD-ROM.
- circuits that execute each process performed by UE100, gNB200, or network device 400 may be integrated, and at least a portion of UE100, gNB200, or network device 400 may be configured as a semiconductor integrated circuit (chipset, SoC: System on a chip).
- the terms “based on” and “depending on/in response to” do not mean “based only on” or “depending only on,” unless expressly stated otherwise.
- the term “based on” means both “based only on” and “based at least in part on.”
- the term “depending on” means both “depending only on” and “depending at least in part on.”
- the terms “include,” “comprise,” and variations thereof do not mean including only the listed items, but may mean including only the listed items or including additional items in addition to the listed items. Additionally, as used in this disclosure, the term “or” is not intended to mean an exclusive or.
- any reference to elements using designations such as “first,” “second,” etc., as used in this disclosure does not generally limit the quantity or order of those elements. These designations may be used herein as a convenient method of distinguishing between two or more elements. Thus, a reference to a first and a second element does not imply that only two elements may be employed therein, or that the first element must precede the second element in some way.
- where articles such as "a," "an," and "the" are added by translation into English, these articles shall include the plural unless the context clearly indicates otherwise.
- UE 100 or base station 200 may be implemented in circuitry or processing circuitry, including general-purpose processors, application-specific processors, integrated circuits, ASICs (Application Specific Integrated Circuits), CPUs (Central Processing Units), conventional circuits, and/or combinations thereof, programmed to perform the described functions.
- a processor includes transistors and/or other circuits and is considered to be circuitry or processing circuitry.
- a processor may also be a programmed processor that executes programs stored in memory.
- a circuit, unit, or means refers to hardware that is programmed to realize or executes the described functions.
- the hardware may be any hardware disclosed in this specification or any hardware known to be programmed to realize or execute the described functions.
- for example, the hardware may be a processor, which is considered to be a type of circuitry.
- the circuitry, means, or unit is the combination of the hardware and the software used to configure the hardware and/or processor.
- a communication control method in a user device of a mobile communication system, comprising: a step in which the user device performs the performance evaluation based on a performance evaluation execution time indicating a time required to perform the performance evaluation of the trained AI/ML model; and a step in which the user device performs fine-tuning of the trained AI/ML model based on the evaluation result of the performance evaluation, wherein the fine-tuning is to train the trained AI/ML model using training data with a data volume equal to or less than a data volume threshold.
- the fine-tuning is to train the trained AI/ML model using the training data, and further to perform the training on layers that are equal to or less than a layer threshold and include a final layer among multiple layers up to the final layer that constitute the trained AI/ML model.
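The layer-limited training in this note (training at most a layer-threshold number of layers that include the final layer, while the earlier layers are left untouched) can be illustrated by selecting the trailing layers of the model. The list representation of layers and the function name are assumptions for illustration.

```python
def layers_to_train(layers, layer_threshold):
    """Pick at most `layer_threshold` trailing layers, always including
    the final layer; the remaining (earlier) layers stay frozen during
    fine-tuning."""
    if layer_threshold < 1:
        raise ValueError("at least the final layer must be trained")
    return layers[-min(layer_threshold, len(layers)):]

# Train only the last 2 of 4 layers; "l1" and "l2" remain frozen
print(layers_to_train(["l1", "l2", "l3", "l4"], 2))  # -> ['l3', 'l4']
```

Restricting training to the final few layers is what keeps the fine-tuning step lightweight relative to full re-learning of the model.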
- the method further comprises receiving, by the user equipment, the trained AI/ML model and parameters for calculating the performance evaluation execution time from the network node;
- the communication control method according to any one of Supplementary Note 1 to Supplementary Note 3, wherein the calculating step includes a step in which the user device calculates the performance evaluation execution time using the parameters.
- the method further comprises receiving, by the user equipment, a performance evaluation execution condition indicating a condition for executing the performance evaluation from the network node.
- the method further comprises receiving, by the user equipment, a performance evaluation execution instruction from the network node, the performance evaluation execution instruction instructing the user equipment to execute the performance evaluation.
- the method further comprises receiving, by the user equipment, a fine-tuning execution condition indicating a condition for performing the fine-tuning from a network node;
- the communication control method according to any one of Supplementary Note 1 to Supplementary Note 6, wherein the performing step includes a step of the user device performing the fine-tuning based on the evaluation result and the fine-tuning execution condition.
- a communication control method in a network node of a mobile communication system, comprising: a step in which the network node determines performance evaluation execution conditions indicating conditions for executing the performance evaluation of the trained AI/ML model in a user device, based on a performance evaluation execution time indicating the time required to execute the performance evaluation of the trained AI/ML model; and a step in which the network node transmits the performance evaluation execution conditions to the user device, wherein, in the user device, the performance evaluation is performed based on the performance evaluation execution conditions, and fine-tuning of the trained AI/ML model is performed based on the evaluation result of the performance evaluation, and wherein the fine-tuning is to train the trained AI/ML model using training data with a data volume equal to or less than a data volume threshold.
- a communication control method in a network node of a mobile communication system, comprising: a step in which the network node determines, based on a performance evaluation execution time indicating the time required to perform performance evaluation of the trained AI/ML model, to perform the performance evaluation of the trained AI/ML model in a user device; and a step in which the network node transmits, to the user device, a performance evaluation execution instruction instructing the user device to execute the performance evaluation, wherein, in the user device, the performance evaluation is performed in accordance with the performance evaluation execution instruction, and fine-tuning of the trained AI/ML model is performed based on the evaluation result of the performance evaluation, and wherein the fine-tuning is to train the trained AI/ML model using training data with a data volume equal to or less than a data volume threshold.
- the transmitting step includes: receiving, by the network node, an evaluation result of the performance evaluation from the user equipment; and deciding, by the network node, to perform fine-tuning based on the evaluation result.
- the communication control method according to any one of Supplementary Note 1 to Supplementary Note 13, further comprising the step of: the network node transmitting, to the user equipment, a fine-tuning execution instruction instructing the user equipment to execute fine-tuning.
- a communication control method in a user device of a mobile communication system, comprising: a step in which the user device receives, from a network node, performance evaluation execution conditions indicating conditions for executing performance evaluation of the trained AI/ML model; a step in which the user device performs inference using the trained AI/ML model; and a step in which the user device performs the performance evaluation in response to detecting a predetermined number of times that the inference result of the trained AI/ML model is equal to or less than a performance threshold, wherein the performance evaluation execution conditions include the performance threshold and the predetermined number of times.
- the network node further comprises a step of transmitting a fine-tuning execution instruction to the user device, the fine-tuning execution instruction instructing the user device to perform fine-tuning of the trained AI/ML model;
- the communication control method according to any one of Supplementary Note 1 to Supplementary Note 18, wherein the fine-tuning is to train the trained AI/ML model using training data having a data amount equal to or less than a data amount threshold.
- 1: Mobile communication system, 20: 5GC (CN), 100: UE, 110: Receiving unit, 120: Transmitting unit, 130: Control unit, 200: gNB, 210: Transmitter, 220: Receiver, 230: Controller, 400: Network device, 500: OTT server
Abstract
A communication control method according to one embodiment is a communication control method for a user device in a mobile communication system. The communication control method includes a step in which the user device performs a performance evaluation based on a performance evaluation execution time indicating the time required to perform a performance evaluation of a trained AI/ML model. The communication control method further includes a step in which the user device performs fine-tuning of the trained AI/ML model based on the evaluation result of the performance evaluation. The fine-tuning is training the trained AI/ML model using training data with a data volume equal to or less than a data volume threshold.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2024018865 | 2024-02-09 | ||
| JP2024-018865 | 2024-02-09 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025170023A1 true WO2025170023A1 (fr) | 2025-08-14 |
Family
ID=96700110
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2025/004063 Pending WO2025170023A1 (fr) | 2024-02-09 | 2025-02-07 | Procédé de commande de communication |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025170023A1 (fr) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2021084623A1 (fr) * | 2019-10-29 | 2021-05-06 | 富士通株式会社 | Programme de suppression de dégradation, procédé de suppression de dégradation et dispositif de suppression de dégradation |
| WO2022058020A1 (fr) * | 2020-09-18 | 2022-03-24 | Nokia Technologies Oy | Évaluation et commande de modèles prédictifs d'apprentissage machine dans des réseaux mobiles |
| WO2022237822A1 (fr) * | 2021-05-11 | 2022-11-17 | 维沃移动通信有限公司 | Procédé d'acquisition d'ensemble de données de formation, procédé de transmission sans fil, appareil et dispositif de communication |
| JP2023503111A (ja) * | 2019-11-22 | 2023-01-26 | ホアウェイ・テクノロジーズ・カンパニー・リミテッド | 個人向けに調整されたエアインターフェース |
| JP2023170326A (ja) * | 2022-05-19 | 2023-12-01 | 株式会社日立製作所 | 機械学習モデル管理装置及びベースモデル選定方法 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25752246; Country of ref document: EP; Kind code of ref document: A1 |