
WO2025170021A1 - Communication control method - Google Patents

Communication control method

Info

Publication number
WO2025170021A1
Authority
WO
WIPO (PCT)
Prior art keywords
fine
tuning
model
trained
performance evaluation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2025/004061
Other languages
English (en)
Japanese (ja)
Inventor
光孝 秦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kyocera Corp
Original Assignee
Kyocera Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kyocera Corp filed Critical Kyocera Corp
Publication of WO2025170021A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/16: Arrangements for maintenance, administration or management of data switching networks using machine learning or artificial intelligence
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00: Supervisory, monitoring or testing arrangements
    • H04W 24/04: Arrangements for maintaining operational condition

Definitions

  • a communication control method is a communication control method in a network node of a mobile communication system.
  • the communication control method includes a step in which the network node decides to perform fine-tuning of the trained AI/ML model in the user device based on a fine-tuning execution time indicating the time required to perform fine-tuning of the trained AI/ML model.
  • the communication control method also includes a step in which the network node sends a fine-tuning execution instruction to the user device, instructing the user device to perform fine-tuning.
  • the fine-tuning involves training the trained AI/ML model using training data with a data volume equal to or less than a data volume threshold.
  • a communication control method is a communication control method for a user device in a mobile communication system.
  • the communication control method includes a step in which the user device receives, from a network node, fine-tuning execution conditions indicating conditions for performing fine-tuning of a trained AI/ML model.
  • the communication control method also includes a step in which the user device performs inference using the trained AI/ML model.
  • the communication control method further includes a step in which the user device performs fine-tuning.
  • the fine-tuning execution conditions include a performance threshold and a predetermined number of times.
  • the receiving unit 110 performs various types of reception under the control of the control unit 130.
  • the receiving unit 110 includes an antenna and a receiver.
  • the receiver converts the radio signal received by the antenna into a baseband signal (received signal) and outputs it to the control unit 130.
  • the control unit 130 performs various types of control and processing in the UE 100. Such processing includes processing for each layer, which will be described later.
  • the control unit 130 includes at least one processor and at least one memory.
  • the memory stores programs executed by the processor and information used for processing by the processor.
  • the processor may include a baseband processor and a CPU (Central Processing Unit).
  • the baseband processor performs modulation/demodulation, encoding/decoding, etc. of baseband signals.
  • the CPU executes programs stored in the memory to perform various types of processing. Note that the processing or operations performed by the UE 100 may be performed in the control unit 130.
  • FIG. 3 is a diagram showing an example configuration of a gNB200 (base station) according to the first embodiment.
  • the gNB200 comprises a transmitter 210, a receiver 220, a controller 230, and a backhaul communication unit 250.
  • the transmitter 210 and receiver 220 constitute a communication unit that performs wireless communication with the UE100.
  • the backhaul communication unit 250 constitutes a network communication unit that performs communication with the CN20.
  • the gNB200 is another example of a communication device. Alternatively, the gNB200 may be an example of a network node.
  • the transmitting unit 210 performs various transmissions under the control of the control unit 230.
  • the transmitting unit 210 includes an antenna and a transmitter.
  • the transmitter converts the baseband signal (transmission signal) output by the control unit 230 into a radio signal and transmits it from the antenna.
  • the backhaul communication unit 250 is connected to adjacent base stations via an Xn interface, which is an interface between base stations.
  • the backhaul communication unit 250 is connected to the AMF/UPF 300 via an NG interface, which is an interface between a base station and a core network.
  • the gNB 200 may be composed of a central unit (CU) and a distributed unit (DU) (i.e., functionally divided), with the two units connected via an F1 interface, which is a fronthaul interface.
  • Figure 4 shows an example of the protocol stack configuration for the wireless interface of the user plane that handles data.
  • the user plane radio interface protocol includes a physical (PHY) layer, a medium access control (MAC) layer, a radio link control (RLC) layer, a packet data convergence protocol (PDCP) layer, and a service data adaptation protocol (SDAP) layer.
  • the PHY layer performs encoding/decoding, modulation/demodulation, antenna mapping/demapping, and resource mapping/demapping.
  • Data and control information are transmitted between the PHY layer of UE100 and the PHY layer of gNB200 via a physical channel.
  • the PHY layer of UE100 receives downlink control information (DCI) transmitted from gNB200 on the physical downlink control channel (PDCCH).
  • UE100 performs blind decoding of the PDCCH using a radio network temporary identifier (RNTI) and acquires successfully decoded DCI as DCI addressed to the UE.
  • the DCI transmitted from gNB200 has CRC (Cyclic Redundancy Code) parity bits scrambled by the RNTI added.
  • UE100 can use a bandwidth narrower than the system bandwidth (i.e., the cell bandwidth).
  • gNB200 configures UE100 with a bandwidth part (BWP) consisting of contiguous PRBs (Physical Resource Blocks).
  • UE100 transmits and receives data and control signals in the active BWP.
  • UE100 may be configured with, for example, up to four BWPs. Each BWP may have a different subcarrier spacing. The BWPs may overlap in frequency.
  • gNB200 can specify which BWP to apply by controlling the downlink. This allows gNB200 to dynamically adjust the UE bandwidth according to the amount of data traffic of UE100, etc., thereby reducing UE power consumption.
  • gNB200 can configure up to three control resource sets (CORESETs) for each of up to four BWPs on the serving cell.
  • a CORESET is a radio resource for control information to be received by UE100. Up to 12 or more CORESETs may be configured on the serving cell for UE100, each with an index of 0 to 11 or higher.
  • a CORESET may consist of six resource blocks (PRBs) and one, two, or three consecutive Orthogonal Frequency Division Multiplex (OFDM) symbols in the time domain.
  • the MAC layer performs data priority control, retransmission processing using Hybrid Automatic Repeat reQuest (HARQ), random access procedures, etc.
  • Data and control information are transmitted between the MAC layer of UE100 and the MAC layer of gNB200 via a transport channel.
  • the MAC layer of gNB200 includes a scheduler. The scheduler determines the uplink and downlink transport format (transport block size, modulation and coding scheme (MCS)) and the resource blocks to be allocated to UE100.
  • the RLC layer uses the functions of the MAC layer and PHY layer to transmit data to the RLC layer on the receiving side. Data and control information are transmitted between the RLC layer of UE100 and the RLC layer of gNB200 via logical channels.
  • the PDCP layer performs header compression/decompression, encryption/decryption, etc.
  • the SDAP layer maps IP flows, which are the units by which the core network controls QoS (Quality of Service), to radio bearers, which are the units by which the access stratum (AS) controls QoS. Note that if the RAN is connected to the EPC, SDAP is not necessary.
  • the protocol stack for the radio interface of the control plane includes a Radio Resource Control (RRC) layer and a Non-Access Stratum (NAS) instead of the SDAP layer shown in Figure 4.
  • RRC signaling for various settings is transmitted between the RRC layer of UE100 and the RRC layer of gNB200.
  • the RRC layer controls logical channels, transport channels, and physical channels in accordance with the establishment, re-establishment, and release of radio bearers.
  • when there is an RRC connection between the RRC of UE100 and the RRC of gNB200, UE100 is in the RRC connected state.
  • when there is no RRC connection between the RRC of UE100 and the RRC of gNB200, UE100 is in the RRC idle state or the RRC inactive state.
  • the NAS which is located above the RRC layer, performs session management, mobility management, etc.
  • NAS signaling is transmitted between the NAS of UE100 and the NAS of AMF300.
  • UE100 also has an application layer, etc.
  • the layer below the NAS is called the AS (Access Stratum).
  • FIG. 6 is a diagram showing an example of the configuration of functional blocks of the AI/ML technology in the mobile communication system 1 according to the first embodiment.
  • the functional block configuration example shown in Figure 6 includes a data collection unit (Data Collection) A1, a model training unit (Model Training) A2, a model inference unit (Inference) A3, a management unit (Management) A5, and a model storage unit (Model Storage) A6.
  • the example functional block configuration shown in Figure 6 represents the functional framework of general AI/ML technology. Therefore, depending on the assumed use case, some blocks (such as the model storage unit A6) may not be included. Furthermore, the functional blocks shown in Figure 6 may be distributed between the UE 100 and the network-side device. Alternatively, some functions (such as the model training unit A2 or the model inference unit A3) may be located in both the UE 100 and the network-side device.
  • the data collection unit A1 provides input data to the model learning unit A2, the model inference unit A3, and the management unit A5.
  • the input data includes training data for the model learning unit A2, inference data for the model inference unit A3, and monitoring data for the management unit A5.
  • Learning data is data that is required as input when an AI/ML model is learning.
  • Inference data is data that is required as input when an AI/ML model is making inferences.
  • Monitoring data is data that is required as input when managing an AI/ML model.
  • data collection may refer to the process of collecting data in a network node, management entity, or UE 100, for example, to train an AI/ML model, manage an AI/ML model, and perform inference on an AI/ML model.
  • the model training unit A2 performs AI/ML model training, AI/ML model validation, and AI/ML model testing.
  • AI/ML model training is the process of learning an AI/ML model from input/output relationships to obtain a trained AI/ML model that can be used for inference.
  • for example, in a linear model y = ax + b, the process of optimizing a (the slope) and b (the intercept) by providing inputs (x) and outputs (y) may be AI/ML model learning.
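The slope and intercept optimization described above can be sketched with an ordinary least-squares fit; the training pairs below are illustrative and were generated from y = 2x + 1.

```python
# Ordinary least-squares fit of y = a*x + b, illustrating the
# slope/intercept optimization described in the text.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form solution: a = cov(x, y) / var(x); b = mean_y - a * mean_x
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Illustrative training data generated from y = 2x + 1.
a, b = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

Given noise-free pairs from a line, the fit recovers the slope and intercept exactly.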
  • machine learning is divided into supervised learning, unsupervised learning, and reinforcement learning.
  • Supervised learning is a method that uses correct answer data for training data.
  • Unsupervised learning is a method that does not use correct answer data for training data. For example, unsupervised learning memorizes feature points from large amounts of training data and determines the correct answer (estimates the range).
  • Reinforcement learning is a method that assigns a score to the output result and learns how to maximize the score. Supervised learning is explained below, but either unsupervised learning or reinforcement learning can also be applied to machine learning.
  • the model training unit A2 outputs the trained AI/ML model (Trained Model) obtained by AI/ML model training to the model storage unit A6.
  • the model training unit A2 also outputs the updated AI/ML model (Updated Model) obtained by retraining the trained AI/ML model to the model storage unit A6.
  • the model inference unit A3 outputs inference output data to the management unit A5.
  • the model inference unit A3 also receives management instructions from the management unit A5.
  • management instructions include selecting an AI/ML model, activating (deactivating) an AI/ML model, switching between AI/ML models, and fallback (performing inference without using an AI/ML model).
  • the model inference unit A3 performs model inference in accordance with the management instructions.
  • AI/ML model inference is, for example, the process of obtaining a set of outputs from a set of inputs using a trained AI/ML model (or an updated AI/ML model).
  • model inference may be the process of obtaining inference output data from inference data using a trained AI/ML model (or an updated AI/ML model).
  • AI/ML model inference may be referred to as "model inference" or "inference."
  • an AI/ML model that is currently being trained may be referred to as a training AI/ML model (or an updating AI/ML model).
  • when there is no need to distinguish between a training (or updating) AI/ML model and a trained (or updated) AI/ML model, they may simply be referred to as an "AI/ML model."
  • Figure 7 shows an example of the operation of the AI/ML technology according to the first embodiment.
  • an entity may be, for example, a device.
  • the entity may also be a functional block included in the device.
  • the entity may also be a hardware block included in the device.
  • the control data may be a control message in a control layer (e.g., the AI/ML layer) specialized for artificial intelligence or machine learning.
  • the control data may be a NAS message in the NAS layer.
  • the control data may include a performance feedback request and/or a re-learning request sent from the management unit A5 to the model learning unit A2.
  • the control data may include a model transfer request and/or a model delivery request sent from the management unit A5 to the model recording unit A6.
  • the control data may include a management instruction sent from the management unit A5 to the model inference unit A3.
  • CSI feedback improvement represents a use case in which AI/ML technology is applied to CSI fed back from UE100 to gNB200, for example.
  • CSI is information about the channel state in the downlink between UE100 and gNB200.
  • the CSI includes at least one of a channel quality indicator (CQI), a precoding matrix indicator (PMI), and a rank indicator (RI).
  • the gNB200 performs, for example, downlink scheduling based on the CSI feedback from UE100.
  • Figure 8 is a diagram showing an example of the arrangement of each functional block in "CSI feedback improvement.”
  • the data collection unit A1, model learning unit A2, and model inference unit A3 are included in the control unit 130 of the UE 100.
  • the data processing unit A4 is included in the control unit 230 of the gNB 200.
  • model learning and model inference are performed in the UE 100.
  • Figure 8 shows an example in which the transmitting entity TE is the UE 100 and the receiving entity RE is the gNB 200.
  • the gNB 200 transmits a reference signal, such as a CSI reference signal (CSI-RS) or a demodulation reference signal (DMRS), for the UE 100 to estimate the downlink channel state.
  • the CSI generation unit 131 performs channel estimation using the received signal (CSI-RS) received by the receiving unit 110 to generate CSI.
  • the transmitting unit 120 transmits the generated CSI to the gNB 200.
  • the model learning unit A2 performs model learning using a set of the received signal (CSI-RS) and CSI as learning data, and derives a learned model for inferring CSI from the received signal (CSI-RS).
  • Figure 9 shows an example of operation for "CSI feedback improvement" according to the first embodiment.
  • step S13 gNB200 transmits full CSI-RS.
  • UE100's receiver 110 receives the full CSI-RS, and CSI generator 131 generates (or estimates) CSI based on the full CSI-RS.
  • data collector A1 collects full CSI-RS and CSI.
  • Model learning unit A2 uses the full CSI-RS and CSI as learning data to create a learned AI/ML model.
  • step S16 in response to receiving the completion notification, gNB200 transmits a switching notification to UE100 to switch from learning mode to inference mode.
  • step S17 in response to receiving the switching notification, UE100 switches from learning mode to inference mode.
  • step S18 gNB200 transmits partial CSI-RS.
  • Receiver 110 of UE100 receives the partial CSI-RS.
  • data collector A1 collects the partial CSI-RS.
  • Model inference unit A3 inputs the partial CSI-RS as inference data into the trained model, and obtains CSI as the inference result.
  • step S19 UE100 feeds back (or transmits) the CSI, which is the inference result, to gNB200 as inference result data.
  • UE100 can generate a trained model with a predetermined accuracy or higher by repeating model learning during learning mode. It is expected that the inference results using the trained model generated in this way will also have a predetermined accuracy or higher.
  • step S20 if UE100 determines by itself that model learning is necessary, it may transmit a notification indicating that model learning is necessary to gNB200 as control data.
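The learning-mode/inference-mode flow of steps S13 through S19 can be sketched as follows. The class, the lookup-based stand-in for the trained AI/ML model, and the data values are illustrative assumptions, not the patent's actual model or interfaces.

```python
# UE-side sketch of the CSI-feedback use case: collect (full CSI-RS,
# CSI) pairs in learning mode, then infer CSI from partial CSI-RS in
# inference mode. A prefix-keyed dict is a toy stand-in for the
# trained AI/ML model.
class CsiUe:
    def __init__(self):
        self.mode = "learning"
        self.training_data = []   # collected (full CSI-RS, CSI) pairs
        self.model = {}           # toy "trained model": partial CSI-RS -> CSI

    def receive_full_csi_rs(self, csi_rs, csi):
        # Learning mode (steps S13-S15): collect training data and "train".
        self.training_data.append((csi_rs, csi))
        self.model[csi_rs[:1]] = csi   # key on a prefix as the partial signal

    def switch_to_inference(self):
        # Step S17: switch from learning mode to inference mode.
        self.mode = "inference"

    def infer_csi(self, partial_csi_rs):
        # Steps S18-S19: obtain CSI from partial CSI-RS via the model.
        return self.model.get(partial_csi_rs)

ue = CsiUe()
ue.receive_full_csi_rs("ab", "csi-1")
ue.switch_to_inference()
result = ue.infer_csi("a")
```

The point of the sketch is the mode split: full CSI-RS is needed only while learning, after which the UE feeds back CSI inferred from a cheaper partial CSI-RS.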
  • the fine-tuning execution conditions may include the conditions for using the updated AI/ML model after fine-tuning has been performed.
  • the conditions for use may be expressed, for example, by time and/or location.
  • the location may be expressed, for example, by any of a cell ID, an RNA (RAN-based Notification Area), and a PLMN (Public Land Mobile Network).
  • the fine-tuning execution conditions may include information regarding whether or not to register the updated AI/ML model after fine-tuning has been performed. If the fine-tuning execution conditions include this information, UE100 will transmit information indicating whether or not to register the updated AI/ML model to gNB200.
  • step S69 the transmitter 210 of the gNB 200 transmits the fine-tuning execution conditions to the UE 100.
  • the receiver 220 of the UE 100 receives the fine-tuning execution conditions.
  • step S75 the control unit 130 of the UE 100 monitors the performance of the inference (step S66) for the trained AI/ML model.
  • step S77 the control unit 230 of the gNB 200 determines whether or not to perform fine-tuning in the UE 100 based on the fine-tuning execution time received in step S65.
  • the determination itself may be the same as step S71. That is, the control unit 230 may decide to perform fine-tuning if the fine-tuning execution time is equal to or less than the fine-tuning execution time threshold, and may decide not to perform fine-tuning if the fine-tuning execution time takes longer than the fine-tuning execution time threshold.
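The step S77 threshold comparison can be sketched as a simple predicate; the function name and the use of milliseconds are illustrative assumptions.

```python
# Step S77 decision sketch: the gNB decides to have the UE perform
# fine-tuning only when the reported fine-tuning execution time does
# not exceed the configured threshold (boundary case inclusive, per
# "equal to or less than" in the text).
def decide_fine_tuning(execution_time_ms: float, threshold_ms: float) -> bool:
    return execution_time_ms <= threshold_ms
```

The same predicate shape applies wherever the patent compares an execution time against a time threshold.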
  • the fine-tuning execution time threshold may be stored in advance in the memory of the gNB 200.
  • the control unit 230 decides to perform fine-tuning based on the fine-tuning execution time.
  • step S79 the control unit 130 of the UE 100 calculates the inference metric by performing inference on the trained AI/ML model (step S66).
  • the inference metric may be the same as that in step S46 of FIG. 12.
  • step S82 the control unit 230 of the gNB 200 determines whether or not to perform fine-tuning in the UE 100 based on the fine-tuning execution time received in step S65.
  • the determination itself may be the same as in step S77.
  • the control unit 230 decides to perform fine-tuning.
  • step S83 the transmitter 210 of the gNB 200 transmits a fine-tuning execution instruction to the UE 100, instructing the UE 100 to perform fine-tuning.
  • the transmission of the fine-tuning execution instruction itself may be the same as step S78.
  • the receiver 110 of the UE 100 receives the fine-tuning execution instruction.
  • the control unit 130 of the UE 100 may decide to perform fine-tuning in accordance with the fine-tuning execution instruction.
  • step S84 fine-tuning is then performed.
  • fine-tuning may be performed on the UE 100 side. That is, in step S85, the control unit 130 of the UE 100 generates learning data to be used for fine-tuning. In step S86, the control unit 130 uses the learning data to perform fine-tuning on the trained AI/ML model.
  • the fine-tuning may be performed on the OTT server 500 side.
  • the OTT server 500 may store the trained AI/ML model to be fine-tuned.
  • the control unit 130 of the UE 100 generates training data to be used for the fine-tuning.
  • the transmission unit 120 of the UE 100 transmits the training data to the OTT server 500.
  • the OTT server 500 receives the training data and uses the training data to perform fine-tuning on the trained AI/ML model.
  • the first embodiment can also be implemented in a two-sided model in which inference is performed on both the UE100 side and the gNB200 side.
  • implementation is possible by applying the operation examples shown in Figures 14 and 15 to the inference performed on the UE100 side.
  • the control unit 230 calculates and monitors the fine-tuning execution time for the trained AI/ML model on the gNB200 side, and when performance is below the performance threshold, determines whether the fine-tuning execution condition is met (or whether the fine-tuning execution time is below the fine-tuning execution time threshold).
  • the control unit 230 performs fine-tuning according to the determination result.
  • implementation is also possible for the inference performed on the gNB200 side.
  • the first embodiment can also be applied to a network-side model in which inference is performed on the network side (gNB200, core network, LMF (Location Management Function), or OTT server).
  • control unit 130 of UE100 may store the fine-tuning execution time in a memory or the like without transmitting it to the gNB200, and after monitoring (step S70), may determine whether to perform fine-tuning based on the fine-tuning execution conditions received from the gNB200 (step S69).
  • beam management has been described as an example of a use case, but the use case is not limited to beam management.
  • the first embodiment can also be applied to CSI feedback improvement (X1.1) or position accuracy improvement (X1.3).
  • in the first embodiment, we described performing fine-tuning while taking into account the time required to perform it.
  • in the second embodiment, we describe evaluating the performance of a trained AI/ML model and performing fine-tuning based on the evaluation results.
  • the time required for the performance evaluation is calculated, and a decision is made as to whether or not to perform the performance evaluation based on the time required for the performance evaluation.
  • the user device (e.g., UE100) performs a performance evaluation based on a performance evaluation execution time that indicates the time required to evaluate the performance of the trained AI/ML model.
  • the user device performs fine-tuning of the trained AI/ML model based on the results of the performance evaluation.
  • fine-tuning means training the trained AI/ML model using training data with a data volume equal to or less than a data volume threshold.
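The data-volume cap that distinguishes fine-tuning from full training can be sketched as follows; counting samples stands in for "data volume", and plain truncation stands in for whatever selection policy is actually used, both of which are assumptions.

```python
# Sketch of capping the fine-tuning dataset at the data-volume
# threshold: fine-tuning trains on a dataset no larger than the
# threshold, which is what keeps it cheaper than full training.
def select_fine_tuning_data(samples, data_volume_threshold):
    # Simple truncation as an illustrative selection policy.
    return samples[:data_volume_threshold]

subset = select_fine_tuning_data(list(range(10)), 4)
```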
  • when a trained AI/ML model is trained using training data obtained at a specific time or location, it is expected to obtain optimal inference result data at that specific time or location, but it does not necessarily obtain optimal inference result data at other times or locations (this is generally referred to as "overlearning" or overfitting).
  • performance evaluation can also capture such characteristics of a trained AI/ML model, for example by comparing the evaluation result against a threshold (e.g., a performance evaluation result threshold).
  • here, the evaluation index of the trained AI/ML model used in the second embodiment will be described.
  • evaluating a trained AI/ML model is referred to as "performance evaluation."
  • the evaluation method used in "performance evaluation" is described as using NMSE (normalized mean squared error), but evaluation methods other than NMSE may also be used.
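One common NMSE formulation can be sketched as follows. The patent does not fix a normalization convention, so this choice (squared-error energy normalized by the energy of the reference values) is an assumption.

```python
# Normalized mean squared error (NMSE) over paired predictions and
# reference values: 0.0 means an exact prediction, larger values mean
# worse agreement.
def nmse(predicted, reference):
    num = sum((p - r) ** 2 for p, r in zip(predicted, reference))
    den = sum(r ** 2 for r in reference)
    return num / den

perfect = nmse([1.0, 2.0], [1.0, 2.0])
```

NMSE is often further converted to decibels; the linear form above is kept for simplicity.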
  • step S90 the network device 400 transfers (or transmits) the trained AI/ML model to the UE 100.
  • the receiver 110 of the UE 100 receives the trained AI/ML model.
  • the performance evaluation calculation parameters may include, for example, the number of datasets (or amount of data) used in the performance evaluation.
  • the number of datasets used in the performance evaluation may be the same as or smaller than the number of training datasets used in the trained AI/ML model.
  • the performance evaluation index may be calculated using a mathematical method, and the calculation time may be taken as the performance evaluation execution time.
  • the data set may be transmitted from the OTT server 500 to the UE 100 (step S97).
  • the UE 100 may transmit the received data set to the gNB 200 (step S98), causing the gNB 200 to calculate the performance evaluation execution time.
  • step S104 the control unit 130 of the UE 100 calculates the execution time of the performance evaluation using the parameters for calculating the performance evaluation time.
  • the control unit 130 may calculate the performance evaluation index using a mathematical method (e.g., NMSE) and use the calculation time as the performance evaluation execution time.
  • the control unit 130 may also perform inference to calculate the performance evaluation execution time.
  • the control unit 130 may calculate (or measure) the time required to execute the performance evaluation and use the calculated time as the performance evaluation execution time.
  • step S106 the control unit 130 of the UE 100 performs inference on the trained AI/ML model received in step S90.
  • step S111 the control unit 230 of the gNB 200 determines performance evaluation execution conditions that indicate the conditions for performing performance evaluation in the UE 100.
  • the control unit 230 may determine the performance evaluation execution conditions based on the performance evaluation execution time received in step S105.
  • the performance evaluation execution condition may include a performance evaluation time threshold.
  • performance evaluation may be performed when the performance evaluation execution time is equal to or less than the performance evaluation time threshold, and performance evaluation may not be performed when the performance evaluation execution time is longer than the performance evaluation time threshold.
  • the performance evaluation execution condition may include information indicating that performance evaluation is to be performed when the performance evaluation time is equal to or less than the performance evaluation time threshold (or information indicating that performance evaluation is not to be performed when the performance evaluation time is longer than the performance evaluation time threshold).
  • the performance evaluation conditions may include information regarding the evaluation method (e.g., NMSE) used when performing the performance evaluation.
  • UE 100 performs the performance evaluation using this evaluation method.
  • the performance evaluation conditions may include identification information (such as a model ID or function name) of the trained AI/ML model for which performance evaluation is to be performed.
  • the fine-tuning execution conditions according to the second embodiment include a performance evaluation result threshold that indicates a threshold for the evaluation result of the performance evaluation.
  • the UE 100 uses the performance evaluation result threshold to determine whether or not to perform fine-tuning.
  • the fine-tuning execution conditions according to the second embodiment may include information indicating that fine-tuning is to be performed when the evaluation result of the performance evaluation is equal to or greater than the performance evaluation result threshold (or information indicating that fine-tuning is not to be performed when the evaluation result is less than the performance evaluation result threshold). Alternatively, they may include information instructing that relearning be performed if fine-tuning is not to be performed.
  • step S114 the transmitter 210 of the gNB 200 transmits the fine-tuning execution conditions to the UE 100.
  • the receiver 110 of the UE 100 receives the fine-tuning execution conditions.
  • step S118 the control unit 130 of the UE 100 determines whether to perform fine-tuning. Specifically, the control unit 130 determines whether to perform fine-tuning based on the evaluation result of the performance evaluation performed in step S117 and the fine-tuning execution conditions. For example, the control unit 130 performs fine-tuning when the evaluation result of the performance evaluation is equal to or greater than the performance evaluation result threshold (i.e., the evaluation result is good), and does not perform fine-tuning when the evaluation result is less than the performance evaluation result threshold (i.e., the evaluation result is poor).
  • when the evaluation result of the performance evaluation (step S115) is below the performance evaluation result threshold, the impact of fine-tuning on the trained AI/ML model will be greater than a certain level, and it is predicted that fine-tuning will not improve the trained AI/ML model, or that over-training will occur, resulting in a decrease in the overall performance of the AI/ML model. In such cases, re-training may be preferable to fine-tuning.
  • the evaluation result is equal to or greater than the performance evaluation result threshold, even if fine-tuning is performed on the trained AI/ML model, the impact can be kept below a certain level, and it is predicted that performance will be improved.
  • the control unit 130 decides to perform fine-tuning in step S118.
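The threshold comparison of steps S117 and S118 can be sketched as follows. This is an illustrative Python sketch, not part of the specification; the function name and the "fine-tune"/"re-train" labels are hypothetical.

```python
def decide_action(evaluation_result: float, result_threshold: float) -> str:
    """Sketch of the step S118 decision (names are illustrative)."""
    if evaluation_result >= result_threshold:
        # Good evaluation result: the impact of fine-tuning is expected
        # to stay below a certain level and performance should improve.
        return "fine-tune"
    # Poor evaluation result: fine-tuning is predicted not to help (or to
    # cause over-training), so re-training may be preferable.
    return "re-train"
```

A borderline result equal to the threshold counts as "good", matching the "equal to or greater than" wording above.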
  • step S120 the control unit 130 of the UE 100 monitors the inference of the trained AI/ML model.
  • step S121 the transmitter 120 of the UE 100 transmits the monitoring results to the gNB 200.
  • the receiver 220 of the gNB 200 receives the monitoring results.
  • step S122 the control unit 230 of the gNB 200 detects, based on the monitoring results, that the performance of the trained AI/ML model is below a performance threshold (i.e., performance is poor). The control unit 230 then decides whether to cause the UE 100 to perform performance evaluation. Specifically, it decides based on the performance evaluation execution time received in step S105 (Figure 16), in the same way as in step S116 (Figure 17): it may decide to perform performance evaluation when that time is equal to or less than the performance evaluation time threshold, and not to perform it when that time exceeds the threshold. The following description assumes that the control unit 230 decides to perform performance evaluation.
  • the transmitter 210 of the gNB 200 transmits a performance evaluation execution instruction to the UE 100, instructing the UE 100 to perform a performance evaluation.
  • the performance evaluation execution instruction may include identification information (e.g., a model ID or function name) of the trained AI/ML model for which performance evaluation is to be performed.
  • the performance evaluation execution instruction may also be instruction information instructing the UE 100 not to perform performance evaluation.
  • the receiver 110 of the UE 100 receives the performance evaluation execution instruction.
  • step S125 the transmitter 120 of the UE 100 transmits the performance evaluation results to the gNB 200.
  • the receiver 220 of the gNB 200 receives the evaluation results.
  • step S130 the control unit 130 of the UE 100 calculates a performance metric for the inference of the trained AI/ML model (step S106).
  • step S131 the transmitter 120 of the UE 100 transmits the performance metric to the gNB 200.
  • the receiver 220 of the gNB 200 receives the performance metric.
  • step S138 the control unit 130 of the UE 100 may decide to execute the fine-tuning.
  • step S140 of Fig. 20 fine-tuning is performed.
  • the fine-tuning itself may be performed on the UE 100 side (steps S141 and S142) as in the first embodiment, or may be performed on the OTT server 500 side (steps S143 to S145).
  • each operation (steps S141 to S145) is also the same as the corresponding operation (steps S85 to S89) in the first embodiment.
  • (Another Operation Example 1 According to the Second Embodiment)
  • the model transfer (step S90) and at least one of the transmission of the parameters for calculating the performance evaluation time (step S91), the transmission of the data set for calculating the performance evaluation time (step S94), and the transmission of the current index (step S99) may be performed using a single message.
  • the control unit 230 calculates the performance evaluation execution time for the trained AI/ML model on the gNB 200 side, performs monitoring, and when the monitoring result is equal to or less than the performance threshold, determines whether the performance evaluation execution condition is met (i.e., whether the performance evaluation execution time is equal to or less than the performance evaluation time threshold).
  • the control unit 230 performs performance evaluation according to the determination result, and, for example, as in step S126, compares the evaluation result of the performance evaluation with the performance evaluation result threshold and determines whether to perform fine-tuning.
  • the second embodiment can also be applied to a network-side model.
  • the network-side model can also be implemented by replacing "UE 100" with "network side" in the model transfer (step S90), the parameter transmission (step S91), the data set transmission (step S94), and the current indicator transmission (step S99).
  • the user device receives fine-tuning execution conditions from a network node (e.g., gNB200) that indicate the conditions for performing fine-tuning of the trained AI/ML model.
  • the user device performs inference using the trained AI/ML model.
  • the user device performs fine-tuning.
  • the fine-tuning execution conditions include a performance threshold and a predetermined number of times.
  • fine-tuning is performed in response to detecting a predetermined number of times that the inference result of the trained AI/ML model is below the performance threshold.
  • fine-tuning means training the trained AI/ML model using training data with a data volume below a data volume threshold.
  • FIG. 21 shows an example of operation according to the third embodiment.
  • step S151 the transmitter 210 of the gNB 200 transmits the fine-tuning execution conditions to the UE 100.
  • the inference result may include, for example, the CPU usage rate or memory usage rate when the inference was performed, the amount of inference data used when the inference was performed, or the amount of inference result data.
  • the performance threshold may be a threshold corresponding to each indicator of the inference result.
  • the fine-tuning execution conditions include a predetermined number of times representing a threshold for the number of times.
  • UE100 executes fine-tuning when it detects that the inference result of the trained AI/ML model is below the performance threshold a predetermined number of times.
  • the predetermined number of times may be a consecutive number of times or a cumulative number of times. In the case of a cumulative number of times, the fine-tuning execution conditions may also include a period of time.
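The "predetermined number of times" condition above, with its consecutive and cumulative variants, can be sketched as a small counter. This is a hypothetical helper; the class name, method name, and the use of event timestamps for the cumulative period are all illustrative assumptions.

```python
from collections import deque

class FineTuningTrigger:
    """Fires after the inference result is detected below the performance
    threshold a predetermined number of times, counted consecutively or
    cumulatively within an optional period (illustrative sketch)."""

    def __init__(self, predetermined_times, mode="consecutive", period=None):
        self.n = predetermined_times
        self.mode = mode
        self.period = period       # only used for cumulative counting
        self.consecutive = 0
        self.detections = deque()  # timestamps of below-threshold events

    def detect(self, below_threshold, now=0.0):
        if self.mode == "consecutive":
            # A good result resets the consecutive counter.
            self.consecutive = self.consecutive + 1 if below_threshold else 0
            return self.consecutive >= self.n
        if below_threshold:
            self.detections.append(now)
        if self.period is not None:
            # Drop detections that fell out of the counting period.
            while self.detections and now - self.detections[0] > self.period:
                self.detections.popleft()
        return len(self.detections) >= self.n
```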
  • the fine-tuning execution condition may include a report instruction (or report request) that instructs (or requests) a report on whether or not fine-tuning has been performed.
  • UE 100 transmits information indicating that fine-tuning has been performed to network device 400 in accordance with the report instruction.
  • the report instruction may include a destination indicating to which node the report should be sent.
  • the report instruction may include information instructing transmission of the inference data set used for inference, along with whether or not fine-tuning has been performed.
  • step S154-1 the control unit 130 of the UE 100 detects that the inference result is below the performance threshold.
  • step S156 the control unit 130 of UE 100 performs fine-tuning on the trained AI/ML model received in step S150 because the fine-tuning execution conditions are met (the inference result has been detected to be below the performance threshold a predetermined number of times).
  • step S157 the transmitter 120 of the UE 100 transmits information indicating that fine-tuning has been performed to the gNB 200.
  • the transmitter 120 may transmit the information in accordance with a reporting instruction included in the fine-tuning execution conditions.
  • step S158 the transmitter 120 of the UE 100 transmits to the network device 400 information indicating that fine-tuning has been performed and a data set of the inference data used in the inference.
  • the transmitter 120 may transmit this information in accordance with a reporting instruction included in the fine-tuning execution conditions.
  • the user equipment receives performance evaluation execution conditions from a network node (e.g., gNB200) that indicate the conditions for performing performance evaluation of the trained AI/ML model.
  • the user equipment performs inference using the trained AI/ML model.
  • the user equipment performs performance evaluation in response to detecting a predetermined number of times that the inference result of the trained AI/ML model is below the performance threshold.
  • the performance evaluation execution conditions include the performance threshold and the predetermined number of times.
  • UE100 performs performance evaluation when it detects a predetermined number of times that the inference result of the trained AI/ML model is below the performance threshold, i.e., that the performance of the trained AI/ML model is poor.
  • UE100 can determine whether or not to perform performance evaluation in accordance with instructions from gNB200, making it possible to appropriately perform performance evaluation.
  • FIG. 22 shows another example of operation according to the third embodiment.
  • the performance evaluation execution conditions include a performance threshold.
  • the performance threshold is, for example, a threshold for determining the inference performance of the trained AI/ML model.
  • the performance determination method may be the same as in the third embodiment.
  • the performance evaluation execution conditions include a predetermined number of times that represents a threshold for the number of times.
  • UE100 executes performance evaluation, for example, when it detects that the inference result of the trained AI/ML model is below the performance threshold a predetermined number of times.
  • the predetermined number of times may be a consecutive number of times or a cumulative number of times. In the case of a cumulative number of times, the performance evaluation conditions may also include a period of time.
  • the performance evaluation execution conditions may include the evaluation method (e.g., NMSE) to be used when performing the performance evaluation.
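NMSE is only named above as an example evaluation method. One common definition, which is an assumption since the specification does not fix the formula, normalizes the squared error by the reference signal's energy:

```python
def nmse(reference, estimate):
    """Normalized mean squared error: ||x - x_hat||^2 / ||x||^2.
    This particular normalization is an assumed, common convention."""
    error_energy = sum((x - y) ** 2 for x, y in zip(reference, estimate))
    signal_energy = sum(x ** 2 for x in reference)
    return error_energy / signal_energy
```

A perfect estimate yields 0, and an all-zero estimate yields 1 under this convention.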
  • step S164 the control unit 130 of the UE 100 detects that the inference result is below the performance threshold a predetermined number of times (e.g., n times).
  • step S165 the control unit 130 of the UE 100 performs performance evaluation to satisfy the performance evaluation execution conditions.
  • step S166 the transmitter 120 of the UE 100 transmits to the gNB 200 information indicating that the inference result is below the performance threshold (i.e., the accuracy of the inference is poor) and the evaluation result.
  • the transmitter 120 may transmit this information in accordance with a reporting instruction included in the performance evaluation execution conditions.
  • the receiver 220 of the gNB 200 receives the information indicating that the inference result is below the performance threshold and the evaluation result.
  • step S170 the network device 400 transfers (or transmits) the trained AI/ML model to the UE 100.
  • the receiver 110 of the UE 100 receives the trained AI/ML model.
  • Identification information of the trained AI/ML model used for performance evaluation and/or fine-tuning e.g., model name, model ID, or function name
  • the reporting conditions may include information indicating that a log is to be transmitted in response to a log transmission request from gNB200.
  • the reporting conditions may include log transmission conditions that indicate the conditions for sending a log.
  • the log transmission condition may be that at least one of the pieces of information to be reported has been acquired.
  • the log transmission condition may also be that the data volume of at least one of the pieces of information to be reported has reached or exceeded a data volume threshold. In the latter case, the data volume threshold may be included in the log transmission condition.
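The two alternative log transmission conditions above can be sketched as follows; the function and parameter names are illustrative, not from the specification.

```python
def log_transmission_due(acquired_items, data_volume, volume_threshold=None):
    """With no volume threshold configured, a log is due once at least one
    report-target item has been acquired; with a threshold, it is due once
    the accumulated log data volume reaches that threshold."""
    if volume_threshold is None:
        return len(acquired_items) >= 1
    return data_volume >= volume_threshold
```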
  • step S172 the control unit 130 of the UE 100 performs inference on the trained AI/ML model received in step S170.
  • step S173 the control unit 130 of the UE 100 performs performance evaluation based on the inference results.
  • step S174 the control unit 130 of the UE 100 performs fine-tuning on the trained AI/ML model.
  • step S175 the control unit 130 of the UE 100 records in memory the log used during performance evaluation and fine-tuning.
  • the control unit 130 may record the log in memory in accordance with the report target information included in the reporting conditions.
  • step S177 the transmitter 210 of the gNB 200 transmits a log transmission request to the UE 100.
  • the receiver 110 of the UE 100 receives the log transmission request.
  • step S178 in response to receiving the log transmission request, the transmitter 120 of the UE 100 transmits the log recorded in step S175 to the gNB 200.
  • the transmitter 120 may also transmit the log together with a measurement report (MeasurementReport).
  • the fifth embodiment is an embodiment that combines the first embodiment (the fine-tuning embodiment) and the second embodiment (the performance evaluation embodiment). That is, the fifth embodiment is an example in which the UE 100 determines whether to perform fine-tuning based on the fine-tuning execution time, the performance evaluation execution time, and the evaluation results of the performance evaluation.
  • a user device e.g., UE100 performs fine-tuning of a first trained AI/ML model (e.g., a trained AI/ML model on the UE100 side) based on a first time required to perform fine-tuning of the first trained AI/ML model, a second time required to perform a performance evaluation of the first trained AI/ML model, and the evaluation results of the performance evaluation.
  • fine-tuning means training the first trained AI/ML model using training data with a data volume equal to or less than a data volume threshold.
  • fine-tuning may be performed when the first time is equal to or less than the fine-tuning execution time threshold, the second time is also equal to or less than the performance evaluation time threshold, and the evaluation result is equal to or greater than the performance threshold, and fine-tuning may not be performed under other conditions. Therefore, in the mobile communication system 1, fine-tuning is performed under certain judgments, making it possible to appropriately perform fine-tuning on the trained AI/ML model.
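The combined fifth-embodiment condition, under which all three checks must hold at once, can be sketched as follows (names are illustrative):

```python
def decide_fine_tuning(first_time, fine_tuning_time_threshold,
                       second_time, evaluation_time_threshold,
                       evaluation_result, result_threshold):
    """Fine-tune only when the fine-tuning time, the performance
    evaluation time, and the evaluation result all satisfy their
    respective thresholds; otherwise do not fine-tune."""
    return (first_time <= fine_tuning_time_threshold
            and second_time <= evaluation_time_threshold
            and evaluation_result >= result_threshold)
```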
  • FIGS. 24 to 26 are diagrams showing an example of operation according to the fifth embodiment. Note that the example of operation shown in FIGS. 24 to 26 is an example of CSI compression, which is a sub-use case of CSI feedback, and represents an example of a two-sided model.
  • CSI compression is performed by UE100 transmitting compressed CSI to gNB200, which then reconstructs the pre-compressed CSI.
  • the AI/ML method is used for CSI compression and CSI reconstruction, and a two-sided model is constructed by an inference unit on the UE100 side that performs CSI compression and an inference unit on the gNB200 side that performs CSI reconstruction.
  • the CSI to be compressed is, for example, CSI feedback information generated by legacy processing by CSI generation unit 131.
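As a rough illustration of the two-sided data flow only (not of the trained AI/ML models themselves), the UE-side unit can be pictured as compressing a CSI vector and the gNB-side unit as reconstructing it. The block-averaging scheme below is purely a stand-in for the trained models.

```python
def ue_compress(csi, factor=4):
    # UE side: reduce the CSI vector by keeping one average per block
    # (toy stand-in for the UE-side trained AI/ML model).
    return [sum(csi[i:i + factor]) / factor
            for i in range(0, len(csi), factor)]

def gnb_reconstruct(report, factor=4):
    # gNB side: expand the compressed report back to the original length
    # (toy stand-in for the gNB-side trained AI/ML model).
    return [value for value in report for _ in range(factor)]
```

In this toy case the compressed report is a quarter of the original size, and reconstruction is exact only for block-constant CSI.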
  • the network device 400 transmits a trained AI/ML model for the UE 100 (e.g., a first trained AI/ML model) to the UE 100 (step S180), and transmits a trained AI/ML model for the gNB 200 (e.g., a second trained AI/ML model) to the gNB 200 (step S181).
  • the network device 400 transmits a parameter for calculating the fine-tuning execution time (e.g., the first parameter) to the UE 100 (step S182).
  • the network device 400 also transmits a parameter for calculating the fine-tuning execution time (e.g., the third parameter) to the gNB 200 in order to calculate the fine-tuning execution time at the gNB 200 (step S185).
  • the control unit 130 of the UE 100 calculates the fine-tuning execution time (e.g., the first time) using the fine-tuning execution time calculation parameters (step S183), and the transmission unit 120 of the UE 100 transmits the fine-tuning execution time on the UE 100 side to the gNB 200 (step S184). Meanwhile, the control unit 230 of the gNB 200 also calculates the fine-tuning execution time (e.g., the third time) using the fine-tuning execution time calculation parameters (step S186).
  • the network device 400 transmits parameters for calculating the performance evaluation time (e.g., second parameters), a dataset for calculating the performance evaluation time, and the current indicators to the UE 100 (steps S187 to S189).
  • the control unit 130 of the UE 100 uses these received parameters to calculate the performance evaluation execution time (e.g., second time) for the trained AI/ML model on the UE 100 side (step S190).
  • the transmission unit 120 of the UE 100 transmits the performance evaluation execution time on the UE 100 side to the gNB 200 (step S191).
  • network device 400 transmits parameters for calculating the performance evaluation time (e.g., the fourth parameter), a dataset for calculating the performance evaluation time, and the current indicators to gNB200 (steps S192 to S194).
  • the control unit 230 of gNB200 uses these received parameters to calculate the performance evaluation execution time (e.g., the fourth time) for the trained AI/ML model on the gNB200 side (step S195).
  • step S200 inference is performed in the mobile communication system 1. Because it is a two-sided model, inference is performed in the UE 100 and the gNB 200 (steps S202 and S204).
  • the control unit 130 of the UE 100 generates inference data to be used in the trained AI/ML model on the UE side (step S201) and performs inference using the inference data (step S202).
  • the transmission unit 120 of the UE 100 transmits the inference result data to the gNB 200 as inference data on the gNB 200 side (step S203).
  • the control unit 230 of the gNB 200 inputs the inference result data as inference data for the trained AI/ML model on the gNB 200 side and performs inference (step S204).
  • step S205 monitoring is performed in the mobile communication system 1. Because it is a two-sided model, the control unit 130 of the UE 100 monitors the inference on the UE 100 side (step S202) (step S206), and the control unit 230 of the gNB 200 monitors the inference on the gNB 200 side (step S204) (step S208). The transmission unit 120 of the UE 100 transmits its own monitoring results to the gNB 200.
  • step S210 (Figure 26), management is performed in the mobile communication system 1.
  • the fifth embodiment there is fine-tuning of the trained AI/ML model on the UE100 side, and fine-tuning of the trained AI/ML model on the gNB200 side.
  • the fine-tuning performed on the UE100 side there are cases where the gNB200 makes the fine-tuning decision, and cases where the UE100 itself makes the fine-tuning decision.
  • step S212 the control unit 230 of the gNB 200 determines whether or not to perform fine-tuning.
  • the control unit 230 performs fine-tuning when the performance evaluation execution time is equal to or less than the performance evaluation time threshold (i.e., to perform performance evaluation), the fine-tuning execution time is equal to or less than the fine-tuning execution time threshold, and the evaluation result from the performance evaluation is equal to or greater than the performance evaluation result threshold (i.e., the evaluation result is good).
  • the control unit 230 combines the performance evaluation execution time, the fine-tuning execution time, and the evaluation result from the performance evaluation to determine whether or not to perform fine-tuning.
  • the evaluation results from the performance evaluation are evaluation results for the trained AI/ML model on the UE100 side. Therefore, the control unit 130 of the UE100 evaluates the trained AI/ML model, and the transmission unit 120 of the UE100 transmits the evaluation results to the gNB200. This allows the gNB200 to obtain the evaluation results of the trained AI/ML model on the UE100 side. The control unit 230 of the gNB200 can then use the evaluation results to determine whether or not to perform fine-tuning.
  • the following description assumes that the control unit 230 has determined to perform fine-tuning.
  • step S213 the transmitter 210 of the gNB 200 transmits a fine-tuning execution instruction to the UE 100 to instruct the UE 100 to perform fine-tuning.
  • the receiver 110 of the UE 100 receives the fine-tuning execution instruction, and the controller 130 of the UE 100 decides to perform fine-tuning.
  • when the UE 100 itself determines whether to perform fine-tuning on the UE 100 side, the UE 100 performs steps S214 and S215.
  • step S214 the control unit 130 of the UE 100 determines whether or not to perform performance evaluation.
  • the determination of whether or not to perform performance evaluation may be the same as step S211. That is, as a result of monitoring the trained AI/ML model on the UE side (step S206), the control unit 130 detects that the performance of the model is below the performance threshold, and compares the performance evaluation execution time calculated in step S190 with the performance evaluation time threshold to determine whether or not to perform performance evaluation.
  • the following description assumes that the control unit 130 has determined to perform performance evaluation.
  • step S215 the control unit 130 of the UE 100 determines whether or not to perform fine-tuning.
  • the determination of whether or not to perform fine-tuning may itself be the same as in step S212. That is, the control unit 130 may perform fine-tuning when the performance evaluation execution time is equal to or less than the performance evaluation time threshold (i.e., performance evaluation is to be performed), the fine-tuning execution time is equal to or less than the fine-tuning execution time threshold, and the evaluation result from the performance evaluation is equal to or greater than the performance evaluation result threshold (i.e., the evaluation result is good).
  • the following description assumes that the control unit 130 determines that the conditions are met and decides to perform fine-tuning.
  • control unit 230 of gNB200 may also determine whether or not to perform fine-tuning on its own trained AI/ML model. In this case, when the control unit 230 monitors the trained AI/ML model on the gNB200 side (step S208) and detects that the performance of the trained AI/ML model is below the performance threshold (i.e., poor performance), it determines whether or not to perform fine-tuning.
  • step S216 fine-tuning is performed in the mobile communication system 1. That is, on the UE100 side, the control unit 130 generates learning data (step S217) and uses the generated learning data to perform fine-tuning on the trained AI/ML model on the UE100 side (step S218). Meanwhile, on the gNB200 side, the control unit 230 generates learning data (step S219) and uses the generated learning data to perform fine-tuning on the trained AI/ML model on the gNB200 side (step S220).
  • although the fifth embodiment has been described in the case of a two-sided model, this is not limiting.
  • the fifth embodiment can also be applied to a UE-side model.
  • some of the processing performed on the gNB 200 side (such as steps S186, S195, S219, and S220) may be omitted.
  • the fifth embodiment can also be applied to a network-side model.
  • the network-side model can also be implemented by replacing "UE 100" with "network side" in, for example, the model transfer (step S180), the parameter transmissions (steps S182 and S187), the data set transmission (step S188), and the current indicator transmission (step S189).
  • the fifth embodiment can be applied to the third embodiment. That is, in the mobile communication system 1, it may be determined whether to perform fine-tuning based on the number of times that the inference result (or monitoring result) of the trained AI/ML model is detected to be equal to or lower than the performance threshold (i.e., the inference accuracy is poor). Specifically, steps S151 to S158 shown in FIG. 21 may be executed instead of steps S211 to S215 in FIG. 26.
  • gNB200 may transmit reporting conditions to UE100, and UE100 may transmit the recorded log to gNB200 in accordance with the reporting conditions.
  • step S171 (transmission of reporting conditions) in FIG. 23 may be executed between step S180 (model transfer) in FIG. 24 and step S200 (inference) in FIG. 25, and step S175 (log recording) and subsequent steps in FIG. 23 may be executed after step S216 (fine-tuning) in FIG. 26.
  • the base station is an NR base station (gNB), but the base station may also be an LTE base station (eNB) or a 6G base station. Furthermore, the base station may also be a relay node such as an IAB (Integrated Access and Backhaul) node. The base station may also be a DU of an IAB node. Furthermore, UE100 may also be an MT (Mobile Termination) of an IAB node.
  • UE100 may be a terminal function unit (a type of communication module) that allows a base station to control a repeater that relays signals.
  • a terminal function unit is referred to as an MT.
  • examples of MT include NCR (Network Controlled Repeater)-MT and RIS (Reconfigurable Intelligent Surface)-MT.
  • network node primarily refers to a base station, but may also refer to a core network device or part of a base station (CU, DU, or RU).
  • a network node may also be composed of a combination of at least part of a core network device and at least part of a base station.
  • a program may be provided that causes a computer to execute each process performed by UE100, gNB200, or network device 400.
  • the program may be recorded on a computer-readable medium.
  • the program can be installed on a computer.
  • the computer-readable medium on which the program is recorded may be a non-transitory recording medium.
  • the non-transitory recording medium is not particularly limited, but may be, for example, a CD-ROM and/or DVD-ROM.
  • circuits that execute each process performed by UE100, gNB200, or network device 400 may be integrated, and at least a portion of UE100, gNB200, or network device 400 may be configured as a semiconductor integrated circuit (chipset, SoC: System on a chip).
  • a circuit, unit, or means refers to hardware that is programmed to realize or executes the described functions.
  • the hardware may be any hardware disclosed in this specification or any hardware known to be programmed to realize or execute the described functions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

One aspect of the present invention relates to a communication control method for user equipment in a mobile communication system. The communication control method includes a step of performing fine-tuning on a trained artificial intelligence (AI)/machine learning (ML) model based on a fine-tuning execution time indicating the time required for the user equipment to perform fine-tuning of the trained AI/ML model. Fine-tuning refers to training the trained AI/ML model using training data having a data volume equal to or less than a data volume threshold.
PCT/JP2025/004061 2024-02-09 2025-02-07 Procédé de commande de communication Pending WO2025170021A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2024-018853 2024-02-09
JP2024018853 2024-02-09

Publications (1)

Publication Number Publication Date
WO2025170021A1 true WO2025170021A1 (fr) 2025-08-14

Family

ID=96700101

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2025/004061 Pending WO2025170021A1 (fr) 2024-02-09 2025-02-07 Procédé de commande de communication

Country Status (1)

Country Link
WO (1) WO2025170021A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020062914A (ja) * 2018-10-15 2020-04-23 トヨタ自動車株式会社 情報提供装置
WO2021084623A1 (fr) * 2019-10-29 2021-05-06 富士通株式会社 Programme de suppression de dégradation, procédé de suppression de dégradation et dispositif de suppression de dégradation
WO2022237822A1 (fr) * 2021-05-11 2022-11-17 维沃移动通信有限公司 Procédé d'acquisition d'ensemble de données de formation, procédé de transmission sans fil, appareil et dispositif de communication
JP2023503111A (ja) * 2019-11-22 2023-01-26 ホアウェイ・テクノロジーズ・カンパニー・リミテッド 個人向けに調整されたエアインターフェース


Similar Documents

Publication Publication Date Title
CN112512059B (zh) Network optimization method, server, network-side device, system, and storage medium
US12133156B2 Method and base station for determining transmission path in wireless communication system
WO2020233405A1 (fr) Method and device in a node used for wireless communication
US20250168706A1 Communication method
US20250168663A1 Communication method and communication apparatus
WO2025170021A1 (fr) Communication control method
WO2025170023A1 (fr) Communication control method
WO2025170024A1 (fr) Communication control method
US20250184822A1 Ran node and method
WO2025135132A1 (fr) Communication control method and user equipment
US20250365634A1 Communication control method, network node and user equipment
WO2025211434A1 (fr) Communication control method
WO2025211279A1 (fr) Communication method, user equipment, and network node
WO2025211435A1 (fr) Communication control method and network device
WO2025234453A1 (fr) Communication control method, network device, and user equipment
WO2025211436A1 (fr) Communication control method and network device
WO2025234455A1 (fr) Communication control method, network device, and user device
WO2025234454A1 (fr) Communication control method, network device, and user device
WO2025211268A1 (fr) Communication method, user equipment, and network node
WO2024210193A1 (fr) Communication control method
WO2025211269A1 (fr) Communication method, user device, and network node
WO2025211432A1 (fr) Communication method, user equipment, and network node
US20250374087A1 Communication control method, network node and user equipment
WO2024166864A1 (fr) Communication control method
WO2024232433A1 (fr) Communication control method and user device

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 25752244

Country of ref document: EP

Kind code of ref document: A1