WO2025026851A1 - Method for distributed monitoring of a partitioned model - Google Patents
Method for distributed monitoring of a partitioned model
- Publication number
- WO2025026851A1 (PCT/EP2024/071026)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- partitioned
- models
- monitoring
- wireless network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/16—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0813—Configuration setting characterised by the conditions triggering a change of settings
- H04L41/082—Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/145—Network analysis or design involving simulating, designing, planning or modelling of a network
Definitions
- the present disclosure relates to AI/ML-based applicable model update report signaling, presenting techniques for pre-configuring and signaling model properties and partitioning for the efficient monitoring of machine learning model operation.
- AI/ML artificial intelligence/machine learning
- the AI/ML study item was approved in RP-213599 at the 3GPP TSG RAN (Technical Specification Group Radio Access Network) meeting #94e.
- the official title of the AI/ML study item is “Study on AI/ML for NR Air Interface”, and RAN WG1 and WG2 are currently actively working on the specification.
- the goal of this study item is to identify a common AI/ML framework and areas where gains can be obtained using AI/ML-based techniques for selected use cases.
- the main objective of this study item is to study AI/ML frameworks for air-interfaces with target use cases by considering performance, complexity, and potential specification impacts.
- defining AI/ML models, terminology, and descriptions to identify common and specific characteristics for a framework will be one key part of the work scope.
- various aspects are under consideration for investigation, and one key item is the lifecycle management of AI/ML models, where multiple stages such as model training, model deployment, model inference, model monitoring, model updating, etc. are included as mandatory.
- UE mobility was also considered as one of the AI/ML use cases and one of the scenarios for model training/inference is that both functions are located within a RAN node.
- the new work item of “Artificial Intelligence (AI)/Machine Learning (ML) for NG-RAN” was initiated to specify data collection enhancements and signaling support within existing NG-RAN interfaces and architectures.
- model monitoring is one of the key phases in LCM operations and the follow-up countermeasures such as model switching, model re-training or model de-activation are commonly used.
- model transfer is needed from one node to another node when the existing model is replaced with an alternative one. Additional signaling overhead can be significant to execute such a model transfer.
- US 2020/0201727 A1 teaches technologies for monitoring the performance of a machine learning model, which include: receiving, by an unsupervised anomaly detection function, digital time series data for a feature metric, where the feature metric is computed for a feature that is extracted from an online system over a time interval and where the machine learning model is to produce model output that relates to one or more users' use of the online system; detecting, using the unsupervised anomaly detection function, anomalies in the digital time series data; labeling a subset of the detected anomalies in response to a deviation of a time-series prediction model from a predicted baseline model exceeding a predicted deviation criterion; creating digital output that identifies the feature as associated with the labeled subset of the detected anomalies; and causing, in response to the digital output, a modification of the machine learning model.
- US 2021/0133632 A1 discloses systems and methods to provide an open, unified platform to build, validate, deliver, and monitor models for data science at scale. These systems and methods may accelerate research, spark collaboration, increase iteration speed, and remove deployment friction to deliver impactful models. In particular, users may be allowed to visualize statistics about models and monitor models in real-time via a graphical user interface provided by the systems.
- US 2022/0027749 A1 discloses a dataset that is received for processing by a machine learning model.
- a scoring payload for the dataset and that regards the machine learning model is also received.
- a set of features of the machine learning model is determined by analyzing the scoring payload.
- the scoring payload is structured in accordance with the set of features such that the structured scoring payload is ready for analysis for a monitor of the machine learning model.
- US 2022/0321647 A1 discloses methods for managed machine learning (ML) in a communication network, such as by one or more first network functions (NFs) of the communication network. Such methods include determining whether processing of an ML model in the communication network should be distributed to one or more user equipment (UEs) operating in the communication network, based on characteristics of the respective UEs. Such methods also include, based on determining that the processing of the ML model should be distributed to the one or more UEs, establishing trusted execution environments (TEEs) in the respective UEs and distributing the ML model for processing in the respective TEEs.
- TEEs trusted execution environments
- Other embodiments include complementary methods for UEs, as well as UEs and NFs (or communication networks) configured to perform such methods.
- WO 2022/008037 A1 discloses a method comprising: checking whether a terminal indicates to a network its capability to execute and/or to train a machine learning model; monitoring whether the terminal is in an inability state; informing the network that the terminal is in the inability state if the terminal indicated the capability and the terminal is in the inability state, wherein, in the inability state, the terminal is not able to execute and/or train the machine learning model, or the terminal is not able to execute and/or train the machine learning model at least with a predefined performance.
- Figure 1 is an exemplary table of a mapping relationship for partitioned-models
- Figure 2 is an exemplary block diagram of a model structure type A for partitioned-models
- Figure 3 is an exemplary block diagram of model structure type B for partitioned-models
- Figure 4 is an exemplary block diagram of a partitioned-model distribution procedure for network and UE sides
- Figure 5 is a signaling flow of monitoring partitioned-models
- Figure 6 is a flowchart of a procedure of monitoring partitioned-models for the network side.
- Figure 7 is a flowchart of a procedure of monitoring partitioned-models for the UE side.
- the non-limiting term network node may be used and may correspond to any type of radio network node or any network node, which communicates with a UE (directly or via another node) and/or with another network node.
- network nodes are NodeB, MeNB, ENB, a network node belonging to MCG or SCG, base station (BS), multi-standard radio (MSR) radio node such as MSR BS, eNodeB, gNodeB, network controller, radio network controller (RNC), base station controller (BSC), relay, donor node controlling relay, base transceiver station (BTS), access point (AP), transmission points, transmission nodes, RRU, RRH, nodes in distributed antenna system (DAS), core network node (e.g. Mobile Switching Center (MSC), Mobility Management Entity (MME), etc), Operations & Maintenance (O&M), Operations Support System (OSS), Self Optimized Network (SON), positioning node (e.g. Evolved- Serving Mobile Location Centre (E-SMLC)), Minimization of Drive Tests (MDT), test equipment (physical node or software), another UE, etc.
- MSR multi-standard radio
- RNC radio network controller
- BSC base station controller
- the non-limiting term user equipment (UE) or wireless device may be used and may refer to any type of wireless device communicating with a network node and/or with another UE in a cellular or mobile communication system.
- examples of UE are target device, device to device (D2D) UE, machine type UE or UE capable of machine to machine (M2M) communication, PDA, PAD, Tablet, mobile terminals, smart phone, laptop embedded equipment (LEE), laptop mounted equipment (LME), USB dongles, UE category M1, UE category M2, ProSe UE, V2V UE, V2X UE, etc.
- terminologies such as base station/gNodeB and UE should be considered non-limiting and in particular do not imply a certain hierarchical relation between the two; in general, “gNodeB” could be considered as device 1 and “UE” could be considered as device 2 and these two devices communicate with each other over some radio channel. And in the following the transmitter or receiver could be either gNodeB (gNB), or UE.
- gNB gNodeB
- aspects of the embodiments may be embodied as a system, apparatus, method, or computer program product.
- embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects.
- the disclosed embodiments may be implemented as a hardware circuit comprising custom very-large-scale integration (“VLSI”) circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components.
- VLSI very-large-scale integration
- the disclosed embodiments may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like.
- the disclosed embodiments may include one or more physical or logical blocks of executable code which may, for instance, be organized as an object, procedure, or function.
- embodiments may take the form of a computer program product embodied in one or more computer readable storage devices storing machine readable code, computer readable code, and/or program code, referred hereafter as code.
- the storage devices may be tangible, non- transitory, and/or non-transmission.
- the storage devices may not embody signals. In a certain embodiment, the storage devices only employ signals for accessing code.
- the computer readable medium may be a computer readable storage medium.
- the computer readable storage medium may be a storage device storing the code.
- the storage device may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, holographic, micromechanical, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- more specific examples (a non-exhaustive list) of the storage device would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random-access memory (“RAM”), a read-only memory (“ROM”), an erasable programmable read-only memory (“EPROM” or Flash memory), a portable compact disc read-only memory (“CD-ROM”), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- Code for carrying out operations for embodiments may be any number of lines and may be written in any combination of one or more programming languages including an object-oriented programming language such as Python, Ruby, Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language, or the like, and/or machine languages such as assembly languages.
- the code may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user’s computer through any type of network, including a local area network (“LAN”), wireless LAN (“WLAN”), or a wide area network (“WAN”), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider (“ISP”)).
- LAN local area network
- WLAN wireless LAN
- WAN wide area network
- ISP Internet Service Provider
- the code may also be stored in a storage device that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the storage device produce an article of manufacture including instructions which implement the function/act specified in the flowchart diagrams and/or block diagrams.
- the code may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer implemented process such that the code which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart diagrams and/or block diagrams.
- each block in the flowchart diagrams and/or block diagrams may represent a module, segment, or portion of code, which includes one or more executable instructions of the code for implementing the specified logical function(s).
- base station e.g., gNB
- mobile station e.g., UE
- AI/ML lifecycle can be split into several stages such as data collection/pre-processing, model training, model testing/validation, model deployment/update, model monitoring, model switching/selection etc., where each stage is equally important to achieve target performance with any specific model(s).
- when applying an AI/ML model to any use case or application, one of the challenging issues is managing the lifecycle of the AI/ML model. This is mainly because data/model drift occurs during model deployment/inference, which results in performance degradation of the AI/ML model. Model drift occurs when the dataset statistically changes after the model is deployed and when the model inference capability is impacted by unseen data as input. In a similar aspect, the statistical properties of a dataset and the relationship between input and output for the trained model can change when drift occurs. Model adaptation is then required to support operations such as model switching, re-training, fallback, etc.
- when an AI/ML model enabled wireless communication network is deployed, it is important to consider how to handle an adaptation of the AI/ML model under operations such as model training, inference, monitoring, updating, etc.
- ML applicable conditions for LCM operations can change significantly over time with different mobility ranges, degrading any activated LCM operations.
- when model drift occurs on either the UE or the network side, the complete model needs to be replaced through model transfer or re-training/updating, which increases signaling overhead, particularly when the model size is large.
- a method of distributed monitoring in a wireless network comprises a step of segmenting a target AI/ML model into multiple partitioned-models.
- the method further comprises a step of configuring a mapping relationship between the partitioned-models, the mapping relationship containing property information and index information for each of the partitioned-models.
- the method further comprises a step of receiving one of the partitioned-models and its associated property information and index information.
- the method further comprises a step of monitoring the received partitioned-model using the received property information.
- the method further comprises a step of reporting a model monitoring update if the monitoring indicates a model drift for the received partitioned-model (a minimal code sketch of these method steps is given after the list of aspects below).
- the property information contains model description information and/or attribute data and/or model monitoring information and/or reference data.
- the mapping relationship is signaled from one node of the wireless network to another node of the wireless network.
- the type of communication between the nodes of the wireless network is MEC-to-UE or UE-to-UE.
- the partitioned-models are sent to multiple UEs and each of the UEs executes the one or more received partitioned-models with the model monitoring information.
- the target model is a paired model between multiple AI/ML models which are located in different nodes of the wireless network.
- a fixed maximum number of partitioned-models is determined by one of the methods described further below (implementation-specific configuration, pre-definition by specifications, or pre-configuration by target model type/format/functionality and/or use case).
- the reporting of the model monitoring update or the signaling of the mapping relationship between nodes of the wireless network is performed via L1, L2, or L3 signaling, or an RRC re-configuration message.
- preset model ID information is shared without a transfer of the partitioned-models if a preset model ID is available for the partitioned-models.
- an apparatus for distributed monitoring in a wireless network comprises a wireless transceiver, a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement steps of an above-described method.
- a user equipment for distributed monitoring in a wireless network comprises the above-described apparatus, wherein the steps of receiving one of the partitioned-models and its associated property information and index information, monitoring the received partitioned-model, and reporting the model monitoring update are performed.
- a base station for distributed monitoring in a wireless network comprises the above-described apparatus, wherein the steps of segmenting the target AI/ML model, configuring the mapping relationship, and transmitting the partitioned-models and the mapping relationship to other nodes in the wireless network are performed.
- a wireless communication system for distributed monitoring in a wireless network comprises the above-described base station and the above-described user equipment, the base station comprising a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement steps of an above-described method, and the user equipment comprising a processor coupled with a memory in which computer program instructions are stored, said instructions being configured to implement steps of an above-described method.
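- as a non-normative illustration only, the following Python sketch walks through the claimed steps (segmenting a target model, configuring the mapping relationship, and monitoring/reporting at a receiving node); all function names, field names, and values are hypothetical assumptions and are not signaling, formats, or APIs defined by this disclosure.

```python
# Hypothetical end-to-end sketch of the claimed method steps.
# Names and values are illustrative assumptions, not defined signaling.

from typing import Any, Dict, List, Optional


def segment_model(target_model: List[Any], max_partitions: int) -> List[List[Any]]:
    """Step 1: segment the target AI/ML model into partitioned-models."""
    chunk = max(1, -(-len(target_model) // max_partitions))  # ceiling division
    return [target_model[i:i + chunk] for i in range(0, len(target_model), chunk)]


def configure_mapping(partitions: List[List[Any]]) -> Dict[int, Dict[str, Any]]:
    """Step 2: map each partitioned-model index to its property information."""
    return {
        idx: {
            "description": f"partitioned-model #{idx}",   # model description info
            "monitoring_period_ms": 1000,                  # monitoring cycle/period
            "reference_data": None,                        # reference (distribution) data
        }
        for idx in range(len(partitions))
    }


def monitor_and_report(index: int, drift_detected: bool) -> Optional[Dict[str, Any]]:
    """Steps 3-5: a receiving node monitors its partitioned-model and reports drift."""
    if drift_detected:
        return {"report": "model_monitoring_update", "index": index}
    return None


if __name__ == "__main__":
    model = list(range(12))                    # stand-in for model parameters/layers
    parts = segment_model(model, max_partitions=4)
    mapping = configure_mapping(parts)
    print(mapping[0])
    print(monitor_and_report(0, drift_detected=True))
```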
- the whole target model is segmented into multiple partitioned-models, wherein there are multiple methods of setting the maximum number of partitioned-models.
- firstly, the network side can determine a fixed maximum number of partitioned-models as an implementation-specific configuration.
- the maximum number of partitioned-models can also be pre-defined by specifications.
- thirdly, it can be pre-configured according to the target model type/format/functionality and/or use case.
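- a minimal sketch of how the maximum number of partitioned-models could be resolved from the three options above; the numeric values and use-case names are placeholders, not numbers taken from any specification.

```python
# Illustrative resolution of the maximum number of partitioned-models from the
# three options listed above; all values are hypothetical placeholders.

SPEC_DEFINED_MAX = 8                              # option 2: pre-defined by specifications
PER_USE_CASE_MAX = {"csi_feedback": 4,            # option 3: pre-configured per target
                    "beam_management": 6}         #   model type/functionality/use case


def resolve_max_partitions(network_config_max=None, use_case=None):
    """Pick the maximum number of partitioned-models from the available sources."""
    if network_config_max is not None:            # option 1: implementation-specific
        return network_config_max                 #   network configuration
    if use_case in PER_USE_CASE_MAX:
        return PER_USE_CASE_MAX[use_case]
    return SPEC_DEFINED_MAX


print(resolve_max_partitions(use_case="csi_feedback"))  # -> 4
```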
- the properties and/or parameters of each partitioned-model are configured individually and can be sent through RRC signaling (e.g., an RRC re-configuration message).
- the network side also has to configure an index for the partitioned-models in order to identify each partitioned-model, and the mapping relationship between the partitioned-models and the index is pre-configured.
- there can be multiple mapping relationship tables based on different target models, model applications/functionalities, etc. For example, the network side decides to segment a target model into multiple partitioned-models and the mapping relationship information is sent to the UEs (e.g., through a dedicated RRC message) so that drifted partitioned-model(s) can be indicated to the network using the index value (e.g., through L1/L2 or L3 signaling).
- model monitoring configuration information is provided by the network so that each partitioned-model can be monitored with a different configuration setting such as monitoring pattern/cycle or period, reference (distribution) data, etc., for drift detection or model performance measurement.
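- one possible in-memory representation of the pre-configured mapping relationship between index values and partitioned-model properties (as illustrated by the table of Figure 1 below); the field names and example entries are assumptions for illustration, since the disclosure does not fix a concrete encoding.

```python
# Sketch of a mapping-relationship table: partition index -> property information.
# Field names and example entries are assumptions; no concrete format is mandated.

from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class PartitionProperties:
    description: str                                            # model description information
    attributes: Dict[str, str] = field(default_factory=dict)    # attribute data
    monitoring_period_ms: int = 1000                            # monitoring pattern/cycle or period
    reference_data: Optional[List[float]] = None                # reference (distribution) data


# index -> property information, one entry per partitioned-model
mapping_table: Dict[int, PartitionProperties] = {
    0: PartitionProperties("encoder part", {"layers": "1-4"}, 500, [0.2, 0.5, 0.3]),
    1: PartitionProperties("decoder part", {"layers": "5-8"}, 2000, [0.1, 0.6, 0.3]),
}

# the relevant rows would be carried, e.g., in a dedicated RRC message; printed here
print(mapping_table[1].monitoring_period_ms)
```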
- Figure 1 shows an exemplary table of a mapping relationship for partitioned-models.
- the mapping relationship between partitioned-models and index is pre-configured.
- the properties of the associated partitioned-models include their own model description information and/or attribute data, in which partitioned-model monitoring information such as the model monitoring cycle/period, reference data, etc. is also contained.
- model monitoring drift detection can be based on any well-known metric using statistical distance or similarity measurements.
- the mapping relationship information and the property information can be sent only to the UEs having the associated partitioned-model(s). When the mapping relationship information is received, the UEs can perform model monitoring of the indexed partitioned-model(s) based on the property information.
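- the disclosure leaves the drift metric open (any well-known statistical distance or similarity measurement); the population stability index below is shown purely as one common example, and the bin values and threshold are made up for illustration.

```python
# One possible drift check against the configured reference (distribution) data:
# the population stability index (PSI). Any statistical distance could be used;
# the bins and the threshold below are illustrative only.

import math


def psi(reference, observed, eps=1e-6):
    """Population stability index between two binned probability distributions."""
    total = 0.0
    for r, o in zip(reference, observed):
        r, o = max(r, eps), max(o, eps)
        total += (o - r) * math.log(o / r)
    return total


reference_bins = [0.25, 0.50, 0.25]   # from the partitioned-model's property information
observed_bins = [0.10, 0.45, 0.45]    # collected during the monitoring period

DRIFT_THRESHOLD = 0.2                 # example threshold, not standardized
if psi(reference_bins, observed_bins) > DRIFT_THRESHOLD:
    print("drift detected -> report the partitioned-model index to the network")
```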
- FIG. 2 shows an exemplary block diagram of model structure type A for partitioned-models.
- a target model is segmented into multiple partitioned-models that can be shared with multiple UEs so that each UE can execute one or more partitioned-models with the pre-configured model monitoring information.
- FIG 3 is an exemplary block diagram of model structure type B for partitioned-models.
- Model A collaborates with multiple models Model B-1, Model B-2, ..., Model B-N, as these models are all located in different nodes.
- The network side is aware of the indexed Model B property information so that any drifted Model B at a UE can be identified to perform model switching or re-training.
- Model A and Model B are in a paired relationship to perform the target model.
- Figure 4 shows an exemplary block diagram of partitioned-model distribution for network and UE sides.
- partitioned-models at multiple UE devices can also collaborate with a single or multiple counterpart model(s) at the other node side in deployment scenarios such as Network/MEC-to-UEs or UEs-to-UEs (note that MEC denotes a multi-access edge computing device).
- any drifted partitioned-model(s) can be reported to the network side so that a new model can be transferred or an alternative model available at the UE can be activated. Alternatively, the drifted partitioned-model(s) can also be removed from the overall LCM operation between the network and the UEs if necessary.
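- a sketch of how the network side might react to a reported drifted partitioned-model, choosing between the countermeasures named above (activating an alternative model at the UE, transferring a new model, or removing the partition from the LCM operation); the selection order and the names are assumptions.

```python
# Illustrative network-side handling of a drift report for one partition index.
# The disclosure lists these countermeasures; the priority order is an assumption.

def handle_drift_report(index, ue_has_alternative, new_model_available):
    if ue_has_alternative:
        return ("activate_alternative", index)   # switch to a model already at the UE
    if new_model_available:
        return ("transfer_new_model", index)     # transfer a replacement partition
    return ("remove_from_lcm", index)            # drop the partition if necessary


print(handle_drift_report(index=2, ue_has_alternative=False, new_model_available=True))
```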
- Figure 5 shows a signaling flow of monitoring partitioned-models.
- the network side configures an index for the partitioned-models in order to identify each partitioned-model, and the mapping relationship between the partitioned-models and the index is pre-configured.
- there can be multiple mapping relationship tables based on different target models, model applications/functionalities, etc. For example, the network side decides to segment a target model into multiple partitioned-models and the mapping relationship information is sent to the UEs (e.g., through a dedicated RRC message) so that drifted partitioned-model(s) can be indicated to the network using the index value (e.g., through L1/L2 or L3 signaling).
- model monitoring configuration information is provided by the network so that each partitioned-model can be monitored with a different configuration setting such as monitoring pattern/cycle or period, reference (distribution) data, etc., for drift detection or model performance measurement.
- Figure 6 shows a flowchart of the procedure of monitoring partitioned-models for the network side.
- multiple partitioned-models are pre-configured so that they can be run on the UE side. If preset model IDs are available for the partitioned-models, only the model ID information can be shared without a model transfer of the partitioned-models.
- the mapping relationship information between the partitioned-models and the index is sent to the UEs (e.g., through a dedicated RRC message).
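- a sketch of the network-side configuration step around Figure 6: when a preset model ID is available, only the ID is shared, otherwise the partitioned-model itself is transferred, and in both cases the mapping/property information is included; the message dictionary and its keys are illustrative stand-ins for the RRC signaling, not a defined message format.

```python
# Illustrative network-side configuration message for one partitioned-model.
# The dictionary stands in for a dedicated RRC message; keys are assumptions.

def build_config_for_ue(partition_index, mapping_row, preset_model_id=None):
    msg = {"index": partition_index, "properties": mapping_row}
    if preset_model_id is not None:
        msg["model_id"] = preset_model_id                  # ID known to the UE: no transfer
    else:
        msg["model_payload"] = "<partitioned-model data>"  # model transfer required
    return msg


mapping_row = {"monitoring_period_ms": 1000, "reference_data": [0.2, 0.5, 0.3]}
print(build_config_for_ue(0, mapping_row, preset_model_id="model-A-part-0"))
```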
- Figure 7 shows a flowchart of the procedure of monitoring partitioned-models for the UE side.
- model monitoring is executed after activating the allocated partitioned-models for drift detection.
- drifted partitioned-model(s) can then be indicated to the network using the index value (e.g., through L1/L2 or L3 signaling).
- Model monitoring configuration information is provided by the network so that each partitioned-model can be monitored with different configuration settings such as monitoring pattern/cycle or period, reference (distribution) data, etc., for drift detection or model performance measurement.
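- a sketch of the UE-side behavior around Figure 7: activate the allocated partitioned-models, monitor each one with its configured setting, and indicate any drifted partitions to the network by index value; the drift check and the report structure are placeholders rather than specified signaling.

```python
# Illustrative UE-side monitoring step: check each allocated partitioned-model
# and return a report of drifted indices (to be carried via L1/L2 or L3 signaling).

def ue_monitoring_cycle(allocated, drift_check):
    """allocated: {index: monitoring config}; drift_check: callable(index) -> bool."""
    drifted = [idx for idx in allocated if drift_check(idx)]
    if drifted:
        return {"model_monitoring_update": drifted}   # indices of drifted partitions
    return None                                       # nothing to report


allocated = {0: {"period_ms": 500}, 1: {"period_ms": 2000}}
print(ue_monitoring_cycle(allocated, drift_check=lambda idx: idx == 1))
```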
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
The present application describes methods of signaling a data-trained AI/ML model for efficient model monitoring with machine learning model operation in wireless mobile communication systems comprising base stations (e.g., gNB) and mobile stations (e.g., UE). Index-based model partitioning with model properties is used to reduce signaling overhead through distributed model monitoring when model drift occurs.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DE102023207271 | 2023-07-28 | ||
| DE102023207271.8 | 2023-07-28 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025026851A1 (fr) | 2025-02-06 |
Family
ID=92264220
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2024/071026 (WO2025026851A1, pending) | Method for distributed monitoring of a partitioned model | 2023-07-28 | 2024-07-24 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025026851A1 (fr) |
- 2024-07-24: WO PCT/EP2024/071026, published as WO2025026851A1 (fr), legal status: active, pending
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20200201727A1 (en) | 2018-12-21 | 2020-06-25 | Microsoft Technology Licensing, Llc | Machine learning model monitoring |
| US20210133632A1 (en) | 2019-11-04 | 2021-05-06 | Domino Data Lab, Inc. | Systems and methods for model monitoring |
| WO2022008037A1 (fr) | 2020-07-07 | 2022-01-13 | Nokia Technologies Oy | Aptitude et incapacité d'ue ml |
| US20220027749A1 (en) | 2020-07-22 | 2022-01-27 | International Business Machines Corporation | Machine learning model monitoring |
| US20220321647A1 (en) | 2021-03-31 | 2022-10-06 | Telefonaktiebolaget Lm Ericsson (Publ) | Network Controlled Machine Learning in User Equipment |
| WO2023023954A1 (fr) * | 2021-08-24 | 2023-03-02 | Oppo广东移动通信有限公司 | Procédé et appareil de transmission de modèle |
| EP4395434A1 (fr) * | 2021-08-24 | 2024-07-03 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Procédé et appareil de transmission de modèle |
Non-Patent Citations (1)
| Title |
|---|
| ERIC YIP ET AL: "[FS_AI4Media] Permanent Document v0.8", vol. 3GPP SA 4, no. Berlin, DE; 20230522 - 20230526, 26 May 2023 (2023-05-26), XP052378545, Retrieved from the Internet <URL:https://www.3gpp.org/ftp/TSG_SA/WG4_CODEC/TSGS4_124_Berlin/Docs/S4-231011.zip S4-231011 [FS_AI4Media] PD v0.8.docx> [retrieved on 20230526] * |
Similar Documents
| Publication | Title |
|---|---|
| WO2025008304A1 | Method for advanced ML report signaling |
| WO2025026970A1 | Method for activating a candidate model |
| CN121014189A | Method for model dataset signaling for a radio access network |
| WO2025026851A1 | Method for distributed monitoring of a partitioned model |
| WO2025017056A1 | Method for AI/ML model mapping signaling |
| WO2025016856A1 | Method for advanced assistance signaling for a machine learning reporting user equipment |
| WO2025087719A1 | Method for offloading an ML LCM operation in a wireless network |
| WO2025087720A1 | Method for a network-assisted indirect ML LCM operation |
| WO2025087718A1 | Method for model grouping signaling |
| WO2025067885A1 | Method for model signaling for multi-connectivity |
| WO2025087873A1 | Method for AI/ML model re-training in a wireless network |
| FI130871B1 | DETERMINATION OF A TRANSMITTER LOB |
| WO2024231363A1 | Method for advanced model adaptation for a radio access network |
| WO2024231362A1 | Method for ML model configuration and signaling |
| WO2025129631A1 | Communication devices and methods |
| WO2025124931A1 | Method for model sharing signaling in a wireless communication system |
| WO2024160974A1 | Method for advanced gNB-UE model monitoring |
| WO2024160972A2 | Method for gNB-UE behaviors for model-based mobility |
| WO2025093364A1 | System and apparatus for model transfer in a network and associated method |
| WO2025168462A1 | Method for advanced online training signaling for RAN |
| WO2025093284A1 | System and apparatus for the transfer of models in a network and a method in association therewith |
| WO2025172265A1 | Method for advanced condition-based model signaling |
| WO2025195792A1 | Method for configuring a set of supported operating modes for an AI functionality in a wireless communication system |
| CN121195482A | Method for advanced model adaptation for a radio access network |
| WO2024200393A1 | Method for model switching signaling for a radio access network |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24752356; Country of ref document: EP; Kind code of ref document: A1 |