
WO2025183613A1 - Methods, apparatus and computer-readable media related to sharing datasets over a communication network - Google Patents

Methods, apparatus and computer-readable media related to sharing datasets over a communication network

Info

Publication number
WO2025183613A1
Authority
WO
WIPO (PCT)
Prior art keywords
network node
dataset
network
training
machine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/SE2025/050178
Other languages
English (en)
Inventor
Reem KARAKI
Jingya Li
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Telefonaktiebolaget LM Ericsson AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget LM Ericsson AB filed Critical Telefonaktiebolaget LM Ericsson AB
Publication of WO2025183613A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/16 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/02 Arrangements for optimising operational condition
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/10 Scheduling measurement reports; Arrangements for measurement reports

Definitions

  • Embodiments of the disclosure relate to communication networks, and particularly to methods, apparatus and computer-readable media related to sharing datasets over a communication network.
  • Example use cases include using autoencoders for Channel State Information (CSI) compression to reduce feedback overhead and improve channel prediction accuracy (see, e.g., 3rd Generation Partnership Project (3GPP) Technical Specification (TS) R1-2307916); using deep neural networks for classifying Line-of-Sight (LOS) and Non-LOS (NLOS) conditions to enhance positioning accuracy; using reinforcement learning for beam selection at the network (NW) side and/or the User Equipment (UE) side to reduce signaling overhead and beam alignment latency; and using deep reinforcement learning to learn an optimal precoding policy for complex Multiple Input Multiple Output (MIMO) precoding problems.
  • CSI Channel State Information
  • 3GPP 3rd Generation Partnership Project
  • TS Technical Specification
  • LCM Life Cycle Management of the AI/ML model (e.g., model training, model deployment, model inference, model monitoring, model updating) and AI/ML functionality.
  • ID model identifier
  • “Functionality” refers to an AI/ML-enabled Feature or Feature Group (FG) enabled by configuration(s), where configuration(s) is(are) supported based on conditions indicated by UE capability.
  • functionality-based LCM operates based on, at least, one configuration of an AI/ML-enabled Feature/FG or specific configurations of an AI/ML-enabled Feature/FG.
  • the network indicates activation/deactivation/fallback/switching of AI/ML functionality via 3GPP signalling (e.g., Radio Resource Control (RRC), Medium Access Control (MAC) Control Element (CE), Downlink Control Information (DCI)).
  • RRC Radio Resource Control
  • MAC Medium Access Control
  • CE Control Element
  • DCI Downlink Control Information
  • Models may not be identified at the Network, and UE may perform model-level LCM. Whether and how much awareness/interaction NW should have about model-level LCM requires further study.
  • functionality identification there may be one or more functionalities defined within an AI/ML-enabled feature, where “AI/ML-enabled Feature” refers to a Feature where AI/ML may be used.
  • model-ID-based LCM models are identified at the Network, and the Network or the UE may activate/deactivate/select/switch individual AI/ML models via the model ID.
  • a model may be associated with specific configurations/conditions associated with the UE capability of an AI/ML-enabled Feature/FG and additional conditions (e.g., scenarios, sites, and datasets) as determined/identified between the UE side and the NW side.
  • An AI/ML model identified by a model ID may be logical, and how it maps to physical AI/ML model(s) may be up to implementation.
  • Figure 1 shows a functional framework for AI/ML for the NR air interface, and particularly a functional framework that can be used for studying model LCM aspects for different Al for Physical Layer (PHY) use cases.
  • the general framework consists of the following:
  • Data Collection 102 is a function that provides input data to the Model Training 104, Management 106, and Inference functions 108.
  • Training Data 110: Data needed as input for the AI/ML Model Training function 104.
  • Monitoring Data 112: Data needed as input for the Management of AI/ML models or AI/ML functionalities.
  • Inference Data 114: Data needed as input for the AI/ML Inference function 108.
  • Model Training 104 is a function that performs AI/ML model training, validation, and testing which may generate model performance metrics that can be used as part of the model testing procedure.
  • the Model Training function 104 is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on the Training Data 110 delivered by the Data Collection function 102 if required.
  • Trained/Updated Model 115: In case a Model Storage function 116 is present, this is used to deliver trained, validated, and tested AI/ML models to the Model Storage function 116, or to deliver an updated version of a model to the Model Storage function 116.
  • Management 106 is a function that oversees the operation (e.g., selection, (de)activation, switching, fallback) and monitoring (e.g., performance) of AI/ML models or AI/ML functionalities. This function is also responsible for making decisions to ensure the proper inference operation based on data received from the Data Collection function 102 and the Inference function 108.
  • o Management Instruction 117: Information needed as input to manage the Inference function 108. Such information may include selection/(de)activation/switching of AI/ML models or AI/ML-based functionalities, fallback to non-AI/ML operation (i.e., not relying on the inference process), etc.
  • o Model Transfer/Delivery Request 118: Used to request model(s) from the Model Storage function 116.
  • Performance Feedback / Retraining Request 120: Information needed as input for the Model Training function 104, e.g., for model (re)training or updating purposes.
  • Inference 108 is a function that provides outputs from the process of applying AI/ML models or AI/ML functionalities, using the data that is provided by the Data Collection function 102 (i.e., Inference Data 114) as an input.
  • the Inference function 108 is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on Inference Data 114 delivered by a Data Collection function 102, if required.
  • o Inference Output 122: Data used by the Management function 106 to monitor the performance of AI/ML models or AI/ML functionalities.
  • Model Storage 116 is a function responsible for storing trained/updated models 115 that can be used to perform the Inference function 108.
  • the Model Storage function 116 in Figure 1 is only intended as a reference point (if any) when applicable for protocol terminations, model transfer/delivery, and related processes. It should be stressed that the presence of Model Storage 116 in Figure 1 does not restrict the actual storage locations of models. Therefore, the specification impact of all data/information/instruction flows (i.e., the arrows in Figure 1) to/from this function should be studied case by case.
  • Model Transfer/Delivery 124: Used to deliver an AI/ML model to the Inference function 108.
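  • The data flows of Figure 1 can be illustrated with a short, non-normative Python sketch (the function names and the toy "model" below are invented for the example; the framework itself does not mandate any API):

```python
# Non-normative sketch of the Figure 1 functional framework: plain functions
# wired together to show which flows feed which function.
import numpy as np

def data_collection(n=100):
    rng = np.random.default_rng(0)
    samples = rng.standard_normal((n, 4))
    # Training Data (110), Monitoring Data (112), Inference Data (114).
    return samples[: n // 2], samples[n // 2 : 3 * n // 4], samples[3 * n // 4 :]

def model_training(training_data):
    # Toy "model": per-feature means used as a predictor (stand-in for AI/ML).
    return training_data.mean(axis=0)                  # Trained/Updated Model (115)

def inference(model, inference_data):
    return inference_data - model                      # Inference Output (122)

def management(monitoring_data, inference_output):
    # Decide whether to keep the model active or request retraining (120).
    drift = abs(inference_output.mean())
    return "retraining request" if drift > 0.5 else "keep active"

train, monitor, infer = data_collection()
model_storage = {"model-1": model_training(train)}     # Model Storage (116)
out = inference(model_storage["model-1"], infer)       # Model Transfer/Delivery (124)
print(management(monitor, out))                        # Management Instruction (117)
```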
  • AI/ML models being discussed in the Rel-18 study item on AI/ML for the NR air interface can be categorized into the following two types:
  • One-sided AI/ML model which can be a UE-sided AI/ML model whose inference is performed entirely at the UE, or a NW-sided AI/ML model whose inference is performed entirely at the NW.
  • Two-sided AI/ML model which refers to a paired AI/ML Model(s) over which joint inference is performed across the UE and the NW, i.e., the first part of the inference is firstly performed by UE and then the remaining part is performed by gNB, or vice versa.
  • Figure 2 shows a use case of autoencoder (AE)-based CSI feedback/report, particularly AE-based CSI compression using a two-sided AI/ML model use case, where an encoder 202 (UE-part of the two-sided AE model) is operated at a UE to compress the estimated wireless channel, and the output of the encoder 202 (the compressed wireless channel information estimates) is reported from the UE to a gNB.
  • the gNB uses a decoder 204 (NW-part of the two-sided AE model) to reconstruct the estimated wireless channel information.
  • the two-sided AI/ML model is composed of the encoder 202 at the UE side and the decoder 204 at the base station (i.e., a gNB) side.
  • the code is generated by the encoder 202 and only interpretable by a jointly trained decoder 204.
  • the situation is different from running an AI/ML model in the UE, reporting the output over the air in a fully standardized format, and running a separate AI/ML model at the base station.
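  • By way of a toy illustration (not part of the disclosure), the following Python sketch fits a linear autoencoder with an SVD to show why the code produced by one vendor's encoder is only interpretable by the decoder it was jointly trained with; all dimensions and data are invented for the example:

```python
# Toy two-sided linear autoencoder for CSI-like vectors, fitted jointly.
import numpy as np

rng = np.random.default_rng(0)
n_samples, dim, latent = 2000, 32, 4

# Synthetic correlated samples standing in for measured CSI.
mix = rng.standard_normal((dim, dim)) * np.linspace(1.0, 0.05, dim)
X = rng.standard_normal((n_samples, dim)) @ mix.T

def joint_fit(X, latent):
    """Joint training: for a linear AE the optimum is the top singular vectors."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:latent], Vt[:latent].T            # encoder (UE-part), decoder (NW-part)

def nmse(X, X_hat):
    return np.sum((X - X_hat) ** 2) / np.sum(X ** 2)

enc_a, dec_a = joint_fit(X, latent)              # vendor pair A, trained jointly

# Vendor pair B: same subspace but a rotated latent basis -- equally valid.
R = np.linalg.qr(rng.standard_normal((latent, latent)))[0]
enc_b, dec_b = R @ enc_a, dec_a @ R.T

print(nmse(X, X @ enc_a.T @ dec_a.T))            # matched pair A: low error
print(nmse(X, X @ enc_b.T @ dec_b.T))            # matched pair B: identical low error
print(nmse(X, X @ enc_a.T @ dec_b.T))            # A's code into B's decoder: much worse
```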
  • a proprietary ML model operating with the existing standard air-interface is applied at one end of the communication chain (e.g., at the UE side), and the model life cycle management (e.g., model selection/training, model monitoring, model retraining, model update) is done at this node without inter-node assistance (e.g., assistance information provided by the network node).
  • an ML model is operating at one end of the communication chain (e.g., at the UE side), but this node gets assistance from the node(s) at the other end of the communication chain (e.g., a next generation Node B (gNB)) for its AI model life cycle management to some extent (e.g., for training/retraining the AI model, model update, model monitoring, model selection/fallback/switching).
  • gNB next generation Node B
  • the model-training process may require sharing of data from one side to the other side since the input and output of a two-sided model reside within different vendors’ domains.
  • Type 1: Joint training of the two-sided model at a single side/entity, e.g., the UE-side or the NW-side.
  • Type 2: Joint training of the two-sided model at the network side and the UE side. Joint training can be done simultaneously at the network and UE sides or be performed in a sequential way. In the case of Type 2 simultaneous joint training, the UE-part model (trained at the UE side) and the NW-part model (trained at the NW side) are jointly trained in the same loop through exchanging forward propagation values and backward propagation values between NW and UE. In the case of Type 2 sequential joint training, one side (UE-side or NW-side) starts its model training first, and then opens an Application Programming Interface (API) to facilitate the other side to do the model training.
  • API Application Programming Interface
  • the NW side trains its model first (thus also obtaining what is sometimes known as a nominal encoder, but that is not used at the UE), and then the UE side can train its encoder by using an API.
  • the API would accept, e.g., a CSI report and a target CSI, both of which are derived by the UE side based on the data (note that the CSI report is generated, at least partially, by the UE encoder under training and may thus not be an efficient CSI report at each step in the training).
  • the API would return gradients of the decoder and a loss function, with respect to the variables in the CSI report, thus enabling the UE to train an encoder that is matched to the decoder.
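  • A minimal numerical sketch of this API-based sequential training, under assumed linear models: decoder_api below stands in for the NW-side API described above, returning the loss and its gradients with respect to the CSI report, and the UE side uses those gradients to train a matched encoder. All names and signatures are illustrative assumptions, not a standardized interface:

```python
# Hypothetical Type 2 sequential training: NW trains first, then exposes a
# gradient-returning API so the UE can train a matched encoder.
import numpy as np

rng = np.random.default_rng(1)
dim, latent, n = 16, 4, 512
X = rng.standard_normal((n, dim))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # unit-norm "target CSI"

W_dec = rng.standard_normal((dim, latent)) * 0.3   # NW-part model, trained first

def decoder_api(csi_report, target_csi):
    """NW-side API: returns the loss and its gradient w.r.t. the CSI report."""
    residual = csi_report @ W_dec.T - target_csi   # decoder output minus label
    loss = np.mean(np.sum(residual ** 2, axis=1))
    grad = 2.0 * residual @ W_dec / len(csi_report)  # dloss / d(csi_report)
    return loss, grad

W_enc = rng.standard_normal((latent, dim)) * 0.01  # UE-part model under training
lr = 0.1
for step in range(500):
    report = X @ W_enc.T                           # encoder forward pass at the UE
    loss, grad_report = decoder_api(report, X)     # backward info from the NW side
    W_enc -= lr * grad_report.T @ X                # chain rule through the encoder
print(f"final loss: {loss:.4f}")                   # encoder is now matched to W_dec
```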
  • Type 3: Sequential training starting with UE-side training or sequential training starting with NW-side training, where the UE-part model and the NW-part model are trained by the UE side and network side, respectively.
  • the NW can firstly train the UE-part and NW-part models jointly using training data (e.g., target CSI samples), and then share a dataset consisting of UE-part model output (e.g., latent space variables) associated with the ground-truth/labels (e.g., target CSI) for the UE-side to train its UE-part model (e.g., an encoder).
  • the NW can share a dataset consisting of gradients of the NW-part model (e.g., the gradients of the decoder) together with loss function value(s) indicating the discrepancy of the NW-part model output (e.g., the decoder output) and the ground-truth/labels (e.g., target CSI) with respect to the UE-part model output (e.g., latent space variables), based on which the UE-side trains its UE-part model (e.g., an encoder).
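  • A minimal sketch of the first Type 3 option above, under toy linear models: the NW shares (target CSI, latent) pairs produced by its nominal UE-part model, and the UE side fits its own encoder to reproduce those latents. The regression-based fit is an assumption for illustration only:

```python
# Toy Type 3 NW-first training from a shared (target, latent) dataset.
import numpy as np

rng = np.random.default_rng(2)
dim, latent, n = 16, 4, 1000
targets = rng.standard_normal((n, dim))            # ground-truth/labels (target CSI)
W_nominal = rng.standard_normal((latent, dim))     # NW's nominal UE-part model
latents = targets @ W_nominal.T                    # UE-part model output (latent vars)

# Shared dataset: (target, latent) pairs. The UE side solves a regression so
# that its encoder maps each target to the corresponding latent code.
W_ue, *_ = np.linalg.lstsq(targets, latents, rcond=None)
print(np.allclose(W_ue.T, W_nominal))              # recovers a matched encoder
```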
  • the UE and/or the NW can belong to different vendors or have different configuration/conditions.
  • Dataset delivery from a NW side to a UE side without considering interoperability aspects might require developing multiple vendor-specific model pairs between one NW node and different UE vendors or developing multiple vendor-specific model pairs between one UE side (vendor/condition) and different NW vendors.
  • a method is performed by a second network node.
  • the method comprises obtaining first datasets from one or more first network nodes and using the first datasets to generate a second dataset.
  • the second dataset is labelled with a first identifier.
  • the method further comprises forwarding the second dataset and the first identifier to a third network node.
  • a method is performed by a fourth network node.
  • the method comprises receiving, from a third network node, a third dataset labelled with a second identifier
  • the third dataset comprises data samples that are labelled with a first condition or configuration of a user equipment or a network node associated with the data samples.
  • the method further comprises training, using the third dataset, a machine-learning model for the first condition or configuration.
  • a method is performed by a system. The method comprises training, at each of a plurality of second network nodes and based on a respective plurality of first datasets, a machine-learning model comprising first and second parts.
  • Training the machine-learning model comprises training the first part of the machine-learning model.
  • the method further comprises: outputting, by each of the second network nodes, a respective second dataset generated using the first datasets and the respective machine-learning model; aggregating, at the third network node, the plurality of second datasets to generate one or more third datasets; and training, at the one or more fourth network nodes, a second part of the machine-learning model using the one or more third datasets.
  • a second network node comprises processing circuitry configured to cause the second network node to obtain first datasets from one or more first network nodes and use the first datasets to generate a second dataset.
  • the second dataset is labelled with a first identifier.
  • the processing circuitry is further configured to cause the second network node to forward the second dataset and the first identifier to a third network node.
  • a fourth network node comprises processing circuitry configured to cause the fourth network node to receive, from a third network node, a third dataset labelled with a second identifier.
  • the third dataset comprises data samples that are labelled with a first condition or configuration of a user equipment or a network node associated with the data samples.
  • the processing circuitry is further configured to cause the fourth network node to train, using the third dataset, a machine-learning model for the first condition or configuration.
  • a system comprises a plurality of second network nodes, a third network node, and one or more fourth network nodes.
  • the system comprises processing circuitry configured to cause the system to train, at each of a plurality of second network nodes and based on a respective plurality of first datasets, a machine-learning model comprising first and second parts, wherein training the machine-learning model comprises training the first part of the machine-learning model; output, by each of the second network nodes, a respective second dataset generated using the first datasets and the respective machine-learning model; aggregate, at the third network node, the plurality of second datasets to generate one or more third datasets; and train, at the one or more fourth network nodes, a second part of the machine-learning model using the one or more third datasets.
  • a computer-readable storage medium stores code which, when executed by processing circuitry of a second network node, causes the second network node to perform a method according to embodiments of the first aspect.
  • computer-readable storage medium stores code which, when executed by processing circuitry of a fourth network node, causes the fourth network node to perform a method according to embodiments of the second aspect.
  • computer-readable storage medium stores code which, when executed by processing circuitry of a system, causes the system to perform a method according to embodiments of the third aspect.
  • the dataset delivery method may include one or more of the following:
  • the dataset (i.e., used for the NW to train the NW's actual NW-part model and the NW's nominal UE-part model and to generate another dataset to be delivered to the UE side) may be delivered from the NW to a central entity, and
  • the dataset (i.e., used for the UE to train the UE’s actual UE-part model and the UE's nominal NW-part model and to generate another dataset to be delivered to the NW side) may be delivered from the UE to a central entity.
  • the “UE” may refer to the UE device or another central entity (referred to as central entity 3 in the detailed description) that is responsible for training the UE part of the two-sided model.
  • the “NW” may refer to a NW node or another central entity (referred to as central entity 1 in the detailed description) that is responsible for training the NW part of the two-sided model.
  • the proposed dataset delivery mechanism may enable a UE/UE-side to train a single model that can be used jointly with one or more NW-part models, reducing implementation complexity for UEs and the need to store and/or obtain multiple models trained separately for different NW conditions/configurations.
  • NW nodes can train a single NW-part model that can be used jointly with one or more UE-part models.
  • the benefits for the NW side include less implementation complexity and avoiding the need to simultaneously run multiple NW side models to serve different UEs.
  • Fig. 1 shows a functional framework for AI/ML for the NR air interface
  • Fig. 2 shows an example of autoencoder-based CSI compression using a two-sided AI/ML model.
  • FIGs 3 to 6 are schematic diagrams showing the sharing of one or more training datasets according to embodiments of the disclosure.
  • Fig. 7 is a flow chart illustrating a method performed by a network node in accordance with some embodiments.
  • Fig. 8 is a flow chart illustrating a method performed by a network node in accordance with some embodiments.
  • FIG. 9 shows an example of a communication system in accordance with some embodiments.
  • FIG. 10 shows a UE in accordance with some embodiments.
  • FIG. 11 shows a network node in accordance with some embodiments.
  • Fig. 12 is a block diagram illustrating a virtualization environment in which functions implemented by some embodiments may be virtualized.
  • Fig. 13 shows a network node in accordance with some embodiments.

Detailed Description
  • NW network
  • gNB network node
  • D2D Device-to-Device
  • the node may be deployed in a 5G network or a 6G network.
  • although “AI/ML model” uses the singular form, it should be well understood that more than one AI/ML model may be implemented.
  • the UE may be configured or may autonomously switch between the models depending on certain conditions and/or proprietary implementations.
  • the UE (and the NW for two-sided model cases) is assumed to have the capability of running AI/ML models supporting the AI/ML-enabled features and any associated data collection procedures.
  • training is to be understood as a generic term, e.g., representing training, retraining, or fine-tuning.
  • in some examples, the dataset transmitted by the UE to the NW has the same size (number of samples) as the reference signals transmitted by the NW for data collection.
  • in other examples, the sizes of the two may differ.
  • for example, the UE may not transmit all of the measurement results, and/or part of the transmitted dataset may not be received by the NW.
  • similarly, in some examples the total dataset transmitted by the UE to the NW is the same as the total dataset received by the NW from the UE.
  • in other examples, the NW may discard some of the received dataset, e.g., where some of the elements of the dataset have the same/similar values.
  • Method 1: NW-first training and OTT dataset delivery from multiple NW-sides to multiple UE-sides via a central entity
  • a first method (referred to herein as “method 1”) includes one or more of the following steps:
  • a UE-side/UE-part model training dataset (called the first global dataset herein) is created by the central entity 2.
  • the central entity 1 delivers the first data set to central entity 2 (e.g., Operations, Administration and Maintenance (OAM), Core Network (CN) entity) that creates the first global dataset by aggregating one or more first datasets from one or more central entities 1.
  • the central entity 2 delivers the first global dataset(s) to a central entity 3 (e.g., a UE server) that trains an AI/ML model using the first global data set and delivers the model to the UEs.
  • Figure 3 is a schematic diagram showing dataset sharing and delivery according to an embodiment of the disclosure, and particularly according to the first method referred to herein as “method 1”.
  • Method 1 has one or more of the following steps:
  • Step 301: A NW node collects data samples (e.g., channel measurements) from multiple UEs, either directly or via another NW node, creates a data set (referred to as the second dataset herein) using all or part of these data samples collected from one or more NW nodes, and forwards the collected data samples to central entity 1.
  • In one example, a gNB collects data samples from multiple UEs and forwards the collected data samples to the NW-side training center.
  • the NW-side training center creates a data set (referred to as the second dataset herein) using all or part of these data samples collected from one or more gNB nodes.
  • the NW-side training center then forwards the second data set to central entity 1.
  • In another example, a gNB collects data samples from multiple UEs and forwards the collected data samples to the NW-side training center.
  • the NW-side training center creates a data set (referred to as the second dataset herein) using all or part of these data samples collected from one or more gNB nodes.
  • the second data set is then distributed to one or more gNBs.
  • the one or more gNBs then forward the second data set to central entity 1.
  • In a further example, a NW node (e.g., a gNB-DU) collects data samples from multiple UEs and forwards the collected data samples to a gNB-CU.
  • the gNB-CU creates a data set (referred to as the second dataset herein) using all or part of these data samples collected from one or more gNB-DUs.
  • the gNB-CU then forwards the second data set to the central entity 1 (e.g., a NW-side training center or a NW-side data center).
  • In yet another example, a NW node (e.g., a gNB-CU or a gNB-DU) collects data samples from multiple UEs and forwards the collected data samples directly to the central entity 1 (e.g., a NW-side training center, or a NW-side data center).
  • Step 302: If no NW-part model and nominal UE-part model are available at the central entity 1, the central entity 1 trains a pair of the NW-part model and nominal UE-part model using the second dataset. The central entity 1 creates another dataset (referred to as the first dataset herein) using the second dataset and the trained/available NW-part model and nominal UE-part model pair.
  • Step 303: The central entity 1 (e.g., the NW side training center, referred to as the fourth node herein) delivers part or all of the data samples within the first dataset to the central entity 2.
  • Alternatively, the central entity 1 delivers part or all of the data samples within the first dataset to one or more NW nodes, which forward the received data set to the central entity 2.
  • Step 304: The central entity 2 creates a global first dataset based on one or more first data sets received from one or more central entities 1 and/or NW nodes. It should be noted that the one or more central entities and/or NW nodes could be from different vendors.
  • Step 305: The central entity 2 delivers part or all of the data samples within the global first data set to one or more central entities 3 (e.g., one or more UE side training centers).
  • Step 306: The one or more central entities 3 use part or all of the received data samples within the global first data set to train their own one or more UE-part AI/ML models for the AI/ML-enabled feature independently.
  • Step 307: The one or more central entities 3 deliver the trained one or more UE-part AI/ML models to the corresponding AI-feature capable UEs.
  • FIG. 3 illustrates the above procedure.
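  • By way of a non-normative illustration of steps 302 to 304 (the data structures, names, and linear nominal model below are invented for the example), each central entity 1 may derive a first dataset from its second dataset using its nominal UE-part model and label it with an identifier, and central entity 2 may aggregate the deliveries into the first global dataset:

```python
# Hypothetical sketch of the method 1 dataset flow (all structures assumed).
import numpy as np

rng = np.random.default_rng(3)
dim, latent = 16, 4

def derive_first_dataset(second_dataset, nominal_encoder, dataset_id):
    """Central entity 1 (step 302): run the nominal UE-part model over the
    second dataset to obtain (target, latent) training pairs."""
    return {"id": dataset_id,
            "targets": second_dataset,
            "latents": second_dataset @ nominal_encoder.T}

# Two NW vendors, each with its own collected second dataset and trained pair.
deliveries = []
for dataset_id in ("nw-vendor-1", "nw-vendor-2"):
    second = rng.standard_normal((500, dim))       # samples from this vendor's gNBs
    nominal_enc = rng.standard_normal((latent, dim))
    deliveries.append(derive_first_dataset(second, nominal_enc, dataset_id))

# Central entity 2 (step 304): aggregate into the first global dataset.
first_global = {
    "targets": np.concatenate([d["targets"] for d in deliveries]),
    "latents": np.concatenate([d["latents"] for d in deliveries]),
    "sources": [d["id"] for d in deliveries],      # identifiers kept for tracking
}
print(first_global["targets"].shape, first_global["sources"])
```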
  • The NW 1 vendor training center (i.e., central entity 1) trains a single decoder based on the dataset collected by its network nodes (gNBs), while a UE side may train a single model based on the global data set collected from multiple network vendors.
  • the central entity 2 also delivers part or all of the data samples within the global first dataset to one or more central entities 1 (e.g., one or more NW-side training centers).
  • the one or more central entities 1 can update/retrain/fine-tune their NW-part model(s) and/or nominal UE-part model(s) using the received data samples.
  • Method 1a: NW-first training and OTT dataset delivery from multiple NW-sides to the UE side via a central entity, including dataset labelling based on UE-side conditions/configurations.
  • A variation of the first method (referred to herein as “method 1a”) corresponds substantially to method 1, described above, with one or more of the following alterations:
  • a UE-side/UE-part model training dataset (called the first global sub dataset herein) is created by the central entity 2 for a UE-side condition/configuration;
  • the NW nodes deliver the data samples labeled with a UE-side condition/configuration to a central entity 1 (e.g., NW training center) that creates a first sub dataset by aggregating the data samples from one or more NW nodes.
  • the central entity 1 delivers the first sub dataset to central entity 2 (e.g., OAM, CN entity) that creates the first global sub dataset by aggregating one or more first sub datasets for a UE-side condition/configuration from one or more central entities 1.
  • the central entity 2 delivers the first global sub dataset(s) to a central entity 3 (e.g., a UE server) that trains an AI/ML model using the first global sub dataset and delivers the trained models to the UEs with the associated UE-side condition/configuration.
  • FIG. 4 is a schematic diagram showing dataset sharing and delivery according to an embodiment of the disclosure, and particularly according to “method 1a”. The procedure is similar to method 1. However, in step 401, a NW node (e.g., a gNB) collects data samples (e.g., channel measurements) from multiple UEs, and labels the collected data samples with the associated UE-condition/configuration information.
  • Training of the NW-side model can still be performed as in Method 1 (i.e., in step 402, the central entity 1 trains a pair of the NW-part model and nominal UE-part model using the second dataset). This labeling allows the UE-side model to be trained for a specific UE condition/configuration, by training using data samples from the first global dataset that are associated with that UE-side condition/configuration.
  • central entity 1 creates the first dataset using the second dataset and the trained/available NW-part model and nominal UE-part model pair.
  • the central entity 1 divides the first dataset into sub first datasets based on the associated UE-side condition/configuration of each data sample (a grouping sketch is given below, after this procedure). For instance, subDataset 1 consists of data samples that are associated with UE-side condition/configuration 1, and subDataset 2 consists of data samples that are associated with UE-side condition/configuration 2.
  • the central entity 1 (e.g., the NW side training center) delivers the first sub dataset(s) to one or more NW nodes, which forward the received data set(s) to the central entity 2.
  • in step 404, for each UE-side condition/configuration, the central entity 2 creates a global first sub dataset based on one or more sub datasets associated with that UE-side condition/configuration received from one or more central entities 1 and/or NW nodes.
  • in step 405, the central entity 3 collects the data samples associated with the same UE-side condition/configuration from central entity 2 and uses part or all of these aggregated data samples to train, in step 406, one or more UE-part AI/ML models for this UE-side condition/configuration for the AI/ML-enabled feature.
  • The NW 1 vendor training center (i.e., central entity 1) trains a single decoder based on the dataset collected by its network nodes (gNBs), while a UE side may train a model for each UE condition/configuration/vendor based on the subset of the global data set collected from multiple network vendors.
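  • A minimal grouping sketch for the sub-dataset division described above (the field name ue_condition and the dict-based dataset layout are assumptions for illustration):

```python
# Splitting a first dataset into sub-datasets keyed by UE-side condition labels.
from collections import defaultdict

def split_by_condition(samples):
    """samples: iterable of dicts, each carrying a 'ue_condition' label."""
    sub_datasets = defaultdict(list)
    for sample in samples:
        sub_datasets[sample["ue_condition"]].append(sample)
    return dict(sub_datasets)

first_dataset = [
    {"ue_condition": "cond-1", "target": [0.1, 0.2], "latent": [0.3]},
    {"ue_condition": "cond-2", "target": [0.4, 0.1], "latent": [0.7]},
    {"ue_condition": "cond-1", "target": [0.9, 0.5], "latent": [0.2]},
]
# subDataset 1 <-> "cond-1", subDataset 2 <-> "cond-2", as in the example above.
for condition, sub in split_by_condition(first_dataset).items():
    print(condition, len(sub), "samples")
```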
  • the same methodology can be used by one or multiple central entities 1 to create and/or label local sub first datasets based on the associated NW-side condition/configuration of each data sample and deliver the local sub first datasets to the central entity 2.
  • the central entity 2 can create one or multiple global first sub datasets for different NW-side conditions/configurations, and then deliver these one or more global first sub datasets to one or more central entities 3.
  • a central entity 3 can then use the received one or multiple global first sub datasets to train one or more UE-part AI/ML models for this NW-side condition/configuration for the AI/ML-enabled feature.
  • Method 2: UE-first training and OTT dataset delivery from multiple UE-sides to multiple NW-sides via a central entity
  • a second method includes one or more of the following steps/features:
  • a NW-side/NW-part model training dataset (also called the third global dataset herein) is created by the central entity 2 (e.g., OAM, CN entity).
  • the central entity 3 trains an AI/ML model using a fourth data set and delivers the model to the UEs.
  • the central entity 3 delivers the third data set to central entity 2 (e.g., OAM, CN entity) that creates the third global dataset by aggregating one or more third datasets from one or more central entities 3.
  • the central entity 2 delivers the third global dataset(s) to a central entity 1 (e.g., NW training center) that trains one or more NW-part models using the third global data set and delivers the model(s) to the NW nodes (e.g., gNBs).
  • Figure 5 is a schematic diagram showing dataset sharing and delivery according to an embodiment of the disclosure, and particularly according to “method 2”.
  • Method 2 has one or more of the following steps:
  • Step 501: A central entity 3 (e.g., a UE training center) collects data samples (e.g., channel measurements) from multiple UEs.
  • the central entity 3 creates a data set (called the fourth dataset herein) using all or part of these data samples collected from one or more UEs.
  • Step 502: If no nominal NW-part model and UE-part model are available at the central entity 3, the central entity 3 trains a pair of the nominal NW-part model and UE-part model using the fourth dataset. The central entity 3 creates another dataset (called the third dataset) using the fourth dataset and the trained/available nominal NW-part model and UE-part model pair.
  • Step 503: The central entity 3 delivers part or all of the data samples within the third dataset to the central entity 2.
  • Step 504: The central entity 2 creates a global third dataset based on one or more third data sets received from one or more central entities 3 and/or UEs.
  • Step 505: The central entity 2 delivers part or all of the data samples within the global third data set to a central entity 1 (e.g., a NW side training center).
  • a central entity 1 e.g., a NW side training center
  • Step 506: The central entity 1 uses part or all of the received data samples within the global third data set to train one or more NW-part AI/ML models for the AI/ML-enabled feature.
  • Step 507: The central entity 1 delivers the trained one or more NW-part AI/ML models to the AI-feature capable NW nodes.
  • Figure 5 illustrates the above procedure.
  • The UE 1 vendor training center trains a single encoder (UE-part model) based on the dataset collected by its multiple UE nodes (e.g., UEs belonging to this UE vendor), with the data samples possibly being collected from different NW vendors, while a NW side (central entity 1) may train a single decoder (NW-part model) based on the global dataset collected from multiple UE vendors.
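  • The mirror-image flow of method 2 can be sketched in the same toy linear setting (all names and the least-squares fit are assumptions): the UE side encodes its fourth dataset with its trained UE-part model, shares the resulting (latent, target) pairs as the third dataset, and the NW side fits a decoder to them (cf. steps 502 to 506):

```python
# Toy UE-first training: third dataset derived at the UE side, decoder fitted
# at the NW side.
import numpy as np

rng = np.random.default_rng(4)
dim, latent, n = 16, 4, 800
fourth = rng.standard_normal((n, dim))                   # UE-collected measurements
W_ue = rng.standard_normal((latent, dim))                # trained UE-part encoder
third = {"latents": fourth @ W_ue.T, "targets": fourth}  # third dataset shared OTT

# Central entity 1 (NW training center): least-squares decoder fit (step 506).
W_dec, *_ = np.linalg.lstsq(third["latents"], third["targets"], rcond=None)
recon = third["latents"] @ W_dec
# Lossy by construction: latent < dim, so reconstruction is approximate.
print("NMSE:", np.sum((recon - fourth) ** 2) / np.sum(fourth ** 2))
```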
  • Method 2a: UE-first training and OTT dataset delivery from multiple UE-sides to the NW side via a central entity, including dataset labelling based on NW-side conditions/configurations.
  • A variation of the second method (referred to herein as “Method 2a”) corresponds in many ways to method 2 described above.
  • method 2a includes one or more of the following steps/features:
  • a NW-side/NW-part model training dataset (called the third global sub dataset herein) is created by the central entity 2, where the dataset is associated with a NW-side condition/configuration.
  • the UE nodes deliver the data samples with a NW-side label/condition to a central entity 3 (e.g., UE training center) that creates a third sub data set by aggregating the data samples from one or more UEs.
  • the central entity 3 trains an AI/ML model using the fourth data set and delivers the model to the UEs.
  • the central entity 3 delivers the third sub data set to central entity 2 (e.g., OAM, CN entity) that creates the third global sub dataset by aggregating one or more third sub datasets for a NW-side condition/configuration from one or more central entities 3.
  • the central entity 2 delivers the third global sub dataset(s) to a central entity 1 that trains one or more NW-part models using the third global sub dataset and delivers the trained one or more NW-part models to the NW nodes associated with the NW-side condition/configuration for which training is performed.
  • FIG. 6 is a schematic diagram showing dataset sharing and delivery according to an embodiment of the disclosure, and particularly according to “method 2a”. Method 2a is similar to method 2 described above. However, in step 601, a UE node (e.g., a UE) collects data samples (e.g., channel measurements) and labels the collected data samples with the associated NW-condition/configuration information.
  • Training of the UE-side model can still be performed as in Method 2 (i.e., in step 602, the central entity 3 trains a pair of the nominal NW-part model and UE-part model using the fourth dataset). This labeling allows the NW-side model to be trained for a specific NW condition/configuration, by training using data samples from the third global dataset that are associated with that NW-side condition/configuration.
  • central entity 3 may create the third dataset using the fourth dataset and the trained/available nominal NW-part model and UE-part model pair.
  • the central entity 3 node divides the third dataset into sub third datasets based on the associated NW-side condition/configuration of each data sample. For instance, subDataset 1 consists of data samples that are associated with NW-side condition/configuration 1, and subDataset 2 consists of data samples that are associated with NW-side condition/configuration 2.
  • in step 603, the central entity 3 delivers the third sub dataset(s), with information about the associated NW-side condition/configuration, to the central entity 2.
  • in step 604, for each NW-side condition/configuration, the central entity 2 creates a global third sub dataset based on one or more sub datasets associated with that NW-side condition/configuration received from one or more central entities 3.
  • the central entity 1 collects the data samples associated with the same NW-side condition/configuration from central entity 2 and uses part or all of these aggregated data samples to train, in step 606, one or more NW-part AI/ML models for this NW-side condition/configuration for the AI/ML-enabled feature.
  • The NW 1 vendor training center (i.e., central entity 1) trains a single decoder for a specific network condition/configuration based on a subset of the global dataset (i.e., the data samples collected from its network nodes (gNBs)), while a UE part is trained based on data collected from multiple network vendors using a subset of the global data set.
  • steps 301, 401, 501, 601 may require the NW node or the central entities 1 and 2 (e.g., NW side training center) to be able to categorize/label the data samples within the first dataset based on UE-side conditions/configurations, so that it can divide the dataset into sub datasets and deliver the correct subDataset to the central entity 3 (e.g., UE side training center), based on the UE-side condition/configuration.
  • One method to support the data sample labeling and categorizing at the NW node is via the UE indicating its UE-side condition/configuration to the NW node.
  • the UE may indicate its UE-side condition/configuration related information to the NW.
  • the UE-side condition/configuration related information may include one or more of: UE vendor ID, UE chipset ID, UE release number, Software/hardware version, UE antenna configuration, UE Radio Frequency (RF) impairments, UE power states, Al complexity levels, and Al model structures, etc.
  • the UE may report the UE-side condition/configuration related parameters to the NW explicitly.
  • the UE may report UE-side condition/configuration related information implicitly, e.g., by using a form of IDs, indexes and/or the like.
  • the UE-side conditions/configurations may be included in a UE capability report, for example.
  • the information may also be included when the UE reports its capability to support data collection, or when the UE reports its UE-side additional conditions associated with the particular AI-enabled feature (e.g., to which the AI/ML model relates) to the NW, or when the UE reports the collected training data samples to the NW, e.g., the UE-side condition/configuration related information is reported together with the data samples transmission from the UE to the NW.
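  • One possible (purely illustrative) encoding of the UE-side condition/configuration related information, showing explicit reporting of the parameters versus implicit reporting via a registry index; the field names and ID values are assumptions, not specified signalling:

```python
# Hypothetical structure for UE-side condition/configuration reporting.
from dataclasses import dataclass

@dataclass(frozen=True)
class UeSideCondition:
    vendor_id: str
    chipset_id: str
    release: int
    antenna_config: str

# Explicit report: the parameters themselves are sent to the NW.
explicit_report = UeSideCondition("vendor-A", "chip-X", 18, "4RX")

# Implicit report: both sides share a registry, and the UE reports only an index.
condition_registry = {1: explicit_report}
implicit_report = 1
assert condition_registry[implicit_report] == explicit_report
```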
  • it may be beneficial for the NW to train and/or optimize the model based on the dataset collected from its network nodes (gNBs).
  • a UE may collect data samples (e.g., channel measurements) from multiple NW nodes, and label the collected data samples with the associated NW-condition/configuration information.
  • the central entity 3 trains a pair of the nominal NW-part model and UE-part model using the fourth dataset. This labeling would allow the NW part model to be trained for a specific NW condition/configuration by training using data samples that are associated with NW-side condition/configuration from the third global dataset.
  • changes to the NW side condition/configuration may require fine-tuning/updating/retraining the models at the NW and the corresponding UE part model in the two-sided model case.
  • the data that is collected may be labelled and/or associated with a specific network condition/configuration to support UE-part retraining/fine-tuning/updates.
  • the UE or the central entity 3 may categorize/label the data samples within each dataset based on NW-side conditions/configurations before delivering them to central entity 2.
  • the NW node may provide the UE with information about the NW-side conditions/configurations associated with the data samples collected for training the UE-side model.
  • the NW may indicate its NW-side condition/configuration related information to the UE.
  • the NW-side condition/configuration related information may include one or more of: NW vendor ID, configuration ID, etc.
  • the NW may report the NW-side condition/configuration explicitly or implicitly via dedicated signaling (e.g., RRC message, Downlink (DL) MAC CE, DL control signaling) or broadcast signaling.
  • the NW may determine whether to reuse an already stored model pair, retrain its stored model pair or train a new model pair, and whether it needs to conduct a dataset delivery to the central entity 2 to assist UE-side/UE-part model training/re-training/fine-tuning.
  • the collected data may need to be obtained from different NW nodes/central entities 1 before the training/retraining/fine-tuning can be started, e.g., such that the UE trained model pair will generalize well across NW nodes.
  • the size of the first data set may be specified in the standard text (e.g., as part of data quality assurance requirements for the associated Al-enabled feature) or may be left to the NW proprietary implementation.
  • the maximum number of data samples reported from different NW nodes/central entity 1 may also be considered/specified, e.g., to avoid bias.
  • the NW may additionally, or optionally, first consider the data distribution of the collected data set, as sketched below. If the data distribution of the collected dataset does not match the second dataset (or a subset of the dataset) used for training the NW part model, the NW may start the training procedures and a new first dataset delivery to the central entity 2. If the data distribution of the collected dataset matches the dataset (or a subset of the dataset) that is currently available at the NW (and used to train one or more NW-part model(s), e.g., the decoder(s)), the NW may omit the training and dataset delivering procedure to central entity 2.
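  • A possible realization of this distribution check, sketched with an assumed histogram-distance metric and threshold (the disclosure does not mandate a particular statistical test):

```python
# Non-normative distribution-match check before retraining/dataset delivery.
import numpy as np

def histogram_distance(old, new, bins=20):
    """Mean L1 distance between normalized per-feature histograms."""
    dists = []
    for f in range(old.shape[1]):
        lo = min(old[:, f].min(), new[:, f].min())
        hi = max(old[:, f].max(), new[:, f].max())
        h_old, _ = np.histogram(old[:, f], bins=bins, range=(lo, hi))
        h_new, _ = np.histogram(new[:, f], bins=bins, range=(lo, hi))
        dists.append(np.abs(h_old / len(old) - h_new / len(new)).sum())
    return float(np.mean(dists))

rng = np.random.default_rng(5)
training_set = rng.standard_normal((2000, 8))       # data behind the current model
collected = rng.standard_normal((2000, 8)) + 0.5    # newly collected, shifted

THRESHOLD = 0.2                                     # assumed, deployment-specific
if histogram_distance(training_set, collected) > THRESHOLD:
    print("mismatch: retrain and deliver a new first dataset to central entity 2")
else:
    print("match: omit training and dataset delivery")
```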
  • the NW may additionally or alternatively first consider the performance of the pair of UE-part model (e.g., actual encoder) and NW-part model (e.g., actual decoder) when applying the dataset collected from the UE.
  • the performance may be measured using one or more intermediate Key Performance Indicators (KPIs) (e.g., squared generalized cosine similarity (SGCS), normalized mean square error (NMSE), etc.); a computation sketch is given below. If the performance of the mentioned pair is acceptable (e.g., above a certain threshold), the NW may omit training/retraining/fine-tuning and the dataset delivery procedure to central entity 2.
  • KPIs Key Performance Indicators
  • SGCS squared generalized cosine similarity
  • NMSE normalized mean square error
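  • A minimal sketch of how the two intermediate KPIs named above may be computed (the formulas follow common CSI evaluation practice; the acceptance threshold is an assumed, deployment-specific parameter):

```python
# SGCS and NMSE for a target CSI vector and its reconstruction.
import numpy as np

def sgcs(h, h_hat):
    """Squared generalized cosine similarity between two (complex) CSI vectors."""
    num = np.abs(np.vdot(h_hat, h)) ** 2
    return float(num / (np.linalg.norm(h) ** 2 * np.linalg.norm(h_hat) ** 2))

def nmse(h, h_hat):
    """Normalized mean square error of the reconstruction."""
    return float(np.linalg.norm(h - h_hat) ** 2 / np.linalg.norm(h) ** 2)

rng = np.random.default_rng(6)
h = rng.standard_normal(32) + 1j * rng.standard_normal(32)          # target CSI
h_hat = h + 0.1 * (rng.standard_normal(32) + 1j * rng.standard_normal(32))

SGCS_THRESHOLD = 0.9                                                # assumed
if sgcs(h, h_hat) >= SGCS_THRESHOLD:
    print("pair performance acceptable: omit retraining and dataset delivery")
else:
    print("pair performance degraded: trigger retraining / dataset delivery")
```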
  • the central entity 1 and/or NW nodes may indicate to the central entity 2 the invalidity of part or all data samples of a previously delivered first dataset(s).
  • the central entity 2 may label each collected first data set with an association to NW-side configuration/condition/vendor/ID to facilitate tracking of the validity of the data set. For instance, the central entity 2 may generate a unique ID that is provided to central entity 1/NW node when a new first dataset is delivered to central entity 2. The central entity 1/NW node can use the unique ID to indicate which first data set(s) should be updated/discarded/extended.
  • the central entity 2 may indicate to the central entity 3 the availability of an updated/new first global data set.
  • the UE can determine whether to reuse the already trained UE part model or if there is a need to retrain/ finetune/ update the UE part model.
  • the first global data set may be labeled with a unique ID to assist the UE to assess the consistency between the first data set used for training and the first global data set available at the central entity 2.
  • the UE may assess the consistency by comparing the data samples used for training and the data samples of the first global data set available at the central entity 2.
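  • Illustrative bookkeeping for the unique-ID mechanism described in the preceding bullets (all structures and method names are assumptions): central entity 2 issues an ID per delivered first dataset, central entity 1/NW nodes can later invalidate or update it, and a UE side can check whether the set it trained on is still current:

```python
# Hypothetical dataset registry at central entity 2 for validity tracking.
import uuid

class DatasetRegistry:
    def __init__(self):
        self._store = {}                           # dataset_id -> entry

    def register(self, nw_condition, samples):
        dataset_id = str(uuid.uuid4())             # unique ID given to central entity 1
        self._store[dataset_id] = {"nw_condition": nw_condition,
                                   "samples": samples, "valid": True}
        return dataset_id

    def invalidate(self, dataset_id):
        self._store[dataset_id]["valid"] = False   # e.g., NW condition changed

    def is_current(self, dataset_id):
        entry = self._store.get(dataset_id)
        return bool(entry and entry["valid"])

registry = DatasetRegistry()
ds_id = registry.register("nw-config-7", samples=[[0.1, 0.2], [0.3, 0.4]])
registry.invalidate(ds_id)                         # NW signals invalidity
print(registry.is_current(ds_id))                  # False -> UE-part retraining needed
```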
  • the NW may determine whether to reuse the already stored model pair, retrain its stored model pair or train a new model pair.
  • the collected data may need to be obtained from different UEs or central entities 3 before the training/retraining/fine-tuning can be started, e.g., such that the NW trained part will generalize well across different UEs or UE vendors.
  • the size of the third data set may be specified in the standard text (e.g., as part of data quality assurance requirements for the associated Al-enabled feature) or may be left to the UE proprietary implementation.
  • the maximum number of data samples reported from different UEs/central entities 3 may also be considered/specified, e.g., to avoid bias.
  • the UE may additionally, or optionally, first consider the data distribution of the collected data set. If the data distribution of the collected dataset does not match the fourth dataset (or a subset of the dataset) used for training the UE part model, the UE may start the training procedures and a new third dataset delivery to the central entity 2. If the data distribution of the collected dataset matches the dataset (or a subset of the dataset) that is currently available at the UE (and used to train one or more UE-part model(s), e.g., the encoder(s)), the UE/central entity 3 may omit the training and dataset delivering procedure to central entity 2.
  • the UE and/or the NW may first consider the performance of the pair of UE-part model (e.g., actual encoder) and NW-part model (e.g., actual decoder) when applying the dataset collected from the UE.
  • the performance may be measured using intermediate KPIs (e.g., SGCS, NMSE, etc.). If the performance of the mentioned pair is not acceptable (e.g., below a certain threshold), the NW may request the UE to collect new/more data samples to train/retrain/fine-tune and perform the dataset delivery procedure to central entity 2 to assist the NW-side retraining/fine-tuning.
  • Figure 7 depicts a method in accordance with particular embodiments.
  • the method of Figure 7 may be performed by a second network node (e.g., the network node 910 or network node 1100 as described later with reference to Figures 9 and 11 respectively, or the core network node 908 or network node 1300 as described later with reference to Figures 9 and 13 respectively, or another node coupled to such a core network node).
  • At least some of the steps of the method shown in Figure 7 may correspond to the signaling and/or actions of “central entity 1” described above with respect to Figures 3 and/or 4, or to the signalling and/or actions of “central entity 3” described above with respect to Figures 5 and/or 6.
  • the method begins at step 702, in which the second network node obtains first datasets from a plurality of first network nodes.
  • This step may correspond in whole or in part to any one of steps 301, 401, 501, or 601 described above.
  • the one or more first network nodes may comprise one or more radio access network nodes (e.g., particularly where step 702 corresponds to step 301 or step 401), one or more wireless devices (e.g., particularly where step 702 corresponds to step 501 or step 601), and/or one or more core network nodes.
  • the first datasets may comprise a plurality of radio measurements (e.g., for CSI, positioning, line of sight detection, beam selection, etc.).
  • In step 704, the second network node uses the first datasets to generate a second dataset.
  • This step may correspond in whole or in part to any one of steps 302, 402, 502, and 602 described above.
  • the second data set is labelled with a first identifier.
  • Step 704 may comprise training a machine-learning model using the first datasets, and using the trained machine-learning model and the first datasets to generate the second dataset.
  • the machine-learning model may comprise a first part and a second part for implementation in different network entities.
  • the machine-learning model may comprise an autoencoder, with one of the first part and the second part comprising an encoder (e.g., for implementation by a UE) and the other of the first part and the second part comprising a decoder (e.g., for implementation by a network node, gNB, etc.).
  • This step may therefore comprise training the first part of the machine-learning model, for implementation by a network node, using the first datasets; and/or generating a nominal second part of the machine-learning model, for implementation by a user equipment. See steps 302 and 402 in methods 1 and 1a, respectively.
  • the step may comprise training the second part of the machine-learning model, for implementation by a user equipment, using the first datasets; and/or generating a nominal first part of the machine-learning model, for implementation by a network node. See steps 502 and 602 in methods 2 and 2a, respectively.
  • In step 706, the second network node forwards the second dataset and the first identifier to a third network node.
  • This step may correspond in whole or in part to any one of steps 303, 403, 503, and 603 described above.
  • the third network node may thus correspond to “central entity 2” in any of methods 1, 1a, 2 and 2a, and be operable to combine respective second datasets from multiple second network nodes to generate a third dataset (e.g., a global dataset).
  • the third dataset may be labelled with a second identifier, which may be identical to or equal to the first identifier.
  • the second network nodes may be associated with multiple network vendors, such that the third dataset is not specific to one network vendor.
  • the first datasets may comprise data samples that are labelled with one or more conditions or configurations of a user equipment or a network node associated with the data samples.
  • step 704 may comprise training the machine-learning model for a first condition or configuration (e.g., one of the conditions or configurations used as a label for the data), based on data samples that are labelled with the first condition or configuration.
  • the second dataset may then also be associated with the first condition or configuration (e.g., the machine-learning model may be specified for use when the UE and/or NW are in the first condition or configuration).
  • the second dataset may comprise an update and/or an extension to a dataset previously forwarded to the third network node by the second network node.
  • the first identifier may be associated with the dataset previously forwarded to the third network node by the second network node.
  • the method of Figure 7 may further comprise the second network node receiving, prior to generating the second dataset, the first identifier from the third network node.
  • Figure 8 depicts a method in accordance with particular embodiments. The method of Figure 8 may be performed by a fourth network node (e.g., the network node 910 or network node 1100 as described later with reference to Figures 9 and 11 respectively, or the core network node 908 or network node 1300 as described later with reference to Figures 9 and 13 respectively, or another node coupled to such a core network node).
  • At least some of the steps of the method shown in Figure 8 may correspond to the signaling and/or actions of “NW vendor training center” or “central entity 3” described above with respect to Figures 3 and/or 4, or to the signalling and/or actions of “UE vendor training center” or “central entity 1” described above with respect to Figures 5 and/or 6.
  • the method begins at step 802, in which the fourth network node receives, from a third network node, a third dataset.
  • This step may correspond in whole or in part to any one of steps 305, 405, 505, and 605 described above.
  • the third network node may thus correspond to “central entity 2” in any of methods 1, 1a, 2 and 2a, and be operable to combine respective second datasets from multiple second network nodes to generate a third dataset (e.g., a global dataset).
  • the second network nodes may be associated with multiple network vendors, such that the third dataset is not specific to one network vendor.
  • In step 804, the fourth network node trains a machine-learning model using the third dataset.
  • This step may correspond in whole or in part to any one of steps 306, 406, 506, and 606 described above.
  • the machine-learning model may comprise a first part and a second part for implementation in different network entities.
  • the machine-learning model may comprise an autoencoder, with one of the first part and the second part comprising an encoder (e.g., for implementation by a UE) and the other of the first part and the second part comprising a decoder (e.g., for implementation by a network node, gNB, etc.).
  • This step may therefore comprise training the first part of the machine-learning model, for implementation by a user equipment, using the third dataset. See steps 306 and 406 in methods 1 and 1a, respectively.
  • the step may comprise training the second part of the machine-learning model, for implementation by a network node, using the third dataset. See steps 506 and 606 in methods 2 and 2a, respectively.
  • the third dataset may comprise data samples that are labelled with one or more conditions or configurations of a user equipment or a network node associated with the data samples.
  • step 804 may comprise training the machine-learning model for a first condition or configuration (e.g., one of the conditions or configurations used as a label for the data), based on data samples that are labelled with the first condition or configuration.
  • the machine-learning model may then be specified for use when the UE and/or NW are in the first condition or configuration.
  • where multiple third datasets are generated, they may be respectively labelled with second identifiers.
  • the method further comprises transmitting the trained machine-learning model (or the first or second parts thereof) to one or more UEs and/or network nodes. See any one of steps 307, 407, 507, and 607 described above.
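  • As one hedged illustration of packaging a trained model part for such transmission, assuming PyTorch serialization (the disclosure fixes neither a model format nor a transport; serialize_part is a hypothetical helper):

```python
import io

import torch
import torch.nn as nn

def serialize_part(model_part: nn.Module) -> bytes:
    """Serialize one part of a trained two-sided model into a byte blob
    suitable for delivery to a UE or network node over any transport."""
    buffer = io.BytesIO()
    torch.save(model_part.state_dict(), buffer)
    return buffer.getvalue()

# Stand-in for a trained decoder part destined for a network node.
decoder_part = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 64))
blob = serialize_part(decoder_part)
print(len(blob), "bytes to transmit")
```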
  • Figures 3, 4, 5 and 6 each show systems and corresponding methods for sharing training datasets and ML models between nodes of a communication network.
  • Figures 7 and 8 are flow charts of methods performed by second and fourth network nodes respectively, and in some embodiments those network nodes may belong to a wider system comprising one or more first, second, third and fourth network nodes, e.g., as shown in any of Figures 3 to 6.
  • a method, performed by a system in a communication network may comprise the following steps: obtaining, at a plurality of second network nodes, respective pluralities of first datasets; training, at each second network node and based on a respective plurality of first datasets, a machine-learning model comprising first and second parts, wherein training the machine-learning model comprises training the first part of the machine-learning model and generating a nominal second part of the machine-learning model; outputting, by each of the second network nodes, a respective second dataset generated using the first datasets and the respective machine-learning model; aggregating, at a third network node, the plurality of second datasets to generate a third dataset; and training, at one or more fourth network nodes, a second part of the machine-learning model using the third dataset.
  • the first part may be for implementation at a user equipment and the second part for implementation at a network node, or vice versa (the first part for implementation at a network node and the second part for implementation at a user equipment).
  • the machine-learning model may comprise an autoencoder, with one of the first part and the second part comprising an encoder and the other of the first part and the second part comprising a decoder.
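  • To make the two-part structure concrete, here is a minimal PyTorch sketch of a CSI-style autoencoder whose encoder is a candidate for UE-side deployment and whose decoder is a candidate for network-side deployment. The layer sizes, training loop, and the idea of exporting (target, latent) pairs as a second dataset are illustrative assumptions, not details fixed by the disclosure.

```python
import torch
import torch.nn as nn

class CsiEncoder(nn.Module):
    """First part: compresses a CSI vector to a small latent (UE side)."""
    def __init__(self, dim_in: int = 64, dim_latent: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, 32), nn.ReLU(), nn.Linear(32, dim_latent))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class CsiDecoder(nn.Module):
    """Second part: reconstructs the CSI vector from the latent (network side)."""
    def __init__(self, dim_latent: int = 8, dim_out: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_latent, 32), nn.ReLU(), nn.Linear(32, dim_out))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# Train the encoder jointly with a nominal decoder; only the encoder is kept.
encoder, nominal_decoder = CsiEncoder(), CsiDecoder()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(nominal_decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(256, 64)  # placeholder for collected CSI samples
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(nominal_decoder(encoder(x)), x)
    loss.backward()
    optimizer.step()

# Export (target, latent) pairs as a "second dataset" for a peer training
# center, which can fit its own decoder without seeing the encoder weights.
with torch.no_grad():
    second_dataset = (x, encoder(x))
```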
  • Figure 9 shows an example of a communication system 900 in accordance with some embodiments.
  • the communication system 900 includes a telecommunication network 902 that includes an access network 904, such as a radio access network (RAN), and a core network 906, which includes one or more core network nodes 908.
  • the access network 904 includes one or more access network nodes, such as network nodes 910a and 910b (one or more of which may be generally referred to as network nodes 910), or any other similar 3rd Generation Partnership Project (3GPP) access nodes or non-3GPP access points.
  • a network node is not necessarily limited to an implementation in which a radio portion and a baseband portion are supplied and integrated by a single vendor.
  • the telecommunication network 902 includes one or more Open-RAN (ORAN) network nodes.
  • An ORAN network node is a node in the telecommunication network 902 that supports an ORAN specification (e.g., a specification published by the O-RAN Alliance, or any similar organization) and may operate alone or together with other nodes to implement one or more functionalities of any node in the telecommunication network 902, including one or more network nodes 910 and/or core network nodes 908.
  • Examples of an ORAN network node include an open radio unit (O-RU), an open distributed unit (O-DU), an open central unit (O-CU), including an O-CU control plane (O-CU-CP) or an O-CU user plane (O-CU-UP), a RAN intelligent controller (near-real time or non-real time) hosting software or software plug-ins, such as a near-real time control application (e.g., xApp) or a non-real time control application (e.g., rApp), or any combination thereof (the adjective “open” designating support of an ORAN specification).
  • the network node may support a specification by, for example, supporting an interface defined by the ORAN specification, such as an A1, F1, W1, E1, E2, X2, or Xn interface, an open fronthaul user plane interface, or an open fronthaul management plane interface.
  • an ORAN access node may be a logical node in a physical node.
  • an ORAN network node may be implemented in a virtualization environment (described further below) in which one or more network functions are virtualized.
  • the virtualization environment may include an O-Cloud computing platform orchestrated by a Service Management and Orchestration Framework via an O-2 interface defined by the O-RAN Alliance or comparable technologies.
  • the network nodes 910 facilitate direct or indirect connection of user equipment (UE), such as by connecting UEs 912a, 912b, 912c, and 912d (one or more of which may be generally referred to as UEs 912) to the core network 906 over one or more wireless connections.
  • Example wireless communications over a wireless connection include transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information without the use of wires, cables, or other material conductors.
  • the communication system 900 may include any number of wired or wireless networks, network nodes, UEs, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections.
  • the communication system 900 may include and/or interface with any type of communication, telecommunication, data, cellular, radio network, and/or other similar type of system.
  • the UEs 912 may be any of a wide variety of communication devices, including wireless devices arranged, configured, and/or operable to communicate wirelessly with the network nodes 910 and other communication devices.
  • the network nodes 910 are arranged, capable, configured, and/or operable to communicate directly or indirectly with the UEs 912 and/or with other network nodes or equipment in the telecommunication network 902 to enable and/or provide network access, such as wireless network access, and/or to perform other functions, such as administration in the telecommunication network 902.
  • the core network 906 connects the network nodes 910 to one or more host computing systems, such as host 916. These connections may be direct or indirect via one or more intermediary networks or devices. In other examples, network nodes may be directly coupled to hosts.
  • the core network 906 includes one or more core network nodes (e.g., core network node 908) that are structured with hardware and software components. Features of these components may be substantially similar to those described with respect to the UEs, network nodes, and/or hosts, such that the descriptions thereof are generally applicable to the corresponding components of the core network node 908.
  • Example core network nodes include functions of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), and/or a User Plane Function (UPF).
  • the host 916 may be under the ownership or control of a service provider other than an operator or provider of the access network 904 and/or the telecommunication network 902.
  • the host 916 may host a variety of applications to provide one or more services. Examples of such applications include live and pre-recorded audio/video content, data collection services such as retrieving and compiling data on various ambient conditions detected by a plurality of UEs, analytics functionality, social media, functions for controlling or otherwise interacting with remote devices, functions for an alarm and surveillance center, or any other such function performed by a server.
  • the communication system 900 of Figure 9 enables connectivity between the UEs, network nodes, and hosts.
  • the communication system may be configured to operate according to predefined rules or procedures, such as specific standards that include, but are not limited to: Global System for Mobile Communications (GSM); Universal Mobile Telecommunications System (UMTS); Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, 5G standards, or any applicable future generation standard (e.g., 6G); wireless local area network (WLAN) standards, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards (WiFi); and/or other standards such as Worldwide Interoperability for Microwave Access (WiMax), Near Field Communication (NFC), LiFi, and/or low-power wide-area network (LPWAN) standards.
  • the telecommunication network 902 is a cellular network that implements 3GPP standardized features. Accordingly, the telecommunication network 902 may support network slicing to provide different logical networks to different devices that are connected to the telecommunication network 902. For example, the telecommunication network 902 may provide Ultra Reliable Low Latency Communication (URLLC) services to some UEs, while providing Enhanced Mobile Broadband (eMBB) services to other UEs, and/or Massive Machine Type Communication (mMTC)/Massive IoT services to yet further UEs.
  • the UEs 912 are configured to transmit and/or receive information without direct human interaction.
  • a UE may be designed to transmit information to the access network 904 on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the access network 904.
  • a UE may be configured for operating in single- or multi-RAT or multi-standard mode.
  • a UE may operate with any one or combination of Wi-Fi, NR (New Radio) and LTE, i.e. being configured for multi-radio dual connectivity (MR-DC), such as E-UTRAN (Evolved-UMTS Terrestrial Radio Access Network) New Radio - Dual Connectivity (EN-DC).
  • the hub 914 communicates with the access network 904 to facilitate indirect communication between one or more UEs (e.g., UE 912c and/or 912d) and network nodes (e.g., network node 910b).
  • the hub 914 may be a controller, router, content source and analytics, or any of the other communication devices described herein regarding UEs.
  • the hub 914 may be a broadband router enabling access to the core network 906 for the UEs.
  • the hub 914 may be a controller that sends commands or instructions to one or more actuators in the UEs.
  • the hub 914 may be a data collector that acts as temporary storage for UE data and, in some embodiments, may perform analysis or other processing of the data.
  • the hub 914 may be a content source. For example, for a UE that is a VR device, display, loudspeaker, or other media delivery device, the hub 914 may retrieve VR assets, video, audio, or other media or data related to sensory information via a network node, which the hub 914 then provides to the UE either directly, after performing local processing, and/or after adding additional local content.
  • the hub 914 acts as a proxy server or orchestrator for the UEs, in particular if one or more of the UEs are low-energy IoT devices.
  • the hub 914 may have a constant/persistent or intermittent connection to the network node 910b.
  • the hub 914 may also allow for a different communication scheme and/or schedule between the hub 914 and UEs (e.g., UE 912c and/or 912d), and between the hub 914 and the core network 906.
  • the hub 914 is connected to the core network 906 and/or one or more UEs via a wired connection.
  • the hub 914 may be configured to connect to an M2M service provider over the access network 904 and/or to another UE over a direct connection.
  • UEs may establish a wireless connection with the network nodes 910 while still connected via the hub 914 via a wired or wireless connection.
  • the hub 914 may be a dedicated hub - that is, a hub whose primary function is to route communications to/from the UEs from/to the network node 910b.
  • the hub 914 may be a non-dedicated hub - that is, a device which is capable of operating to route communications between the UEs and network node 910b, but which is additionally capable of operating as a communication start and/or end point for certain data channels.
  • Figure 10 shows a UE 1000 in accordance with some embodiments.
  • the UE 1000 presents additional details of some embodiments of the UE 912 of Figure 9.
  • a UE refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other UEs.
  • Examples of a UE include, but are not limited to, a smart phone, mobile phone, cell phone, voice over IP (VoIP) phone, wireless local loop phone, desktop computer, personal digital assistant (PDA), wireless camera, gaming console or device, music storage/playback device, wearable terminal device, wireless endpoint, mobile station, tablet, laptop, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), an Augmented Reality (AR) or Virtual Reality (VR) device, wireless customer-premise equipment (CPE), vehicle, vehicle-mounted or vehicle embedded/integrated wireless device, etc.
  • Examples of a UE also include UEs identified by the 3rd Generation Partnership Project (3GPP), including a narrow band internet of things (NB-IoT) UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE.
  • a UE may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, Dedicated Short-Range Communication (DSRC), vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), or vehicle-to-everything (V2X).
  • a UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device.
  • a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller).
  • a UE may also represent a device that is not intended for sale to, or operation by, an end user but which may be associated with or operated for the benefit of a user (e.g., a smart power meter).
  • the UE 1000 includes processing circuitry 1002 that is operatively coupled via a bus 1004 to an input/output interface 1006, a power source 1008, a memory 1010, a communication interface 1012, and/or any other component, or any combination thereof.
  • Certain UEs may utilize all or a subset of the components shown in Figure 10. The level of integration between the components may vary from one UE to another UE. Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc.
  • the processing circuitry 1002 is configured to process instructions and data and may be configured to implement any sequential state machine operative to execute instructions stored as machine-readable computer programs in the memory 1010.
  • the processing circuitry 1002 may be implemented as one or more hardware-implemented state machines (e.g., in discrete logic, field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), etc.); programmable logic together with appropriate firmware; one or more stored computer programs, general-purpose processors, such as a microprocessor or digital signal processor (DSP), together with appropriate software; or any combination of the above.
  • the processing circuitry 1002 may include multiple central processing units (CPUs).
  • the processing circuitry 1002 may be configured to cause the UE 1000 to perform the signalling of at least one of the UEs shown in any of Figures 3 to 6.
  • the input/output interface 1006 may be configured to provide an interface or interfaces to an input device, output device, or one or more input and/or output devices.
  • Examples of an output device include a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof.
  • An input device may allow a user to capture information into the UE 1000.
  • Examples of an input device include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like.
  • the presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user.
  • a sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, a biometric sensor, etc., or any combination thereof.
  • An output device may use the same type of interface port as an input device. For example, a Universal Serial Bus (USB) port may be used to provide an input device and an output device.
  • the power source 1008 is structured as a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic device, or power cell, may be used.
  • the power source 1008 may further include power circuitry for delivering power from the power source 1008 itself, and/or an external power source, to the various parts of the UE 1000 via input circuitry or an interface such as an electrical power cable. Delivering power may be, for example, for charging of the power source 1008.
  • Power circuitry may perform any formatting, converting, or other modification to the power from the power source 1008 to make the power suitable for the respective components of the UE 1000 to which power is supplied.
  • the memory 1010 may be or be configured to include memory such as random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, hard disks, removable cartridges, flash drives, and so forth.
  • the memory 1010 includes one or more application programs 1014, such as an operating system, web browser application, a widget, gadget engine, or other application, and corresponding data 1016.
  • the memory 1010 may store, for use by the UE 1000, any of a variety of various operating systems or combinations of operating systems.
  • the memory 1010 may be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as tamper resistant module in the form of a universal integrated circuit card (UICC) including one or more subscriber identity modules (SIMs), such as a USIM and/or ISIM, other memory, or any combination thereof.
  • the UICC may for example be an embedded UICC (eUICC), integrated UICC (iUICC) or a removable UICC commonly known as ‘SIM card.’
  • the memory 1010 may allow the UE 1000 to access instructions, application programs and the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data.
  • An article of manufacture, such as one utilizing a communication system, may be tangibly embodied as or in the memory 1010, which may be or comprise a device-readable storage medium.
  • the processing circuitry 1002 may be configured to communicate with an access network or other network using the communication interface 1012.
  • the communication interface 1012 may comprise one or more communication subsystems and may include or be communicatively coupled to an antenna 1022.
  • the communication interface 1012 may include one or more transceivers used to communicate, such as by communicating with one or more remote transceivers of another device capable of wireless communication (e.g., another UE or a network node in an access network).
  • Each transceiver may include a transmitter 1018 and/or a receiver 1020 appropriate to provide network communications (e.g., optical, electrical, frequency allocations, and so forth).
  • the transmitter 1018 and receiver 1020 may be coupled to one or more antennas (e.g., antenna 1022) and may share circuit components, software or firmware, or alternatively be implemented separately.
  • communication functions of the communication interface 1012 may include cellular communication, Wi-Fi communication, LPWAN communication, data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof.
  • Communications may be implemented in accordance with one or more communication protocols and/or standards, such as IEEE 802.11, Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), GSM, LTE, New Radio (NR), UMTS, WiMax, Ethernet, transmission control protocol/internet protocol (TCP/IP), synchronous optical networking (SONET), Asynchronous Transfer Mode (ATM), QUIC, Hypertext Transfer Protocol (HTTP), and so forth.
  • a UE may provide an output of data captured by its sensors, through its communication interface 1012, via a wireless connection to a network node.
  • Data captured by sensors of a UE can be communicated through a wireless connection to a network node via another UE.
  • the output may be periodic (e.g., once every 15 minutes if it reports the sensed temperature), random (e.g., to even out the load from reporting from several sensors), in response to a triggering event (e.g., when moisture is detected an alert is sent), in response to a request (e.g., a user initiated request), or a continuous stream (e.g., a live video feed of a patient).
  • a UE comprises an actuator, a motor, or a switch, related to a communication interface configured to receive wireless input from a network node via a wireless connection.
  • the states of the actuator, the motor, or the switch may change.
  • the UE may comprise a motor that adjusts the control surfaces or rotors of a drone in flight according to the received input, or a robotic arm that performs a medical procedure according to the received input.
  • a UE, when in the form of an Internet of Things (IoT) device, may be a device for use in one or more application domains, these domains comprising, but not limited to, city wearable technology, extended industrial application and healthcare.
  • Examples of an IoT device are a device which is, or which is embedded in: a connected refrigerator or freezer, a TV, a connected lighting device, an electricity meter, a robot vacuum cleaner, a voice controlled smart speaker, a home security camera, a motion detector, a thermostat, a smoke detector, a door/window sensor, a flood/moisture sensor, an electrical door lock, a connected doorbell, an air conditioning system like a heat pump, an autonomous vehicle, a surveillance system, a weather monitoring device, a vehicle parking monitoring device, an electric vehicle charging station, a smart watch, a fitness tracker, a wearable for tactile augmentation or sensory enhancement, a water sprinkler, an animal- or item-tracking device, a sensor for monitoring a plant or animal, and so forth.
  • a UE may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another UE and/or a network node.
  • the UE may in this case be an M2M device, which may in a 3GPP context be referred to as an MTC device.
  • the UE may implement the 3GPP NB-IoT standard.
  • a UE may represent a vehicle, such as a car, a bus, a truck, a ship and an airplane, or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.
  • any number of UEs may be used together with respect to a single use case.
  • a first UE might be or be integrated in a drone and provide the drone’s speed information (obtained through a speed sensor) to a second UE that is a remote controller operating the drone.
  • the first UE may adjust the throttle on the drone (e.g. by controlling an actuator) to increase or decrease the drone’s speed.
  • the first and/or the second UE can also include more than one of the functionalities described above.
  • a UE might comprise the sensor and the actuator, and handle communication of data for both the speed sensor and the actuators.
  • Figure 11 shows a network node 1100 in accordance with some embodiments.
  • network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment, in a telecommunication network.
  • network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)), O-RAN nodes or components of an O-RAN node (e.g., O-RU, O-DU, O-CU).
  • Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and so, depending on the provided amount of coverage, may be referred to as femto base stations, pico base stations, micro base stations, or macro base stations.
  • a base station may be a relay node or a relay donor node controlling a relay.
  • a network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units, distributed units (e.g., in an O-RAN access node) and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio.
  • Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).
  • network nodes include multiple transmission point (multi-TRP) 5G access nodes, multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), Operation and Maintenance (O&M) nodes, Operations Support System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g., Evolved Serving Mobile Location Centers (E-SMLCs)), and/or Minimization of Drive Tests (MDTs).
  • the network node 1100 includes a processing circuitry 1102, a memory 1104, a communication interface 1106, and a power source 1108.
  • the network node 1100 may be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components.
  • the network node 1100 comprises multiple separate components (e.g., BTS and BSC components)
  • one or more of the separate components may be shared among several network nodes.
  • a single RNC may control multiple NodeBs.
  • each unique NodeB and RNC pair may in some instances be considered a single separate network node.
  • the network node 1100 may be configured to support multiple radio access technologies (RATs).
  • some components may be duplicated (e.g., separate memory 1104 for different RATs) and some components may be reused (e.g., a same antenna 1110 may be shared by different RATs).
  • the network node 1100 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 1100, for example GSM, WCDMA, LTE, NR, WiFi, Zigbee, Z-wave, LoRaWAN, Radio Frequency Identification (RFID) or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 1100.
  • the processing circuitry 1102 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node 1100 components, such as the memory 1104, network node 1100 functionality.
  • the processing circuitry 1102 may be configured to cause the network node to perform the methods as described with reference to Figure 7 and/or Figure 8, and/or to perform the signalling and/or actions of at least one of the gNBs, the UE vendor training centers, the central entity 1, 2 or 3, and/or the NW vendor training centers shown in any of Figures 3 to 6.
  • the processing circuitry 1102 includes a system on a chip (SOC). In some embodiments, the processing circuitry 1102 includes one or more of radio frequency (RF) transceiver circuitry 1112 and baseband processing circuitry 1114. In some embodiments, the radio frequency (RF) transceiver circuitry 1112 and the baseband processing circuitry 1114 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 1112 and baseband processing circuitry 1114 may be on the same chip or set of chips, boards, or units.
  • the memory 1104 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 1102.
  • the memory 1104 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 1102 and utilized by the network node 1100.
  • the memory 1104 may be used to store any calculations made by the processing circuitry 1102 and/or any data received via the communication interface 1106.
  • the processing circuitry 1102 and memory 1104 are integrated.
  • the communication interface 1106 is used in wired or wireless communication of signaling and/or data between a network node, access network, and/or UE. As illustrated, the communication interface 1106 comprises port(s)/terminal(s) 1116 to send and receive data, for example to and from a network over a wired connection.
  • the communication interface 1106 also includes radio front-end circuitry 1118 that may be coupled to, or in certain embodiments a part of, the antenna 1110. Radio front-end circuitry 1118 comprises filters 1120 and amplifiers 1122.
  • the radio front-end circuitry 1118 may be connected to an antenna 1110 and processing circuitry 1102.
  • the radio front-end circuitry may be configured to condition signals communicated between antenna 1110 and processing circuitry 1102.
  • the radio front-end circuitry 1118 may receive digital data that is to be sent out to other network nodes or UEs via a wireless connection.
  • the radio front-end circuitry 1118 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 1120 and/or amplifiers 1122.
  • the radio signal may then be transmitted via the antenna 1110.
  • the antenna 1110 may collect radio signals which are then converted into digital data by the radio front-end circuitry 1118.
  • the digital data may be passed to the processing circuitry 1102.
  • the communication interface may comprise different components and/or different combinations of components.
  • the network node 1100 does not include separate radio front-end circuitry 1118; instead, the processing circuitry 1102 includes radio front-end circuitry and is connected to the antenna 1110. Similarly, in some embodiments, all or some of the RF transceiver circuitry 1112 is part of the communication interface 1106. In still other embodiments, the communication interface 1106 includes one or more ports or terminals 1116, the radio front-end circuitry 1118, and the RF transceiver circuitry 1112, as part of a radio unit (not shown), and the communication interface 1106 communicates with the baseband processing circuitry 1114, which is part of a digital unit (not shown).
  • the antenna 1110 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals.
  • the antenna 1110 may be coupled to the radio frontend circuitry 1118 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly.
  • the antenna 1110 is separate from the network node 1100 and connectable to the network node 1100 through an interface or port.
  • the antenna 1110, communication interface 1106, and/or the processing circuitry 1102 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by the network node. Any information, data and/or signals may be received from a UE, another network node and/or any other network equipment. Similarly, the antenna 1110, the communication interface 1106, and/or the processing circuitry 1102 may be configured to perform any transmitting operations described herein as being performed by the network node. Any information, data and/or signals may be transmitted to a UE, another network node and/or any other network equipment.
  • the power source 1108 provides power to the various components of network node 1100 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component).
  • the power source 1108 may further comprise, or be coupled to, power management circuitry to supply the components of the network node 1100 with power for performing the functionality described herein.
  • the network node 1100 may be connectable to an external power source (e.g., the power grid, an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry of the power source 1108.
  • the power source 1108 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry. The battery may provide backup power should the external power source fail.
  • Embodiments of the network node 1100 may include additional components beyond those shown in Figure 11 for providing certain aspects of the network node’s functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein.
  • the network node 1100 may include user interface equipment to allow input of information into the network node 1100 and to allow output of information from the network node 1100. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for the network node 1100.
  • In embodiments providing a core network node, such as the core network node 908 of Figure 9, some components, such as the radio front-end circuitry 1118 and the RF transceiver circuitry 1112, may be omitted.
  • Figure 13 shows a network node 1300 in accordance with some embodiments.
  • network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment, in a telecommunication network.
  • the network node 1300 may be operable as a core network node, a core network function or, more generally, a core network entity, such as the core network node 908 described above with respect to Figure 9.
  • Examples of network nodes in this context include core network entities such as one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDF), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), Policy Control Function (PCF) and/or a User Plane Function (UPF).
  • the network node 1300 includes processing circuitry 1302, a memory 1304, a communication interface 1306, and a power source 1308, and/or any other component, or any combination thereof.
  • the network node 1300 may be composed of multiple physically separate components, which may each have their own respective components. In certain scenarios in which the network node 1300 comprises multiple separate components, one or more of the separate components may be shared among several network nodes.
  • the processing circuitry 1302 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node 1300 components, such as the memory 1304, network node 1300 functionality.
  • the processing circuitry 1302 may be configured to cause the network node to perform the methods as described with reference to Figure 7 and/or 8, and/or to perform the signalling and/or actions of at least one of the UE vendor training centers, the central entity 1, 2 or 3, and/or the NW vendor training centers shown in any of Figures 3 to 6.
  • the memory 1304 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 1302.
  • the memory 1304 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 1302 and utilized by the network node 1300.
  • the memory 1304 may be used to store any calculations made by the processing circuitry 1302 and/or any data received via the communication interface 1306.
  • the processing circuitry 1302 and memory 1304 are integrated.
  • the communication interface 1306 is used in wired or wireless communication of signaling and/or data between a network node, access network, and/or UE.
  • the power source 1308 provides power to the various components of network node 1300 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component).
  • the power source 1308 may further comprise, or be coupled to, power management circuitry to supply the components of the network node 1300 with power for performing the functionality described herein.
  • the network node 1300 may be connectable to an external power source (e.g., the power grid, an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry of the power source 1308.
  • the power source 1308 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry. The battery may provide backup power should the external power source fail.
  • Embodiments of the network node 1300 may include additional components beyond those shown in Figure 13 for providing certain aspects of the network node’s functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein.
  • the network node 1300 may include user interface equipment to allow input of information into the network node 1300 and to allow output of information from the network node 1300. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for the network node 1300.
  • Figure 12 is a block diagram illustrating a virtualization environment 1200 in which functions implemented by some embodiments may be virtualized.
  • virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources.
  • virtualization can be applied to any device described herein, or components thereof, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components.
  • Some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines (VMs) implemented in one or more virtual environments 1200 hosted by one or more of hardware nodes, such as a hardware computing device that operates as a network node, UE, core network node, or host.
  • the node may be entirely virtualized.
  • the virtualization environment 1200 includes components defined by the O-RAN Alliance, such as an O-Cloud environment orchestrated by a Service Management and Orchestration Framework via an O-2 interface. Virtualization may facilitate distributed implementations of a network node, UE, core network node, or host.
  • Applications 1202 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) are run in the virtualization environment 1200 to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.
  • Hardware 1204 includes processing circuitry, memory that stores software and/or instructions executable by hardware processing circuitry, and/or other hardware devices as described herein, such as a network interface, input/output interface, and so forth.
  • Software may be executed by the processing circuitry to instantiate one or more virtualization layers 1206 (also referred to as hypervisors or virtual machine monitors (VMMs)), provide VMs 1208a and 1208b (one or more of which may be generally referred to as VMs 1208), and/or perform any of the functions, features and/or benefits described in relation with some embodiments described herein.
  • the virtualization layer 1206 may present a virtual operating platform that appears like networking hardware to the VMs 1208.
  • the VMs 1208 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer 1206.
  • Different embodiments of the instance of a virtual appliance 1202 may be implemented on one or more of the VMs 1208, and the implementations may be made in different ways.
  • Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.
  • a VM 1208 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine.
  • Each of the VMs 1208, and that part of the hardware 1204 that executes that VM, be it hardware dedicated to that VM and/or hardware shared by that VM with others of the VMs, forms a separate virtual network element.
  • a virtual network function is responsible for handling specific network functions that run in one or more VMs 1208 on top of the hardware 1204 and corresponds to the application 1202.
  • Hardware 1204 may be implemented in a standalone network node with generic or specific components. Hardware 1204 may implement some functions via virtualization. Alternatively, hardware 1204 may be part of a larger cluster of hardware (e.g. such as in a data center or CPE) where many hardware nodes work together and are managed via management and orchestration 1210, which, among others, oversees lifecycle management of applications 1202. In some embodiments, hardware 1204 is coupled to one or more radio units that each include one or more transmitters and one or more receivers that may be coupled to one or more antennas. Radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station.
  • some signaling can be provided with the use of a control system 1212 which may alternatively be used for communication between hardware nodes and radio units.
  • Although the computing devices described herein (e.g., UEs, network nodes, hosts) may include the illustrated combination of hardware components, other embodiments may comprise computing devices with different combinations of components. It is to be understood that these computing devices may comprise any suitable combination of hardware and/or software needed to perform the tasks, features, functions and methods disclosed herein.
  • Determining, calculating, obtaining or similar operations described herein may be performed by processing circuitry, which may process information by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
  • computing devices may comprise multiple different physical components that make up a single illustrated component, and functionality may be partitioned between separate components.
  • a communication interface may be configured to include any of the components described herein, and/or the functionality of the components may be partitioned between the processing circuitry and the communication interface.
  • non-computationally intensive functions of any of such components may be implemented in software or firmware, and computationally intensive functions may be implemented in hardware.
  • some or all of the functionality described herein may be provided by processing circuitry executing instructions stored in memory, which in certain embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium.
  • some or all of the functionality may be provided by the processing circuitry without executing instructions stored on a separate or discrete device-readable storage medium, such as in a hard-wired manner.
  • the processing circuitry can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry alone or to other components of the computing device, but are enjoyed by the computing device as a whole, and/or by end users and a wireless network generally.
  • a method performed by a second network node comprising: obtaining first datasets from a plurality of first network nodes; using the first datasets to generate a second dataset; and forwarding the second dataset to a third network node.
  • training the machine-learning model comprises training the first part of the machine-learning model, for implementation by a network node, using the first datasets.
  • training the machine-learning model comprises generating a nominal second part of the machine-learning model, for implementation by a user equipment.
  • training the machine-learning model comprises training the second part of the machine-learning model, for implementation by a user equipment, using the first datasets.
  • training the machine-learning model comprises generating a nominal first part of the machine-learning model, for implementation by a network node.
  • the first datasets comprise data samples that are labelled with one or more conditions or configurations of a user equipment or a network node associated with the data samples.
  • training the machine-learning model comprises training the machine-learning model for a first condition or configuration, based on data samples that are labelled with the first condition or configuration.
  • the second dataset is associated with the first condition or configuration.
  • the one or more first network nodes comprise one or more radio access network nodes, one or more wireless devices, and/or one or more core network nodes.
  • a method performed by a fourth network node comprising: receiving, from a third network node, a third dataset; and training a machine-learning model using the third dataset.
  • training the machine-learning model comprises training the second part of the machine-learning model, for implementation by a user equipment, using the third dataset.
  • training the machine-learning model comprises training the first part of the machine-learning model, for implementation by a network node, using the third dataset.
  • a method performed by a system in a communication network comprising: obtaining, at a plurality of second network nodes, respective pluralities of first datasets; training, at each second network node and based on a respective plurality of first datasets, a machine-learning model comprising first and second parts, wherein training the machine-learning model comprises training the first part of the machine-learning model and generating a nominal second part of the machine-learning model; outputting, by each of the second network nodes, a respective second dataset generated using the first datasets and the respective machine-learning model; aggregating, at a third network node, the plurality of second datasets to generate a third dataset; and training, at one or more fourth network nodes, a second part of the machine-learning model using the third dataset.
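  • A hedged end-to-end sketch of this system method, continuing the PyTorch conventions used in the earlier sketch: two second network nodes export (target, latent) second datasets, a third node aggregates them into a third dataset, and a fourth node trains the second (decoder) part on it. Tensor shapes and helper names are illustrative assumptions, not details fixed by the disclosure.

```python
import torch
import torch.nn as nn

def aggregate(second_datasets):
    """Third network node: concatenate second datasets from several second
    network nodes (possibly of different network vendors) into a third,
    vendor-agnostic dataset."""
    targets = torch.cat([t for t, _ in second_datasets])
    latents = torch.cat([z for _, z in second_datasets])
    return targets, latents

def train_decoder(third_dataset, dim_latent=8, dim_out=64, steps=100):
    """Fourth network node: fit the second (decoder) part of the model on
    the aggregated (target, latent) pairs."""
    targets, latents = third_dataset
    decoder = nn.Sequential(nn.Linear(dim_latent, 32), nn.ReLU(), nn.Linear(32, dim_out))
    optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(decoder(latents), targets)
        loss.backward()
        optimizer.step()
    return decoder

# Second datasets from two second network nodes (placeholder tensors).
ds_a = (torch.randn(128, 64), torch.randn(128, 8))
ds_b = (torch.randn(128, 64), torch.randn(128, 8))
decoder = train_decoder(aggregate([ds_a, ds_b]))
```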
  • a network node comprising: processing circuitry configured to cause the network node to perform any of the steps of any of the Group C embodiments; and power supply circuitry configured to supply power to the processing circuitry.
  • a system for a communication network comprising: a plurality of second network nodes, at least one of which is configured to perform any of the steps of any of embodiments 1 to 15; one or more third network nodes; and one or more fourth network nodes, at least one of which is configured to perform any of the steps of any of embodiments 16 to 23.
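
By way of illustration only, and not as part of the claimed embodiments, the system-level flow above (second network nodes training the first part of a two-part machine-learning model and generating a nominal second part, a third network node aggregating the resulting second datasets into a third dataset, and one or more fourth network nodes training the second part on that third dataset) can be sketched in a few lines of Python. All names, the toy "training" rules (a scale-fitting encoder and a least-squares gain decoder) and the data values below are hypothetical placeholders chosen only to keep the example self-contained and runnable; a real deployment would train, e.g., a CSI autoencoder with an actual ML framework.

    import statistics

    def train_first_part(first_datasets):
        # Hypothetical stand-in for real training: fit a scale factor so that
        # encoded samples have unit mean magnitude over the pooled first datasets.
        samples = [x for ds in first_datasets for x in ds]
        scale = 1.0 / (statistics.fmean(abs(x) for x in samples) or 1.0)
        return lambda x: x * scale  # trained first part (e.g. a NW-side encoder)

    def nominal_second_part():
        # Nominal (reference, untrained) second part, e.g. a pass-through decoder.
        return lambda z: z

    def second_network_node(first_datasets, condition):
        # Train the first part, generate a nominal second part, and emit a
        # second dataset of (latent, target) pairs labelled with a condition.
        encoder = train_first_part(first_datasets)
        _decoder = nominal_second_part()  # generated here, trained elsewhere
        samples = [x for ds in first_datasets for x in ds]
        return {"condition": condition,
                "pairs": [(encoder(x), x) for x in samples]}

    def third_network_node(second_datasets):
        # Aggregate the second datasets from all second nodes into a third dataset.
        return [pair for ds in second_datasets for pair in ds["pairs"]]

    def fourth_network_node(third_dataset):
        # "Train" the second part on the third dataset: fit a single gain g
        # minimising the squared error of g * latent versus the original sample.
        num = sum(z * t for z, t in third_dataset)
        den = sum(z * z for z, _ in third_dataset) or 1.0
        return lambda z: z * (num / den)  # trained second part (e.g. a UE-side decoder)

    # Toy end-to-end run: two second network nodes, each holding two first datasets.
    node_a_data = [[1.0, 2.0], [3.0]]
    node_b_data = [[0.5, 1.5], [2.5, 4.0]]
    second_sets = [second_network_node(node_a_data, "configuration-A"),
                   second_network_node(node_b_data, "configuration-A")]
    decoder = fourth_network_node(third_network_node(second_sets))
    latent, original = second_sets[0]["pairs"][0]
    print(original, decoder(latent))  # reconstruction approximates the original

The condition label carried in each second dataset mirrors the embodiments above in which data samples are labelled with a condition or configuration of a user equipment or network node, allowing a fourth network node to train one second part per condition.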

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A method performed by a second network node (1100) is disclosed. The method comprises: obtaining (702) first datasets from a plurality of one or more first network nodes (1000, 1100); using (704) the first datasets to generate a second dataset, the second dataset being labelled with a first identifier; and forwarding (706) the second dataset and the first identifier to a third network node (1100).
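
Purely as an illustrative aid, and not as subject matter of the application, the three numbered steps of the abstract can be mimicked by the short Python sketch below; the names (NodeStub, provide_dataset, receive, second_node_method) and the identifier value are hypothetical placeholders rather than terminology of the application.

    class NodeStub:
        # Hypothetical stand-in for a network node that can report a dataset
        # and receive a labelled dataset from a peer node.
        def __init__(self, data=None):
            self.data = data or []
            self.inbox = []

        def provide_dataset(self):
            return list(self.data)

        def receive(self, dataset, identifier):
            self.inbox.append((identifier, dataset))

    def second_node_method(first_nodes, third_node, first_id="dataset-id-1"):
        first_datasets = [n.provide_dataset() for n in first_nodes]  # step 702
        second_dataset = [s for ds in first_datasets for s in ds]    # step 704
        third_node.receive(second_dataset, first_id)                 # step 706

    first_nodes = [NodeStub([1, 2]), NodeStub([3])]
    third_node = NodeStub()
    second_node_method(first_nodes, third_node)
    print(third_node.inbox)  # [('dataset-id-1', [1, 2, 3])]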
PCT/SE2025/050178 2024-02-26 2025-02-26 Methods, apparatus and computer-readable media related to sharing datasets over a communication network Pending WO2025183613A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202463557842P 2024-02-26 2024-02-26
US63/557,842 2024-02-26

Publications (1)

Publication Number Publication Date
WO2025183613A1 true WO2025183613A1 (fr) 2025-09-04

Family

ID=94928156

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2025/050178 2024-02-26 2025-02-26 Methods, apparatus and computer-readable media related to sharing datasets over a communication network Pending WO2025183613A1 (fr)

Country Status (1)

Country Link
WO (1) WO2025183613A1 (fr)

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MODERATOR (APPLE): "Summary #1 on other aspects of AI/ML for CSI enhancement", vol. 3GPP RAN 1, no. Incheon, Korea; 20230522 - 20230526, 23 May 2023 (2023-05-23), XP052487521, Retrieved from the Internet <URL:https://ftp.3gpp.org/Meetings_3GPP_SYNC/RAN1/Docs/R1-2306043.zip R1-2306043.docx> [retrieved on 20230523] *
MODERATOR (QUALCOMM): "Summary#1 of General Aspects of AI/ML Framework", vol. RAN WG1, no. Toulouse, France; 20220822 - 20220826, 28 August 2022 (2022-08-28), XP052275810, Retrieved from the Internet <URL:https://ftp.3gpp.org/tsg_ran/WG1_RL1/TSGR1_110/Docs/R1-2207879.zip R1-2207879 Summary#1_9.2.1.docx> [retrieved on 20220828] *
TSUYOSHI SHIMOMURA ET AL: "Evaluation on AI/ML for CSI feedback enhancement", vol. RAN WG1, no. Online; 20230417 - 20230426, 7 April 2023 (2023-04-07), XP052293479, Retrieved from the Internet <URL:https://www.3gpp.org/ftp/TSG_RAN/WG1_RL1/TSGR1_112b-e/Docs/R1-2302904.zip R1-2302904 Evaluation on AIML for CSI feedback enhancement.docx> [retrieved on 20230407] *
YUANYUAN ZHANG ET AL: "Data Collection for Model Training at UE Side", vol. RAN WG2, no. Toulouse, FR; 20230821 - 20230825, 11 August 2023 (2023-08-11), XP052443861, Retrieved from the Internet <URL:https://www.3gpp.org/ftp/TSG_RAN/WG2_RL2/TSGR2_123/Docs/R2-2308151.zip R2-2308151_Data Collection for Model Training at UE Side.docx> [retrieved on 20230811] *

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 25710933

Country of ref document: EP

Kind code of ref document: A1