
WO2023211343A1 - Machine learning model feature set reporting - Google Patents

Machine learning model feature set reporting

Info

Publication number
WO2023211343A1
WO2023211343A1 (PCT/SE2023/050385)
Authority
WO
WIPO (PCT)
Prior art keywords
model
models
wireless device
network node
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/SE2023/050385
Other languages
English (en)
Inventor
Jung-Fu Cheng
Andres Reial
Daniel CHEN LARSSON
Henrik RYDÉN
Yufei Blankenship
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Telefonaktiebolaget LM Ericsson AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget LM Ericsson AB filed Critical Telefonaktiebolaget LM Ericsson AB
Priority to EP23796924.1A (published as EP4515463A1)
Publication of WO2023211343A1
Anticipated expiration: Critical
Current legal status: Ceased


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 36/00: Hand-off or reselection arrangements
    • H04W 36/0005: Control or signalling for completing the hand-off
    • H04W 36/0083: Determination of parameters used for hand-off, e.g., generation or modification of neighbour cell lists
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 8/00: Network data management
    • H04W 8/22: Processing or transfer of terminal data, e.g., status or physical capabilities
    • H04W 8/24: Transfer of terminal data
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Definitions

  • Embodiments of the present disclosure are directed to wireless communications and, more particularly, to machine learning model feature set reporting.
  • Example use cases include: using autoencoders for channel state information (CSI) compression to reduce the feedback overhead and improve channel prediction accuracy; using deep neural networks for classifying line-of-sight (LOS) and non-LOS (NLOS) conditions to enhance positioning accuracy; using reinforcement learning for beam selection at the network side and/or the user equipment (UE) side to reduce signaling overhead and beam alignment latency; and using deep reinforcement learning to learn an optimal precoding policy for complex multiple-input multiple-output (MIMO) precoding problems.
  • Another use case is limited collaboration between network nodes and UEs.
  • In this case, an ML model is operating at one end of the communication chain (e.g., at the UE side), but this node gets assistance from the node(s) at the other end of the communication chain (e.g., a next generation Node B (gNB)) for its AI model life cycle management (e.g., for training/retraining the AI model, model updates).
  • A third use case is joint ML operation between network nodes and UEs.
  • In this case, the AI model may be split with one part located at the network side and the other part located at the UE side.
  • The AI model includes joint training between the network and UE, and AI model life cycle management involves both ends of a communication chain.
  • FIGURE 1 is an illustration of training and inference pipelines, and their interactions within a model lifecycle management procedure.
  • The model lifecycle management typically consists of a training (re-training) pipeline, a deployment stage that makes the trained (or retrained) AI model part of the inference pipeline, an inference pipeline, and a drift detection stage that informs about any drifts in the model operations.
  • the training (re-training) pipeline may include data ingestion, data pre-processing, model training, model evaluation, and model registration.
  • Data ingestion refers to gathering raw (training) data from a data storage. After data ingestion, there may be a step that controls the validity of the gathered data.
  • Data pre-processing refers to feature engineering applied to the gathered data; e.g., it may include data normalization and possibly a data transformation required for the input data to the AI model.
  • Model training refers to the actual model training steps as previously outlined.
  • Model evaluation refers to benchmarking the performance against a model baseline. The iterative steps of model training and model evaluation continue until an acceptable level of performance (as previously exemplified) is achieved.
  • Model registration refers to registering the AI model, including any corresponding AI metadata that provides information on how the AI model was developed, and possibly AI model evaluation performance outcomes.
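The iterative train-then-evaluate loop described above can be sketched as follows. This is an illustrative sketch only, not part of the disclosed method; the `train_step`, `evaluate`, and `target` names are placeholders for whatever training routine and acceptance criterion a given AI model uses.

```python
def train_until_acceptable(train_step, evaluate, target, max_iters=100):
    """Iterate model training and model evaluation until the evaluated
    performance reaches the target level (or iterations run out)."""
    score = float("-inf")
    for i in range(max_iters):
        train_step()          # one training pass
        score = evaluate()    # benchmark against the baseline/target
        if score >= target:
            return i + 1, score
    return max_iters, score

# Toy stand-in model: each training step nudges accuracy upward by 0.1.
state = {"acc": 0.0}
def step(): state["acc"] += 0.1
def ev(): return state["acc"]

iters, final = train_until_acceptable(step, ev, target=0.75)
assert iters == 8
```

The same loop structure applies regardless of the actual model family; only the acceptance criterion (here a scalar threshold) changes.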
  • The deployment stage makes the trained (or re-trained) AI model part of the inference pipeline.
  • the inference pipeline may include data ingestion, data pre-processing, model operational, and data and model monitoring.
  • Data ingestion refers to gathering raw (inference) data from a data storage.
  • the data pre-processing stage is typically identical to corresponding processing that occurs in the training pipeline.
  • Model operational refers to using the trained and deployed model in an operational mode.
  • Data and model monitoring refers to validating that the inference data are from a distribution that aligns well with the training data, as well as monitoring model outputs for detecting any performance, or operational, drifts.
  • a drift detection stage informs about any drifts in the model operations.
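The data and model monitoring stage can be illustrated with a minimal drift check. The sketch below is hypothetical (the disclosure does not prescribe a drift metric); it simply flags drift when the inference-data mean deviates too far, in training-data standard deviations, from the training-data mean.

```python
import statistics

def detect_drift(training_data, inference_data, threshold=3.0):
    """Flag drift when the inference-data mean deviates from the
    training-data mean by more than `threshold` standard deviations."""
    mu = statistics.mean(training_data)
    sigma = statistics.stdev(training_data)
    z = abs(statistics.mean(inference_data) - mu) / sigma
    return z > threshold

# Training data centred near 0; the first inference batch is shifted far away.
train = [0.1, -0.2, 0.05, 0.0, -0.1, 0.15, -0.05, 0.2]
assert detect_drift(train, [5.0, 5.1, 4.9]) is True
assert detect_drift(train, [0.0, 0.1, -0.1]) is False
```

A production monitor would typically use a proper distribution test rather than a mean shift, but the role in the pipeline is the same: trigger re-training when inference data stop aligning with the training data.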
  • each capability parameter is defined per UE, per duplex mode (frequency division duplex (FDD)/time division duplex (TDD)), per frequency range (FR1/FR2), per band, and/or per band combinations because the UE may support different functionalities depending on those features (see TS 38.306).
  • Capability parameters are organized per protocol layer, covering Service Data Adaption Protocol (SDAP), Packet Data Convergence Protocol (PDCP), radio link control (RLC), medium access control (MAC), and physical layer parameters.
  • The UE capabilities in NR do not rely on UE categories: UE categories associated with fixed peak data rates are defined only for marketing purposes and are not signalled to the network. Instead, the peak data rate for a given set of aggregated carriers in a band or band combination is the sum of the peak data rates of the individual carriers, where the peak data rate of each individual carrier is computed according to the capabilities supported for that carrier in the corresponding band or band combination.
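As a small illustration of that rule, the NR peak rate is simply the sum of the per-carrier peak rates; the numbers below are hypothetical.

```python
def total_peak_data_rate(per_carrier_rates):
    """NR peak rate for a set of aggregated carriers: the sum of the
    per-carrier peak rates (no fixed per-category rate as in LTE)."""
    return sum(per_carrier_rates)

# Hypothetical per-carrier peak rates in Mbps for a 3-carrier band combination.
assert total_peak_data_rate([700.0, 700.0, 350.0]) == 1750.0
```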
  • Feature Set: for each block of contiguous serving cells in a band, the set of features supported thereon is defined in a Feature Set (FS).
  • the UE may indicate several Feature Sets for a band (also known as feature sets per band) to advertise different alternative features for the associated block of contiguous serving cells in that band.
  • The two-dimensional matrix of feature sets for all the bands of a band combination (i.e., all the feature sets per band) is called a feature set combination.
  • In a feature set combination, the number of feature sets per band is equal to the number of band entries in the corresponding band combination, and all feature sets per band have the same number of feature sets.
  • Each band combination is linked to one feature set combination. This is depicted in FIGURE 2.
  • The UE reports its capabilities individually per carrier. Those capability parameters are sent in feature sets per component carrier and are signalled in the corresponding FSs (per band), i.e., for the corresponding block of contiguous serving cells in a band. The capability applied to each individual carrier in a block is agnostic to the order in which the carriers are signalled in the corresponding FS.
  • the gNB can request the UE to provide NR capabilities for a restricted set of bands.
  • the UE can skip a subset of the requested band combinations when the corresponding UE capabilities are the same.
  • the UE may provide an ID in non-access stratum (NAS) signalling that represents its radio capabilities for one or more radio access technologies (RATs) to reduce signalling overhead.
  • the ID may be assigned either by the manufacturer or by the serving public land mobile network (PLMN).
  • The manufacturer-assigned ID corresponds to a pre-provisioned set of capabilities. In the case of the PLMN-assigned ID, assignment takes place in NAS signalling.
  • The access and mobility management function (AMF) stores the UE Radio Capability uploaded by the gNB, as specified in TS 23.501.
  • the gNB can request the UE capabilities for RAT-Types NR, EUTRA, UTRA-FDD.
  • The UTRAN capabilities, i.e., the INTER RAT HANDOVER INFO, include START-CS, START-PS, and "predefined configurations", which are "dynamic" IEs.
  • The gNB always requests the UE UTRA-FDD capabilities before handover to UTRA-FDD.
  • The gNB does not upload the UE UTRA-FDD capabilities to the AMF.
  • a UE may support multiple ML-model based functionalities and report the ML model support to a network node.
  • An ML execution unit/resource in UE hardware e.g., a dedicated hardware accelerator or a general-purpose computational resource, may be dimensioned so that it can execute any one of the supported models at the designed performance level. It may however be prohibitive from the cost/size perspective to dimension the ML execution resource to run all supported ML models/functionalities in parallel at full performance.
  • The existing UE capability feature set framework, designed for traditional non-ML capability reporting, enables the UE to indicate different capabilities for different bands and/or carrier combinations to account for different computation/processing requirements, e.g., depending on the operational bandwidth. However, for each band, all defined capabilities are assumed to be available in parallel. That framework is therefore not suitable for capturing ML model interdependencies.
  • Certain aspects of the present disclosure and their embodiments may provide solutions to these or other challenges.
  • particular embodiments include a method for a user equipment (UE) to report support for ML model-based functionality/capability where not all reported ML-based functionalities need to be supported concurrently.
  • The method facilitates indicating subsets and parameter configurations for combinations of reported ML-based functionalities that can be configured simultaneously, e.g., based on: (a) subsets of ML models/functionalities that can operate simultaneously; (b) bandwidth that can be supported by the ML models/functionalities; (c) number of antenna ports, number of uplink beams and/or downlink beams, etc.
  • the UE may provide the above ML model feature set information to the network, e.g., as part of the UE capability signalling procedure at connection establishment, during a related ML-model support signalling procedure, or via dedicated signalling after updating one or more ML models while connected to the network.
  • the ML model feature set may be determined by, or specific to, a version number of the one or more models.
  • the ML model feature set information may refer to ML models installed (stored) in the UE, available from the UE vendor (e.g., on demand), or available from the network (e.g., upon network command).
  • the network may use the ML model feature set information to select a combination of configurations for one or more ML models.
  • the UE may then be provided a configuration to operate the combination of models so that it is consistent with its implementation constraints provided in the feature set reporting.
  • a method in a UE for configuration of simultaneous operation of ML models comprises signalling to a network node a ML model feature set information comprising one or more constraints for operating one or more ML models/functionalities and receiving from the network node a ML model configuration for one or more ML models/functionalities based on the feature set information.
  • the constraints comprise one or more of: a subset of ML models to be operated concurrently, legacy functionality to be operated concurrently, operation bandwidth, number of antennas or ports, ML model output update rate, ML model output resolution, etc.
  • the constraints comprise one or more sets of ML models, wherein ML models in each set utilize mutually blocking computation resources.
  • the one or more ML models are specified in terms of their ML model version.
  • the method further comprises receiving from the network node a list of ML models available at the network node and determining the ML model feature set information additionally based on the list.
  • a method in a network node for configuration of simultaneous operation of ML models in a UE comprises receiving from the UE ML model feature set information comprising one or more constraints for operating one or more ML models/functionalities depending on an operational context, determining a ML model configuration for one or more ML models/functionalities based on the feature set information, and signalling to the UE the ML model configuration.
  • a method is performed by a wireless device for concurrent operation of ML models.
  • the method comprises: signaling to a network node ML model feature set information comprising one or more constraints for operating one or more ML models; receiving from the network node a ML model configuration for one or more ML models based on the ML model feature set information; and operating one or more ML models based on the received ML model configuration.
  • the constraints comprise an indication of a subset of ML models that are able to be operated concurrently.
  • the constraints may comprise an indication of a subset of ML models to be operated concurrently with non-ML functionality of the wireless device.
  • the constraints may comprise one or more of: a complexity level associated with a ML model; a total complexity level associated with two or more ML models; a performance level associated with a ML model; a model output resolution for a ML model; and a model accuracy or confidence threshold for a ML model.
  • the constraints may be associated with a radio link configuration comprising one or more of: bandwidth; number of component carriers; subcarrier spacing; frequency range; number of antenna ports; and number of reference signal ports.
  • the constraints may comprise an indication of one or more sets of ML models, wherein ML models in each set use mutually blocking computation resources.
  • the constraints may comprise an indication of one or more of: an energy consumption value associated with each ML model; a computational resource consumption value associated with each ML model; and a memory resource consumption value associated with each ML model.
  • signaling to the network node ML model feature set information comprises signaling one or more of an international mobile equipment identity (IMEI) of the wireless device, a partial IMEI of the wireless device, a model number of the wireless device, and a chipset identifier of the wireless device.
  • the ML model configuration comprises configuration of ML models of a first sequence of execution and ML models of a second sequence of execution that may be executed concurrently with the ML models of the first sequence.
  • the ML model configuration may comprise configuration of ML models each associated with a ML processing unit value and a sum of the ML processing unit values for each configured ML model is less than a threshold value.
  • a wireless device comprises processing circuitry operable to perform any of the methods of the wireless device described above.
  • In particular embodiments, a computer program product comprises a non-transitory computer readable medium storing computer readable program code, the computer readable program code operable, when executed by processing circuitry, to perform any of the methods performed by the wireless device described above.
  • a method is performed by a network node for configuring a wireless device for concurrent operation of ML models.
  • the method comprises receiving from a wireless device ML model feature set information comprising one or more constraints for operating one or more ML models and transmitting an ML model configuration based on the ML model feature set information to the wireless device.
  • the method further comprises determining the ML model configuration for the wireless device for one or more ML models based on the ML model feature set information.
  • determining the ML model configuration comprises determining a configuration of ML models of a first sequence of execution and ML models of a second sequence of execution that may be executed concurrently with the ML models of the first sequence.
  • In particular embodiments, each of the ML models is associated with a ML processing unit value, and determining the ML model configuration comprises determining a subset of ML models where a sum of the ML processing unit values associated with each ML model in the subset is less than a threshold value.
  • each of the ML models is associated with a priority value, and determining the ML model configuration is based on the priority value.
  • In particular embodiments, determining the ML model configuration is based on network information comprising one or more of: current prioritized key performance indicators (KPIs) in a cell; network load information; number of active users in the network; network coverage conditions; mobility conditions; and number of beams configured.
  • a network node comprises processing circuitry operable to perform any of the network node methods described above.
  • Another computer program product comprises a non-transitory computer readable medium storing computer readable program code, the computer readable program code operable, when executed by processing circuitry, to perform any of the methods performed by the network nodes described above.
  • Certain embodiments may provide one or more of the following technical advantages. For example, by enabling the UE to inform the network about ML model configurations that may be configured concurrently in different operating scenarios, the ML model feature set reporting mechanism increases the utility and usability of the ML models in many typical scenarios where not all available ML-based functionalities are simultaneously needed, and/or avoids the need for excessive dimensioning of ML computational support in the UE.
  • FIGURE 1 is an illustration of training and inference pipelines, and their interactions within a model lifecycle management procedure;
  • FIGURE 2 illustrates feature set combinations;
  • FIGURE 3 is a flowchart illustrating the high-level principle of the user equipment (UE) and network aspects of particular embodiments;
  • FIGURE 4 illustrates an example communication system, according to certain embodiments;
  • FIGURE 5 illustrates an example UE, according to certain embodiments;
  • FIGURE 6 illustrates an example network node, according to certain embodiments;
  • FIGURE 7 illustrates a block diagram of a host, according to certain embodiments;
  • FIGURE 8 illustrates a virtualization environment in which functions implemented by some embodiments may be virtualized, according to certain embodiments;
  • FIGURE 9 illustrates a host communicating via a network node with a UE over a partially wireless connection, according to certain embodiments;
  • FIGURE 10 illustrates a method performed by a wireless device, according to certain embodiments; and
  • FIGURE 11 illustrates a method performed by a network node, according to certain embodiments.
  • particular embodiments include a method for a UE to report support for ML model-based functionality/capability where not all reported ML-based functionalities need to be supported concurrently.
  • the method facilitates indicating subsets and parameter configurations for combinations of reported ML-based functionalities that can be configured simultaneously.
  • FIGURE 3 is a flowchart illustrating the high-level principle of the UE and network aspects of particular embodiments.
  • In a first step, the UE (e.g., UE 200 described in more detail with respect to FIGURES 4 and 5) signals ML model feature set information to the network node (e.g., network node 300 described in more detail with respect to FIGURES 4 and 5).
  • The signaling may comprise radio resource control (RRC) signaling, for example, as part of the UE capability reporting framework at connection establishment or at connection reconfiguration.
  • The UE may use the order in which each available ML model is reported as an index to refer to the models, and the feature set constraints may be provided in conjunction with model availability reporting.
  • the feature set information may be provided after the supported ML models have been reported and model IDs have been assigned by the network. The model IDs may then be used to refer to the models when conveying the constraints.
  • the ML models included in the feature set information may be referred to using specific model versions, where simultaneous operation criteria may differ between different versions of the same model.
  • the ML model feature set information may refer to ML models that are currently installed (stored) in the UE, available from the UE vendor (e.g., downloadable on demand), and/or available from the network (e.g., downloadable upon network command).
  • Step 120 includes ML model configuration.
  • the UE receives from the network node configuration for one or more ML models referred to in the feature set information.
  • the configuration may be provided via RRC signaling.
  • the configuration from the network is aligned with the constraints provided and thus fits the hardware processing limits of the UE.
  • the UE subsequently operates related functionalities using the ML model configuration.
  • Step 210 includes ML model configuration determination.
  • The network node (e.g., a gNB) identifies a ML model combination and/or configuration that best, or sufficiently, matches the current prioritized key performance indicators (KPIs) in the cell.
  • The decision on which ML model feature set to configure may depend on network information, because the need for UEs to assist the network with predicted information is based on the current situation in the network.
  • The network information may comprise load information, for example the number of active users in nearby cells. Active users in nearby cells degrade performance by creating interference in a UE's serving cell; therefore, worse and less deterministic channel conditions may be expected when the interference is high.
  • The network may then benefit from configuring UEs to provide predictions (via such ML model operation) of signal to interference and noise ratio (SINR) for certain reference signals or time-frequency resources, for example to improve link adaptation.
  • the network information may comprise a number of active users in the current cell.
  • The complexity of radio resource management (RRM) increases when the load is higher, for example in beam management, scheduling, and mobility.
  • Scarce resources, for example for beam measurements at high load, can motivate activating a certain beam prediction model at the device, for example a more complex model than would be activated if the network were less loaded.
  • The network information may comprise inappropriate coverage conditions, such as the presence of excessively overlapping cell borders or the detection of insufficient overlap between neighboring cells (implying a coverage hole). The presence of a large coverage hole may motivate configuring a UE to use an ML model capable of performing signal quality forecasts.
  • The network information may comprise suboptimal mobility conditions, for example excessive reception of radio link failure (RLF) reports from UEs indicating mobility failures or failures during the addition of new cells in multi-connectivity, or reception of successful handover reports indicating sub-optimal execution of mobility procedures. This may motivate configuring a UE to perform signal quality forecasts.
  • The network information may comprise the number of downlink beams configured for the UE to monitor, including the number of different synchronization signal block (SSB) beams (reflected by the number of SSB indices) and the number of channel state information reference signal (CSI-RS) beams.
  • If the UE needs to monitor a relatively large number of downlink beams and the downlink/uplink beam pair is prone to failure, then it is justified to activate ML-based beam management.
  • Other candidates are models aimed at improving robustness, for example models capable of forecasting and repairing an RLF, predicting and repairing a beam failure, or maintaining stringent service requirements during a handover (e.g., establishing a connection with a target cell while maintaining the connection with the existing cell).
  • the ML model feature set information describes constraints for which combinations of models, other functionality, and other UE configurations are supported by the UE for simultaneous operation.
  • This description assumes that the UE has numerous operational options, such as available (e.g., stored or possible to download) ML models A, B, C, D (e.g., for CSI reporting, beam management (BM), positioning, etc.).
  • Another operational option is multiple performance levels for the available models, for example multiple levels for A: A0 (full), A1, A2, A3, and so on. Similarly, B, C, and D may each have their own multiple levels (e.g., model output update (inference) rate, model output resolution, model support for providing an accuracy/confidence interval or uncertainty value associated with the output, etc.).
  • The measured or estimated ML model/algorithm performance, for example in terms of accuracy or precision, may be represented with average values over a certain interval of time, standard deviation, maximum or minimum value, etc.
  • Another performance level may be the estimated energy consumption of executing the ML-model.
  • Some operational options may be model execution variations that do not affect drift or model-to-environment matching properties, but may reduce computational complexity by lowering model output resolution in relevant dimensions.
  • Another operational option is legacy (or non-ML) functionality a, b, c, d (e.g., joint/IC receiver, non-orthogonal multiple access (NOMA) receiver, high-resolution digital predistortion (DPD) support, etc.).
  • The non-ML functionality may also relate to functionality that provides the same feature as a specific ML model or ML functionality. For example, there may be limitations on how the UE is able to estimate CSI based on a non-ML report and an ML report, i.e., the UE may not always be able to construct such reports.
  • Another operational option is reference signal (RS) patterns for channel estimation.
  • Another operational option is radio link configuration <1>, <2>, <3>, which may include a combination of one or more of bandwidth, number of component carriers (CCs), subcarrier spacing (SCS)/frequency range (FR), number of antennas/ports, etc., or parameter limits, e.g., bandwidth ≤ 100 MHz, number of CCs ≤ 2, number of CSI-RS ports ≤ 32, etc.
  • the feature set constraints for concurrent operation may be expressed in a number of ways. Some examples are provided below.
  • Some embodiments include simultaneously supported ML models.
  • the UE may indicate in the ML model feature set report, e.g., due to limited ML acceleration capacity, any one or more of the following.
  • subset0 = {A, B}
  • subset1 = {A, C}
  • subset2 = {C, D, E}
  • the models in the subset can be activated concurrently.
  • two different subsets cannot be activated concurrently.
  • The UE can be configured to migrate from subset0 to subset1 by deactivating B and activating C, but subset1 (i.e., A, C) cannot be activated while subset0 (A, B) is active.
  • The UE may indicate reduced resolution for some models when other models are running, e.g.: A0 only if model A is running alone, otherwise A1; A0 only if model D is not running, otherwise A2; etc.
  • the UE may indicate models that cannot run when another model is running: e.g., if A then not E, if B then not C, etc.
  • the UE may indicate a maximum number of concurrent models, e.g., 2, 4, etc.
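The kinds of constraint listed above (supported subsets, pairwise exclusions, a concurrency cap) can be combined into a single validity check. The encoding below is a hypothetical illustration, not a signalling format from the disclosure; the example subsets and exclusion pairs reuse the models A-E named above, and the cap of 2 is chosen for the example.

```python
# Hypothetical encoding of feature-set constraints reported by a UE.
SUPPORTED_SUBSETS = [{"A", "B"}, {"A", "C"}, {"C", "D", "E"}]
EXCLUSIONS = [("A", "E"), ("B", "C")]  # "if A then not E", "if B then not C"
MAX_CONCURRENT = 2

def combination_allowed(models):
    """True if `models` fits inside one reported subset, violates no
    pairwise exclusion, and respects the concurrency cap."""
    models = set(models)
    if len(models) > MAX_CONCURRENT:
        return False
    if not any(models <= subset for subset in SUPPORTED_SUBSETS):
        return False
    return all(not ({x, y} <= models) for x, y in EXCLUSIONS)

assert combination_allowed(["A", "B"]) is True
assert combination_allowed(["A", "C"]) is True
assert combination_allowed(["B", "C"]) is False    # no subset contains B+C, and B excludes C
assert combination_allowed(["C", "D", "E"]) is False  # exceeds the cap of 2
```

The network can run the same check before configuring the UE, ensuring any activated combination stays within the reported limits.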
  • Some embodiments include dependency on legacy functionality.
  • The UE may indicate such dependencies in the ML model feature set report, e.g., due to limited capacity of shared computation resources. For example: model A may run only if functionality b is not configured; model A may run at resolution A2 if functionality b is configured; and/or no ML models may run if functionality b or c is configured; etc.
  • Some embodiments include dependency on computational resource sharing.
  • The UE may indicate in the ML model feature set report, e.g., one or more sets of ML models, wherein ML models in each set use mutually blocking computation resources. That is, ML models in the same set cannot be executed simultaneously, but ML models from different sets can be executed simultaneously.
  • the UE may indicate in the ML model feature set report, e.g., one or more sets of ML models with the associated blocking durations. That is, for each ML model in a set, the report indicates the duration during which other ML models in the same set cannot be executed.
  • This type of report enables the network and the UE to determine combinations and orders of ML model execution.
  • For example, the report may indicate that CSI compression has a blocking duration of six orthogonal frequency division multiplexing (OFDM) symbols.
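A blocking-duration report of this kind lets either side check a proposed execution order. The sketch below is illustrative only: the model names, the per-model blocking durations in OFDM symbols, and the schedule representation (model name mapped to start symbol) are all hypothetical.

```python
# Hypothetical blocking report: models sharing one blocking set, each with
# the number of OFDM symbols during which the others cannot start.
BLOCKING_SET = {"csi_compression": 6, "beam_prediction": 4}

def schedule_valid(schedule):
    """`schedule` maps model name -> start symbol. Within a blocking set,
    no model may start while another model's blocking duration is running."""
    runs = sorted((start, BLOCKING_SET[m]) for m, start in schedule.items())
    for (s1, d1), (s2, _) in zip(runs, runs[1:]):
        if s2 < s1 + d1:  # next model starts inside the previous block
            return False
    return True

# CSI compression blocks symbols 0-5, so beam prediction may start at 6.
assert schedule_valid({"csi_compression": 0, "beam_prediction": 6}) is True
assert schedule_valid({"csi_compression": 0, "beam_prediction": 3}) is False
```

This is the kind of check that enables the network and the UE to determine valid combinations and orders of ML model execution, as the preceding bullet describes.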
  • As further examples, model A and model B may run concurrently only if feature c is not active; model A can only run concurrently with B if the receiver configuration is <2>, or if the receiver configuration is not <3>; etc.
  • The feature set information may be provided using one or more tables/lists of permitted or prohibited model combinations based on ML model indices, and/or an ASN.1-like structure for describing dependencies.
  • the network node uses the feature set information to determine a best or a sufficiently well-performing combination of ML features and configurations, where performance criteria may include link performance, cell performance, energy consumption, etc. considerations.
  • the gNB may first determine a priority list for functionalities to be supported via ML, e.g., based on expected performance or resource usage improvement. The gNB may then select a subset that contains as many ML models in the top of the list as possible, based on the feature set information provided by the UE. Alternatively, the network may select the largest ML model feature set that does not exclude one or more most highly prioritized functionalities.
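The gNB-side greedy selection described above can be sketched as follows. The priority list, the forbidden-pair encoding, and the cap are illustrative assumptions standing in for the UE-reported feature set information.

```python
# Greedy selection: walk a functionality priority list (highest first)
# and keep each ML model if the resulting combination is still permitted
# by the UE-reported feature set constraints.

PRIORITY = ["A", "B", "C", "D"]            # assumed gNB priority order
FORBIDDEN_PAIRS = {frozenset({"A", "C"})}  # assumed UE-reported exclusion
MAX_MODELS = 3                             # assumed UE concurrency cap

def select_models(priority=PRIORITY, max_models=MAX_MODELS):
    chosen = []
    for m in priority:
        candidate = chosen + [m]
        pairs = {frozenset({x, y}) for x in candidate for y in candidate if x != y}
        if len(candidate) <= max_models and not (pairs & FORBIDDEN_PAIRS):
            chosen = candidate
    return chosen
```

With these assumed inputs, model C is skipped because it conflicts with higher-priority model A, yielding the set A, B, D.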
  • a feature set refers to a set of ML model-based functionalities that are operated together, regardless of whether they are presented by the UE as sets or via other methods, e.g., by indicating incompatibility or inability to run concurrently.
  • the network node uses the feature set information to determine configuration and sequences of ML model execution.
  • the network configures the ML models belonging to the same sequence to execute in sequential order.
  • the network configures the ML models belonging to different sequences to execute in parallel.
	• each model has a complexity indicator associated with it, and the UE has a cap on the total complexity of the ML models with which it may be configured, which it may simultaneously operate, or which may be active.
  • ML Model A has the complexity number 10, ML Model B 20 and ML Model C 5.
	• the UE indicates a cap of 30, which means that the UE can either be configured/active/operate with ML Model A+B, ML Model B+C or ML Model A+C, but not ML Model A+B+C.
	• the complexity number may be expressed in a different manner, for example, as negative numbers, where the limit is exceeded when the sum reaches 0. If the UE is configured beyond the limit, the UE may choose which ML models it operates by itself and does not need to follow the network instruction. A result is that the reported outputs of the ML models to the network may only exist for the models that are operated, or the UE only reports applicable values. Applicable values may be, e.g., a HARQ-ACK reported as ACK, a valid CSI report (e.g., a CQI that is not out of range), or a valid RSRP/RSRQ value (e.g., not an out-of-range value).
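The numeric example above (complexities 10, 20, 5 and a cap of 30) can be verified directly:

```python
# Numeric sketch of the complexity-cap example: each model carries a
# complexity number and the UE reports a total cap of 30.
from itertools import combinations

COMPLEXITY = {"A": 10, "B": 20, "C": 5}
CAP = 30

def allowed(models):
    """True if the combined complexity fits the UE-reported cap."""
    return sum(COMPLEXITY[m] for m in models) <= CAP

# All two-model combinations fit the cap; the three-model set does not.
valid_pairs = [set(c) for c in combinations(COMPLEXITY, 2) if allowed(c)]
```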
  • the complexity indicator reporting further comprises reporting a complexity indicator per reference duration.
  • a nonlimiting example of a reference duration is a slot duration for a reference sub-carrier spacing as defined in NR specifications.
  • a nonlimiting example of a reference sub-carrier spacing is 60 kHz for frequency range 1.
  • Another nonlimiting example of a reference duration comprises seven OFDM symbols for a reference sub-carrier spacing as defined in NR specifications.
  • Yet another nonlimiting example of a reference duration comprises four OFDM symbols for a reference sub-carrier spacing as defined in NR specifications.
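The reference durations above can be put in concrete terms, assuming the NR numerology where the slot duration is 1 ms / 2^μ and a slot carries 14 OFDM symbols with normal cyclic prefix:

```python
# Worked example of reference durations under assumed NR numerology.

def slot_duration_ms(scs_khz):
    """Slot duration for a given sub-carrier spacing (normal CP)."""
    mu = {15: 0, 30: 1, 60: 2, 120: 3}[scs_khz]
    return 1.0 / (2 ** mu)

def symbols_duration_ms(n_symbols, scs_khz):
    """Duration of n OFDM symbols, assuming 14 symbols per slot."""
    return slot_duration_ms(scs_khz) * n_symbols / 14

ref_slot = slot_duration_ms(60)          # 0.25 ms for 60 kHz SCS
ref_7sym = symbols_duration_ms(7, 60)    # 0.125 ms (half a slot)
ref_4sym = symbols_duration_ms(4, 60)    # 1/14 ms, roughly 0.071 ms
```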
  • Some embodiments comprise reporting embodiments with dependency on computational resource sharing. That is, the model complexity indicators further comprise indicating complexity per ML model set with a per ML model set complexity limit. ML model combinations from the same ML model set can be executed in parallel or in sequence so long as the per ML model set complexity limit is not exceeded. ML models from different ML model sets may be executed in parallel.
  • the ML processing capacity is measured in ML processing units (MLPU).
  • Each model is assigned a number of MLPU.
	• the number of MLPU assigned may depend on the storage burden and/or computation complexity of the ML model. Different versions of a given ML model may be assigned the same or different MLPU, depending on whether the different versions have different storage burden and/or computation complexity levels.
	• For example, for a Type II CSI report (i.e., a more complex scenario than a Type I CSI report), 2 MLPU are assigned.
  • the MLPU to assign may be provided in several possible ways.
  • the MLPU of the ML model may be reported by the UE.
  • different UE may report different MLPU value.
	• the range (MLPU_Beam_Selection_min, MLPU_Beam_Selection_max) may be defined in the specification, so that the UE has implementation flexibility as long as MLPU_Beam_Selection_min ≤ MLPU_Beam_Selection_report ≤ MLPU_Beam_Selection_max, where MLPU_Beam_Selection_report is the MLPU for the beam selection functionality as reported by the given UE.
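The range check above amounts to a simple bounds validation of the UE's report. The specific bound values below are illustrative assumptions, not spec-defined numbers:

```python
# Hypothetical spec-defined MLPU bounds for the beam selection
# functionality; a UE-reported MLPU value must fall within them.

MLPU_BEAM_SELECTION_MIN = 1   # assumed lower bound
MLPU_BEAM_SELECTION_MAX = 4   # assumed upper bound

def report_valid(mlpu_reported):
    """True if the UE-reported MLPU lies within the allowed range."""
    return MLPU_BEAM_SELECTION_MIN <= mlpu_reported <= MLPU_BEAM_SELECTION_MAX
```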
	• the MLPU of the ML model for the same functionality (e.g., downlink beam selection) is specified independent of the UE implementation.
  • a maximum total MLPU supported by the UE may be reported to the network by the UE, where the report may be part of UE capability reporting.
	• the UE may have one or more of its ML models activated simultaneously, as long as the sum of MLPU of these ML models does not exceed the maximum total.
  • the network may decide the set of ML models (including the version of each ML model, if multiple versions are possible) to activate for the UE, together with associated configuration parameters and assistance information provisioning.
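The network-side decision above, including version selection per ML model, can be sketched as a budget search. The functionalities, version names, and MLPU numbers are illustrative assumptions:

```python
# Pick one version per functionality so that the total MLPU stays
# within the UE-reported maximum.
from itertools import product

VERSIONS = {  # functionality -> {version: MLPU}, assumed values
    "beam_selection": {"v1": 1, "v2": 2},
    "csi_compression": {"v1": 2, "v2": 3},
}
MAX_TOTAL_MLPU = 4  # assumed UE-reported maximum

def feasible_activations():
    """All version combinations whose MLPU sum fits the UE budget."""
    out = []
    for combo in product(*[v.items() for v in VERSIONS.values()]):
        if sum(mlpu for _, mlpu in combo) <= MAX_TOTAL_MLPU:
            out.append({f: ver for f, (ver, _) in zip(VERSIONS, combo)})
    return out
```

The network could then pick among the feasible activations, e.g., the one maximizing expected performance.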
	• a machine learning model (ML model, AI model, AI/ML model) is a procedure representing a mathematical algorithm, which through training over a set of data, is parameterized such that the parameters and hyperparameters define the model.
  • the hyperparameters define the model structure and model behavior, and are set manually.
	• hyperparameters include: the number of layers, the number of neurons in each layer, each layer’s activation functions, etc. Parameters are those obtained through learning or training with the data set, for example, the weights at each neuron.
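The hyperparameter/parameter distinction above can be made concrete with a minimal sketch: the structure (layer sizes, activation) is fixed by hand, while the weights are what training would adjust.

```python
# Minimal sketch: hyperparameters fix the model structure, parameters
# (weights) are the values that training would learn. Pure-Python for
# illustration only; no real training is performed.
import random

class TinyMLP:
    def __init__(self, layer_sizes, activation="relu"):
        # Hyperparameters: number of layers, neurons per layer, activation.
        self.layer_sizes = layer_sizes
        self.activation = activation
        # Parameters: one weight matrix per pair of adjacent layers,
        # initialized randomly and updated during training.
        self.weights = [
            [[random.random() for _ in range(n_out)] for _ in range(n_in)]
            for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
        ]

model = TinyMLP(layer_sizes=[4, 8, 2])  # two weight matrices: 4x8 and 8x2
```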
  • An ML-model may correspond to a function that receives one or more inputs (e.g., measurements) and provides as outcome one or more prediction(s) of a certain type.
	• an ML-model may correspond to a function receiving as input the measurement of a reference signal at time instance t0 (e.g., transmitted in beam-X) and providing as outcome the prediction of the reference signal at time t0+T.
	• an ML-model may correspond to a function receiving as input the measurement of a reference signal X (e.g., transmitted in beam-x), such as an SSB whose index is ‘x’, and providing as outcome the prediction of other reference signals transmitted in different beams, e.g., reference signal Y (e.g., transmitted in beam-y), such as an SSB whose index is ‘y’.
	• Another example is an ML model to aid in CSI estimation. In such a setup, the ML-model will be a specific ML-model for a UE and an ML-model within the network side. Jointly, both ML-models provide a joint network functionality. The function of the ML-model at the UE may be to compress a channel input, and the function of the ML-model at the network side may be to decompress the received output from the UE.
  • Some embodiments may apply similar procedures for positioning wherein the input may be a channel impulse in a form related to a certain reference point in time.
  • the purpose on the network side is to detect different peaks within the impulse response that correspond to different reception directions of radio signals at the UE side.
	• Another way is to input multiple sets of measurements into an ML network and based on that derive an estimated position.
  • Another ML-model is an ML-model to aid the UE in channel estimation or interference estimation for channel estimation.
	• the channel estimation may, for example, be for the physical downlink shared channel (PDSCH) and be associated with a specific set of reference signal patterns that are transmitted from the network to the UE.
  • the ML-model may be part of the receiver chain within the UE and may not be directly visible within the reference signal pattern as such that is configured/scheduled to be used between the network and UE.
	• Another example of an ML-model for CSI estimation is to predict a suitable CQI, PMI, RI or similar value into the future.
  • the future may be a certain number of slots after the UE has performed the last measurement or targeting a specific slot in time within the future.
  • the report may or may not be part of the UE capability report, where the ML model support includes the supported ML-model type and version(s) supported for each of them. How the UE reports ML-model support may vary for each model type (functional area and configuration area).
  • the ML-model types are reported with one mechanism and the ML-model version with another mechanism.
  • the supported ML-model types may be reported within the UE capabilities.
	• the ML-model versions may be reported within a separate framework outside the UE capabilities.
  • the network may identify the supported functionality, and accordingly configure the specific ML-model identified by at least a ML-model ID, which subsequently also identifies a functional area and configuration area when the model is referred to using its model ID.
	• the UE may operate the ML-model in question until it is de-configured, or deactivated, or expired (i.e., no longer active).
  • the configuration may be a two-step mechanism wherein the UE is configured with a specific ML-model, including the model ID, by a first message, and the ML-model is later activated by a second message.
  • the UE indicates support for a specific ML-model with or without an ID and a specific version of that ML-model.
  • the network assigns a second ID to that ML-model in the configuration step of the ML-model.
	• This second ID temporarily identifies the ML-model between the network and the UE only.
  • a purpose is to minimize signaling and more easily address the specific ML-model.
	• the steps of model version reporting by the UE and model ID provision may be performed multiple times during an ongoing connection/session with a UE. This occurs, e.g., when the UE has obtained an updated model for a certain functionality (e.g., downloaded or locally retrained) and a new model ID is provided to distinguish the new version from the previous version.
	• When the UE connects, including in legacy operation, the UE provides the network with its international mobile equipment identity (IMEI), which defines the model/chipset and its specific hardware configuration.
  • the IMEI may encapsulate implemented ML model type and/or version info for one or more implemented models.
  • the IMEI does not contain additional software (SW) version info.
	• the core network may request a SW version number (IMEI SV) if needed. In some embodiments, this currently 2-digit (0-99) value may include the above ML model type and version info.
  • the IMEI SV field range may be extended to a larger number of digits, e.g.,
	• the UE transmits the IMEI (or IMEI SV) and, upon reception, the network determines the one or more ML model types and/or ML model versions the UE supports, e.g., by the network retrieving the ML model info from a look-up table based on the IMEI or IMEI SV, in a server hosted by the UE and/or the device vendor.
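The look-up described above can be sketched as a table mapping IMEI SV values to supported ML model info. The table contents below are hypothetical placeholders:

```python
# Hypothetical network-side lookup: IMEI SV -> supported ML model
# types and versions. In practice this table could live in a server
# hosted by the device vendor.

MODEL_INFO_BY_IMEISV = {
    "3569870012345601": {"beam_selection": "v2", "csi_compression": "v1"},
}

def supported_models(imeisv):
    """Return the ML model info for this IMEI SV, or None if unknown."""
    return MODEL_INFO_BY_IMEISV.get(imeisv)
```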
  • the IMEI framework is augmented by adding additional info fields, referred to here as MI (model information).
  • the MI may be conveyed as an extension of the IMEI framework or as a new mechanism.
  • the MI may contain, e.g., one or more of the following fields/elements: model type info (e.g., the functional area and configuration area (as above)); available UE model version (or a list of multiple versions supported, or a range of model versions, possibly backwards compatible/equivalent); gNB model version or interface version with which the implemented model is compatible (or a range of the same); computational complexity of the model (on a relative, pre-defined scale); scenario category to which the model applies (propagation environment, CA/DC configuration, etc.); performance guarantee token (compliance with a performance category (possibly out of multiple options) defined in a standard, or performance verification from an inter-vendor or third-party testing entity); model origin (vendor, time of creation/approval).
	• While model support reporting may be limited to model type and version components, multiple fields/items above may be included. For the purpose of generality, additional field contents may be viewed as part of the type and/or version definition.
  • available model may refer to a model that is implemented, stored, or downloaded in the UE, or that may be downloaded on demand to the UE.
	• the MI fields may be structured as a list of decimal numbers, a list of binary fields of fixed or variable length, a string structure including field names, an ASN.1-like structure, etc.
  • the elements of the MI info structure may be represented, e.g., as follows: the model type, expressed, e.g., via the functional area and the configuration area, may be defined via predetermined indices or descriptive character strings, where the list of possible such indices or strings is specified in a standard document or via another previous agreement; similar representation may be used for the scenario field; the UE and gNB version numbers may be binary or decimal numbers or character strings; performance guarantee may be in the form of a granted performance certificate number, the name or a code for a certifying entity, etc.; and model origin may be represented by date codes in numeric or string form and vendor/network/operator names or codes from a list that may be looked up in a cloud service, an origin identifier ID or the type of origin/node, etc.
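One possible concrete encoding of the MI element list above is a simple structure serialized to a string. The field names and values below are illustrative; as noted, binary or ASN.1-like representations are equally possible:

```python
# Sketch of one possible MI (model information) structure and a string
# serialization. All field names and values are hypothetical examples.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelInformation:
    model_type: str           # functional and configuration area
    ue_model_version: str
    gnb_interface_version: str
    complexity: int           # relative, pre-defined scale
    scenario: str             # propagation environment, CA/DC config, etc.
    performance_token: str    # certificate number or certifying entity code
    origin: str               # vendor and creation/approval date

mi = ModelInformation("beam_selection", "1.2", "1.0", 3,
                      "urban_macro", "cert-0042", "vendorX/2023-01")
encoded = json.dumps(asdict(mi))  # one possible string representation
```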
  • the model version may be pre-registered in an inter-vendor database that also contains additional model properties. In that case, some fields above in the MI signaling may be omitted.
	• the model version number is not preregistered, but the network collects version numbers and corresponding performance information as it encounters UEs indicating given model versions. The collection may be from multiple cells in a network, and/or from multiple networks where the gNBs are provided by the vendor.
  • the gNB model version reflects a split model version with which the UE model has been trained.
	• the version is a training interface version with which the UE model has been trained, where the gNB model as such need not be specified.
	• the model version code may be given as a predetermined special value (e.g., -1 or 0xFFFF).
  • the model version value may be more generally interpreted as a functionality version or an algorithm version.
  • the legacy IMEI SV request and provision scheme is implemented in the core network, which causes additional delay/overhead until a gNB gets the information.
	• the MI is requested and/or provided via RAN signaling, e.g., via RRC or MAC CE, using a common IE or separate IEs for parts of the listed elements.
  • the MI contents may be conveyed via extensions or additions to the existing UE capability reporting framework.
	• the UE transmits the at least one ML version or other ML capability information of at least one ML model to the network, e.g., a network node, as part of the UE capability transfer procedure, included in the UECapabilityInformation message, as one or more Information Element(s) (IEs) and/or fields.
	• this is transmitted by the UE in response to a request from the network, e.g., in response to the reception of a UECapabilityEnquiry message.
	• the UECapabilityEnquiry message may include an indication for the report of a specific ML version or other ML support info, e.g., associated with the ML model of a specific function, such as an ML model for beam management (BM) or CSI compression.
	• the UE transmits the ML support, e.g., the model types, in the UECapabilityInformation message as one or more Information Element(s) (IEs) and/or fields, e.g., a UEModelVersions IE. Further, the UE may transmit the ML model versions in separate fields and IEs that may be carried in a separate message (e.g., a UEModelVersionsInformation message), may be carried in the UECapabilityInformation message, or may be appended to the UECapabilityInformation message.
  • the report of the ML version or other ML support information of at least one ML model is reported upon the transition from IDLE to CONNECTED state.
	• the UE may do so when the UE has updated an ML-model version, triggering by itself the sending of a message indicating the new ML-model versions, e.g., by sending a UEModelVersionsInformation message.
  • the message may include all the UE ML-model versions the UE supports or only the changed information since the last message was sent.
  • the report of ML version or other ML support info of at least one ML model occurs during an attach or registration procedure, e.g., during the first time the UE connects to a PLMN (so that after this time, the network stores the information).
	• the ML version or other ML support info of at least one ML model of a given UE is provided from a source network node to a target network node during a handover, e.g., in the Handover Request message, so that the target network node is able to determine which ML models the UE supports, and possibly determine whether to re-assign the ML model ID to the UE, e.g., in the handover command/RRCReconfiguration message the UE applies upon the handover (reconfiguration with sync).
  • FIGURE 4 illustrates an example of a communication system 100 in accordance with some embodiments.
  • the communication system 100 includes a telecommunication network 102 that includes an access network 104, such as a radio access network (RAN), and a core network 106, which includes one or more core network nodes 108.
  • the access network 104 includes one or more access network nodes, such as network nodes 110a and 110b (one or more of which may be generally referred to as network nodes 110), or any other similar 3rd Generation Partnership Project (3GPP) access node or non-3GPP access point.
  • the network nodes 110 facilitate direct or indirect connection of user equipment (UE), such as by connecting UEs 112a, 112b, 112c, and 112d (one or more of which may be generally referred to as UEs 112) to the core network 106 over one or more wireless connections.
  • Example wireless communications over a wireless connection include transmitting and/or receiving wireless signals using electromagnetic waves, radio waves, infrared waves, and/or other types of signals suitable for conveying information without the use of wires, cables, or other material conductors.
  • the communication system 100 may include any number of wired or wireless networks, network nodes, UEs, and/or any other components or systems that may facilitate or participate in the communication of data and/or signals whether via wired or wireless connections.
  • the communication system 100 may include and/or interface with any type of communication, telecommunication, data, cellular, radio network, and/or other similar type of system.
  • the UEs 112 may be any of a wide variety of communication devices, including wireless devices arranged, configured, and/or operable to communicate wirelessly with the network nodes 110 and other communication devices.
  • the network nodes 110 are arranged, capable, configured, and/or operable to communicate directly or indirectly with the UEs 112 and/or with other network nodes or equipment in the telecommunication network 102 to enable and/or provide network access, such as wireless network access, and/or to perform other functions, such as administration in the telecommunication network 102.
  • the core network 106 connects the network nodes 110 to one or more hosts, such as host 116. These connections may be direct or indirect via one or more intermediary networks or devices. In other examples, network nodes may be directly coupled to hosts.
	• the core network 106 includes one or more core network nodes (e.g., core network node 108) that are structured with hardware and software components. Features of these components may be substantially similar to those described with respect to the UEs, network nodes, and/or hosts, such that the descriptions thereof are generally applicable to the corresponding components of the core network node 108.
  • Example core network nodes include functions of one or more of a Mobile Switching Center (MSC), Mobility Management Entity (MME), Home Subscriber Server (HSS), Access and Mobility Management Function (AMF), Session Management Function (SMF), Authentication Server Function (AUSF), Subscription Identifier De-concealing function (SIDE), Unified Data Management (UDM), Security Edge Protection Proxy (SEPP), Network Exposure Function (NEF), and/or a User Plane Function (UPF).
  • the host 116 may be under the ownership or control of a service provider other than an operator or provider of the access network 104 and/or the telecommunication network 102, and may be operated by the service provider or on behalf of the service provider.
	• the host 116 may host a variety of applications to provide one or more services. Examples of such applications include live and pre-recorded audio/video content, data collection services such as retrieving and compiling data on various ambient conditions detected by a plurality of UEs, analytics functionality, social media, functions for controlling or otherwise interacting with remote devices, functions for an alarm and surveillance center, or any other such function performed by a server.
	• the communication system 100 of FIGURE 4 enables connectivity between the UEs, network nodes, and hosts.
  • the communication system may be configured to operate according to predefined rules or procedures, such as specific standards that include, but are not limited to: Global System for Mobile Communications (GSM); Universal Mobile Telecommunications System (UMTS); Long Term Evolution (LTE), and/or other suitable 2G, 3G, 4G, 5G standards, or any applicable future generation standard (e.g., 6G); wireless local area network (WLAN) standards, such as the Institute of Electrical and Electronics Engineers (IEEE) 802.11 standards (WiFi); and/or any other appropriate wireless communication standard, such as the Worldwide Interoperability for Microwave Access (WiMax), Bluetooth, Z-Wave, Near Field Communication (NFC) ZigBee, LiFi, and/or any low-power wide-area network (LPWAN) standards such as LoRa and Sigfox.
	• the telecommunication network 102 is a cellular network that implements 3GPP standardized features. Accordingly, the telecommunications network 102 may support network slicing to provide different logical networks to different devices that are connected to the telecommunication network 102. For example, the telecommunications network 102 may provide Ultra Reliable Low Latency Communication (URLLC) services to some UEs, while providing Enhanced Mobile Broadband (eMBB) services to other UEs, and/or Massive Machine Type Communication (mMTC)/Massive IoT services to yet further UEs.
  • the UEs 112 are configured to transmit and/or receive information without direct human interaction.
  • a UE may be designed to transmit information to the access network 104 on a predetermined schedule, when triggered by an internal or external event, or in response to requests from the access network 104.
  • a UE may be configured for operating in single- or multi-RAT or multi-standard mode.
  • a UE may operate with any one or combination of Wi-Fi, NR (New Radio) and LTE, i.e. being configured for multi-radio dual connectivity (MR-DC), such as E-UTRAN (Evolved-UMTS Terrestrial Radio Access Network) New Radio - Dual Connectivity (EN-DC).
  • the hub 114 communicates with the access network 104 to facilitate indirect communication between one or more UEs (e.g., UE 112c and/or 112d) and network nodes (e.g., network node 110b).
  • the hub 114 may be a controller, router, content source and analytics, or any of the other communication devices described herein regarding UEs.
  • the hub 114 may be a broadband router enabling access to the core network 106 for the UEs.
  • the hub 114 may be a controller that sends commands or instructions to one or more actuators in the UEs.
  • the hub 114 may be a data collector that acts as temporary storage for UE data and, in some embodiments, may perform analysis or other processing of the data.
  • the hub 114 may be a content source. For example, for a UE that is a VR headset, display, loudspeaker or other media delivery device, the hub 114 may retrieve VR assets, video, audio, or other media or data related to sensory information via a network node, which the hub 114 then provides to the UE either directly, after performing local processing, and/or after adding additional local content.
	• the hub 114 acts as a proxy server or orchestrator for the UEs, in particular if one or more of the UEs are low-energy IoT devices.
  • the hub 114 may have a constant/persistent or intermittent connection to the network node 110b.
  • the hub 114 may also allow for a different communication scheme and/or schedule between the hub 114 and UEs (e.g., UE 112c and/or 112d), and between the hub 114 and the core network 106.
  • the hub 114 is connected to the core network 106 and/or one or more UEs via a wired connection.
  • the hub 114 may be configured to connect to an M2M service provider over the access network 104 and/or to another UE over a direct connection.
  • UEs may establish a wireless connection with the network nodes 110 while still connected via the hub 114 via a wired or wireless connection.
  • the hub 114 may be a dedicated hub - that is, a hub whose primary function is to route communications to/from the UEs from/to the network node 110b.
  • the hub 114 may be a non-dedicated hub - that is, a device which is capable of operating to route communications between the UEs and network node 110b, but which is additionally capable of operating as a communication start and/or end point for certain data channels.
  • FIGURE 5 shows a UE 200 in accordance with some embodiments.
  • a UE refers to a device capable, configured, arranged and/or operable to communicate wirelessly with network nodes and/or other UEs.
  • Examples of a UE include, but are not limited to, a smart phone, mobile phone, cell phone, voice over IP (VoIP) phone, wireless local loop phone, desktop computer, personal digital assistant (PDA), wireless cameras, gaming console or device, music storage device, playback appliance, wearable terminal device, wireless endpoint, mobile station, tablet, laptop, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), smart device, wireless customer-premise equipment (CPE), vehicle-mounted or vehicle embedded/integrated wireless device, etc.
	• Examples also include UEs identified by the 3rd Generation Partnership Project (3GPP), including a narrow band internet of things (NB-IoT) UE, a machine type communication (MTC) UE, and/or an enhanced MTC (eMTC) UE.
	• a UE may support device-to-device (D2D) communication, for example by implementing a 3GPP standard for sidelink communication, Dedicated Short-Range Communication (DSRC), vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), or vehicle-to-everything (V2X).
  • a UE may not necessarily have a user in the sense of a human user who owns and/or operates the relevant device.
  • a UE may represent a device that is intended for sale to, or operation by, a human user but which may not, or which may not initially, be associated with a specific human user (e.g., a smart sprinkler controller).
	• a UE may represent a device that is not intended for sale to, or operation by, a human user.
  • the UE 200 includes processing circuitry 202 that is operatively coupled via a bus 204 to an input/output interface 206, a power source 208, a memory 210, a communication interface 212, and/or any other component, or any combination thereof.
  • Certain UEs may utilize all or a subset of the components shown in FIGURE 5. The level of integration between the components may vary from one UE to another UE. Further, certain UEs may contain multiple instances of a component, such as multiple processors, memories, transceivers, transmitters, receivers, etc.
  • the processing circuitry 202 is configured to process instructions and data and may be configured to implement any sequential state machine operative to execute instructions stored as machine-readable computer programs in the memory 210.
  • the processing circuitry 202 may be implemented as one or more hardware-implemented state machines (e.g., in discrete logic, field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), etc.); programmable logic together with appropriate firmware; one or more stored computer programs, general-purpose processors, such as a microprocessor or digital signal processor (DSP), together with appropriate software; or any combination of the above.
  • the processing circuitry 202 may include multiple central processing units (CPUs).
  • the input/output interface 206 may be configured to provide an interface or interfaces to an input device, output device, or one or more input and/or output devices.
  • Examples of an output device include a speaker, a sound card, a video card, a display, a monitor, a printer, an actuator, an emitter, a smartcard, another output device, or any combination thereof.
  • An input device may allow a user to capture information into the UE 200.
  • Examples of an input device include a touch-sensitive or presence-sensitive display, a camera (e.g., a digital camera, a digital video camera, a web camera, etc.), a microphone, a sensor, a mouse, a trackball, a directional pad, a trackpad, a scroll wheel, a smartcard, and the like.
  • the presence-sensitive display may include a capacitive or resistive touch sensor to sense input from a user.
  • a sensor may be, for instance, an accelerometer, a gyroscope, a tilt sensor, a force sensor, a magnetometer, an optical sensor, a proximity sensor, a biometric sensor, etc., or any combination thereof.
  • An output device may use the same type of interface port as an input device. For example, a Universal Serial Bus (USB) port may be used to provide an input device and an output device.
  • USB Universal Serial Bus
  • the power source 208 is structured as a battery or battery pack. Other types of power sources, such as an external power source (e.g., an electricity outlet), photovoltaic device, or power cell, may be used.
  • the power source 208 may further include power circuitry for delivering power from the power source 208 itself, and/or an external power source, to the various parts of the UE 200 via input circuitry or an interface such as an electrical power cable. Delivering power may be, for example, for charging of the power source 208.
  • Power circuitry may perform any formatting, converting, or other modification to the power from the power source 208 to make the power suitable for the respective components of the UE 200 to which power is supplied.
  • the memory 210 may be or be configured to include memory such as random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic disks, optical disks, hard disks, removable cartridges, flash drives, and so forth.
  • the memory 210 includes one or more application programs 214, such as an operating system, web browser application, a widget, gadget engine, or other application, and corresponding data 216.
  • the memory 210 may store, for use by the UE 200, any of a variety of various operating systems or combinations of operating systems.
  • the memory 210 may be configured to include a number of physical drive units, such as redundant array of independent disks (RAID), flash memory, USB flash drive, external hard disk drive, thumb drive, pen drive, key drive, high-density digital versatile disc (HD-DVD) optical disc drive, internal hard disk drive, Blu-Ray optical disc drive, holographic digital data storage (HDDS) optical disc drive, external mini-dual in-line memory module (DIMM), synchronous dynamic random access memory (SDRAM), external micro-DIMM SDRAM, smartcard memory such as tamper resistant module in the form of a universal integrated circuit card (UICC) including one or more subscriber identity modules (SIMs), such as a USIM and/or ISIM, other memory, or any combination thereof.
  • RAID redundant array of independent disks
  • HD-DVD high-density digital versatile disc
  • HDDS holographic digital data storage
  • DIMM dual in-line memory module
  • SDRAM synchronous dynamic random access memory
  • the UICC may for example be an embedded UICC (eUICC), integrated UICC (iUICC) or a removable UICC commonly known as ‘SIM card.’
  • eUICC embedded UICC
  • iUICC integrated UICC
  • the memory 210 may allow the UE 200 to access instructions, application programs and the like, stored on transitory or non-transitory memory media, to off-load data, or to upload data.
  • An article of manufacture, such as one utilizing a communication system may be tangibly embodied as or in the memory 210, which may be or comprise a device-readable storage medium.
  • the processing circuitry 202 may be configured to communicate with an access network or other network using the communication interface 212.
  • the communication interface 212 may comprise one or more communication subsystems and may include or be communicatively coupled to an antenna 222.
  • the communication interface 212 may include one or more transceivers used to communicate, such as by communicating with one or more remote transceivers of another device capable of wireless communication (e.g., another UE or a network node in an access network).
  • Each transceiver may include a transmitter 218 and/or a receiver 220 appropriate to provide network communications (e.g., optical, electrical, frequency allocations, and so forth).
  • the transmitter 218 and receiver 220 may be coupled to one or more antennas (e.g., antenna 222) and may share circuit components, software or firmware, or alternatively be implemented separately.
  • communication functions of the communication interface 212 may include cellular communication, Wi-Fi communication, LPWAN communication, data communication, voice communication, multimedia communication, short-range communications such as Bluetooth, near-field communication, location-based communication such as the use of the global positioning system (GPS) to determine a location, another like communication function, or any combination thereof.
  • GPS global positioning system
  • Communications may be implemented in accordance with one or more communication protocols and/or standards, such as IEEE 802.11, Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), GSM, LTE, New Radio (NR), UMTS, WiMax, Ethernet, transmission control protocol/internet protocol (TCP/IP), synchronous optical networking (SONET), Asynchronous Transfer Mode (ATM), QUIC, Hypertext Transfer Protocol (HTTP), and so forth.
  • CDMA Code Division Multiple Access
  • WCDMA Wideband Code Division Multiple Access
  • GSM Global System for Mobile communications
  • LTE Long Term Evolution
  • NR New Radio
  • UMTS Universal Mobile Telecommunications System
  • WiMax Worldwide Interoperability for Microwave Access
  • TCP/IP transmission control protocol/internet protocol
  • SONET synchronous optical networking
  • ATM Asynchronous Transfer Mode
  • HTTP Hypertext Transfer Protocol
  • a UE may provide an output of data captured by its sensors, through its communication interface 212, via a wireless connection to a network node.
  • Data captured by sensors of a UE can be communicated through a wireless connection to a network node via another UE.
  • the output may be periodic (e.g., once every 15 minutes if it reports the sensed temperature), random (e.g., to even out the load from reporting from several sensors), in response to a triggering event (e.g., when moisture is detected an alert is sent), in response to a request (e.g., a user initiated request), or a continuous stream (e.g., a live video feed of a patient).
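The reporting modes listed above (periodic, randomized, event-triggered, and on-request) can be sketched as a simple reporting policy. This is an illustrative sketch only; the class name, the 15-minute period, the moisture threshold, and the jitter range are assumptions for the example and are not part of the disclosure.

```python
import random

class SensorReporter:
    """Hypothetical policy deciding when a sensor UE reports to a network node."""

    def __init__(self, period_s=15 * 60, moisture_threshold=0.8):
        self.period_s = period_s                  # periodic reporting interval
        self.moisture_threshold = moisture_threshold
        self.last_report = 0.0

    def jittered_deadline(self, now):
        # Randomize the next report time to even out load across many sensors.
        return now + self.period_s * random.uniform(0.5, 1.5)

    def should_report(self, now, moisture, requested):
        if requested:                             # response to an explicit request
            return True
        if moisture >= self.moisture_threshold:   # event-triggered alert
            return True
        # periodic fallback (e.g., reporting the sensed temperature)
        return now - self.last_report >= self.period_s

    def report(self, now, temperature):
        self.last_report = now
        return {"t": now, "temperature": temperature}
```

A continuous stream (such as a live video feed) would bypass this policy entirely and transmit without gating.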
  • a UE comprises an actuator, a motor, or a switch, related to a communication interface configured to receive wireless input from a network node via a wireless connection.
  • the states of the actuator, the motor, or the switch may change.
  • the UE may comprise a motor that adjusts the control surfaces or rotors of a drone in flight according to the received input or to a robotic arm performing a medical procedure according to the received input.
  • a UE, when in the form of an Internet of Things (IoT) device, may be a device for use in one or more application domains, these domains comprising, but not limited to, city wearable technology, extended industrial application and healthcare.
  • Examples of an IoT device are a device which is, or which is embedded in: a connected refrigerator or freezer, a TV, a connected lighting device, an electricity meter, a robot vacuum cleaner, a voice controlled smart speaker, a home security camera, a motion detector, a thermostat, a smoke detector, a door/window sensor, a flood/moisture sensor, an electrical door lock, a connected doorbell, an air conditioning system like a heat pump, an autonomous vehicle, a surveillance system, a weather monitoring device, a vehicle parking monitoring device, an electric vehicle charging station, a smart watch, a fitness tracker, a head-mounted display for Augmented Reality (AR) or Virtual Reality (VR), a wearable for tactile augmentation or sensory enhancement, a water sprinkler, an animal-
  • AR Augmented Reality
  • VR Virtual Reality
  • a UE may represent a machine or other device that performs monitoring and/or measurements, and transmits the results of such monitoring and/or measurements to another UE and/or a network node.
  • the UE may in this case be an M2M device, which may in a 3GPP context be referred to as an MTC device.
  • the UE may implement the 3GPP NB-IoT standard.
  • a UE may represent a vehicle, such as a car, a bus, a truck, a ship and an airplane, or other equipment that is capable of monitoring and/or reporting on its operational status or other functions associated with its operation.
  • any number of UEs may be used together with respect to a single use case.
  • a first UE might be or be integrated in a drone and provide the drone’s speed information (obtained through a speed sensor) to a second UE that is a remote controller operating the drone.
  • the first UE may adjust the throttle on the drone (e.g. by controlling an actuator) to increase or decrease the drone’s speed.
  • the first and/or the second UE can also include more than one of the functionalities described above.
  • a UE might comprise the sensor and the actuator, and handle communication of data for both the speed sensor and the actuators.
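The drone use case above (a first UE with a speed sensor and throttle actuator, a second UE acting as remote controller) can be sketched as a closed control loop. Everything here is an illustrative assumption: the class names, the proportional control law, and the simplistic actuator model are not taken from the disclosure.

```python
def controller_command(reported_speed, target_speed, gain=0.1):
    """Second UE (remote controller): derive a throttle delta from the report."""
    return gain * (target_speed - reported_speed)

class DroneUE:
    """First UE: comprises both the speed sensor and the throttle actuator."""

    def __init__(self, speed=0.0):
        self.speed = speed

    def sense_speed(self):
        # Speed information obtained through the speed sensor.
        return self.speed

    def apply_throttle(self, delta):
        # Toy actuator model: the throttle delta translates directly into speed.
        self.speed += delta

drone = DroneUE(speed=5.0)
for _ in range(50):
    # Report over the wireless link, receive a command back, act on it.
    cmd = controller_command(drone.sense_speed(), target_speed=10.0)
    drone.apply_throttle(cmd)
```

With the proportional gain above, the drone's speed converges toward the controller's target over repeated report/command cycles.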
  • FIGURE 6 shows a network node 300 in accordance with some embodiments.
  • network node refers to equipment capable, configured, arranged and/or operable to communicate directly or indirectly with a UE and/or with other network nodes or equipment, in a telecommunication network.
  • network nodes include, but are not limited to, access points (APs) (e.g., radio access points), base stations (BSs) (e.g., radio base stations, Node Bs, evolved Node Bs (eNBs) and NR NodeBs (gNBs)).
  • APs access points
  • BSs base stations
  • eNBs evolved Node Bs
  • gNBs NR NodeBs
  • Base stations may be categorized based on the amount of coverage they provide (or, stated differently, their transmit power level) and so, depending on the provided amount of coverage, may be referred to as femto base stations, pico base stations, micro base stations, or macro base stations.
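The coverage-based categorization above can be sketched as a simple classifier over transmit power level. Note that 3GPP does not define universal power cut-offs for these classes; the dBm thresholds below are illustrative assumptions only.

```python
def base_station_class(tx_power_dbm):
    """Classify a base station by transmit power level (illustrative thresholds)."""
    if tx_power_dbm < 20:
        return "femto"   # lowest power, e.g. in-home coverage
    if tx_power_dbm < 24:
        return "pico"    # small indoor/outdoor hotspots
    if tx_power_dbm < 38:
        return "micro"   # street-level coverage
    return "macro"       # wide-area coverage
```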
  • a base station may be a relay node or a relay donor node controlling a relay.
  • a network node may also include one or more (or all) parts of a distributed radio base station such as centralized digital units and/or remote radio units (RRUs), sometimes referred to as Remote Radio Heads (RRHs). Such remote radio units may or may not be integrated with an antenna as an antenna integrated radio.
  • RRUs remote radio units
  • RRHs Remote Radio Heads
  • Parts of a distributed radio base station may also be referred to as nodes in a distributed antenna system (DAS).
  • DAS distributed antenna system
  • network nodes include multiple transmission point (multi-TRP) 5G access nodes, multi-standard radio (MSR) equipment such as MSR BSs, network controllers such as radio network controllers (RNCs) or base station controllers (BSCs), base transceiver stations (BTSs), transmission points, transmission nodes, multi-cell/multicast coordination entities (MCEs), Operation and Maintenance (O&M) nodes, Operations Support System (OSS) nodes, Self-Organizing Network (SON) nodes, positioning nodes (e.g., Evolved Serving Mobile Location Centers (E-SMLCs)), and/or Minimization of Drive Tests (MDTs).
  • MSR multi-standard radio
  • RNCs radio network controllers
  • BSCs base station controllers
  • BTSs base transceiver stations
  • O&M Operation and Maintenance
  • OSS Operations Support System
  • SON Self-Organizing Network
  • the network node 300 includes a processing circuitry 302, a memory 304, a communication interface 306, and a power source 308.
  • the network node 300 may be composed of multiple physically separate components (e.g., a NodeB component and a RNC component, or a BTS component and a BSC component, etc.), which may each have their own respective components.
  • in embodiments in which the network node 300 comprises multiple separate components (e.g., BTS and BSC components), one or more of the separate components may be shared among several network nodes.
  • a single RNC may control multiple NodeBs.
  • each unique NodeB and RNC pair may in some instances be considered a single separate network node.
  • the network node 300 may be configured to support multiple radio access technologies (RATs).
  • RATs radio access technologies
  • some components may be duplicated (e.g., separate memory 304 for different RATs) and some components may be reused (e.g., a same antenna 310 may be shared by different RATs).
  • the network node 300 may also include multiple sets of the various illustrated components for different wireless technologies integrated into network node 300, for example GSM, WCDMA, LTE, NR, WiFi, Zigbee, Z-wave, LoRaWAN, Radio Frequency Identification (RFID) or Bluetooth wireless technologies. These wireless technologies may be integrated into the same or different chip or set of chips and other components within network node 300.
  • RFID Radio Frequency Identification
  • the processing circuitry 302 may comprise a combination of one or more of a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application-specific integrated circuit, field programmable gate array, or any other suitable computing device, resource, or combination of hardware, software and/or encoded logic operable to provide, either alone or in conjunction with other network node 300 components, such as the memory 304, to provide network node 300 functionality.
  • the processing circuitry 302 includes a system on a chip (SOC). In some embodiments, the processing circuitry 302 includes one or more of radio frequency (RF) transceiver circuitry 312 and baseband processing circuitry 314. In some embodiments, the radio frequency (RF) transceiver circuitry 312 and the baseband processing circuitry 314 may be on separate chips (or sets of chips), boards, or units, such as radio units and digital units. In alternative embodiments, part or all of RF transceiver circuitry 312 and baseband processing circuitry 314 may be on the same chip or set of chips, boards, or units.
  • SOC system on a chip
  • the memory 304 may comprise any form of volatile or non-volatile computer-readable memory including, without limitation, persistent storage, solid-state memory, remotely mounted memory, magnetic media, optical media, random access memory (RAM), read-only memory (ROM), mass storage media (for example, a hard disk), removable storage media (for example, a flash drive, a Compact Disk (CD) or a Digital Video Disk (DVD)), and/or any other volatile or non-volatile, non-transitory device-readable and/or computer-executable memory devices that store information, data, and/or instructions that may be used by the processing circuitry 302.
  • the memory 304 may store any suitable instructions, data, or information, including a computer program, software, an application including one or more of logic, rules, code, tables, and/or other instructions capable of being executed by the processing circuitry 302 and utilized by the network node 300.
  • the memory 304 may be used to store any calculations made by the processing circuitry 302 and/or any data received via the communication interface 306.
  • the processing circuitry 302 and the memory 304 are integrated.
  • the communication interface 306 is used in wired or wireless communication of signaling and/or data between a network node, access network, and/or UE. As illustrated, the communication interface 306 comprises port(s)/terminal(s) 316 to send and receive data, for example to and from a network over a wired connection.
  • the communication interface 306 also includes radio front-end circuitry 318 that may be coupled to, or in certain embodiments a part of, the antenna 310. Radio front-end circuitry 318 comprises filters 320 and amplifiers 322. The radio front-end circuitry 318 may be connected to an antenna 310 and processing circuitry 302. The radio front-end circuitry may be configured to condition signals communicated between antenna 310 and processing circuitry 302.
  • the radio front-end circuitry 318 may receive digital data that is to be sent out to other network nodes or UEs via a wireless connection.
  • the radio front-end circuitry 318 may convert the digital data into a radio signal having the appropriate channel and bandwidth parameters using a combination of filters 320 and/or amplifiers 322.
  • the radio signal may then be transmitted via the antenna 310.
  • the antenna 310 may collect radio signals which are then converted into digital data by the radio front-end circuitry 318.
  • the digital data may be passed to the processing circuitry 302.
  • the communication interface may comprise different components and/or different combinations of components.
  • the network node 300 does not include separate radio front-end circuitry 318, instead, the processing circuitry 302 includes radio front-end circuitry and is connected to the antenna 310.
  • all or some of the RF transceiver circuitry 312 is part of the communication interface 306.
  • the communication interface 306 includes one or more ports or terminals 316, the radio front-end circuitry 318, and the RF transceiver circuitry 312, as part of a radio unit (not shown), and the communication interface 306 communicates with the baseband processing circuitry 314, which is part of a digital unit (not shown).
  • the antenna 310 may include one or more antennas, or antenna arrays, configured to send and/or receive wireless signals.
  • the antenna 310 may be coupled to the radio front-end circuitry 318 and may be any type of antenna capable of transmitting and receiving data and/or signals wirelessly.
  • the antenna 310 is separate from the network node 300 and connectable to the network node 300 through an interface or port.
  • the antenna 310, communication interface 306, and/or the processing circuitry 302 may be configured to perform any receiving operations and/or certain obtaining operations described herein as being performed by the network node. Any information, data and/or signals may be received from a UE, another network node and/or any other network equipment. Similarly, the antenna 310, the communication interface 306, and/or the processing circuitry 302 may be configured to perform any transmitting operations described herein as being performed by the network node. Any information, data and/or signals may be transmitted to a UE, another network node and/or any other network equipment.
  • the power source 308 provides power to the various components of network node 300 in a form suitable for the respective components (e.g., at a voltage and current level needed for each respective component).
  • the power source 308 may further comprise, or be coupled to, power management circuitry to supply the components of the network node 300 with power for performing the functionality described herein.
  • the network node 300 may be connectable to an external power source (e.g., the power grid, an electricity outlet) via an input circuitry or interface such as an electrical cable, whereby the external power source supplies power to power circuitry of the power source 308.
  • the power source 308 may comprise a source of power in the form of a battery or battery pack which is connected to, or integrated in, power circuitry. The battery may provide backup power should the external power source fail.
  • Embodiments of the network node 300 may include additional components beyond those shown in FIGURE 6 for providing certain aspects of the network node’s functionality, including any of the functionality described herein and/or any functionality necessary to support the subject matter described herein.
  • the network node 300 may include user interface equipment to allow input of information into the network node 300 and to allow output of information from the network node 300. This may allow a user to perform diagnostic, maintenance, repair, and other administrative functions for the network node 300.
  • FIGURE 7 is a block diagram of a host 400, which may be an embodiment of the host 116 of FIGURE 4, in accordance with various aspects described herein.
  • the host 400 may be or comprise various combinations of hardware and/or software, including a standalone server, a blade server, a cloud-implemented server, a distributed server, a virtual machine, a container, or processing resources in a server farm.
  • the host 400 may provide one or more services to one or more UEs.
  • the host 400 includes processing circuitry 402 that is operatively coupled via a bus 404 to an input/output interface 406, a network interface 408, a power source 410, and a memory 412.
  • Other components may be included in other embodiments. Features of these components may be substantially similar to those described with respect to the devices of previous figures, such as Figures 3 and 4, such that the descriptions thereof are generally applicable to the corresponding components of host 400.
  • the memory 412 may include one or more computer programs including one or more host application programs 414 and data 416, which may include user data, e.g., data generated by a UE for the host 400 or data generated by the host 400 for a UE.
  • Embodiments of the host 400 may utilize only a subset or all of the components shown.
  • the host application programs 414 may be implemented in a container-based architecture and may provide support for video codecs (e.g., Versatile Video Coding (VVC), High Efficiency Video Coding (HEVC), Advanced Video Coding (AVC), MPEG, VP9) and audio codecs (e.g., FLAC, Advanced Audio Coding (AAC), MPEG, G.711), including transcoding for multiple different classes, types, or implementations of UEs (e.g., handsets, desktop computers, wearable display systems, heads-up display systems).
  • the host application programs 414 may also provide for user authentication and licensing checks and may periodically report health, routes, and content availability to a central node, such as a device in or on the edge of a core network.
  • the host 400 may select and/or indicate a different host for over-the-top services for a UE.
  • the host application programs 414 may support various protocols, such as the HTTP Live Streaming (HLS) protocol, Real-Time Messaging Protocol (RTMP), Real-Time Streaming Protocol (RTSP), Dynamic Adaptive Streaming over HTTP (MPEG-DASH), etc.
  • HLS HTTP Live Streaming
  • RTMP Real-Time Messaging Protocol
  • RTSP Real-Time Streaming Protocol
  • MPEG-DASH Dynamic Adaptive Streaming over HTTP
  • FIGURE 8 is a block diagram illustrating a virtualization environment 500 in which functions implemented by some embodiments may be virtualized.
  • virtualizing means creating virtual versions of apparatuses or devices which may include virtualizing hardware platforms, storage devices and networking resources.
  • virtualization can be applied to any device described herein, or components thereof, and relates to an implementation in which at least a portion of the functionality is implemented as one or more virtual components.
  • Some or all of the functions described herein may be implemented as virtual components executed by one or more virtual machines (VMs) implemented in one or more virtual environments 500 hosted by one or more of hardware nodes, such as a hardware computing device that operates as a network node, UE, core network node, or host.
  • VMs virtual machines
  • the node may be entirely virtualized.
  • Applications 502 (which may alternatively be called software instances, virtual appliances, network functions, virtual nodes, virtual network functions, etc.) are run in the virtualization environment 500 to implement some of the features, functions, and/or benefits of some of the embodiments disclosed herein.
  • Hardware 504 includes processing circuitry, memory that stores software and/or instructions executable by hardware processing circuitry, and/or other hardware devices as described herein, such as a network interface, input/output interface, and so forth.
  • Software may be executed by the processing circuitry to instantiate one or more virtualization layers 506 (also referred to as hypervisors or virtual machine monitors (VMMs)), provide VMs 508a and 508b (one or more of which may be generally referred to as VMs 508), and/or perform any of the functions, features and/or benefits described in relation with some embodiments described herein.
  • the virtualization layer 506 may present a virtual operating platform that appears like networking hardware to the VMs 508.
  • the VMs 508 comprise virtual processing, virtual memory, virtual networking or interface and virtual storage, and may be run by a corresponding virtualization layer 506. Different embodiments of the instance of a virtual appliance 502 may be implemented on one or more of VMs 508, and the implementations may be made in different ways. Virtualization of the hardware is in some contexts referred to as network function virtualization (NFV). NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which can be located in data centers, and customer premise equipment.
  • NFV network function virtualization
  • a VM 508 may be a software implementation of a physical machine that runs programs as if they were executing on a physical, non-virtualized machine.
  • Each of the VMs 508, together with that part of the hardware 504 that executes that VM (be it hardware dedicated to that VM and/or hardware shared by that VM with others of the VMs), forms a separate virtual network element.
  • a virtual network function is responsible for handling specific network functions that run in one or more VMs 508 on top of the hardware 504 and corresponds to the application 502.
  • Hardware 504 may be implemented in a standalone network node with generic or specific components. Hardware 504 may implement some functions via virtualization. Alternatively, hardware 504 may be part of a larger cluster of hardware (e.g. such as in a data center or CPE) where many hardware nodes work together and are managed via management and orchestration 510, which, among others, oversees lifecycle management of applications 502. In some embodiments, hardware 504 is coupled to one or more radio units that each include one or more transmitters and one or more receivers that may be coupled to one or more antennas. Radio units may communicate directly with other hardware nodes via one or more appropriate network interfaces and may be used in combination with the virtual components to provide a virtual node with radio capabilities, such as a radio access node or a base station.
  • FIGURE 9 shows a communication diagram of a host 602 communicating via a network node 604 with a UE 606 over a partially wireless connection in accordance with some embodiments.
  • Like the host 400, embodiments of the host 602 include hardware, such as a communication interface, processing circuitry, and memory.
  • the host 602 also includes software, which is stored in or accessible by the host 602 and executable by the processing circuitry.
  • the software includes a host application that may be operable to provide a service to a remote user, such as the UE 606 connecting via an over-the-top (OTT) connection 650 extending between the UE 606 and host 602.
  • OTT over-the-top
  • the network node 604 includes hardware enabling it to communicate with the host 602 and UE 606.
  • the connection 660 may be direct or pass through a core network (like core network 106 of FIGURE 4) and/or one or more other intermediate networks, such as one or more public, private, or hosted networks.
  • an intermediate network may be a backbone network or the Internet.
  • the UE 606 includes hardware and software, which is stored in or accessible by UE 606 and executable by the UE’s processing circuitry.
  • the software includes a client application, such as a web browser or operator-specific “app” that may be operable to provide a service to a human or non-human user via UE 606 with the support of the host 602.
  • an executing host application may communicate with the executing client application via the OTT connection 650 terminating at the UE 606 and host 602.
  • the UE's client application may receive request data from the host's host application and provide user data in response to the request data.
  • the OTT connection 650 may transfer both the request data and the user data.
  • the UE's client application may interact with the user to generate the user data that it provides to the host application through the OTT connection 650.
  • the OTT connection 650 may extend via a connection 660 between the host 602 and the network node 604 and via a wireless connection 670 between the network node 604 and the UE 606 to provide the connection between the host 602 and the UE 606.
  • the connection 660 and wireless connection 670, over which the OTT connection 650 may be provided, have been drawn abstractly to illustrate the communication between the host 602 and the UE 606 via the network node 604, without explicit reference to any intermediary devices and the precise routing of messages via these devices.
  • the host 602 provides user data, which may be performed by executing a host application.
  • the user data is associated with a particular human user interacting with the UE 606.
  • the user data is associated with a UE 606 that shares data with the host 602 without explicit human interaction.
  • the host 602 initiates a transmission carrying the user data towards the UE 606.
  • the host 602 may initiate the transmission responsive to a request transmitted by the UE 606.
  • the request may be caused by human interaction with the UE 606 or by operation of the client application executing on the UE 606.
  • the transmission may pass via the network node 604, in accordance with the teachings of the embodiments described throughout this disclosure. Accordingly, in step 612, the network node 604 transmits to the UE 606 the user data that was carried in the transmission that the host 602 initiated, in accordance with the teachings of the embodiments described throughout this disclosure. In step 614, the UE 606 receives the user data carried in the transmission, which may be performed by a client application executed on the UE 606 associated with the host application executed by the host 602.
  • the UE 606 executes a client application which provides user data to the host 602.
  • the user data may be provided in reaction or response to the data received from the host 602.
  • the UE 606 may provide user data, which may be performed by executing the client application.
  • the client application may further consider user input received from the user via an input/output interface of the UE 606. Regardless of the specific manner in which the user data was provided, the UE 606 initiates, in step 618, transmission of the user data towards the host 602 via the network node 604.
  • the network node 604 receives user data from the UE 606 and initiates transmission of the received user data towards the host 602.
  • the host 602 receives the user data carried in the transmission initiated by the UE 606.
  • One or more of the various embodiments improve the performance of OTT services provided to the UE 606 using the OTT connection 650, in which the wireless connection 670 forms the last segment. More precisely, the teachings of these embodiments may reduce the delay in directly activating an SCell by RRC and reduce the power consumption of user equipment, thereby providing benefits such as reduced user waiting time and extended battery lifetime.
  • factory status information may be collected and analyzed by the host 602.
  • the host 602 may process audio and video data which may have been retrieved from a UE for use in creating maps.
  • the host 602 may collect and analyze real-time data to assist in controlling vehicle congestion (e.g., controlling traffic lights).
  • the host 602 may store surveillance video uploaded by a UE.
  • the host 602 may store or control access to media content such as video, audio, VR or AR which it can broadcast, multicast or unicast to UEs.
  • the host 602 may be used for energy pricing, remote control of non-time critical electrical load to balance power generation needs, location services, presentation services (such as compiling diagrams etc. from data collected from remote devices), or any other function of collecting, retrieving, storing, analyzing and/or transmitting data.
  • a measurement procedure may be provided for the purpose of monitoring data rate, latency and other factors on which the one or more embodiments improve.
  • the measurement procedure and/or the network functionality for reconfiguring the OTT connection may be implemented in software and hardware of the host 602 and/or UE 606.
  • sensors (not shown) may be deployed in or in association with other devices through which the OTT connection 650 passes; the sensors may participate in the measurement procedure by supplying values of the monitored quantities exemplified above, or supplying values of other physical quantities from which software may compute or estimate the monitored quantities.
  • the reconfiguring of the OTT connection 650 may include message format, retransmission settings, preferred routing etc.; the reconfiguring need not directly alter the operation of the network node 604. Such procedures and functionalities may be known and practiced in the art.
  • measurements may involve proprietary UE signaling that facilitates measurements of throughput, propagation times, latency and the like, by the host 602.
  • the measurements may be implemented in that software causes messages to be transmitted, in particular empty or ‘dummy’ messages, using the OTT connection 650 while monitoring propagation times, errors, etc.
  • FIGURE 10 is a flowchart illustrating an example method in a wireless device, according to certain embodiments. In particular embodiments, one or more steps of FIGURE 10 may be performed by UE 200 described with respect to FIGURE 5.
  • the wireless device is capable of concurrent operation of ML models (e.g., ML models for CSI estimation, beam management, positioning, etc.).
  • the method begins at step 1012, where the wireless device (e.g., UE 200) signals (e.g., RRC) to a network node (e.g., network node 300) ML model feature set information comprising one or more constraints for operating one or more ML models.
  • the constraints comprise an indication of a subset of ML models that are able to be operated concurrently.
  • the constraints may comprise an indication of a subset of ML models to be operated concurrently with non-ML functionality of the wireless device.
  • the constraints may comprise one or more of: a complexity level associated with a ML model; a total complexity level associated with two or more ML models; a performance level associated with a ML model; a model output resolution for a ML model; and a model accuracy or confidence threshold for a ML model.
  • the constraints may be associated with a radio link configuration comprising one or more of: bandwidth; number of component carriers; subcarrier spacing; frequency range; number of antenna ports; and number of reference signal ports.
  • the constraints may comprise an indication of one or more sets of ML models, wherein ML models in each set use mutually blocking computation resources.
  • the constraints may comprise an indication of one or more of: an energy consumption value associated with each ML model; a computational resource consumption value associated with each ML model; and a memory resource consumption value associated with each ML model.
  • the constraints comprise any of the constraints described with respect to the embodiments and examples described herein.
  • signaling to the network node ML model feature set information comprises signaling one or more of an international mobile equipment identity (IMEI) of the wireless device, a partial IMEI of the wireless device, a model number of the wireless device, and a chipset identifier of the wireless device.
  • the wireless device may signal the ML model features set information according to any of the embodiments and examples described herein.
  • the wireless device receives from the network node a ML model configuration for one or more ML models based on the ML model feature set information.
  • the ML model configuration comprises configuration of ML models of a first sequence of execution and ML models of a second sequence of execution that may be executed concurrently with the ML models of the first sequence.
  • the ML model configuration may comprise configuration of ML models each associated with a ML processing unit value and a sum of the ML processing unit values for each configured ML model is less than a threshold value.
  • the ML model configuration comprises any of the ML model configurations described with respect to the embodiments and examples described herein.
  • the wireless device operates one or more ML models based on the received ML model configuration.
  • the ML models may comprise any of the ML models described with respect to the embodiments and examples described herein.
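The wireless-device behavior of FIGURE 10 can be sketched in code. This is a minimal illustrative sketch only: the class and function names (MLFeatureSet, check_configuration) and the constraint fields are hypothetical and do not correspond to any standardized 3GPP signaling; it merely shows how a device might represent its reported constraints and verify that a received ML model configuration respects them before operating the models.

```python
# Hypothetical sketch of the wireless-device side of FIGURE 10.
# All names and fields are illustrative, not standardized signaling.
from dataclasses import dataclass

@dataclass
class MLFeatureSet:
    """ML model feature set information reported to the network (step 1012)."""
    max_total_complexity: int               # cap on the summed complexity levels
    concurrent_subsets: list                # subsets of model IDs operable concurrently
    blocking_sets: list                     # model IDs sharing mutually blocking resources
    per_model_complexity: dict              # model ID -> complexity level

def check_configuration(feature_set: MLFeatureSet, configured: set) -> bool:
    """Verify a received ML model configuration (step 1014) against the
    reported constraints before operating the models (step 1016)."""
    # The summed complexity of the configured models must stay under the cap.
    total = sum(feature_set.per_model_complexity[m] for m in configured)
    if total > feature_set.max_total_complexity:
        return False
    # The configured models must fall within some declared concurrent subset.
    if feature_set.concurrent_subsets and not any(
            configured <= set(s) for s in feature_set.concurrent_subsets):
        return False
    # At most one model from each mutually blocking set may be configured.
    for blocking in feature_set.blocking_sets:
        if len(configured & set(blocking)) > 1:
            return False
    return True
```

For example, a feature set declaring that a CSI model and a beam-management model may run together, while the CSI and positioning models share mutually blocking computation resources, would accept the first pair and reject the second.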
  • FIGURE 11 is a flowchart illustrating an example method in a network node, according to certain embodiments. In particular embodiments, one or more steps of FIGURE 11 may be performed by network node 300 described with respect to FIGURE 6.
  • the network node is operable to configure a wireless device for concurrent operation of ML models.
  • the method begins at step 1112, where the network node (e.g., network node 300) receives from a wireless device (e.g., wireless device 200) ML model feature set information comprising one or more constraints for operating one or more ML models.
  • the network node may determine the ML model configuration for the wireless device for one or more ML models based on the ML model feature set information.
  • determining the ML model configuration comprises determining a configuration of ML models of a first sequence of execution and ML models of a second sequence of execution that may be executed concurrently with the ML models of the first sequence.
  • each of the ML models is associated with a ML processing unit value, and determining the ML model configuration comprises determining a subset of ML models where a sum of the ML processing unit values associated with each ML model in the subset is less than a threshold value.
  • each of the ML models is associated with a priority value, and determining the ML model configuration is based on the priority value.
  • determining the ML model configuration is based on network information comprising one or more of: current prioritized key performance indicators (KPIs) in a cell; network load information; number of active users in the network; network coverage conditions; mobility conditions; and number of beams configured.
  • the network node may determine the ML model configuration according to any of the embodiments and examples described herein.
  • the network node transmits the ML model configuration based on the ML model feature set information to the wireless device.
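The network-node selection of FIGURE 11 — choosing a subset of ML models whose summed processing unit values stay under the reported threshold, with higher-priority models preferred — can be sketched as a simple greedy pass. This is a hedged illustration only; the function name and the (model_id, priority, pu_value) tuples are assumptions, and a real implementation might also weigh the network information above (KPIs, load, coverage).

```python
# Hypothetical sketch of network-side ML model selection (FIGURE 11).
def select_ml_models(candidates, pu_budget):
    """Greedily pick the highest-priority models whose summed ML processing
    unit values fit within the threshold reported by the wireless device.
    candidates: list of (model_id, priority, pu_value) tuples."""
    chosen, used = [], 0
    # Consider models in descending priority order.
    for model_id, priority, pu in sorted(candidates, key=lambda c: -c[1]):
        if used + pu <= pu_budget:
            chosen.append(model_id)
            used += pu
    return chosen
```

With candidates ("csi", priority 3, 4 units), ("beam", 2, 3 units) and ("pos", 1, 5 units) and a budget of 7 units, the sketch configures the CSI and beam models and defers the positioning model.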
  • references in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to implement such feature, structure, or characteristic in connection with other embodiments, whether or not explicitly described.
  • [0218] Although this disclosure has been described in terms of certain embodiments, alterations and permutations of the embodiments will be apparent to those skilled in the art. Accordingly, the above description of the embodiments does not constrain this disclosure. Other changes, substitutions, and alterations are possible without departing from the scope of this disclosure, as defined by the claims below.
  • a method performed by a wireless device for simultaneous operation of machine learning (ML) models, comprising:
  • signaling to a network node ML model feature set information comprising one or more constraints for operating one or more ML models/functionalities
  • constraints comprise one or more of: a subset of ML models to be operated concurrently, legacy functionality to be operated concurrently, operation bandwidth, number of antennas or ports, ML model output update rate, and ML model output resolution.
  • constraints comprise one or more sets of ML models, wherein ML models in each set use mutually blocking computation resources.
  • a method performed by a wireless device comprising:
  • a method performed by a base station for configuration of simultaneous operation of machine learning (ML) models in a wireless device comprising:
  • constraints comprise one or more of: a subset of ML models to be operated concurrently, legacy functionality to be operated concurrently, operation bandwidth, number of antennas or ports, ML model output update rate, and ML model output resolution.
  • constraints comprise one or more sets of ML models, wherein ML models in each set use mutually blocking computation resources.
  • the one or more ML models are specified in terms of their ML model version.
  • a method performed by a base station comprising:
  • a mobile terminal comprising:
  • - power supply circuitry configured to supply power to the wireless device.
  • a base station comprising:
  • - power supply circuitry configured to supply power to the base station.
  • a user equipment comprising:
  • radio front-end circuitry connected to the antenna and to processing circuitry, and configured to condition signals communicated between the antenna and the processing circuitry;
  • processing circuitry being configured to perform any of the steps of any of the Group A embodiments;
  • an input interface connected to the processing circuitry and configured to allow input of information into the UE to be processed by the processing circuitry
  • a battery connected to the processing circuitry and configured to supply power to the UE.
  • a communication system including a host computer comprising:
  • the cellular network comprises a base station having a radio interface and processing circuitry, the base station’s processing circuitry configured to perform any of the steps of any of the Group B embodiments.
  • the communication system of the previous embodiment further including the base station.
  • the communication system of the previous 2 embodiments further including the UE, wherein the UE is configured to communicate with the base station.
  • the processing circuitry of the host computer is configured to execute a host application, thereby providing the user data
  • the UE comprises processing circuitry configured to execute a client application associated with the host application.
  • at the host computer, providing user data; and at the host computer, initiating a transmission carrying the user data to the UE via a cellular network comprising the base station, wherein the base station performs any of the steps of any of the Group B embodiments.
  • a user equipment configured to communicate with a base station, the UE comprising a radio interface and processing circuitry configured to perform any of the previous 3 embodiments.
  • a communication system including a host computer comprising:
  • the UE comprises a radio interface and processing circuitry, the UE’s components configured to perform any of the steps of any of the Group A embodiments.
  • the cellular network further includes a base station configured to communicate with the UE.
  • the processing circuitry of the host computer is configured to execute a host application, thereby providing the user data
  • the host computer initiating a transmission carrying the user data to the UE via a cellular network comprising the base station, wherein the UE performs any of the steps of any of the Group A embodiments.
  • a communication system including a host computer comprising:
  • a communication interface configured to receive user data originating from a transmission from a user equipment (UE) to a base station
  • the UE comprises a radio interface and processing circuitry, the UE’s processing circuitry configured to perform any of the steps of any of the Group A embodiments.
  • the communication system of the previous 2 embodiments further including the base station, wherein the base station comprises a radio interface configured to communicate with the UE and a communication interface configured to forward to the host computer the user data carried by a transmission from the UE to the base station.
  • the processing circuitry of the host computer is configured to execute a host application
  • the UE’s processing circuitry is configured to execute a client application associated with the host application, thereby providing the user data.
  • the processing circuitry of the host computer is configured to execute a host application, thereby providing request data
  • the UE’s processing circuitry is configured to execute a client application associated with the host application, thereby providing the user data in response to the request data.
  • the host computer receiving user data transmitted to the base station from the UE, wherein the UE performs any of the steps of any of the Group A embodiments.
  • the user data to be transmitted is provided by the client application in response to the input data.
  • a communication system including a host computer comprising a communication interface configured to receive user data originating from a transmission from a user equipment (UE) to a base station, wherein the base station comprises a radio interface and processing circuitry, the base station’s processing circuitry configured to perform any of the steps of any of the Group B embodiments.
  • the communication system of the previous embodiment further including the base station.
  • the processing circuitry of the host computer is configured to execute a host application
  • the UE is configured to execute a client application associated with the host application, thereby providing the user data to be received by the host computer.
  • the host computer receiving, from the base station, user data originating from a transmission which the base station has received from the UE, wherein the UE performs any of the steps of any of the Group A embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

According to some embodiments, a method is performed by a wireless device for simultaneous operation of machine learning (ML) models. The method comprises the following steps: signaling, to a network node, ML model feature set information comprising one or more constraints for operating one or more ML models; receiving, from the network node, a ML model configuration for one or more ML models based on the ML model feature set information; and operating one or more ML models based on the received ML model configuration.
PCT/SE2023/050385 2022-04-29 2023-04-26 Rapport d'ensemble de caractéristiques de modèle d'apprentissage automatique Ceased WO2023211343A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP23796924.1A EP4515463A1 (fr) 2022-04-29 2023-04-26 Rapport d'ensemble de caractéristiques de modèle d'apprentissage automatique

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263363822P 2022-04-29 2022-04-29
US63/363,822 2022-04-29

Publications (1)

Publication Number Publication Date
WO2023211343A1 true WO2023211343A1 (fr) 2023-11-02

Family

ID=88519429

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2023/050385 Ceased WO2023211343A1 (fr) 2022-04-29 2023-04-26 Rapport d'ensemble de caractéristiques de modèle d'apprentissage automatique

Country Status (2)

Country Link
EP (1) EP4515463A1 (fr)
WO (1) WO2023211343A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2025171907A1 (fr) * 2024-02-16 2025-08-21 Nokia Technologies Oy Garantie de cohérence entre des étapes d'apprentissage et d'inférence par l'intermédiaire de procédures de surveillance
WO2025175432A1 (fr) * 2024-02-19 2025-08-28 Zte Corporation Transfert de modèle

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020139181A1 (fr) * 2018-12-28 2020-07-02 Telefonaktiebolaget Lm Ericsson (Publ) Dispositif sans fil, nœud de réseau et procédés associés permettant de mettre à jour une première instance d'un modèle d'apprentissage machine
WO2022008037A1 (fr) * 2020-07-07 2022-01-13 Nokia Technologies Oy Aptitude et incapacité d'ue ml
WO2022013090A1 (fr) * 2020-07-13 2022-01-20 Telefonaktiebolaget Lm Ericsson (Publ) Gestion d'un dispositif sans fil utilisable pour une connexion à un réseau de communication
WO2022015221A1 (fr) * 2020-07-14 2022-01-20 Telefonaktiebolaget Lm Ericsson (Publ) Gestion d'un dispositif sans fil utilisable pour se connecter à un réseau de communication
WO2022013104A1 (fr) * 2020-07-13 2022-01-20 Telefonaktiebolaget Lm Ericsson (Publ) Gestion d'un dispositif sans fil servant à se connecter à un réseau de communication
WO2022013093A1 (fr) * 2020-07-13 2022-01-20 Telefonaktiebolaget Lm Ericsson (Publ) Gestion d'un dispositif sans fil utilisable pour se connecter à un réseau de communication
WO2022013095A1 (fr) * 2020-07-13 2022-01-20 Telefonaktiebolaget Lm Ericsson (Publ) Gestion d'un dispositif sans fil permettant d'assurer une connexion à un réseau de communication
WO2022077202A1 (fr) * 2020-10-13 2022-04-21 Qualcomm Incorporated Procédés et appareil de gestion de modèle de traitement ml

Also Published As

Publication number Publication date
EP4515463A1 (fr) 2025-03-05

Similar Documents

Publication Publication Date Title
US20250330373A1 (en) Ml model support and model id handling by ue and network
US20250219898A1 (en) :user equipment report of machine learning model performance
US20250227497A1 (en) Ue autonomous actions based on ml model failure detection
US20250203401A1 (en) Artificial Intelligence/Machine Learning Model Management Between Wireless Radio Nodes
US20250220471A1 (en) Network assisted error detection for artificial intelligence on air interface
WO2023211356A1 (fr) Surveillance de fonctionnalité d'apprentissage automatique d'équipement utilisateur
WO2023211343A1 (fr) Rapport d'ensemble de caractéristiques de modèle d'apprentissage automatique
US20250293942A1 (en) Machine learning fallback model for wireless device
US20250233800A1 (en) Adaptive prediction of time horizon for key performance indicator
WO2024125362A1 (fr) Procédé et appareil de commande de liaison de communication entre dispositifs de communication
WO2024040388A1 (fr) Procédé et appareil de transmission de données
EP4594962A1 (fr) Systèmes et procédés de configuration de décalage bêta pour transmettre des informations de commande de liaison montante
WO2024241222A1 (fr) Procédé et systèmes pour l'établissement de rapport sur la capacité d'équipement utilisateur et la configuration d'informations sur l'état des canaux basée sur l'apprentissage automatique.
WO2025183598A1 (fr) Noeud de réseau radio, équipement utilisateur et procédés mis en oeuvre dans celui-ci
EP4595647A1 (fr) Mappage de ressources pour une liaison montante basée sur l'ia
WO2024072300A1 (fr) Commande de puissance pour une liaison montante basée sur l'ia
WO2025172940A1 (fr) Rapport de faisceau initié par équipement utilisateur sur la base d'une prédiction de faisceau
WO2024236513A1 (fr) Configuration de ressources de mesure de canal pour informations d'état de canal d'intelligence artificielle
WO2024214075A1 (fr) Gestion du cycle de vie d'un modèle unilatéral basée sur l'id
WO2025105998A1 (fr) Gestion de collecte de données
WO2025095847A1 (fr) Appariement de modèles pour modèles d'ia/ml bilatéraux
WO2024072314A1 (fr) Ressources de canal pucch pour une liaison montante basée sur l'ia
WO2025178538A1 (fr) Procédés de sélection et de configuration de ressources de mesure sur la base de ressources de prédiction configurées pour des prédictions de mesure radio aiml
EP4612935A1 (fr) Communication basée sur un partage d'identifiant de configuration de réseau
WO2025071464A1 (fr) Transfert de données d'intelligence artificielle à un dispositif de communication

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23796924

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023796924

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2023796924

Country of ref document: EP

Effective date: 20241129