
WO2025107294A1 - Devices and methods for real-time ml framework supporting network optimization - Google Patents


Info

Publication number
WO2025107294A1
Authority
WO
WIPO (PCT)
Prior art keywords
network
dmfs
function
orchestrator
network node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2023/134021
Other languages
French (fr)
Inventor
Antonio De Domenico
Fadhel AYED
Nicola PIOVESAN
Nan Zhao
Marco Spini
Ali MAATOUK
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to PCT/CN2023/134021
Publication of WO2025107294A1
Legal status: Pending

Classifications

    • H: ELECTRICITY
        • H04: ELECTRIC COMMUNICATION TECHNIQUE
            • H04W: WIRELESS COMMUNICATION NETWORKS
                • H04W 24/00: Supervisory, monitoring or testing arrangements
                    • H04W 24/02: Arrangements for optimising operational condition
            • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
                • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
                    • H04L 41/08: Configuration management of networks or network elements
                        • H04L 41/0803: Configuration setting
                            • H04L 41/0823: Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
                        • H04L 41/0894: Policy-based network configuration management
                    • H04L 41/14: Network analysis or design
                        • H04L 41/145: Network analysis or design involving simulating, designing, planning or modelling of a network
                    • H04L 41/16: Arrangements for maintenance, administration or management of data switching networks using machine learning or artificial intelligence

Definitions

  • the present disclosure relates to wireless communications and information processing technology. More specifically, the present disclosure relates to devices and methods for a real-time machine learning, ML, framework supporting cellular network optimization.
  • the performance of a real-world cellular network depends not only on the capabilities of its hardware (e.g., base stations (BSs), mobile handsets), but also on how it is configured.
  • Optimizing the performance of a real-world cellular network is extremely challenging because of the difficulty of predicting the network performance as a function of the network parameters, and because of the prohibitively large problem size.
  • the conventional approach to network optimization relies on measurement campaigns, trial-and-error and engineering experience, which are costly and time-consuming.
  • a critical challenge for network optimization is the selection of the best combination of network parameters out of a gigantic solution space.
  • 3GPP Release 16 has focused on leveraging AI for enhanced Network Automation, Minimizing Drive Test, and Self-Organizing-Networks;
  • 3GPP Release 17 has focused on Enhancements for 5G interfaces for facilitating ML procedures, network and device data collection to support selected use cases, and introducing new Quality of Service (QoS) definitions tailored for ML models transferred over 5G.
  • each BS generates approximately 3,000 key performance indicators (KPIs) /hour.
  • the cumulative KPI count exceeds 7 billion per day.
  • the daily count skyrockets to an astonishing 37 billion KPIs.
  • This vast amount of data opens opportunities for the implementation of AI-optimized cellular networks. Nevertheless, the abundance of data entails a large overhead related to data transfer, processing and storage functionalities. Therefore, there is a need for a resource-efficient architecture that supports future AI/ML optimization frameworks.
  • a network node configured to operate based on a plurality of different network policies for a cellular network.
  • the network node comprises, i.e. implements, a data model function, DMF, orchestrator entity configured to train, execute and store a plurality of DMFs for modelling one or more performance metrics and/or parameters of the network node for each of the plurality of different network policies.
  • the network node comprises a DMF collection coordination function configured to store and retrieve the plurality of trained DMFs and a data collection coordination function configured to collect and store data used for training and executing the plurality of DMFs.
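As an illustration only, the coordination functions and their repositories described above could be sketched as follows; all class and method names are hypothetical, not part of any standardized interface:

```python
from dataclasses import dataclass, field

@dataclass
class DataCollectionCoordinationFunction:
    # Collects and stores data used for training and executing the DMFs.
    data_repository: dict = field(default_factory=dict)

    def collect(self, kpi_name, values):
        self.data_repository.setdefault(kpi_name, []).extend(values)

@dataclass
class DMFCollectionCoordinationFunction:
    # Stores and retrieves trained DMFs (here, just their parameters).
    dmf_repository: dict = field(default_factory=dict)

    def store(self, dmf_id, trained_params):
        self.dmf_repository[dmf_id] = trained_params

    def retrieve(self, dmf_id):
        return self.dmf_repository[dmf_id]

@dataclass
class NetworkNode:
    data_ccf: DataCollectionCoordinationFunction = field(
        default_factory=DataCollectionCoordinationFunction)
    dmf_ccf: DMFCollectionCoordinationFunction = field(
        default_factory=DMFCollectionCoordinationFunction)

node = NetworkNode()
node.data_ccf.collect("cell_load", [0.3, 0.5])
node.dmf_ccf.store("dmf_power", {"alpha": 1.2})
print(node.dmf_ccf.retrieve("dmf_power"))  # {'alpha': 1.2}
```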
  • implementation forms and embodiments disclosed herein provide an improved architecture and improved functionalities that enable an accurate characterization of network KPIs through data models, ensuring limited overhead and high privacy. Transferring models instead of data significantly reduces the amount of information exchanged between network nodes and the central node responsible for optimization. Also, data is kept local and only useful data is stored. Moreover, according to implementation forms and embodiments disclosed herein ML models may be updated based on different centralized or local triggers. This is because implementation forms and embodiments disclosed herein make use of available data to detect network changes and model performance degradation and update models accordingly. Network models can be kept up-to-date while limiting communication overhead and energy consumption.
  • a management framework/system can push a model update based on, e.g., observed network performance or models received from different network nodes.
  • Implementation forms and embodiments disclosed herein leverage ML/AI capabilities available in the network nodes. Thus, available local processing capability can be efficiently exploited, e.g., a model may be trained at night when the traffic load is low.
  • implementation forms and embodiments disclosed herein allow coordination across nodes in different domains. For instance, the management framework/system may exploit generative AI and expert knowledge to define the features required to construct each model, select model hyperparameters, and/or identify new model architectures.
  • Data models can be shared to characterize KPIs which cannot be usually shared through standardized interfaces.
  • one or more of the plurality of DMFs comprise one or more data model atomic functions, DMAFs, and wherein the DMF collection coordination function is further configured to store the DMAFs of the one or more DMFs.
  • the one or more DMAFs comprise one or more black box atomic functions and/or one or more white box atomic functions.
  • the DMF orchestrator entity is configured to receive instructions from a management entity of the cellular network for generating and/or updating the one or more DMFs based on the one or more DMAFs for one or more of the plurality of network policies.
  • the DMF orchestrator entity is configured to provide one or more trained parameters and/or hyperparameters of one or more DMFs trained for one or more of the plurality of network policies to the management entity.
  • the data collection coordination function entity is configured to collect and store the data for training and executing the plurality of DMFs in response to a request, i.e. trigger, from the DMF orchestrator entity.
  • the request from the DMF orchestrator entity to the data collection coordination function entity may indicate a frequency for collecting the data, such as, for instance, every 10 minutes, and/or a time period for which the collected data is to be stored.
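A minimal sketch of such a request, assuming hypothetical field names (the disclosure does not define a message format):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataConfigRequest:
    kpi_name: str
    period_s: int        # collection frequency, e.g. every 10 minutes
    retention_s: int     # how long collected samples are to be stored

    def samples_retained(self):
        # Number of samples held in the data repository at steady state.
        return self.retention_s // self.period_s

req = DataConfigRequest("cell_load", period_s=600, retention_s=86400)
print(req.samples_retained())  # one day of 10-minute samples: 144
```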
  • the network node may be a base station, a routing node, or a controller of the cellular network.
  • a method for operating a network node configured to operate based on a plurality of different network policies for a cellular network, in particular a 3GPP cellular network.
  • the method according to the second aspect comprises the steps of:
  • training, executing and storing, by a data model function, DMF, orchestrator entity, a plurality of DMFs for modelling one or more performance metrics and/or parameters of the network node for each of the plurality of different network policies; and
  • collecting and storing, by a data collection coordination function entity, data for training and executing the plurality of DMFs.
  • the method according to the second aspect can be performed by the network node according to the first aspect.
  • further features of the method according to the second aspect result directly from the functionality of the network node according to the first aspect as well as its different implementation forms described above and below.
  • a machine learning, ML, function orchestrator entity for supporting the deployment of an improved, in particular optimized network policy for a plurality of network nodes of a cellular network, in particular a 3GPP cellular network.
  • the ML function orchestrator entity is configured to provide to each network node information, for instance, instructions for generating one or more trainable data model functions, DMFs, for modelling one or more performance metrics and/or parameters of the network node for a network policy.
  • the ML function orchestrator entity is configured to receive one or more parameters and/or hyperparameters from each network node of the one or more DMFs trained for the network policy.
  • the ML function orchestrator entity is further configured to provide information about the one or more DMFs trained for the network policy, including the one or more parameters and/or hyperparameters received from the plurality of network nodes and/or information generated based on the one or more parameters and/or hyperparameters received from the plurality of network nodes, to a policy optimization entity for optimizing the network policy.
  • the ML function orchestrator entity is configured to receive the information, for instance, instructions, for generating the one or more trainable DMFs for modelling the one or more performance metrics and/or parameters of the network node for the network policy from a policy management function entity of the cellular network.
  • the one or more trainable DMFs comprise one or more data model atomic functions, DMAFs, wherein the ML function orchestrator entity is configured to determine information, for instance, instructions, for the network node, in particular the DMF orchestrator thereof, for generating the one or more trainable DMFs based on the one or more DMAFs.
  • the one or more DMAFs comprise one or more black box atomic functions and/or one or more white box atomic functions.
  • the ML function orchestrator entity is configured to aggregate the one or more parameters and/or hyperparameters received from each network node about the one or more DMFs trained for the network policy.
  • the ML function orchestrator entity may be configured to aggregate the one or more parameters and/or hyperparameters from the plurality of network nodes, for instance, by averaging the plurality of parameters and/or hyperparameters.
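The averaging option mentioned above can be sketched as a per-parameter mean over the node reports, in the spirit of federated averaging; the function and data shapes are illustrative:

```python
def aggregate_parameters(node_reports):
    """node_reports: list of {param_name: value} dicts, one per network node."""
    aggregated = {}
    for name in node_reports[0]:
        # Average each named parameter across the reporting nodes.
        values = [report[name] for report in node_reports]
        aggregated[name] = sum(values) / len(values)
    return aggregated

reports = [{"alpha": 1.0, "beta": 0.25},
           {"alpha": 3.0, "beta": 0.75}]
print(aggregate_parameters(reports))  # {'alpha': 2.0, 'beta': 0.5}
```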
  • a method for operating a machine learning, ML, function orchestrator entity for supporting the deployment of an improved, in particular optimized network policy for a plurality of network nodes of a cellular network, in particular a 3GPP cellular network.
  • the method according to the fourth aspect comprises the steps of:
  • providing to each network node information, for instance, instructions, for generating one or more trainable data model functions, DMFs, for modelling one or more performance metrics and/or parameters of the network node for a network policy;
  • the method according to the fourth aspect can be performed by the ML function orchestrator entity according to the third aspect.
  • further features of the method according to the fourth aspect result directly from the functionality of the ML function orchestrator entity according to the third aspect as well as its different implementation forms described above and below.
  • a cellular network in particular a 3GPP cellular network, wherein the cellular network comprises a plurality of network nodes according to the first aspect and a ML function orchestrator entity according to the third aspect.
  • the plurality of network nodes are implemented in a radio access network, RAN, transport network, and/or a core network, CN, of the cellular network and/or the ML function orchestrator is implemented as a function of a network management system and/or a further network node in a radio access network, RAN, transport network, and/or a core network of the cellular network.
  • a computer program product comprising a computer-readable storage medium for storing program code which causes a computer or a processor to perform the method according to the second aspect or the method according to the fourth aspect when the program code is executed by the computer or the processor.
  • Fig. 1a shows a schematic diagram illustrating a cellular network with a plurality of network nodes according to an embodiment and a management framework including a ML function orchestrator entity according to an embodiment;
  • Fig. 1b shows a detailed view of a network node according to an embodiment;
  • Fig. 1c shows a detailed view of the management framework including the ML function orchestrator entity according to an embodiment;
  • Fig. 2 shows a signalling diagram illustrating the interaction between a network node according to an embodiment and the management framework including the ML function orchestrator entity according to an embodiment for deploying DMAFs;
  • Fig. 3 shows a signalling diagram illustrating the interaction between a UE, a network node according to an embodiment and the management framework including the ML function orchestrator entity according to an embodiment for data collection by the network node;
  • Fig. 4 shows a signalling diagram illustrating the interaction between a network node according to an embodiment and the management framework including the ML function orchestrator entity according to an embodiment for executing a DMF;
  • Fig. 5 shows a signalling diagram illustrating the interaction between a network node according to an embodiment and the management framework including the ML function orchestrator entity according to an embodiment for DMF performance evaluation and updating;
  • Fig. 6 shows a signalling diagram illustrating the interaction between a network node according to an embodiment and the management framework including the ML function orchestrator entity according to an embodiment for model aggregation and delivery;
  • Fig. 7 shows a signalling diagram illustrating the interaction between a network node according to an embodiment in the form of a network data analytics function, NWDAF, and the management framework including the ML function orchestrator entity according to an embodiment;
  • Fig. 8 shows a flow diagram illustrating steps of a method according to an embodiment for operating a network node of a cellular network;
  • Fig. 9 shows a flow diagram illustrating steps of a method according to an embodiment for operating a ML function orchestrator entity of a cellular network.
  • a disclosure in connection with a described method may also hold true for a corresponding device or system configured to perform the method and vice versa.
  • a corresponding device may include one or a plurality of units, e.g. functional units, to perform the described one or plurality of method steps (e.g. one unit performing the one or plurality of steps, or a plurality of units each performing one or more of the plurality of steps) , even if such one or more units are not explicitly described or illustrated in the figures.
  • similarly, if a specific apparatus is described based on one or a plurality of units, e.g. functional units, a corresponding method may include one step to perform the functionality of the one or plurality of units (e.g. one step performing the functionality of the one or plurality of units, or a plurality of steps each performing the functionality of one or more of the plurality of units), even if such one or plurality of steps are not explicitly described or illustrated in the figures. Further, it is understood that the features of the various exemplary embodiments and/or aspects described herein may be combined with each other, unless specifically noted otherwise.
  • Figure 1a shows a schematic diagram illustrating a cellular network with a plurality of network nodes 120a-n according to an embodiment and a management system 130 (also referred to as management framework 130) including a machine learning, ML, function orchestrator entity 131 (or short ML function orchestrator 131) according to an embodiment.
  • Figure 1b shows a more detailed view of the components of one representative network node 120a of the plurality of network nodes 120a-n according to an embodiment
  • figure 1c shows a more detailed view of the components of the management system 130 including the ML function orchestrator 131 according to an embodiment.
  • the plurality of network nodes 120a-n may be implemented in a radio access network, RAN, transport network, and/or a core network, CN, of a cellular network.
  • the plurality of network nodes 120a-n may comprise one or more base stations, routing nodes, or controllers of the cellular network.
  • the ML function orchestrator 131 may be implemented, for instance, as a function of the network management system 130 and/or as a further network node in a radio access network, RAN, transport network, and/or a core network.
  • the representative network node 120a may comprise a data model function, DMF, orchestrator 121, a data collection coordination function 122 and a data model collection coordination function 123. Moreover, the representative network node 120a may comprise a data repository 124, a DMF repository 125, and a DMAF repository 126.
  • the management system 130 may comprise, in addition to the ML function orchestrator 131, a model/optimization block 132, a policy management entity 133, an atomic function repository 134 and a mDMF repository 135.
  • the ML function orchestrator entity 131 generally has the role to support the deployment and optimization of a network policy by delivering to the model/optimization block 132 all the models used to simulate the network environment.
  • the DMF orchestrator 121 is configured to train, execute and store a plurality of DMFs for modelling one or more performance metrics and/or parameters of the network node 120a-n for each of a plurality of network policies.
  • the DMF collection coordination function 123 is configured to store and retrieve the plurality of trained DMFs, while the data collection coordination function 122 is configured to collect and store data used for training and executing the plurality of DMFs.
  • the ML function orchestrator 131 for supporting the deployment of a network policy for the plurality of network nodes 120a-n is configured to provide to each network node 120a-n information, for instance, instructions for generating one or more data model functions, DMFs, for modelling one or more performance metrics and/or parameters of the network node 120a-n for a given network policy.
  • the ML function orchestrator 131 is configured to receive in response one or more parameters and/or hyperparameters from each network node 120a-n of the one or more DMFs trained for the given network policy, and to provide information about the one or more DMFs trained for the given network policy, including the one or more parameters and/or hyperparameters received from the plurality of network nodes 120a-n and/or information generated based thereon, to the model/optimization block 132, i.e. a policy optimization entity 132, for optimizing the given network policy.
  • the main role of the ML function orchestrator 131 is to support the network policy deployment and optimization by specifying the DMFs to be constructed for modelling a given network policy, transferring instructions to the network nodes 120a-n to build these models, and delivering the trained models to the model/optimization block 132, which uses them to simulate the network environment.
  • the ML function orchestrator 131 receives from the policy management entity 133 instructions on the models required to mimic and optimize a given network policy.
  • the ML function orchestrator 131 may, for instance, based on expert knowledge, define the input required for each model and may collect from the atomic functions repository 134 the Data Model Atomic Functions, DMAFs, to support the construction of the DMFs.
  • the atomic functions may comprise black box functions (neural networks, gradient boosting, etc.) or white box functions (explicit analytical expressions).
  • the ML function orchestrator 131 sends to the network nodes 120a-n the instructions to construct/update the local DMFs.
  • once the trained DMFs are received from the network nodes 120a-n, the ML function orchestrator 131 may save them in the management DMF (mDMF) repository 135 and transfer them to the model/optimization block 132, where the simulated network function 132c mimics the network behavior based on the received models, the optimal policy parameters are found by the policy optimization function 132b (e.g., using gradient-based stochastic optimization models), and the network policies 132a are updated accordingly and then tested through the simulated network function. The process may run iteratively until convergence.
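The iterative simulate-optimize loop described above can be sketched with a stand-in surrogate model; the quadratic surrogate and the gradient step are illustrative choices, not the optimizer the disclosure mandates:

```python
def simulated_network(policy):
    # Stand-in for the simulated network function built from received DMFs:
    # predicted cost of a scalar policy parameter (illustrative quadratic).
    return (policy - 4.0) ** 2

def optimize_policy(policy, lr=0.1, tol=1e-6, max_iter=1000):
    # Iteratively update the policy parameter until convergence.
    for _ in range(max_iter):
        grad = 2.0 * (policy - 4.0)   # gradient of the surrogate above
        new_policy = policy - lr * grad
        if abs(new_policy - policy) < tol:
            break
        policy = new_policy
    return policy

best = optimize_policy(0.0)
print(round(best, 3))  # converges near the surrogate optimum, 4.0
```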
  • the data model function, DMF, orchestrator 121 is in charge of locally training, executing and maintaining the DMFs requested by the ML function orchestrator 131. More specifically, in a first stage the DMF orchestrator 121 receives from the ML function orchestrator 131 the DMAFs and related instructions (input, output, and a set of hyperparameters) for the models required by each network policy. In a second stage, the DMF orchestrator 121 initiates data collection supported by the data collection coordination function 122. This data may include measurement reports, such as reference signal received power or signal-to-noise ratio, as well as cell level KPIs, such as traffic volume or number of active users, as disclosed, for instance, in 3GPP TS 28.552.
  • the selected data is stored in the data repository 124 and is kept up-to-date by the data collection coordination function 122.
  • the data collection coordination function 122 also has the role of performing simple data processing to generate local analytics, e.g., to determine whether the statistics of a performance measurement have changed.
  • the DMF orchestrator 121 of each network node 120a-n stores received DMAFs and trained DMFs with the help of the data model collection coordination function 123. Specifically, DMAFs and DMFs may be stored in the DMAF repository 126 and the DMF repository 125, respectively.
  • the DMF orchestrator 121 periodically runs the generated DMFs to evaluate and update them if necessary based on (a) a local DMF performance; (b) observed input changes signaled by the data collection coordination function 122; and/or (c) requests by the ML function orchestrator 131. Finally, the DMF orchestrator 121 transfers generated/updated DMFs to the ML function orchestrator 131.
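The three update triggers listed above amount to a simple decision rule; the thresholds and argument names below are assumptions:

```python
def should_update_dmf(local_error, error_threshold,
                      input_drift_detected, orchestrator_request):
    if local_error > error_threshold:        # (a) local DMF performance degraded
        return True
    if input_drift_detected:                 # (b) input changes signaled by the
        return True                          #     data collection coordination function
    if orchestrator_request:                 # (c) request by the ML function orchestrator
        return True
    return False

print(should_update_dmf(0.02, 0.05, False, False))  # False: model still valid
print(should_update_dmf(0.08, 0.05, False, False))  # True: accuracy degraded
```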
  • FIG. 2 shows a signalling diagram illustrating the interaction between the representative network node 120i according to an embodiment and the management system 130 including the ML function orchestrator 131 according to an embodiment for DMAF deployment, i.e. the process (that may be an offline process) for deploying available DMAFs from the management system 130 to the plurality of network nodes 120a-n, including the representative network node 120i.
  • DMAFs may be white box or black box functions which are used to construct the DMFs at the network nodes 120a-n. More specifically, in an embodiment, there may be two types of DMAFs, namely white box functions, i.e. analytical relations known by the experts (e.g., Shannon formula, linear regression), and black box functions (e.g., neural networks, gradient boosting).
  • the DMAF deployment at the representative network node 120i illustrated in figure 2 may be realized offline, independently from the network policy to deploy/optimize, e.g., when the representative network node 120i is deployed.
  • new DMAFs may be provided by the ML function orchestrator 131 to the representative network node 120i, if needed during running time.
  • novel atomic functions may be required to support a new policy deployment, which has to consider parameters that have not yet been modelled by the optimization framework 132.
  • the ML function orchestrator 131 is configured to collect the atomic functions needed for the policy by sending a DMAF request to the atomic function repository 134 and receiving the requested DMAFs from the atomic function repository 134 (see steps 201 and 202 of figure 2) . Thereafter, the ML function orchestrator 131 transfers the collected DMAFs to the representative network node 120i, more specifically to the DMF orchestrator 121 thereof (see step 203 of figure 2) . The DMF orchestrator 121 stores the received DMAFs in the DMAF repository 126, through the support of the data model collection coordination function 123 (see steps 204, 205 and 206 of figure 2) .
  • Figure 3 shows a signalling diagram illustrating the interaction between one or more UEs 150, the representative network node 120i according to an embodiment and the management system 130 including the ML function orchestrator 131 according to an embodiment for data collection.
  • the ML function orchestrator 131 determines the inputs and outputs required by each DMF used by the network policy (see steps 301, 302 of figure 3) .
  • these steps may be implemented based on expert knowledge or the output of a (generative) AI model, trained on, e.g., 3GPP documents or equipment specifications.
  • the related DMF may require one or more of the following inputs associated with the network node 120i: the cell load; the number of active antennas; the number of active transceivers; the efficiency of the power amplifier; the maximum transmit power; and/or the status of energy saving features, such as carrier shutdown, channel shutdown, or symbol shutdown.
  • the DMF may need to generate as output the mean power consumption of the base station 120i and the standard deviation thereof.
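As a hedged illustration of such an energy-consumption DMF, the sketch below holds the listed inputs in a record and applies a toy white-box power model to produce the two outputs; the model form and every constant are invented for illustration only:

```python
from dataclasses import dataclass
import statistics

@dataclass
class PowerDmfInput:
    cell_load: float          # fraction of used resources, in [0, 1]
    active_antennas: int
    active_transceivers: int
    pa_efficiency: float      # power-amplifier efficiency, in (0, 1]
    max_tx_power_w: float     # maximum transmit power in watts
    carrier_shutdown: bool    # energy-saving feature status

def predict_power_w(x: PowerDmfInput) -> float:
    # Toy white-box model: a static part per transceiver plus a load-dependent
    # transmit part scaled by amplifier efficiency; zero when shut down.
    if x.carrier_shutdown:
        return 0.0
    static = 50.0 * x.active_transceivers
    dynamic = x.cell_load * x.max_tx_power_w / x.pa_efficiency
    return static + dynamic

# The DMF outputs: mean and standard deviation of the power consumption
# over a set of observed operating points.
samples = [PowerDmfInput(0.2, 32, 4, 0.5, 40.0, False),
           PowerDmfInput(0.8, 32, 4, 0.5, 40.0, False)]
powers = [predict_power_w(s) for s in samples]
print(statistics.mean(powers), statistics.pstdev(powers))
```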
  • in step 303 of figure 3, the ML function orchestrator 131 sends a configuration message to the DMF orchestrator 121 running in each network node 120i participating in the network policy, wherein the configuration message informs the respective network node 120i about the DMAFs to be used for the DMFs, their I/O, and the periodicity with which the DMFs should be run.
  • the DMF orchestrator 121 maps the identified I/O to specific cell level KPIs and Measurement Reports, MRs, (see step 304 of figure 3) and instructs the data collection coordination function 122 to collect and store such data in the data repository 124 (see steps 305, 306 and 307 of figure 3) . More specifically, to collect required MRs, the data collection coordination function 122 sends a MR configuration message to one or more UEs 150 attached to the network node 120i as indicated in step 308 of figure 3. The MRs reported by the one or more UEs 150 attached to the network node 120i are then stored in the data repository 124 (see steps 309 and 310 of figure 3) .
  • the DMF orchestrator 121 may indicate the frequency with which the data has to be collected/refreshed, and the time during which the collected data has to be stored in the data repository 124, by means of the data configuration message sent in step 305 of figure 3.
  • Figure 4 shows a signalling diagram illustrating the interaction between the representative network node 120i according to an embodiment and the management system 130 including the ML function orchestrator 131 according to an embodiment for training and/or executing a DMF.
  • for each DMF i, with i = 1, ..., N, the DMF orchestrator 121 retrieves the DMAF needed to train/run the DMF i (see steps 403 and 404 in figure 4). Then, based on the periodicity indicated by the ML function orchestrator 131 in the DMF configuration message (i.e., T_i), the DMF orchestrator 121 retrieves the data required for each DMF through the data collection coordination function 122 (see steps 405, 406, and 407 in figure 4). This data is continuously collected and stored in the data repository 124 (not shown in figure 4 for simplification), based on the data configuration message sent by the DMF orchestrator 121 to the data collection coordination function 122, as already described above.
  • the DMF orchestrator 121 may train the DMAF, and construct the target DMF (see step 408 in figure 4) .
  • for instance, when modelling the network spectral efficiency, the DMAF used by the DMF orchestrator 121, taking as input the cell SINR, may be a modified Shannon formula of the form SE = α · log2 (1 + β · SINR), where α and β are trainable parameters.
  • the learned parameters α and β may then be shared with the ML function orchestrator 131 (see step 409 in figure 4), such that the network spectral efficiency as a function of the SINR can be numerically evaluated by the simulated network block.
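A hedged sketch of how such a white-box DMAF could be fitted locally: a modified Shannon form SE = α · log2(1 + β · SINR) is fitted to (SINR, spectral-efficiency) samples via a grid search over β with a closed-form least-squares α. The fitting procedure is an assumption, not taken from the disclosure:

```python
import math

def fit_modified_shannon(sinr, se, betas):
    # For each candidate beta, compute the least-squares alpha in closed form
    # and keep the (alpha, beta) pair with the smallest squared error.
    best = None
    for beta in betas:
        x = [math.log2(1.0 + beta * s) for s in sinr]
        alpha = sum(xi * yi for xi, yi in zip(x, se)) / sum(xi * xi for xi in x)
        err = sum((alpha * xi - yi) ** 2 for xi, yi in zip(x, se))
        if best is None or err < best[2]:
            best = (alpha, beta, err)
    return best[0], best[1]

# Synthetic samples generated with alpha=0.75, beta=0.9.
sinr = [1.0, 2.0, 5.0, 10.0, 20.0]
se = [0.75 * math.log2(1.0 + 0.9 * s) for s in sinr]
alpha, beta = fit_modified_shannon(sinr, se, betas=[0.5, 0.7, 0.9, 1.1])
print(round(alpha, 2), beta)  # recovers 0.75 and 0.9
```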
  • the DMF parameters may be stored in the DMF repository 125 by the data model collection coordination function 123 (see steps 410, 411, and 412 in figure 4) .
  • the accuracy of the model may also be stored in the DMF repository 125. In an embodiment, the accuracy of the model may also be transferred to the ML function orchestrator 131.
  • the model accuracy may be computed using known metrics, such as the mean absolute error, the mean absolute percentage error, or the mean squared error.
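The cited accuracy metrics can be computed as follows (the helper names are illustrative, not from the embodiment):

```python
def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean absolute percentage error (expressed as a fraction)."""
    return sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean squared error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [2.0, 4.0, 5.0]
y_pred = [2.5, 3.5, 5.0]
print(mae(y_true, y_pred))  # 0.3333333333333333
```

Any of these values may be stored in the DMF repository alongside the model parameters, as described above.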
  • the DMF can be executed to continuously evaluate its accuracy, and/or its parameters may be recomputed.
  • the model parameters are shared with the ML function orchestrator 131 only when they change.
  • the new parameters are sent.
  • the difference between old and new parameters may be transmitted. If the model has recently changed, the related parameters may be sent for a fixed number of iterations even when they have not changed. In a further embodiment, the parameters may always be sent to the ML function orchestrator 131 after these parameters have been determined.
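The reporting options described in the bullets above (send only on change, send the difference, keep reporting for a fixed number of iterations after a recent change) can be sketched as follows; the class and field names are illustrative assumptions:

```python
class ParameterReporter:
    """Decides what, if anything, to send to the ML function orchestrator."""
    def __init__(self, resend_iterations=1):
        self.last_sent = None
        self.resend_left = 0
        self.resend_iterations = resend_iterations

    def report(self, params):
        """Return the message to transmit, or None to stay silent."""
        if self.last_sent is None:  # initial transfer: full parameters
            self.last_sent = list(params)
            return {"type": "full", "params": list(params)}
        if params != self.last_sent:  # changed: transmit the difference
            diff = [n - o for n, o in zip(params, self.last_sent)]
            self.last_sent = list(params)
            self.resend_left = self.resend_iterations
            return {"type": "diff", "params": diff}
        if self.resend_left > 0:  # recently changed: keep reporting
            self.resend_left -= 1
            return {"type": "full", "params": list(params)}
        return None  # unchanged: nothing is sent

r = ParameterReporter(resend_iterations=1)
print(r.report([1.0, 2.0])["type"])    # full (first transfer)
print(r.report([1.0, 2.0]))            # None (unchanged)
print(r.report([1.5, 2.0])["params"])  # [0.5, 0.0] (difference only)
```

In the "always send" embodiment the class degenerates to returning a full message on every call; the sketch focuses on the overhead-saving variants.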
  • Figure 5 shows a signalling diagram illustrating the interaction between the representative network node 120i according to an embodiment and the management system 130 including the ML function orchestrator 131 according to an embodiment for DMF performance evaluation and updating.
  • the DMF orchestrator 121 of the network node 120i is in charge of verifying the performance of the local DMFs.
  • the performance check may be done periodically, by executing the DMF (as already described above) , as indicated by the ML function orchestrator 131 in the DMF configuration message (see step 401 in figure 4) .
  • the performance check may be done after a trigger received from the data collection coordination function 122, which has observed that the statistics of the DMF-related I/O have changed.
  • the performance check may be done after a trigger received from the ML function orchestrator 131 (see step 501 in figure 5).
  • the trigger may include a specific DMAF, its I/O, and related hyperparameters (P) .
  • the DMF orchestrator 121 may (i) collect the indicated DMAF from the DMAF repository 126 via the data model collection coordination function 123 (see steps 502, 503, and 504 in figure 5) , (ii) collect the data stored in the data repository 124 (related to the I/O of the DMF under test) as indicated in steps 505, 506, and 507 in figure 5, and (iii) train the indicated DMAF.
  • the DMF orchestrator 121 may compare the performance of the newly trained DMAF with that of the stored DMF (see step 511 in figure 5). In an embodiment, the DMF orchestrator 121 may replace the stored DMF with the output of the newly trained DMAF if the loss of the latter is smaller than that of the former. In a further embodiment, the DMF orchestrator 121 may replace the stored DMF with the output of the newly trained DMAF, independently of their related performance. In a further embodiment, a combination of the two previous embodiments may be applied.
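The replacement decision in step 511 can be sketched as a simple rule over the two loss values; the function and policy names are assumptions, and the "hybrid" tolerance illustrates one possible combination of the two embodiments:

```python
def select_model(stored_loss, new_loss, policy="if_better"):
    """Return True when the stored DMF should be replaced by the
    newly trained DMAF output."""
    if policy == "if_better":  # replace only when the new loss is smaller
        return new_loss < stored_loss
    if policy == "always":     # replace independently of performance
        return True
    if policy == "hybrid":     # e.g. always, unless clearly worse
        return new_loss < 1.1 * stored_loss
    raise ValueError(policy)

print(select_model(0.20, 0.15))            # True
print(select_model(0.20, 0.25))            # False
print(select_model(0.20, 0.21, "hybrid"))  # True
```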
  • the DMF orchestrator 121 may retrieve (a) the stored DMF from the DMF repository 125 via the data model collection coordination function 123 and (b) the data stored in the data repository 124 (related to the I/O of the DMF under test) via the data collection coordination function 122. After the DMF performance evaluation, in an embodiment, the DMF orchestrator 121 may retrain the DMF when it results in a large loss or poor accuracy. Then, the DMF orchestrator 121 may store the parameters and the performance of the newly trained model (see steps 513 and 514 in figure 5) and transfer the trained, i.e. learned parameters to the ML function orchestrator 131 (see step 512 in figure 5) . In an embodiment, also the accuracy of the model may be transferred back to the ML function orchestrator 131.
  • Figure 6 shows a signalling diagram illustrating the interaction between the plurality of network nodes 120a-n, including the representative network node 120i, according to an embodiment and the management framework 130 including the ML function orchestrator 131 according to an embodiment for model/data aggregation and delivery.
  • the ML function orchestrator 131 periodically collects DMFs from multiple nodes 120a-n in the network to support policy deployment and optimization, i.e., by transferring the output model to the simulated network block (see step 603 in figure 6) .
  • These models may be aggregated by the ML function orchestrator 131 using standard ML methods (see step 602 in figure 6) .
  • the ML function orchestrator 131 may be configured to average the model parameters (a weighted average based on the model loss may also be used).
  • the ML function orchestrator 131 may implement federated learning for the model/data aggregation.
  • the ML function orchestrator 131 may be configured to select the best performing model, i.e., the model with the lowest loss.
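The three aggregation options above (plain average, loss-weighted average, best-model selection) can be sketched as follows; the helper names are illustrative and each model is represented as a (parameters, loss) pair reported by one node:

```python
def average(models):
    """Plain parameter average across nodes."""
    n = len(models)
    dim = len(models[0][0])
    return [sum(m[0][d] for m in models) / n for d in range(dim)]

def loss_weighted_average(models):
    """Weighted average: each model is weighted by the inverse of its loss."""
    weights = [1.0 / loss for _, loss in models]
    total = sum(weights)
    dim = len(models[0][0])
    return [sum(w * m[0][d] for w, m in zip(weights, models)) / total
            for d in range(dim)]

def best_model(models):
    """Select the parameters of the model with the lowest loss."""
    return min(models, key=lambda m: m[1])[0]

models = [([1.0, 2.0], 0.1), ([3.0, 4.0], 0.3)]
print(average(models))                                   # [2.0, 3.0]
print([round(v, 3) for v in loss_weighted_average(models)])  # [1.5, 2.5]
print(best_model(models))                                # [1.0, 2.0]
```

In a federated-learning deployment the same averaging step would simply be repeated over successive training rounds.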
  • the aggregation process may be implemented at the RAN, edge, or core of the cellular network to limit the overhead in the network.
  • the ML function orchestrator 131 may feed back the aggregated models to the DMF orchestrator 121 of each node 120a-n for triggering the DMF update already described above (see steps 604 and 605 in figure 6).
  • the ML function orchestrator 131 stores the aggregated model in a dedicated repository of the management system 130, e.g. the mDMF repository 135 shown in figures 1a and 1c.
  • Figure 7 shows a signalling diagram illustrating the interaction between a network node 120i according to an embodiment in the form of a network data analytics function, NWDAF, 120i and the management framework 130 including the ML function orchestrator 131 according to an embodiment.
  • the ML function orchestrator 131 may share the DMFs with the network nodes 120a-n to locally allow analytics and resource control. This may be particularly relevant for the case where the network nodes 120i in the form of network functions 120i are implemented such that they cannot access specific network parameters via standardized interfaces.
  • One example is related to virtual/physical network function energy consumption, which can be used at the NWDAF to compute the node energy efficiency, or other related parameters (such as carbon emissions or the amount of consumed energy that is generated from renewable sources), and to control network resources accordingly.
  • a network node cannot share its energy consumption through a standardized interface as the energy is typically measured through an external sensor, i.e., it is only available at the management system 130 and possibly at the node itself.
  • Embodiments disclosed herein make it possible to leverage DMFs to compute an estimate of these unavailable parameters, if the inputs related to these DMFs are available at the nodes where the parameters are required. In an embodiment, this may be the estimate of the energy consumption, or the energy efficiency of one or multiple nodes at the NWDAF 120i.
  • the NWDAF 120i receives from the ML function orchestrator 131 the DMF able to provide a characterization of the average RAN node energy consumption (see steps 701, 702, 703, and 704 in figure 7) , which may be computed as described previously (see the signaling diagram and the procedure described in figure 6) . Thereafter, the DMF orchestrator 121 in the NWDAF stores the received DMAF in the DMAF repository 126, through the support of the data model collection coordination function 123 (see steps 705 and 706 of figure 7) .
  • the DMF may be a white box function or a neural network.
  • the input parameters required to generate the output of the DMF may be (a) directly captured at the NWDAF 120i from the RAN (see steps 707, 708, and 709 of figure 7) , (b) described through other DMFs also provided by the ML function orchestrator 131, and/or (c) partly available as raw data and partly represented through DMFs.
  • the NWDAF 120i may indicate in the DMF request the input DMFs, which are also required.
  • the DMF orchestrator can run the DMF to get the required estimate and optimise network resources accordingly.
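The composition described above — an energy-consumption DMF whose inputs are either captured directly at the NWDAF or themselves produced by other DMFs — can be sketched as follows. All functions and coefficients here are illustrative assumptions, not from the embodiment:

```python
def load_dmf(hour):
    """Assumed input DMF: estimated RAN load as a function of the hour of day."""
    return 0.25 + (0.5 if 8 <= hour <= 20 else 0.0)

def energy_dmf(load, p_static=100.0, p_dynamic=200.0):
    """Assumed energy-consumption DMF: static power plus a
    load-proportional term, in watts."""
    return p_static + p_dynamic * load

def estimate_energy(hour, raw_load=None):
    """NWDAF-side estimate of RAN node energy consumption."""
    # (a) use raw data captured from the RAN when available,
    # (b)/(c) otherwise fall back to the input DMF provided by the
    # ML function orchestrator
    load = raw_load if raw_load is not None else load_dmf(hour)
    return energy_dmf(load)

print(estimate_energy(12))                # 250.0 (load from the input DMF)
print(estimate_energy(12, raw_load=0.5))  # 200.0 (raw measurement)
```

The resulting estimate gives the NWDAF a parameter (here, watts consumed) that is otherwise only available at the management system, and which it can then use to optimise network resources.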
  • FIG. 8 shows a flow diagram illustrating steps of a method 800 according to an embodiment for operating a network node, such as the representative network node 120i.
  • the method 800 comprises a step 801 of implementing a data model function, DMF, orchestrator 121 for training, executing and storing a plurality of DMFs for modelling one or more performance metrics and/or parameters of the network node 120i for each of a plurality of network policies.
  • the method 800 comprises a step 803 of implementing a DMF collection coordination function 123 for storing and retrieving the plurality of trained DMFs.
  • the method 800 further comprises a step 805 of implementing a data collection coordination function 122 for collecting and storing data for training and executing the plurality of DMFs.
  • Figure 9 shows a flow diagram illustrating steps of a method 900 according to an embodiment for operating a ML function orchestrator 131.
  • the method 900 comprises a step 901 of providing to each network node 120a-n information for generating one or more data model functions, DMFs, for modelling one or more performance metrics and/or parameters of the network node 120a-n for a network policy.
  • the method 900 comprises a step 903 of receiving, from each network node 120a-n, one or more parameters and/or hyperparameters of the one or more DMFs trained for the network policy.
  • the method 900 further comprises a step 905 of providing information about the one or more DMFs trained for the network policy, including the one or more parameters and/or hyperparameters received from the plurality of network nodes 120a-n and/or information generated based on the one or more parameters and/or hyperparameters received from the plurality of network nodes 120a-n, to a policy optimization entity 132 for optimizing the network policy.
  • embodiments disclosed herein provide an improved architecture and improved functionalities that enable an accurate characterization of network KPIs through models, ensuring limited overhead and high privacy. Transferring models instead of data significantly reduces the amount of information exchanged between network nodes and the central node responsible for optimization. Also, data is kept local and only useful data is stored. Moreover, according to embodiments disclosed herein ML models may be updated based on different centralized or local triggers. This is because embodiments disclosed herein make use of available data to detect network changes and model performance degradation and update models accordingly. Network models can be kept up-to-date while limiting communication overhead and energy consumption. The management framework/system 130 can push a model update based on e.g. optimized network performance or models received from different RAN nodes 120a-n.
  • Embodiments disclosed herein leverage ML/AI capabilities available in the network nodes 120a-n.
  • available local processing capability can be efficiently exploited, e.g., a model may be trained at night when the traffic load is low.
  • embodiments disclosed herein allow coordination across nodes in different domains.
  • the management framework/system 130 can exploit distributed learning and expert knowledge to define the features required to construct each model, select model hyperparameters, create global models from the local ones, and/or identify new model architectures. Models can be shared to characterize KPIs which cannot usually be shared through standardized interfaces.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the described embodiment of an apparatus is merely exemplary.
  • the unit division is merely logical function division and may be another division in an actual implementation.
  • a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented by using some interfaces.
  • the indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
  • the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • functional units in the embodiments of the invention may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit.

Abstract

A network node (120a-n) configured to operate based on a plurality of network policies for a cellular network is disclosed. The network node (120a-n) comprises a data model function, DMF, orchestrator (121) configured to train, execute and store a plurality of DMFs for modelling one or more performance metrics and/or parameters of the network node (120a-n) for each of the plurality of network policies. Moreover, the network node (120a-n) comprises a DMF collection coordination function (123) configured to store and retrieve the plurality of trained DMFs. The network node (120a-n) further comprises a data collection coordination function (122) configured to collect and store data used for training and executing the plurality of DMFs. Moreover, a machine learning, ML, function orchestrator (131) for supporting the deployment of a network policy for a plurality of network nodes (120a-n) of a cellular network is disclosed.

Description

DEVICES AND METHODS FOR REAL-TIME ML FRAMEWORK SUPPORTING NETWORK OPTIMIZATION TECHNICAL FIELD
The present disclosure relates to wireless communications and information processing technology. More specifically, the present disclosure relates to devices and methods for a real-time machine learning, ML, framework supporting cellular network optimization.
BACKGROUND
The performance of a real-world cellular network depends on not only the capabilities of its hardware (e.g., base stations (BSs) , mobile handsets) , but also how it is configured. Optimizing the performance of a real-world cellular network is extremely challenging because of the difficulty to predict the network performance as a function of network parameters, and the prohibitively large problem size. The conventional approach to network optimization relies on measurement campaigns, trial-and-error and engineering experience, which are costly and time-consuming. A critical challenge for network optimization is the selection of the best combination of network parameters out of a gigantic solution space.
Recent research is showing the remarkable potential of artificial intelligence (AI) , in particular machine learning (ML) in solving complex mathematical optimization problems. Accordingly, 3GPP is working towards the integration of AI and ML in future mobile networks by design. More specifically, 3GPP Release 16 has focused on leveraging AI for enhanced Network Automation, Minimizing Drive Test, and Self-Organizing-Networks; 3GPP Release 17 has focused on Enhancements for 5G interfaces for facilitating ML procedures, network and device data collection to support selected use cases, and introducing new Quality of Service (QoS) definitions tailored for ML models transferred over 5G. Finally, 3GPP Release 18 is focusing on AI/ML-enabled air interface design, network optimization, and architecture.
This interest is pushed by the potentially large availability of network data to train AI/ML models. In fact, each BS generates approximately 3,000 key performance indicators (KPIs) per hour. In a city with 10k BSs, the cumulative KPI count exceeds 7 billion per day. When the KPIs for 1 million subscribers are factored in, the daily count skyrockets to an astounding 37 billion KPIs. This vast amount of data opens opportunities for the implementation of AI-optimized cellular networks. Nevertheless, the abundance of data entails a large overhead related to data transfer, processing and storage functionalities. Therefore, there is a need for a resource-efficient architecture that supports future AI/ML optimization frameworks.
SUMMARY
It is an objective to provide improved devices and methods for a real-time machine learning, ML, framework supporting cellular network optimization.
The foregoing and other objectives are achieved by the subject matter of the independent claims. Further implementation forms are apparent from the dependent claims, the description and the figures.
According to a first aspect a network node configured to operate based on a plurality of different network policies for a cellular network is provided. The network node comprises, i.e. implements, a data model function, DMF, orchestrator entity configured to train, execute and store a plurality of DMFs for modelling one or more performance metrics and/or parameters of the network node for each of the plurality of different network policies.
Moreover, the network node comprises a DMF collection coordination function configured to store and retrieve the plurality of trained DMFs and a data collection coordination function configured to collect and store data used for training and executing the plurality of DMFs.
As will be described in more detail in the following, implementation forms and embodiments disclosed herein provide an improved architecture and improved functionalities that enable an accurate characterization of network KPIs through data models, ensuring limited overhead and high privacy. Transferring models instead of data significantly reduces the amount of information exchanged between network nodes and the central node responsible for optimization. Also, data is kept local and only useful data is stored. Moreover, according to implementation forms and embodiments disclosed herein ML models may be updated based on different centralized or local triggers. This is because implementation forms and embodiments disclosed herein make use of available data to detect network changes and model performance degradation and update models accordingly. Network models can be kept up-to-date while limiting communication overhead and energy consumption. A management framework/system can push a model update based on e.g., observed network performance or models received from different network nodes. Implementation forms and embodiments disclosed herein leverage ML/AI capabilities available in the network nodes. Thus, available local processing capability can be efficiently exploited, e.g., a model may be trained at night when the traffic load is low. Moreover, implementation forms and embodiments disclosed herein allow coordination across nodes in different domains. For instance, the management framework/system may exploit generative AI and expert knowledge to define the features required to construct each model, select model hyperparameters, and/or identify new model architectures.
Similarly, it may exploit distributed learning to create global models from the local ones. Data models can be shared to characterize KPIs which cannot usually be shared through standardized interfaces.
In a further possible implementation form, one or more of the plurality of DMFs comprise one or more data model atomic functions, DMAFs, and wherein the DMF collection coordination function is further configured to store the DMAFs of the one or more DMFs.
In a further possible implementation form, the one or more DMAFs comprise one or more black box atomic functions and/or one or more white box atomic functions.
In a further possible implementation form, the DMF orchestrator entity is configured to receive instructions from a management entity of the cellular network for generating and/or updating the one or more DMFs based on the one or more DMAFs for one or more of the plurality of network policies.
In a further possible implementation form, the DMF orchestrator entity is configured to provide one or more trained parameters and/or hyperparameters of one or more DMFs trained for one or more of the plurality of network policies to the management entity.
In a further possible implementation form, the data collection coordination function entity is configured to collect and store the data for training and executing the plurality of DMFs, in response to a request, i.e., trigger from the DMF orchestrator entity.
In a further possible implementation form, the request from the DMF orchestrator entity to the data collection coordination function entity may indicate a frequency for collecting the data, such as, for instance, every 10 minutes, and/or a time period for which the collected data is to be stored.
In a further possible implementation form, the network node may be a base station, a routing node, or a controller of the cellular network.
According to a second aspect a method is provided for operating a network node configured to operate based on a plurality of different network policies for a cellular network, in particular a 3GPP cellular network. The method according to the second aspect comprises the steps of:
implementing a data model function, DMF, orchestrator entity for training, executing and storing a plurality of DMFs for modelling one or more performance metrics and/or parameters of the network node for each of the plurality of different network policies;
implementing a DMF collection coordination function entity for storing and retrieving the plurality of trained DMFs; and
implementing a data collection coordination function entity for collecting and storing data for training and executing the plurality of DMFs.
The method according to the second aspect can be performed by the network node according to the first aspect. Thus, further features of the method according to the second aspect result directly from the functionality of the network node according to the first aspect as well as its different implementation forms described above and below.
According to a third aspect a machine learning, ML, function orchestrator entity is provided for supporting the deployment of an improved, in particular optimized network policy for a plurality of network nodes of a cellular network, in particular a 3GPP cellular network. The ML function orchestrator entity is configured to provide to each network node information, for instance, instructions for generating one or more trainable data model functions, DMFs, for modelling one or more performance metrics and/or parameters of the network node for a network policy. Moreover, the ML function orchestrator entity is configured to receive, from each network node, one or more parameters and/or hyperparameters of the one or more DMFs trained for the network policy. The ML function orchestrator entity is further configured to provide information about the one or more DMFs trained for the network policy, including the one or more parameters and/or hyperparameters received from the plurality of network nodes and/or information generated based on the one or more parameters and/or hyperparameters received from the plurality of network nodes, to a policy optimization entity for optimizing the network policy.
In a further possible implementation form, the ML function orchestrator entity is configured to receive the information, for instance, instructions, for generating the one or more trainable DMFs for modelling the one or more performance metrics and/or parameters of the network node for the network policy from a policy management function entity of the cellular network.
In a further possible implementation form, the one or more trainable DMFs comprise one or more data model atomic functions, DMAFs, wherein the ML function orchestrator entity is configured to determine information, for instance, instructions, for the network node, in particular the DMF orchestrator thereof, for generating the one or more trainable DMFs based on the one or more DMAFs.
In a further possible implementation form, the one or more DMAFs comprise one or more black box atomic functions and/or one or more white box atomic functions.
In a further possible implementation form, the ML function orchestrator entity is configured to aggregate the one or more parameters and/or hyperparameters received from each network node about the one or more DMFs trained for the network policy. In an implementation form, the ML function orchestrator entity may be configured to aggregate the one or more parameters and/or hyperparameters from the plurality of network nodes, for instance, by averaging the plurality of parameters and/or hyperparameters.
According to a fourth aspect a method is provided for operating a machine learning, ML, function orchestrator entity for supporting the deployment of an improved, in particular optimized network policy for a plurality of network nodes of a cellular network, in particular a 3GPP cellular network. The method according to the fourth aspect comprises the steps of:
providing to each network node information, for instance, instructions, for generating one or more trainable data model functions, DMFs, for modelling one or more performance metrics and/or parameters of the network node for a network policy;
receiving, from each network node, one or more parameters and/or hyperparameters of the one or more DMFs trained for the network policy, and
providing information about the one or more DMFs trained for the network policy, including the one or more parameters and/or hyperparameters received from the plurality of network nodes and/or information generated based on the one or more parameters and/or hyperparameters received from the plurality of network nodes, to a policy optimization entity for optimizing the network policy.
The method according to the fourth aspect can be performed by the ML function orchestrator entity according to the third aspect. Thus, further features of the method according to the fourth aspect result directly from the functionality of the ML function orchestrator entity according to the third aspect as well as its different implementation forms described above and below.
According to a fifth aspect a cellular network, in particular a 3GPP cellular network, is provided, wherein the cellular network comprises a plurality of network nodes according to the first aspect and a ML function orchestrator entity according to the third aspect.
In a further possible implementation form, the plurality of network nodes are implemented in a radio access network, RAN, transport network, and/or a core network, CN, of the cellular network and/or the ML function orchestrator is implemented as a function of a network management system and/or a further network node in a radio access network, RAN, transport network, and/or a core network of the cellular network.
According to a sixth aspect a computer program product is provided, comprising a computer-readable storage medium for storing program code which causes a computer or a processor to perform the method according to  the second aspect or the method according to the fourth aspect when the program code is executed by the computer or the processor.
Details of one or more embodiments are set forth in the accompanying drawings and the description below.
Other features, objects, and advantages will be apparent from the description, drawings, and claims.
BRIEF DESCRIPTION OF THE DRAWINGS
In the following, embodiments of the present disclosure are described in more detail with reference to the attached figures and drawings, in which:
Fig. 1a shows a schematic diagram illustrating a cellular network with a plurality of network nodes according to an embodiment and a management framework including a ML function orchestrator entity according to an embodiment;
Fig. 1b shows a detailed view of a network node according to an embodiment;
Fig. 1c shows a detailed view of the management framework including the ML function orchestrator entity according to an embodiment;
Fig. 2 shows a signalling diagram illustrating the interaction between a network node according to an embodiment and the management framework including the ML function orchestrator entity according to an embodiment for deploying DMAFs;
Fig. 3 shows a signalling diagram illustrating the interaction between a UE, a network node according to an embodiment and the management framework including the ML function orchestrator entity according to an embodiment for data collection by the network node;
Fig. 4 shows a signalling diagram illustrating the interaction between a network node according to an embodiment and the management framework including the ML function orchestrator entity according to an embodiment for executing a DMF;
Fig. 5 shows a signalling diagram illustrating the interaction between a network node according to an embodiment and the management framework including the ML function orchestrator entity according to an embodiment for DMF performance evaluation and updating;
Fig. 6 shows a signalling diagram illustrating the interaction between a network node according to an embodiment and the management framework including the ML function orchestrator entity according to an embodiment for model aggregation and delivery;
Fig. 7 shows a signalling diagram illustrating the interaction between a network node according to an embodiment in the form of a network data analytics function, NWDAF, and the management framework including the ML function orchestrator entity according to an embodiment;
Fig. 8 shows a flow diagram illustrating steps of a method according to an embodiment for operating a network node of a cellular network; and
Fig. 9 shows a flow diagram illustrating steps of a method according to an embodiment for operating a ML function orchestrator entity of a cellular network.
In the following, identical reference signs refer to identical or at least functionally equivalent features.
DETAILED DESCRIPTION OF THE EMBODIMENTS
In the following description, reference is made to the accompanying figures, which form part of the disclosure, and which show, by way of illustration, specific aspects of embodiments of the present disclosure or specific aspects in which embodiments of the present disclosure may be used. It is understood that embodiments of the present disclosure may be used in other aspects and comprise structural or logical changes not depicted in the figures. The following detailed description, therefore, is not to be taken in a limiting sense, and the scope of the present disclosure is defined by the appended claims.
For instance, it is to be understood that a disclosure in connection with a described method may also hold true for a corresponding device or system configured to perform the method and vice versa. For example, if one or a plurality of specific method steps are described, a corresponding device may include one or a plurality of units, e.g. functional units, to perform the described one or plurality of method steps (e.g. one unit performing the one or plurality of steps, or a plurality of units each performing one or more of the plurality of steps) , even if such one or more units are not explicitly described or illustrated in the figures. On the other hand, for example, if a specific apparatus is described based on one or a plurality of units, e.g. functional units, a corresponding method may include one step to perform the functionality of the one or plurality of units (e.g. one step performing the functionality of the one or plurality of units, or a plurality of steps each performing the functionality of one or more of the plurality of units) , even if such one or plurality of steps are not explicitly described or illustrated in the figures. Further, it is understood that the features of the various exemplary embodiments and/or aspects described herein may be combined with each other, unless specifically noted otherwise.
Figure 1a shows a schematic diagram illustrating a cellular network with a plurality of network nodes 120a-n according to an embodiment and a management system 130 (also referred to as management framework 130) including a machine learning, ML, function orchestrator entity 131 (or short ML function orchestrator 131) according to an embodiment. Figure 1b shows a more detailed view of the components of one representative network node 120a of the plurality of network nodes 120a-n according to an embodiment, while figure 1c shows a more detailed view of the components of the management system 130 including the ML function orchestrator 131 according to an embodiment. The plurality of network nodes 120a-n may be implemented in a radio access network, RAN, transport network, and/or a core network, CN, of a cellular network.
The plurality of network nodes 120a-n may comprise one or more base stations, routing nodes, or controllers of the cellular network. The ML function orchestrator 131 may be implemented, for instance, as a function of the network management system 130 and/or as a further network node in a radio access network, RAN, transport network, and/or a core network.
As can be taken from figures 1a and 1b and as will be described in more detail in the following, in an embodiment, the representative network node 120a may comprise a data model function, DMF, orchestrator 121, a data collection coordination function 122 and a data model collection coordination function 123. Moreover, the representative network node 120a may comprise a data repository 124, a DMF repository 125, and a DMAF repository 126.
As can be taken from figures 1a and 1c and as will be described in more detail in the following, in an embodiment, the management system 130 may comprise, in addition to the ML function orchestrator 131, a model/optimization block 132, a policy management entity 133, an atomic function repository 134 and a mDMF repository 135. As will be described in more detail below, in an embodiment the ML function orchestrator entity 131 generally has the role of supporting the deployment and optimization of a network policy by delivering to the model/optimization block 132 all the models used to simulate the network environment.
More specifically, for each network node 120a-n the DMF orchestrator 121 is configured to train, execute and store a plurality of DMFs for modelling one or more performance metrics and/or parameters of the network node 120a-n for each of a plurality of network policies. The DMF collection coordination function 123 is configured to store and retrieve the plurality of trained DMFs, while the data collection coordination function 122 is configured to collect and store data used for training and executing the plurality of DMFs.
As will be described in more detail below, the ML function orchestrator 131, for supporting the deployment of a network policy for the plurality of network nodes 120a-n, is configured to provide to each network node 120a-n information, for instance instructions, for generating one or more data model functions, DMFs, for modelling one or more performance metrics and/or parameters of the network node 120a-n for a given network policy. Moreover, the ML function orchestrator 131 is configured to receive in response, from each network node 120a-n, one or more parameters and/or hyperparameters of the one or more DMFs trained for the given network policy, and to provide information about the one or more DMFs trained for the given network policy, including the one or more parameters and/or hyperparameters received from the plurality of network nodes 120a-n and/or information generated based on these parameters and/or hyperparameters, to the model/optimization block 132, i.e. a policy optimization entity 132 for optimizing the given network policy.
Thus, the main role of the ML function orchestrator 131 is to support the network policy deployment and optimization by specifying the DMFs to be constructed for modelling a given network policy, transferring instructions to the network nodes 120a-n to build these models, and delivering the trained models to the model/optimization block 132, which uses them to simulate the network environment.
More specifically and as illustrated in figures 1a-c, the ML function orchestrator 131 receives from the policy management entity 133 instructions on the models required to mimic and optimize a given network policy. In an embodiment, the ML function orchestrator 131 may, for instance, based on expert knowledge, define the input required for each model and may collect from the atomic functions repository 134 the Data Model Atomic Functions, DMAFs, to support the construction of the DMFs. In an embodiment, the atomic functions may comprise black box functions (neural networks, gradient boosting, etc.) or white box functions (explicit analytical expressions). The ML function orchestrator 131 sends to the network nodes 120a-n the instructions to construct/update the local DMFs.
After the reception of the DMFs from the network nodes 120a-n, the ML function orchestrator 131 may save them in the management DMF (mDMF) repository 135 and transfer them to the model/optimization block 132, where the simulated network function 132c mimics the network behavior based on the received models, the optimal policy parameters are found by the policy optimization function 132b (e.g., using gradient-based stochastic optimization models), and the network policies 132a are updated accordingly and then tested through the simulated network function. The process may run iteratively until convergence.
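As a purely illustrative sketch of this iterative test-and-update loop, the following snippet is hypothetical: the names `simulate` and `optimize_policy` are not part of the embodiments, and a simple seeded random search stands in for the gradient-based stochastic optimization models mentioned above.

```python
import random

def optimize_policy(simulate, initial_params, iterations=200, step=0.1, seed=0):
    """Iteratively test candidate policy parameters against a simulated
    network function (a stand-in built from the received DMFs) and keep
    only the updates that improve the simulated KPI."""
    rng = random.Random(seed)
    best_params, best_score = list(initial_params), simulate(initial_params)
    for _ in range(iterations):
        # Perturb the current policy parameters and re-test via simulation.
        candidate = [p + rng.uniform(-step, step) for p in best_params]
        score = simulate(candidate)
        if score > best_score:  # accept the update only if the KPI improves
            best_params, best_score = candidate, score
    return best_params, best_score
```

Here `simulate` plays the role of the simulated network function 132c, returning a scalar KPI to be maximized; in the embodiments this loop would run until convergence rather than for a fixed number of iterations.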
At each network node 120a-n, the data model function, DMF, orchestrator 121 is in charge of locally training, executing and maintaining the DMFs requested by the ML function orchestrator 131. More specifically, in a first stage the DMF orchestrator 121 receives from the ML function orchestrator 131 the DMAFs and related instructions (input, output, and a set of hyperparameters) for the models required by each network policy. In a second stage, the DMF orchestrator 121 initiates data collection supported by the data collection coordination function 122. This data may include measurement reports, such as reference signal received power or signal-to-noise ratio, as well as cell level KPIs, such as traffic volume or number of active users, as disclosed, for instance, in 3GPP TS 28.552. The selected data is stored in the data repository 124 and kept up-to-date by the data collection coordination function 122. The data collection coordination function 122 also has the role of performing simple data processing to generate local analytics, e.g., to determine whether the statistics of a performance measurement have changed.
In a third stage, the DMF orchestrator 121 of each network node 120a-n stores received DMAFs and trained DMFs with the help of the data model collection coordination function 123. Specifically, DMAFs and DMFs may be stored in the DMAF repository 126 and the DMF repository 125, respectively. In a fourth stage, the DMF orchestrator 121 periodically runs the generated DMFs to evaluate and update them if necessary based on (a) the local DMF performance; (b) observed input changes signaled by the data collection coordination function 122; and/or (c) requests by the ML function orchestrator 131. Finally, the DMF orchestrator 121 transfers generated/updated DMFs to the ML function orchestrator 131.
Figure 2 shows a signalling diagram illustrating the interaction between the representative network node 120i according to an embodiment and the management system 130 including the ML function orchestrator 131 according to an embodiment for DMAF deployment, i.e. the process (that may be an offline process) for deploying available DMAFs from the management system 130 to the plurality of network nodes 120a-n, including the representative network node 120i. In an embodiment, DMAFs may be white box or black box functions which are used to construct the DMFs at the network nodes 120a-n. More specifically, in an embodiment, there may be two types of DMAFs, namely white box functions, i.e. analytical relations known by the experts (e.g., Shannon formula, linear regression), and black box functions (e.g., neural networks, gradient boosting).
As already mentioned above, the DMAF deployment at the representative network node 120i illustrated in figure 2 may be realized offline, independently from the network policy to deploy/optimize, e.g., when the representative network node 120i is deployed. According to an embodiment, new DMAFs may be provided by the ML function orchestrator 131 to the representative network node 120i at runtime, if needed. For instance, novel atomic functions may be required to support a new policy deployment, which has to consider parameters that were not yet modelled by the optimization framework 132. More specifically, in this process, as illustrated in figure 2, the ML function orchestrator 131 is configured to collect the atomic functions needed for the policy by sending a DMAF request to the atomic function repository 134 and receiving the requested DMAFs from the atomic function repository 134 (see steps 201 and 202 of figure 2). Thereafter, the ML function orchestrator 131 transfers the collected DMAFs to the representative network node 120i, more specifically to the DMF orchestrator 121 thereof (see step 203 of figure 2). The DMF orchestrator 121 stores the received DMAFs in the DMAF repository 126, through the support of the data model collection coordination function 123 (see steps 204, 205 and 206 of figure 2).
Figure 3 shows a signalling diagram illustrating the interaction between one or more UEs 150, the representative network node 120i according to an embodiment and the management system 130 including the ML function orchestrator 131 according to an embodiment for data collection. Based on a list of KPIs to characterize, specified by the policy management function 133, the ML function orchestrator 131 determines the inputs and outputs required by each DMF used by the network policy (see steps 301, 302 of figure 3). In an embodiment, these steps may be implemented based on expert knowledge or the output of a (generative) AI model, trained, e.g., on 3GPP documents or equipment specifications.
For instance, when modelling the energy consumption of a network node 120i in the form of a base station to optimize the network power consumption, the related DMF may require one or more of the following inputs associated with the network node 120i: the cell load; the number of active antennas; the number of active transceivers; the efficiency of the power amplifier; the maximum transmit power; and/or the status of energy saving features, such as carrier shutdown, channel shutdown, or symbol shutdown. Moreover, the DMF may need to generate as outputs the mean power consumption of the base station 120i and the standard deviation thereof.
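The following sketch illustrates what such an energy-consumption DMF might look like as a simple white box function. The linear power model, the function name, and all constants are assumptions for illustration only and are not taken from the embodiments.

```python
def bs_power_consumption(load, n_active_tx, p_max_w, pa_efficiency,
                         p_static_w=80.0, carrier_shutdown=False):
    """Illustrative (hypothetical) DMF for base-station power consumption:
    a linear model where consumption grows with the cell load and the
    number of active transceivers; all constants are placeholders."""
    if carrier_shutdown:
        # Assumed deep-sleep floor when the carrier shutdown feature is active.
        return p_static_w * 0.3
    # Dynamic part: transmit power scaled by load and power-amplifier efficiency.
    p_dynamic = n_active_tx * load * p_max_w / pa_efficiency
    return p_static_w + p_dynamic
```

In the framework above, the inputs of this function would be filled from the data repository 124, and the learned constants would be the parameters reported back to the ML function orchestrator 131.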
In step 303 of figure 3 the ML function orchestrator 131 sends a configuration message to the DMF orchestrator 121 running in each network node 120i participating in the network policy, wherein the configuration message informs the respective network node 120i about the DMAFs to be used for the DMFs, their I/O, and the periodicity with which the DMFs should be run.
To conclude this process, the DMF orchestrator 121 maps the identified I/O to specific cell level KPIs and Measurement Reports, MRs, (see step 304 of figure 3) and instructs the data collection coordination function 122 to collect and store such data in the data repository 124 (see steps 305, 306 and 307 of figure 3) . More specifically, to collect required MRs, the data collection coordination function 122 sends a MR configuration message to one or more UEs 150 attached to the network node 120i as indicated in step 308 of figure 3. The MRs reported by the one or more UEs 150 attached to the network node 120i are then stored in the data repository 124 (see steps 309 and 310 of figure 3) .
Also, by means of the data configuration message sent in step 305 of figure 3, the DMF orchestrator 121 may indicate the frequency with which the data has to be collected/refreshed and the time during which the collected data has to be stored in the data repository 124.
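A minimal sketch of how such a data configuration (collection frequency and retention time) could be represented at the network node is given below; the class and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DataConfig:
    """Hypothetical shape of the data configuration message of step 305:
    how often to refresh a KPI/MR and how long to retain its samples."""
    kpi_name: str
    refresh_period_s: float   # collection/refresh frequency
    retention_s: float        # how long the data repository keeps samples

def prune(samples, cfg, now):
    """Drop repository entries older than the configured retention window.
    `samples` is a list of (timestamp, value) pairs."""
    return [(t, v) for (t, v) in samples if now - t <= cfg.retention_s]
```

A data collection coordination function could apply `prune` each time new samples arrive, keeping the data repository within the configured retention window.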
Figure 4 shows a signalling diagram illustrating the interaction between the representative network node 120i according to an embodiment and the management system 130 including the ML function orchestrator 131 according to an embodiment for training and/or executing a DMF. As illustrated in step 402 of figure 4, at each network node, such as the representative network node 120i, participating in the deployment optimization, the DMF running process starts with the DMF orchestrator 121 receiving a DMFs configuration message from the ML function orchestrator 131 (see step 401 in figure 4) and then requesting the data model collection coordination function 123 to retrieve from the DMAF repository the DMAFi, where i = 1, ..., N is the index of the i-th DMF, needed to train/run the DMFi (see steps 403 and 404 in figure 4). Then, based on the periodicity indicated by the ML function orchestrator 131 in the DMFs configuration message (i.e., Ti), the DMF orchestrator 121 retrieves the data required for each DMF through the data collection coordination function 122 (see steps 405, 406, and 407 in figure 4). This data is continuously collected and stored in the data repository 124 (which is not shown in figure 4 for simplification), based on the data configuration message sent by the DMF orchestrator 121 to the data collection coordination function 122, as already described above.
After receiving the required data, the DMF orchestrator 121 may train the DMAF and construct the target DMF (see step 408 in figure 4). For instance, considering a DMF modelling the cell spectral efficiency (SE), i.e., the data rate that can be sent per unit of frequency resource, the DMAF used by the DMF orchestrator 121, taking as input the cell SINR, may be a scaled Shannon-type function such as SE = α·log2 (1 + β·SINR).
After training, the DMF orchestrator 121 may find the following trained, i.e., learned parameters: α=0.5 and β=0.9. The learned parameters α and β may then be shared with the ML function orchestrator 131 (see step 409 in figure 4) , such that the network spectral efficiency as a function of the SINR can be numerically evaluated by the simulated network block.
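For illustration, the trained SE model can be evaluated numerically as follows; the scaled Shannon-type functional form is an assumption here, chosen for consistency with the learned parameters α and β given above.

```python
import math

def spectral_efficiency(sinr_linear, alpha=0.5, beta=0.9):
    """Evaluate the (assumed) scaled Shannon-type SE model with the
    learned parameters alpha and beta; sinr_linear is the cell SINR
    in linear scale, the result is in bit/s/Hz."""
    return alpha * math.log2(1.0 + beta * sinr_linear)
```

This is the kind of numerical evaluation the simulated network block would perform once the learned parameters have been received from the network node.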
Similarly, the DMF parameters may be stored in the DMF repository 125 by the data model collection coordination function 123 (see steps 410, 411, and 412 in figure 4) .
In an embodiment, the accuracy of the model may also be stored in the DMF repository 125. In an embodiment, the accuracy of the model may also be transferred to the ML function orchestrator 131. The model accuracy may be computed using known metrics, such as the mean absolute error, the mean absolute percentage error, or the mean squared error.
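The accuracy metrics named above can be computed side by side as in the following self-contained sketch (no external ML library assumed; the function name is illustrative).

```python
def model_accuracy(y_true, y_pred):
    """Compute the mean absolute error (MAE), mean absolute percentage
    error (MAPE, in percent), and mean squared error (MSE) between
    observed and DMF-predicted values."""
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mape = 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / n
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    return {"mae": mae, "mape": mape, "mse": mse}
```

Any of these values could be stored in the DMF repository 125 alongside the model parameters and transferred to the ML function orchestrator 131.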
In the following iterations, the DMF can be executed to continuously evaluate its accuracy, and/or its parameters may be recomputed. In an embodiment, the model parameters are shared with the ML function orchestrator 131 only when they change. In an embodiment, the new parameters are sent. In a further embodiment, the difference between old and new parameters may be transmitted. If the model has recently changed, the related parameters may be sent for a fixed number of iterations even when they have not changed. In a further embodiment, the parameters may always be sent to the ML function orchestrator 131 after these parameters have been determined.
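The parameter reporting policies described above can be sketched as a single decision function; the name `parameters_to_send` and the change tolerance are illustrative assumptions.

```python
def parameters_to_send(old, new, recently_changed, tol=1e-6, send_diff=False):
    """Decide what a node reports to the ML function orchestrator:
    nothing when the parameters are unchanged (unless the model changed
    recently), otherwise either the new values or the old/new difference."""
    changed = any(abs(o - n) > tol for o, n in zip(old, new))
    if not changed and not recently_changed:
        return None                       # suppress the update entirely
    if send_diff:
        return [n - o for o, n in zip(old, new)]
    return list(new)
```

Suppressing unchanged updates and transmitting differences both serve the same goal stated in the embodiments: keeping the signalling overhead between the nodes and the management system low.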
Figure 5 shows a signalling diagram illustrating the interaction between the representative network node 120i according to an embodiment and the management system 130 including the ML function orchestrator 131 according to an embodiment for DMF performance evaluation and updating. The DMF orchestrator 121 of the network node 120i is in charge of verifying the performance of the local DMFs. In an embodiment, the performance check may be done periodically, by executing the DMF (as already described above), as indicated by the ML function orchestrator 131 in the DMF configuration message (see step 401 in figure 4). In a further embodiment, the performance check may be done after a trigger received from the data collection coordination function 122, which has observed that the statistics of the DMF-related I/O have changed.
In a further embodiment, the performance check may be done after a trigger received from the ML function orchestrator 131 (see step 501 in figure 5). The trigger may include a specific DMAF, its I/O, and related hyperparameters (P). In such an embodiment, the DMF orchestrator 121 may (i) collect the indicated DMAF from the DMAF repository 126 via the data model collection coordination function 123 (see steps 502, 503, and 504 in figure 5), (ii) collect the data stored in the data repository 124 (related to the I/O of the DMF under test) as indicated in steps 505, 506, and 507 in figure 5, and (iii) train the indicated DMAF. In an embodiment, the DMF orchestrator 121 may compare the performance of the newly trained DMAF with that of the stored DMF (see step 511 in figure 5). In an embodiment, the DMF orchestrator 121 may replace the stored DMF with the output of the newly trained DMAF if the loss of the latter is smaller than the one of the former. In a further embodiment, the DMF orchestrator 121 may replace the stored DMF with the output of the newly trained DMAF, independently of their related performance. In a further embodiment, a combination of the two previous embodiments may be applied.
To verify the DMF performance, the DMF orchestrator 121 may retrieve (a) the stored DMF from the DMF repository 125 via the data model collection coordination function 123 and (b) the data stored in the data repository 124 (related to the I/O of the DMF under test) via the data collection coordination function 122. After the DMF performance evaluation, in an embodiment, the DMF orchestrator 121 may retrain the DMF when it results in a large loss or poor accuracy. Then, the DMF orchestrator 121 may store the parameters and the performance of the newly trained model (see steps 513 and 514 in figure 5) and transfer the trained, i.e. learned, parameters to the ML function orchestrator 131 (see step 512 in figure 5). In an embodiment, the accuracy of the model may also be transferred back to the ML function orchestrator 131.
Figure 6 shows a signalling diagram illustrating the interaction between the plurality of network nodes 120a-n, including the representative network node 120i, according to an embodiment and the management framework 130 including the ML function orchestrator 131 according to an embodiment for model/data aggregation and delivery. As can be taken from step 601 in figure 6, the ML function orchestrator 131 periodically collects DMFs from multiple nodes 120a-n in the network to support policy deployment and optimization, i.e., by transferring the output model to the simulated network block (see step 603 in figure 6). These models may be aggregated by the ML function orchestrator 131 using standard ML methods (see step 602 in figure 6). For instance, in an embodiment, the ML function orchestrator 131 may be configured to average the model parameters (a weighted average based on the model loss may also be used). In a further embodiment, the ML function orchestrator 131 may implement federated learning for the model/data aggregation. In a further embodiment, the ML function orchestrator 131 may be configured to select the best performing model, i.e., the model with the lowest loss. In an embodiment, the aggregation process may be implemented at the RAN, edge, or core of the cellular network to limit the overhead in the network. In an embodiment, the ML function orchestrator 131 may feed back the aggregated models to the DMF orchestrator 121 of each node 120a-n for triggering the DMF update already described above (see steps 604 and 605 in figure 6). In an embodiment, the ML function orchestrator 131 stores the aggregated model in a dedicated repository of the management system 130, e.g. the mDMF repository 135 shown in figures 1a and 1c.
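The aggregation options mentioned above (plain parameter averaging, loss-weighted averaging, and best-model selection) can be sketched as follows; the inverse-loss weighting is one of several reasonable choices and is an assumption here, as is the function name.

```python
def aggregate_dmfs(node_params, node_losses, method="average"):
    """Aggregate per-node DMF parameter vectors into a global model.
    node_params: list of parameter lists, one per node.
    node_losses: one scalar loss per node (lower is better)."""
    if method == "best":
        # Select the model with the lowest loss.
        best = min(range(len(node_losses)), key=node_losses.__getitem__)
        return node_params[best]
    if method == "loss_weighted":
        # Lower loss -> higher weight (inverse-loss weighting).
        w = [1.0 / l for l in node_losses]
        s = sum(w)
        return [sum(wi * p[i] for wi, p in zip(w, node_params)) / s
                for i in range(len(node_params[0]))]
    # Plain parameter average (FedAvg-style with equal weights).
    return [sum(p[i] for p in node_params) / len(node_params)
            for i in range(len(node_params[0]))]
```

The plain average corresponds to equal-weight federated averaging; weighting by loss or by data volume are common refinements of the same idea.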
Figure 7 shows a signalling diagram illustrating the interaction between a network node 120i according to an embodiment in the form of a network data analytics function, NWDAF, 120i and the management framework 130 including the ML function orchestrator 131 according to an embodiment. In this embodiment, the ML function orchestrator 131 may share the DMFs with the network nodes 120a-n to locally allow analytics and resource control. This may be particularly relevant for the case where the network nodes 120i in the form of network functions 120i are implemented such that they cannot access specific network parameters via standardized interfaces. One example is related to virtual/physical network function energy consumption, which can be used at the NWDAF to compute the node energy efficiency or other related parameters (such as carbon emissions or the amount of consumed energy that is generated by renewable energies) and to control network resources accordingly. However, currently a network node cannot share its energy consumption through a standardized interface, as the energy is typically measured through an external sensor, i.e., it is only available at the management system 130 and possibly at the node itself. Embodiments disclosed herein make it possible to leverage DMFs to compute an estimate of these unavailable parameters, if the inputs related to these DMFs are available at the nodes where the parameters are required. In an embodiment, this may be the estimate of the energy consumption or the energy efficiency of one or multiple nodes at the NWDAF 120i. In this case, the NWDAF 120i receives from the ML function orchestrator 131 the DMF able to provide a characterization of the average RAN node energy consumption (see steps 701, 702, 703, and 704 in figure 7), which may be computed as described previously (see the signaling diagram and the procedure described in figure 6).
Thereafter, the DMF orchestrator 121 in the NWDAF stores the received DMAF in the DMAF repository 126, through the support of the data model collection coordination function 123 (see steps 705 and 706 of figure 7) .
As already described above, the DMF may be a white box function or a neural network. The input parameters required to generate the output of the DMF may be (a) directly captured at the NWDAF 120i from the RAN (see steps 707, 708, and 709 of figure 7), (b) described through other DMFs also provided by the ML function orchestrator 131, and/or (c) partly available as raw data and partly represented through DMFs. In the cases (b) and (c), the NWDAF 120i may indicate in the DMF request the input DMFs, which are also required. In step 710, the DMF orchestrator 121 can run the DMF to get the required estimate and optimize network resources accordingly.
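Cases (b) and (c) above amount to composing DMFs: some inputs of the target DMF arrive as raw data, while others must first be produced by other DMFs. A hedged sketch of such a composition (all names and functions are illustrative stand-ins, and the input DMFs are assumed to be ordered so that dependencies come first) could look as follows.

```python
def compose_dmfs(raw_inputs, input_dmfs, output_dmf):
    """Evaluate a target DMF whose inputs are partly raw data and partly
    the outputs of other DMFs.
    raw_inputs: dict of directly captured values.
    input_dmfs: dict mapping a feature name to (function, list of input names),
                ordered so that any DMF's inputs are already available.
    output_dmf: the target DMF, taking the completed feature dict."""
    features = dict(raw_inputs)
    for name, (fn, args) in input_dmfs.items():
        # Fill in each missing feature by running the corresponding DMF.
        features[name] = fn(*[features[a] for a in args])
    return output_dmf(features)
```

For instance, a hypothetical energy-efficiency estimate at the NWDAF could combine a raw cell load with a power-consumption DMF supplied by the ML function orchestrator.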
Figure 8 shows a flow diagram illustrating steps of a method 800 according to an embodiment for operating a network node, such as the representative network node 120i. The method 800 comprises a step 801 of implementing a data model function, DMF, orchestrator 121 for training, executing and storing a plurality of DMFs for modelling one or more performance metrics and/or parameters of the network node 120i for each of a plurality of network policies. Moreover, the method 800 comprises a step 803 of implementing a DMF collection coordination function 123 for storing and retrieving the plurality of trained DMFs. The method 800 further comprises a step 805 of implementing a data collection coordination function 122 for collecting and storing data for training and executing the plurality of DMFs.
Figure 9 shows a flow diagram illustrating steps of a method 900 according to an embodiment for operating a ML function orchestrator 131. The method 900 comprises a step 901 of providing to each network node 120a-n information for generating one or more data model functions, DMFs, for modelling one or more performance metrics and/or parameters of the network node 120a-n for a network policy. Moreover, the method 900 comprises a step 903 of receiving one or more parameters and/or hyperparameters from each network node 120a-n of the one or more DMFs trained for the network policy. The method 900 further comprises a step 905 of providing information about the one or more DMFs trained for the network policy, including the one or more parameters and/or hyperparameters received from the plurality of network nodes 120a-n and/or information generated based on the one or more parameters and/or hyperparameters received from the plurality of network nodes 120a-n, to a policy optimization entity 132 for optimizing the network policy.
As will be appreciated, embodiments disclosed herein provide an improved architecture and improved functionalities that enable an accurate characterization of network KPIs through models, ensuring limited overhead and high privacy. Transferring models instead of data significantly reduces the amount of information exchanged between network nodes and the central node responsible for optimization. Also, data is kept local and only useful data is stored. Moreover, according to embodiments disclosed herein, ML models may be updated based on different centralized or local triggers. This is because embodiments disclosed herein make use of available data to detect network changes and model performance degradation and update models accordingly. Network models can be kept up-to-date while limiting communication overhead and energy consumption. The management framework/system 130 can push a model update based on, e.g., optimized network performance or models received from different RAN nodes 120a-n. Embodiments disclosed herein leverage ML/AI capabilities available in the network nodes 120a-n. Thus, available local processing capability can be efficiently exploited, e.g., a model may be trained at night when traffic load is low. Moreover, embodiments disclosed herein allow coordination across nodes in different domains. For instance, the management framework/system 130 can exploit distributed learning and expert knowledge to define the features required to construct each model, select model hyperparameters, create global models from the local ones, and/or identify new model architectures. Models can be shared to characterize KPIs which usually cannot be shared through standardized interfaces.
The person skilled in the art will understand that the "blocks" ( "units" ) of the various figures (method and apparatus) represent or describe functionalities of embodiments of the present disclosure (rather than necessarily individual "units" in hardware or software) and thus describe equally functions or features of apparatus embodiments as well as method embodiments (unit = step) .
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described embodiment of an apparatus is merely exemplary. For example, the unit division is merely logical function division and may be another division in an actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented by using some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, functional units in the embodiments of the invention may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit.

Claims (17)

  1. A network node (120a-n) configured to operate based on a plurality of network policies for a cellular network, wherein the network node (120a-n) comprises:
    a data model function, DMF, orchestrator (121) configured to train, execute and store a plurality of DMFs for modelling one or more performance metrics and/or parameters of the network node (120a-n) for each of the plurality of network policies;
    a DMF collection coordination function (123) configured to store and retrieve the plurality of trained DMFs; and
    a data collection coordination function (122) configured to collect and store data used for training and executing the plurality of DMFs.
  2. The network node (120a-n) of claim 1, wherein one or more of the plurality of DMFs comprise one or more data model atomic functions, DMAFs, and wherein the DMF collection coordination function (123) is further configured to store the DMAFs of the one or more DMFs.
  3. The network node (120a-n) of claim 2, wherein the one or more DMAFs comprise one or more black box atomic functions and/or one or more white box atomic functions.
  4. The network node (120a-n) of any one of the preceding claims, wherein the DMF orchestrator (121) is configured to receive instructions from a management entity (130) of the cellular network (100) for generating and/or updating the one or more DMFs based on the one or more DMAFs for one or more of the plurality of network policies.
  5. The network node (120a-n) of claim 4, wherein the DMF orchestrator (121) is configured to provide one or more trained parameters and/or hyperparameters of one or more DMFs trained for one or more of the plurality of network policies to the management entity (130) .
  6. The network node (120a-n) of any one of the preceding claims, wherein the data collection coordination function (122) is configured to collect and store the data for training and executing the plurality of DMFs, in response to a request from the DMF orchestrator (121) .
  7. The network node (120a-n) of claim 6, wherein the request from the DMF orchestrator (121) indicates a frequency for collecting the data and/or a time period for which the collected data is to be stored.
  8. The network node (120a-n) of any one of the preceding claims, wherein the network node (120a-n) is a base station, a routing node, or a controller of the cellular network.
  9. A method (800) for operating a network node (120a-n) configured to operate based on a plurality of network policies for a cellular network, wherein the method (800) comprises:
    implementing (801) a data model function, DMF, orchestrator (121) for training, executing and storing a plurality of DMFs for modelling one or more performance metrics and/or parameters of the network node (120a-n) for each of the plurality of network policies;
    implementing (803) a DMF collection coordination function (123) for storing and retrieving the plurality of trained DMFs; and
    implementing (805) a data collection coordination function (122) for collecting and storing data for training and executing the plurality of DMFs.
  10. A machine learning, ML, function orchestrator (131) for supporting the deployment of a network policy for a plurality of network nodes (120a-n) of a cellular network, wherein the ML function orchestrator (131) is configured to:
    provide to each network node (120a-n) information for generating one or more data model functions, DMFs, for modelling one or more performance metrics and/or parameters of the network node (120a-n) for a network policy;
    receive, from each network node (120a-n), one or more parameters and/or hyperparameters of the one or more DMFs trained for the network policy; and
    provide information about the one or more DMFs trained for the network policy, including the one or more parameters and/or hyperparameters received from the plurality of network nodes (120a-n) and/or information generated based on those parameters and/or hyperparameters, to a policy optimization entity (132) for optimizing the network policy.
  11. The ML function orchestrator (131) of claim 10, wherein the ML function orchestrator (131) is configured to receive the information for generating the one or more DMFs for modelling the one or more performance metrics and/or parameters of the network node (120a-n) for the network policy from a policy management function (133) of the cellular network.
  12. The ML function orchestrator (131) of claim 10 or 11, wherein the one or more DMFs comprise one or more data model atomic functions, DMAFs, and wherein the ML function orchestrator (131) is configured to determine the information for the network node (120a-n) for generating the one or more DMFs based on the one or more DMAFs.
  13. The ML function orchestrator (131) of claim 12, wherein the one or more DMAFs comprise one or more black box atomic functions and/or one or more white box atomic functions.
  14. The ML function orchestrator (131) of any one of claims 10 to 13, wherein the ML function orchestrator (131) is configured to aggregate the one or more parameters and/or hyperparameters received from each network node (120a-n) about the one or more DMFs trained for the network policy.
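The aggregation in claim 14 is reminiscent of federated averaging. A minimal sketch, under the assumption (not stated in the claims) that each node reports an equal-length vector of numeric parameters, could be:

```python
def aggregate_parameters(per_node_params):
    """Element-wise average of per-node parameter vectors
    (federated-averaging style).

    per_node_params: list containing one equal-length parameter
    list per network node.
    """
    if not per_node_params:
        raise ValueError("no node reports to aggregate")
    n_nodes = len(per_node_params)
    # zip(*...) groups the i-th parameter from every node together
    return [sum(col) / n_nodes for col in zip(*per_node_params)]
```

A weighted variant (e.g. weighting each node by its training-data volume) would be an equally valid reading of "aggregate" here.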
  15. A method (900) for operating a machine learning, ML, function orchestrator (131) for supporting the deployment of a network policy for a plurality of network nodes (120a-n) of a cellular network, wherein the method (900) comprises:
    providing (901) to each network node (120a-n) information for generating one or more data model functions, DMFs, for modelling one or more performance metrics and/or parameters of the network node (120a-n) for a network policy;
    receiving (903), from each network node (120a-n), one or more parameters and/or hyperparameters of the one or more DMFs trained for the network policy; and
    providing (905) information about the one or more DMFs trained for the network policy, including the one or more parameters and/or hyperparameters received from the plurality of network nodes (120a-n) and/or information generated based on those parameters and/or hyperparameters, to a policy optimization entity (132) for optimizing the network policy.
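Steps 901 to 905 describe a distribute, collect and forward cycle. A hedged sketch of that cycle, with all interfaces (`receive_dmf_spec`, `report_trained_parameters`, `update`) invented purely for illustration:

```python
def run_ml_orchestrator(nodes, dmf_spec, policy_optimizer):
    """Hypothetical sketch of method 900 at an ML function orchestrator."""
    # (901) provide DMF-generation information to each network node
    for node in nodes:
        node.receive_dmf_spec(dmf_spec)
    # (903) receive trained parameters/hyperparameters from each node
    reports = [node.report_trained_parameters() for node in nodes]
    # (905) forward the per-node reports to the policy optimization entity
    policy_optimizer.update(reports)
    return reports
```

In a real deployment each of these calls would be a network exchange (e.g. over a management interface), with the orchestrator waiting for all, or a quorum of, node reports before step 905.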
  16. A cellular network, comprising:
    a plurality of network nodes (120a-n) according to any one of claims 1 to 8; and
    an ML function orchestrator (131) according to any one of claims 10 to 14.
  17. The cellular network of claim 16, wherein the plurality of network nodes (120a-n) are implemented in a radio access network, RAN, a transport network, and/or a core network, CN, of the cellular network, and/or wherein the ML function orchestrator (131) is implemented as a function of a network management system (130) and/or a further network node in a radio access network, RAN, a transport network, and/or a core network.
PCT/CN2023/134021 2023-11-24 2023-11-24 Devices and methods for real-time ml framework supporting network optimization Pending WO2025107294A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2023/134021 WO2025107294A1 (en) 2023-11-24 2023-11-24 Devices and methods for real-time ml framework supporting network optimization


Publications (1)

Publication Number Publication Date
WO2025107294A1 (en) 2025-05-30

Family

ID=95825826

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/134021 Pending WO2025107294A1 (en) 2023-11-24 2023-11-24 Devices and methods for real-time ml framework supporting network optimization

Country Status (1)

Country Link
WO (1) WO2025107294A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022061940A1 (en) * 2020-09-28 2022-03-31 Huawei Technologies Co., Ltd. Model data transmission method, and communication apparatus
CN116018601A (en) * 2020-11-30 2023-04-25 Huawei Technologies Co., Ltd. A federated learning method, device and system
US20230274192A1 (en) * 2020-11-30 2023-08-31 Huawei Technologies Co., Ltd. Federated learning method, apparatus, and system


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23959264

Country of ref document: EP

Kind code of ref document: A1