WO2025171907A1 - Ensuring consistency between training and inference stages via monitoring procedures - Google Patents
- Publication number
- WO2025171907A1 · PCT/EP2024/083753
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- performance monitoring
- monitoring process
- user equipment
- network
- report
- Prior art date
- Legal status
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/02—Arrangements for optimising operational condition
Definitions
- the programs 12, 72, and 92 contain instructions stored by corresponding one or more memories 15, 75, or 95. These instructions, when executed by the corresponding one or more processors 13, 73, or 93, allow or cause the corresponding apparatus 10, 70, or 90, to perform the operations described herein.
- the computer readable memories 15, 75, or 95 are circuitry and may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, flash memory, firmware, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory.
- the computer readable memories 15, 75, and 95 may be means for performing storage functions.
- the processors 13, 73, and 93 are circuitry and may be of any type suitable to the local technical environment.
- the UE may receive a configuration that enables reporting of PMP related information.
- an inference operation of the ML-feature may be started.
- the NW may initiate the inference operation for one or more functionalities configured towards the UE.
- the UE may also determine functionalities associated with the one or more PMP IDs (this may be relevant only when reporting changes related to a new model).
- when an existing model is updated or a new model is downloaded, the UE may make an initial assessment (which can also be received from the entity that sent the model) when determining the aforementioned parameters.
- the UE may report the one or more PMP IDs, and corresponding parameters for the PMP IDs according to the NW-initiated report or UE-initiated report.
- the UE may report the PMP IDs and corresponding parameters as a UE-triggered MAC-CE command, where the fields defined above are carried in the MAC-CE command.
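As an illustrative, non-normative sketch, the content of such a PMP report could be modelled as follows. The class and field names are hypothetical and are chosen only to mirror the report fields described above (PMP ID, relation to an earlier report, UE-metric, NW-metric, and associated functionality IDs); they do not correspond to any specified message format:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PmpReport:
    """One entry of a hypothetical UE-triggered PMP report (illustrative only)."""
    pmp_id: int                          # identifier of the performance monitoring process
    related_to_earlier: bool             # whether this report relates to an earlier report (same background model)
    ue_metric: Optional[float] = None    # relative performance indication determined at the UE
    nw_metric: Optional[float] = None    # relative performance indication received from the network
    functionality_ids: List[int] = field(default_factory=list)  # associated functionality configurations

def build_report(entries):
    """Collect PMP entries into one report payload (a plain dict here)."""
    return {"pmp_entries": [e.__dict__ for e in entries]}

report = build_report(
    [PmpReport(pmp_id=1, related_to_earlier=False, ue_metric=0.8, functionality_ids=[3])]
)
```

In an actual implementation these fields would be packed into a MAC-CE payload; the dictionary form above is used only to make the field structure visible.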
- the NW may do performance monitoring/assessment for one or more PMP IDs, and may determine any changes associated with the NW-metric.
- the NW may indicate the changes associated with the NW-metric with the corresponding PMP-IDs to the UE.
- the NW may receive a reporting related to one or more PMPs within N PMPs.
- Step 10 at 220 shows that the UE may send a report containing the model assessments associated with one or more PMP IDs.
- the reported PMP related information may include, for example, the PMP-ID and associated type of report, UE-metric, NW-metric, and functionality IDs.
- Step 11 at 222 shows that the NW may perform an internal performance evaluation for PMP-IDs based on activated functionalities.
- the NW may initiate signalling to select, switch, activate, or de-activate a background ML model used at the UE by implicitly referring to the PMP ID in the signalling indication.
- the NW may determine an assessment (NW-metric) for one or more PMP IDs considered for an active functionality based on its own learning at the NW.
- Such information can be further reported to the UE, as in step 12.
- the UE may update as in step 14, at 228, the NW-metric associated with the corresponding PMP ID.
- Step 12, at 224, shows the UE receiving NW-sided PMP related information, such as the PMP-ID and NW-metric, for example.
- model-ID-based LCM may be used and, in such instances, the PMP ID may refer to the model-ID.
- This may also be referred to as handling of additional conditions, wherein the NW may use reported performance indicators and associated functionalities when determining the best background ML models in a given NW situation.
- the NW may store PMP-related information, such as under each corresponding PMP-ID, for example.
- the NW may also consider storing timestamp information, NW assumptions, configuration details, and other types of information useful when handling additional conditions.
- the NW may also maintain past reports that correspond to a particular PMP-ID as long as the UE reports that the reports within a PMP-ID are related to each other (i.e., concern the same background model).
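A minimal network-side bookkeeping sketch consistent with the storage behaviour described above might look like the following. The class name, the use of a wall-clock timestamp, and the dictionary-based report shape are assumptions made only for illustration:

```python
import time
from collections import defaultdict

class PmpStore:
    """Illustrative network-side store of PMP-related information, keyed by PMP-ID."""

    def __init__(self):
        self._history = defaultdict(list)   # PMP-ID -> list of stored reports

    def add_report(self, pmp_id, report, related_to_earlier):
        # If the new report is not related to the earlier ones (i.e., it concerns
        # a new background model), the stale history for this PMP-ID is discarded,
        # mirroring the rule that past reports are kept only while related.
        if not related_to_earlier:
            self._history[pmp_id].clear()
        self._history[pmp_id].append({"timestamp": time.time(), **report})

    def latest(self, pmp_id):
        return self._history[pmp_id][-1] if self._history[pmp_id] else None

store = PmpStore()
store.add_report(1, {"ue_metric": 0.7}, related_to_earlier=False)
store.add_report(1, {"ue_metric": 0.9}, related_to_earlier=True)
```

The timestamp field corresponds to the timestamp information the NW may consider storing, as noted above; other fields (NW assumptions, configuration details) could be added to the same per-report dictionary.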
- the NW is capable of deciding the best background models for the UE via PMP IDs.
- signaling can consider PMP IDs when handling the UE's background models when there are matching NW assumptions as NW-additional conditions associated with a background model.
- a UE may keep only a limited number of ML models in the device and, with features as described herein, the NW may have an implicit control of ML models.
- when an ML model is changed or updated, the NW only needs to focus on the latest ML model details; storing unnecessary information regarding older versions is not needed.
- as the UE may be able to perform a model performance assessment, via inactive-model monitoring or from past learning, and as the signalling allows obtaining such information on the UE's model assessment and provides a framework for updating these assessments, features as described herein may be used to provide an improvement for handling the additional conditions via the monitoring process.
- the NW gets the UE's assessment of UE-model_1 for different NW-assumptions. As this can also be derived for other models (models 2, 3, etc.) via a similar process, the NW can use performance monitoring process IDs to select or activate an ML model when the matching NW-additional conditions are used for inference.
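The selection step described above, in which the NW picks a model whose stored NW assumptions match the current NW-additional conditions, can be sketched as follows. The function, the dictionary layout, and the rule that a higher metric is better are illustrative assumptions, not part of the described embodiment:

```python
def select_pmp_for_conditions(pmp_info, nw_conditions):
    """Pick the PMP ID whose stored NW assumptions match the current
    NW-additional conditions and whose reported metric is best
    (illustrative sketch; higher metric assumed better)."""
    candidates = [
        (pmp_id, info["ue_metric"])
        for pmp_id, info in pmp_info.items()
        if info["nw_assumptions"] == nw_conditions
    ]
    if not candidates:
        return None  # no stored PMP matches the current NW-additional conditions
    return max(candidates, key=lambda c: c[1])[0]

# Hypothetical per-PMP information accumulated from UE reports.
pmp_info = {
    1: {"ue_metric": 0.6, "nw_assumptions": {"scenario": "indoor"}},
    2: {"ue_metric": 0.9, "nw_assumptions": {"scenario": "outdoor"}},
    3: {"ue_metric": 0.7, "nw_assumptions": {"scenario": "outdoor"}},
}
best = select_pmp_for_conditions(pmp_info, {"scenario": "outdoor"})
```

The returned PMP ID would then be carried in the activation or selection command, implicitly identifying the background model to use.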
- An example embodiment may be provided with an apparatus comprising: at least one processor; and at least one non-transitory memory storing instructions that, when executed with the at least one processor, cause the apparatus to perform: sending a capability message, where the apparatus supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the apparatus for the at least one enabled feature; based, at least partially, on the sent capability message, sending a report, where the report comprises: an identifier for a performance monitoring process, and associated information related to the performance monitoring process; and based, at least partially, on the sent report, receiving an activation or selection command from a network, where the command is at least partially based on the identifier of the performance monitoring process.
- the performance monitoring process may be associated with a machine learning model.
- the command may be configured to enable use of the machine learning model associated with the performance monitoring process.
- the identifier may be dimensioned based on the maximum number of the performance monitoring processes.
- the associated information may comprise at least one of: an indication of whether there is a relation between the report and an earlier report associated with the same performance monitoring process, a user equipment metric, at least partially determined at the apparatus, wherein the user equipment metric provides a relative performance indication of the performance monitoring process, a network metric, received from a network, wherein the network metric provides a relative performance indication of the performance monitoring process, or an identifier of at least one functionality configuration associated with the performance monitoring process, wherein the functionality configuration is received from a network to enable the machine learning feature.
- the apparatus may comprise a user equipment and the capability message may be a user equipment capability message of the user equipment, and the maximum number of the performance monitoring processes which can be handled by the apparatus may be a maximum number of the performance monitoring processes that can be handled by the user equipment when supporting the at least one enabled feature of the machine learning at the user equipment.
- the machine learning may comprise a background machine learning model, and the performance monitoring process may be associated with the background machine learning model available to the user equipment.
- the report may be configured or defined to the user equipment based, at least partially, upon the capability message.
- the instructions, when executed with the at least one processor, may be configured to cause the apparatus to perform: based, at least partially, on the sent capability message, receiving at least one functionality configuration.
- the instructions, when executed with the at least one processor, may be configured to cause the apparatus to perform: assessing performance of the machine learning regarding the active and inactive functionalities based on the available measurements accessible to the apparatus.
- the instructions, when executed with the at least one processor, may be configured to cause the apparatus to perform: receiving performance monitoring process information from a network; and updating a network-metric associated with the performance monitoring process.
- the instructions, when executed with the at least one processor may be configured to cause the apparatus to perform: receiving a configuration from a network comprising information related to a number of performance monitoring processes determined by the network.
- the instructions, when executed with the at least one processor, may be configured to cause the apparatus to perform: determining when the number of performance monitoring processes in the received information from the network is less than the maximum number of the performance monitoring processes which can be handled by the apparatus; and selecting the lesser number of models to map with the performance monitoring process(es).
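Two of the points above lend themselves to short numeric sketches: the identifier being dimensioned based on the maximum number of PMPs, and the UE keeping only the lesser number of models when the network configures fewer PMPs than the UE can handle. Both functions below are illustrative; the names and the score-based selection rule are assumptions:

```python
import math

def dimension_pmp_id_bits(max_pmps):
    """Bits needed for a PMP identifier dimensioned to the maximum number of
    performance monitoring processes (illustrative interpretation)."""
    return max(1, math.ceil(math.log2(max_pmps)))

def map_models_to_pmps(model_scores, n_network, n_ue_max):
    """If the network configures fewer PMPs than the UE can handle, keep only
    the best-scoring models and map them to PMP IDs 0..n-1 (hypothetical rule)."""
    n = min(n_network, n_ue_max)
    best = sorted(model_scores, key=model_scores.get, reverse=True)[:n]
    return {pmp_id: model for pmp_id, model in enumerate(best)}

bits = dimension_pmp_id_bits(8)
mapping = map_models_to_pmps({"m1": 0.4, "m2": 0.9, "m3": 0.6}, n_network=2, n_ue_max=4)
```

For example, a maximum of eight PMPs would need a 3-bit identifier under this interpretation, and with two configured PMPs only the two best-scoring models are mapped.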
- an example method may be provided comprising: sending a capability message to an apparatus as illustrated with block 302, where the apparatus supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the apparatus for the at least one enabled feature; based, at least partially, on the sent capability message, sending a report as illustrated with block 304, where the report comprises: an identifier for a performance monitoring process, and associated information related to the performance monitoring process; and based, at least partially, on the sent report, receiving an activation or selection command from a network as illustrated with block 306, where the command is at least partially based on the identifier of the performance monitoring process.
- the performance monitoring process may be associated with a machine learning model.
- the command may be configured to enable use of the machine learning model associated with the performance monitoring process.
- the identifier may be dimensioned based on the maximum number of the performance monitoring processes.
- the associated information may comprise at least one of an indication of whether there is a relation between the report and an earlier report associated with the same performance monitoring process, a user equipment metric, at least partially determined at the apparatus, wherein the user equipment metric provides a relative performance indication of the performance monitoring process, a network metric, received from a network, wherein the network metric provides a relative performance indication of the performance monitoring process, or an identifier of at least one functionality configuration associated with the performance monitoring process, wherein the functionality configuration is received from a network to enable the machine learning feature.
- the apparatus may comprise a user equipment and the capability message is a user equipment capability message of the user equipment, and the maximum number of the performance monitoring processes which can be handled by the apparatus may be a maximum number of the performance monitoring processes that can be handled by the user equipment when supporting the at least one enabled feature of the machine learning at the user equipment.
- the machine learning may comprise a background machine learning model, and the performance monitoring process may be associated with the background machine learning model available to the user equipment.
- the report may be configured or defined to the user equipment based, at least partially, upon the capability message.
- the method may comprise at least partially based on the sent capability message, receiving at least one functionality configuration.
- the method may comprise at least partially based on the sent capability message, receiving a configuration configured to enable reporting of performance monitoring process related information with the report.
- the method may comprise, with input from a network, initiating an inference operation for one or more functionalities configured towards the user equipment.
- the method may comprise monitoring performance of the machine learning regarding active and inactive functionalities based on available measurements accessible to the apparatus.
- the method may comprise assessing performance of the machine learning regarding the active and inactive functionalities based on the available measurements accessible to the apparatus.
- the method may comprise: receiving performance monitoring process information from a network; and updating a network-metric associated with the performance monitoring process.
- the method may comprise receiving a configuration from a network comprising information related to a number of performance monitoring processes determined by the network.
- the method may comprise: determining when the number of performance monitoring processes in the received information from the network is less than the maximum number of the performance monitoring processes which can be handled by the apparatus; and selecting the lesser number of models to map with the performance monitoring process(es).
- An example embodiment may be provided with an apparatus comprising: means for sending a capability message, where the apparatus supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the apparatus for the at least one enabled feature; means for, based at least partially on the sent capability message, sending a report, where the report comprises: an identifier for a performance monitoring process, and associated information related to the performance monitoring process; and means for, based at least partially on the sent report, receiving an activation or selection command from a network, where the command is at least partially based on the identifier of the performance monitoring process.
- An example embodiment may be provided with a non-transitory program storage device readable by an apparatus, tangibly embodying a program of instructions executable with the apparatus for performing operations, the operations comprising: sending a capability message, where the apparatus supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the apparatus for the at least one enabled feature; based, at least partially, on the sent capability message, sending a report, where the report comprises: an identifier for a performance monitoring process, and associated information related to the performance monitoring process; and based, at least partially, on the sent report, receiving an activation or selection command from a network, where the command is at least partially based on the identifier of the performance monitoring process.
- the instructions when executed with the at least one processor, may be configured to cause the apparatus to perform: determining that there is a network assumption as a network additional condition matching with an associated background model; and based upon the determining of the matching, sending to the user equipment the performance monitoring process IDs for the determined one or more background models for the user equipment to use.
- non-transitory is a limitation of the medium itself (i.e., tangible, not a signal) as opposed to a limitation on data storage persistency (e.g., RAM vs. ROM).
Abstract
An apparatus including at least one processor; and at least one non-transitory memory storing instructions that, when executed with the at least one processor, cause the apparatus to perform: sending a capability message, where the apparatus supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the apparatus for the at least one enabled feature; based, at least partially, on the sent capability message, sending a report, where the report comprises: an identifier for a performance monitoring process, and associated information related to the performance monitoring process; and based, at least partially, on the sent report, receiving an activation or selection command from a network, where the command is at least partially based on the identifier of the performance monitoring process.
Description
ENSURING CONSISTENCY BETWEEN TRAINING AND INFERENCE STAGES VIA
MONITORING PROCEDURES
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to, and the benefit of, US Provisional Application No. 63/554268, filed February 16, 2024, the contents of which are hereby incorporated by reference in their entirety.
TECHNICAL FIELD
[0002] The example and non-limiting embodiments relate generally to machine learning and, more particularly, to consistency between training and inference.
BRIEF DESCRIPTION OF PRIOR DEVELOPMENTS
[0003] Artificial intelligence (AI, often also referred to as machine learning, ML, or even AI/ML) is being used for many purposes in wireless networks such as cellular networks. AI/ML techniques continue to be studied in regard to wireless communications including the NR air interface.
SUMMARY OF THE INVENTION
[0004] The following summary is merely intended to be an example. The summary is not intended to limit the scope of the claims.
[0005] In accordance with one aspect, an example embodiment is provided with an apparatus comprising: at least one processor; and at least one non-transitory memory storing instructions that, when executed with the at least one processor, cause the apparatus to perform: sending a capability message, where the apparatus supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the apparatus for the at least one enabled feature; based, at least partially, on the sent capability message, sending a report, where the report comprises: an identifier for a performance monitoring process, and associated information related to the performance monitoring process; and based, at least partially, on the
sent report, receiving an activation or selection command from a network, where the command is at least partially based on the identifier of the performance monitoring process.
[0006] In accordance with one aspect, an example embodiment is provided with a method comprising: sending a capability message to an apparatus, where the apparatus supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the apparatus for the at least one enabled feature; based, at least partially, on the sent capability message, sending a report, where the report comprises: an identifier for a performance monitoring process, and associated information related to the performance monitoring process; and based, at least partially, on the sent report, receiving an activation or selection command from a network, where the command is at least partially based on the identifier of the performance monitoring process.
[0007] In accordance with one aspect, an example embodiment is provided with an apparatus comprising: means for sending a capability message, where the apparatus supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the apparatus for the at least one enabled feature; means for, based at least partially on the sent capability message, sending a report, where the report comprises: an identifier for a performance monitoring process, and associated information related to the performance monitoring process; and means for, based at least partially on the sent report, receiving an activation or selection command from a network, where the command is at least partially based on the identifier of the performance monitoring process.
[0008] In accordance with one aspect, an example embodiment is provided with a non-transitory program storage device readable by an apparatus, tangibly embodying a program of instructions executable with the apparatus for performing operations, the operations comprising: sending a capability message, where the apparatus supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the apparatus for the at least one enabled feature; based, at least partially, on the sent capability message, sending a report, where the report comprises: an identifier for a performance monitoring process, and associated information related to the performance monitoring process; and based, at least partially, on the sent report, receiving an activation or selection command from a
network, where the command is at least partially based on the identifier of the performance monitoring process.
[0009] In accordance with one aspect, an example embodiment is provided with an apparatus comprising: at least one processor; and at least one non-transitory memory storing instructions that, when executed with the at least one processor, cause the apparatus to perform: receiving a capability message from a user equipment, where the user equipment supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the user equipment for the at least one enabled feature; and based at least partially upon receiving the capability message, sending a configuration to the user equipment, where the configuration is configured to enable reporting of performance monitoring process related information by the user equipment.
[0010] In accordance with one aspect, an example embodiment is provided with a method comprising: receiving a capability message from a user equipment, where the user equipment supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the user equipment for the at least one enabled feature; and based at least partially upon receiving the capability message, sending a configuration to the user equipment, where the configuration is configured to enable reporting of performance monitoring process related information by the user equipment.
[0011] In accordance with one aspect, an example embodiment is provided with an apparatus comprising: means for receiving a capability message from a user equipment, where the user equipment supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the user equipment for the at least one enabled feature; and means for, based at least partially upon receiving the capability message, sending a configuration to the user equipment, where the configuration is configured to enable reporting of performance monitoring process related information by the user equipment.
[0012] In accordance with one aspect, an example embodiment is provided with a non-transitory program storage device readable by an apparatus, tangibly embodying a program of instructions executable with the apparatus for performing operations, the operations comprising: receiving a capability message from a user equipment, where the user equipment supports at least one enabled feature of machine learning, and where the capability message
comprises an indication of a maximum number of performance monitoring processes which can be handled by the user equipment for the at least one enabled feature; and based at least partially upon receiving the capability message, sending a configuration to the user equipment, where the configuration is configured to enable reporting of performance monitoring process related information by the user equipment.
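The network-side behaviour described in paragraphs [0009] through [0012] can be sketched as follows: the network receives the UE capability message carrying the maximum number of supported PMPs, decides how many to actually configure, and builds a reporting configuration. The function name, the default of four network-supported processes, and the configuration dictionary layout are illustrative assumptions, not part of the described embodiment:

```python
def handle_ue_capability(ue_max_pmps, nw_supported_pmps=4):
    """Network-side handling of the UE capability message: choose how many
    PMPs to configure and build a reporting configuration (names hypothetical)."""
    # The network cannot configure more PMPs than either side supports.
    n_configured = min(ue_max_pmps, nw_supported_pmps)
    return {
        "n_pmp": n_configured,
        "report_config": {
            "pmp_reporting_enabled": True,           # enables PMP related reporting by the UE
            "pmp_ids": list(range(n_configured)),    # identifiers the UE may use in its reports
        },
    }

cfg = handle_ue_capability(ue_max_pmps=8)
```

When the configured number is smaller than the UE's maximum, the UE would then select which models to map to the configured PMP IDs, as described elsewhere herein.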
[0013] According to some aspects, there is provided the subject matter of the independent claims. Some further aspects are provided in subject matter of the dependent claims.
BRIEF DESCRIPTION OF DRAWINGS
[0014] The foregoing aspects and other features are explained in the following description, taken in connection with the accompanying drawings, wherein:
[0015] FIG. 1 is a block diagram of one possible and non-limiting example system in which the example embodiments may be practiced;
[0016] FIG. 2 is a diagram illustrating an example method;
[0017] FIG. 3 is a diagram illustrating an example method;
[0018] FIG. 4 is a diagram illustrating an example method.
DETAILED DESCRIPTION
[0019] The following abbreviations that may be found in the specification and/or the drawing figures are defined as follows:
3GPP third generation partnership project
5G fifth generation
Al artificial intelligence
AMF access and mobility management function
CE control element
CU central unit
DU distributed unit
eNB evolved Node B (e.g., an LTE base station)
EN-DC LTE-NR dual connectivity
FG feature group
gNB base station for 5G/NR, i.e., a node providing NR user plane and control plane protocol terminations towards the UE, and connected via the NG interface to the 5G core network
I/F interface
LCM life cycle management
LTE long term evolution
MAC medium access control
ML machine learning
MME mobility management entity
ng or NG new generation
NR new radio
N/W or NW network
OTT over the top
RAN radio access network
Rel release
RLC radio link control
RRC radio resource control
RU radio unit
Rx receiver
SGW serving gateway
SMF session management function
TS technical specification
Tx transmitter
UE user equipment (e.g., a wireless, typically mobile device)
[0020] Turning to FIG. 1, this figure shows a block diagram of one possible and non-limiting example of a wireless network 1 that is connected to a user equipment (UE) 10. A number of network elements are shown in the wireless network of FIG. 1 including a base station 70 and a core network 90.
[0021] In FIG. 1, the user equipment (UE) 10 is in wireless communication via radio link 11 with the base station 70 of the network 1. The UE 10 is a wireless communication device, such as a mobile device, that is configured to access the network. The UE 10 is illustrated with one or more antennas 28. The ellipses 2 indicate there could be multiple UEs 10 in wireless
communication via radio links with the base station 70. The UE 10 includes one or more processors 13, one or more memories 15, and other circuitry 16. The other circuitry 16 may include one or more receivers (Rx(s)) 17 and one or more transmitters (Tx(s)) 18. One or more programs 12 are used to cause the UE 10 to perform the operations described herein. For a UE 10, the other circuitry 16 could include circuitry such as for user interface elements (not shown) like a display.
[0022] The base station 70, as a network element of the network 1, provides the UE 10 access to network 1 and to the data network 91 via the core network 90 (e.g., via a user plane function of the core network 90). As such, the base station 70 may be considered to be an access node or network equipment, which provides access by UE(s) 10 to the network 1. The base station 70 is illustrated as having one or more antennas 58. In general, the base station 70 may be referred to as RAN node 70, although many will make reference to this as a gNB (gNode B, a base station for NR, new radio) instead. There are, however, many other examples of RAN nodes including an eNB (evolved Node B) or TRP (Transmission-Reception Point). The term TRP is used mainly herein, and there are multiple options for this, such as a single TRP (of a base station), or distributed unit or radio unit, where multiple such units may be coupled to a central unit.
[0023] The base station 70 (or an individual TRP) includes one or more processors 73, one or more memories 75, and other circuitry 76. The other circuitry 76 includes one or more receivers (Rx(s)) 77 and one or more transmitters (Tx(s)) 78. One or more programs 72 are used to cause the base station 70 to perform the operations described herein.
[0024] It is noted that the base station 70 may instead be implemented via other wireless technologies, such as Wi-Fi (a wireless networking protocol that devices use to communicate without direct cable connections). In the case of Wi-Fi, the link 11 could be characterized as a wireless link.
[0025] Two or more base stations 70 communicate using, e.g., link(s) 79. The link(s) 79 may be wired or wireless or both and may implement, e.g., an Xn interface for 5G (fifth generation), an X2 interface for LTE (Long Term Evolution), or other suitable interface for other standards.
[0026] The network 1 may include a core network 90, such as a second network element or elements for example, that may include core network functionality, and which provide connectivity via a link or links 81 with a data network 91, such as a telephone network and/or a data communications network (e.g., the Internet). The core network 90 includes one or more processors 93, one or more memories 95, and other circuitry 96. The other circuitry 96 includes
one or more receivers (Rx(s)) 97 and one or more transmitters (Tx(s)) 98. One or more programs 92 are used to cause the core network 90 to perform the operations described herein.
[0027] The core network 90 could be a 5G core network. The core network 90 can implement or comprise multiple network functions (NF(s)) 99, and the program 92 may comprise one or more of the NFs 99. A 5G core network may use hardware such as memory and processors and a virtualization layer. It could be a single standalone computing system, a distributed computing system, or a cloud computing system. The NFs 99, as network elements, of the core network could be containers or virtual machines running on the hardware of the computing system(s) making up the core network 90.
[0028] Core network functionality for 5G may include access and mobility management functionality that is provided by a network function 99 such as, for example, an access and mobility management function (AMF), session management functionality that is provided by a network function such as a session management function (SMF). Core network functionality for access and mobility management in an LTE (Long Term Evolution) network, for example, may be provided by an MME (Mobility Management Entity) and/or SGW (Serving Gateway) functionality, which routes data to the data network. Many others are possible, as illustrated by the examples in FIG. 1 : AMF; SMF; MME; SGW; GMLC (Gateway Mobile Location Center); LMF (Location Management Function); UDM (Unified Data Management)/UDR (Unified Data Repository); NRF (Network Repository Function); and/or E-SMLC (Evolved Serving Mobile Location Center). These are merely exemplary core network functionality that may be provided by the core network 90, and note that both 5G and LTE core network functionality might be provided by the core network 90. The base station 70 is coupled via a backhaul link 31 to the core network 90. The base station 70 and the core network 90 may include an NG (Next Generation) interface for 5G, or an SI interface for LTE, or other suitable interface for other radio access technologies for communicating via the backhaul link 31.
[0029] In the data network 91, there is a computer-readable medium 94. The computer-readable medium 94 contains instructions that, when downloaded and installed into the memories 15, 75, or 95 of the corresponding UE 10, base station 70, and/or core network element(s) 90, and executed by processor(s) 13, 73, or 93, allow or cause the respective device to perform corresponding actions described herein. The computer-readable medium 94 may be implemented in other forms, such as via a compact disc or memory stick for example.
[0030] The programs 12, 72, and 92 contain instructions stored by corresponding one or more memories 15, 75, or 95. These instructions, when executed by the corresponding one or more
processors 13, 73, or 93, allow or cause the corresponding apparatus 10, 70, or 90, to perform the operations described herein. The computer readable memories 15, 75, or 95 are circuitry and may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, flash memory, firmware, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The computer readable memories 15, 75, and 95 may be means for performing storage functions. The processors 13, 73, and 93, are circuitry and may be of any type suitable to the local technical environment. For example, these processors may include one or more of general-purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), processors based on a multi-core processor architecture, and may also include specialized circuits such as field-programmable gate arrays (FPGAs), application specific circuits (ASICs), signal processing devices and other devices, or combinations of these devices, as non-limiting examples. The processors 13, 73, and 93 may be means for causing their respective apparatus to perform functions, such as those described herein. Particularly, for any apparatus having means to perform functions described herein, the means may include at least one processor, and at least one memory storing instructions that, when executed by at least one processor, cause the performance of the apparatus.
[0031] The receivers 17, 77, and 97, and the transmitters 18, 78, and 98 may implement wired and/or wireless interfaces. The receivers and transmitters may be grouped together as transceivers.
[0032] The network 1 may implement network virtualization, which is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity, a virtual network. Network virtualization involves platform virtualization, often combined with resource virtualization. Network virtualization is categorized as either external, combining many networks, or parts of networks, into a virtual unit, or internal, providing network-like functionality to software containers on a single system. Note that the virtualized entities (such as network functions 99) that result from the network virtualization are still implemented, at some level, using hardware such as processors 73 and/or 93 and memories 75 and/or 95, and also such virtualized entities create technical effects.
[0033] In general, the various embodiments of the user equipment 10 can include, but are not limited to, wireless phones (such as smart phones, mobile phones, cellular phones, voice over Internet Protocol (IP) (VoIP) phones, and/or wireless local loop phones), tablets, portable computers, vehicles or vehicle-mounted devices for, e.g., wireless V2X (vehicle-to-everything)
communication, image capture devices such as digital cameras, gaming devices, music storage and playback appliances, Internet appliances (including Internet of Things, IoT, devices), IoT devices with sensors and/or actuators for, e.g., automation applications, as well as portable units or terminals that incorporate combinations of such functions, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), Universal Serial Bus (USB) dongles, smart devices, wireless customer-premises equipment (CPE), an Internet of Things (IoT) device, a watch or other wearable, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (e.g., remote surgery), an industrial device and applications (e.g., a robot and/or other wireless devices operating in industrial and/or automated processing chain contexts), a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like. That is, the UE 10 could be any end device that may be capable of wireless communication. By way of example rather than limitation, the UE may also be referred to as a communication device, a terminal device, a Mobile Terminal (MT), a Subscriber Station (SS), a Portable Subscriber Station, a Mobile Station (MS), or an Access Terminal (AT).
[0034] The RAN #102 meeting approved the Rel-19 work item (WI) on AI/ML for NR Air Interface [RP-234039], based on the AI/ML techniques for the NR air interface studied in FS_NR_AIML_Air [TR 38.843]. This included the following:
Objectives in RP-234039
Provide specification support for the following aspects:
- Beam management - DL Tx beam prediction for both UE-sided model and NW-sided model, encompassing [RAN1/RAN2]:
o Spatial-domain DL Tx beam prediction for Set A of beams based on measurement results of Set B of beams ("BM-Case1")
o Temporal DL Tx beam prediction for Set A of beams based on the historic measurement results of Set B of beams ("BM-Case2")
o Specify necessary signalling/mechanism(s) to facilitate LCM operations specific to the Beam Management use cases, if any
o Enabling method(s) to ensure consistency between training and inference regarding NW-side additional conditions (if identified) for inference at UE
NOTE: Strive for common framework design to support both BM-Case1 and BM-Case2
- Positioning accuracy enhancements, encompassing [RAN1/RAN2/RAN3]:
o Direct AI/ML positioning:
■ (1st priority) Case 1: UE-based positioning with UE-side model, direct AI/ML positioning
■ (2nd priority) Case 2b: UE-assisted/LMF-based positioning with LMF-side model, direct AI/ML positioning
■ (1st priority) Case 3b: NG-RAN node assisted positioning with LMF-side model, direct AI/ML positioning
o AI/ML assisted positioning
■ (2nd priority) Case 2a: UE-assisted/LMF-based positioning with UE-side model, AI/ML assisted positioning
■ (1st priority) Case 3a: NG-RAN node assisted positioning with gNB-side model, AI/ML assisted positioning
o Specify necessary measurements, signalling/mechanism(s) to facilitate LCM operations specific to the Positioning accuracy enhancements use cases, if any
o Investigate and specify the necessary signalling of necessary measurement enhancements (if any)
o Enabling method(s) to ensure consistency between training and inference regarding NW-side additional conditions (if identified) for inference at UE for relevant positioning sub use cases
[0035] From the above, the following two items are addressed below:
Beam management - DL Tx beam prediction for both UE-sided model and NW-sided model, encompassing [RAN1/RAN2]:
o Enabling method(s) to ensure consistency between training and inference regarding NW-side additional conditions (if identified) for inference at UE
Positioning accuracy enhancements, encompassing [RAN1/RAN2/RAN3]:
o Enabling method(s) to ensure consistency between training and inference regarding NW-side additional conditions (if identified) for inference at UE for relevant positioning sub use cases
[0036] Rel-19 still needs to investigate and provide specification support for “Enabling method(s) to ensure consistency between training and inference regarding NW-side additional conditions (if identified) for inference at UE”. In the TR 38.843, the following is captured when discussing the additional conditions.
4.2.3 Additional conditions
For an AI/ML-enabled feature/FG, additional conditions refer to any aspects that are assumed for the training of the model but are not a part of UE capability for the AI/ML-enabled feature/FG. It does not imply that additional conditions are necessarily specified. Additional conditions can be divided into two categories: NW- side additional conditions and UE-side additional conditions. Note: whether specification impact is needed is a separate discussion.
For inference for UE-side models, to ensure consistency between training and inference regarding NW-side additional conditions (if identified), the following options can be taken as potential approaches (when feasible and necessary):
- Model identification to achieve alignment on the NW-side additional condition between NW-side and UE-side
- Model training at NW and transfer to UE, where the model has been trained under the additional condition
- Information and/or indication on NW-side additional conditions is provided to UE
- Consistency assisted by monitoring (by UE and/or NW, the performance of UE-side candidate models/functionalities to select a model/functionality)
- Other approaches are not precluded
Note: the possibility that different approaches can achieve the same function is not denied
[0037] Even though it is not documented properly in the TR 38.843, in past discussions there have been some aspects that are often referred to as additional conditions, where the following are considered as possible examples:
o Training dataset information
o Site-related information (e.g., scenario, location/TRP/area information, beam direction/codebook information)
o Time related/Timestamp information
o gNB implementation information (explicit or implicit details for specific gNB implementation details) o UE implementation information (explicit or implicit details for specific UE implementation details) o Statistical information (like delay spread, angular spread, and LOS/NLOS data) o Speed and range of speeds information
[0038] An aspect as described herein is in regard to “Consistency assisted by monitoring (by UE and/or NW, the performance of UE-side candidate models/functionalities to select a model/functionality)” with an ability to optionally consider additional conditions.
[0039] In the context of AI/ML associated with Air Interface use cases, additional conditions may include any factors assumed during the model training, but which are not reported in the UE capability reporting associated with the AI/ML-enabled feature/FG. It is understood that ensuring consistency between training and inference with respect to NW-side additional conditions can be addressed through various potential approaches, such as:
- Model Identification: Aligning models on the NW-side and UE-side to achieve consistency regarding additional conditions.
- Model Training at NW and Transfer to UE: Training the model at the Network (NW) and transferring it to the User Equipment, where the model has been trained under the additional conditions.
- Information/Indication from NW: Providing information and/or indication about NW-side additional conditions to the UE.
- Consistency Assisted by Monitoring: Monitoring the performance of UE-side candidate models/functionalities by both the UE and NW to facilitate consistency, enabling the selection of a suitable model or functionality.
Features as described herein may be used in regard to the above fourth approach; to build a framework that enables the handling of additional conditions.
[0040] Referring also to FIG. 2, a diagram is shown illustrating some features of an example embodiment. With the UE and the network (NW) connected, as illustrated with 202, the UE may report a UE capability indication, as illustrated with 204, with the UE reporting that it supports one or more ML-enabled features (e.g., beam prediction, CSI prediction). The UE connects to the NW, where UE capabilities are also sent to the NW including Nmax. For example, the capability indication may contain the following:
o a maximum number of performance monitoring processes (PMPs) that can be handled at the UE, Nmax, wherein each performance monitoring process (PMP) of Nmax may be associated with a background ML model (a logical model where one or more physical ML models can be associated with the same logical model) available at the UE.
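As an illustration only, the capability indication above could be modeled as follows. This is a minimal Python sketch; the class name, field names, and example feature names are our assumptions and do not come from any 3GPP message definition:

```python
from dataclasses import dataclass, field

@dataclass
class MLCapability:
    """Hypothetical sketch of a UE capability indication carrying Nmax."""
    supported_features: list                       # ML-enabled features the UE supports
    max_pmps: dict = field(default_factory=dict)   # feature -> Nmax for that feature

    def nmax_for(self, feature: str) -> int:
        # Nmax reported per feature; a single UE-wide Nmax could be modeled
        # by keying every feature to the same number.
        return self.max_pmps.get(feature, 0)

cap = MLCapability(
    supported_features=["beam-prediction", "csi-prediction"],
    max_pmps={"beam-prediction": 4, "csi-prediction": 2},
)
```

A network receiving such an indication would read, e.g., `cap.nmax_for("beam-prediction")` before configuring any monitoring processes.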
[0041] In one example embodiment, Nmax may be determined based on the number of ML models that the UE wishes to identify to the NW. In other words, the number Nmax reveals the maximum number of ML models that can be considered in model identification.
[0042] In another example embodiment, Nmax may be determined based on hardware limitations when storing model parameters or compiled versions of ML models (in other words, trained ML models) at the UE. In some examples, Nmax may depend on the UE's memory limitations for keeping the model parameters associated with trained ML models. In some examples, within the limit of Nmax, the UE may download a new model from the OTT server/NW, and the UE may have to remove an older ML model or update the model parameters of the older model (if the download is model parameters for an older model).
[0043] In another example embodiment, Nmax may be determined based on a limitation when monitoring active and inactive ML models at the UE, wherein active may mean that the model is used for the inference operation (i.e., supporting a functionality) handled by the NW, and inactive may mean that the model is not directly used for the inference operation but model inference can be performed (at least from time to time) by the UE.
[0044] The number Nmax may be reported per one ML-enabled feature or across all ML-enabled features.
[0045] In one example embodiment, an optional step may be provided based on the received UE capability. The NW may determine a number of performance monitoring processes (PMPs), N (where N is less than or equal to Nmax), to configure for the UE. This is illustrated with 206 in Fig. 2. The NW may then expect to keep track of only N performance monitoring processes (PMPs) for the UE. Each PMP may be identified by a bit field of ceil(log2(N)) bits, which may be used as a PMP identification (PMP ID). In an example embodiment, this number N may be determined by the AMF. In an example embodiment, this may be interpreted as determining the number of ML models that the NW wishes to maintain in model identification for the UE, and the PMP ID may refer to a Model ID. As illustrated with 208, the UE may receive a configuration that has information related to the number, N, of PMPs determined by the NW. 206 and 208 are optional steps that provide the NW control when defining the limit of PMPs that the NW wants to keep track of.
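The ceil(log2(N)) dimensioning above can be sketched as follows; the function name and the input guard are illustrative assumptions:

```python
import math

def pmp_id_bits(n: int) -> int:
    """Size of the PMP ID bit field for N configured processes: ceil(log2(N)).

    Note that with N = 1 the formula yields 0, i.e., a single process
    needs no explicit identifier bits under this rule.
    """
    if n < 1:
        raise ValueError("at least one performance monitoring process expected")
    return math.ceil(math.log2(n))
```

For example, N = 4 processes can be distinguished with 2 bits, while N = 5 already requires 3 bits.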
[0046] As illustrated with 210, the UE may receive a configuration defined to report PMP related information. The UE receives functionality configurations based on the reported capabilities of supporting the ML-feature. The reporting can contain the following:
• A bitfield indicating a PMP ID,
• A bit or bit field indicating the type of the report,
• A bit field indicating at least one UE-metric for an indicated PMP ID, and
• Other optional features.
[0047] The bitfield indicating a PMP ID may be dimensioned with ceil(log2(N)) or ceil(log2(Nmax)), for example. If the maximum number is Nmax, the indicator field size may be determined by that Nmax, for example an identifier that is a bit field of size ceil(log2(Nmax)). The indication may point to one or more of the parameters noted below. The bit or bit field indicating the type of the report may also reflect background model changes. For example, this may indicate whether the indicated PMP ID is associated with an older model (earlier reports on the PMP ID are applicable) or a new model (earlier reports on the PMP ID are not applicable). A model may be changed due to a model download or update. For the bit field indicating at least one UE-metric for the indicated PMP ID, the UE-metric may provide an assessment of the UE's model performance (associated with the ML model corresponding to the PMP ID) from the UE perspective. The UE-metric may contain a value or range that is determined by the UE based on one or more values predefined to the UE, where the value provides a relative assessment/performance of a ML model when supporting a ML-enabled feature (or functionality). For example, a relative assessment can be provided as the following:
In one variant, instead of a single metric, there may be multiple parameters that define a UE-metric for the PMP ID, which may be reported by the UE.
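As a rough illustration of how the report fields described above could occupy a bit field, the following sketch packs a PMP ID, a report-type bit, and a UE-metric into one integer and unpacks it again. The field widths and layout are assumptions for illustration, not specified values:

```python
def pack_report(pmp_id: int, new_model: int, ue_metric: int,
                id_bits: int, metric_bits: int = 4) -> int:
    """Pack [PMP ID | report-type bit | UE-metric] into one integer.

    id_bits would be ceil(log2(N)) or ceil(log2(Nmax)); metric_bits is an
    assumed width for the UE-metric value.
    """
    assert 0 <= pmp_id < (1 << id_bits)
    assert new_model in (0, 1)          # 1: new background model, 0: older model
    assert 0 <= ue_metric < (1 << metric_bits)
    return (pmp_id << (1 + metric_bits)) | (new_model << metric_bits) | ue_metric

def unpack_report(word: int, id_bits: int, metric_bits: int = 4):
    """Inverse of pack_report; returns (pmp_id, new_model, ue_metric)."""
    ue_metric = word & ((1 << metric_bits) - 1)
    new_model = (word >> metric_bits) & 1
    pmp_id = word >> (1 + metric_bits)
    return pmp_id, new_model, ue_metric
```

The round trip preserves the fields, e.g. `unpack_report(pack_report(5, 1, 9, 3), 3)` yields `(5, 1, 9)`.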
[0048] Additionally (optionally), a bit field may be provided for indicating at least one NW-metric for the indicated PMP ID. The NW-metric may provide an assessment by the NW of the UE's model performance (associated with the monitoring of the PMP ID). In an example, this parameter may be applicable when the UE is switching to a target cell, and when the earlier NW-metric needs to be reported to the target cell. The NW-metric may contain a value that is determined by the NW based on predefined values in a specification, where the value may provide a relative assessment/performance of a ML model when supporting a ML-enabled feature (or functionality). In an example, instead of a single metric, there may be multiple parameters that define a NW-metric for the PMP ID.
[0049] Additionally (optionally), a bit field may be provided indicating one or more configuration IDs associated with the PMP ID. A configuration ID may refer to a functionality that enables a ML-feature (e.g., a CSI reporting configuration ID for the ML-enabled beam prediction feature).
[0050] In an example embodiment, it is also possible for the UE capability reporting (at step 2) to carry some of the above information towards the NW with the PMP IDs (where the PMP IDs are dimensioned based on Nmax), and the NW may consider them as an initial assessment made by the UE.
[0051] As indicated with 212, in step 6 the UE may receive a configuration that enables reporting of PMP related information. As illustrated with 214, in step 7 an inference operation of the ML-feature may be started. The NW may initiate the inference operation for one or more functionalities configured towards the UE.
[0052] Regarding steps 8 and 9 (216, 218), step 8 becomes relevant as an optional consideration only if steps 3-4 are valid. For step 8, the UE may determine if N is less than Nmax, and select N ML models to map with the PMPs. In step 9, the UE may assess/monitor models for both active and inactive functionalities based on the measurements accessible to the UE. For example, when the gNB frequently transmits numerous DL RSs, the UE may assess model performances effectively. Additionally, there are cases where the UE server (OTT) may already send certain assessments for new models, and these assessments at the OTT could be taken from previous evaluations associated with the corresponding ML model.
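Step 8 above, selecting N of the available ML models to map onto the configured PMPs, might be sketched as follows, assuming, purely for illustration, that the UE ranks candidates by its own initial assessment scores:

```python
def select_models_for_pmps(candidate_models: dict, n: int) -> dict:
    """Map the N configured PMPs to the N best candidate models.

    candidate_models: dict of model name -> initial UE assessment score
    (higher is better); the ranking criterion is an assumption chosen for
    illustration. Returns a dict of PMP ID (0..N-1) -> model name.
    """
    ranked = sorted(candidate_models, key=candidate_models.get, reverse=True)
    return {pmp_id: model for pmp_id, model in enumerate(ranked[:n])}
```

For instance, with three candidates and N = 2, only the two best-assessed models are mapped onto PMP IDs 0 and 1.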
[0053] With regard to the UE, when the UE is in connection with the NW, the following may happen at any point of the time:
• For one or more PMP IDs, i.e., for the applicable models representing the one or more PMP IDs, the UE may do performance monitoring or model assessment, and determine any changes associated with the background ML model (new or older).
• Based on the monitoring or assessment, the UE may derive UE-metric(s) for the one or more PMP IDs.
• If applicable, the UE may also determine the NW-metric (based on the value last received from the NW).
• If applicable, the UE may also determine functionalities associated with the one or more PMP IDs (may be related only when reporting changes related to a new model).
• When an existing model is updated or a new model is downloaded, the UE may have an initial assessment (which can also be received from the entity that sends the model) when determining the aforementioned parameters.
• The UE may report the one or more PMP IDs, and corresponding parameters for the PMP IDs according to the NW-initiated report or UE-initiated report.
• In one variant, the UE may report the PMP IDs and corresponding parameters as a UE-triggered MAC-CE command, where the fields defined above are carried in the MAC-CE command.
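The UE-side reporting steps above could be sketched, in simplified form, as follows; the dictionary keys are illustrative names, not specified information elements:

```python
def build_ue_report(pmp_id: int, new_model: bool, ue_metric: int,
                    nw_metric=None, functionality_ids=None) -> dict:
    """Assemble the PMP-related information a UE might report.

    Field names are illustrative assumptions. A 'new_model' report type
    signals that earlier reports on this PMP ID are no longer applicable.
    """
    report = {"pmp_id": pmp_id,
              "report_type": "new_model" if new_model else "same_model",
              "ue_metric": ue_metric}
    if nw_metric is not None:          # echo back the last NW-provided metric
        report["nw_metric"] = nw_metric
    if functionality_ids:              # e.g. configuration IDs tied to this PMP
        report["functionality_ids"] = list(functionality_ids)
    return report
```

The optional fields are included only when applicable, mirroring the "if applicable" bullets above.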
[0054] With regard to the NW, when the UE is in connection with the NW, the following may happen at any point of the time:
• The NW may do performance monitoring/assessment for one or more PMP IDs, and may determine any changes associated with the NW-metric.
• The NW may indicate the changes associated with the NW-metric with the corresponding PMP IDs to the UE.
• Based on the UE assessment, the NW may receive a report related to one or more PMPs within the N PMPs.
[0055] Step 10 at 220 shows that the UE may send a report containing the model assessments associated with one or more PMP IDs. The reported PMP related information may include, for example, the PMP-ID and the associated type of report, UE-metric, NW-metric, and functionality IDs. Step 11 at 222 shows that the NW may perform an internal performance evaluation for PMP-IDs based on activated functionalities.
[0056] Based on the reported PMP related information, the NW may initiate signalling to select, switch, activate, or de-activate a background ML model used at the UE by implicitly referring to the PMP ID in the signalling indication. At the NW, similar to how the UE may have conducted performance monitoring and assessment, the NW may determine an assessment (NW-metric) for one or more PMP IDs considered for an active functionality based on its own learning at the NW. Such information can be further reported to the UE, as in step 12. The UE may update, as in step 14 at 228, the NW-metric associated with the corresponding PMP ID. Step 12, at 224, shows the UE receiving NW-sided PMP related information, such as including the PMP-ID and NW-metric for example. This may be referred to as model-ID-based LCM and, in such instances, the PMP ID may refer to the model ID. This may also be referred to as handling of additional conditions, wherein the NW may use the reported performance indicators and associated functionalities when determining the best background ML models in a given NW situation.
[0057] In step 13, shown at 226, the NW may store PMP-related information, such as under each corresponding PMP-ID for example. In addition to the reported parameters from the UE, the NW may also consider storing timestamp information, NW assumptions, configuration details, and other types of information useful when handling additional conditions. Here, the NW may also maintain past reports that correspond to a particular PMP-ID as long as the reports within a PMP-ID are related to each other (i.e., the same background model).
[0058] Referring also to step 15, at 230, and step 16, at 232, with good knowledge of PMP ID-related performances and their applicability in the past and present, both in terms of functionality and additional conditions, the NW is capable of deciding the best background models for the UE via PMP IDs. As indicated in step 16, the signaling can consider PMP IDs when handling the UE's background models when there are matching NW assumptions as NW-additional conditions associated with a background model.
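The NW-side storing behavior described above, including discarding earlier reports when a report signals a new background model, can be sketched with the following illustrative data structure (not a specified procedure; the report field names are assumptions):

```python
class PmpStore:
    """NW-side store of reports per PMP ID.

    A report of type 'new_model' invalidates the history kept for that
    PMP ID, since earlier reports referred to a different background model.
    """
    def __init__(self):
        self.history = {}  # pmp_id -> list of reports for the same background model

    def add(self, report: dict, timestamp: int) -> None:
        pmp_id = report["pmp_id"]
        if report.get("report_type") == "new_model":
            self.history[pmp_id] = []   # earlier reports are no longer applicable
        entry = dict(report, timestamp=timestamp)  # also keep a timestamp, per [0057]
        self.history.setdefault(pmp_id, []).append(entry)
```

NW assumptions or configuration details could be stored alongside each entry in the same way as the timestamp.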
[0059] A UE may keep only a limited number of ML models in the device and, with features as described herein, the NW may have implicit control of the ML models. When an ML model is changed or updated, the NW only needs to focus on the latest ML model details; storing unnecessary information regarding older versions is not needed. Because the UE may be able to do a model performance assessment, via inactive-model monitoring or from past learning, and because the signalling allows the NW to obtain such information on the UE's model assessment and provides a framework for updating these assessments, features as described herein may be used to provide an improvement for handling the additional conditions via the process of monitoring. For example, the UE may do an assessment for a model, UE-model_1, under a first set of NW assumptions (not known to the UE) and report the assessment back to the NW, and later the same UE model may update its assessment under a second set of NW assumptions (not known to the UE), which is again reported to the NW. Over time, the NW gets the UE's assessment of UE-model_1 for different NW assumptions. As this can also be derived for other models (model 2, 3, etc.) via a similar process, the NW can use performance monitoring process IDs to select or activate a ML model when the matching NW-additional conditions are used for inference.
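The selection logic described above, where the NW picks a model via its PMP ID based on assessments accumulated under different NW-side additional conditions, might be sketched as follows (the keyed-table structure and metric semantics are illustrative assumptions):

```python
def select_pmp(assessments: dict, nw_condition):
    """Pick the PMP ID whose model performed best under the NW-side
    additional condition now assumed for inference.

    assessments: dict of (pmp_id, condition) -> reported UE-metric,
    with higher values assumed to mean better performance.
    Returns the best PMP ID, or None if no report matches the condition.
    """
    matching = {pmp: metric for (pmp, cond), metric in assessments.items()
                if cond == nw_condition}
    return max(matching, key=matching.get) if matching else None
```

Here the NW would then signal activation or selection of the background model by implicitly referring to the returned PMP ID, as in step 16.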
[0060] An example embodiment may be provided with an apparatus comprising: at least one processor; and at least one non-transitory memory storing instructions that, when executed with
the at least one processor, cause the apparatus to perform: sending a capability message, where the apparatus supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the apparatus for the at least one enabled feature; based, at least partially, on the sent capability message, sending a report, where the report comprises: an identifier for a performance monitoring process, and associated information related to the performance monitoring process; and based, at least partially, on the sent report, receiving an activation or selection command from a network, where the command is at least partially based on the identifier of the performance monitoring process.
[0061] The performance monitoring process may be associated with a machine learning model. The command may be configured to enable use of the machine learning model associated with the performance monitoring process. The identifier may be dimensioned based on the maximum number of the performance monitoring processes. The associated information may comprise at least one of: an indication of whether there is a relation between the report and an earlier report associated with the same performance monitoring process, a user equipment metric, at least partially determined at the apparatus, wherein the user equipment metric provides a relative performance indication of the performance monitoring process, a network metric, received from a network, wherein the network metric provides a relative performance indication of the performance monitoring process, or an identifier of at least one functionality configuration associated with the performance monitoring process, wherein the functionality configuration is received from a network to enable the machine learning feature. The apparatus may comprise a user equipment and the capability message may be a user equipment capability message of the user equipment, and the maximum number of the performance monitoring processes which can be handled by the apparatus may be a maximum number of the performance monitoring processes that can be handled by the user equipment when supporting the at least one enabled feature of the machine learning at the user equipment. The machine learning may comprise a background machine learning model, and the performance monitoring process may be associated with the background machine learning model available to the user equipment. The report may be configured or defined to the user equipment based, at least partially, upon the capability message. 
The instructions, when executed with the at least one processor, may be configured to cause the apparatus to perform: based, at least partially, on the sent capability message, receiving at least one functionality configuration. The instructions, when executed with the at least one processor, may be
configured to cause the apparatus to perform: based, at least partially, on the sent capability message, receiving a configuration configured to enable reporting of performance monitoring process related information with the report. The instructions, when executed with the at least one processor, may be configured to cause the apparatus to perform: with input from a network, initiating an inference operation for one or more functionalities configured towards the user equipment. The instructions, when executed with the at least one processor, may be configured to cause the apparatus to perform: monitoring performance of the machine learning regarding active and inactive functionalities based on available measurements accessible to the apparatus. The instructions, when executed with the at least one processor, may be configured to cause the apparatus to perform: assessing performance of the machine learning regarding the active and inactive functionalities based on the available measurements accessible to the apparatus. The instructions, when executed with the at least one processor, may be configured to cause the apparatus to perform: receiving performance monitoring process information from a network; and updating a network-metric associated with the performance monitoring process. The instructions, when executed with the at least one processor, may be configured to cause the apparatus to perform: receiving a configuration from a network comprising information related to a number of performance monitoring processes determined by the network. 
The instructions, when executed with the at least one processor, may be configured to cause the apparatus to perform: determining when the number of performance monitoring processes in the received information from the network is lesser than the maximum number of the performance monitoring processes which can be handled by the apparatus; and selecting the lesser number of models to map with the performance monitoring process(es).
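The last behavior above — the network configuring fewer performance monitoring processes than the UE's reported maximum, with the UE then mapping a correspondingly smaller set of models — can be sketched as follows. The function name and the preference-ordered model list are assumptions for illustration; the specification does not say how the UE ranks its models.

```python
def map_models_to_pmps(models_by_preference, nw_num_pmps, ue_max_pmps):
    """Map background models to PMP IDs when the network configures
    fewer PMPs than the UE can handle. Assumes the UE orders its models
    by preference and keeps only as many as there are configured PMPs."""
    num_pmps = min(nw_num_pmps, ue_max_pmps)
    return {pmp_id: model
            for pmp_id, model in enumerate(models_by_preference[:num_pmps])}
```

For instance, a UE that declared a maximum of four PMPs but receives a configuration for two would end up monitoring only its two preferred models.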
[0062] Referring also to Fig. 3, an example method may be provided comprising: sending a capability message to an apparatus as illustrated with block 302, where the apparatus supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the apparatus for the at least one enabled feature; based, at least partially, on the sent capability message, sending a report as illustrated with block 304, where the report comprises: an identifier for a performance monitoring process, and associated information related to the performance monitoring process; and based, at least partially, on the sent report, receiving an activation or selection command from a network as illustrated with block 306, where the command is at least partially based on the identifier of the performance monitoring process. The performance monitoring process may be associated with a machine learning
model. The command may be configured to enable use of the machine learning model associated with the performance monitoring process. The identifier may be dimensioned based on the maximum number of the performance monitoring processes. The associated information may comprise at least one of an indication of whether there is a relation between the report and an earlier report associated with the same performance monitoring process, a user equipment metric, at least partially determined at the apparatus, wherein the user equipment metric provides a relative performance indication of the performance monitoring process, a network metric, received from a network, wherein the network metric provides a relative performance indication of the performance monitoring process, or an identifier of at least one functionality configuration associated with the performance monitoring process, wherein the functionality configuration is received from a network to enable the machine learning feature. The apparatus may comprise a user equipment and the capability message may be a user equipment capability message of the user equipment, and the maximum number of the performance monitoring processes which can be handled by the apparatus may be a maximum number of the performance monitoring processes that can be handled by the user equipment when supporting the at least one enabled feature of the machine learning at the user equipment. The machine learning may comprise a background machine learning model, and the performance monitoring process may be associated with the background machine learning model available to the user equipment. The report may be configured or defined to the user equipment based, at least partially, upon the capability message. The method may comprise at least partially based on the sent capability message, receiving at least one functionality configuration. 
The method may comprise at least partially based on the sent capability message, receiving a configuration configured to enable reporting of performance monitoring process related information with the report. The method may comprise, with input from a network, initiating an inference operation for one or more functionalities configured towards the user equipment. The method may comprise monitoring performance of the machine learning regarding active and inactive functionalities based on available measurements accessible to the apparatus. The method may comprise assessing performance of the machine learning regarding the active and inactive functionalities based on the available measurements accessible to the apparatus. The method may comprise: receiving performance monitoring process information from a network; and updating a network-metric associated with the performance monitoring process. The method may comprise receiving a configuration from a network comprising information related to a number of performance monitoring processes determined by the network. The method
may comprise: determining when the number of performance monitoring processes in the received information from the network is lesser than the maximum number of the performance monitoring processes which can be handled by the apparatus; and selecting the lesser number of models to map with the performance monitoring process(es).
[0063] An example embodiment may be provided with an apparatus comprising: means for sending a capability message, where the apparatus supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the apparatus for the at least one enabled feature; means for, based at least partially on the sent capability message, sending a report, where the report comprises: an identifier for a performance monitoring process, and associated information related to the performance monitoring process; and means for, based at least partially on the sent report, receiving an activation or selection command from a network, where the command is at least partially based on the identifier of the performance monitoring process.
[0064] An example embodiment may be provided with a non-transitory program storage device readable by an apparatus, tangibly embodying a program of instructions executable with the apparatus for performing operations, the operations comprising: sending a capability message, where the apparatus supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the apparatus for the at least one enabled feature; based, at least partially, on the sent capability message, sending a report, where the report comprises: an identifier for a performance monitoring process, and associated information related to the performance monitoring process; and based, at least partially, on the sent report, receiving an activation or selection command from a network, where the command is at least partially based on the identifier of the performance monitoring process.
[0065] An example embodiment may be provided with an apparatus comprising: at least one processor; and at least one non-transitory memory storing instructions that, when executed with the at least one processor, cause the apparatus to perform: receiving a capability message from a user equipment, where the user equipment supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the user equipment for the at least one enabled feature; and based at least partially upon receiving the capability message, sending
a configuration to the user equipment, where the configuration is configured to enable reporting of performance monitoring process related information by the user equipment.
[0066] A performance monitoring process, of the performance monitoring process related information, may be associated with a machine learning model. The instructions, when executed with the at least one processor, may be configured to cause the apparatus to perform: receiving a report from the user equipment, where the report comprises: an identifier for the performance monitoring process, and associated information related to the performance monitoring process. The instructions, when executed with the at least one processor, may be configured to cause the apparatus to perform: based at least partially upon the receiving of the report, determining an assessment for one or more performance monitoring processes considered for an active functionality based on machine learning at the apparatus. The instructions, when executed with the at least one processor, may be configured to cause the apparatus to perform: sending information to the user equipment regarding the assessment, where the information comprises one or more performance monitoring process IDs and one or more network metrics for the one or more performance monitoring process IDs. The instructions, when executed with the at least one processor, may be configured to cause the apparatus to perform: storing the information under each corresponding performance monitoring process ID. The storing may comprise storing at least one of: timestamp information, network assumptions, configuration details, or other types of information for handling additional conditions. The instructions, when executed with the at least one processor, may be configured to cause the apparatus to perform: determining one or more background models for the user equipment to use, where the determining uses the performance monitoring process IDs. 
The instructions, when executed with the at least one processor, may be configured to cause the apparatus to perform: determining that there is a network assumption as a network additional condition matching with an associated background model; and based upon the determining of the matching, sending to the user equipment the performance monitoring process IDs for the determined one or more background models for the user equipment to use.
[0067] Referring also to Fig. 4, an example method may be provided comprising: receiving a capability message from a user equipment as illustrated with block 402, where the user equipment supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the user equipment for the at least one enabled feature; and based at least partially upon receiving the capability message, sending a configuration to the user
equipment as illustrated with block 404, where the configuration is configured to enable reporting of performance monitoring process related information by the user equipment. A performance monitoring process, of the performance monitoring process related information, may be associated with a machine learning model. The method may comprise receiving a report from the user equipment, where the report comprises: an identifier for the performance monitoring process, and associated information related to the performance monitoring process. The method may comprise, based at least partially upon the receiving of the report, determining an assessment for one or more performance monitoring processes considered for an active functionality based on machine learning at the apparatus. The method may comprise sending information to the user equipment regarding the assessment, where the information comprises one or more performance monitoring process IDs and one or more network metrics for the one or more performance monitoring process IDs. The method may comprise storing the information under each corresponding performance monitoring process ID. The storing may comprise storing at least one of timestamp information, network assumptions, configuration details, or other types of information for handling additional conditions. The method may comprise determining one or more background models for the user equipment to use, where the determining may at least partially use the performance monitoring process IDs for indexing of the storage. The method may comprise determining that there is a network assumption as a network additional condition matching with an associated background model; and based upon the determining of the matching, sending to the user equipment the performance monitoring process IDs for the determined one or more background models for the user equipment to use. 
[0068] An example embodiment may be provided with an apparatus comprising: means for receiving a capability message from a user equipment, where the user equipment supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the user equipment for the at least one enabled feature; and means for, based at least partially upon receiving the capability message, sending a configuration to the user equipment, where the configuration is configured to enable reporting of performance monitoring process related information by the user equipment.
[0069] An example embodiment may be provided with a non-transitory program storage device readable by an apparatus, tangibly embodying a program of instructions executable with the apparatus for performing operations, the operations comprising: receiving a capability message from a user equipment, where the user equipment supports at least one enabled feature
of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the user equipment for the at least one enabled feature; and based at least partially upon receiving the capability message, sending a configuration to the user equipment, where the configuration is configured to enable reporting of performance monitoring process related information by the user equipment.
[0070] An example embodiment may be provided with an apparatus comprising: means for reporting, by a UE, in a UE capability message: a maximum number for performance monitoring processes (PMPs) that can be handled by the UE when supporting at least one ML-enabled feature at the UE, wherein each performance monitoring process is associated with a background ML model available to the UE; means for reporting, by the UE, a report which is configured or defined to the UE, wherein the report carries at least an identifier for at least one performance monitoring process (PMP) and associated information related to the at least one performance monitoring process, wherein the identifier is dimensioned based on the maximum number for performance monitoring processes and the associated information comprises one or more of the following:
• whether there is any relation between the latest report and an earlier report associated with the same PMP,
• a UE metric, determined at the UE, wherein the metric provides a relative performance of the PMP,
• a NW metric, received from the NW in an earlier instance, wherein the metric provides a relative performance of the PMP,
• identifiers of one or more ML configurations associated with the PMP, wherein the ML configurations are received from the NW to enable the ML feature; and means for receiving, by the UE, an activation or selection command from the NW, wherein the command is based on the identifier for the at least one PMP, and the activation or selection enables use of the background ML model associated with the at least one PMP.
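The report fields listed above can be collected into a simple record, with the PMP identifier dimensioned from the maximum reported in the UE capability message. The dataclass and bit-width helper below are a hypothetical sketch, not a specified encoding; in particular, treating "dimensioned based on the maximum number" as a minimal binary field width is an assumption.

```python
import math
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PmpReport:
    """Illustrative container for the report contents of paragraph [0070]."""
    pmp_id: int                  # identifier, dimensioned by the UE's maximum
    related_to_earlier: bool     # same background model as the earlier report?
    ue_metric: Optional[float]   # relative performance determined at the UE
    nw_metric: Optional[float]   # relative performance received from the NW
    config_ids: List[int]        # ML (functionality) configurations for this PMP

def pmp_id_bits(max_pmps: int) -> int:
    # Enough bits to address every PMP the UE declared in its
    # capability message (at least one bit).
    return max(1, math.ceil(math.log2(max_pmps)))
```

For example, a UE declaring a maximum of eight PMPs would need a three-bit identifier field under this interpretation.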
[0071] The term “non-transitory,” as used herein, is a limitation of the medium itself (i.e., tangible, not a signal) as opposed to a limitation on data storage persistency (e.g., RAM vs. ROM).
[0072] As used in this application, the term “circuitry” may refer to one or more or all of the following:
(a) hardware-only circuit implementations (such as implementations in only analog and/or digital circuitry) and
(b) combinations of hardware circuits and software, such as (as applicable):
(i) a combination of analog and/or digital hardware circuit(s) with software/firmware and
(ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions, and
(iii) hardware circuit(s) and/or processor(s), such as a microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation.
[0073] This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in server, a cellular network device, or other computing or network device.
[0074] It should be understood that the foregoing description is only illustrative. Various alternatives and modifications can be devised by those skilled in the art. For example, features recited in the various dependent claims could be combined with each other in any suitable combination(s). In addition, features from different embodiments described above could be selectively combined into a new embodiment. Accordingly, the description is intended to embrace all such alternatives, modifications and variances which fall within the scope of the appended claims.
Claims
1. An apparatus comprising: at least one processor; and at least one non-transitory memory storing instructions that, when executed with the at least one processor, cause the apparatus to perform: sending a capability message, where the apparatus supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the apparatus for the at least one enabled feature; based, at least partially, on the sent capability message, sending a report, where the report comprises: an identifier for a performance monitoring process, and associated information related to the performance monitoring process; and based, at least partially, on the sent report, receiving an activation or selection command from a network, where the command is at least partially based on the identifier of the performance monitoring process.
2. The apparatus as claimed in claim 1 where the performance monitoring process is associated with a machine learning model.
3. The apparatus as claimed in claim 2 where the command is configured to enable use of the machine learning model associated with the performance monitoring process.
4. The apparatus as claimed in any one of claims 1-3 where the identifier is dimensioned based on the maximum number of the performance monitoring processes.
5. The apparatus as claimed in any one of claims 1-4 where the associated information comprises at least one of: an indication of whether there is a relation between the report and an earlier report associated with the same performance monitoring process,
a user equipment metric, at least partially determined at the apparatus, wherein the user equipment metric provides a relative performance indication of the performance monitoring process, a network metric, received from a network, wherein the network metric provides a relative performance indication of the performance monitoring process, or an identifier of at least one functionality configuration associated with the performance monitoring process, wherein the functionality configuration is received from a network to enable the machine learning feature.
6. The apparatus as claimed in any one of claims 1-5 where the apparatus comprises a user equipment and the capability message is a user equipment capability message of the user equipment, and where the maximum number of the performance monitoring processes which can be handled by the apparatus is a maximum number of the performance monitoring processes that can be handled by the user equipment when supporting the at least one enabled feature of the machine learning at the user equipment.
7. The apparatus as claimed in any one of claims 1-6 where the machine learning comprises a background machine learning model, and where the performance monitoring process is associated with the background machine learning model available to the user equipment.
8. The apparatus as claimed in any one of claims 1-7 where the report is configured or defined to the user equipment based, at least partially, upon the capability message.
9. The apparatus as claimed in any one of claims 1-8 where the instructions, when executed with the at least one processor, cause the apparatus to perform: based, at least partially, on the sent capability message, receiving at least one functionality configuration.
10. The apparatus as claimed in any one of claims 1-9 where the instructions, when executed with the at least one processor, cause the apparatus to perform: based, at least partially, on the sent capability message, receiving a configuration configured to enable reporting of performance monitoring process related information with the report.
11. The apparatus as claimed in any one of claims 1-9 where the instructions, when executed with the at least one processor, cause the apparatus to perform: with input from a network, initiating an inference operation for one or more functionalities configured towards the user equipment.
12. The apparatus as claimed in any one of claims 1-10 where the instructions, when executed with the at least one processor, cause the apparatus to perform: monitoring performance of the machine learning regarding active and inactive functionalities based on available measurements accessible to the apparatus.
13. The apparatus as claimed in claim 12 where the instructions, when executed with the at least one processor, cause the apparatus to perform: assessing performance of the machine learning regarding the active and inactive functionalities based on the available measurements accessible to the apparatus.
14. The apparatus as claimed in any one of claims 1-13 where the instructions, when executed with the at least one processor, cause the apparatus to perform: receiving performance monitoring process information from a network; and updating a network-metric associated with the performance monitoring process.
15. The apparatus as claimed in any one of claims 1-14 where the instructions, when executed with the at least one processor, cause the apparatus to perform: receiving a configuration from a network comprising information related to a number of performance monitoring processes determined by the network.
16. The apparatus as claimed in claim 15 where the instructions, when executed with the at least one processor, cause the apparatus to perform: determining when the number of performance monitoring processes in the received information from the network is lesser than the maximum number of the performance monitoring processes which can be handled by the apparatus; and selecting the lesser number of models to map with the performance monitoring process(es).
17. A method comprising: sending a capability message to an apparatus, where the apparatus supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the apparatus for the at least one enabled feature; based, at least partially, on the sent capability message, sending a report, where the report comprises: an identifier for a performance monitoring process, and associated information related to the performance monitoring process; and based, at least partially, on the sent report, receiving an activation or selection command from a network, where the command is at least partially based on the identifier of the performance monitoring process.
18. An apparatus comprising: means for sending a capability message, where the apparatus supports at least one enabled feature of machine learning, and where the capability message comprises an indication of a maximum number of performance monitoring processes which can be handled by the apparatus for the at least one enabled feature; means for, based at least partially on the sent capability message, sending a report, where the report comprises: an identifier for a performance monitoring process, and associated information related to the performance monitoring process; and means for, based at least partially on the sent report, receiving an activation or selection command from a network, where the command is at least partially based on the identifier of the performance monitoring process.
19. An apparatus comprising: at least one processor; and at least one non-transitory memory storing instructions that, when executed with the at least one processor, cause the apparatus to perform: receiving a capability message from a user equipment, where the user equipment supports at least one enabled feature of machine learning, and where the capability message
comprises an indication of a maximum number of performance monitoring processes which can be handled by the user equipment for the at least one enabled feature; and based at least partially upon receiving the capability message, sending a configuration to the user equipment, where the configuration is configured to enable reporting of performance monitoring process related information by the user equipment.
20. The apparatus as claimed in claim 19 where a performance monitoring process, of the performance monitoring process related information, is associated with a machine learning model.
21. The apparatus as claimed in any one of claims 19-20 where the instructions, when executed with the at least one processor, cause the apparatus to perform: receiving a report from the user equipment, where the report comprises: an identifier for the performance monitoring process, and associated information related to the performance monitoring process.
22. The apparatus as claimed in claim 21 where the instructions, when executed with the at least one processor, cause the apparatus to perform: based at least partially upon the receiving of the report, determining an assessment for one or more performance monitoring processes considered for an active functionality based on machine learning at the apparatus.
23. The apparatus as claimed in claim 22 where the instructions, when executed with the at least one processor, cause the apparatus to perform: sending information to the user equipment regarding the assessment, where information comprises one or more performance monitoring process IDs and one or more network metrics for the one or more performance monitoring process IDs.
24. The apparatus as claimed in claim 23 where the instructions, when executed with the at least one processor, cause the apparatus to perform: storing the information under each corresponding performance monitoring process ID.
25. The apparatus as claimed in claim 24 where the storing further comprises storing at least one of: timestamp information, network assumptions, configuration details, or other types of information for handling additional conditions.
26. The apparatus as claimed in any one of claims 19-25 where the instructions, when executed with the at least one processor, cause the apparatus to perform: determining one or more background models for the user equipment to use, where the determining uses the performance monitoring process IDs.
27. The apparatus as claimed in claim 26 where the instructions, when executed with the at least one processor, cause the apparatus to perform: determining that there is a network assumption as a network additional condition matching with an associated background model; and based upon the determining of the matching, sending to the user equipment the performance monitoring process IDs for the determined one or more background models for the user equipment to use.
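The network-side bookkeeping described in claims 21-27 — storing report content under each performance monitoring process ID together with timestamps and network assumptions, then selecting background models whose assumptions match the current network additional condition — can be sketched as follows. This is an illustrative model only, not an implementation of any 3GPP signalling; all class, method, and field names (`NetworkStore`, `MonitoringRecord`, `select_background_models`, the assumption labels) are hypothetical.

```python
import time
from dataclasses import dataclass


@dataclass
class MonitoringRecord:
    # Report content stored under a performance monitoring process ID
    # (claims 21, 24, 25): the associated information plus a timestamp
    # and a network assumption for handling additional conditions.
    process_id: int
    associated_info: dict
    timestamp: float
    network_assumption: str = ""


class NetworkStore:
    """Hypothetical network-side store keyed by monitoring process ID."""

    def __init__(self):
        self._records = {}            # process ID -> list of MonitoringRecord
        self._background_models = {}  # model name -> network assumption

    def store_report(self, process_id, associated_info, network_assumption=""):
        # Claims 24-25: store the report under its process ID, with
        # timestamp information and the applicable network assumption.
        rec = MonitoringRecord(process_id, associated_info,
                               time.time(), network_assumption)
        self._records.setdefault(process_id, []).append(rec)
        return rec

    def register_background_model(self, model_name, network_assumption):
        # Claim 26: background models the user equipment could use,
        # each tagged with the assumption under which it applies.
        self._background_models[model_name] = network_assumption

    def select_background_models(self, current_assumption):
        # Claim 27: keep only models whose network assumption matches
        # the current network additional condition.
        return [name for name, assumption in self._background_models.items()
                if assumption == current_assumption]
```

A matching model list selected this way would then be signalled to the user equipment via the corresponding performance monitoring process IDs, per claim 27.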
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202463554268P | 2024-02-16 | 2024-02-16 | |
| US63/554,268 | 2024-02-16 | 2024-02-16 | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025171907A1 (en) | 2025-08-21 |
Family
ID=93744102
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2024/083753 (Pending) | Ensuring consistency between training and inference stages via monitoring procedures | 2024-02-16 | 2024-11-27 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025171907A1 (en) |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2023211343A1 (en) * | 2022-04-29 | 2023-11-02 | Telefonaktiebolaget Lm Ericsson (Publ) | Machine learning model feature set reporting |
Non-Patent Citations (3)
| Title |
|---|
| KEETH JAYASINGHE ET AL: "AI/ML for Beam Management", vol. RAN WG1, no. Hefei, CN; 20241014 - 20241018, 4 October 2024 (2024-10-04), XP052655799, Retrieved from the Internet <URL:https://www.3gpp.org/ftp/TSG_RAN/WG1_RL1/TSGR1_118b/Docs/R1-2408544.zip R1-2408544_AIML for BM.docx> [retrieved on 20241004] * |
| KEETH JAYASINGHE ET AL: "Other aspects on ML for positioning accuracy enhancement", vol. RAN WG1, no. Toulouse, FR; 20230821 - 20230825, 11 August 2023 (2023-08-11), XP052436473, Retrieved from the Internet <URL:https://www.3gpp.org/ftp/TSG_RAN/WG1_RL1/TSGR1_114/Docs/R1-2307243.zip R1-2307243_Other Aspects on ML for Positioning.docx> [retrieved on 20230811] * |
| NOKIA ET AL: "Other aspects on ML for positioning accuracy enhancement", vol. 3GPP RAN 1, no. Electronic Meeting; 20230417 - 20230426, 7 April 2023 (2023-04-07), XP052352118, Retrieved from the Internet <URL:https://ftp.3gpp.org/tsg_ran/WG1_RL1/TSGR1_112b-e/Docs/R1-2302633.zip R1-2302633_Other Aspects on ML for Positioning.docx> [retrieved on 20230407] * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20210256855A1 (en) | Information transmission methods and apparatuses | |
| CN115914986A (en) | Perception data transmission method, device, apparatus and storage medium | |
| CN111989948A (en) | Positioning measurement data reporting method, device, terminal and storage medium | |
| JP7201707B2 (en) | Method and apparatus for transmitting information | |
| US20160337805A1 (en) | User equipment, device to device user equipment, base station, backhaul device and assistant positioning method for device to device user equipment | |
| JP2022501949A (en) | Information processing methods, communication devices, systems, and storage media | |
| JP7488359B2 (en) | Conditional Measurement Reporting Mode for Positioning | |
| US20240292301A1 (en) | Network slice specific conditional handover optimization | |
| CN116347356A (en) | Communication method, device and system | |
| CN115150937A (en) | A communication method and device | |
| US20250374225A1 (en) | Method for identifying sidelink positioning synchronization sources | |
| JP2025510370A (en) | Information transmission method and network element | |
| CN115843098A (en) | Positioning method, positioning device and readable storage medium | |
| WO2025171907A1 (en) | Ensuring consistency between training and inference stages via monitoring procedures | |
| CN110346754B (en) | Positioning time obtaining method and device | |
| CN116801182A (en) | Method and device for determining positioning method | |
| US20240345935A1 (en) | Procedure for pre-deployment validation of ai/ml enabled feature | |
| US12339381B2 (en) | Enhancing positioning measurement | |
| WO2025210457A1 (en) | Selection of positioning method with multiple requirements | |
| US20240114481A1 (en) | Method for determining positioning integrity based on speed of location estimation | |
| EP4418776A1 (en) | Method, apparatus and computer program | |
| EP4668914A1 (en) | Information processing method and apparatus, communication device, and storage medium | |
| WO2024082484A1 (en) | Method and apparatus of supporting positioning method selection | |
| CN108401499B (en) | Method, device and system for determining position information | |
| WO2025021346A1 (en) | Applying conditions for input parameters for ue-sided prediction using an ml model |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24817195; Country of ref document: EP; Kind code of ref document: A1 |