WO2024239663A1 - Performance monitoring for an AI/ML functionality or an AI/ML model - Google Patents
Performance monitoring for an AI/ML functionality or an AI/ML model
- Publication number
- WO2024239663A1 (PCT/CN2024/070851)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- monitoring
- functionality
- performance
- performance metric
- Prior art date
- Legal status
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/10—Scheduling measurement reports ; Arrangements for measurement reports
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W24/00—Supervisory, monitoring or testing arrangements
- H04W24/02—Arrangements for optimising operational condition
Definitions
- the present disclosure relates to wireless communications, and more specifically to a user equipment (UE) , a network device, processors for wireless communication, methods, and non-transitory computer readable media for performance monitoring for artificial intelligence/machine learning (AI/ML) functionality or AI/ML model.
- UE user equipment
- AI/ML artificial intelligence/machine learning
- a wireless communications system may include one or multiple network communication devices, such as base stations, which may be otherwise known as an eNodeB (eNB) , a next-generation NodeB (gNB) , or other suitable terminology.
- Each network communication device, such as a base station, may support wireless communications for one or multiple user communication devices, which may otherwise be known as user equipment (UE) , or other suitable terminology.
- the wireless communications system may support wireless communications with one or multiple user communication devices by utilizing resources of the wireless communication system (e.g., time resources (e.g., symbols, slots, subframes, frames, or the like) or frequency resources (e.g., subcarriers, carriers)) .
- the wireless communications system may support wireless communications across various radio access technologies including third generation (3G) radio access technology, fourth generation (4G) radio access technology, fifth generation (5G) radio access technology, among other suitable radio access technologies beyond 5G (e.g., sixth generation (6G) ) .
- 3G third generation
- 4G fourth generation
- 5G fifth generation
- 6G sixth generation
- a life cycle management (LCM) procedure manages an AI/ML model over its entire life cycle.
- LCM life cycle management
- ID model identification
- functionality-based LCM
- Further study on functionality-based and model-ID-based LCM is still needed.
- the present disclosure relates to methods, apparatuses, and systems that support performance monitoring for AI/ML functionality or AI/ML model.
- a framework of performance monitoring for AI/ML functionality or AI/ML model may be designed.
- a UE receives, from a base station, a parameter configuration for monitoring performance of an artificial intelligence/machine learning (AI/ML) functionality or performance of an AI/ML model.
- the UE transmits, to the base station, a result of the monitoring of the performance of the AI/ML functionality or the performance of the AI/ML model based on the parameter configuration.
- AI/ML artificial intelligence/machine learning
- the parameter configuration may include at least one of the following: a condition for triggering the monitoring; a report quantity for specifying a content type of the result to be transmitted; a time requirement for reporting the result; an indication of a level of the monitoring in the case that the parameter configuration is associated with the AI/ML functionality; a performance metric target associated with the AI/ML model in the case that the parameter configuration is associated with the AI/ML model; at least one performance metric target associated with at least one AI/ML model to be assessed in the case that the parameter configuration is associated with the AI/ML functionality, wherein the at least one AI/ML model to be assessed is associated with the AI/ML functionality; a resource configuration for reference signal (RS) transmissions for the monitoring; a time duration for RS transmissions for the monitoring; the number of performance metric samples needed for the monitoring in the case that the parameter configuration is associated with the AI/ML model; or at least one number of performance metric samples needed for the monitoring corresponding to at least one AI/ML model to be assessed in the case that the parameter configuration is associated with the AI/ML functionality.
- RS reference signal
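The parameter configuration described above can be pictured as a simple container of optional fields. The following is a minimal sketch; all field names are illustrative assumptions and do not come from the disclosure or from any 3GPP specification:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MonitoringConfig:
    """Hypothetical container for the monitoring parameter configuration.

    Every field name here is an assumption made for illustration only.
    """
    trigger_condition: Optional[str] = None       # event- or periodicity-based trigger
    report_quantity: Optional[str] = None         # content type of the result to report
    report_time_requirement_ms: Optional[int] = None
    monitoring_level: Optional[int] = None        # only for functionality-based monitoring
    metric_targets: dict = field(default_factory=dict)  # model ID -> performance target
    rs_resource_config: Optional[str] = None      # RS resources used to derive metrics
    rs_time_duration_ms: Optional[int] = None     # window of RS transmissions
    samples_needed: dict = field(default_factory=dict)  # model ID -> sample count

# Example: a functionality-associated configuration assessing one model.
cfg = MonitoringConfig(trigger_condition="periodic",
                       report_quantity="performance_metric",
                       metric_targets={"model-1": 0.9},
                       samples_needed={"model-1": 20})
```

A base station would populate only the fields relevant to whether the configuration targets a functionality or a single model; the remaining fields stay `None`.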
- the condition may include an occurrence of an event associated with potential degradation of the performance of the AI/ML functionality or the performance of the AI/ML model.
- Some implementations of the method and apparatuses described herein may further include: transmitting, to the base station, an indication that the condition is fulfilled in the case that the event occurs for a time period reaching a threshold, wherein the threshold is predefined or configured.
- the condition may include a periodicity and an offset.
- Some implementations of the method and apparatuses described herein may further include: starting the monitoring at a time instance associated with the periodicity and the offset.
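The periodicity-and-offset condition above amounts to starting the monitoring at the next time instance that satisfies a modular timing rule. A minimal sketch, assuming slot-based timing (the time unit is an assumption; the disclosure does not fix one):

```python
def next_monitoring_start(current_slot: int, periodicity: int, offset: int) -> int:
    """Return the first slot >= current_slot at which monitoring may start,
    i.e. the next slot n satisfying n % periodicity == offset.

    Slot-based timing is an illustrative assumption.
    """
    remainder = (offset - current_slot) % periodicity
    return current_slot + remainder

# With periodicity 40 slots and offset 10, valid start slots are 10, 50, 90, 130, ...
# next_monitoring_start(95, 40, 10) -> 130
```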
- In some implementations of the method and apparatuses described herein, the content type is one of the following: an event type associated with the performance of the AI/ML functionality or the performance of the AI/ML model; a performance metric type; or none.
- transmitting the result of the monitoring may include: transmitting, to the base station, at least one performance metric of the AI/ML model.
- transmitting the result of the monitoring may include: transmitting, to the base station, an indication of an event that the AI/ML model is not applicable in the case of one of the following: a ratio of sample failures of the AI/ML model in the time duration for RS transmissions for the monitoring or among the number of performance metric samples needed for the monitoring reaches a maximum ratio of sample failures; the number of sample failures of the AI/ML model before end of the time duration for RS transmissions for the monitoring reaches a maximum number of sample failures; the number of sample failures of the AI/ML model among performance metric samples obtained for the monitoring reaches a maximum number of sample failures, wherein the number of the obtained performance metric samples is less than or equal to the number of performance metric samples needed for the monitoring; the number of consecutive sample failures of the AI/ML model before end of the time duration for RS transmissions for the monitoring reaches a maximum number of consecutive sample failures; the number of consecutive sample failures of the AI/ML model among performance metric samples obtained for the monitoring reaches a maximum number of consecutive sample failures; or a time duration with sample failures of the AI/ML model reaches a maximum duration of model failure.
- transmitting the result of the monitoring may include: transmitting, to the base station, an indication of an event that the AI/ML model is applicable in the case of one of the following: a ratio of sample failures of the AI/ML model in the time duration for RS transmissions for the monitoring or among the number of performance metric samples needed for the monitoring does not reach a maximum ratio of sample failures; the number of sample failures of the AI/ML model in the time duration for RS transmissions for the monitoring or among the number of performance metric samples needed for the monitoring does not reach a maximum number of sample failures; the number of consecutive sample failures of the AI/ML model in the time duration for RS transmissions for the monitoring or among the number of performance metric samples needed for the monitoring does not reach a maximum number of consecutive sample failures; or a time duration with sample failures of the AI/ML model in the time duration for RS transmissions for the monitoring or among the number of performance metric samples needed for the monitoring does not reach a maximum duration of model failure
- the parameter configuration may include the performance metric target associated with the AI/ML model, and a sample failure of the AI/ML model is determined by a performance metric of the AI/ML model for a performance metric sample being worse than the performance metric target.
- the performance of the AI/ML model is monitored. Transmitting the result of the monitoring may include: transmitting, to the base station, the result of the monitoring at one of the following time instances: a first time instance before end of the time duration for RS transmissions for the monitoring or when the number of performance metric samples obtained for the monitoring is less than the number of performance metric samples needed for the monitoring in the case of one of the following: the number of sample failures of the AI/ML model reaches a maximum number of sample failures at the first time instance, the number of consecutive sample failures of the AI/ML model reaches a maximum number of consecutive sample failures at the first time instance, or a time duration with sample failures of the AI/ML model reaches a maximum duration of model failure at the first time instance; or a second time instance after the end of the time duration for RS transmissions for the monitoring in the case that the result of the monitoring has not been transmitted or in the case that a ratio of sample failures of the AI/ML model reaches the maximum ratio of sample failures at the second time instance.
- the parameter configuration further may include one of the following: the maximum ratio of sample failures; the maximum number of sample failures; the maximum number of consecutive sample failures; or the maximum duration of model failure.
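The applicability decision described in the preceding bullets reduces to counting sample failures against configured maxima. The following is a hedged sketch of that failure-counting logic; it assumes a higher-is-better metric and the threshold parameter names are illustrative, not taken from the disclosure:

```python
def model_applicable(sample_metrics, target,
                     max_fail_ratio=None, max_failures=None,
                     max_consecutive=None):
    """Decide whether an AI/ML model remains applicable over a monitoring
    window, following the failure-counting criteria sketched above.

    A sample is a failure when its metric is worse than the configured
    target (here assumed: lower is worse). All parameter names are
    illustrative assumptions.
    """
    failures = 0
    consecutive = 0
    worst_consecutive = 0
    for metric in sample_metrics:
        if metric < target:          # sample failure
            failures += 1
            consecutive += 1
            worst_consecutive = max(worst_consecutive, consecutive)
        else:
            consecutive = 0
    n = len(sample_metrics)
    # Reaching any configured maximum means the model is not applicable.
    if max_fail_ratio is not None and n and failures / n >= max_fail_ratio:
        return False
    if max_failures is not None and failures >= max_failures:
        return False
    if max_consecutive is not None and worst_consecutive >= max_consecutive:
        return False
    return True
```

The same criteria could equivalently be checked sample by sample to allow early reporting before the end of the RS time duration, as the first-time-instance case above describes.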
- the performance of the AI/ML functionality is based on at least one performance of at least one AI/ML model to be assessed among one or more AI/ML models associated with the AI/ML functionality.
- the indication of the level is indicative of one of the following: a first monitoring level in which one AI/ML model associated with the AI/ML functionality is to be assessed during the monitoring; a second monitoring level in which a default AI/ML model and an AI/ML model associated with the AI/ML functionality are to be assessed during the monitoring; or a third monitoring level in which a plurality of AI/ML models associated with the AI/ML functionality are to be assessed during the monitoring.
- the indication of the level is indicative of the third monitoring level.
- Some implementations of the method and apparatuses described herein may further include: receiving, from the base station, one of the following: identifications of the plurality of AI/ML models; the number of AI/ML models associated with the AI/ML functionality that are to be assessed; or a maximum number of AI/ML models associated with the AI/ML functionality that are to be assessed. The result of the monitoring is based on the performances of the plurality of AI/ML models.
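The three monitoring levels above can be encoded as an enumeration that determines which models are assessed. A minimal sketch; the selection rule among associated models and all names are assumptions for illustration:

```python
from enum import Enum

class MonitoringLevel(Enum):
    """Illustrative encoding of the three monitoring levels described above."""
    SINGLE_MODEL = 1   # assess one model associated with the functionality
    WITH_DEFAULT = 2   # assess a default model plus one associated model
    MULTI_MODEL = 3    # assess several models associated with the functionality

def models_to_assess(level, associated_models, default_model=None,
                     max_models=None):
    """Return the model IDs to assess for a given level.

    Taking the first associated model(s) is an arbitrary illustrative
    choice; the disclosure leaves the selection to configuration
    (e.g. explicit model identifications from the base station).
    """
    if level is MonitoringLevel.SINGLE_MODEL:
        return associated_models[:1]
    if level is MonitoringLevel.WITH_DEFAULT:
        return [default_model] + associated_models[:1]
    candidates = associated_models
    if max_models is not None:        # cap from the base station, if configured
        candidates = candidates[:max_models]
    return list(candidates)
```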
- the parameter configuration may include the at least one performance metric target associated with the at least one AI/ML model to be assessed.
- Some implementations of the method and apparatuses described herein may further include: determining a sample failure of a respective AI/ML model among the at least one AI/ML model in the case that a performance metric for a performance metric sample of the respective AI/ML model is worse than a performance metric target associated with the respective AI/ML model.
- Some implementations of the method and apparatuses described herein may further include: determining that the respective AI/ML model is not applicable in the case of one of the following: a ratio of sample failures of the respective AI/ML model in the time duration for RS transmissions for the monitoring or among a respective number of performance metric samples needed for the monitoring corresponding to the respective AI/ML model reaches a respective maximum ratio of sample failures; the number of sample failures of the respective AI/ML model before end of the time duration for RS transmissions for the monitoring reaches a respective maximum number of sample failures; the number of sample failures of the respective AI/ML model among performance metric samples of the respective AI/ML model obtained for the monitoring reaches a respective maximum number of sample failures, wherein the number of the obtained performance metric samples of the respective AI/ML model is less than or equal to the respective number of performance metric samples needed for the monitoring corresponding to the respective AI/ML model; the number of consecutive sample failures of the respective AI/ML model before end of the time duration for RS transmissions for the monitoring reaches a respective maximum number of consecutive sample failures; or a time duration with sample failures of the respective AI/ML model reaches a respective maximum duration of model failure.
- Some implementations of the method and apparatuses described herein may further include: determining that the respective AI/ML model is applicable in the case of one of the following: a ratio of sample failures of the respective AI/ML model in the time duration for RS transmissions for the monitoring or among a respective number of performance metric samples needed for the monitoring does not reach a respective maximum ratio of sample failures; the number of sample failures of the respective AI/ML model in the time duration for RS transmissions for the monitoring or among a respective number of performance metric samples needed for the monitoring does not reach a respective maximum number of sample failures; the number of consecutive sample failures of the respective AI/ML model in the time duration for RS transmissions for the monitoring or among a respective number of performance metric samples needed for the monitoring does not reach a respective maximum number of consecutive sample failures; or a time duration with sample failures of the respective AI/ML model in the time duration for RS transmissions for the monitoring or among a respective number of performance metric samples needed for the monitoring does not reach a respective maximum duration of model failure.
- transmitting the result of the monitoring may include: transmitting, to the base station, the result of the monitoring at one of the following time instances: a first time instance before end of the time duration for RS transmissions for the monitoring or when the number of performance metric samples obtained for the monitoring for a respective AI/ML model among the at least one AI/ML model is less than a respective number of performance metric samples needed for the monitoring in the case of one of the following: for each of the at least one AI/ML model, the number of sample failures of the AI/ML model reaches a respective maximum number of sample failures at the first time instance; for each of the at least one AI/ML model, the number of consecutive sample failures of the AI/ML model reaches a respective maximum number of consecutive sample failures at the first time instance; for each of the at least one AI/ML model, a time duration with sample failures of the AI/ML model reaches a respective maximum duration of model failure at the first time instance; or a second time instance after the end of the time duration for RS transmissions for the monitoring in the case that the result of the monitoring has not been transmitted.
- the parameter configuration further may include one of the following: at least one maximum ratio of sample failures for the at least one AI/ML model; at least one maximum number of sample failures for the at least one AI/ML model; at least one maximum number of consecutive sample failures for the at least one AI/ML model; or at least one maximum duration of model failure for the at least one AI/ML model.
- the indication of the level is indicative of the first monitoring level
- the content type is a performance metric type.
- Transmitting the result of the monitoring may include: transmitting, to the base station, at least one performance metric of the AI/ML model associated with the AI/ML functionality.
- the indication of the level is indicative of the second monitoring level
- the content type is a performance metric type.
- Transmitting the result of the monitoring may include: transmitting, to the base station, at least one performance metric of the default AI/ML model and at least one performance metric of the AI/ML model associated with the AI/ML functionality.
- the indication of the level is indicative of the third monitoring level
- the content type is a performance metric type.
- Transmitting the result of the monitoring may include: transmitting, to the base station, a plurality of performance metrics of the plurality of AI/ML models associated with the AI/ML functionality.
- the indication of the level is indicative of the first monitoring level
- the content type is an event type associated with the performance of the AI/ML functionality.
- Transmitting the result of the monitoring may include: transmitting, to the base station, one of the following: an indication of an event that the AI/ML functionality is applicable in the case that the AI/ML model associated with the AI/ML functionality is applicable; or an indication of an event that the AI/ML functionality is not applicable in the case that the AI/ML model associated with the AI/ML functionality is not applicable.
- the indication of the level is indicative of the second monitoring level
- the content type is an event type associated with the performance of the AI/ML functionality.
- Transmitting the result of the monitoring may include: transmitting, to the base station, one of the following: an indication of an event that the AI/ML functionality is applicable in the case that the default AI/ML model and the AI/ML model associated with the AI/ML functionality are applicable; an indication of an event that the AI/ML functionality is applicable with the default AI/ML model in the case that the default AI/ML model is applicable while the AI/ML model associated with the AI/ML functionality is not applicable; or an indication of an event that the AI/ML functionality is not applicable in the case that both the default AI/ML model and the AI/ML model associated with the AI/ML functionality are not applicable.
- the indication of the level is indicative of the third monitoring level
- the content type is an event type associated with the performance of the AI/ML functionality.
- Transmitting the result of the monitoring may include: transmitting, to the base station, one of the following: an indication of an event that the AI/ML functionality is applicable in the case that at least one of the plurality of AI/ML models associated with the AI/ML functionality is applicable; an indication of an event that the AI/ML functionality is applicable but requires a configuration for inference in the case that at least one of the plurality of AI/ML models associated with the AI/ML functionality is applicable but requires a configuration for inference; or an indication of an event that the AI/ML functionality is not applicable in the case that none of the plurality of AI/ML models associated with the AI/ML functionality is applicable.
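At the third monitoring level, the per-model applicability outcomes map onto one of three functionality-level events. The mapping above can be sketched as follows; the state and event labels are illustrative strings, not signalling values from the disclosure:

```python
def functionality_event(model_applicability):
    """Map per-model applicability at the third monitoring level to the
    functionality-level event reported to the base station.

    model_applicability: dict of model ID -> one of
      "applicable", "needs_inference_config", "not_applicable".
    All labels are illustrative assumptions.
    """
    states = set(model_applicability.values())
    if "applicable" in states:
        # At least one associated model is directly usable.
        return "functionality_applicable"
    if "needs_inference_config" in states:
        # Usable only after a configuration for inference is provided.
        return "functionality_applicable_needs_config"
    # No associated model is applicable.
    return "functionality_not_applicable"
```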
- a base station transmits, to a user equipment (UE) , a parameter configuration for monitoring performance of an artificial intelligence/machine learning (AI/ML) functionality or performance of an AI/ML model.
- the base station receives, from the UE, a result of the monitoring.
- a framework of performance monitoring for AI/ML functionality or AI/ML model may be designed.
- the parameter configuration may include at least one of the following: a condition for triggering the monitoring; a report quantity for specifying a content type of the result; a time requirement for reporting the result; an indication of a level of the monitoring in the case that the parameter configuration is associated with the AI/ML functionality; a performance metric target associated with the AI/ML model in the case that the parameter configuration is associated with the AI/ML model; at least one performance metric target associated with at least one AI/ML model to be assessed in the case that the parameter configuration is associated with the AI/ML functionality, wherein the at least one AI/ML model to be assessed is associated with the AI/ML functionality; a resource configuration for RS transmissions for the monitoring; a time duration for RS transmissions for the monitoring; the number of performance metric samples needed for the monitoring in the case that the parameter configuration is associated with the AI/ML model; or at least one number of performance metric samples needed for the monitoring corresponding to at least one AI/ML model to be assessed in the case that the parameter configuration is associated with the AI/ML functionality.
- the condition may include an occurrence of an event associated with potential degradation of the performance of the AI/ML functionality or the performance of the AI/ML model.
- Some implementations of the method and apparatuses described herein may further include: receiving, from the UE, an indication that the condition is fulfilled.
- the condition may include a periodicity and an offset.
- Some implementations of the method and apparatuses described herein may further include: starting transmitting reference signals (RSs) for the monitoring at a time instance associated with the periodicity and the offset.
- RSs reference signals
- Some implementations of the method and apparatuses described herein may further include: transmitting, to the UE, an indication for triggering the monitoring.
- the content type is one of the following: an event type associated with the performance of the AI/ML functionality or the performance of the AI/ML model; a performance metric type; or none.
- the result of the monitoring may include: at least one performance metric of the AI/ML model.
- the parameter configuration may include a performance metric target associated with the AI/ML model
- the result of the monitoring may include one of the following: an indication of an event that the AI/ML model is not applicable; or an indication of an event that the AI/ML model is applicable.
- the performance of the AI/ML model is monitored
- the parameter configuration may include the performance metric target associated with the AI/ML model
- the parameter configuration further may include one of the following: a maximum ratio of sample failures; a maximum number of sample failures; a maximum number of consecutive sample failures; or a maximum duration of model failure.
- the performance of the AI/ML functionality is monitored
- the parameter configuration may include the at least one performance metric target associated with the at least one AI/ML model to be assessed, and the parameter configuration further may include one of the following: at least one maximum ratio of sample failures for the at least one AI/ML model to be assessed; at least one maximum number of sample failures for the at least one AI/ML model to be assessed; at least one maximum number of consecutive sample failures for the at least one AI/ML model to be assessed; or at least one maximum duration of model failure for the at least one AI/ML model to be assessed.
- the indication of the level is indicative of one of the following: a first monitoring level in which one AI/ML model associated with the AI/ML functionality is to be assessed during the monitoring; a second monitoring level in which a default AI/ML model and an AI/ML model associated with the AI/ML functionality are to be assessed during the monitoring; or a third monitoring level in which a plurality of AI/ML models associated with the AI/ML functionality are to be assessed during the monitoring.
- the indication of the level is indicative of the third monitoring level.
- Some implementations of the method and apparatuses described herein may further include: transmitting, to the UE, one of the following: identifications of the plurality of AI/ML models; the number of AI/ML models associated with the AI/ML functionality that are to be assessed; or a maximum number of AI/ML models associated with the AI/ML functionality that are to be assessed. The result of the monitoring is based on the performances of the plurality of AI/ML models.
- the indication of the level is indicative of the first monitoring level
- the content type is a performance metric type
- the result of the monitoring may include: at least one performance metric of the AI/ML model associated with the AI/ML functionality.
- the indication of the level is indicative of the second monitoring level
- the content type is a performance metric type
- the result of the monitoring may include: at least one performance metric of the default AI/ML model and at least one performance metric of the AI/ML model associated with the AI/ML functionality.
- the indication of the level is indicative of the third monitoring level
- the content type is a performance metric type
- the result of the monitoring may include: a plurality of performance metrics of the plurality of AI/ML models associated with the AI/ML functionality.
- the indication of the level is indicative of the first monitoring level
- the content type is an event type associated with the performance of the AI/ML functionality
- the result of the monitoring may include one of the following: an indication of an event that the AI/ML functionality is applicable; or an indication of an event that the AI/ML functionality is not applicable.
- the indication of the level is indicative of the second monitoring level
- the content type is an event type associated with the performance of the AI/ML functionality
- the result of the monitoring may include one of the following: an indication of an event that the AI/ML functionality is applicable; an indication of an event that the AI/ML functionality is applicable with the default AI/ML model; or an indication of an event that the AI/ML functionality is not applicable.
- the indication of the level is indicative of the third monitoring level
- the content type is an event type associated with the performance of the AI/ML functionality
- the result of the monitoring may include one of the following: an indication of an event that the AI/ML functionality is applicable; an indication of an event that the AI/ML functionality is applicable but requires a configuration for inference; or an indication of an event that the AI/ML functionality is not applicable.
- FIG. 1 illustrates an example of a wireless communications system that supports performance monitoring for AI/ML functionality or AI/ML model in accordance with aspects of the present disclosure.
- FIG. 2 illustrates an example signaling chart of an example process that supports performance monitoring for AI/ML functionality or AI/ML model in accordance with aspects of the present disclosure.
- FIG. 3 illustrates an example signaling chart of a general procedure of performance monitoring of an AI/ML functionality or an AI/ML model in accordance with aspects of the present disclosure.
- FIG. 4 illustrates an example signaling chart of a specific procedure of performance monitoring of an AI/ML functionality in accordance with aspects of the present disclosure.
- FIG. 5 illustrates an example of a device that supports performance monitoring for AI/ML functionality or AI/ML model in accordance with aspects of the present disclosure.
- FIG. 6 illustrates an example of a processor that supports performance monitoring for AI/ML functionality or AI/ML model in accordance with aspects of the present disclosure.
- FIGS. 7 through 10 illustrate flowcharts of methods that support performance monitoring for AI/ML functionality or AI/ML model in accordance with aspects of the present disclosure.
- references in the present disclosure to “one embodiment, ” “an example embodiment, ” “an embodiment, ” “some embodiments, ” and the like indicate that the embodiment(s) described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment(s). Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
- terms such as “first” and “second” may be used herein to describe various elements, but these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of embodiments. As used herein, the term “and/or” includes any and all combinations of one or more of the listed terms.
- the term “includes” and its variants are to be read as open terms that mean “includes, but is not limited to. ”
- the term “based on” is to be read as “based at least in part on. ”
- the term “one embodiment” and “an embodiment” are to be read as “at least one embodiment. ”
- the term “another embodiment” is to be read as “at least one other embodiment. ”
- the use of an expression such as “A and/or B” can mean either “only A” or “only B” or “both A and B. ”
- Other definitions, explicit and implicit, may be included below.
- the term “communication network” refers to a network following any suitable communication standards, such as, 5G NR, long term evolution (LTE) , LTE-advanced (LTE-A) , wideband code division multiple access (WCDMA) , high-speed packet access (HSPA) , narrow band internet of things (NB-IoT) , and so on.
- LTE long term evolution
- LTE-A LTE-advanced
- WCDMA wideband code division multiple access
- HSPA high-speed packet access
- NB-IoT narrow band internet of things
- the communications between a terminal device and a network device in the communication network may be performed according to any suitable generation communication protocols, including but not limited to, the first generation (1G) , the second generation (2G) , 2.5G, 2.75G, the third generation (3G) , the fourth generation (4G) , 4.5G, the fifth generation (5G) communication protocols, and/or any other protocols either currently known or to be developed in the future.
- Embodiments of the present disclosure may be applied in various communication systems. Given the rapid development in communications, there will also be future type communication technologies and systems in which the present disclosure may be embodied. It should not be seen as limiting the scope of the present disclosure to only the aforementioned systems.
- the term “network device” generally refers to a node in a communication network via which a terminal device can access the communication network and receive services therefrom.
- the network device may refer to a base station (BS) or an access point (AP) , for example, a node B (NodeB or NB) , a radio access network (RAN) node, an evolved NodeB (eNodeB or eNB) , an NR NB (also referred to as a gNB) , a remote radio unit (RRU) , a radio header (RH) , an infrastructure device for a V2X (vehicle-to-everything) communication, a transmission and reception point (TRP) , a reception point (RP) , a remote radio head (RRH) , a relay, an integrated access and backhaul (IAB) node, a low power node such as a femto BS, a pico BS, and so forth, depending on the BS
- terminal device generally refers to any end device that may be capable of wireless communications.
- a terminal device may also be referred to as a communication device, a user equipment (UE) , an end user device, a subscriber station (SS) , an unmanned aerial vehicle (UAV) , a portable subscriber station, a mobile station (MS) , or an access terminal (AT) .
- UE user equipment
- SS subscriber station
- UAV unmanned aerial vehicle
- MS mobile station
- AT access terminal
- the terminal device may include, but is not limited to, a mobile phone, a cellular phone, a smart phone, a voice over IP (VoIP) phone, a wireless local loop phone, a tablet, a wearable terminal device, a personal digital assistant (PDA) , a portable computer, a desktop computer, an image capture terminal device such as a digital camera, a gaming terminal device, a music storage and playback appliance, a vehicle-mounted wireless terminal device, a wireless endpoint, a mobile station, laptop-embedded equipment (LEE) , laptop-mounted equipment (LME) , a USB dongle, a smart device, wireless customer-premises equipment (CPE) , an internet of things (loT) device, a watch or other wearable, a head-mounted display (HMD) , a vehicle, a drone, a medical device (for example, a remote surgery device) , an industrial device (for example, a robot and/or other wireless devices operating in an industrial and/or an automated processing chain
- model-ID-based LCM and functionality-based LCM are two types of management methods with different management units for a UE-side model.
- the LCM procedure is currently studied for the case that an AI/ML model has a model ID with associated information and/or for the case that a given functionality is provided by some AI/ML operations.
- the network indicates activation/deactivation/fallback/switching of AI/ML functionality via 3GPP signalling (e.g., RRC, MAC-CE, DCI) .
- Models may not be identified at the network, and UE may perform model-level LCM. Whether and how much awareness/interaction the network should have about model-level LCM requires further study.
- For functionality identification, there may be either one or more than one Functionality defined within an AI/ML-enabled Feature, whereby an AI/ML-enabled Feature refers to a Feature where AI/ML may be used.
- the UE may have one AI/ML model for the functionality, or the UE may have multiple AI/ML models for the functionality.
- functionality refers to an AI/ML-enabled Feature/FG enabled by configuration (s) , where configuration (s) is (are) supported based on conditions indicated by UE capability.
- functionality-based LCM operates at least based on one configuration of AI/ML-enabled Feature/FG or specific configurations of an AI/ML-enabled Feature/FG.
- In model-ID-based LCM, models are identified at the network, and the network or the UE may activate/deactivate/select/switch individual AI/ML models via model ID.
- model-ID-based LCM operates based on identified models, where a model may be associated with specific configurations/conditions associated with UE capability of an AI/ML-enabled Feature/FG and additional conditions (e.g., scenarios, sites, and datasets) as determined/identified between UE-side and network-side.
- For functionality/model-ID-based LCM, once functionalities/models are identified, the same or similar procedures may be used for their activation, deactivation, switching, fallback, and monitoring. A model ID, if needed, can be used in a Functionality (defined in functionality-based LCM) for LCM operations.
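- The LCM procedure above (identification, then activation/deactivation/switching/fallback applied to either management unit) can be sketched as follows; this is a hypothetical illustration, and the class and method names are not from the disclosure:

```python
from dataclasses import dataclass

# Hypothetical sketch: an "entity" is either a Functionality
# (functionality-based LCM) or an identified model (model-ID-based LCM);
# the same activation/deactivation/switching/fallback procedures apply.
@dataclass
class LcmEntity:
    entity_id: str   # functionality ID or model ID
    active: bool = False

class LifecycleManager:
    def __init__(self) -> None:
        self.entities: dict[str, LcmEntity] = {}
        self.current: str | None = None

    def identify(self, entity_id: str) -> None:
        # Identification precedes any LCM operation on the entity.
        self.entities[entity_id] = LcmEntity(entity_id)

    def activate(self, entity_id: str) -> None:
        self.entities[entity_id].active = True
        self.current = entity_id

    def deactivate(self, entity_id: str) -> None:
        self.entities[entity_id].active = False
        if self.current == entity_id:
            self.current = None

    def switch(self, new_id: str) -> None:
        # Switching: deactivate the current entity, activate the new one.
        if self.current is not None:
            self.deactivate(self.current)
        self.activate(new_id)

    def fallback(self) -> None:
        # Fallback: leave AI/ML operation for the non-AI/ML baseline.
        if self.current is not None:
            self.deactivate(self.current)
```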
- model identification may be categorized in Type A and Type B including Type B1 and Type B2.
- In Type A model identification, a model is identified to the network (if applicable) and the UE (if applicable) without over-the-air signalling.
- the model may be assigned with a model ID during the model identification, which may be referred/used in over-the-air signalling after model identification.
- In Type B model identification, a model is identified via over-the-air signalling. Specifically, in Type B1 model identification, model identification is initiated by the UE, and the network assists the remaining steps (if any) of the model identification. The model may be assigned with a model ID during the model identification. In Type B2 model identification, model identification is initiated by the network, and the UE responds (if applicable) for the remaining steps (if any) of the model identification. The model may be assigned with a model ID during the model identification.
- One example of Type B1 and Type B2 is model identification performed as part of model transfer from the network to the UE.
- Another example is model identification with data collection related configuration (s) and/or indication (s) and/or dataset transfer.
- the UE can indicate supported AI/ML model IDs for a given AI/ML-enabled Feature/FG in a UE capability report as starting point.
- Model ID may or may not be globally unique, and different types of model IDs may be created for a single model for various LCM purposes.
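- The model identification categories can be summarized in a small sketch; the enum values and the registry are illustrative assumptions, including the point that a model ID need not be globally unique:

```python
from enum import Enum

class ModelIdType(Enum):
    # Categories from the description above.
    TYPE_A = "identified without over-the-air signalling"
    TYPE_B1 = "over-the-air, initiated by the UE"
    TYPE_B2 = "over-the-air, initiated by the network"

def assign_model_id(registry: dict, id_type: ModelIdType) -> int:
    # A model ID may or may not be globally unique; here it is unique
    # only within this illustrative registry.
    model_id = len(registry) + 1
    registry[model_id] = id_type
    return model_id
```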
- embodiments of the present disclosure provide solutions for performance monitoring for AI/ML functionality or AI/ML model. Aspects of the present disclosure are described in the context of a wireless communications system.
- FIG. 1 illustrates an example of a wireless communications system 100 that supports performance monitoring for AI/ML functionality or AI/ML model in accordance with aspects of the present disclosure.
- the wireless communications system 100 may include one or more network entities 102 (also referred to as network equipment (NE) ) , one or more UEs 104, a core network 106, and a packet data network 108.
- the wireless communications system 100 may support various radio access technologies.
- the wireless communications system 100 may be a 4G network, such as an LTE network or an LTE-Advanced (LTE-A) network.
- LTE-A LTE-Advanced
- the wireless communications system 100 may be a 5G network, such as an NR network.
- the wireless communications system 100 may be a combination of a 4G network and a 5G network, or other suitable radio access technology including Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi) , IEEE 802.16 (WiMAX) , IEEE 802.20.
- IEEE Institute of Electrical and Electronics Engineers
- Wi-Fi IEEE 802.11
- WiMAX IEEE 802.16
- IEEE 802.20
- The wireless communications system 100 may support radio access technologies beyond 5G. Additionally, the wireless communications system 100 may support technologies, such as time division multiple access (TDMA) , frequency division multiple access (FDMA) , or code division multiple access (CDMA) , etc.
- TDMA time division multiple access
- FDMA frequency division multiple access
- CDMA code division multiple access
- the one or more network entities 102 may be dispersed throughout a geographic region to form the wireless communications system 100.
- One or more of the network entities 102 described herein may be or include or may be referred to as a network node, a base station, a network element, a radio access network (RAN) , a base transceiver station, an access point, a NodeB, an eNodeB (eNB) , a next-generation NodeB (gNB) , or other suitable terminology.
- a network entity 102 and a UE 104 may communicate via a communication link 110, which may be a wireless or wired connection.
- a network entity 102 and a UE 104 may perform wireless communication (e.g., receive signaling, transmit signaling) over a Uu interface.
- a network entity 102 may provide a geographic coverage area 112 for which the network entity 102 may support services (e.g., voice, video, packet data, messaging, broadcast, etc. ) for one or more UEs 104 within the geographic coverage area 112.
- a network entity 102 and a UE 104 may support wireless communication of signals related to services (e.g., voice, video, packet data, messaging, broadcast, etc. ) according to one or multiple radio access technologies.
- a network entity 102 may be moveable, for example, a satellite associated with a non-terrestrial network.
- different geographic coverage areas 112 associated with the same or different radio access technologies may overlap, but the different geographic coverage areas 112 may be associated with different network entities 102.
- Information and signals described herein may be represented using any of a variety of different technologies and techniques.
- data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
- a UE 104 may also be able to support wireless communication directly with other UEs 104 over a communication link 114.
- a UE 104 may support wireless communication directly with another UE 104 over a device-to-device (D2D) communication link.
- D2D device-to-device
- the communication link 114 may be referred to as a sidelink.
- a UE 104 may support wireless communication directly with another UE 104 over a PC5 interface.
- a network entity 102 may support communications with the core network 106, or with another network entity 102, or both.
- a network entity 102 may interface with the core network 106 through one or more backhaul links 116 (e.g., via an S1, N2, N3, or another network interface) .
- the network entities 102 may communicate with each other over the backhaul links 116 (e.g., via an X2, Xn, or another network interface) .
- the network entities 102 may communicate with each other directly (e.g., between the network entities 102) .
- the network entities 102 may communicate with each other indirectly (e.g., via the core network 106) .
- one or more network entities 102 may include subcomponents, such as an access network entity, which may be an example of an access node controller (ANC) .
- An ANC may communicate with the one or more UEs 104 through one or more other access network transmission entities, which may be referred to as radio heads, smart radio heads, or transmission-reception points (TRPs) .
- TRPs transmission-reception points
- a network entity 102 may be configured in a disaggregated architecture, which may be configured to utilize a protocol stack physically or logically distributed among two or more network entities 102, such as an integrated access backhaul (IAB) network, an open RAN (O-RAN) (e.g., a network configuration sponsored by the O-RAN Alliance) , or a virtualized RAN (vRAN) (e.g., a cloud RAN (C-RAN) ) .
- IAB integrated access backhaul
- O-RAN open RAN
- vRAN virtualized RAN
- C-RAN cloud RAN
- a network entity 102 may include one or more of a central unit (CU) , a distributed unit (DU) , a radio unit (RU) , a RAN Intelligent Controller (RIC) (e.g., a Near-Real Time RIC (Near-RT RIC) , a Non-Real Time RIC (Non-RT RIC) ) , a Service Management and Orchestration (SMO) system, or any combination thereof.
- CU central unit
- DU distributed unit
- RU radio unit
- RIC RAN Intelligent Controller
- RIC e.g., a Near-Real Time RIC (Near-RT RIC) , a Non-Real Time RIC (Non-RT RIC)
- SMO Service Management and Orchestration
- An RU may also be referred to as a radio head, a smart radio head, a remote radio head (RRH) , a remote radio unit (RRU) , or a transmission reception point (TRP) .
- One or more components of the network entities 102 in a disaggregated RAN architecture may be co-located, or one or more components of the network entities 102 may be located in distributed locations (e.g., separate physical locations) .
- one or more network entities 102 of a disaggregated RAN architecture may be implemented as virtual units (e.g., a virtual CU (VCU) , a virtual DU (VDU) , a virtual RU (VRU) ) .
- VCU virtual CU
- VDU virtual DU
- VRU virtual RU
- Split of functionality between a CU, a DU, and an RU may be flexible and may support different functionalities depending upon which functions (e.g., network layer functions, protocol layer functions, baseband functions, radio frequency functions, and any combinations thereof) are performed at a CU, a DU, or an RU.
- functions e.g., network layer functions, protocol layer functions, baseband functions, radio frequency functions, and any combinations thereof
- a functional split of a protocol stack may be employed between a CU and a DU such that the CU may support one or more layers of the protocol stack and the DU may support one or more different layers of the protocol stack.
- the CU may host upper protocol layer (e.g., a layer 3 (L3) , a layer 2 (L2) ) functionality and signaling (e.g., Radio Resource Control (RRC) , service data adaption protocol (SDAP) , Packet Data Convergence Protocol (PDCP) ) .
- RRC Radio Resource Control
- SDAP service data adaption protocol
- PDCP Packet Data Convergence Protocol
- the CU may be connected to one or more DUs or RUs, and the one or more DUs or RUs may host lower protocol layers, such as a layer 1 (L1) (e.g., physical (PHY) layer) or an L2 (e.g., radio link control (RLC) layer, medium access control (MAC) layer) functionality and signaling, and may each be at least partially controlled by the CU.
- L1 e.g., physical (PHY) layer
- L2 e.g., radio link control (RLC) layer, medium access control (MAC) layer
- a functional split of the protocol stack may be employed between a DU and an RU such that the DU may support one or more layers of the protocol stack and the RU may support one or more different layers of the protocol stack.
- the DU may support one or multiple different cells (e.g., via one or more RUs) .
- a functional split between a CU and a DU, or between a DU and an RU may be within a protocol layer (e.g., some functions for a protocol layer may be performed by one of a CU, a DU, or an RU, while other functions of the protocol layer are performed by a different one of the CU, the DU, or the RU) .
- a CU may be functionally split further into CU control plane (CU-CP) and CU user plane (CU-UP) functions.
- a CU may be connected to one or more DUs via a midhaul communication link (e.g., F1, F1-c, F1-u)
- a DU may be connected to one or more RUs via a fronthaul communication link (e.g., open fronthaul (FH) interface)
- FH open fronthaul
- a midhaul communication link or a fronthaul communication link may be implemented in accordance with an interface (e.g., a channel) between layers of a protocol stack supported by respective network entities 102 that are in communication via such communication links.
- the core network 106 may support user authentication, access authorization, tracking, connectivity, and other access, routing, or mobility functions.
- the core network 106 may be an evolved packet core (EPC) , or a 5G core (5GC) , which may include a control plane entity that manages access and mobility (e.g., a mobility management entity (MME) , an access and mobility management functions (AMF) ) and a user plane entity that routes packets or interconnects to external networks (e.g., a serving gateway (S-GW) , a Packet Data Network (PDN) gateway (P-GW) , or a user plane function (UPF) ) .
- EPC evolved packet core
- 5GC 5G core
- MME mobility management entity
- AMF access and mobility management functions
- S-GW serving gateway
- PDN gateway Packet Data Network gateway
- UPF user plane function
- control plane entity may manage non-access stratum (NAS) functions, such as mobility, authentication, and bearer management (e.g., data bearers, signal bearers, etc. ) for the one or more UEs 104 served by the one or more network entities 102 associated with the core network 106.
- NAS non-access stratum
- the core network 106 may further include a location server, e.g., a location management function (LMF) .
- the LMF may receive measurements and assistance information from the network entity 102 and the UE 104 via the AMF to compute the position of the UE 104.
- an NR positioning protocol A (NRPPa) was introduced to carry the positioning information between the RAN and the LMF over the next generation control plane interface (NG-C) .
- the LMF and the network entity 102 may communicate using the NRPPa defined in 3GPP TS 38.455, where NRPPa messages are communicated between the network entity 102 and the LMF via an AMF.
- the LMF and the UE 104 may communicate using the LTE Positioning Protocol (LPP) defined in 3GPP TS 36.355, where LPP messages are communicated between the UE 104 and the LMF via a serving AMF and a serving network entity for the UE.
- LPP messages may be communicated between the LMF and the AMF using hypertext transfer protocol (HTTP) -based service operations, and LPP messages may be communicated between the AMF and the UE using a 5G non-access stratum (NAS) protocol.
- HTTP hypertext transfer protocol
- NAS 5G non-access stratum
- the LPP protocol may be used to support positioning of the UE using UE-assisted and/or UE-based positioning methods, such as assisted GNSS (a-GNSS) , Real Time Kinematics (RTK) , Wireless Local Area Network (WLAN) , observed time difference of arrival (OTDOA) , and/or Enhanced Cell Identity (ECID) .
- assisted GNSS a-GNSS
- RTK Real Time Kinematics
- WLAN Wireless Local Area Network
- OTDOA observed time difference of arrival
- ECID Enhanced Cell Identity
- the NRPPa protocol may be used to support positioning of UE using network-based positioning methods, such as ECID (when used with measurements obtained by the network entity 102) , and/or the NRPPa protocol may be used by the LMF to obtain location-related information from the network entity 102, such as parameters defining Positioning Reference Signal (PRS) transmissions from the network entity 102 and the location of the network entity 102, to support OTDOA and ECID.
- network-based positioning methods such as ECID (when used with measurements obtained by the network entity 102)
- PRS Positioning Reference Signal
- the core network 106 may communicate with the packet data network 108 over one or more backhaul links 116 (e.g., via an S1, N2, N3, or another network interface) .
- the packet data network 108 may include an application server 118.
- one or more UEs 104 may communicate with the application server 118.
- a UE 104 may establish a session (e.g., a protocol data unit (PDU) session, or the like) with the core network 106 via a network entity 102.
- the core network 106 may route traffic (e.g., control information, data, and the like) between the UE 104 and the application server 118 using the established session (e.g., the established PDU session) .
- the PDU session may be an example of a logical connection between the UE 104 and the core network 106 (e.g., one or more network functions of the core network 106) .
- the network entities 102 and the UEs 104 may use resources of the wireless communications system 100 (e.g., time resources (e.g., symbols, slots, subframes, frames, or the like) or frequency resources (e.g., subcarriers, carriers) ) to perform various operations (e.g., wireless communications) .
- the network entities 102 and the UEs 104 may support different resource structures.
- the network entities 102 and the UEs 104 may support different frame structures.
- the network entities 102 and the UEs 104 may support a single frame structure.
- the network entities 102 and the UEs 104 may support various frame structures (i.e., multiple frame structures) .
- the network entities 102 and the UEs 104 may support various frame structures based on one or more numerologies.
- One or more numerologies may be supported in the wireless communications system 100, and a numerology may include a subcarrier spacing and a cyclic prefix.
- a first numerology associated with a first subcarrier spacing (e.g., 15 kHz) and a normal cyclic prefix may utilize one slot per subframe.
- a time interval of a resource may be organized according to frames (also referred to as radio frames) .
- Each frame may have a duration, for example, a 10 millisecond (ms) duration.
- each frame may include multiple subframes.
- each frame may include 10 subframes, and each subframe may have a duration, for example, a 1 ms duration.
- each frame may have the same duration.
- each subframe of a frame may have the same duration.
- a time interval of a resource may be organized according to slots.
- a subframe may include a number (e.g., quantity) of slots.
- the number of slots in each subframe may also depend on the one or more numerologies supported in the wireless communications system 100.
- Each slot may include a number (e.g., quantity) of symbols (e.g., OFDM symbols) .
- the number (e.g., quantity) of slots for a subframe may depend on a numerology.
- For a normal cyclic prefix, a slot may include 14 symbols.
- For an extended cyclic prefix (e.g., applicable for 60 kHz subcarrier spacing) , a slot may include 12 symbols.
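- The relationship between numerology, slots, and symbols described above can be expressed as a short sketch, using the standard NR numerology index mu (subcarrier spacing of 15 · 2^mu kHz); the function names are illustrative:

```python
def slots_per_subframe(mu: int) -> int:
    # NR numerology mu gives a subcarrier spacing of 15 * 2**mu kHz
    # and 2**mu slots per 1 ms subframe (mu = 0 -> one slot, as above).
    return 2 ** mu

def symbols_per_slot(extended_cp: bool = False) -> int:
    # 14 symbols per slot with a normal cyclic prefix, 12 with an
    # extended cyclic prefix.
    return 12 if extended_cp else 14

def slots_per_frame(mu: int) -> int:
    # A 10 ms frame contains 10 subframes of 1 ms each.
    return 10 * slots_per_subframe(mu)
```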
- an electromagnetic (EM) spectrum may be split, based on frequency or wavelength, into various classes, frequency bands, frequency channels, etc.
- the wireless communications system 100 may support one or multiple operating frequency bands, such as frequency range designations FR1 (410 MHz –7.125 GHz) , FR2 (24.25 GHz –52.6 GHz) , FR3 (7.125 GHz –24.25 GHz) , FR4 (52.6 GHz –114.25 GHz) , FR4a or FR4-1 (52.6 GHz –71 GHz) , and FR5 (114.25 GHz –300 GHz) .
- FR1 410 MHz –7.125 GHz
- FR2 24.25 GHz –52.6 GHz
- FR3 7.125 GHz –24.25 GHz
- FR4 (52.6 GHz –114.25 GHz)
- FR4a or FR4-1 52.6 GHz –71 GHz
- FR5 114.25 GHz –300 GHz
- the network entities 102 and the UEs 104 may perform wireless communications over one or more of the operating frequency bands.
- FR1 may be used by the network entities 102 and the UEs 104, among other equipment or devices for cellular communications traffic (e.g., control information, data) .
- FR2 may be used by the network entities 102 and the UEs 104, among other equipment or devices for short-range, high data rate capabilities.
- FR1 may be associated with one or multiple numerologies (e.g., at least three numerologies) .
- FR2 may be associated with one or multiple numerologies (e.g., at least two numerologies) .
- FIG. 2 illustrates an example signaling chart of an example process 200 that supports performance monitoring for AI/ML functionality or AI/ML model in accordance with aspects of the present disclosure.
- the process 200 will be described with reference to FIG. 1, and the process 200 may involve a UE 104 and a network entity 102 as shown in FIG. 1.
- the network entity 102 may be implemented as a base station.
- process 200 may further include additional blocks not shown and/or omit some shown blocks, and the scope of the present disclosure is not limited in this regard.
- the base station 102 transmits 201, to the UE 104, a parameter configuration 202 for monitoring performance of an AI/ML functionality or performance of an AI/ML model. Accordingly, the UE 104 receives 203 the parameter configuration 202 from the base station 102. The UE 104 may monitor the performance of the AI/ML functionality or the performance of the AI/ML model based on the parameter configuration 202. The UE 104 then transmits 204 a result 205 of the monitoring to the base station 102. Accordingly, the base station 102 receives 206 the result 205 of the monitoring from the UE 104. In this way, a framework of performance monitoring for AI/ML functionality or AI/ML model may be designed.
- the parameter configuration 202 may include various information for performing the performance monitoring. Based on the parameter configuration 202, information associated with performance monitoring of AI/ML functionalities/models may be aligned between the base station 102 and the UE 104.
- the parameter configuration 202 may include a condition for triggering the monitoring.
- the monitoring may be triggered or started or resumed based on the condition.
- the condition may include an occurrence of an event associated with potential degradation of the performance of the AI/ML functionality or the performance of the AI/ML model.
- the event may be that a strength (e.g., signal to interference plus noise ratio (SINR) , reference signal receiving power (RSRP) , or reference signal received quality (RSRQ) ) of reference signals (RSs) received by the UE 104 from the base station 102 is lower than a strength threshold.
- SINR signal to interference plus noise ratio
- RSRP reference signal receiving power
- RSRQ reference signal received quality
- the event may be that the UE 104 is moving to the cell edge or moving to a new cell.
- the UE 104 is aware of the event occurrence. If the event occurs for a time period reaching a duration threshold, the UE 104 may transmit an indication that the condition is fulfilled to the base station 102.
- the duration threshold may be configured by the base station 102 or predefined.
- the performance monitoring may thus be triggered or started or resumed by the UE 104.
- the condition may include a periodicity and an offset.
- the base station 102 may start transmitting RSs for the monitoring at a time instance associated with the periodicity and the offset. Accordingly, the UE 104 may start monitoring the performance of the AI/ML functionality/model at a time instance associated with the periodicity and the offset. The performance monitoring may thus be started periodically based on the periodicity and the offset.
- the base station 102 may transmit an indication for triggering the monitoring to the UE 104.
- the monitoring may be triggered or started or resumed by the base station 102 via a downlink (DL) signaling, e.g., a DCI signaling, a MAC CE signaling or a RRC signaling.
- DL downlink
- the parameter configuration 202 may include information of an area in which the UE 104 is located. In some examples, the parameter configuration 202 may include information of a cell, a site, a TRP or a geographical area in which the UE 104 is located. For example, the RS sequence, the RS ID, the cell ID or the area ID may be considered.
- the parameter configuration 202 may include a report quantity for specifying a content type of the result 205 to be transmitted.
- the content type may be an event type associated with the performance of the AI/ML functionality or the performance of the AI/ML model.
- the content type may be a performance metric type.
- the content of the result 205 may include the performance metric (s) of the AI/ML functionality/model.
- the content type may be none. In other words, nothing would be reported from the UE 104 to the base station 102 for the functionality monitoring. In this case, the UE 104 may make decisions based on the performance monitoring outcome.
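- The three report-quantity content types (an event type, the performance metric(s), or none) can be illustrated with a minimal sketch; the names are assumptions, not signalling defined by the disclosure:

```python
from enum import Enum, auto

class ReportQuantity(Enum):
    EVENT = auto()   # an event type tied to functionality/model performance
    METRIC = auto()  # the computed performance metric(s)
    NONE = auto()    # nothing reported; the UE acts on the outcome itself

def build_report(quantity: ReportQuantity, event=None, metrics=None):
    # Shape the monitoring result according to the configured report quantity.
    if quantity is ReportQuantity.EVENT:
        return {"event": event}
    if quantity is ReportQuantity.METRIC:
        return {"metrics": metrics}
    return None
```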
- the parameter configuration 202 may include a resource configuration for RS transmissions for the monitoring.
- the UE 104 may generate a dataset for the AI/ML functionality/model by monitoring RSs based on the resource configuration and calculate the result 205 of the performance monitoring based on the dataset.
- the parameter configuration 202 may include a time duration for RS transmissions for the monitoring.
- the dataset for the AI/ML functionality/model may be generated by monitoring RSs during the time duration.
- the parameter configuration 202 may include the number (s) of samples needed for performance monitoring.
- more than one performance metric sample may be obtained so as to improve the reliability of the performance monitoring.
- the number (s) of the performance metric samples for the performance monitoring may be indicated by the base station 102 to the UE 104.
- the parameter configuration 202 may include the number of performance metric samples needed for the monitoring.
- when the parameter configuration 202 is associated with the AI/ML functionality, at least one AI/ML model associated with the AI/ML functionality may be assessed during the performance monitoring, and the parameter configuration 202 may include at least one number of performance metric samples needed for the monitoring corresponding to the at least one AI/ML model to be assessed.
- the result 205 may be transmitted based on the parameter configuration 202.
- the parameter configuration 202 may include report configurations for the result of the monitoring.
- the parameter configuration 202 may include a time requirement for reporting the result 205.
- the time requirement may indicate whether the report of the performance monitoring has a tight or relaxed time requirement.
- a value of 1 means a tight time requirement and a value of 0 means a relaxed time requirement.
- the parameter configuration 202 may include performance metric target (s) for performance metric samples obtained during the performance monitoring.
- the parameter configuration 202 may include a performance metric target associated with the AI/ML model.
- a sample failure of the AI/ML model may be determined when a performance metric of the AI/ML model for a performance metric sample is worse than the performance metric target.
- the parameter configuration 202 may include at least one performance metric target associated with at least one AI/ML model to be assessed. The at least one AI/ML model to be assessed is associated with the AI/ML functionality.
- a sample failure of a respective AI/ML model among the at least one AI/ML model may be determined when a performance metric for a performance metric sample of the respective AI/ML model is worse than a performance metric target associated with the respective AI/ML model.
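- The sample-failure rule above can be sketched as a simple counter; whether a higher metric value is better depends on the metric type, so that is left as a parameter (an assumption, not specified in the disclosure):

```python
def count_sample_failures(metrics: list[float], target: float,
                          higher_is_better: bool = True) -> int:
    # A sample failure occurs when the performance metric for one
    # performance metric sample is worse than the configured target.
    if higher_is_better:
        return sum(1 for m in metrics if m < target)
    return sum(1 for m in metrics if m > target)
```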
- the parameter configuration 202 may include an indication of a level of the monitoring.
- the performance of the AI/ML functionality may be monitored by monitoring at least one performance of at least one AI/ML model to be assessed among one or more AI/ML models associated with the AI/ML functionality.
- the indication of the level of the monitoring may be indicative of one among a first monitoring level, a second monitoring level and a third monitoring level.
- In the first monitoring level, one AI/ML model associated with the AI/ML functionality is to be assessed during the monitoring.
- the performance of one model associated with the AI/ML functionality would be assessed and the monitoring result of the performance of the model would be regarded as the performance monitoring result of the AI/ML functionality.
- the model may be determined by the UE 104.
- In the second monitoring level, a default AI/ML model and an AI/ML model associated with the AI/ML functionality are to be assessed during the monitoring.
- when the second monitoring level is indicated for the performance monitoring of the AI/ML functionality, the performance of a default model and a specific model of the AI/ML functionality would be assessed, where the default model has general performance in conditions relevant to the AI/ML functionality and the specific model may be selected by the UE from the models associated with the AI/ML functionality.
- the default model should be identified between the base station 102 and the UE 104 through type A, type B1 or type B2 model identification. It should be understood that these monitoring levels are merely examples for illustrations. Other monitoring levels are also possible.
- the UE 104 may further receive, from the base station 102, one of the following: identifications of the plurality of AI/ML models, the number of AI/ML models associated with the AI/ML functionality that are to be assessed, or a maximum number of AI/ML models associated with the AI/ML functionality that are to be assessed.
- the result 205 of the monitoring is based on the performances of the plurality of AI/ML models.
- In the third monitoring level, the performances of candidate models belonging to the AI/ML functionality would be assessed, where the number or the maximum number of the candidate models or the combinations of candidate models should be aligned between the base station 102 and the UE 104.
- the UE 104 may report information associated with the performance monitoring of the AI/ML functionality/model through UE capability or UE assistance information (UAI) .
- the base station 102 may transmit the parameter configuration 202 based on the reported information.
- the base station 102 may transmit RSs to enable the UE 104 to generate a dataset for the performance monitoring.
- the dataset may include input data for an AI/ML model (e.g., the AI/ML model associated with the parameter configuration 202, or an AI/ML model belonging to the AI/ML functionality associated with the parameter configuration 202) and labeled data.
- the input data may be, e.g., beam measurement results, CSI measurement results, or UE measurement results for positioning (reference signal time difference (RSTD), RSRP, received signal code power (RSCP), etc.).
- the labeled data may be, e.g., identifiers and/or L1-RSRP of the top N beams, a CSI measurement result, or a UE position.
- the UE 104 may calculate the result 205 of the performance monitoring based on the dataset.
- example embodiments are illustrated in detail for the case where the parameter configuration 202 is associated with the AI/ML model and for the case where the parameter configuration 202 is associated with the AI/ML functionality, respectively.
- the UE 104 may monitor the performance of the AI/ML model. For example, the UE 104 may measure RSs from the base station 102 and generate a dataset based on the measured RSs. The dataset may include input data for the AI/ML model and corresponding labeled data. The UE 104 may calculate at least one inferencing result based on the input data and the AI/ML model, and determine at least one performance metric of the AI/ML model for at least one performance metric sample by a comparison of the at least one inferencing result and the corresponding labeled data. The result 205 of the monitoring may be based on the at least one performance metric.
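The per-sample metric computation described above can be sketched as follows. This is an illustrative sketch only, not part of the disclosed signaling: the inference function and the choice of metric (absolute error between the inferencing result and the labeled data) are assumptions, since the description does not fix a particular metric.

```python
# Hypothetical sketch of per-sample performance-metric computation at the UE.
# `model_infer` and the absolute-error metric are illustrative assumptions.

def per_sample_metrics(inputs, labels, model_infer):
    """Return one performance metric per performance metric sample by
    comparing each inferencing result with the corresponding labeled data."""
    metrics = []
    for x, label in zip(inputs, labels):
        prediction = model_infer(x)              # inferencing result
        metrics.append(abs(prediction - label))  # per-sample metric
    return metrics

# Example with a stand-in "model" that always predicts an RSRP of 10.0.
metrics = per_sample_metrics([1, 2, 3], [10.0, 10.5, 12.0], lambda x: 10.0)
```

The result 205 of the monitoring would then be derived from these per-sample metrics, for example by comparing each against a performance metric target.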
- the UE 104 may determine a sample failure of the AI/ML model by determining that a performance metric of the AI/ML model for a performance metric sample is worse than a performance metric target associated with the AI/ML model.
- the performance metric target may be configured to the UE 104 through e.g., the parameter configuration 202.
- a counter may be maintained at the UE 104 to count the number of times that the performance metric of a sample is worse than the performance metric target.
- the parameter configuration 202 may include a time duration for RS transmissions for the monitoring. Before the end of the configured time duration for RS transmissions for the monitoring, the UE 104 may transmit the result 205 of the monitoring to the base station 102 if the number of sample failures of the AI/ML model has reached a maximum number of sample failures. Alternatively, before the end of the configured time duration for RS transmissions for the monitoring, the UE 104 may transmit the result 205 of the monitoring to the base station 102 if the number of consecutive sample failures of the AI/ML model has reached a maximum number of consecutive sample failures.
- the UE 104 may transmit the result 205 of the monitoring to the base station 102 if a time duration with sample failures of the AI/ML model has reached a maximum duration of model failure.
- the parameter configuration 202 may include the number of performance metric samples needed for the monitoring. Even if the number of the obtained performance metric samples is less than the number of performance metric samples needed for the monitoring, the UE 104 may transmit the result 205 of the monitoring to the base station 102 based on the obtained performance metric samples if the number of sample failures of the AI/ML model has reached a maximum number of sample failures. Alternatively, even if the number of the obtained performance metric samples is less than the number of performance metric samples needed for the monitoring, the UE 104 may transmit the result 205 of the monitoring to the base station 102 based on the obtained performance metric samples if the number of consecutive sample failures of the AI/ML model has reached a maximum number of consecutive sample failures.
- the UE 104 may transmit the result 205 of the monitoring to the base station 102 based on the obtained performance metric samples if a time duration with sample failures of the AI/ML model has reached a maximum duration of model failure.
- the monitoring result may be reported once the UE 104 determines that the AI/ML model is not applicable, and the RS transmissions and performance monitoring may be skipped after the reporting of the monitoring result. In this way, the efficiency of the performance monitoring may be improved and the resource overhead may be reduced.
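The counter-based early-reporting behavior described above can be sketched as follows. The function and parameter names are assumptions for illustration; the point shown is that monitoring can end, and the result can be reported, as soon as a configured maximum number of sample failures or of consecutive sample failures is reached.

```python
# Illustrative sketch of early reporting: a counter tracks sample failures
# (metric worse than the target), and monitoring ends as soon as the
# maximum number of sample failures, or of consecutive sample failures,
# is reached, so remaining RS transmissions may be skipped.

def early_report_index(metric_samples, target, max_failures, max_consecutive):
    """Return the 1-based sample index at which the monitoring result may be
    reported early, or None if no early-report condition is met."""
    failures = consecutive = 0
    for i, metric in enumerate(metric_samples, start=1):
        if metric > target:          # sample failure: worse than the target
            failures += 1
            consecutive += 1
        else:
            consecutive = 0
        if failures >= max_failures or consecutive >= max_consecutive:
            return i
    return None
```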
- the UE 104 may transmit the result 205 of the monitoring to the base station 102.
- the UE 104 may transmit the result 205 of the monitoring to the base station 102 if a ratio of sample failures of the AI/ML model reaches a maximum ratio of sample failures.
- the parameter configuration 202 may include one of the following: the maximum ratio of sample failures, the maximum number of sample failures, the maximum number of consecutive sample failures, or the maximum duration of model failure.
- the UE 104 may transmit the at least one performance metric of the AI/ML model to the base station 102 if the content type of the result of the monitoring is a performance metric type.
- the UE 104 may transmit an indication of an event that the AI/ML model is applicable or an indication of an event that the AI/ML model is not applicable if the content type of the result of the monitoring is an event type.
- the UE 104 may transmit an indication of an event that the AI/ML model is not applicable if a ratio of sample failures of the AI/ML model reaches a maximum ratio of sample failures.
- the UE 104 may transmit an indication of an event that the AI/ML model is applicable if a ratio of sample failures of the AI/ML model does not reach a maximum ratio of sample failures.
- the ratio of sample failures of the AI/ML model may be based on the performance metrics obtained by the measurement of the RS transmitted in the time duration configured in the parameter configuration 202. Alternatively, the ratio of sample failures of the AI/ML model may be based on the obtained performance metrics whose number is the same as the number of performance metric samples needed for the monitoring configured in the parameter configuration 202.
- the maximum ratio of sample failures may be configured to the UE 104 through e.g., the parameter configuration 202.
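The ratio-based event decision above can be sketched as follows. The event strings and names are illustrative assumptions; the sketch only shows the comparison of the ratio of sample failures against the configured maximum ratio.

```python
# Hedged sketch of the event-type report: the model is declared not
# applicable when the ratio of sample failures reaches the configured
# maximum ratio of sample failures. Event strings are illustrative.

def ratio_event(metric_samples, target, max_failure_ratio):
    failures = sum(1 for m in metric_samples if m > target)
    ratio = failures / len(metric_samples)
    return "not applicable" if ratio >= max_failure_ratio else "applicable"
```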
- the UE 104 may transmit an indication of an event that the AI/ML model is not applicable if the number of sample failures of the AI/ML model before end of the time duration for RS transmissions for the monitoring reaches a maximum number of sample failures.
- the UE 104 may transmit an indication of an event that the AI/ML model is applicable if the number of sample failures of the AI/ML model in the time duration for RS transmissions for the monitoring does not reach the maximum number of sample failures.
- the UE 104 may report that the AI/ML model is not applicable once the number of sample failures reaches the maximum number of sample failures.
- the UE 104 may report that the AI/ML model is applicable.
- the maximum number of sample failures may be configured to the UE 104 through e.g., the parameter configuration 202.
- the UE 104 may transmit an indication of an event that the AI/ML model is not applicable if the number of sample failures of the AI/ML model among performance metric samples obtained for the monitoring reaches a maximum number of sample failures, wherein the number of the obtained performance metric samples is less than or equal to the number of performance metric samples needed for the monitoring.
- the UE 104 may transmit an indication of an event that the AI/ML model is applicable if the number of sample failures of the AI/ML model among the number of performance metric samples needed for the monitoring does not reach the maximum number of sample failures.
- the UE 104 may report that the AI/ML model is not applicable once the number of sample failures reaches the maximum number of sample failures. If the number of sample failures does not reach the maximum number of sample failures when the UE 104 obtains the number of performance metric samples needed for the monitoring, the UE 104 may report that the AI/ML model is applicable.
- the UE 104 may transmit an indication of an event that the AI/ML model is not applicable if the number of consecutive sample failures of the AI/ML model before end of the time duration for RS transmissions for the monitoring reaches a maximum number of consecutive sample failures.
- the UE 104 may transmit an indication of an event that the AI/ML model is applicable if the number of consecutive sample failures of the AI/ML model in the time duration for RS transmissions for the monitoring does not reach the maximum number of consecutive sample failures.
- the UE 104 may report that the AI/ML model is not applicable once the number of consecutive sample failures reaches the maximum number of consecutive sample failures.
- the UE 104 may report that the AI/ML model is applicable.
- the maximum number of consecutive sample failures may be configured to the UE 104 through e.g., the parameter configuration 202.
- the UE 104 may transmit an indication of an event that the AI/ML model is not applicable if the number of consecutive sample failures of the AI/ML model among performance metric samples obtained for the monitoring reaches a maximum number of consecutive sample failures, wherein the number of the obtained performance metric samples is less than or equal to the number of performance metric samples needed for the monitoring.
- the UE 104 may transmit an indication of an event that the AI/ML model is applicable if the number of consecutive sample failures of the AI/ML model among the number of performance metric samples needed for the monitoring does not reach the maximum number of consecutive sample failures.
- the UE 104 may report that the AI/ML model is not applicable once the number of consecutive sample failures reaches the maximum number of consecutive sample failures. If the number of consecutive sample failures does not reach the maximum number of consecutive sample failures when the UE 104 obtains the number of performance metric samples needed for the monitoring, the UE 104 may report that the AI/ML model is applicable.
- the UE 104 may transmit an indication of an event that the AI/ML model is not applicable if the time duration with sample failures of the AI/ML model before end of the time duration for RS transmissions for the monitoring reaches a maximum duration of model failure.
- the UE 104 may transmit an indication of an event that the AI/ML model is applicable if the time duration with sample failures of the AI/ML model in the time duration for RS transmissions for the monitoring does not reach the maximum duration of model failure.
- the UE 104 may report that the AI/ML model is not applicable once the time duration with sample failures reaches the maximum duration of model failure.
- the UE 104 may report that the AI/ML model is applicable.
- the maximum duration of model failure may be configured to the UE 104 through e.g., the parameter configuration 202.
- the UE 104 may transmit an indication of an event that the AI/ML model is not applicable if the time duration with sample failures of the AI/ML model among performance metric samples obtained for the monitoring reaches a maximum duration of model failure, wherein the number of the obtained performance metric samples is less than or equal to the number of performance metric samples needed for the monitoring.
- the UE 104 may transmit an indication of an event that the AI/ML model is applicable if the time duration with sample failures of the AI/ML model among the number of performance metric samples needed for the monitoring does not reach the maximum duration of model failure.
- the UE 104 may report that the AI/ML model is not applicable once the time duration with sample failures reaches the maximum duration of model failure. If the time duration with sample failures does not reach the maximum duration of model failure when the UE 104 obtains the number of performance metric samples needed for the monitoring, the UE 104 may report that the AI/ML model is applicable.
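The duration-based criterion above can be sketched as follows. Timestamped samples, and the reset of the failure run on a successful sample, are assumptions for illustration; the description itself only requires comparing the time duration with sample failures against the maximum duration of model failure.

```python
# Sketch of the duration-based criterion: the model is reported not
# applicable once the time duration with sample failures reaches the
# maximum duration of model failure. (time, metric) pairs are assumed.

def duration_event(samples, target, max_fail_duration):
    """samples: list of (time, metric) pairs in chronological order."""
    fail_start = None
    for t, metric in samples:
        if metric > target:                  # sample failure
            if fail_start is None:
                fail_start = t               # failure run begins
            if t - fail_start >= max_fail_duration:
                return "not applicable"
        else:
            fail_start = None                # failure run ends
    return "applicable"
```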
- the UE 104 may monitor at least one performance of at least one AI/ML model among one or more AI/ML models associated with the AI/ML functionality.
- the UE 104 may measure RSs from the base station 102 and generate a dataset based on the measured RSs.
- the dataset may include input data for the AI/ML model and corresponding labeled data.
- the UE 104 may calculate at least one inferencing result based on the input data and the AI/ML model, and determine at least one performance metric of the AI/ML model for at least one performance metric sample by a comparison of the at least one inferencing result and the corresponding labeled data.
- the performance of the AI/ML model may be based on the at least one performance metric of the AI/ML model.
- the result 205 of the performance monitoring of the AI/ML functionality may be based on at least one performance of the at least one AI/ML model.
- the parameter configuration 202 may include an indication of a level of the monitoring.
- the indication of the level of the monitoring may be indicative of one among a first monitoring level, a second monitoring level and a third monitoring level.
- at the first monitoring level, one AI/ML model associated with the AI/ML functionality is to be assessed during the monitoring.
- the model may be determined by the UE 104.
- at the second monitoring level, a default AI/ML model and an AI/ML model associated with the AI/ML functionality are to be assessed during the monitoring.
- at the third monitoring level, a plurality of AI/ML models associated with the AI/ML functionality are to be assessed during the monitoring.
- the UE 104 may transmit the performance of the assessed AI/ML model(s) to the base station 102.
- if the level of the monitoring is the first monitoring level, the result 205 of the monitoring may include at least one performance metric of one AI/ML model associated with the AI/ML functionality.
- if the level of the monitoring is the second monitoring level, the result 205 of the monitoring may include at least one performance metric of the default AI/ML model and at least one performance metric of the AI/ML model associated with the AI/ML functionality.
- if the level of the monitoring is the third monitoring level, the result 205 of the monitoring may include a plurality of performance metrics of the plurality of AI/ML models associated with the AI/ML functionality.
- the UE 104 may determine the event to be indicated in the result 205 based on determining whether each of the at least one AI/ML model is applicable.
- the events for each monitoring level may be predefined and aligned between the base station 102 and the UE 104.
- the first monitoring level may be associated with two events, namely, an event that the AI/ML functionality is applicable, and an event that the AI/ML functionality is not applicable.
- the result 205 of the monitoring may include an indication of an event that the AI/ML functionality is applicable if the AI/ML model associated with the AI/ML functionality is applicable.
- the result 205 of the monitoring may include an indication of an event that the AI/ML functionality is not applicable if the AI/ML model associated with the AI/ML functionality is not applicable.
- the second monitoring level may be associated with three events, namely, an event that the AI/ML functionality is applicable, an event that the AI/ML functionality is applicable with the default AI/ML model, and an event that the AI/ML functionality is not applicable.
- the result 205 of the monitoring may include an indication of an event that the AI/ML functionality is applicable if the default AI/ML model and the AI/ML model associated with the AI/ML functionality are applicable.
- the result 205 of the monitoring may include an indication of an event that the AI/ML functionality is applicable with the default AI/ML model if the default AI/ML model is applicable while the AI/ML model associated with the AI/ML functionality is not applicable.
- the result 205 of the monitoring may include an indication of an event that the AI/ML functionality is not applicable if both the default AI/ML model and the AI/ML model associated with the AI/ML functionality are not applicable.
- the third monitoring level may be associated with three events, namely, an event that the AI/ML functionality is applicable, an event that the AI/ML functionality is applicable but requires a configuration for inference, and an event that the AI/ML functionality is not applicable.
- the result 205 of the monitoring may include an indication of an event that the AI/ML functionality is applicable if at least one of the plurality of AI/ML models associated with the AI/ML functionality is applicable.
- the result 205 of the monitoring may include an indication of an event that the AI/ML functionality is applicable but requires a configuration for inference if at least one of the plurality of AI/ML models associated with the AI/ML functionality is applicable but requires a configuration for inference.
- the result 205 of the monitoring may include an indication of an event that the AI/ML functionality is not applicable if none of the plurality of AI/ML models associated with the AI/ML functionality is applicable.
- these monitoring levels and associated events for reporting are merely illustrative examples. Other monitoring levels and events for reporting are also possible.
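The mapping from per-model applicability to the functionality-level event for each monitoring level can be sketched as follows. The event strings and the function shape are assumptions; the third-level event "applicable but requires a configuration for inference" is omitted for brevity, and the second-level case of an inapplicable default model with an applicable selected model is not specified by the description, so it is treated here as not applicable.

```python
# Hedged sketch of the functionality-level event per monitoring level.
# Event strings are illustrative, not signaled values.

def functionality_event(level, applicable):
    """applicable: booleans for the assessed model(s); for the second level,
    index 0 is the default model and index 1 the UE-selected model."""
    if level == 1:                       # one model assessed
        return "applicable" if applicable[0] else "not applicable"
    if level == 2:                       # default model + selected model
        default_ok, selected_ok = applicable
        if default_ok and selected_ok:
            return "applicable"
        if default_ok:
            return "applicable with default model"
        return "not applicable"          # remaining cases (incl. unspecified)
    if level == 3:                       # plurality of models
        return "applicable" if any(applicable) else "not applicable"
    raise ValueError("unknown monitoring level")
```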
- the UE 104 may determine whether each of the assessed AI/ML model(s) is applicable. In one example, the UE 104 may determine that an AI/ML model is not applicable if a ratio of sample failures of the AI/ML model reaches a respective maximum ratio of sample failures. The UE 104 may determine that the AI/ML model is applicable if the ratio of sample failures of the AI/ML model does not reach the respective maximum ratio of sample failures. The ratio of sample failures of the AI/ML model may be based on the performance metrics obtained by the measurement of the RS transmitted in the time duration configured in the parameter configuration 202.
- the ratio of sample failures of the AI/ML model may be based on the obtained performance metrics whose number is the same as the respective number of performance metric samples needed for the monitoring corresponding to the AI/ML model.
- the respective maximum ratio of sample failures of the AI/ML model may be configured to the UE 104 through e.g., the parameter configuration 202.
- the UE 104 may determine that the AI/ML model is not applicable if the number of sample failures of the AI/ML model before end of the time duration for RS transmissions for the monitoring reaches a respective maximum number of sample failures.
- the UE 104 may determine that the AI/ML model is applicable if the number of sample failures of the AI/ML model in the time duration for RS transmissions for the monitoring does not reach the respective maximum number of sample failures.
- the UE 104 may determine that the AI/ML model is not applicable if the number of sample failures of the AI/ML model among performance metric samples of the AI/ML model obtained for the monitoring reaches a respective maximum number of sample failures, wherein the number of the obtained performance metric samples of the AI/ML model is less than or equal to the respective number of performance metric samples needed for the monitoring corresponding to the AI/ML model.
- the UE 104 may determine that the AI/ML model is applicable if the number of sample failures of the AI/ML model among the number of performance metric samples needed for the monitoring does not reach the respective maximum number of sample failures.
- the UE 104 may determine that the AI/ML model is not applicable if the number of consecutive sample failures of the AI/ML model before end of the time duration for RS transmissions for the monitoring reaches a respective maximum number of consecutive sample failures.
- the UE 104 may determine that the AI/ML model is applicable if the number of consecutive sample failures of the AI/ML model in the time duration for RS transmissions for the monitoring does not reach the respective maximum number of consecutive sample failures.
- the UE 104 may determine that the AI/ML model is not applicable if the number of consecutive sample failures of the AI/ML model among performance metric samples of the AI/ML model obtained for the monitoring reaches a respective maximum number of consecutive sample failures, wherein the number of the obtained performance metric samples is less than or equal to the respective number of performance metric samples needed for the monitoring corresponding to the AI/ML model.
- the UE 104 may determine that the AI/ML model is applicable if the number of consecutive sample failures of the AI/ML model among the number of performance metric samples needed for the monitoring does not reach the respective maximum number of consecutive sample failures.
- the UE 104 may determine that the AI/ML model is not applicable if the time duration with sample failures of the AI/ML model before end of the time duration for RS transmissions for the monitoring reaches a respective maximum duration of model failure.
- the UE 104 may determine that the AI/ML model is applicable if the time duration with sample failures of the AI/ML model in the time duration for RS transmissions for the monitoring does not reach the respective maximum duration of model failure.
- the UE 104 may determine that the AI/ML model is not applicable if the time duration with sample failures of the AI/ML model among performance metric samples of the AI/ML model obtained for the monitoring reaches a respective maximum duration of model failure, wherein the number of the obtained performance metric samples is less than or equal to the respective number of performance metric samples needed for the monitoring corresponding to the AI/ML model.
- the UE 104 may determine that the AI/ML model is applicable if the time duration with sample failures of the AI/ML model among the number of performance metric samples needed for the monitoring does not reach the respective maximum duration of model failure.
- the parameter configuration 202 may include a time duration for RS transmissions for the monitoring. Before the end of the configured time duration for RS transmissions for the monitoring, the UE 104 may transmit the result 205 of the monitoring to the base station 102 if, for each of the assessed AI/ML model(s), the number of sample failures of the AI/ML model has reached a respective maximum number of sample failures for the AI/ML model.
- the UE 104 may transmit the result 205 of the monitoring to the base station 102 if, for each of the assessed AI/ML model(s), the number of consecutive sample failures of the AI/ML model has reached a respective maximum number of consecutive sample failures for the AI/ML model.
- the UE 104 may transmit the result 205 of the monitoring to the base station 102 if, for each of the assessed AI/ML model(s), a time duration with sample failures of the AI/ML model has reached a respective maximum duration of model failure for the AI/ML model.
- the parameter configuration 202 may include at least one number of performance metric samples needed for the monitoring corresponding to at least one AI/ML model to be assessed. Even if the number of the obtained performance metric samples for a respective AI/ML model among the assessed AI/ML model(s) is less than the respective number of performance metric samples needed for the monitoring, the UE 104 may transmit the result 205 of the monitoring to the base station 102 based on the obtained performance metric samples if, for each of the assessed AI/ML model(s), the number of sample failures of the AI/ML model has reached a respective maximum number of sample failures for the AI/ML model.
- the UE 104 may transmit the result 205 of the monitoring to the base station 102 based on the obtained performance metric samples if, for each of the assessed AI/ML model(s), a time duration with sample failures of the AI/ML model has reached a respective maximum duration of model failure for the AI/ML model.
- the UE 104 may transmit the result 205 of the monitoring to the base station 102.
- the UE 104 may transmit the result 205 of the monitoring to the base station 102 if a ratio of sample failures of a respective AI/ML model among the assessed AI/ML model(s) reaches a respective maximum ratio of sample failures.
- the parameter configuration 202 for the performance monitoring of the AI/ML functionality may include one of the following: at least one maximum ratio of sample failures for the AI/ML model(s) to be assessed, at least one maximum number of sample failures for the AI/ML model(s) to be assessed, at least one maximum number of consecutive sample failures for the AI/ML model(s) to be assessed, or at least one maximum duration of model failure for the AI/ML model(s) to be assessed.
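A per-model configuration carrying the alternative limits listed above could be organized as follows. The field names are illustrative assumptions, not signaling field names from the description; typically one of the optional limits would be configured per model.

```python
# Illustrative container for the per-model monitoring limits; field names
# are assumptions, not standardized information elements.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelMonitoringConfig:
    model_id: int
    performance_metric_target: float
    max_failure_ratio: Optional[float] = None
    max_failures: Optional[int] = None
    max_consecutive_failures: Optional[int] = None
    max_failure_duration: Optional[float] = None

# Example: a model judged by a maximum number of sample failures.
cfg = ModelMonitoringConfig(model_id=0, performance_metric_target=1.0,
                            max_failures=3)
```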
- FIG. 3 illustrates an example signaling chart of a general procedure of performance monitoring of an AI/ML functionality or an AI/ML model in accordance with aspects of the present disclosure.
- the communication process 300 will be described with reference to FIG. 1. It would be appreciated that although the communication process 300 has been described referring to the network environment 100 of FIG. 1, this communication process 300 may be likewise applied to other similar communication scenarios.
- the communication process 300 may be regarded as a specific example implementation of the process 200 of FIG. 2.
- the process 300 may involve the UE 104 and the base station 102.
- a configuration associated with a performance monitoring may be aligned between the UE 104 and the base station 102 during or after a functionality identification procedure for an AI/ML functionality or a model identification procedure for an AI/ML model.
- the configuration may include at least one of the following: a triggering condition, a report quantity, or other associated information.
- the configuration may further include at least one of the following: an indication for performance monitoring of inactive functionality or an indication for functionality monitoring level.
- the configuration can be reported to the base station 102 by UE capability or UAI e.g., as a kind of functionality-related information or can be sent from the base station 102 to the UE 104 by DL signaling.
- the base station 102 may make decisions on operation for the monitored AI/ML functionality and/or operation for the models of the monitored AI/ML functionality or make decisions on operation for the monitored AI/ML model.
- if the procedure is for monitoring the performance of an AI/ML functionality, then based on the reported monitoring output of the AI/ML functionality and the corresponding performance monitoring level, the base station 102 may make a decision on the functionality operation or the model operation and indicate the decision to the UE 104.
- information associated with performance monitoring of AI/ML functionalities may be aligned between the base station 102 and the UE 104.
- the base station 102 may indicate the information associated with the performance monitoring of AI/ML functionality by a higher-level signaling, e.g., RRC signaling.
- the UE 104 may report the information associated with the performance monitoring of the AI/ML functionality by UE capability or UAI. For example, the information may be reported as a kind of condition or additional condition associated with AI/ML-enabled Feature/FG.
- information about performance monitoring of inactive functionality may be reported by the UE 104 through UE capability.
- the information about performance monitoring of inactive functionality may include a first indication about whether the performance monitoring of inactive functionality is supported, and/or a maximum number of functionalities that can be monitored at the same time.
- the aligned information for the AI/ML functionality may include the triggering condition used to trigger/start/resume a performance monitoring of the functionality. There may be three alternatives of the triggering condition.
- the triggering condition may be associated with event occurrence implying performance degradation of the functionality.
- if the UE 104 is aware of the event occurrence, the performance monitoring will be triggered by the UE 104.
- the entry condition of the event may include that a reference signal strength (e.g., SINR, RSRP, RSRQ) is lower than a threshold configured by higher layers or that the UE 104 is moving to the cell edge or moving to a new cell.
- the UE 104 may trigger an L1 or L2 report when the entry condition for the event is fulfilled and the duration of this event is longer than a configured timer.
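The timer-gated trigger condition above can be sketched as follows. The time units and names are assumptions; the sketch shows the report firing only once the entry condition (here, SINR below a threshold) has held for longer than the configured timer.

```python
# Sketch of the event-triggered report: fires when the entry condition
# stays fulfilled longer than a configured timer. Units are assumptions.

def report_trigger_time(sinr_trace, threshold, timer):
    """sinr_trace: list of (time, sinr) pairs. Returns the first time the
    trigger fires, or None if it never fires."""
    entered = None
    for t, sinr in sinr_trace:
        if sinr < threshold:             # entry condition fulfilled
            if entered is None:
                entered = t
            if t - entered > timer:      # event lasted longer than the timer
                return t
        else:
            entered = None               # condition no longer fulfilled
    return None
```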
- the performance monitoring may be triggered with a DL signaling.
- the base station 102 signals the UE 104 to perform the functionality monitoring. This may be done by an aperiodic CSI or beam report.
- the triggering condition may be associated with a performance monitoring periodicity and an offset.
- the performance monitoring will be started periodically at time instances determined by the periodicity and the offset. In a specific implementation, this may be done by a periodic CSI or beam report by introducing a new report quantity. In another specific implementation, a timer-based performance monitoring may be supported in L2.
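The periodic time instances determined by the periodicity and the offset can be sketched as follows; slot-level integer timing is an assumption for illustration.

```python
# Sketch of periodic triggering: monitoring starts at time instances
# t = offset + n * periodicity. Slot-level units are an assumption.

def trigger_instances(periodicity, offset, horizon):
    """Return all trigger times t = offset + n * periodicity in [0, horizon)."""
    return list(range(offset, horizon, periodicity))

# Example: periodicity of 40 slots with an offset of 5 slots.
times = trigger_instances(40, 5, 130)
```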
- the aligned information for the AI/ML functionality may include the report quantity that specifies a report content of the performance monitoring of the functionality.
- the report content of the performance monitoring may include an event indicating performance monitoring result of the functionality.
- the report content of the performance monitoring may include performance metric (s) based on the model inference output and labeled data.
- the report content of the performance monitoring may include the differential RSRP of a same beam pair or statistical results of some measured metrics.
- the report content of the performance monitoring may be empty. In this case, nothing would be reported for the functionality monitoring, and the UE 104 may select a model transparently to the base station 102 based on the performance monitoring outcome.
- the aligned information for the AI/ML functionality may include the time requirement of report, indicating whether the time requirement of report of the performance monitoring is tight or relaxed. For example, a value of 1 means a tight time requirement and a value of 0 means a relaxed time requirement.
- the aligned information for the AI/ML functionality may include the second indication to indicate which level(s) of performance monitoring is (are) supported. There may be three potential performance monitoring levels, i.e., Level a, Level b and Level c.
- at Level a, the performance of a model determined by the UE 104 would be assessed and the result would be regarded as the performance monitoring result of the functionality.
- at Level b, the performance of a default model and a specific model of the functionality would be assessed, where the default model has general performance in conditions relevant to the functionality and the specific model is selected by the UE 104 within the functionality.
- the default model should be identified between the base station 102 and the UE 104 through e.g., type A, type B1 or type B2 model identification.
- Level c: the performance of candidate models belonging to the functionality would be assessed, where the number or maximum number of the candidate models or the combinations of candidate models should be aligned by the base station 102 and the UE 104.
- the aligned information for the AI/ML functionality may include at least one performance metric target (s) as the reference to performance metric of assessed model (s) of the functionality.
- the aligned information for the AI/ML functionality may further include one of the following: at least one corresponding maximum failure rate/ratio or at least one corresponding maximum failure sample number or at least one corresponding maximum consecutive number of failure sample for the assessed model (s) of the functionality.
- the aligned information for the AI/ML functionality may include the RS resource configuration for the performance monitoring.
- the RS resource configuration may be used to generate a dataset for performance monitoring of the AI/ML functionality.
- the aligned information for the AI/ML functionality may include the Cell/site/TRP/area information.
- RS sequence, RS ID, cell ID or area ID of the UE 104 may be considered.
- the performance monitoring may be triggered/started/resumed based on the triggering condition in the aligned information for the AI/ML functionality with three potential ways corresponding to three alternatives of the triggering condition.
- the UE 104 may be aware of the event occurrence implying performance monitoring degradation of the functionality, e.g., SINR is lower than threshold. Then, at 402-1, the UE 104 may report the event occurrence to the base station 102 for triggering the performance monitoring.
- the base station 102 may trigger the performance monitoring by signaling directly, e.g., DCI, MAC CE or RRC signaling.
- the performance monitoring may be triggered periodically based on the performance monitoring periodicity and offset.
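The three triggering alternatives above can be sketched as one dispatch; the alternative names and parameters are hypothetical illustrations, not specified terminology.

```python
def monitoring_triggered(alternative, *, event_reported=False,
                         signaling_received=False, slot_index=0,
                         periodicity=1, offset=0):
    """Return True when performance monitoring should start under the
    given triggering alternative (names are illustrative)."""
    if alternative == 'ue_event':      # UE reports an event occurrence (e.g., low SINR)
        return event_reported
    if alternative == 'nw_signaling':  # direct DCI / MAC CE / RRC signaling
        return signaling_received
    if alternative == 'periodic':      # periodicity and offset
        return (slot_index >= offset
                and (slot_index - offset) % periodicity == 0)
    raise ValueError(f"unknown alternative: {alternative}")
```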
- the base station 102 may transmit RSs in a time duration for measurement to enable the UE 104 to generate a dataset for the performance monitoring.
- the dataset may include input data for model and labeled data, where the input data may be beam measurement results, CSI measurement results, or UE measurement result for positioning (e.g., RSTD, RSRP, RSCP, etc. ) .
- the labeled data may be identifiers and/or L1-RSRP of top N beams, the CSI measurement result or the UE position.
- the time duration and the resource configuration for the RS transmission may be indicated by the aligned information for the AI/ML functionality.
- the UE 104 may calculate the monitoring result of the functionality, where the model of the functionality used for the monitoring may be determined based on the second indication.
- the second indication may indicate one among Level a, Level b and Level c as the level of performance monitoring of the functionality. If the Level a is indicated, the UE 104 may determine a model of the functionality by UE implementation to assess performance of the functionality. If the Level b is indicated, a default model and a specific model of the functionality would be assessed, where the default model has general performance in conditions relevant to the functionality and was identified between the base station 102 and the UE 104, and the specific model is selected by the UE 104 within the functionality. If the Level c is indicated, information about models to be assessed would be further indicated to the UE 104, e.g., IDs of models to be assessed. The UE 104 may assess performance of these indicated models.
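A sketch of the model-selection logic implied by the three levels; the function name and model identifiers are hypothetical.

```python
def models_to_assess(level, ue_selected_model, default_model=None,
                     indicated_model_ids=None):
    """Which model(s) the UE assesses for a functionality, given the
    indicated monitoring level (a sketch; names are illustrative).

    Level 'a': a single model chosen by UE implementation.
    Level 'b': the identified default model plus the UE-selected model.
    Level 'c': exactly the models the network indicated (e.g., by model ID).
    """
    if level == 'a':
        return [ue_selected_model]
    if level == 'b':
        return [default_model, ue_selected_model]
    if level == 'c':
        return list(indicated_model_ids)
    raise ValueError(f"unknown monitoring level: {level}")
```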
- a performance metric may be calculated by the UE 104 based on the dataset for performance monitoring.
- the performance metric may be a per sample metric or a statistical metric.
- a bias between an inference result and corresponding labeled data is a per sample metric, where the inference result is obtained using the model with input data of the dataset.
- the bias may be different for different use cases.
- the bias may be the difference between the predicted L1-RSRP and the actual L1-RSRP of the same beam for a beam management use case, or the squared generalized cosine similarity (SGCS) between the predicted CSI and the actual CSI for a CSI prediction use case, or the position difference for a direct or AI assisted positioning accuracy enhancement use case.
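As an illustration of the per-sample metrics above, the sketch below computes the L1-RSRP bias for beam management and the SGCS for CSI prediction; the function names are illustrative, and the SGCS follows the usual definition |v_p^H v_a|^2 / (‖v_p‖^2 ‖v_a‖^2).

```python
def rsrp_bias_db(predicted_rsrp, measured_rsrp):
    """Per-sample bias for beam management: difference between the
    predicted and the actual L1-RSRP of the same beam (in dB)."""
    return predicted_rsrp - measured_rsrp

def sgcs(predicted, actual):
    """Squared generalized cosine similarity between two complex vectors
    (e.g., predicted vs. actual CSI); equals 1.0 for identical directions."""
    inner = sum(p.conjugate() * a for p, a in zip(predicted, actual))
    norm_p = sum(abs(p) ** 2 for p in predicted)
    norm_a = sum(abs(a) ** 2 for a in actual)
    return abs(inner) ** 2 / (norm_p * norm_a)
```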
- the UE 104 may report the monitoring result of the functionality. If the performance metric target is configured and the report quantity is configured with “event” or “performance metric” , a counter may be maintained at the UE 104 to count the number of times that the performance metric is worse than the performance metric target in the time duration.
- the UE 104 may be triggered to report monitoring result of the functionality, so that the LCM operation may be performed accordingly, e.g., functionality activation, deactivation, switching and fallback.
- the time duration for measurement would be terminated as well.
- the UE 104 may be triggered to report monitoring result of the functionality, so that LCM operation may be performed accordingly.
- the time duration for measurement would be terminated as well.
- the counter may be reset when the performance metric is better than the performance metric target before the counter reaches the maximum consecutive number of failure sample.
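The counter behavior described above (count samples failing the target, and reset the consecutive count whenever a sample meets the target) can be sketched as follows; the names and the higher-is-better metric convention are assumptions.

```python
def monitor_counter(metrics, target, max_consecutive):
    """UE-side failure counter sketch: `metrics` are per-sample
    performance metrics gathered in the time duration; a sample fails
    when its metric is below `target` (assuming higher is better).
    The consecutive counter resets on any passing sample; returns the
    total failure count and whether `max_consecutive` consecutive
    failures ever occurred."""
    total = consecutive = 0
    limit_reached = False
    for m in metrics:
        if m < target:
            total += 1
            consecutive += 1
            if consecutive >= max_consecutive:
                limit_reached = True
        else:
            consecutive = 0  # reset before the consecutive limit is reached
    return total, limit_reached
```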
- the UE 104 may report the monitoring result of the functionality.
- the UE 104 may be triggered to report monitoring result of the functionality.
- the report may be carried by L1 signaling, e.g., UCI.
- the report may be carried by higher layer signaling, e.g., MAC CE or RRC signaling.
- the UE 104 may report the monitoring result of the functionality using L1/L2 signaling or higher layer signaling.
- the monitoring result is determined at 403 based on the calculated performance metric.
- the report format for the cases of “event” and “performance metric” will be described in detail.
- the UE 104 would report a performance metric of the functionality, where a model used to generate the performance metric is selected by the UE 104 and is transparent to the base station 102.
- Event C1: functionality is suitable
- Event C2: functionality is not suitable
- the UE 104 may report an event associated with the monitoring result using one bit. For example, if the value of the counter at the end of the time duration is less than the maximum failure sample number which means performance of the functionality is suitable, the UE 104 would report the Event C1 using a bit with “1” ; otherwise, the UE 104 would report the Event C2 using a bit with “0” .
- the UE 104 would report two performance metrics of the functionality, i.e., performance metrics of the default model and the specific model. To distinguish these two performance metrics at the base station 102, the order of them in a report should be predefined, e.g., the performance metric of the default model may be followed by the performance metric of the specific model. In some embodiments, corresponding model IDs may be also reported in the report.
- the UE 104 would report the Event D2 using a bit field with “10” , which means that the performance of the functionality would be suitable if a configuration for inference of the default model is configured;
- the UE 104 would report the Event D3 using a bit field with “00” , which means that the performance of the functionality would not be suitable even with a new configuration for inference.
- Event E1: functionality is suitable
- Event E2: functionality is suitable with new configuration for inference
- Event E3: functionality is not suitable
- the UE 104 may report an event associated with the monitoring result using a bit field.
- the UE 104 would report the Event E1 using a bit field with “01” , which means that the performance of the functionality is suitable; the content of report may include Event E1, e.g., {Event E1} .
- the UE 104 would report the Event E3 using a bit field with “00” , which means that the performance of the functionality is not suitable; the content of report may include Event E3, e.g., {Event E3} .
- the UE 104 would report performance metrics of all assessed models, along with identifiers of assessed models e.g., model ID or identifier of associated configuration for performance monitoring (e.g., CSI-RS resource set ID) .
- the report format may be { {model ID_1, Performance metric_1} , {model ID_2, Performance metric_2} , ..., {model ID_n, Performance metric_n} } .
- the report format may be { {CSI-RS set ID_1, Performance metric_1} , {CSI-RS set ID_2, Performance metric_2} , ..., {CSI-RS set ID_n, Performance metric_n} } .
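A sketch of how a UE might assemble the Level c report content in either identifier style; the dictionary keys are illustrative, not a specified encoding.

```python
def level_c_report(assessed, use_csi_rs_set_ids=False):
    """Assemble the Level c monitoring report: a list of
    {identifier, performance metric} pairs, where the identifier is
    either a model ID or the CSI-RS resource set ID of the associated
    monitoring configuration (keys are illustrative)."""
    key = 'csi_rs_set_id' if use_csi_rs_set_ids else 'model_id'
    return [{key: ident, 'performance_metric': metric}
            for ident, metric in assessed]
```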
- the base station 102 may make decisions on functionality operation or model operation for the AI/ML functionality.
- the base station 102 may make decisions on operations of the functionality or model of the functionality based on the reported event associated with the functionality. For event C1, event D1 or event E1, the base station 102 may not indicate a LCM operation to the UE 104. For event C2, event D3 or event E3, the base station 102 may indicate the UE 104 to perform a functionality deactivation, switching or fallback. For event D2, the base station 102 may indicate a configuration for inference of the default model to the UE 104. For event E2 along with model ID or configuration ID, the base station 102 may indicate a configuration for inference of the corresponding model to the UE 104.
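The decision table in the paragraph above can be sketched as a simple dispatch; the event labels follow the description, while the action strings are only illustrative.

```python
def lcm_action(event):
    """Base-station decision sketch keyed by the reported event
    (labels as described above; action strings are illustrative)."""
    if event in ('C1', 'D1', 'E1'):
        return 'no LCM operation indicated'
    if event in ('C2', 'D3', 'E3'):
        return 'functionality deactivation, switching or fallback'
    if event == 'D2':
        return 'configure inference with the default model'
    if event == 'E2':
        return 'configure inference with the reported model/configuration'
    raise ValueError(f"unknown event: {event}")
```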
- the report quantity is “performance metric”
- the functionality would be maintained; otherwise, a functionality operation would be executed, e.g., switching/updating/deactivation/fallback.
- the functionality may be claimed to fail. If the default model can reach the corresponding performance target but the specific model cannot, the functionality may be maintained and the UE 104 may perform inference using the default model. If both the default model and the specific model can reach the corresponding performance target, respectively, the functionality would be maintained as before.
- Level c monitoring: if the performance metric of at least one assessed model (s) satisfies the performance metric threshold, information to maintain the functionality and, if needed, activation of one of the models together with associated RS resource configuration for inference would be indicated to the UE 104 from the base station 102; otherwise, functionality operations would be executed, e.g., switching/updating/deactivation/fallback.
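For Level b, the maintain/fallback/fail logic described above might look like the following sketch, assuming higher metric values are better and that the functionality fails when neither model meets its target (the exact failure condition is an assumption).

```python
def level_b_decision(default_metric, specific_metric,
                     default_target, specific_target):
    """Sketch of the Level b outcome: fail when neither model meets
    its target (assumed condition); fall back to inference with the
    default model when only it meets its target; otherwise maintain
    the functionality as before."""
    default_ok = default_metric >= default_target
    specific_ok = specific_metric >= specific_target
    if not default_ok and not specific_ok:
        return 'functionality failed'
    if default_ok and not specific_ok:
        return 'maintain functionality, infer with default model'
    return 'maintain functionality'
```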
- the UE may report, by UE capability report, a first indication of supporting performance monitoring of inactive functionality and/or a maximum number of functionalities that can be monitored at the same time.
- the UE may also report, by UE capability report or UAI, a second indication of supporting Level a/b/c performance monitoring for an AI/ML beam prediction functionality.
- the base station may configure a performance monitoring periodicity and an offset for triggering performance monitoring.
- the base station may configure a report quantity for performance monitoring of the functionality.
- the report quantity may be “performance metric” , e.g., 95th percentile of the difference between predicted L1-RSRP and measured L1-RSRP of the same beam, or top-N/1 beam prediction accuracy.
- the report quantity may be “event” .
- the base station may configure at least one performance metric target (s) as the reference to performance metric of the functionality.
- the corresponding performance metric targets can be 90% and 95% for the default model and the specific model, respectively.
- the base station may configure RS resource configuration for performance monitoring used to generate dataset of beam measurement.
- Beam sets for the inference result and the model input are denoted Set A and Set B, respectively.
- Set B is a subset of Set A and both are sets of narrow beams
- a CSI-RS resource set is configured for beam measurement of Set A and Set B.
- Set B is different from Set A, and Set A and Set B are a set of wider beams and a set of narrow beams, respectively
- a SSB-CSI resource set may be configured for beam measurement of Set B and a CSI-RS resource set may be configured for beam measurement of Set A.
- the base station may configure a time duration for collecting data used for performance monitoring, e.g., 100ms.
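The example report quantities mentioned in this configuration (top-1 beam prediction accuracy and a percentile of the L1-RSRP difference) can be computed as in this sketch; the nearest-rank percentile definition is an illustrative choice.

```python
import math

def top1_accuracy(predicted_best, measured_best):
    """Fraction of samples where the predicted top-1 beam equals the
    measured best beam."""
    hits = sum(1 for p, m in zip(predicted_best, measured_best) if p == m)
    return hits / len(predicted_best)

def percentile(values, q):
    """Nearest-rank percentile (q in (0, 100]) of per-sample values,
    e.g., |predicted - measured| L1-RSRP differences in dB."""
    ordered = sorted(values)
    rank = math.ceil(q / 100 * len(ordered)) - 1
    return ordered[rank]
```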
- a performance monitoring for the corresponding functionality will be triggered by the UE reporting the event to the base station.
- a performance monitoring for the corresponding functionality will be triggered periodically at time instances determined by the periodicity and the offset.
- the UE may receive RS based on the RS resource configuration for performance monitoring in a time duration. Starting point of the time duration is associated with the time instance when the UE reports the event to the base station.
- the RS resource configuration may include a periodic CSI-RS resource set for Set A and Set B beam measurement, and the time duration with length of 100ms starts after x slots from the time instance when the UE reports the event to the base station.
- the UE may perform beam measurements based on the received RSs and calculate the performance metrics based on results of the beam measurements. The calculation of performance metrics is up to UE implementation.
- the UE may report monitoring results of the functionality based on the calculated performance metrics. For different cases of report quantity and performance monitoring level, there are different contents and formats of report. The details of contents and formats of report may follow step 404 in FIG. 4.
- FIG. 5 illustrates an example of a device 500 that supports performance monitoring for AI/ML functionality or AI/ML model in accordance with aspects of the present disclosure.
- the device 500 may be an example of a UE 104 or a network entity 102 as described herein.
- the device 500 may support wireless communication with one or more network entities 102, UEs 104, or any combination thereof.
- the device 500 may include components for bi-directional communications including components for transmitting and receiving communications, such as a processor 502, a memory 504, a transceiver 506, and, optionally, an I/O controller 508. These components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more interfaces (e.g., buses) .
- the processor 502, the memory 504, the transceiver 506, or various combinations thereof or various components thereof may be examples of means for performing various aspects of the present disclosure as described herein.
- the processor 502, the memory 504, the transceiver 506, or various combinations or components thereof may support a method for performing one or more of the operations described herein.
- the processor 502, the memory 504, the transceiver 506, or various combinations or components thereof may be implemented in hardware (e.g., in communications management circuitry) .
- the hardware may include a processor, a digital signal processor (DSP) , an application-specific integrated circuit (ASIC) , a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic, discrete hardware components, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure.
- the processor 502 and the memory 504 coupled with the processor 502 may be configured to perform one or more of the functions described herein (e.g., executing, by the processor 502, instructions stored in the memory 504) .
- the processor 502 may support wireless communication at the device 500 in accordance with examples as disclosed herein.
- the processor 502 may be configured to or operable to support a means for receiving, from a network entity, a parameter configuration for monitoring performance of an AI/ML functionality or performance of an AI/ML model; and a means for transmitting, to the network entity, a result of the monitoring of the performance of the AI/ML functionality or the performance of the AI/ML model based on the parameter configuration.
- the processor 502 may support wireless communication at the device 500 in accordance with examples as disclosed herein.
- the processor 502 may be configured to or operable to support a means for transmitting, to a UE, a parameter configuration for monitoring performance of an AI/ML functionality or performance of an AI/ML model; and a means for receiving, from the UE, a result of the monitoring.
- the processor 502 may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof) .
- the processor 502 may be configured to operate a memory array using a memory controller.
- a memory controller may be integrated into the processor 502.
- the processor 502 may be configured to execute computer-readable instructions stored in a memory (e.g., the memory 504) to cause the device 500 to perform various functions of the present disclosure such that the device 500 may perform any process of the disclosure as discussed with reference to FIGS. 2 to 10C.
- the memory 504 may include random access memory (RAM) and read-only memory (ROM) .
- the memory 504 may store computer-readable, computer-executable code including instructions that, when executed by the processor 502, cause the device 500 to perform various functions described herein.
- the code may be stored in a non-transitory computer-readable medium such as system memory or another type of memory.
- the code may not be directly executable by the processor 502 but may cause a computer (e.g., when compiled and executed) to perform functions described herein.
- the memory 504 may include, among other things, a basic I/O system (BIOS) which may control basic hardware or software operation such as the interaction with peripheral components or devices.
- the I/O controller 508 may manage input and output signals for the device 500.
- the I/O controller 508 may also manage peripherals not integrated into the device 500.
- the I/O controller 508 may represent a physical connection or port to an external peripheral.
- the I/O controller 508 may utilize a known operating system.
- the I/O controller 508 may be implemented as part of a processor, such as the processor 502.
- a user may interact with the device 500 via the I/O controller 508 or via hardware components controlled by the I/O controller 508.
- the device 500 may include a single antenna 510. However, in some other implementations, the device 500 may have more than one antenna 510 (i.e., multiple antennas) , including multiple antenna panels or antenna arrays, which may be capable of concurrently transmitting or receiving multiple wireless transmissions.
- the transceiver 506 may communicate bi-directionally, via the one or more antennas 510, wired, or wireless links as described herein.
- the transceiver 506 may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver.
- the transceiver 506 may also include a modem to modulate the packets, to provide the modulated packets to one or more antennas 510 for transmission, and to demodulate packets received from the one or more antennas 510.
- the transceiver 506 may include one or more transmit chains, one or more receive chains, or a combination thereof.
- a transmit chain may be configured to generate and transmit signals (e.g., control information, data, packets) .
- the transmit chain may include at least one modulator for modulating data onto a carrier signal, preparing the signal for transmission over a wireless medium.
- the at least one modulator may be configured to support one or more techniques such as amplitude modulation (AM) , frequency modulation (FM) , or digital modulation schemes like phase-shift keying (PSK) or quadrature amplitude modulation (QAM) .
- the transmit chain may also include at least one power amplifier configured to amplify the modulated signal to an appropriate power level suitable for transmission over the wireless medium.
- the transmit chain may also include one or more antennas 510 for transmitting the amplified signal into the air or wireless medium.
- a receive chain may be configured to receive signals (e.g., control information, data, packets) over a wireless medium.
- the receive chain may include one or more antennas 510 for receiving the signal over the air or wireless medium.
- the receive chain may include at least one amplifier (e.g., a low-noise amplifier (LNA) ) configured to amplify the received signal.
- the receive chain may include at least one demodulator configured to demodulate the received signal and obtain the transmitted data by reversing the modulation technique applied during transmission of the signal.
- the receive chain may include at least one decoder for decoding and processing the demodulated signal to recover the transmitted data.
- FIG. 6 illustrates an example of a processor 600 that supports performance monitoring for AI/ML functionality or AI/ML model in accordance with aspects of the present disclosure.
- the processor 600 may be an example of a processor configured to perform various operations in accordance with examples as described herein.
- the processor 600 may be implemented in a device or its components as described herein.
- the device may be an example of a UE 104 or a network entity 102 as described herein.
- the processor 600 may include a controller 602 configured to perform various operations in accordance with examples as described herein.
- the processor 600 may optionally include at least one memory 604, such as L1/L2/L3 cache. Additionally, or alternatively, the processor 600 may optionally include one or more arithmetic-logic units (ALUs) 600.
- the processor 600 may be a processor chipset and include a protocol stack (e.g., a software stack) executed by the processor chipset to perform various operations (e.g., receiving, obtaining, retrieving, transmitting, outputting, forwarding, storing, determining, identifying, accessing, writing, reading) in accordance with examples as described herein.
- the processor chipset may include one or more cores and one or more caches (e.g., memory local to or included in the processor chipset (e.g., the processor 600) or other memory (e.g., random access memory (RAM) , read-only memory (ROM) , dynamic RAM (DRAM) , synchronous dynamic RAM (SDRAM) , static RAM (SRAM) , ferroelectric RAM (FeRAM) , magnetic RAM (MRAM) , resistive RAM (RRAM) , flash memory, phase change memory (PCM) , and others) ) .
- the controller 602 may be configured to manage and coordinate various operations (e.g., signaling, receiving, obtaining, retrieving, transmitting, outputting, forwarding, storing, determining, identifying, accessing, writing, reading) of the processor 600 to cause the processor 600 to support various operations in accordance with examples as described herein.
- the controller 602 may operate as a control unit of the processor 600, generating control signals that manage the operation of various components of the processor 600. These control signals include enabling or disabling functional units, selecting data paths, initiating memory access, and coordinating timing of operations.
- the controller 602 may be configured to fetch (e.g., obtain, retrieve, receive) instructions from the memory 604 and determine subsequent instruction (s) to be executed to cause the processor 600 to support various operations in accordance with examples as described herein.
- the controller 602 may be configured to track memory address of instructions associated with the memory 604.
- the controller 602 may be configured to decode instructions to determine the operation to be performed and the operands involved.
- the controller 602 may be configured to interpret the instruction and determine control signals to be output to other components of the processor 600 to cause the processor 600 to support various operations in accordance with examples as described herein.
- the controller 602 may be configured to manage flow of data within the processor 600.
- the controller 602 may be configured to control transfer of data between registers, arithmetic logic units (ALUs) , and other functional units of the processor 600.
- the memory 604 may include one or more caches (e.g., memory local to or included in the processor 600 or other memory, such as RAM, ROM, DRAM, SDRAM, SRAM, MRAM, flash memory, etc. ) . In some implementations, the memory 604 may reside within or on a processor chipset (e.g., local to the processor 600) . In some other implementations, the memory 604 may reside external to the processor chipset (e.g., remote to the processor 600) .
- the memory 604 may store computer-readable, computer-executable code including instructions that, when executed by the processor 600, cause the processor 600 to perform various functions described herein.
- the code may be stored in a non-transitory computer-readable medium such as system memory or another type of memory.
- the controller 602 and/or the processor 600 may be configured to execute computer-readable instructions stored in the memory 604 to cause the processor 600 to perform various functions.
- the processor 600 and/or the controller 602 may be coupled with or to the memory 604, and the processor 600, the controller 602, and the memory 604 may be configured to perform various functions described herein.
- the processor 600 may include multiple processors and the memory 604 may include multiple memories. One or more of the multiple processors may be coupled with one or more of the multiple memories, which may, individually or collectively, be configured to perform various functions herein.
- the one or more ALUs 600 may be configured to support various operations in accordance with examples as described herein.
- the one or more ALUs 600 may reside within or on a processor chipset (e.g., the processor 600) .
- the one or more ALUs 600 may reside external to the processor chipset (e.g., the processor 600) .
- One or more ALUs 600 may perform one or more computations such as addition, subtraction, multiplication, and division on data.
- one or more ALUs 600 may receive input operands and an operation code, which determines an operation to be executed.
- One or more ALUs 600 may be configured with a variety of logical and arithmetic circuits, including adders, subtractors, shifters, and logic gates, to process and manipulate the data according to the operation. Additionally, or alternatively, the one or more ALUs 600 may support logical operations such as AND, OR, exclusive-OR (XOR) , not-OR (NOR) , and not-AND (NAND) , enabling the one or more ALUs 600 to handle conditional operations, comparisons, and bitwise operations.
- the processor 600 may support wireless communication in accordance with examples as disclosed herein.
- the processor 600 may be configured to or operable to support a means for receiving, from a network entity, a parameter configuration for monitoring performance of an AI/ML functionality or performance of an AI/ML model; and a means for transmitting, to the network entity, a result of the monitoring of the performance of the AI/ML functionality or the performance of the AI/ML model based on the parameter configuration.
- the processor 600 may be configured to or operable to support a means for transmitting, to a UE, a parameter configuration for monitoring performance of an AI/ML functionality or performance of an AI/ML model; and a means for receiving, from the UE, a result of the monitoring.
- FIG. 7 illustrates a flowchart of a method 700 that supports performance monitoring for AI/ML functionality or AI/ML model in accordance with aspects of the present disclosure.
- the operations of the method 700 may be implemented by a device or its components as described herein.
- the operations of the method 700 may be performed by a UE 104 as described herein.
- the device may execute a set of instructions to control the function elements of the device to perform the described functions. Additionally, or alternatively, the device may perform aspects of the described functions using special-purpose hardware.
- the method may include receiving, from a base station, a parameter configuration for monitoring performance of an AI/ML functionality or performance of an AI/ML model.
- the operations of 705 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 705 may be performed by a device as described with reference to FIG. 1.
- the method may include transmitting, to the base station, a result of the monitoring of the performance of the AI/ML functionality or the performance of the AI/ML model based on the parameter configuration.
- the operations of 710 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 710 may be performed by a device as described with reference to FIG. 1.
- FIG. 8 illustrates a flowchart of a method 800 that supports performance monitoring for AI/ML functionality or AI/ML model in accordance with aspects of the present disclosure.
- the operations of the method 800 may be implemented by a device or its components as described herein.
- the operations of the method 800 may be performed by a UE 104 as described herein.
- the device may execute a set of instructions to control the function elements of the device to perform the described functions. Additionally, or alternatively, the device may perform aspects of the described functions using special-purpose hardware.
- the method 800 may be implemented as a specific example of the step 710 in FIG. 7.
- the method may include transmitting, to the base station, one of the following: an indication of an event that the AI/ML functionality is applicable in the case that the default AI/ML model and the AI/ML model associated with the AI/ML functionality are applicable; an indication of an event that the AI/ML functionality is applicable with the default AI/ML model in the case that the default AI/ML model is applicable while the AI/ML model associated with the AI/ML functionality is not applicable; or an indication of an event that the AI/ML functionality is not applicable in the case that both the default AI/ML model and the AI/ML model associated with the AI/ML functionality are not applicable.
- the operations of 805 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 805 may be performed by a device as described with reference to FIG. 1.
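The three-way event indication described for step 805 amounts to a small decision function over the applicability of the default AI/ML model and the AI/ML model associated with the functionality. A sketch follows; the event names are illustrative, and the remaining combination (associated model applicable while the default model is not) is not enumerated in the description above, so it is flagged explicitly.

```python
def applicability_event(default_model_ok: bool, assoc_model_ok: bool) -> str:
    """Map model applicability to one of the three event indications of
    step 805 (event names are illustrative, not from the disclosure)."""
    if default_model_ok and assoc_model_ok:
        # Both models applicable: functionality is applicable.
        return "FUNCTIONALITY_APPLICABLE"
    if default_model_ok:
        # Only the default model applicable.
        return "FUNCTIONALITY_APPLICABLE_WITH_DEFAULT_MODEL"
    if not assoc_model_ok:
        # Neither model applicable.
        return "FUNCTIONALITY_NOT_APPLICABLE"
    # Associated model applicable but default model not: this combination
    # is not enumerated in the description above.
    raise ValueError("combination not covered by the described events")
```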
- FIG. 9 illustrates a flowchart of a method 900 that supports performance monitoring for AI/ML functionality or AI/ML model in accordance with aspects of the present disclosure.
- the operations of the method 900 may be implemented by a device or its components as described herein.
- the operations of the method 900 may be performed by a network entity 102 as described herein.
- the device may execute a set of instructions to control the function elements of the device to perform the described functions. Additionally, or alternatively, the device may perform aspects of the described functions using special-purpose hardware.
- the method may include transmitting, to a UE, a parameter configuration for monitoring performance of an AI/ML functionality or performance of an AI/ML model.
- the operations of 905 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 905 may be performed by a device as described with reference to FIG. 1.
- the method may include receiving, from the UE, a result of the monitoring.
- the operations of 910 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 910 may be performed by a device as described with reference to FIG. 1.
- FIG. 10 illustrates a flowchart of a method 1000 that supports performance monitoring for AI/ML functionality or AI/ML model in accordance with aspects of the present disclosure.
- the operations of the method 1000 may be implemented by a device or its components as described herein.
- the operations of the method 1000 may be performed by a network entity 102 as described herein.
- the device may execute a set of instructions to control the function elements of the device to perform the described functions. Additionally, or alternatively, the device may perform aspects of the described functions using special-purpose hardware.
- the method 1000 may be implemented as a specific example of the step 910 in FIG. 9.
- the method may include receiving, from the UE, one of the following: an indication of an event that the AI/ML functionality is applicable; an indication of an event that the AI/ML functionality is applicable but requires a configuration for inference; or an indication of an event that the AI/ML functionality is not applicable.
- the operations of 1005 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 1005 may be performed by a device as described with reference to FIG. 1.
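On the network side, step 1005 receives one of three indications from the UE. A minimal sketch of how a network entity might react to each is shown below; the mapped actions are assumptions for illustration and are not specified by the disclosure.

```python
def handle_ue_indication(event: str) -> str:
    """Illustrative network-entity reaction to the three event
    indications of step 1005; both the event names and the mapped
    actions are assumptions."""
    actions = {
        "APPLICABLE": "keep the current inference configuration",
        "APPLICABLE_NEEDS_INFERENCE_CONFIG": "transmit a configuration for inference",
        "NOT_APPLICABLE": "fall back to non-AI/ML operation",
    }
    # Unrecognized indications default to requesting a fresh report.
    return actions.get(event, "request a new monitoring report")
```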
- a general-purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine.
- a processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).
- the functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Other examples and implementations are within the scope of the disclosure and appended claims. For example, due to the nature of software, functions described herein may be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.
- Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
- a non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special-purpose computer.
- non-transitory computer-readable media may include RAM, ROM, electrically erasable programmable ROM (EEPROM) , flash memory, compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that may be used to carry or store desired program code means in the form of instructions or data structures and that may be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor.
- an article “a” before an element is unrestricted and understood to refer to “at least one” of those elements or “one or more” of those elements.
- the terms “a, ” “at least one, ” “one or more, ” and “at least one of one or more” may be interchangeable.
- a list of items indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C) .
- the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an example step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure.
- the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.”
- a “set” may include one or more elements.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
Various aspects of the present disclosure relate to performance monitoring for AI/ML functionality or an AI/ML model. In one aspect, a UE receives, from a base station, a parameter configuration for monitoring performance of an artificial intelligence/machine learning (AI/ML) functionality or performance of an AI/ML model. The UE transmits, to the base station, a result of the monitoring of the performance of the AI/ML functionality or the performance of the AI/ML model based on the parameter configuration. In this manner, a performance monitoring framework for AI/ML functionality or an AI/ML model can be designed.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2024/070851 WO2024239663A1 (fr) | 2024-01-05 | 2024-01-05 | Performance monitoring for AI/ML functionality or AI/ML model |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2024/070851 WO2024239663A1 (fr) | 2024-01-05 | 2024-01-05 | Performance monitoring for AI/ML functionality or AI/ML model |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024239663A1 (fr) | 2024-11-28 |
Family
ID=93588867
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2024/070851 WO2024239663A1 (fr) (pending) | Performance monitoring for AI/ML functionality or AI/ML model | 2024-01-05 | 2024-01-05 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2024239663A1 (fr) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114930356A (zh) * | 2020-12-11 | 2022-08-19 | UiPath, Inc. | Supplementing artificial intelligence (AI)/machine learning (ML) models via action center, AI/ML model retraining hardware control, and AI/ML model settings management |
| WO2023211041A1 (fr) * | 2022-04-28 | 2023-11-02 | LG Electronics Inc. | Method and apparatus for transmitting and receiving a signal in a wireless communication system |
| WO2023211356A1 (fr) * | 2022-04-29 | 2023-11-02 | Telefonaktiebolaget LM Ericsson (Publ) | User equipment machine learning functionality monitoring |
- 2024-01-05: WO application PCT/CN2024/070851 filed (published as WO2024239663A1), status: active, pending
Non-Patent Citations (1)
| Title |
|---|
| YAN CHENG, HUAWEI, HISILICON: "Discussion on general aspects of AI/ML framework", 3GPP Draft R1-2300107 (discussion), FS_NR_AIML_AIR, 3rd Generation Partnership Project (3GPP), Mobile Competence Centre, 650 route des Lucioles, F-06921 Sophia-Antipolis Cedex, France, vol. RAN WG1, Athens, GR, 27 February - 3 March 2023, published 17 February 2023 (2023-02-17), XP052247260 * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2024187797A1 (fr) | | Devices and methods for data collection |
| WO2024239690A1 (fr) | | Methods for handling an unsuccessful computing task |
| WO2024239709A1 (fr) | | Beam reporting |
| WO2024193228A1 (fr) | | Method and apparatus for supporting artificial intelligence (AI) for wireless communications |
| WO2024260001A1 (fr) | | Performance monitoring for beam prediction |
| WO2024093428A1 (fr) | | Mechanism for CHO with candidate SCGs |
| WO2024239663A1 (fr) | | Performance monitoring for AI/ML functionality or AI/ML model |
| WO2025156689A1 (fr) | | Monitoring report |
| WO2024183486A1 (fr) | | Method and apparatus for supporting artificial intelligence (AI) for wireless communications |
| WO2024213187A1 (fr) | | Network-side additional condition indication |
| WO2024239712A1 (fr) | | Service continuity for sensing a target |
| WO2025107685A1 (fr) | | Configuration enhancements |
| WO2025097820A1 (fr) | | Measurement reporting in a layer 3 message |
| WO2025107605A1 (fr) | | Beam reporting |
| WO2025123698A1 (fr) | | AI/ML-enabled timing advance prediction |
| WO2025011017A1 (fr) | | Service continuity in integrated sensing and communication (ISAC) |
| WO2024159790A1 (fr) | | Bandwidth aggregation for positioning |
| WO2025251681A1 (fr) | | Prediction associated with a measurement |
| WO2025145705A1 (fr) | | Measurement gap occasion skipping |
| WO2025123703A1 (fr) | | Communication associated with a measurement result |
| WO2025241619A1 (fr) | | L1 event prediction management |
| WO2025213846A1 (fr) | | Sensing signal transmission |
| WO2025086698A1 (fr) | | Recording of layer 3 measurement results |
| WO2025241540A1 (fr) | | Conditional layer 1/layer 2 triggered mobility |
| WO2025107718A1 (fr) | | Radio link failure prediction |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 24809914; Country of ref document: EP; Kind code of ref document: A1 |