
WO2025030349A1 - User equipment, base station, and method for processing artificial intelligence-related user equipment capability reporting - Google Patents

User equipment, base station, and method for processing artificial intelligence-related user equipment capability reporting

Info

Publication number
WO2025030349A1
Authority
WO
WIPO (PCT)
Prior art keywords
capability
information
reporting
configuration
capability reporting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2023/111600
Other languages
French (fr)
Inventor
Miao Qu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen TCL New Technology Co Ltd
Original Assignee
Shenzhen TCL New Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen TCL New Technology Co Ltd filed Critical Shenzhen TCL New Technology Co Ltd
Priority to PCT/CN2023/111600 priority Critical patent/WO2025030349A1/en
Publication of WO2025030349A1 publication Critical patent/WO2025030349A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W8/00: Network data management
    • H04W8/22: Processing or transfer of terminal data, e.g. status or physical capabilities
    • H04W8/24: Transfer of terminal data
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/445: Program loading or initiating

Definitions

  • the present disclosure relates to the field of communication systems, and more particularly, to a method for processing artificial intelligence (AI) -related user equipment (UE) capability reporting and a wireless communication device.
  • the UE capability reporting mechanism concerning AI/ML methods can be classified into two categories, as outlined below:
  • Category 1: Static UE capability reporting mechanism.
  • Category 2: Dynamic UE capability reporting mechanism.
  • Category 1 involves AI/ML features or feature groups (FGs) as well as life cycle management (LCM) operations that can be reported to the network (NW) via static UE capability reporting.
  • Such reporting includes AI/ML use cases, model training, model monitoring, data collection, and other related functionalities.
  • This static reporting requires a one-time submission. It appears that the existing legacy UE capability mechanism can be directly reused for this category, apart from enhancements to accommodate new AI-related features, model training, model monitoring, data collection, and other related functionalities.
  • Category 2 pertains to a novel UE capability mechanism for dynamic AI-related UE capability reporting. The necessity of this mechanism has been extensively discussed, but a complete solution has not yet been devised.
  • An object of the present disclosure is to propose a method for processing AI-related UE capability reporting, a user equipment (UE) , and a base station.
  • an embodiment of the invention provides an artificial intelligence (AI) -related user equipment (UE) capability reporting method executable in a user equipment (UE) , comprising:
  • an embodiment of the invention provides a user equipment (UE) comprising a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the disclosed method.
  • an embodiment of the invention provides a method for processing artificial intelligence (AI) -related user equipment (UE) capability reporting executable in a user equipment (UE) , comprising:
  • an embodiment of the invention provides a base station comprising a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the disclosed method.
  • the disclosed method may be implemented in a chip.
  • the chip may include a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the disclosed method.
  • the disclosed method may be programmed as computer-executable instructions stored in non-transitory computer-readable medium.
  • the non-transitory computer-readable medium when loaded to a computer, directs a processor of the computer to execute the disclosed method.
  • the non-transitory computer-readable medium may comprise at least one from a group consisting of: a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a Read Only Memory, a Programmable Read Only Memory, an Erasable Programmable Read Only Memory, EPROM, an Electrically Erasable Programmable Read Only Memory and a Flash memory.
  • the disclosed method may be programmed as a computer program product, which causes a computer to execute the disclosed method.
  • the disclosed method may be programmed as a computer program, which causes a computer to execute the disclosed method.
  • This disclosed method introduces a comprehensive procedure for AI-related UE capability reporting, which has not been discussed in the current study.
  • the disclosed method ensures efficient system operation by utilizing AI-enabled techniques.
  • the disclosed method efficiently conserves network resources and reduces signaling overhead.
  • FIG. 1 illustrates a schematic view showing a wireless communication system comprising a user equipment (UE) , a base station, and a network entity.
  • FIG. 2 illustrates a schematic view showing a system with an AI/ML functional framework for executing a method for processing AI-related UE capability reporting using ML models.
  • FIG. 3 illustrates a schematic view showing an embodiment of the disclosed method.
  • FIG. 4 illustrates a schematic view showing an example of a procedure of AI-related information reporting.
  • FIG. 5 illustrates a schematic view showing another example of a procedure of AI-related information reporting.
  • FIG. 6 illustrates a schematic view showing another example of a procedure of AI-related information reporting.
  • FIG. 7 illustrates a schematic view showing an example of an AI-specific BSR MAC CE.
  • FIG. 8 illustrates a schematic view showing another example of an AI-specific BSR MAC CE.
  • FIG. 9 illustrates a schematic view showing an example of AI-specific MAC CE that carries AI-specific data.
  • FIG. 10 illustrates a schematic view showing another example of AI-specific MAC CE that carries AI-specific data.
  • FIG. 11 illustrates a schematic view showing an example of a procedure of AI-related UE capability reporting.
  • FIG. 12 illustrates a schematic view showing examples of triggering time.
  • FIG. 13 illustrates a schematic view showing a system for wireless communication according to an embodiment of the present disclosure.
  • This invention provides a method for AI-related User Equipment (UE) capability reporting. Specifically, the invention proposes possible procedures, signaling, and corresponding elements that efficiently conserve network resources and reduce signaling overhead.
  • Embodiments of the disclosure are related to artificial intelligence (AI) and machine learning (ML) for wireless communication systems, such as the LTE or new radio (NR) air interface, and address problems of AI-related UE capability reporting.
  • AI-related UE capability reporting in the disclosure may be static AI-related UE capability reporting or dynamic AI-related UE capability reporting.
  • Static AI-related UE capability reporting is one-time AI-related UE capability reporting conveyed in a reporting message.
  • Dynamic AI-related UE capability reporting may comprise multiple transmissions of AI-related UE capability reporting conveyed in multiple reporting messages.
  • information of AI-related UE capability reporting includes applicable functionality, applicable model, additional conditions, UE internal conditions, and/or their related information, such as model ID, model structure, functionality ID, component of functionality and others.
  • the applicable model/functionality means the model/functionality that is currently applicable at the UE among the identified models/functionalities.
  • the additional conditions include scenarios, sites, cell, zone, datasets, and pair information.
  • UE internal conditions may include UE memory usage, battery status, computing power and other hardware limitations.
  • Model ID is used to indicate the model.
  • Model structure may include the model layer, node, tree node, coefficients and others.
  • Functionality ID is used to indicate a functionality. Component of functionality indicates conditions of the functionality.
  • reporting information of the reporting mechanism is not limited to AI-related UE capability information, but can also be used for other AI-specific data/information.
  • AI-related UE capability information may comprise but is not limited to static and/or dynamic AI-related UE capability information.
  • Functionality refers to an AI/ML-enabled Feature/FG enabled by configuration(s), where configurations are the conditions in the form of RRC/LPP IE(s) reported by UE capability.
  • a functionality refers to a specific configuration of the Feature/FG or a set of configurations of Feature/FG and may serve as a unit of activation/deactivation/switching in functionality-based LCM.
  • AI/ML model monitoring is the process of tracking and evaluating the performance of one or more AI/ML models and/or LCM over time.
  • the AI/ML model monitoring (for example, by a monitoring agent) collects data as monitoring input, which may be at least one of: inference accuracy, system throughput, spectrum efficiency, NACK ratio, ACK ratio, SINR, RSRP, RSRQ, beam failure, link failure, positioning, intra-cell interference, inter-cell interference, BLER, MSER and FAR, performance loss, performance gain, and channel information (e.g., real-time channel matrix, long-term channel matrix, CSI, eigen-vector, etc.).
  • a decision agent can make informed decisions to maintain the AI/ML model, and/or take corrective actions of LCM (e.g., selecting, switching, activating, deactivating, or falling back to different AI/ML models or non-AI modules) to address any issues or anomalies detected via model monitoring.
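The corrective actions above can be illustrated with a minimal decision-agent sketch. The function name, metric keys, and thresholds below are assumptions for illustration only and are not part of the disclosure.

```python
# Hypothetical sketch of a decision agent mapping monitoring input to an LCM
# corrective action (selecting/switching/activating/deactivating/fallback).
# Metric keys and thresholds are illustrative assumptions.

def decide_lcm_action(monitoring_input: dict, bler_threshold: float = 0.1,
                      accuracy_threshold: float = 0.9) -> str:
    """Map collected monitoring metrics to an LCM corrective action."""
    if monitoring_input.get("link_failure") or monitoring_input.get("beam_failure"):
        return "fallback_to_non_ai"          # severe anomaly: fall back to a non-AI module
    if monitoring_input.get("bler", 0.0) > bler_threshold:
        return "switch_model"                # degraded performance: switch to another model
    if monitoring_input.get("inference_accuracy", 1.0) < accuracy_threshold:
        return "deactivate_model"            # inaccurate model: deactivate it
    return "keep_active"                     # metrics healthy: no corrective action

print(decide_lcm_action({"bler": 0.2}))      # -> switch_model
```

A monitoring agent would feed the collected metrics (BLER, inference accuracy, link/beam failure indications, etc.) into such a function after each monitoring round.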
  • AI/ML models can be installed and executed in one or more UE(s) and in the NW (e.g., a base station, LMF, etc.).
  • one or more AI/ML models may be installed and executed in the UE 10 and/or in a NW 20, wherein the AI/ML model(s) are used for different features and/or functions, considering one-sided and/or two-sided models.
  • a telecommunication system including a UE 10a, a UE 10b, a base station (BS) 20a, and a network entity device 30 executes the disclosed method according to an embodiment of the present disclosure.
  • FIG. 1 is illustrative, not limiting, and the system may comprise more UEs, BSs, and CN entities. Connections between devices and device components are shown as lines and arrows in the FIGs.
  • the base station 20a can operate as a gNB for 5G NR networks, an eNB for LTE networks, or a base station for future mobile network systems beyond 5G.
  • a gNB is a 5G radio network node that connects to the core network via the NG interface.
  • An eNB is a 4G radio network node that connects to the evolved packet core via the S1 interface.
  • a base station for beyond 5G may be a smart virtual eNB (SVeNB) that can perform functions of EPS elements and reduce end-to-end delay.
  • the UE 10a may include a processor 11a, a memory 12a, and a transceiver 13a.
  • the UE 10b may include a processor 11b, a memory 12b, and a transceiver 13b.
  • the base station 20a may include a processor 21a, a memory 22a, and a transceiver 23a.
  • the network entity device 30 may include a processor 31, a memory 32, and a transceiver 33.
  • Each of the processors 11a, 11b, 21a, and 31 may be configured to implement proposed functions, procedures and/or methods described in the description. Layers of radio interface protocol may be implemented in the processors 11a, 11b, 21a, and 31.
  • Each of the memories 12a, 12b, 22a, and 32 operatively stores a variety of programs and information to operate the connected processor.
  • Each of the transceivers 13a, 13b, 23a, and 33 is operatively coupled with a connected processor, and transmits and/or receives radio signals or wireline signals.
  • the UE 10a may be in communication with the UE 10b through a sidelink.
  • the base station 20a may be an eNB, a gNB, or one of other types of radio nodes, and may configure radio resources for the UE 10a and UE 10b.
  • Each of the processors 11a, 11b, 21a, and 31 may include an application-specific integrated circuit (ASIC), other chipsets, logic circuits and/or data processing devices.
  • Each of the memories 12a, 12b, 22a, and 32 may include read-only memory (ROM), random-access memory (RAM), a flash memory, a memory card, a storage medium and/or other storage devices.
  • Each of the transceivers 13a, 13b, 23a, and 33 may include baseband circuitry and radio frequency (RF) circuitry to process radio frequency signals.
  • the network entity device 30 may be a node in a CN.
  • the CN may include an LTE CN or a 5G core (5GC), which includes a user plane function (UPF), session management function (SMF), access and mobility management function (AMF), unified data management (UDM), policy control function (PCF), control plane (CP)/user plane (UP) separation (CUPS), authentication server function (AUSF), network slice selection function (NSSF), and network exposure function (NEF).
  • An example of the UE (e.g., UE 10 in FIG. 2) in the description may include one of the UE 10a or UE 10b.
  • An example of the base station in the description may include the base station 20a.
  • the NW 20 may be one or more entities, such as a base station or a gNB, in a radio access network (RAN), or one or more entities in a 5G core network (5GC).
  • An entity in the RAN may comprise a gNB or any other type of base station, such as an eNB or a base station for beyond 5G.
  • Uplink (UL) transmission of a control signal or data may be a transmission operation from a UE to a base station.
  • Downlink (DL) transmission of a control signal or data may be a transmission operation from a base station to a UE.
  • a DL control signal may comprise downlink control information (DCI) or a radio resource control (RRC) signal, from a base station to a UE.
  • a general functional framework of AI/ML is used to show the logical relationship among LCM actions.
  • a possible AI/ML functional framework comprising a system 100 is shown in FIG. 2. At least a portion or all of the units may execute in a UE (e.g., UE 10 in FIG. 3) of the disclosure.
  • the general AI/ML functional framework should include the following functional blocks:
  • a data collection unit 1101 is a functional block used for collection of the measurements/data for various actions of life cycle management (LCM), such as the model training unit 1102, the model inference unit 1104, and the model management/performance monitoring unit 1107.
  • the data collection unit 1101 works for one-sided and/or two-sided models.
  • a model training unit 1102 is a functional block used for performing model training.
  • the model training unit 1102 may perform model training, validation, test, and finally produces a trained model.
  • a model management/performance monitoring unit 1107 is a functional block used for performing model management, including one or more of the following: model monitoring, selection, activating, deactivating, switching, and fallback. Moreover, this functional block may provide control signaling to the model inference unit 1104, the model training unit 1102, or the model storage unit 1106.
  • a model inference unit 1104 is a functional block used for performing model inference. This functional block produces the inference output of AI/ML models.
  • the model inference unit 1104 uses the data from data collection as input and uses the trained AI/ML model given by model training unit 1102 to provide a set of inference output.
  • the output of the model inference unit can trigger a set of AI/ML model actions and can also be an input for the model monitoring function.
  • a model storage unit 1106 is a functional block that is used to store the trained/updated/retrained/fine-tuned models.
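The logical relationship among the functional blocks above can be sketched as follows. All class names and the toy data are hypothetical; the code only mirrors the dataflow of FIG. 2 (collection → training → storage, then inference feeding monitoring).

```python
# Illustrative wiring of the functional blocks in FIG. 2: data collection 1101,
# model training 1102, model inference 1104, model storage 1106, and model
# management/performance monitoring 1107. All classes are hypothetical sketches.

class DataCollection:                        # unit 1101
    def collect(self):
        return [0.1, 0.2, 0.3]               # measurements/data for LCM actions

class ModelTraining:                         # unit 1102
    def train(self, data):
        return {"weights": sum(data)}        # produces a trained model

class ModelInference:                        # unit 1104
    def infer(self, model, data):
        return [model["weights"] * x for x in data]   # inference output

class ModelStorage:                          # unit 1106
    def __init__(self):
        self.models = []
    def store(self, model):
        self.models.append(model)            # stores trained/updated models

class Monitoring:                            # unit 1107
    def monitor(self, output):
        return "keep" if output else "fallback"   # control-signaling decision

collection, training = DataCollection(), ModelTraining()
inference, storage, monitoring = ModelInference(), ModelStorage(), Monitoring()

data = collection.collect()                  # 1101 -> 1102
model = training.train(data)                 # 1102 -> 1106
storage.store(model)
print(monitoring.monitor(inference.infer(model, data)))   # 1104 -> 1107, prints "keep"
```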
  • FIG. 3 shows an embodiment of the disclosed method.
  • a user equipment (UE) may execute a method for processing AI-related UE capability reporting.
  • the UE 10 performs AI-related UE capability reporting to report AI-related UE capability information 400 of the UE 10 using dynamic grant (DG) radio resource, configured grant (CG) radio resource, or semi-persistent scheduling (SPS) resources (S001) .
  • the NW 20 receives the AI-related UE capability information 400 of a user equipment (UE) reported through AI-related UE capability reporting (S002) .
  • the NW 20 comprises a base station, such as a gNB.
  • the UE collects AI-related UE capability information.
  • the collecting AI-related UE capability information may include detecting change of AI-related UE capability information of the UE; and reporting the changed AI-related UE capability information.
  • the AI-related UE capability reporting may be performed in response to a message transmitted from a network device, such as the base station.
  • the message comprises a request or configuration message for AI-related UE capability reporting.
  • the AI-related UE capability information may be transmitted in a UAI message, an RRCReconfigurationComplete message, an RRCResumeComplete message, or a dedicated message.
  • the dedicated message is used to convey the AI-related UE capability information.
  • a specific purpose (newly defined) of the UAI message may be used to indicate that the UAI carries the AI-related UE capability information.
  • the configuration of the UAI message may be transmitted by the base station and conveyed in an RRCReconfiguration message.
  • configuration of the UAI message may include one or more of:
  • a memory usage configuration for the UE to report AI-related UE capability information of changes in memory usage detected by the UE;
  • a scenario configuration for the UE to report AI-related UE capability information of changes in a scenario detected by the UE;
  • a site configuration for the UE to report AI-related UE capability information of changes in a site detected by the UE;
  • a cell configuration for the UE to report AI-related UE capability information of changes in a cell detected by the UE; and
  • a zone configuration for the UE to report AI-related UE capability information of changes in a zone detected by the UE.
  • the configuration of the UAI message may include a prohibit timer for each type of AI-related UE capability information.
  • the applicable model configuration may include one or both of a maximum number of applicable models and a minimum number of applicable models.
  • the applicable functionality configuration may include one or both of a maximum number of applicable functionalities and a minimum number of applicable functionalities.
  • the base station transmits configuration of AI-related UE capability reporting, and the UE receives configuration of AI-related UE capability reporting.
  • the configuration of AI-related UE capability reporting may include one or more instances of the following information:
  • an AI-related capability configured ID: a configuration or information used to identify an AI-related capability configuration;
  • a requested AI-related UE capability: a configuration or information that indicates one or more instances of AI-related UE capability information requested by a network;
  • an AI-related UE capability reporting configuration ID: a configuration or information used to identify an AI-related UE capability reporting configuration;
  • an AI-related UE capability reporting type: a configuration or information used to indicate one of the AI-related UE capability reporting types;
  • a reference signaling type: a configuration or information used to indicate a type of reference signaling;
  • a time to trigger: a configuration or information that specifies the time during which specific criteria for the trigger event need to be met in order to trigger an AI-related UE capability reporting;
  • an activating time offset: a configuration or information that specifies a time offset according to which an AI-related operation may be activated at the specified time offset after a trigger event;
  • a duration time of AI-related UE capability reporting: a configuration or information that specifies a duration of time during which the UE may report at least one requested AI-related UE capability;
  • a server identification: a configuration or information used to identify a server which may be capable of AI model management;
  • a reason of requesting AI-related UE capability: a configuration or information that indicates a reason for requesting AI-related UE capability information;
  • a threshold: a configuration or information that specifies a newly defined threshold for detecting change of memory usage of the UE.
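The configuration fields listed above can be gathered into a single container, sketched below. The field names and types are illustrative assumptions, not the IE names of the disclosure.

```python
# Hypothetical container mirroring the AI-related UE capability reporting
# configuration fields listed above. Field names/types are assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AiUeCapabilityReportingConfig:
    capability_configured_id: int                    # identifies a capability configuration
    reporting_config_id: int                         # identifies this reporting configuration
    reporting_type: str                              # e.g. "periodic" or "event-triggered"
    requested_capabilities: List[str] = field(default_factory=list)
    reference_signaling_type: Optional[str] = None
    time_to_trigger_ms: Optional[int] = None         # criteria must hold this long to trigger
    activating_time_offset_ms: Optional[int] = None  # activation delay after a trigger event
    reporting_duration_ms: Optional[int] = None      # how long the UE may report
    server_id: Optional[int] = None                  # server capable of AI model management
    request_reason: Optional[str] = None             # reason for requesting the capability
    memory_usage_threshold: Optional[float] = None   # for detecting memory-usage change

cfg = AiUeCapabilityReportingConfig(
    capability_configured_id=1, reporting_config_id=7,
    reporting_type="event-triggered",
    requested_capabilities=["applicableModel", "UEmemoryusage"],
    time_to_trigger_ms=640, memory_usage_threshold=0.2)
print(cfg.reporting_type)   # -> event-triggered
```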
  • each UE capability in the (requested) AI-related UE capability may be a capability element and identified at element-based granularity.
  • a set/group of UE capabilities in the requested AI-related UE capability may be a capability set/group and identified at set/group-based granularity.
  • the AI-related UE capability reporting types comprise periodic AI-related information reporting and event-triggered AI-related information reporting.
  • the base station transmits AI-related capability reporting activation/deactivation.
  • the UE receives AI-related capability reporting activation/deactivation.
  • the AI-related UE capability reporting may be activated in response to the received activation or deactivated in response to the received deactivation.
  • the AI-related capability reporting activation/deactivation may include one or more of:
  • the UE transmits a scheduling request (SR) for radio resource for AI-related UE capability reporting.
  • the base station receives the SR.
  • the scheduling request may be configured with a new scheduling request instance that indicates an SR configuration corresponding to AI-specific data/information.
  • the scheduling request may be a legacy scheduling request with a parameter used to indicate that the data/information of the new transmission associated with the scheduling request is AI-specific data/information for AI-related UE capability reporting.
  • the scheduling request may be a (newly defined) AI-specific SR which may be used for requesting uplink radio resources for AI-specific data/information.
  • a logical channel (LCH) or logical channel group (LCG) may correspond to an SR configuration or an AI-specific SR configuration for the transmitted data/information; the LCH or LCG implicitly indicates that the transmitted data/information is AI-specific data/information.
  • a logical channel (LCH) or logical channel group (LCG) may be included in a new AI-specific buffer status reporting (BSR) Medium Access Control (MAC) control element (CE) for the transmitted data/information; the LCH or LCG implicitly indicates that the transmitted data/information is AI-specific data/information.
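An AI-specific BSR MAC CE of the kind referenced above (see FIGs. 7-8) could, for example, pack an LCG ID and a buffer size index into one octet. The 3-bit/5-bit split below mirrors the legacy short BSR format and is an assumption; the disclosure does not fix the field sizes here.

```python
# Hypothetical one-octet AI-specific BSR MAC CE: LCG ID (3 bits) followed by
# a buffer size index (5 bits). The field widths are assumptions modeled on
# the legacy short BSR format, not a format defined by the disclosure.

def pack_ai_bsr(lcg_id: int, buffer_size_index: int) -> bytes:
    """Pack LCG ID (3 bits) and buffer size index (5 bits) into one octet."""
    if not 0 <= lcg_id < 8 or not 0 <= buffer_size_index < 32:
        raise ValueError("field out of range")
    return bytes([(lcg_id << 5) | buffer_size_index])

def unpack_ai_bsr(octet: bytes):
    """Recover (lcg_id, buffer_size_index) from the packed octet."""
    return octet[0] >> 5, octet[0] & 0x1F

ce = pack_ai_bsr(lcg_id=3, buffer_size_index=17)
print(unpack_ai_bsr(ce))   # -> (3, 17)
```

Including such a CE in an uplink MAC PDU would implicitly tell the network that the buffered data on that LCG is AI-specific.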
  • the reported AI-related UE capability information may be conveyed in a MAC CE.
  • configuration of AI-related UE capability reporting may include one or more of: information that indicates an interval between two adjacent periodic AI-related UE capability reports;
  • a trigger event triggers the AI-related UE capability reporting.
  • the trigger event occurs when the UE detects a change to the applicable model(s) and/or applicable functionality/functionalities of the UE; or after the applicable model(s) and/or applicable functionality/functionalities have been changed and the change persists for a specified period; or when the applicable model(s) and/or applicable functionality/functionalities have been changed and the change persists for the specified duration while the number of changed applicable models and/or applicable functionalities exceeds a maximum/minimum number of applicable models or applicable functionalities.
  • a trigger event triggers the AI-related UE capability reporting; the trigger event occurs when the change of UE memory usage becomes greater than a memory usage threshold.
  • a trigger event triggers the AI-related UE capability reporting.
  • An entering condition of the event is satisfied when a UE-detected change of UE memory usage minus a hysteresis parameter is less than a memory usage threshold.
  • a leaving condition of the event is satisfied when the UE-detected change of UE memory usage plus the hysteresis parameter is greater than a memory usage threshold.
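The entering and leaving conditions can be written out directly. The symbol names below are assumptions (change = UE-detected change of memory usage, hys = hysteresis parameter, thresh = memory usage threshold), and the inequality directions follow the text above literally.

```python
# Entering/leaving conditions for the memory-usage trigger event, transcribed
# literally from the text above. Symbol names are illustrative assumptions.

def entering_condition(change: float, hys: float, thresh: float) -> bool:
    return change - hys < thresh        # entering: change - hysteresis < threshold

def leaving_condition(change: float, hys: float, thresh: float) -> bool:
    return change + hys > thresh        # leaving: change + hysteresis > threshold

print(entering_condition(0.10, 0.02, 0.15))   # -> True
print(leaving_condition(0.18, 0.02, 0.15))    # -> True
```

The hysteresis parameter keeps the UE from oscillating between entering and leaving the event when the measured change hovers near the threshold.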
  • Embodiment 1: Procedure of AI-related UE capability reporting
  • AI-related UE capability can be reported through a UAI message, an RRCReconfigurationComplete/RRCResumeComplete message, or another newly defined dedicated message. Embodiments of procedures of AI-related UE capability reporting are detailed in the following.
  • Embodiment 1.1: An approach of AI-related UE capability reporting based on UAI
  • the AI-related UE capability information can be carried in a UAI message from the UE to the NW. Since the AI-related UE capability is a new concept or a new feature, in order to achieve information synchronization between the UE and the NW via the UAI message, the legacy UAI procedure requires enhancements, including the purpose and configuration of the UAI.
  • a specific purpose (newly defined) of this UAI procedure is for the UE to inform the network of its AI-related information, including AI-related UE capability information.
  • the new configuration information of RRCReconfiguration includes one or more of the following:
  • applicableModelconfig: An applicable model configuration is a configuration for the UE to report AI-related UE capability information and to notify the network of changes in the applicable model detected by the UE.
  • An applicable functionality configuration is a configuration for the UE to report AI-related UE capability information and to notify the network of changes in the applicable functionality detected by the UE.
  • a memory usage configuration is a configuration for the UE to report AI-related UE capability information and to notify the network of changes in memory usage detected by the UE.
  • a UE battery configuration is a configuration for the UE to report AI-related UE capability information and to notify the network of changes in the UE battery status detected by the UE.
  • a scenario configuration is a configuration for the UE to report AI-related UE capability information and to notify the network of changes in the scenario detected by the UE.
  • a site configuration is a configuration for the UE to report AI-related UE capability information and to notify the network of changes in the site detected by the UE.
  • datasetinfoconfig: A dataset configuration is a configuration for the UE to report AI-related UE capability information and to notify the network of changes in the dataset detected by the UE.
  • a cell configuration is a configuration for the UE to report AI-related UE capability information and to notify the network of changes in the cell detected by the UE.
  • Zoneconfig is a configuration for the UE to report AI-related UE capability information and to notify the network of changes in the zone detected by the UE.
  • the UE can use any combination of the above information for AI-related UE capability reporting.
  • the UE can use an information element (e.g., UEinternalcondition) to indicate all types of AI-related information (i.e., the applicable model, applicable functionality, UE memory usage, UE battery status, scenario, site, dataset, cell, and zone) or use individual IEs to report the types of AI-related information separately.
  • the following one or more parameters may be included.
  • Configuration of each type of AI-related UE capability information may include one or more of the following:
  • ProhibitTimer is a prohibit timer of AI-related UE capability information, including a prohibit timer for applicable models, or a prohibit timer of applicable functionalities, or a prohibit timer of UE memory usage, or a prohibit timer of UE battery status, or a prohibit timer of scenario, or a prohibit timer of site, or a prohibit timer of dataset.
  • Upon expiration of the prohibit timer, the UE reports the AI-related UE capability information.
  • the maximum number of applicable models/functionalities: a configuration or information that indicates the maximum number of applicable models and/or applicable functionalities that the UE can report as a portion of the AI-related information in one submission.
  • the minimum number of applicable models/functionalities: a configuration or information that indicates the minimum number of applicable models and/or applicable functionalities that the UE shall report as a portion of the AI-related information in one submission.
  • a UE capable of providing AI-related information in RRC_CONNECTED state may initiate a procedure to report AI-related information to the NW in several cases, including when it is configured to report AI-related information and/or when detecting a change in AI-related information.
  • upon initiating the procedure, the UE shall perform the pseudo code in the following table:
  • the UE determines the conditions at level 2> of the pseudo code. That is, the UE determines:
  • the UE start timer T with the timer value set to the applicableModelProhibitTimer (i.e., a value of the prohibit timer for applicable model) and initiates transmission of the UEAssistanceInformation message to provide applicable model information.
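The prohibit-timer behavior described above can be sketched as follows. This is an illustrative sketch only: the class name, the buffering of a pending report, and the timer-restart policy are assumptions made for illustration, not part of any specified signaling.

```python
class ProhibitTimerGate:
    """Gates AI-related UE capability reporting behind a prohibit timer.

    While the timer runs, changed capability information is buffered;
    when the timer expires, the pending report is released and the
    timer is restarted for the next reporting opportunity.
    """

    def __init__(self, prohibit_timer_s):
        self.prohibit_timer_s = prohibit_timer_s
        self.expires_at = 0.0   # timer not running initially
        self.pending = None     # buffered capability information

    def on_capability_change(self, info, now):
        """Return the report to transmit now, or None while prohibited."""
        if now >= self.expires_at:
            self.expires_at = now + self.prohibit_timer_s  # restart timer
            return info
        self.pending = info     # buffer until the timer expires
        return None

    def on_timer_expiry(self, now):
        """Flush any pending report once the prohibit timer has expired."""
        if self.pending is not None and now >= self.expires_at:
            info, self.pending = self.pending, None
            self.expires_at = now + self.prohibit_timer_s
            return info
        return None
```

With a 5-second prohibit timer, for example, a change arriving 2 seconds after a report is buffered and released only when the timer expires.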
  • The NW transmits a configuration of AI-related information reporting to the UE.
  • The UE receives the configuration of AI-related information reporting from the NW and provides AI-related information in RRC_CONNECTED state to the NW.
  • Step1 The UE receives a configuration of AI-related UE capability information in an RRCReconfiguration message from the NW.
  • The configuration includes an AI-related UE capability configuration that includes one or more instances of the following information elements: the specific purpose (newly defined) , applicableModelconfig, applicableFunctionalityconfig, UEmemoryusageconfig, UEbatteryconfig, scenarioconfig, siteconfig, datasetinfoconfig, cellconfig, Zoneconfig, ProhibitTimer, maximum number of applicable models/functionalities, and minimum number of applicable models/functionalities.
  • Step2 Based on the configuration of AI-related UE capability conveyed in the RRCReconfiguration message, the UE detects a change of its AI-related UE capability information, such as the applicable model, applicable functionality, scenario, etc., and sends the changed AI-related UE capability information to the NW.
  • The criterion for the change of AI-related UE capability information is defined in Embodiment 3.
  • Embodiment 1.2 An approach of AI-related UE capability reporting
  • Another example of the procedure of AI-related information reporting is provided in this embodiment, which may be handled as another snapshot capability similar to “needForGap” .
  • one or more instances of the following signaling/information may be transmitted over the air interface between NW and UE for different methods:
  • AI-related UE capability reporting configuration One or more instances of the following information may be included in the configuration of AI-related UE capability reporting (referred to as AI-related UE capability reporting configuration) : AI-related capability configured ID, Requested AI-related UE capability, AI-related UE capability reporting configuration, AI-related UE capability reporting configuration ID, AI-related UE capability reporting type, Reference signaling (RS) type, Time to trigger, Activating time offset, Duration time of AI-related UE capability reporting, Server identification, Reason of requesting AI-related UE capability, and Threshold1.
  • AI-related capability configured ID The configuration or information is used to identify an AI-related capability configuration.
  • the configuration or information provides association between a Requested AI-related UE capability and an AI-related UE capability reporting configuration.
  • the configuration or information indicates one or more instances of the AI-related UE capability information requested by the NW, such as additional conditions, UE internal conditions, applicable functionality, applicable model and others related to a UE.
  • the capabilities in the Requested AI-related UE capability can be identified based on capability element or capability group.
  • Each UE capability in the Requested AI-related UE capability is a capability element and can be identified at element-based granularity.
  • a set/group of UE capabilities in the Requested AI-related UE capability is a capability set/group and can be identified at set/group-based granularity.
  • AI-related UE capability reporting configuration ID The configuration or information is used to identify an AI-related UE capability reporting configuration.
  • AI-related UE capability reporting type The configuration or information is used to indicate one of AI-related UE capability reporting types (e.g., periodic reporting or event-triggered reporting) .
  • the AI-related UE capability reporting type may include an interval or period for AI-related information reporting. The period can be fixed or configured.
  • the AI-related UE capability reporting type may include trigger event ID and/or specify trigger events which can be fixed or configured as detailed in Embodiment 3.
  • Reference signaling (RS) type The configuration or information is used to indicate the type of reference signaling, which can be CSI-RS, SSB, AI-specific CSI-RS, or AI-specific SSB.
  • Time to trigger The configuration or information specifies the time during which specific criteria for the trigger event need to be met in order to trigger an AI-related UE capability reporting.
  • Activating time offset The configuration or information specifies a time offset according to which an AI-related operation, such as AI-related UE capability reporting, is activated a time offset after a trigger event.
  • The AI-related operation includes AI-related measurement (e.g., data collection, model monitoring, etc., for LCM operations; the measurement can be an AI-specific or legacy measurement, and can be an L1, L2, or L3 measurement) , and/or data collection for AI-related UE capability information, and/or reporting of the AI-related UE capability information.
  • Duration time of AI-related UE capability reporting The configuration or information specifies a duration of time during which the UE is to report one or more requested AI-related UE capabilities. Based on at least one of the starting time, the time duration, or a time offset, the AI-related UE capability reporting is activated as follows: 1) the NW sends the message that carries the AI-related UE capability reporting configuration; 2) the UE receives the message that carries the AI-related UE capability reporting configuration; 3) the UE is activated to report the AI-related capability; 4) the UE performs AI-related UE capability reporting.
  • Server identification The configuration or information is used to identify a server which is able to manage AI/ML models, including model training, such as OTT, OAM, MAO, etc.
  • Reason of requesting AI-related UE capability The configuration or information indicates a reason for requesting AI-related UE capability information, which can be: 1) the NW does not receive any AI-related UE capability in a period timed by a timer; 2) LCM operations are enabled; or 3) system performance becomes worse or better.
  • Threshold1 The configuration or information specifies a newly defined threshold for detecting a change in memory usage of the UE.
  • the AI-related UE capability reporting may be triggered in response to a change in memory usage exceeding the specified threshold.
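The activation of reporting based on a starting time, a time offset, and a configured duration, as described for the configuration fields above, can be sketched as follows. The function name and the half-open activation window are illustrative assumptions, not specified behavior.

```python
def reporting_active(now, start_time, duration=None, time_offset=0.0):
    """Whether AI-related UE capability reporting is currently activated.

    Activation begins a time offset after the configured starting time
    and, if a duration is configured, ends once the duration elapses.
    """
    activate_at = start_time + time_offset
    if now < activate_at:
        return False                      # not yet activated
    if duration is not None and now >= activate_at + duration:
        return False                      # reporting window has ended
    return True
```

For example, with a start time of 0, an offset of 2, and a duration of 10, reporting is active on the interval [2, 12).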
  • AI-related capability reporting activation/deactivation The configuration or information includes one or more of: trigger events as defined in Embodiment 3, AI-related capability reporting threshold, Requested AI-related UE capability, Activating time, Deactivating time, Duration time of AI-related UE capability reporting, Server identification, etc.
  • AI-related capability reporting threshold The configuration or information is detailed in Embodiment 3.
  • Requested AI-related UE capability The configuration or information specifies the AI-related UE capability requested by the NW.
  • Activating time The configuration or information specifies the activation timing of reference signaling, and/or the activation timing for the UE to gather and/or report its AI-related UE capability information, and/or LCM operations.
  • Deactivating time The configuration or information specifies the deactivation timing of reference signaling, and/or the deactivation timing for the UE to gather and/or report its AI-related UE capability information, and/or LCM operations.
  • Duration time of AI-related UE capability reporting The configuration or information specifies a duration of time during which the UE should report the requested AI-related UE capability. Based on at least one of the starting time, the time duration, or a time offset, the AI-related UE capability reporting is activated.
  • Server identification The configuration or information is used to identify a server which is able to manage AI/ML models, including model training, such as OTT, OAM, MAO, etc.
  • Resource request for AI-related UE capability reporting Operations of resource request for AI-related UE capability reporting include scheduling request (SR) , buffer status reporting (BSR) , and physical uplink shared channel (PUSCH) resource allocation (for BSR and AI-related UE capability information) .
  • AI-related UE capability reporting The field includes AI-related UE capability information for synchronization among entities in the network, and can be carried in UCI, PUCCH, PUSCH, MAC CE, or an RRC message.
  • The AI-related UE capability information may include one or more instances of the following information: AI-related UE capability reporting configuration ID, the information of the Requested AI-related UE capability, AI-related UE capability reporting type, reference signaling (RS) type, trigger event, Time to trigger, Duration time of AI-related UE capability reporting, and Threshold1.
  • The network (e.g., a gNB or one of the CN functions) can enquire the AI-related UE capability information of the UE by sending a request message to the UE.
  • the procedure is detailed in the following:
  • Step1 The UE receives a Msg1 AI-UECapability Enquiry from the NW. The Msg1 AI-UECapability Enquiry may include one or more instances of the following information: AI-related capability configured ID, Requested AI-related UE capability, AI-related UE capability reporting configuration, AI-related UE capability reporting configuration ID, AI-related UE capability reporting type, Reference signaling (RS) type, Time to trigger, Activating time offset, Duration time of AI-related UE capability reporting, Server identification, Reason of requesting AI-related UE capability.
  • The Msg1 may be a dedicated RRC message, MAC CE, or DCI, or carried on PDCCH, PDSCH, etc. Msg1 stands for the first message.
  • Step2 The UE may receive a Msg2 AI-UECapability Activation/Deactivation from the NW, which is used to deactivate/activate AI-related UE capability reporting, and/or deactivate/activate some operations for gathering AI-related UE capability information, including related measurement (AI-specific measurement, L1-measurement, L3-measurement, or newly defined measurement) , and/or data collection, and/or model monitoring.
  • Msg2 stands for the second message.
  • The Msg2 AI-UECapability Activation/Deactivation includes one or more instances of the following information: Requested AI-related UE capability, activating time, deactivating time, Duration time of AI-related UE capability reporting, and Server identification.
  • The Msg2 may be a MAC CE, DCI, or RRC message, or carried on PDCCH or PDSCH.
  • Step3 The UE gathers the enquired AI-related UE capability information, such as scenario, site, dataset, UE memory usage, battery status, applicable model, applicable functionality, etc.
  • AI-related UE capability information may comprise either UE internal information (previously existing within the UE) or information obtained by the UE through measurements, computations, sensing, etc.
  • Step4 If the criterion of the periodic AI-related information reporting and/or event-triggered AI-related information reporting is satisfied, the UE reports a Msg3 that carries the Requested AI-related UE capability information to the network.
  • Msg3 stands for third message.
  • The periodicity of the periodic AI-related information reporting and/or the trigger event of the event-triggered AI-related information reporting are configured in Msg1 and/or Msg2, or fixed/predefined.
  • One or more instances of the following information may be included in Msg3: AI-related UE capability reporting configuration ID, the information of Requested AI-related UE capability, AI-related UE capability reporting type, reference signaling (RS) type, and trigger event.
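The NW-initiated steps above (gather the enquired capabilities, then report them in Msg3 when the reporting criterion is met) can be sketched as follows. The dictionary layout, key names, and callback signatures are assumptions made for illustration only.

```python
def nw_initiated_reporting(enquiry, gather, criteria_met):
    """Sketch of the NW-initiated procedure: given a Msg1 enquiry, the
    UE gathers the enquired capability information (Step3) and builds a
    Msg3 report when the configured periodic/event-triggered reporting
    criterion is satisfied (Step4); otherwise no report is produced.

    `gather` maps a capability name to its current value; `criteria_met`
    evaluates the reporting criterion over the gathered information.
    """
    requested = enquiry["requested_capabilities"]     # from Msg1
    info = {cap: gather(cap) for cap in requested}    # Step3: gather
    if not criteria_met(info):                        # reporting criterion
        return None
    return {                                          # Step4: Msg3 contents
        "reporting_config_id": enquiry.get("config_id"),
        "capability_info": info,
    }
```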
  • Embodiment 1.3 An approach of AI-related UE capability reporting based on UE initiation.
  • FIG. 6 illustrates an example of a procedure of UE-initiated AI-related UE capability reporting, which is applicable to a UE-sided model or the UE-part of a two-sided model, based on the information definition in Embodiment 1.2.
  • the example of the procedure is detailed in the following.
  • Step1’ A UE gathers and stores the AI-related UE capability information, such as scenario, site, dataset, UE memory usage, battery status, applicable model, applicable functionality, etc. These types of AI-related UE capability information may be obtained by the UE through measurement, computing, sensing, etc.
  • Step2’ The UE reports a Msg1 that carries the AI-related UE capability information to NW based on schemes of periodic AI-related information reporting or event-triggered AI-related information reporting.
  • The periodicity of the periodic AI-related information reporting and/or the trigger event of the event-triggered AI-related information reporting is configurable, fixed/predefined, or decided by the UE itself.
  • One or more instances of the following information may be included in Msg1: information of Requested AI-related UE capability, AI-related UE capability reporting type, and reference signaling (RS) type.
  • The Msg1 can be carried in a dedicated message, such as an RRC message, DCI, etc.
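The decision between the periodic and event-triggered reporting schemes in this embodiment can be sketched as follows; the function and parameter names are illustrative assumptions.

```python
def should_report(now, last_report_time, period=None, event_pending=False):
    """Whether a UE-initiated report is due under either scheme:
    event-triggered (a trigger event is pending) or periodic (the
    configured/predefined period has elapsed since the last report).
    """
    if event_pending:
        return True                               # event-triggered scheme
    if period is not None and now - last_report_time >= period:
        return True                               # periodic scheme
    return False
```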
  • Embodiment 2 The mechanism of AI-related UE capability reporting.
  • Embodiment 1 gives some examples of the procedure of AI-related UE capability reporting. With limited radio resources, it may be necessary to study scheduling of resources for AI-related UE capability reporting.
  • the legacy scheduling mechanism has three methods for requesting Uplink Shared Channel (UL-SCH) resources including 1) Scheduling Request, 2) Buffer Status Reporting, and 3) Random Access Channel (RACH) process.
  • The UE may send an SR on PUCCH to the NW to request an uplink resource allocation for buffer status reporting (BSR) .
  • the SR serves to notify the NW that the UE has data information to transmit and informs the NW of the exact data information size through BSR. Subsequently, the NW allocates a suitable resource for uplink transmission of the data information.
  • If a PUSCH resource is available, the UE directly requests a suitable PUSCH resource for transmitting the data information through BSR.
  • If no SR or PUSCH resource is available, the UE initiates the RACH process to request an uplink grant from the NW.
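The legacy resource-request sequence summarized above (send a BSR on an available PUSCH resource, otherwise send an SR, otherwise fall back to RACH) can be sketched as follows; the returned labels are illustrative, not specified signaling.

```python
def request_uplink_resource(has_pusch_grant, has_sr_resource):
    """Legacy-style decision for requesting UL-SCH resources:
    prefer a BSR on an available PUSCH grant, then an SR on PUCCH,
    and fall back to the RACH process when neither is available.
    """
    if has_pusch_grant:
        return "BSR_ON_PUSCH"      # report buffer status directly
    if has_sr_resource:
        return "SR_ON_PUCCH"       # request resources for the BSR
    return "RACH"                  # request an uplink grant via RACH
```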
  • For uplink transmission of AI-related UE capability information (such as applicable model and applicable functionality) , two aspects require consideration. Firstly, AI-related UE capability reporting may have a latency requirement.
  • Secondly, an AI model and its related functionality differ from traditional communication data and information. As a result, certain modifications to the legacy scheduling mechanism need to be taken into account.
  • the UE sends a scheduling request (SR) to request uplink resource.
  • A legacy SR configuration corresponds to one or more logical channels, and/or to SCell beam failure recovery, and/or to consistent LBT failure recovery, and/or to beam failure recovery of BFD-RS set (s) . Therefore, to specify that the data/information is AI-specific data/information, the following options are available:
  • Option 1 Among scheduling request instances, a new scheduling request instance for AI-specific data/information transmission is newly added into schedulingRequestId.
  • a value of the SR ID schedulingRequestId represents the new scheduling request instance that indicates an SR configuration corresponding to AI-specific data/information.
  • An SR of the new scheduling request instance is used for uplink transmission of AI-specific data/information for AI-related UE capability reporting.
  • Option 2 A new parameter is introduced to the legacy SR configuration, which is used to indicate that the data/information of a new transmission associated with an SR is AI-specific data/information for AI-related UE capability reporting.
  • An SR with the new parameter is used for uplink transmission of AI-specific data/information for AI-related UE capability reporting.
  • Option 3 An AI-specific SR (newly defined) is introduced so that the SR is AI-specific and is used for requesting UL-SCH resources for AI-specific data/information.
  • Configuration of AI-specific SR may include AI-sr-ProhibitTimer (per AI-SR configuration) and/or AI-sr-TransMax (per AI-SR configuration) , which may be configured by RRC message.
  • An AI-specific SR (newly defined) is used for uplink transmission of AI-specific data/information for AI-related UE capability reporting.
  • AI-sr-ProhibitTimer The AI-sr-ProhibitTimer indicates a timer for SR transmission on PUCCH; the value is in units of milliseconds (ms) and can be 1 ms, 2 ms, etc.
  • AI-sr-TransMax The AI-sr-TransMax, if configured, indicates the maximum number of SR transmissions; the value can be 4, 8, 16, etc.
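The interaction of AI-sr-ProhibitTimer and AI-sr-TransMax can be sketched as follows. The fallback to RACH on reaching the maximum transmission count mirrors legacy SR handling and is an assumption here, as are the class and label names.

```python
class AiSpecificSr:
    """Sketch of AI-specific SR transmission governed by the
    AI-sr-ProhibitTimer (in ms) and AI-sr-TransMax parameters above."""

    def __init__(self, prohibit_timer_ms, trans_max):
        self.prohibit_timer_ms = prohibit_timer_ms
        self.trans_max = trans_max
        self.sr_counter = 0          # SR transmissions so far
        self.prohibit_until_ms = 0   # prohibit timer expiry instant

    def try_send(self, now_ms):
        if now_ms < self.prohibit_until_ms:
            return "PROHIBITED"              # prohibit timer still running
        if self.sr_counter >= self.trans_max:
            return "FALL_BACK_TO_RACH"       # SR transmissions exhausted
        self.sr_counter += 1
        self.prohibit_until_ms = now_ms + self.prohibit_timer_ms
        return "SR_SENT"
```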
  • A logical channel (LCH) or logical channel group (LCG) for the transmitted data/information can also implicitly indicate that the transmitted data/information is AI-specific data/information.
  • A new LCH/LCG is defined to indicate this.
  • The SR configuration may correspond to one or more AI-specific logical channels/logical channel groups and implicitly indicate that an SR of the SR configuration is used to request resource scheduling for data/information and that the data/information to be transmitted in association with the SR is AI-specific data/information.
  • AI-specific LCH/LCG is configured in SR configuration or AI-specific SR configuration.
  • An AI-specific LCH/LCG is used for transmission of AI-specific data/information for AI-related UE capability reporting.
  • A new AI-specific BSR MAC CE is introduced, which includes the AI-specific LCH/LCG information. As shown in FIG. 7, an example of the AI-specific BSR MAC CE includes a field to carry an AI-specific LCH/LCG ID and a field to carry a buffer size. A new AI-specific BSR MAC CE with the AI-specific LCH/LCG is used for transmission of AI-specific data/information for AI-related UE capability reporting.
  • LCH/LCG-AI ID The field indicates which AI-specific logical channel/logical channel group carries the AI-related UE capability reporting information.
  • Buffer size The field indicates the data volume of AI-related UE capability reporting information.
  • Solution#3 A new AI-specific BSR MAC CE is introduced, which includes a field of an AI-specific indicator or a field of LCH/LCG-AI ID.
  • FIG. 8 shows another example of the AI-specific BSR MAC CE.
  • a new AI-specific BSR MAC CE with the AI-specific indicator is used for transmission of AI-specific data/information for AI-related UE capability reporting.
  • AI-specific The field is an indicator that indicates the BSR MAC CE is an AI-specific BSR MAC CE;
  • LCH/LCG ID The field indicates a logical channel/logical channel group that carries the AI-related UE capability reporting information.
  • Buffer size The field indicates the data volume of the AI-related UE capability reporting information.
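A byte-level sketch of the second AI-specific BSR MAC CE form above (AI-specific indicator, LCH/LCG ID, buffer size) is shown below. The field widths — a 1-bit indicator, a 7-bit ID, and an 8-bit buffer size index — are purely illustrative assumptions; the text does not fix an octet layout.

```python
def pack_ai_bsr_mac_ce(ai_flag, lcg_id, buffer_size_index):
    """Packs a two-octet AI-specific BSR MAC CE: an AI-specific
    indicator bit and LCH/LCG ID in the first octet, and a buffer
    size index in the second octet (illustrative layout)."""
    assert ai_flag in (0, 1) and 0 <= lcg_id < 128
    assert 0 <= buffer_size_index < 256
    octet0 = (ai_flag << 7) | lcg_id
    return bytes([octet0, buffer_size_index])

def unpack_ai_bsr_mac_ce(ce):
    """Recovers (ai_flag, lcg_id, buffer_size_index) from the CE."""
    octet0, buffer_size_index = ce[0], ce[1]
    return (octet0 >> 7) & 0x1, octet0 & 0x7F, buffer_size_index
```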
  • A new trigger event may be introduced to trigger BSR (including the new BSR or the legacy BSR for AI-related UE capability reporting) .
  • The LCH/LCG of the AI-specific data/information can have a priority higher than, lower than, or the same as that of UL data belonging to any other LCH/LCG.
  • Solution#1 AI-related UE capability information is carried in new MAC CE.
  • FIG. 9 and FIG. 10 illustrate examples of the new AI-specific MAC CE that carries AI-specific data.
  • LCH/LCG ID, LCH/LCG 0 to LCH/LCG m Each of these fields indicates a logical channel or logical channel group that carries the AI-related UE capability reporting information.
  • the logical channel or logical channel group can be a legacy LCH/LCG or specific LCH/LCG (newly defined) for AI-related UE capability reporting information.
  • The field indicates the number of the logical channels or logical channel groups.
  • R The field is a reserved field.
  • AI-specific data 1 to AI-specific data m Each of the fields carries AI-specific data for AI-related UE capability reporting.
  • Solution#2 A specific BSR used to request UL-SCH resource for uplink transmission of AI-related UE capability information.
  • the NW allocates or configures dedicated BSR MAC CE resource for buffer status reporting of AI-related UE capability reporting information.
  • The UE sends a BSR to request uplink resource for AI-related UE capability reporting information.
  • FIG. 11 shows an example of a procedure of AI-related UE capability reporting.
  • Step1 UE sends Msg1 to NW.
  • The Msg1 is a Buffer Status Report (BSR) used to request uplink resource for AI-related UE capability reporting information.
  • the NW receives the Msg1 BSR.
  • the resource for transmitting the BSR periodically is configured by NW.
  • Step2 The UE receives Msg2 from the NW; the Msg2 includes an uplink grant of uplink resource for AI-related UE capability reporting.
  • Step3 The UE sends the Msg3 to the NW using the uplink resource allocated in Step2.
  • the Msg3 carries the AI-related UE capability reporting information.
  • the AI-related UE capability reporting information also can be carried in:
  • Embodiment 3 Reporting criterion of AI-related UE capability reporting
  • AI-related UE capability information reported from UE to NW dynamically may be transmitted using the scheme of periodic AI-related information reporting and/or the scheme of event-triggered AI-related information reporting.
  • the reporting criterion of both of the schemes is detailed in the following.
  • Configuration of AI-related UE capability reporting can be configured or predefined, including one or more of reportInterval, reportAmount, maxNrofmodel-ToReport, maxNroffunctionality-ToReport, useAllowedModelList, and useAllowedFunctionalityList.
  • reportInterval The field indicates the interval between two adjacent periodical reports.
  • reportAmount The field indicates the number of AI-related UE capability reports. The field is applicable for both reporting types.
  • maxNrofmodel-ToReport The field indicates the maximum number of AI/ML models included in the AI-related UE capability reporting.
  • maxNroffunctionality-ToReport The field indicates the maximum number of AI/ML functionalities included in the AI-related UE capability reporting.
  • useAllowedModelList The field indicates whether an allow-list that enumerates AI/ML models in an associated Requested AI-related UE capability information element is applicable to the AI-related UE capability reporting.
  • useAllowedFunctionalityList The field indicates whether an allow-list that enumerates AI/ML functionalities in an associated Requested AI-related UE capability information element is applicable to the AI-related UE capability reporting.
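Applying the reporting-configuration fields above to an applicable-model report can be sketched as follows. The dictionary keys mirror the field names in the text, while the data layout itself is an assumption for illustration.

```python
def build_model_report(applicable_models, cfg):
    """Builds the list of models to report: an optional allow-list
    filter (useAllowedModelList) followed by a cap on the number of
    reported models (maxNrofmodel-ToReport)."""
    models = list(applicable_models)
    if cfg.get("useAllowedModelList"):
        allowed = set(cfg.get("allowedModelList", []))
        models = [m for m in models if m in allowed]   # allow-list filter
    max_n = cfg.get("maxNrofmodelToReport")
    if max_n is not None:
        models = models[:max_n]                        # cap report size
    return models
```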
  • Event-triggered AI-related UE capability reporting In wireless AI-based systems, frequent AI-related UE capability reporting will result in significant overhead, resource wastage, transmission conflicts, and energy inefficiency. Therefore, an event-triggered scheme for the AI-related UE capability reporting is reasonable. Based on different AI-related UE capabilities (e.g., applicable model, applicable functionality, UE internal conditions, additional conditions, etc. ) , some potential trigger events are illustrated in the following:
  • Event A Applicable model/models and/or functionality/functionalities changes
  • Event A occurs when the UE changes its applicable AI-related models and/or functionalities.
  • Event A states a scenario where the applicable model/models and/or applicable functionality/functionalities at the UE undergo changes. Event A triggers AI-related UE capability reporting to inform the NW of the updated applicable model/models, applicable functionality/functionalities, and/or their corresponding information.
  • Option 1 Upon detecting change to the applicable model (s) and/or applicable functionality/functionalities, the UE shall initiate AI-related UE capability reporting.
  • If the duration time of AI-related UE capability reporting is configured and the applicable model/models and/or applicable functionality/functionalities change at T1, Event A occurs representing the change. Event A triggers the AI-related UE capability reporting, where the exact reporting time can be T1 or the end time of the duration Td. Alternatively, if the duration time of AI-related UE capability reporting is not configured, the exact reporting time can be T1.
  • Option 2 AI-related UE capability reporting will be triggered after the applicable model (s) and/or applicable functionality/functionalities have been changed and persist for a specified period.
  • An event occurs representing the change and its lasting time.
  • The event triggers AI-related UE capability reporting.
  • If the duration time of AI-related UE capability reporting is also configured, and Time to trigger falls within the duration time of AI-related UE capability reporting, the event occurs and triggers the reporting.
  • The exact reporting time can be the end of Time to trigger or the end time of the duration. If Time to trigger falls outside the duration time of AI-related UE capability reporting, the reporting is not triggered. Alternatively, if the duration time of AI-related UE capability reporting is not configured, the exact reporting time can be the end of Time to trigger.
  • Option 3 AI-related UE capability reporting will be triggered once the applicable model (s) and/or applicable functionality/functionalities have been changed and persist for a specified duration, while the number of changed applicable model (s) and/or applicable functionality/functionalities exceeds a threshold specified by the maximum/minimum number of applicable models or applicable functionalities.
  • The actual trigger time of AI-related UE capability reporting can be determined based on the time at which the threshold is exceeded and/or the time at which the change has lasted for the specified duration of AI-related UE capability reporting, as specified by the duration time defined in Embodiment 1. Subsequently, the AI-related UE capability reporting will be triggered when an event representing Option 3 occurs.
  • Option 4 The AI-related UE capability reporting will be triggered either when the applicable model (s) and/or applicable functionality/functionalities have been modified, or when any combination of Options 1, 2, and 3 is met.
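The Event A reporting-time rules of Options 1 and 2 can be sketched as follows; the handling of the reporting window and the choice to report at the end of Time to trigger are illustrative readings of the options above.

```python
def event_a_report_time(change_time, time_to_trigger=None, window_end=None):
    """Sketch of the Event A reporting time.

    Without Time to trigger (Option 1), the report time is the change
    time T1. With Time to trigger (Option 2), the report time is the
    end of Time to trigger, but only if it still falls inside the
    configured reporting window (if any); otherwise no report is
    triggered. Returns None when reporting is not triggered.
    """
    if time_to_trigger is None:
        return change_time                       # Option 1: report at T1
    report_time = change_time + time_to_trigger  # Option 2
    if window_end is not None and report_time > window_end:
        return None                              # outside reporting window
    return report_time
```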
  • Event B UE memory usage changes
  • Event B describes a change in the current UE memory usage. More specifically, for example, the UE performs AI-related UE capability reporting to the NW if one of the following possible criteria is met:
  • The change of UE memory usage becomes greater than Threshold1;
  • Threshold1 is a newly defined threshold for detecting a change in memory usage. If the change of memory usage is greater than Threshold1, an Event B occurs and triggers AI-related UE capability reporting to the NW.
  • Condition B-1 (entering condition) : Mum – Hys > Threshold1, with the variables as specified below.
  • Condition B-2 (leaving condition) : Mum + Hys < Threshold1, with the variables as specified below.
  • Mum The variable refers to a UE-detected change of UE memory usage.
  • The variable Mum can be obtained as a difference from the latest reported memory usage;
  • Hys This is a hysteresis parameter for this event B (which is configured by RRC signaling, including the reporting configuration or other signaling) ;
  • Threshold1 This is the newly defined threshold of the change of memory usage. A current change in memory usage greater than the threshold will trigger the AI-related UE capability reporting.
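With the variables Mum, Hys, and Threshold1 defined above, the entering and leaving conditions of Event B can be sketched as follows; the exact inequalities mirror the usual 3GPP event pattern (hysteresis subtracted on entry, added on exit) and are an assumption here.

```python
def event_b_entering(mum, hys, threshold1):
    """Entering condition: the detected change in memory usage,
    less the hysteresis, exceeds Threshold1."""
    return mum - hys > threshold1

def event_b_leaving(mum, hys, threshold1):
    """Leaving condition: the detected change in memory usage,
    plus the hysteresis, falls below Threshold1."""
    return mum + hys < threshold1
```

The hysteresis keeps a memory-usage change that hovers near Threshold1 from repeatedly entering and leaving the event, which would otherwise cause report churn.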
  • Embodiment 4 CSI Priority rule for AI-based reporting
  • the priority rules for CSI reporting have been defined, as shown in the following.
  • N_cells is the value of the higher layer parameter maxNrofServingCells, and M_s is the value of the higher layer parameter maxNrofCSI-ReportConfigurations.
  • A first CSI report is said to have priority over a second CSI report if the associated Pri_iCSI (y, k, c, s) value for the first report is lower than the associated Pri_iCSI (y, k, c, s) value for the second report.
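For context, the legacy CSI report priority value referenced above is computed in TS 38.214 (clause 5.2.5) as:

```latex
\mathrm{Pri}_{i\mathrm{CSI}}(y,k,c,s)
  = 2 \cdot N_{\mathrm{cells}} \cdot M_{s} \cdot y
  + N_{\mathrm{cells}} \cdot M_{s} \cdot k
  + M_{s} \cdot c + s
```

where y reflects the reporting behavior (aperiodic, semi-persistent on PUSCH or PUCCH, or periodic), k distinguishes the report content (e.g., L1-RSRP/L1-SINR reports versus other reports), c is the serving cell index, and s is the reportConfigID; a lower value means a higher priority.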
  • Two CSI reports are said to collide if the time occupancy of the physical channels scheduled to carry the CSI reports overlap in at least one OFDM symbol and are transmitted on the same carrier.
  • the two CSI reports are multiplexed or dropped based on the priority values, as described in Clause 9.2.5.2 in TS 38.213.
  • If a semi-persistent CSI report to be carried on PUSCH overlaps in time with a PUSCH data transmission in one or more symbols on the same carrier, and if the earliest symbol of these PUSCH channels starts no earlier than N2 + d_2,1 symbols after the last symbol of the DCI scheduling the PUSCH, where d_2,1 is the maximum of the d_2,1 values associated with the PUSCH carrying the semi-persistent CSI report and the PUSCH with data transmission, the CSI report shall not be transmitted by the UE. Otherwise, if the timeline requirement is not satisfied, this is an error case.
  • If a UE would transmit a first PUSCH that includes semi-persistent CSI reports and a second PUSCH that includes a UL-SCH on the same carrier, and the first PUSCH transmission would overlap in time with the second PUSCH transmission, the UE does not transmit the first PUSCH and transmits the second PUSCH.
  • the UE expects that the first and second PUSCH transmissions satisfy the above timing conditions for PUSCH transmissions that overlap in time when one or more of the first or second PUSCH transmissions is in response to a DCI format detection by the UE.
  • AI-based wireless technology has been a topic of recurrent discussion in multiple 3GPP RAN1/RAN2 meetings. To ensure the seamless integration of AI models within wireless systems, dedicated CSI reporting becomes imperative. This includes comprehensive AI/ML model lifecycle management (encompassing monitoring, training, inference, and other related processes) . Moreover, AI-based CSI measurement, compression, beam management, positioning, mobility, and similar functionalities demand careful consideration and implementation.
  • Both non-AI/ML based CSI measurement and AI/ML based CSI measurement or data collection may exist for one specific radio link. Due to the limitation on UCI payload, a new priority rule is needed to deal with AI/ML and/or non-AI/ML based CSI reporting, and modification is expected for the legacy priority rules that have been defined in the current NR system.
  • The priority value is different for AI/ML CSI reporting and non-AI/ML CSI reporting; e.g., the k value corresponds to AI/ML CSI reporting or non-AI/ML CSI reporting, where the value can be pre-defined, configured by the base station, or derived by the UE itself.
  • Embodiment 4.1 Multiple priority rules exist in the system, including AI/ML-specific and non-AI/ML-specific rules.
  • The AI/ML specific priority rules are defined in the system, which differ from the non-AI/ML specific priority rules that have been adopted in the current NR system.
  • The AI/ML specific priority is determined based on one or more instances of the following information: reporting configuration ID, serving cell ID, maximum number of CSI reporting configurations, maximum number of serving cells, AI/ML lifecycle management procedure, AI/ML features, AI/ML functionalities, AI/ML model ID, AI/ML CSI reporting contents, and AI/ML CSI reporting behavior, as shown in the following:
  • AI/ML specific priority determined based on the AI/ML lifecycle management procedure means that the CSI reporting related to different procedures has different priorities. For example, monitoring has the highest priority, inference has a middle priority, and training has the lowest priority, etc.
  • AI/ML specific priority determined based on AI/ML functionalities means that the CSI reporting related to different functionalities has different priorities. For example, beam management has the highest priority and CSI measurement has the lowest priority, etc.
  • AI/ML specific priority determined based on AI/ML features means that the CSI reporting related to different features has different priorities. For example, beam management has the highest priority and CSI measurement has the lowest priority, etc.
  • AI/ML specific priority determined based on AI/ML CSI reporting contents means that different instances of CSI reporting information have different priorities. For example, AI-based CSI part A has the highest priority, and other portions of CSI information have lower priorities, etc.
  • CSI part A is a part of the total CSI reporting information which is configured by RRC and/or activated by MAC-CE or DCI;
  • γ_i may be equal to zero or another integer value and is related to the maximum number of CSI reporting configurations, maximum serving cell, and/or AI/ML lifecycle management procedure number, and/or AI/ML functionalities number, and/or AI/ML features number.
  • - δ is another value which is determined by the system or configuration or other methods.
  • The AI-specific CSI reporting formula may be, for example, changed to another formula which includes the AI/ML functionalities, and such a formula should also be covered by the disclosure.
  • Pri_iCSI (w, z, y, k, c, s) = δ + γ1·w + γ2·z + γ3·y + γ4·k + γ5·c + s
  • CSI part A is a part of the total CSI reporting information which is configured by RRC and/or activated by MAC-CE or DCI;
  • γ_i may be equal to zero or another integer value and is related to the maximum number of CSI reporting configurations, and/or maximum serving cell, and/or AI/ML procedure number, and/or AI/ML functionalities number, and/or AI/ML features number.
  • - δ is another value which is determined by the system or configuration or other methods.
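The priority computation above can be sketched as a small function. This is a non-normative illustration: the coefficient names (gamma_1..gamma_5, delta) and the example values are assumptions, since in the disclosure the concrete values are pre-defined, configured by the base station, or derived by the UE.

```python
def ai_csi_priority(w, z, y, k, c, s, gammas, delta=0):
    """Compute an AI/ML-specific CSI reporting priority value.

    w, z, y, k, c, s are the per-report inputs (e.g., LCM procedure
    index, functionality index, reporting type, contents index,
    serving cell ID, reporting configuration ID); a lower value
    means a higher priority, as in the legacy NR priority rule.
    gammas holds the coefficients gamma_1..gamma_5; delta is the
    extra offset determined by the system or configuration.
    """
    g1, g2, g3, g4, g5 = gammas
    return delta + g1 * w + g2 * z + g3 * y + g4 * k + g5 * c + s
```

With hypothetical coefficients (10, 1, 1, 1, 1), two reports that differ only in the w term (e.g., LCM procedure) get clearly separated priority values, mimicking how the legacy rule separates reports by serving cell and configuration ID.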
  • AI/ML CSI reporting and non-AI/ML CSI reporting may both exist in the system; when a resource collision happens, the priority for AI/ML CSI reporting and the priority for non-AI/ML CSI reporting should be determined.
  • the determination method may be as shown in the following,
  • Option A The priority of AI/ML CSI reporting is always higher than the priority of non-AI/ML CSI reporting. That is, the non-AI/ML CSI can be transmitted after the AI/ML CSI.
  • the priority of CSI is determined based on the specific priority value of a type of the CSI, which can be AI/ML CSI reporting or non-AI/ML CSI reporting.
  • Option B The priority of AI/ML CSI is always lower than that of non-AI/ML CSI reporting. That is, the AI/ML CSI can only be transmitted after the non-AI/ML CSI.
  • the priority of CSI is determined based on the specific priority value of a type of the CSI, which can be AI/ML CSI reporting or non-AI/ML CSI reporting.
  • Option C If the AI/ML CSI and non-AI/ML CSI share the same priority value, the AI/ML CSI will be transmitted first; otherwise the transmission order is determined by the priority value.
  • Option D If the AI/ML CSI and non-AI/ML CSI share the same priority value, the non-AI/ML CSI will be transmitted first; otherwise the transmission order is determined by the priority value.
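The four collision-handling options above can be sketched as a comparator. This is illustrative only; the option identifiers ("A".."D") and the return convention are assumptions.

```python
def first_to_transmit(option, ai_priority, non_ai_priority):
    """Return which report is transmitted first when an AI/ML CSI
    report and a non-AI/ML CSI report collide on the same resource.
    A lower priority value means higher priority, as in legacy NR.
    """
    if option == "A":                      # AI/ML CSI always first
        return "ai"
    if option == "B":                      # non-AI/ML CSI always first
        return "non_ai"
    if ai_priority == non_ai_priority:     # tie-breaking under C/D
        return "ai" if option == "C" else "non_ai"
    # Options C/D otherwise fall back to the priority values.
    return "ai" if ai_priority < non_ai_priority else "non_ai"
```

Options A and B ignore the priority values entirely; options C and D only consult them when the values differ, using the option letter purely as a tie-breaker.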
  • The UE obtains the priority rule for CSI reporting:
  • the priority rules may be predefined by the system.
  • the priority rules for CSI reporting may be defined in the specification as “Relationship between the priority of non-AI/ML CSI rule and the priority of AI/ML CSI rule” .
  • Option B The CSI reporting priority rules are configured by base station.
  • the priority rules are carried by RRC configuration in an RRC message, MAC-CE, and/or DCI.
  • One or more instances of the following information may be included in the configuration:
  • a CSI reporting priority rule, which indicates which priority rule may be used by the UE for CSI reporting.
  • CSI reporting priority rule parameters, including coefficients of one or more of the following items.
  • the items include one or more of reporting configuration ID, serving cell ID, maximum number of CSI reporting configurations, maximum serving cell, AI/ML lifecycle management procedure, AI/ML features, AI/ML functionalities, AI/ML model ID, AI/ML CSI reporting contents, AI/ML CSI reporting behavior etc.
  • Option C The UE obtains the CSI priority rule by deriving it from the RRC configuration.
  • the non-AI/ML priority rule may be used.
  • the AI/ML priority rule may be used.
  • the AI/ML dedicated RRC configuration may be represented by AI/ML model ID, and/or AI/ML features, AI/ML functionalities, and/or AI/ML-specific CSI information, and/or RRC dedicated ID (including CSI measurement configuration ID, CSI reporting ID, CSI-RS resource setting/set/resource ID etc. )
  • Embodiment 4.2 One priority rule in the system for both AI/ML and non-AI/ML CSI reporting.
  • Another method uses only one priority rule for both AI/ML and non-AI/ML CSI reporting.
  • the CSI reporting priority rule is determined based on one or more instances of the following information, including reporting configuration ID, serving cell ID, maximum number of CSI reporting configurations, maximum serving cell, AI/ML lifecycle management procedure, AI/ML features, AI/ML functionalities, AI/ML model ID, AI/ML CSI reporting contents, AI/ML CSI reporting behavior, as shown in the following:
  • CSI reporting priority rule determined based on AI/ML lifecycle management procedure means that the CSI reporting related with different procedures has different priorities. For example, monitoring has the highest priority; inference has the middle priority; and training has the lowest priority, etc.
  • CSI reporting priority rule determined based on AI/ML functionalities means that the CSI reporting related with different functionalities has different priorities. For example, beam management has the highest priority, and the CSI measurement has the lowest priority, etc.
  • CSI reporting priority rule determined based on AI/ML features means that the CSI reporting related with different features has different priorities. For example, beam management has the highest priority, and the CSI measurement has the lowest priority, etc.
  • CSI reporting priority rule determined based on AI/ML CSI reporting contents means that different portions of CSI have different priorities. For example, AI-based CSI part A has the highest priority, other portions of CSI have low priority.
  • Pri_iCSI (w, z, y, k, c, s) = δ + γ1·w + γ2·z + γ3·y + γ4·k + γ5·c + s
  • - k is the priority value for CSI reporting contents, which is determined by AI/ML and/or non-AI/ML CSI information.
  • the CSI part A/B can be as shown in the following,
  • the CSI part A can be non-AI/ML based CSI reporting information, such as L1-RSRP or L1-SINR;
  • the CSI part B is related to the CSI information used for AI/ML monitoring, such as accuracy, input and/or output distribution, performance of the wireless system, or other CSI information, etc.;
  • the CSI part A can be non-AI/ML based CSI reporting information, such as L1-RSRP or L1-SINR;
  • the CSI part B is non-AI/ML based CSI information not carrying L1-RSRP or L1-SINR;
  • the CSI part C is related to the CSI information used for AI/ML monitoring, while the CSI part D is related to other CSI information for AI/ML.
  • the CSI part A is related to the CSI information used for AI/ML monitoring;
  • the CSI part B can be non-AI/ML based CSI reporting information, such as L1-RSRP or L1-SINR;
  • - s is the CSI reporting configuration ID.
  • γ_i may be equal to zero or any other integer value and is related to the maximum number of CSI reporting configurations, and/or maximum serving cell, and/or AI/ML procedure number, and/or AI/ML functionalities number, and/or AI/ML features number.
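For the unified rule, the contents-dependent term k above could be derived from the CSI part carrying the information. The concrete part-to-value mapping below is an assumption; the disclosure only requires that, in the example given, AI-based CSI part A gets the highest priority (i.e., the smallest value).

```python
# Hypothetical mapping from CSI reporting contents to the k term of
# the priority formula; a smaller k means a higher priority.
CSI_PART_K = {
    "A": 0,  # e.g., AI-based CSI part A: highest priority
    "B": 1,  # e.g., non-AI/ML CSI such as L1-RSRP or L1-SINR
    "C": 2,  # e.g., CSI information used for AI/ML monitoring
    "D": 3,  # e.g., other CSI information for AI/ML: lowest priority
}

def k_for_contents(part):
    """Return the k term for a given CSI part label."""
    return CSI_PART_K[part]
```

Because the priority formula is monotone in k, any of the alternative part definitions listed above can be supported by re-ordering this table, without changing the formula itself.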
  • Embodiment 5 CSI computation time for AI/ML model
  • When a CSI request field in a DCI triggers CSI report (s) on PUSCH, the UE shall provide a valid CSI report for the n-th triggered report,
  • Z_ref is defined as the next uplink symbol with its cyclic prefix (CP) starting T_proc,CSI = (Z) (2048+144) ·κ·2^(−μ) ·T_c + T_switch after the end of the last symbol of the PDCCH triggering the CSI report (s) ;
  • Z'_ref (n) is defined as the next uplink symbol with its CP starting T'_proc,CSI = (Z') (2048+144) ·κ·2^(−μ) ·T_c after the end of the last symbol in time of the latest of: the aperiodic CSI-RS resource for channel measurements, the aperiodic CSI-IM used for interference measurements, and the aperiodic NZP CSI-RS for interference measurement, when aperiodic CSI-RS is used for channel measurement for the n-th triggered CSI report;
  • T switch is defined in TS 38.214 clause 6.4 and is applied only if z 1 of TS 38.214 table 5.4-1 is applied.
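The legacy timing rule quoted above can be evaluated numerically. This sketch uses the TS 38.211 constants κ = 64 and T_c = 1/(480000 · 4096) s; the symbol count Z and T_switch come from TS 38.214, and the example values in the usage note are illustrative only.

```python
KAPPA = 64                      # kappa, per TS 38.211
T_C = 1 / (480_000 * 4096)      # basic time unit T_c in seconds, per TS 38.211

def t_proc_csi(z_symbols, mu, t_switch=0.0):
    """CSI processing time in seconds:
    T_proc,CSI = Z * (2048 + 144) * kappa * 2^(-mu) * T_c + T_switch,
    where mu is the numerology and Z the symbol count from the
    TS 38.214 Z/Z' tables.
    """
    return z_symbols * (2048 + 144) * KAPPA * 2 ** (-mu) * T_C + t_switch
```

One symbol at μ = 0 corresponds to roughly 71.35 μs (the normal-CP OFDM symbol duration at 15 kHz SCS), and doubling both Z and μ leaves the absolute time unchanged, which is why the requirement scales naturally with subcarrier spacing.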
  • The CSI computation time is strongly related to the CSI reporting information and subcarrier spacing.
  • AI/ML based CSI measurement or processing may require a longer computation time than non-AI/ML based processing, so how to support the CSI computation time for both non-AI/ML models and AI/ML models should be addressed.
  • The solutions include:
  • the delay requirements for CSI computation need to be determined so that the gNB knows exactly when to receive the CSI measurement feedback on uplink resources.
  • CSI computation delay requirement for non-AI/ML model
  • CSI computation delay requirement may be determined by one or more of the following
  • Subcarrier spacing may be related to the PDCCH, PUSCH, and AI/ML dedicated CSI-RS subcarrier spacing. For example, the subcarrier spacing is equal to the maximum of the subcarrier spacings of the PDCCH, PUSCH, and AI/ML dedicated CSI-RS resource.
  • Different AI/ML functionalities may have different CSI computation delay requirements. For example, beam management needs a CSI computation delay of K (e.g., in units of symbols or milliseconds) , and CSI compression needs a CSI computation delay of M (e.g., in units of symbols or milliseconds) , where K and M may be different.
  • the AI/ML functionalities may be represented by AI/ML model ID or RRC configuration or other methods.
  • AI/ML features Different AI/ML features may have different CSI computation delay requirements. For example, beam management needs a CSI computation delay of K (e.g., in units of symbols or milliseconds) , and CSI compression needs a CSI computation delay of M (e.g., in units of symbols or milliseconds) , where K and M may be different.
  • the AI/ML features may be represented by AI/ML model ID or RRC configuration or other methods.
  • AI/ML lifecycle management procedure Different LCM procedures have different CSI computation delay requirements. For example, AI/ML monitoring needs a CSI computation delay of S (e.g., in units of symbols or milliseconds) , while AI/ML inference needs a CSI computation delay of W (e.g., in units of symbols or milliseconds) , etc.
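The per-functionality and per-procedure delay requirements described above can be sketched as a lookup. The table entries standing in for K, M, S, and W are placeholders; in practice the values would be pre-defined or configured as the text describes.

```python
# Hypothetical delay requirements in OFDM symbols; the concrete
# values (stand-ins for K, M, S, W) are assumptions, not taken
# from any specification.
DELAY_BY_FUNCTIONALITY = {"beam_management": 20, "csi_compression": 40}
DELAY_BY_LCM_PROCEDURE = {"monitoring": 30, "inference": 10}

def csi_computation_delay(functionality=None, lcm_procedure=None):
    """Pick the largest applicable delay requirement in symbols,
    i.e., the most demanding constraint among the configured
    functionality- and procedure-specific requirements.
    """
    candidates = []
    if functionality is not None:
        candidates.append(DELAY_BY_FUNCTIONALITY[functionality])
    if lcm_procedure is not None:
        candidates.append(DELAY_BY_LCM_PROCEDURE[lcm_procedure])
    return max(candidates)
```

Taking the maximum is one plausible combination rule (the gNB must wait at least as long as the slowest applicable constraint); the disclosure does not mandate a particular combination.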
  • UE capability information includes one or more of the AI/ML model reporting time and the AI/ML model switching time.
  • the AI/ML model reporting time signifies the count of OFDM symbols between the termination of the last symbol of SSB/CSI-RS and the commencement of the first symbol of the transmission channel incorporating the AI/ML CSI report.
  • the UE offers the capability to specify the band number for which the report is provided, denoting the location where the measurement is performed.
  • the UE includes this field to indicate supported sub-carrier spacing, and/or supported AI/ML models, functionalities, or features.
  • the AI/ML model switching time indicates the minimum number of OFDM symbols between the triggering of CSI-RS and CSI-RS transmission.
  • the count of OFDM symbols is measured from the termination of the last symbol containing the indication to the commencement of the first symbol of CSI-RS.
  • the UE includes this field for supported sub-carrier spacing, and/or for supported AI/ML model or functionalities or features.
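The two timing capability fields above (AI/ML model reporting time and AI/ML model switching time, both counted in OFDM symbols) could be represented, for example, as follows; the container and field names are assumptions, and the symbol-duration conversion assumes a normal cyclic prefix with 14 symbols per slot.

```python
from dataclasses import dataclass

@dataclass
class AiMlTimingCapability:
    """Hypothetical container for AI/ML timing capability fields,
    reported per band and per supported sub-carrier spacing."""
    band: int
    subcarrier_spacing_khz: int
    model_reporting_time: int   # symbols: SSB/CSI-RS end -> AI/ML CSI report start
    model_switching_time: int   # symbols: trigger end -> CSI-RS start (minimum)

    def reporting_time_seconds(self):
        # Approximate symbol duration with normal CP: 1 ms slot at
        # 15 kHz has 14 symbols, scaling inversely with SCS.
        symbol = 1e-3 / (14 * self.subcarrier_spacing_khz / 15)
        return self.model_reporting_time * symbol
```

For instance, 28 symbols at 30 kHz SCS corresponds to about 1 ms, which is the kind of absolute bound the gNB would derive when scheduling the AI/ML CSI report.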
  • The CSI computation delay requirement may be as shown in the following. Note that this is just an example and is not intended to limit the invention; there may be multiple implementation methods. The Z_i may be predefined or calculated according to the above-mentioned parameters.
  • Embodiment 6 Rules for discarding the active model/functionality.
  • UE memory usage is a key factor of UE internal conditions, which may affect the applicable or active model/functionalities.
  • activating or discarding model/functionalities can adjust memory usage.
  • When current UE memory usage of the UE causes memory capacity to decrease to a certain low level, the UE discards one or more selected applicable/active models/functionalities; and, when current UE memory usage of the UE causes memory capacity to increase to a certain high level, the UE activates one or more applicable/active models/functionalities.
  • Discarding one or more models means that the model entities (including functionality-related model entities) of the models are removed from the UE.
  • Activating one or more models means that the model entities (including functionality-related model entities) of the models are downloaded to the UE.
  • Case 1 UE memory usage causes memory capacity to decrease to a certain level.
  • the UE may deactivate and discard some applicable/active models/functionalities. More specifically, when the current UE memory usage in a UE causes memory capacity to decrease to a level, the UE performs the following operations:
  • the UE releases some inactive models which are deployed at UE but not active.
  • the UE can discard some inactive models. More specifically, in discarding of the one or more selected applicable/active models/functionalities, the UE initially discards inactive models/functionalities that are not used for the ongoing AI-based feature/feature group and subsequently discards inactive models/functionalities associated with the ongoing AI-based feature/feature group. Optionally, UE can discard some inactive models randomly.
  • a new time threshold may be introduced, so that in discarding of the one or more selected applicable/active models/functionalities, if the inactive time of one or more inactive models exceeds the time threshold, the UE discards those inactive models.
  • the threshold can be defined by UE itself or configured in a configuration by the NW.
  • a new threshold for prohibiting discarding may be introduced, so that if the model discarding exceeds the threshold, which is defined based on discarding time, UE memory usage, a number of discarded AI models, or a number of deactivated AI models, the model discarding is turned off.
  • the threshold can be defined by the UE itself or configured in a configuration by the NW.
  • the UE will use the mechanism of AI-related UE capability reporting, which is defined in the foregoing embodiments, to report updated applicable models/functionalities to NW.
  • the updated applicable models/functionalities are the models/functionalities in the UE after the model discarding.
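The discard rules of option 1 above can be sketched as follows. This is illustrative: the field names are assumptions, and the ordering policy follows the text (inactive models not used by the ongoing AI-based feature/FG are discarded first, then inactive ones associated with it, and only models whose inactive time exceeds the time threshold are selected).

```python
def select_models_to_discard(models, ongoing_fg, inactive_time_threshold):
    """Pick inactive models to discard when memory runs low.

    models: list of dicts with keys 'id', 'active', 'feature_group',
    'inactive_time'. Active models are never selected here; among
    inactive models, those not used by the ongoing feature/FG come
    first, and only models whose inactive time exceeds the threshold
    are candidates.
    """
    inactive = [m for m in models if not m["active"]
                and m["inactive_time"] > inactive_time_threshold]
    # Discard models unrelated to the ongoing feature/FG first
    # (stable sort: False sorts before True).
    inactive.sort(key=lambda m: m["feature_group"] == ongoing_fg)
    return [m["id"] for m in inactive]
```

After discarding, the UE would report the remaining applicable models/functionalities to the NW through the AI-related UE capability reporting mechanism, as the text describes.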
  • When current UE memory usage of the UE causes memory capacity to decrease to a certain low level, the UE or a life cycle management (LCM) network device performs model selection or model switching to select one or more AI/ML models with smaller sizes among applicable models in the UE and discards one or more active AI models with large sizes in the UE.
  • model selection or model switching may also include model activation, such that the large-size applicable models may not be activated.
  • model selection or model switching may be performed at the UE side or the NW side for a UE-side model or a two-sided model.
  • When the model selection or model switching is performed at the NW side, the changed UE memory usage needs to be reported to the NW.
  • the NW performs the model selection or model switching and assists UE to select one/or more suitable AI models.
  • the UE memory usage reporting used in the AI-related UE capability reporting has been detailed in the foregoing embodiments.
  • the UE will use the mechanism of AI-related UE capability reporting, which is defined in the foregoing embodiments, to report updated applicable models/functionalities to NW.
  • the updated applicable models/functionalities are the models/functionalities in the UE after the model selection or model switching.
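The size-driven model selection/switching of option 2 above can be sketched as a toy policy; the size representation (bytes per model) and the "switch only if strictly smaller" rule are assumptions for illustration.

```python
def switch_to_smaller_model(applicable_models, active_model_id):
    """Among applicable models, switch from the active model to the
    smallest applicable one if a strictly smaller one exists.

    applicable_models: dict mapping model id -> model size in bytes.
    Returns the (possibly unchanged) active model id.
    """
    smallest = min(applicable_models, key=applicable_models.get)
    if applicable_models[smallest] < applicable_models[active_model_id]:
        return smallest
    return active_model_id
```

When this decision is taken at the NW side, the UE's memory usage (reported via the AI-related UE capability reporting mechanism) would feed the `applicable_models` view the NW uses to choose.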
  • the UE performs the discarding based on model monitoring at the UE.
  • the UE performs model monitoring and discards some active models according to the result of the model monitoring (i.e., system performance or system resource consumption) .
  • the UE selects some suitable AI/ML models for model selection or model switching, similar to the operations in option 2.
  • the UE will discard some AI models which are located at UE side.
  • Various internal conditions of the UE which can initiate model discarding can be monitored as a portion of the model monitoring.
  • a threshold for model monitoring can be the same as or different from Threshold1.
  • FIG. 13 is a block diagram of an example system 700 for wireless communication according to an embodiment of the present disclosure. Embodiments described herein may be implemented into the system using any suitably configured hardware and/or software.
  • FIG. 13 illustrates the system 700 including a radio frequency (RF) circuitry 710, a baseband circuitry 720, a processing unit 730, a memory/storage 740, a display 750, a camera 760, a sensor 770, and an input/output (I/O) interface 780, coupled with each other as illustrated.
  • RF radio frequency
  • the processing unit 730 may include circuitry, such as, but not limited to, one or more single-core or multi-core processors.
  • the processors may include any combinations of general-purpose processors and dedicated processors, such as graphics processors and application processors.
  • the processors may be coupled with the memory/storage and configured to execute instructions stored in the memory/storage to enable various applications and/or operating systems running on the system.
  • the radio control functions may include, but are not limited to, signal modulation, encoding, decoding, radio frequency shifting, etc.
  • the baseband circuitry may provide for communication compatible with one or more radio technologies.
  • the baseband circuitry may support communication with 5G NR, LTE, an evolved universal terrestrial radio access network (EUTRAN) and/or other wireless metropolitan area networks (WMAN) , a wireless local area network (WLAN) , a wireless personal area network (WPAN) .
  • EUTRAN evolved universal terrestrial radio access network
  • WMAN wireless metropolitan area networks
  • WLAN wireless local area network
  • WPAN wireless personal area network
  • Embodiments in which the baseband circuitry is configured to support radio communications of more than one wireless protocol may be referred to as multi-mode baseband circuitry.
  • the baseband circuitry 720 may include circuitry to operate with signals that are not strictly considered as being in a baseband frequency.
  • baseband circuitry may include circuitry to operate with signals having an intermediate frequency, which is between a baseband frequency and a radio frequency.
  • the system 700 may be a mobile computing device such as, but not limited to, a laptop computing device, a tablet computing device, a netbook, an ultrabook, a smartphone, etc.
  • the system may have more or fewer components, and/or different architectures.
  • the methods described herein may be implemented as a computer program.
  • the computer program may be stored on a storage medium, such as a non-transitory storage medium.
  • the embodiment of the present disclosure is a combination of techniques/processes that can be adopted in 3GPP specification to create an end product.
  • If the software function unit is realized, used, and sold as a product, it can be stored in a computer-readable storage medium.
  • the technical plan proposed by the present disclosure can be essentially or partially realized as the form of a software product.
  • one part of the technical plan beneficial to the conventional technology can be realized as the form of a software product.
  • the software product in the computer is stored in a storage medium, including a plurality of commands for a computational device (such as a personal computer, a server, or a network device) to run all or some of the steps disclosed by the embodiments of the present disclosure.
  • the storage medium includes a USB disk, a mobile hard disk, a read-only memory (ROM) , a random-access memory (RAM) , a floppy disk, or other kinds of media capable of storing program codes.
  • This disclosure provides a mechanism of AI-related UE capability reporting, including the possible procedure, signaling and elements.
  • This disclosure also provides rules of discarding the active model/functionality, including deactivate unsuitable model/functionality, and/or activate suitable model/functionality for a certain feature/FG.
  • the disclosure enhances AI/ML models for wireless communication systems.


Abstract

A user equipment (UE) executes a method for processing artificial intelligence (AI) -related user equipment (UE) capability reporting. The UE performs AI-related UE capability reporting to report AI-related UE capability information of the UE using a dynamic grant (DG) radio resource, a configured grant (CG) radio resource, semi-persistent scheduling (SPS) resources, or a random-access channel resource. The DG radio resource, CG radio resource, and SPS resources can be specific to AI-related UE capability reporting or not.

Description

USER EQUIPMENT, BASE STATION, AND METHOD FOR PROCESSNG ARTIFICIAL INTELLIGENCE-RELATED USER EQUIPMENT CAPABILITY REPORTING Technical Field
The present disclosure relates to the field of communication systems, and more particularly, to a method for processing artificial intelligence (AI) -related user equipment (UE) capability reporting and a wireless communication device.
Background Art
Artificial intelligence (AI) and machine learning (ML) are two related fields of computer science that aim to create systems that can perform tasks that normally require human intelligence and learning. ML can be used to solve various problems in domains such as natural language processing, computer vision, robotics, and bioinformatics. Recently, AI/ML has been increasingly applied to telecommunication networks.
The general framework for the study of AI/ML over the air interface has been proposed. This framework illustrates how data collection can facilitate AI actions, such as model training and model inference.
Technical Problem
The UE capability reporting mechanism concerning AI/ML methods can be classified into two categories, as outlined below:
Category 1: Static UE capability reporting mechanism;
Category 2: Dynamic UE capability reporting mechanism.
Category 1 involves AI/ML features or feature groups (FGs) as well as LCM operations that can be reported to the network (NW) via static UE capability reporting. Such reporting includes AI/ML use cases, model training, model monitoring, data collection, and other related functionalities. This static reporting requires a one-time submission. It appears that the existing legacy UE capability mechanism can be directly reused for this category, with the exception of making enhancements to accommodate new AI-related features, model training, model monitoring, data collection, and other related functionalities.
On the other hand, Category 2 pertains to a novel UE capability mechanism for dynamic AI-related UE capability reporting. The necessity of this mechanism has been extensively discussed, but a complete solution has not yet been devised.
Hence, a method for processing AI-related UE capability reporting for enhancing the current wireless communication system is desired.
Technical Solution
An object of the present disclosure is to propose a method for processing AI-related UE capability reporting, a user equipment (UE) , and a base station.
In a first aspect, an embodiment of the invention provides an artificial intelligence (AI) -related user equipment (UE) capability reporting method executable in a user equipment (UE) , comprising:
performing AI-related UE capability reporting to report AI-related UE capability information of the UE.
In a second aspect, an embodiment of the invention provides a user equipment (UE) comprising a processor configured to call and run a computer program stored in a memory, to cause a device in which  the processor is installed to execute the disclosed method.
In a third aspect, an embodiment of the invention provides a method for processing artificial intelligence (AI) -related user equipment (UE) capability reporting executable in a user equipment (UE) , comprising:
receiving AI-related UE capability information of a user equipment (UE) reported through AI-related UE capability reporting.
In a fourth aspect, an embodiment of the invention provides a base station comprising a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the disclosed method.
The disclosed method may be implemented in a chip. The chip may include a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the disclosed method.
The disclosed method may be programmed as computer-executable instructions stored in non-transitory computer-readable medium. The non-transitory computer-readable medium, when loaded to a computer, directs a processor of the computer to execute the disclosed method.
The non-transitory computer-readable medium may comprise at least one from a group consisting of: a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a Read Only Memory, a Programmable Read Only Memory, an Erasable Programmable Read Only Memory, EPROM, an Electrically Erasable Programmable Read Only Memory and a Flash memory.
The disclosed method may be programmed as a computer program product, which causes a computer to execute the disclosed method.
The disclosed method may be programmed as a computer program, which causes a computer to execute the disclosed method.
Advantageous Effects
This disclosed method introduces a comprehensive procedure for AI-related UE capability reporting, which has not been discussed in the current study.
The disclosed method ensures efficient system operation by utilizing AI-enabled techniques.
The disclosed method efficiently conserves network resources and reduces signaling overhead.
The disclosed method significantly decreases the latency of information reporting.
Description of Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the related art, the figures to be described in the embodiments are briefly introduced below. Obviously, the drawings are merely some embodiments of the present disclosure, and a person having ordinary skill in this field can obtain other figures according to these figures without inventive effort.
FIG. 1 illustrates a schematic view showing a wireless communication system comprising a user equipment (UE) , a base station, and a network entity.
FIG. 2 illustrates a schematic view showing a system with an AI/ML functional framework for executing a method for processing AI-related UE capability reporting using ML models.
FIG. 3 illustrates a schematic view showing an embodiment of the disclosed method.
FIG. 4 illustrates a schematic view showing an example of a procedure of AI-related information reporting.
FIG. 5 illustrates a schematic view showing another example of a procedure of AI-related information reporting.
FIG. 6 illustrates a schematic view showing another example of a procedure of AI-related information reporting.
FIG. 7 illustrates a schematic view showing an example of an AI-specific BSR MAC CE.
FIG. 8 illustrates a schematic view showing another example of an AI-specific BSR MAC CE.
FIG. 9 illustrates a schematic view showing an example of AI-specific MAC CE that carries AI-specific data.
FIG. 10 illustrates a schematic view showing another example of AI-specific MAC CE that carries AI-specific data.
FIG. 11 illustrates a schematic view showing an example of a procedure of AI-related UE capability reporting.
FIG. 12 illustrates a schematic view showing examples of triggering time.
FIG. 13 illustrates a schematic view showing a system for wireless communication according to an embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Embodiments of the disclosure are described in detail with the technical matters, structural features, achieved objects, and effects with reference to the accompanying drawings as follows. Specifically, the terminologies in the embodiments of the present disclosure are merely for describing the purpose of the certain embodiment, but not to limit the disclosure.
Abbreviations used in the description are listed in the following:
Table 1

This invention provides a method for AI-related User Equipment (UE) capability reporting. Specifically, the invention proposes possible procedures, signaling, and corresponding elements that efficiently conserve network resources and reduce signaling overhead.
Embodiments of the disclosure are related to artificial intelligence (AI) and machine learning (ML) for wireless communication system, such as LTE or new radio (NR) air interface, and address problems of AI-related UE capability reporting. AI-related UE capability reporting in the disclosure may be static AI-related UE capability reporting or dynamic AI-related UE capability reporting. Static AI-related UE capability reporting is one-time AI-related UE capability reporting conveyed in a reporting message. Dynamic AI-related UE capability reporting may comprise multiple transmissions of AI-related UE capability reporting conveyed in multiple reporting messages.
For simplicity, in the description an AI/ML model, AI model, ML model, and model are used interchangeably; also, AI/ML model monitoring and model monitoring are used interchangeably. In the description, life cycle management (LCM) may comprise model selection, activation, deactivation, switching, fallback, model training, model monitoring, model registration, model deployment, model transfer, and retraining/fine-tuning, at least for one-sided models and two-sided models.
For descriptive convenience, information of AI-related UE capability reporting includes applicable functionality, applicable model, additional conditions, UE internal conditions, and/or their related information, such as model ID, model structure, functionality ID, component of functionality, and others. An applicable model/functionality means a model/functionality that is currently applicable at the UE among identified models/functionalities. The additional conditions include scenarios, sites, cell, zone, datasets, and pair information. UE internal conditions may include UE memory usage, battery status, computing power, and other hardware limitations. Model ID is used to indicate the model. Model structure may include the model layer, node, tree node, coefficients, and others. Functionality ID is used to indicate a functionality. Component of functionality indicates conditions of the functionality. Moreover, note that in this disclosure, the reporting information of the reporting mechanism is not limited to AI-related UE capability information, but can also be used for other AI-specific data/information. AI-related UE capability information may comprise but is not limited to static and/or dynamic AI-related UE capability information.
Functionality refers to an AI/ML-enabled Feature/FG enabled by configuration (s) , where configurations are the conditions in forms of RRC/LPP IE (s) reported by UE capability. A functionality refers to a specific configuration of the Feature/FG or a set of configurations of Feature/FG and may serve as a unit of activation/deactivation/switching in functionality-based LCM.
AI/ML model monitoring is the process of tracking and evaluating the performance of one or more AI/ML models and/or LCM over time. The AI/ML model monitoring (for example, by a monitoring agent) collects data as monitoring input, which may be at least one of inference accuracy, system throughput, spectrum efficiency, NACK ratio, ACK ratio, SINR, RSRP, RSRQ, beam failure, link failure, positioning, intra-cell interference, inter-cell interference, BLER, MSER and FAR, performance loss, performance gain, channel information (e.g., real-time channel matrix, long-term channel matrix, CSI, eigen-vector, etc.), input/output data distribution, and/or applicable condition(s), and applies various metrics and calculations on the monitoring input to assess the model's accuracy, reliability, fairness, and robustness. Based on the monitoring output, a decision agent can make informed decisions to maintain the AI/ML model, and/or take corrective actions of LCM (e.g., selecting, switching, activating, deactivating, or falling back to different AI/ML models or non-AI modules) to address any issues or anomalies detected via model monitoring.
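As an illustration only, the monitoring-input-to-decision flow described above may be sketched as follows. The metric names, thresholds, and action labels in this sketch are assumptions chosen for clarity; they are not taken from the disclosure or any specification.

```python
from dataclasses import dataclass

@dataclass
class MonitoringInput:
    inference_accuracy: float  # assumed fraction of correct inferences, 0.0..1.0
    nack_ratio: float          # assumed fraction of NACKed transmissions

def decide_lcm_action(sample: MonitoringInput, accuracy_floor: float = 0.9,
                      nack_ceiling: float = 0.1) -> str:
    """Decision agent: keep the model, or fall back (e.g., to a non-AI module)."""
    if sample.inference_accuracy < accuracy_floor or sample.nack_ratio > nack_ceiling:
        return "fallback"  # an anomaly was detected via model monitoring
    return "keep"          # model performance acceptable, no corrective action
```

In practice the monitoring input would include many more of the quantities listed above; the sketch reduces them to two for brevity.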
AI/ML models can be installed and executed in one or more UE(s) and the NW (e.g., base station, LMF, etc.). In the description, one or more AI/ML models may be installed and executed in the UE 10 and/or installed and executed in a NW 20, wherein the AI/ML model(s) is used for different features and/or functions considering a one-sided model and/or a two-sided model.
With reference to FIG. 1, a telecommunication system including a UE 10a, a UE 10b, a base station (BS) 20a, and a network entity device 30 executes the disclosed method according to an embodiment of the present disclosure. FIG. 1 is shown for illustrative, not limiting, purposes, and the system may comprise more UEs, BSs, and CN entities. Connections between devices and device components are shown as lines and arrows in the FIGs.
The base station 20a can operate as a gNB for 5G NR networks, an eNB for LTE networks, or a base station for future mobile network systems beyond 5G. A gNB is a 5G radio network node that connects to the core network via the NG interface. An eNB is a 4G radio network node that connects to the evolved packet core via the S1 interface. A base station for beyond 5G may be a smart virtual eNB (SVeNB) that can perform functions of EPS elements and reduce end-to-end delay.
The UE 10a may include a processor 11a, a memory 12a, and a transceiver 13a. The UE 10b may include a processor 11b, a memory 12b, and a transceiver 13b. The base station 20a may include a processor 21a, a memory 22a, and a transceiver 23a. The network entity device 30 may include a processor 31, a memory 32, and a transceiver 33. Each of the processors 11a, 11b, 21a, and 31 may be configured to implement proposed functions, procedures and/or methods described in the description. Layers of radio interface protocol may be implemented in the processors 11a, 11b, 21a, and 31. Each of the memory 12a, 12b, 22a, and 32 operatively stores a variety of programs and information to operate a connected processor. Each of the transceivers 13a, 13b, 23a, and 33 is operatively coupled with a connected processor, and transmits and/or receives radio signals or wireline signals. The UE 10a may be in communication with the UE 10b through a sidelink. The base station 20a may be an eNB, a gNB, or one of other types of radio nodes, and may configure radio resources for the UE 10a and UE 10b.
Each of the processors 11a, 11b, 21a, and 31 may include application-specific integrated circuits (ASICs), other chipsets, logic circuits and/or data processing devices. Each of the memory 12a, 12b, 22a, and 32 may include read-only memory (ROM), random-access memory (RAM), a flash memory, a memory card, a storage medium and/or other storage devices. Each of the transceivers 13a, 13b, 23a, and 33 may include baseband circuitry and radio frequency (RF) circuitry to process radio frequency signals. When the embodiments are implemented in software, the techniques described herein may be implemented with modules, procedures, functions, entities, and so on, that perform the functions described herein. The modules may be stored in a memory and executed by the processors. The memory may be implemented within a processor or external to the processor, in which case it may be communicatively coupled to the processor via various means known in the art.
The network entity device 30 may be a node in a CN. CN may include LTE CN or 5G core (5GC) which includes user plane function (UPF) , session management function (SMF) , access and mobility  management function (AMF) , unified data management (UDM) , policy control function (PCF) , control plane (CP) /user plane (UP) separation (CUPS) , authentication server (AUSF) , network slice selection function (NSSF) , and the network exposure function (NEF) .
An example of the UE (e.g., UE 10 in FIG. 2) in the description may include one of the UE 10a or UE 10b. An example of the base station in the description may include the base station 20a. The NW 20 may be one or more entities, such as a base station or a gNB, in a radio access network (RAN) or one or more entities in a 5G core network (5GC). An entity in the RAN may comprise a gNB or a base station of any other type, such as an eNB or a base station for beyond 5G. Uplink (UL) transmission of a control signal or data may be a transmission operation from a UE to a base station. Downlink (DL) transmission of a control signal or data may be a transmission operation from a base station to a UE. A DL control signal may comprise downlink control information (DCI) or a radio resource control (RRC) signal, from a base station to a UE.
AI/ML functional framework:
A general functional framework of AI/ML is used to show the logical relationship among LCM actions. A possible AI/ML functional framework comprising a system 100 is shown in FIG. 2. At least a portion or all of the units may execute in a UE (e.g., UE 10 in FIG. 3) of the disclosure.
With reference to FIG. 2, the general AI/ML functional framework should include the following functional blocks,
■ Data Collection: A data collection unit 1101 is a functional block that is used for collection of the measurements/data for various actions of life cycle management (LCM), such as the model training unit 1102, the model inference unit 1104, and the model management/performance monitoring unit 1107. The data collection unit 1101 works for a one-sided model and/or a two-sided model.
■ Model Training: A model training unit 1102 is a functional block that is used for performing model training. The model training unit 1102 may perform model training, validation, and testing, and finally produces a trained model.
■ Model management/performance monitoring: A model management/performance monitoring unit 1107 is a functional block that is used for performing model management including one or more of the following: model monitoring, selection, activating, deactivating, switching, and fallback. Moreover, the functional block may provide control signaling to the model inference unit 1104, the model training unit 1102, or the model storage unit 1106.
■ Model Inference: A model inference unit 1104 is a functional block that is used for performing model inference. This functional block produces inference output of AI/ML models. The model inference unit 1104 uses the data from data collection as input and uses the trained AI/ML model given by the model training unit 1102 to provide a set of inference output. The output of model inference can trigger a set of AI/ML model actions and can also be an input for the model monitoring function.
■ Model Storage: A model storage unit 1106 is a functional block that is used to store the trained/updated/retrained/fine-tuned models.
FIG. 3 shows an embodiment of the disclosed method. A user equipment (UE) may execute a method for processing AI-related UE capability reporting.
The UE 10 performs AI-related UE capability reporting to report AI-related UE capability  information 400 of the UE 10 using dynamic grant (DG) radio resource, configured grant (CG) radio resource, or semi-persistent scheduling (SPS) resources (S001) . The NW 20 receives the AI-related UE capability information 400 of a user equipment (UE) reported through AI-related UE capability reporting (S002) . For example, the NW 20 comprises a base station, such as a gNB.
In some embodiments of the disclosure, the UE collects AI-related UE capability information. Collecting the AI-related UE capability information may include detecting a change of the AI-related UE capability information of the UE, and reporting the changed AI-related UE capability information.
In some embodiments of the disclosure, the AI-related UE capability reporting may be performed in response to a message transmitted from a network device, such as the base station. The message comprises a request or configuration message for AI-related UE capability reporting.
In some embodiments of the disclosure, the AI-related UE capability information may be transmitted in a UAI message, an RRCReconfigurationComplete message, an RRCResumeComplete message, or a dedicated message. The dedicated message is used to convey the AI-related UE capability information.
In some embodiments of the disclosure, a specific purpose (newly defined) of the UAI message may be used to indicate that the UAI carries the AI-related UE capability information. The configuration of the UAI message may be transmitted by the base station and conveyed in a RRCReconfiguration message.
In some embodiments of the disclosure, configuration of the UAI message may include one or more of:
■ an applicable model configuration for the UE to report AI-related UE capability information of changes in applicable model detected by the UE;
■ an applicable functionality configuration for the UE to report AI-related UE capability information of changes in applicable functionality detected by the UE;
■ a memory usage configuration for the UE to report AI-related UE capability information of changes in memory usage detected by the UE;
■ a UE battery configuration for the UE to report AI-related UE capability information of changes in a UE battery status detected by the UE;
■ a scenario configuration for the UE to report AI-related UE capability information of changes in a scenario detected by the UE;
■ a site configuration for the UE to report AI-related UE capability information of changes in a site detected by the UE;
■ a dataset configuration for the UE to report AI-related UE capability information of changes in a dataset detected by the UE;
■ a cell configuration for the UE to report AI-related UE capability information of changes in a cell detected by the UE; and
■ a zone configuration for the UE to report AI-related UE capability information of changes in a zone detected by the UE.
The configuration of the UAI message may include a prohibit timer for each type of AI-related UE capability information.
The applicable model configuration may include one or both of a maximum number of applicable models and a minimum number of applicable models. The applicable functionality configuration may include one or both of a maximum number of applicable functionalities and a minimum number of applicable functionalities.
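As a non-limiting illustration, the per-type UAI configuration fields discussed above (a prohibit timer and maximum/minimum numbers of applicable models) may be sketched as follows. The field names are paraphrases chosen for the sketch, not standardized information elements.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ApplicableModelConfig:
    prohibit_timer_s: float                      # prohibit timer for this information type
    max_applicable_models: Optional[int] = None  # cap on models in one report
    min_applicable_models: Optional[int] = None  # floor on models in one report

def models_to_report(applicable_models: List[str],
                     cfg: ApplicableModelConfig) -> List[str]:
    """Clip the reported model list to the configured maximum, if any."""
    if cfg.max_applicable_models is not None:
        return applicable_models[:cfg.max_applicable_models]
    return applicable_models
```

For example, with `max_applicable_models=2`, a UE holding three applicable models would include only the first two in one submission.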
In some embodiments of the disclosure, the base station transmits configuration of AI-related UE capability reporting, and the UE receives configuration of AI-related UE capability reporting. The configuration of AI-related UE capability reporting may include one or more instances of the following information:
■ an AI-related capability configured ID: a configuration or information used to identify an AI-related capability configuration;
■ a requested AI-related UE capability: a configuration or information that indicates one or more instances of AI-related UE capability information requested by a network;
■ an AI-related UE capability reporting configuration ID: a configuration or information used to identify an AI-related UE capability reporting configuration;
■ an AI-related UE capability reporting type: a configuration or information used to indicate one of AI-related UE capability reporting types;
■ a reference signaling type: a configuration or information used to indicate a type of reference signaling;
■ a time to trigger: a configuration or information that specifies time during which specific criteria for the trigger event needs to be met in order to trigger an AI-related UE capability reporting;
■ an activating time offset: a configuration or information that specifies a time offset according to which an AI-related operation may be activated a time offset after a trigger event;
■ a duration time of AI-related UE capability reporting: a configuration or information that specifies a duration of time during which the UE may be to report at least one requested AI-related UE capability;
■ a server identification: a configuration or information used to identify a server which may be capable of AI model management;
■ a reason of requesting AI-related UE capability: a configuration or information that indicates a reason of requesting AI-related UE capability information; and
■ a threshold: a configuration or information that specifies a newly defined threshold for detecting a change of memory usage of the UE.
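The reporting-configuration information elements enumerated above may be grouped, purely for illustration, into a container such as the following. Every field name here is a paraphrase of the listed element, assumed for the sketch rather than defined by the disclosure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AiCapabilityReportingConfig:
    capability_config_id: int                  # AI-related capability configured ID
    requested_capabilities: List[str]          # e.g., ["applicable_model", "memory_usage"]
    reporting_config_id: int                   # AI-related UE capability reporting configuration ID
    reporting_type: str                        # "periodic" or "event-triggered"
    rs_type: Optional[str] = None              # e.g., "CSI-RS", "SSB"
    time_to_trigger_ms: Optional[int] = None
    activating_time_offset_ms: Optional[int] = None
    reporting_duration_ms: Optional[int] = None
    server_id: Optional[str] = None
    request_reason: Optional[str] = None
    memory_change_threshold: Optional[float] = None  # the "threshold" element
```

A configuration instance would carry only the elements the network chooses to include, the optional fields defaulting to absent.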
In some embodiments of the disclosure, each UE capability in the (requested) AI-related UE capability may be a capability element and identified at element-based granularity. A set/group of UE capabilities in the requested AI-related UE capability may be a capability set/group and identified at set/group-based granularity.
In some embodiments of the disclosure, the AI-related UE capability reporting types comprise periodic AI-related information reporting and event-triggered AI-related information reporting.
The base station transmits AI-related capability reporting activation/deactivation. The UE receives AI-related capability reporting activation/deactivation. The AI-related UE capability reporting may be activated in response to the received activation or deactivated in response to the received deactivation. The AI-related capability reporting activation/deactivation may include one or more of:
■ AI-related capability reporting threshold;
■ requested AI-related UE capability;
■ an activating time;
■ a deactivating time;
■ a duration time of AI-related UE capability reporting; and
■ a server identification.
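The activating time, deactivating time, and reporting duration listed above can be combined into a simple activation check, sketched below. Times are abstract ticks and all names are assumptions for the illustration; this is not spec behavior.

```python
from typing import Optional

def reporting_active(now: int, activate_at: int,
                     deactivate_at: Optional[int] = None,
                     duration: Optional[int] = None) -> bool:
    """Reporting is active from the activating time until it is deactivated."""
    if now < activate_at:
        return False  # activation has not yet been reached
    if deactivate_at is not None and now >= deactivate_at:
        return False  # explicit deactivating time reached
    if duration is not None and now >= activate_at + duration:
        return False  # configured reporting duration has elapsed
    return True
```

In this sketch, whichever of the deactivating time or the duration expires first ends the reporting window.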
In some embodiments of the disclosure, the UE transmits a scheduling request (SR) for a radio resource for AI-related UE capability reporting. The base station receives the SR. The scheduling request may be configured with a new scheduling request instance that indicates an SR configuration corresponding to AI-specific data/information. The scheduling request may be a legacy scheduling request with a parameter used to indicate that the data/information of the new transmission associated with the scheduling request may be AI-specific data/information for AI-related UE capability reporting. The scheduling request may be a (newly defined) AI-specific SR which may be used for requesting uplink radio resources for AI-specific data/information.
In some embodiments of the disclosure, a logical channel (LCH) or logical channel group (LCG) may correspond to an SR configuration or AI-specific SR configuration for the transmitted data information, and the LCH or LCG implicitly indicates that the transmitted data information is AI-specific data/information.
In some embodiments of the disclosure, a logical channel (LCH) or logical channel group (LCG) may be included in a new AI-specific buffer status reporting (BSR) Medium Access Control (MAC) control element (CE) for the transmitted data information, and the LCH or LCG implicitly indicates that the transmitted data information is AI-specific data/information.
In some embodiments of the disclosure, the reported AI-related UE capability information may be conveyed in a MAC CE.
In some embodiments of the disclosure, for a reporting type of periodic AI-related information reporting, configuration of AI-related UE capability reporting may include one or more of:
■ information that indicates an interval between two adjacent periodical AI-related UE capability reports;
■ information that indicates a number of AI-related UE capability reports;
■ information that indicates a maximum number of AI/ML models included in the AI-related UE capability reporting;
■ information that indicates the maximum number of AI/ML functionalities included in the AI-related UE capability reporting;
■ information that indicates whether an allow-list that enumerates AI/ML models in an associated requested AI-related UE capability information element is applicable to the AI-related UE capability reporting; and
■ information that indicates whether an allow-list that enumerates AI/ML functionalities in the associated requested AI-related UE capability information element is applicable to the AI-related UE capability reporting.
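The periodic-reporting parameters above (report interval, number of reports, optional allow-list) can be illustrated with the following sketch; the function names and the use of model-ID strings are assumptions made for the example.

```python
from typing import List, Optional

def periodic_report_times(start: int, interval: int, num_reports: int) -> List[int]:
    """Times at which the UE would emit its periodic capability reports."""
    return [start + i * interval for i in range(num_reports)]

def filter_by_allow_list(models: List[str],
                         allow_list: Optional[List[str]]) -> List[str]:
    """Keep only models enumerated in the allow-list, if one is configured."""
    return models if allow_list is None else [m for m in models if m in allow_list]
```

With an interval of 40 ticks and three configured reports starting at tick 0, reports would occur at ticks 0, 40, and 80.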
In some embodiments of the disclosure, for a reporting type of event-triggered AI-related information reporting, a trigger event triggers the AI-related UE capability reporting. The trigger event occurs when the UE detects a change to the applicable model(s) and/or applicable functionality/functionalities of the UE, or after the applicable model(s) and/or applicable functionality/functionalities have changed and persisted for a specified period, or when the applicable model(s) and/or applicable functionality/functionalities have changed and persisted for the specified duration while the number of changed applicable model(s) and/or applicable functionality/functionalities exceeds a maximum/minimum number of applicable model(s) or applicable functionality/functionalities.
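The persistence-based variant of this trigger may be sketched as follows: the event fires only once the detected change has persisted for the specified period and, optionally, the number of changed models/functionalities exceeds a configured bound. All names, and the reading of "exceeds a maximum/minimum number" as a single count bound, are assumptions for the sketch.

```python
from typing import Optional

def event_triggered(change_detected_at: int, now: int, persist_period: int,
                    num_changed: int, max_count: Optional[int] = None) -> bool:
    """True when the change has persisted long enough (and the count bound, if any, is exceeded)."""
    persisted = (now - change_detected_at) >= persist_period
    count_ok = True if max_count is None else num_changed > max_count
    return persisted and count_ok
```

When no count bound is configured, the sketch reduces to the pure persistence condition.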
In some embodiments of the disclosure, for a reporting type of event-triggered AI-related information reporting, a trigger event triggers the AI-related UE capability reporting, and the trigger event occurs when the change of UE memory usage becomes greater than a memory usage threshold.
In some embodiments of the disclosure, for a reporting type of event-triggered AI-related information reporting, a trigger event triggers the AI-related UE capability reporting. An entering condition of the event is satisfied when a UE-detected change of UE memory usage minus a hysteresis parameter is less than a memory usage threshold. A leaving condition of the event is satisfied when the UE-detected change of UE memory usage plus the hysteresis parameter is greater than a memory usage threshold.
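The entering and leaving conditions stated above transcribe directly into code, in the style of 3GPP measurement events. The sketch below follows the conditions exactly as written; parameter names are illustrative.

```python
def entering(change: float, hys: float, threshold: float) -> bool:
    # entering condition: change of memory usage minus hysteresis is less than the threshold
    return change - hys < threshold

def leaving(change: float, hys: float, threshold: float) -> bool:
    # leaving condition: change of memory usage plus hysteresis is greater than the threshold
    return change + hys > threshold
```

With a threshold of 10 and a hysteresis of 2, the entering condition holds for changes below 12 and the leaving condition for changes above 8, so the overlap prevents rapid toggling between the two states.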
Embodiment 1: Procedure of AI-related UE capability reporting
The information of AI-related UE capability can be reported through a UAI message, an RRCReconfigurationComplete/RRCResumeComplete message, or another newly defined dedicated message. Embodiments of procedures of AI-related UE capability reporting are detailed in the following.
Embodiment 1.1: An approach of AI-related UE capability reporting based on UAI:
The AI-related UE capability information can be carried in a UAI message from the UE to the NW. Since the AI-related UE capability is a new concept or a new feature, in order to achieve information synchronization between the UE and the NW by the UAI message, the legacy UAI procedure requires enhancements, including the purpose and configuration of the UAI.
A specific purpose (newly defined) of this UAI procedure is for the UE to inform the network of its AI-related information, including AI-related UE capability information.
In the RRCReconfiguration message, certain new configuration information must be introduced as the corresponding configuration of the UAI for AI-related information of the UE. The new configuration information of RRCReconfiguration includes one or more of the following:
applicableModelconfig: An applicable model configuration is a configuration for the UE to report AI-related UE capability information and to notify the network of changes in the applicable model detected by the UE.
applicableFuncationalityconfig: An applicable functionality configuration is a configuration for the UE to report AI-related UE capability information and to notify the network of changes in the applicable functionality detected by the UE.
UEmemoryuseageconfig: A memory usage configuration is a configuration for the UE to report AI-related UE capability information and to notify the network of changes in memory usage detected by the UE.
UEbatteryconfig: A UE battery configuration is a configuration for the UE to report AI-related UE capability information and to notify the network of changes in the UE battery status detected by the UE.
scenarioconfig: A scenario configuration is a configuration for the UE to report AI-related UE capability information and to notify the network of changes in the scenario detected by the UE.
siteconfig: A site configuration is a configuration for the UE to report AI-related UE capability information and to notify the network of changes in the site detected by the UE.
datasetinfoconfig: A dataset configuration is a configuration for the UE to report AI-related UE capability information and to notify the network of changes in the dataset detected by the UE.
cellconfig: A cell configuration is a configuration for the UE to report AI-related UE capability information and to notify the network of changes in the cell detected by the UE.
Zoneconfig: A zone configuration is a configuration for the UE to report AI-related UE capability information and to notify the network of changes in the zone detected by the UE.
The UE can use any combination of the above information for AI-related UE capability reporting. Alternatively, for example, the UE can use an information element (e.g., UEinternalcondition) to indicate all types of AI-related information (i.e., the applicable model, applicable functionality, UE memory usage, UE battery status, scenario, site, dataset, cell, and zone) or use individual IEs to report the types of AI-related information separately. For each of the configurations, the following one or more parameters may be included.
1) . Configuration of each type of AI-related UE capability information may include one or more of the following:
ProhibitTimer: ProhibitTimer is a prohibit timer of AI-related UE capability information, including a prohibit timer for applicable models, a prohibit timer for applicable functionalities, a prohibit timer for UE memory usage, a prohibit timer for UE battery status, a prohibit timer for scenario, a prohibit timer for site, or a prohibit timer for dataset. Upon expiration of the prohibit timer, the UE reports the AI-related UE capability information.
2) . In the applicableModelconfig and applicableFuncationalityconfig, one or more of the following may also be included:
The maximum number of applicable models/functionalities: The configuration or information indicates the maximum number of applicable models and/or applicable functionalities that the UE can report as a portion of the AI-related information in one submission.
The minimum number of applicable models/functionalities: The configuration or information indicates the minimum number of applicable models and/or applicable functionalities that the UE shall report as a portion of the AI-related information in one submission.
Initiation condition: As shown in FIG. 4, UE capable of providing AI-related information in RRC_CONNECTED state may initiate a procedure to report AI-related information to NW in several cases, including when it is configured to report AI-related information and/or when detecting a change in AI-related information.
For example, if the applicable model has changed, upon initiating the procedure, the UE shall perform the pseudo codes in the following table:
Table 2

Accordingly, if configured to provide applicable model information, the UE evaluates the conditions at level 2> of the pseudo codes. That is, the UE determines:
whether the applicable model’s initiation condition for triggering reporting of applicable model information has been detected and a prohibit timer T is not running; or
whether the current applicable models are different from what is indicated in the last transmission of the UEAssistanceInformation message (i.e., UAI including applicable Model, applicable model ID, and/or applicable model structure) and timer T is not running.
If any of the conditions is affirmed, the UE starts timer T with the timer value set to the applicableModelProhibitTimer (i.e., a value of the prohibit timer for the applicable model) and initiates transmission of the UEAssistanceInformation message to provide the applicable model information.
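The level-2 conditions just described may be rendered in Python as follows. The prohibit timer is reduced to a boolean flag for the sketch, and all names are assumptions; the original pseudo-code table is not reproduced here.

```python
from typing import Set

def maybe_send_uai(configured: bool, initiation_condition: bool,
                   current_models: Set[str], last_reported_models: Set[str],
                   timer_running: bool) -> bool:
    """Return True when the UE should start timer T and send UEAssistanceInformation."""
    if not configured or timer_running:
        return False  # not configured to report, or prohibit timer T still running
    # either the initiation condition was detected, or the applicable models
    # differ from those indicated in the last UEAssistanceInformation message
    return initiation_condition or (current_models != last_reported_models)
```

A True result corresponds to starting timer T with the applicableModelProhibitTimer value and transmitting the message.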
As shown in FIG. 4, in a procedure of AI-related information reporting, the NW transmits configuration of AI-related information reporting to the UE. The UE receives the configuration of AI-related information reporting from the NW and provides AI-related information in RRC_CONNECTED state to the NW.
Step 1: The UE receives a configuration of AI-related UE capability information in an RRCReconfiguration message from the NW. The configuration includes an AI-related UE capability configuration that includes one or more instances of the following information elements: the specific purpose (newly defined), applicableModelconfig, applicableFuncationalityconfig, UEmemoryuseageconfig, UEbatteryconfig, scenarioconfig, siteconfig, datasetinfoconfig, cellconfig, Zoneconfig, ProhibitTimer, maximum number of applicable models/functionalities, and minimum number of applicable models/functionalities.
Step 2: Having received the configuration of AI-related UE capability conveyed in the RRCReconfiguration message, the UE detects the change of AI-related UE capability information of the UE, such as applicable model, applicable functionality, scenario, etc., and sends the changed AI-related UE capability information to the NW. Moreover, the criterion for the change of AI-related UE capability information is defined in Embodiment 3.
Embodiment 1.2: An approach of AI-related UE capability reporting
Another example of the procedure of AI-related information reporting is provided in this embodiment, which may be handled similarly to another snapshot capability, "needForGap".
In order to enable the AI-related information reporting in UE-sided model and UE-part of two-sided model, one or more instances of the following signaling/information may be transmitted over the air interface between NW and UE for different methods:
1) AI-related UE capability reporting configuration;
2) AI-related UE capability reporting activation/deactivation;
3) Resource request for AI-related UE capability reporting; and
4) AI-related UE capability reporting.
The details are provided in the following:
AI-related UE capability reporting configuration: One or more instances of the following information may be included in the configuration of AI-related UE capability reporting (referred to as AI-related UE capability reporting configuration) : AI-related capability configured ID, Requested AI-related UE capability, AI-related UE capability reporting configuration, AI-related UE capability reporting configuration ID, AI-related UE capability reporting type, Reference signaling (RS) type, Time to trigger, Activating time offset, Duration time of AI-related UE capability reporting, Server identification, Reason of requesting AI-related UE capability, and Threshold1.
AI-related capability configured ID: The configuration or information is used to identify an AI-related capability configuration. The configuration or information provides association between a Requested AI-related UE capability and an AI-related UE capability reporting configuration.
Requested AI-related UE capability: The configuration or information indicates one or more instances of the AI-related UE capability information requested by the NW, such as additional conditions, UE internal conditions, applicable functionality, applicable model and others related to a UE. The capabilities in the Requested AI-related UE capability can be identified based on capability element or capability group.
Option 1: Each UE capability in the Requested AI-related UE capability is a capability element and can be identified at element-based granularity.
Option 2: A set/group of UE capabilities in the Requested AI-related UE capability is a capability set/group and can be identified at set/group-based granularity.
AI-related UE capability reporting configuration ID: The configuration or information is used to identify an AI-related UE capability reporting configuration.
AI-related UE capability reporting type: The configuration or information is used to indicate one of AI-related UE capability reporting types (e.g., periodic reporting or event-triggered reporting) .
(a) . For periodic AI-related information reporting, the AI-related UE capability reporting type may include an interval or period for AI-related information reporting. The period can be fixed or configured.
(b) . For event-triggered AI-related information reporting, the AI-related UE capability reporting type may include trigger event ID and/or specify trigger events which can be fixed or configured as detailed in Embodiment 3.
Reference signaling (RS) type: The configuration or information is used to indicate the type of reference signaling, which can be CSI-RS, SSB, AI-specific CSI-RS, or AI-specific SSB.
Time to trigger: The configuration or information specifies the time during which specific criteria for the trigger event need to be met in order to trigger an AI-related UE capability reporting.
Activating time offset: The configuration or information specifies a time offset according to which an AI-related operation, such as AI-related UE capability reporting, is activated the time offset after a trigger event. The AI-related operation includes AI-related measurement (e.g., data collection, model monitoring, etc., for LCM operations, which can be AI-specific measurement or legacy measurement; the measurement can be L1-measurement, L2-measurement, or L3-measurement), and/or data collection for AI-related UE capability information, and/or reporting of the AI-related UE capability information.
Duration time of AI-related UE capability reporting: The configuration or information specifies a duration of time during which the UE is to report one or more requested AI-related UE capabilities. Based on at least one of the starting time, time duration, or a time offset, the AI-related UE capability reporting is activated. For example: 1) the NW sends the message that carries the AI-related UE capability reporting configuration; 2) the UE receives the message that carries the AI-related UE capability reporting configuration; 3) the UE is activated to report the AI-related capability; and 4) the UE performs AI-related UE capability reporting.
Server identification: The configuration or information is used to identify a server which is able to manage AI/ML models, including model training, such as OTT, OAM, MAO, etc.
Reason of requesting AI-related UE capability: The configuration or information indicates a reason for requesting AI-related UE capability information, which can be: 1) the NW does not receive any AI-related UE capability in a period timed by a timer; 2) LCM operations are enabled; or 3) system performance becomes worse or better.
Threshold1: The configuration or information specifies a newly defined threshold for detecting change of memory usage of the UE. The AI-related UE capability reporting may be triggered in response to a change in memory usage exceeding the specified threshold.
AI-related capability reporting activation/deactivation: The configuration or information includes one or more of the trigger events as defined in Embodiment 3, AI-related capability reporting threshold, Requested AI-related UE capability, Activating time, Deactivating time, Duration time of AI-related UE capability reporting, Server identification, etc.
AI-related capability reporting threshold: The configuration or information is detailed in Embodiment 3.
Requested AI-related UE capability: The configuration or information specifies the AI-related UE capability requested by NW.
Activating time: The configuration or information specifies the activation timing of reference signaling, and/or the activation timing for the UE to gather and/or report its AI-related UE capability information, and/or LCM operations.
Deactivating time: The configuration or information specifies the deactivation timing of reference signaling, and/or the deactivation timing for the UE to gather and/or report its AI-related UE capability information, and/or LCM operations.
Duration time of AI-related UE capability reporting: The configuration or information specifies a duration of time during which the UE should report the requested AI-related UE capability. Based on at least one of the starting time, the time duration, or a time offset, the AI-related UE capability reporting is activated.
Server identification: The configuration or information is used to identify a server which is able to manage AI/ML models, including model training, such as OTT, OAM, MAO, etc.
Resource request for AI-related UE capability reporting: Operations of the resource request for AI-related UE capability reporting include scheduling request (SR), buffer status reporting (BSR), and physical uplink shared channel (PUSCH) resource allocation (for the BSR and the AI-related UE capability information). The details, including signaling and elements, are provided in Embodiment 2.
AI-related UE capability reporting: The field includes AI-related UE capability information for synchronization among entities in the network, and can be carried in UCI, PUCCH, PUSCH, a MAC CE, or an RRC message. The AI-related UE capability information may include one or more instances of the following information: AI-related UE capability reporting configuration ID, the information of the Requested AI-related UE capability, AI-related UE capability reporting type, reference signaling (RS) type, and trigger event.
Note that one or more instances of the following information may be predefined: Time to trigger, Duration time of AI-related UE capability reporting, Threshold1.
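To make the timing fields above concrete, the following is a minimal sketch of how a UE might decide whether AI-related capability reporting is currently active, given a trigger event, the Activating time offset, and the optional Duration time. The function name and time units are illustrative assumptions, not part of the disclosure.

```python
def reporting_active(now, trigger_time, activating_offset, duration=None):
    """Return True if AI-related UE capability reporting is active at `now`.

    trigger_time:      time of the trigger event (e.g., reception of the
                       message carrying the reporting configuration)
    activating_offset: 'Activating time offset' -- reporting starts this
                       long after the trigger event
    duration:          'Duration time of AI-related UE capability
                       reporting'; None means reporting stays active
                       once started
    All values are in the same (hypothetical) time unit, e.g., ms.
    """
    start = trigger_time + activating_offset
    if now < start:
        return False           # offset has not yet elapsed
    if duration is None:
        return True            # no duration configured
    return now < start + duration
```

For example, with a trigger at time 0, an offset of 10, and a duration of 20, reporting is active only in the window [10, 30).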
With reference to FIG. 5, based on the defined information and signaling above, in an example of a procedure of AI-related UE capability reporting, the network (NW, e.g., a gNB or one of the CN functions) can enquire about the AI-related UE capability information of the UE by sending a request message to the UE. The procedure is detailed in the following:
Step1: The UE receives Msg1 AI-UECapability Enquiry from the NW, which is used for requesting AI-related UE capability reporting. The Msg1 AI-UECapability Enquiry may include one or more instances of the following information: AI-related capability configuration ID, Requested AI-related UE capability, AI-related UE capability reporting configuration, AI-related UE capability reporting configuration ID, AI-related UE capability reporting type, Reference signaling (RS) type, Time to trigger, Activating time offset, Duration time of AI-related UE capability reporting, Server identification, and Reason of requesting AI-related UE capability. The Msg1 may be a dedicated RRC message, a MAC CE, or DCI, or may be carried in PDCCH, PDSCH, etc. Msg1 stands for the first message.
Step2: Optionally, the UE may receive Msg2 AI-UECapability Activation/Deactivation from the NW, which is used to activate/deactivate AI-related UE capability reporting, and/or to activate/deactivate some operations for gathering AI-related UE capability information, including related measurement (AI-specific measurement, L1-measurement, L3-measurement, or a newly defined measurement), and/or data collection, and/or model monitoring. Msg2 stands for the second message. The Msg2 AI-UECapability Activation/Deactivation may include one or more instances of the following information: Requested AI-related UE capability, Activating time, Deactivating time, Duration time of AI-related UE capability reporting, and Server identification. The Msg2 may be a MAC CE, DCI, an RRC message, PDCCH, or PDSCH.
Step3: Optionally, once the UE receives the corresponding Msg1 (or after the Activating time offset included in Msg1, or upon the activation/deactivation Msg2), the UE gathers the enquired AI-related UE capability information, such as scenario, site, dataset, UE memory usage, battery status, applicable model, applicable functionality, etc. These types of AI-related UE capability information may comprise either UE-internal information (previously existing within the UE) or information obtained by the UE through measurements, computations, sensing, etc.
Step4: If the criterion of the periodic AI-related information reporting and/or the event-triggered AI-related information reporting is satisfied, the UE reports a Msg3 that carries the Requested AI-related UE capability information to the network. Msg3 stands for the third message. The periodicity of the periodic AI-related information reporting and/or the trigger event of the event-triggered AI-related information reporting are configured in Msg1 and/or Msg2, or are fixed/predefined. One or more instances of the following information may be included in Msg3: AI-related UE capability reporting configuration ID, the information of the Requested AI-related UE capability, AI-related UE capability reporting type, reference signaling (RS) type, and trigger event.
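The four-step Msg1/Msg2/Msg3 exchange above can be sketched, at a purely illustrative level, as UE-side state handling. The class, method names, and dictionary-based message contents are assumptions made for the sketch; they do not correspond to specified signaling structures.

```python
# Hypothetical UE-side handling of the NW-initiated capability enquiry.
class UE:
    def __init__(self):
        self.requested = None      # 'Requested AI-related UE capability' from Msg1
        self.activated = False
        self.gathered = None

    def on_msg1_enquiry(self, requested_capability):
        # Step1: store what the NW is asking for
        self.requested = requested_capability

    def on_msg2_activation(self, activate):
        # Step2 (optional): NW activates/deactivates the reporting
        self.activated = activate

    def gather(self, capability_info):
        # Step3: collect only the enquired information (memory usage,
        # battery status, applicable model, ...)
        self.gathered = {k: v for k, v in capability_info.items()
                         if self.requested is None or k in self.requested}

    def build_msg3(self):
        # Step4: report only when activated and information is available
        if not self.activated or self.gathered is None:
            return None
        return {"requested_capability": self.gathered}
```

For example, a UE asked only for memory usage and battery status filters out other gathered items before building Msg3.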
Embodiment 1.3: An approach of AI-related UE capability reporting based on UE initiation.
Different from Embodiment 1.1 and Embodiment 1.2, in Embodiment 1.3 the UE is able to actively initiate the AI-related UE capability reporting. FIG. 6 illustrates an example of a procedure of UE-initiated AI-related UE capability reporting, which is applicable to a UE-sided model or the UE part of a two-sided model, based on the information definitions in Embodiment 1.2. The example of the procedure is detailed in the following.
Step1’ : Optionally, a UE gathers and stores the AI-related UE capability information, such as scenario, site, dataset, UE memory usage, battery status, applicable model, applicable functionality, etc. These types of AI-related UE capability information may be obtained by the UE through measurement, computing, sensing, etc.
Step2’ : The UE reports a Msg1 that carries the AI-related UE capability information to the NW based on the scheme of periodic AI-related information reporting or the scheme of event-triggered AI-related information reporting. The periodicity of the periodic AI-related information reporting and/or the trigger event of the event-triggered AI-related information reporting is configurable, fixed/predefined, or decided by the UE itself. One or more instances of the following information may be included in Msg1: information of the Requested AI-related UE capability, AI-related UE capability reporting type, and reference signaling (RS) type. The Msg1 can be a dedicated message, such as an RRC message, DCI, etc.
Embodiment 2: The mechanism of AI-related UE capability reporting.
Embodiment 1 gives some examples of the procedure of AI-related UE capability reporting. Since radio resources are limited, it may be necessary to study the scheduling of resources for AI-related UE capability reporting.
The legacy scheduling mechanism has three methods for requesting Uplink Shared Channel (UL-SCH) resources: 1) Scheduling Request (SR), 2) Buffer Status Reporting (BSR), and 3) the Random Access Channel (RACH) process.
More specifically, in cases where no PUSCH resource is available but an SR PUCCH resource exists, the UE may send an SR to the NW to request a PUSCH resource allocation for buffer status reporting (BSR). The SR notifies the NW that the UE has data information to transmit, and the BSR then informs the NW of the exact size of the data information. Subsequently, the NW allocates a suitable resource for uplink transmission of the data information.
If a PUSCH resource is available, the UE directly requests a suitable PUSCH resource for transmitting the data information through BSR.
However, if neither SR PUCCH nor PUSCH resources are available, the UE initiates the RACH process to request an uplink grant from the NW.
For the AI-related UE capability information, such as the applicable model and applicable functionality, two aspects require consideration. Firstly, AI-related UE capability reporting may have a latency requirement. Secondly, the AI model and its related functionality differ from traditional communication data and information. As a result, certain modifications to the legacy scheduling mechanism need to be taken into account.
■ Use SR to request UL-SCH resource for AI-related UE capability information.
In this case, once the UE has AI-related UE capability information to report to the NW, the UE sends a scheduling request (SR) to request an uplink resource.
A legacy SR configuration corresponds to one or more logical channels, and/or to SCell beam failure recovery, and/or to consistent LBT failure recovery, and/or to beam failure recovery of BFD-RS set(s). Therefore, to specify that the data/information is AI-specific data/information, the following options are available:
Option 1: Among scheduling request instances, a new scheduling request instance for AI-specific data/information transmission is added to schedulingRequestId. A value of the SR ID schedulingRequestId represents the new scheduling request instance, which indicates an SR configuration corresponding to AI-specific data/information. An SR of the new scheduling request instance is used for uplink transmission of AI-specific data/information for AI-related UE capability reporting.
Option 2: A new parameter is introduced into the legacy SR configuration, which is used to indicate that the data/information of a new transmission associated with an SR is AI-specific data/information for AI-related UE capability reporting. An SR with the new parameter is used for uplink transmission of AI-specific data/information for AI-related UE capability reporting.
Option 3: An AI-specific SR (newly defined) is introduced, which is used for requesting UL-SCH resources for AI-specific data/information. The configuration of the AI-specific SR may include AI-sr-ProhibitTimer (per AI-SR configuration) and/or AI-sr-TransMax (per AI-SR configuration), which may be configured by an RRC message. The AI-specific SR (newly defined) is used for uplink transmission of AI-specific data/information for AI-related UE capability reporting.
AI-sr-ProhibitTimer: The AI-sr-ProhibitTimer, if configured, indicates a prohibit timer for SR transmission on PUCCH; the value is in units of milliseconds (ms) and can be, e.g., 1 ms or 2 ms;
AI-sr-TransMax: The AI-sr-TransMax, if configured, indicates the maximum number of SR transmissions; the value can be, e.g., 4, 8, or 16.
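The interaction between AI-sr-ProhibitTimer and AI-sr-TransMax in Option 3 can be sketched as follows, loosely mirroring the legacy MAC SR procedure: while the prohibit timer runs, no SR is sent; once the transmission counter reaches the maximum, the UE falls back to random access. The class name, return values, and default parameter values are illustrative assumptions.

```python
# Sketch of the AI-specific SR transmission logic implied by
# AI-sr-ProhibitTimer and AI-sr-TransMax (values/names are illustrative).
class AiSr:
    def __init__(self, prohibit_timer_ms=2, trans_max=4):
        self.prohibit_timer_ms = prohibit_timer_ms
        self.trans_max = trans_max
        self.sr_counter = 0
        self.prohibit_until = 0   # time before which no SR may be sent

    def try_send(self, now_ms):
        """Return 'SR', 'WAIT', or 'RACH' for a pending AI-specific SR."""
        if now_ms < self.prohibit_until:
            return "WAIT"                     # prohibit timer still running
        if self.sr_counter >= self.trans_max:
            return "RACH"                     # fall back to random access
        self.sr_counter += 1
        self.prohibit_until = now_ms + self.prohibit_timer_ms
        return "SR"
```

With trans_max=2 and a 2 ms prohibit timer, the UE sends an SR at t=0, must wait at t=1, sends a second SR at t=2, and falls back to RACH on the next attempt.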
In an embodiment, a logical channel (LCH) or logical channel group (LCG) for the transmitted data information can also implicitly indicate that the transmitted data information is AI-specific data/information. For example, a new LCH/LCG is defined to indicate so. The SR configuration may correspond to one or more AI-specific logical channels/logical channel groups and implicitly indicate that an SR of the SR configuration is used to request resource scheduling for data information and that the data information to be transmitted in association with the SR is AI-specific data/information. Hence, the possible solutions for triggering an SR for AI-related information reporting are listed below:
Solution#1: AI-specific LCH/LCG is configured in SR configuration or AI-specific SR configuration. An AI-specific LCH/LCG is used for transmission of AI-specific data/information for AI-related UE capability reporting.
Solution#2: A new AI-specific BSR MAC CE is introduced, which includes the AI-specific LCH/LCG information. As shown in FIG. 7, an example of the AI-specific BSR MAC CE includes a field to carry an AI-specific LCH/LCG ID and a field to carry a buffer size. A new AI-specific BSR MAC CE with the AI-specific LCH/LCG is used for transmission of AI-specific data/information for AI-related UE capability reporting.
(1) . LCH/LCG-AI ID: The field indicates which AI-specific logical channel/logical channel group carries the AI-related UE capability reporting information.
(2) . Buffer size: The field indicates the data volume of AI-related UE capability reporting information.
Solution#3: A new AI-specific BSR MAC CE is introduced, which includes a field of an AI-specific indicator or a field of LCH/LCG-AI ID. FIG. 8 shows another example of the AI-specific BSR MAC CE. A new AI-specific BSR MAC CE with the AI-specific indicator is used for transmission of AI-specific data/information for AI-related UE capability reporting.
(1) . AI-specific: The field is an indicator that indicates the BSR MAC CE is AI-specific BSR MAC CE;
(2) . LCH/LCG ID: The field indicates a logical channel/logical channel group that carries the AI-related UE capability reporting information.
(3) . Buffer size: The field indicates the data volume of the AI-related UE capability reporting information.
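The field layout of the AI-specific BSR MAC CE of FIG. 8 (AI-specific flag, LCH/LCG ID, buffer size) can be illustrated with a packing sketch. The exact bit widths are not stated in the text, so the 1-bit flag, 7-bit ID, and 8-bit buffer-size fields below are assumptions chosen only for illustration.

```python
# Illustrative packing/unpacking of the AI-specific BSR MAC CE of FIG. 8.
# Assumed layout: octet 0 = [AI-specific flag (1 bit) | LCH/LCG ID (7 bits)],
# octet 1 = buffer size. Real field widths would be defined by the spec.
def pack_ai_bsr(ai_specific: bool, lcg_id: int, buffer_size: int) -> bytes:
    assert 0 <= lcg_id < 128 and 0 <= buffer_size < 256
    octet0 = (int(ai_specific) << 7) | lcg_id
    return bytes([octet0, buffer_size])

def unpack_ai_bsr(mac_ce: bytes):
    ai_specific = bool(mac_ce[0] >> 7)
    lcg_id = mac_ce[0] & 0x7F
    return ai_specific, lcg_id, mac_ce[1]
```

Round-tripping a MAC CE through pack/unpack recovers the flag, the LCH/LCG ID, and the buffer size.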
Correspondingly, a new trigger event may be introduced to trigger the BSR (including the new BSR or the legacy BSR for AI-related UE capability reporting). For logical channel prioritization, when AI-specific data/information newly arrives at a MAC layer of a UE for transmission, the LCH/LCG of the AI-specific data/information can have a priority higher than, lower than, or the same as that of UL data belonging to any other LCH/LCG.
■ Low latency reporting of AI-related UE capability
As discussed above, for some LCM operations, such as model selection, model switching, activation, deactivation, fallback, etc., there may be a low-latency requirement on AI-related UE capability reporting. Meanwhile, the size of the AI-related UE capability information may be small. Hence, in order to report the AI-related UE capability information quickly, some possible solutions are detailed in the following:
Solution#1: AI-related UE capability information is carried in new MAC CE. FIG. 9 and FIG. 10 illustrate examples of the new AI-specific MAC CE that carries AI-specific data.
(1). LCH/LCG ID, LCH/LCG 0 to LCH/LCG m: Each of these fields indicates a logical channel or logical channel group that carries the AI-related UE capability reporting information. The logical channel or logical channel group can be a legacy LCH/LCG or a specific LCH/LCG (newly defined) for AI-related UE capability reporting information.
(2) . Num: The field indicates the number of the logical channels or logical channel groups.
(3). R: The field is a reserved field.
(4) . AI-specific data 1 to AI-specific data m: Each of the fields carries AI-specific data for AI-related UE capability reporting.
Solution#2: A specific BSR is used to request UL-SCH resources for uplink transmission of AI-related UE capability information. The NW allocates or configures a dedicated BSR MAC CE resource for buffer status reporting of the AI-related UE capability reporting information. Once the UE has AI-related UE capability information to report to the NW, the UE sends a BSR to request an uplink resource for the AI-related UE capability reporting information. FIG. 11 shows an example of a procedure of AI-related UE capability reporting.
Step1: The UE sends Msg1 to the NW. The Msg1 is a Buffer Status Report (BSR) used to request an uplink resource for the AI-related UE capability reporting information. The NW receives the Msg1 BSR. Moreover, the resource for periodically transmitting the BSR is configured by the NW.
Step2: The UE receives Msg2 from the NW; the Msg2 includes an uplink grant of an uplink resource for AI-related UE capability reporting.
Step3: The UE sends the Msg3 to the NW using the uplink resource allocated in Step2. The Msg3 carries the AI-related UE capability reporting information.
In some embodiments, the AI-related UE capability reporting information can also be carried in:
SR (UCI, PUCCH) , as aforementioned in Embodiment 2.
SDT, or Msg1 in 2-Step RACH, or Msg3 in 4-Step RACH.
Embodiment 3: Reporting criterion of AI-related UE capability reporting
AI-related UE capability information dynamically reported from the UE to the NW may be transmitted using the scheme of periodic AI-related information reporting and/or the scheme of event-triggered AI-related information reporting. The reporting criteria of both schemes are detailed in the following.
■ The reporting type of periodic AI-related information reporting:
For periodic reporting of AI-related UE capability information, the configuration of AI-related UE capability reporting can be configured or predefined, including one or more of reportInterval, reportAmount, maxNrofmodel-ToReport, maxNroffunctionality-ToReport, useAllowedModelList, and useAllowedFunctionalityList.
reportInterval: The field indicates the interval between two adjacent periodical reports.
reportAmount: The field indicates the number of AI-related UE capability reports. The field is applicable for both reporting types.
maxNrofmodel-ToReport: The field indicates the maximum number of AI/ML models included in the AI-related UE capability reporting.
maxNroffunctionality-ToReport: The field indicates the maximum number of AI/ML functionalities included in the AI-related UE capability reporting.
useAllowedModelList: The field indicates whether an allow-list that enumerates AI/ML models in an associated Requested AI-related UE capability information element is applicable to the AI-related UE capability reporting.
useAllowedFunctionalityList: The field indicates whether an allow-list that enumerates AI/ML functionalities in an associated Requested AI-related UE capability information element is applicable to the AI-related UE capability reporting.
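How the periodic-reporting fields above could be applied by a UE is sketched below: the report times follow from reportInterval and reportAmount, and the model list is restricted by an allow-list and truncated to the configured maximum. The function names and the truncation behavior are assumptions for illustration; only the field names come from the text.

```python
# Sketch applying reportInterval / reportAmount / maxNrofmodel-ToReport /
# useAllowedModelList (behavior is an assumption, not a specified procedure).
def periodic_report_schedule(start, report_interval, report_amount):
    """Times of the periodic AI-related UE capability reports."""
    return [start + n * report_interval for n in range(report_amount)]

def models_to_report(applicable_models, max_nrof_model, allowed=None):
    """Restrict to the allow-list (if one applies) and cap the count."""
    if allowed is not None:
        applicable_models = [m for m in applicable_models if m in allowed]
    return applicable_models[:max_nrof_model]
```

For example, reportInterval=10 and reportAmount=3 yield reports at times 0, 10, and 20.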
■ The reporting type of event-triggered AI-related information reporting:
In wireless AI-based systems, frequent AI-related UE capability reporting will result in significant overhead, resource wastage, transmission conflicts, and energy inefficiency. Therefore, an event-triggered scheme for the AI-related UE capability reporting is reasonable. Based on different AI-related UE capabilities (e.g., applicable model, applicable functionality, UE internal conditions, additional conditions, etc.), some potential trigger events are illustrated in the following:
Event A: Applicable model (s) and/or functionality/functionalities change
Event A occurs when the applicable model/models and/or applicable functionality/functionalities at the UE undergo changes. Event A triggers AI-related UE capability reporting to inform the NW of the updated applicable model/models, applicable functionality/functionalities, and/or their corresponding information. The following presents the potential criterion options for Event A:
Option 1: Upon detecting change to the applicable model (s) and/or applicable functionality/functionalities, the UE shall initiate AI-related UE capability reporting.
For example, in FIG. 12, if the duration time of AI-related UE capability reporting is configured and the applicable model/models and/or applicable functionality/functionalities change at T1, Event A occurs, representing the change. Event A triggers the AI-related UE capability reporting, where the exact reporting time can be T1 or the end time of the duration Td. Alternatively, if the duration time of AI-related UE capability reporting is not configured, the exact reporting time can be T1.
Option 2: AI-related UE capability reporting will be triggered after the applicable model (s) and/or applicable functionality/functionalities have been changed and persist for a specified period.
In this option, once the current applicable model/models and/or applicable functionality/functionalities change and the change lasts for a period specified by the Time to trigger, which is defined in Embodiment 1, an event occurs representing the change and its lasting time. The event triggers AI-related UE capability reporting. Moreover, if the duration time of AI-related UE capability reporting is also configured and the Time to trigger expires within the duration time of AI-related UE capability reporting, the event occurs and triggers the reporting. The exact reporting time can be the end of the Time to trigger or the end time of the duration. If the Time to trigger expires outside the duration time of AI-related UE capability reporting, the situation does not trigger the reporting. Alternatively, if the duration time of AI-related UE capability reporting is not configured, the exact reporting time can be the end of the Time to trigger.
Option 3: AI-related UE capability reporting will be triggered once the applicable model (s) and/or applicable functionality/functionalities have changed and the change has persisted for a specified duration, while the number of changed applicable model (s) and/or applicable functionality/functionalities exceeds a threshold specified by the maximum/minimum number of applicable model (s) or applicable functionality/functionalities.
In this option, note that the actual trigger time of the AI-related UE capability reporting can be determined based on the time at which the threshold is exceeded and/or the time at which the change has lasted for the specified duration of time for AI-related UE capability reporting, as specified by the duration time defined in Embodiment 1. Subsequently, the AI-related UE capability reporting will be triggered when an event representing Option 3 occurs.
Option 4: The AI-related UE capability reporting will be triggered when the applicable model (s) and/or applicable functionality/functionalities have been modified, or when any combination of Options 1, 2, and 3 is met.
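The Option 2 criterion above (change must persist for the Time to trigger, and, when a reporting duration is configured, the Time to trigger must expire inside that window) can be sketched as a small timing check. The function name and argument names are illustrative assumptions.

```python
# Sketch of the Event A / Option 2 criterion: the change at `change_time`
# must persist for 'Time to trigger' (ttt); when a duration Td is
# configured, the ttt must expire inside the window [window_start,
# window_start + td] for the reporting to be triggered.
def event_a_option2(change_time, ttt, window_start=0, td=None):
    """Return the reporting time, or None if reporting is not triggered."""
    trigger_time = change_time + ttt        # change persisted for ttt
    if td is None:
        return trigger_time                 # no duration configured
    if window_start <= trigger_time <= window_start + td:
        return trigger_time                 # ttt expires within the window
    return None                             # outside the window: no report
```

For a change at time 5 with a Time to trigger of 3, reporting occurs at time 8 if the window covers it, and is suppressed when the window ends earlier.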
Event B: UE memory usage changes
Event B describes changes in the current UE memory usage. More specifically, for example, the UE performs AI-related UE capability reporting to the NW if one of the following possible criteria is met.
The change of UE memory usage becomes greater than Threshold1;
Threshold1 is a newly defined threshold for detecting a change in memory usage. If the change of memory usage is greater than Threshold1, an Event B occurs and triggers AI-related UE capability reporting to the NW.
The entering condition of the event is satisfied when condition B-1, as specified below, is fulfilled:
Inequality B-1 (Entering condition) :
Mum -Hys > Threshold1
The leaving condition of the event is satisfied when condition B-2, as specified below, is fulfilled:
Inequality B-2 (Leaving condition) :
Mum + Hys < Threshold1
The variables in the formula are defined as follows:
Mum: The variable refers to a UE-detected change of UE memory usage. The variable Mum can be obtained as the difference between the current memory usage and the latest reported memory usage;
Hys: This is a hysteresis parameter for this event B (which is configured by RRC signaling, including the reporting configuration or other signaling) ;
Threshold1: This is the newly defined threshold of the change of memory usage. A current change in memory usage greater than the threshold will trigger the AI-related UE capability reporting.
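The Event B entering and leaving conditions above can be evaluated directly from Mum, Hys, and Threshold1. Following the usual 3GPP convention for hysteresis-based events, the sketch below uses Mum − Hys > Threshold1 as the entering condition and Mum + Hys < Threshold1 as the leaving condition; the function names are illustrative.

```python
# Sketch of the Event B hysteresis evaluation for the memory-usage change.
def event_b_entering(mum, hys, threshold1):
    """Inequality B-1: entering condition is met."""
    return mum - hys > threshold1

def event_b_leaving(mum, hys, threshold1):
    """Inequality B-2 (conventional direction): leaving condition is met."""
    return mum + hys < threshold1
```

The hysteresis Hys separates the entering and leaving thresholds so that a memory-usage change fluctuating around Threshold1 does not repeatedly toggle the event.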
Embodiment 4: CSI Priority rule for AI-based reporting
In NR system, the priority rules for CSI reporting have been defined, as shown in the following. The CSI reports are associated with a priority value PriiCSI (y, k, c, s) =2·Ncells·Ms·y+Ncells·Ms·k+Ms·c+s, where
- y=0 for aperiodic CSI reports to be carried on PUSCH, y=1 for semi-persistent CSI reports to be carried on PUSCH, y=2 for semi-persistent CSI reports to be carried on PUCCH, and y=3 for periodic CSI reports to be carried on PUCCH;
- k=0 for CSI reports carrying L1-RSRP or L1-SINR, and k=1 for CSI reports not carrying L1-RSRP or L1-SINR;
- c is a serving cell index, and Ncells is the value of the higher layer parameter maxNrofServingCells;
- s is the reportConfigID, and Ms is the value of the higher layer parameter maxNrofCSI-ReportConfigurations.
A first CSI report is said to have priority over a second CSI report if the associated PriiCSI (y, k, c, s) value for the first report is lower than the associated PriiCSI (y, k, c, s) value for the second report.
Two CSI reports are said to collide if the time occupancy of the physical channels scheduled to carry the CSI reports overlap in at least one OFDM symbol and are transmitted on the same carrier. When a UE is configured to transmit two colliding CSI reports, the following applies:
- if the y values are different between the two CSI reports, the following rules apply except for the case when one y value is 2 and the other y value is 3 (for CSI reports transmitted on PUSCH, as described in TS 38.214 Clause 5.2.3; for CSI reports transmitted on PUCCH, as described in TS 38.214 Clause 5.2.4) :
- The CSI report with higher PriiCSI (y, k, c, s) value shall not be sent by the UE.
- otherwise, the two CSI reports are multiplexed or dropped based on the priority values, as described in Clause 9.2.5.2 in TS 38.213.
If a semi-persistent CSI report to be carried on PUSCH overlaps in time with a PUSCH data transmission in one or more symbols on the same carrier, and if the earliest symbol of these PUSCH channels starts no earlier than N2 + d2,1 symbols after the last symbol of the DCI scheduling the PUSCH, where d2,1 is the maximum of the d2,1 values associated with the PUSCH carrying the semi-persistent CSI report and the PUSCH with data transmission, the CSI report shall not be transmitted by the UE. Otherwise, if the timeline requirement is not satisfied, this is an error case.
If a UE may transmit a first PUSCH that includes semi-persistent CSI reports and a second PUSCH that includes an UL-SCH on the same carrier, and the first PUSCH transmission may overlap in time with the second PUSCH transmission, the UE does not transmit the first PUSCH and transmits the second PUSCH. The UE expects that the first and second PUSCH transmissions satisfy the above timing conditions for PUSCH transmissions that overlap in time when one or more of the first or second PUSCH transmissions is in response to a DCI format detection by the UE.
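The legacy priority formula and the collision rule stated above (the report with the higher priority value is not sent) can be sketched as a direct computation. The function names and example parameter values are illustrative; the formula itself follows the definition given in the text.

```python
# Sketch of the legacy CSI priority value and the collision rule:
#   Pri_iCSI(y,k,c,s) = 2*Ncells*Ms*y + Ncells*Ms*k + Ms*c + s
# and, on collision, the report with the HIGHER value is dropped.
def pri_csi(y, k, c, s, n_cells, m_s):
    return 2 * n_cells * m_s * y + n_cells * m_s * k + m_s * c + s

def report_to_keep(report_a, report_b, n_cells, m_s):
    """Of two colliding CSI reports (each a (y, k, c, s) tuple), keep the
    one with the lower priority value; the other shall not be sent."""
    pa = pri_csi(*report_a, n_cells, m_s)
    pb = pri_csi(*report_b, n_cells, m_s)
    return report_a if pa < pb else report_b
```

For example, with Ncells=2 and Ms=4, an aperiodic PUSCH report carrying L1-RSRP (y=0, k=0, c=0, s=0) has value 0 and outranks a periodic PUCCH report (y=3, k=1, c=1, s=2) with value 62.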
AI-based wireless technology has been a topic of recurrent discussion in multiple 3GPP RAN1/RAN2 meetings. To ensure the seamless integration of AI models within wireless systems, dedicated CSI reporting becomes imperative. This includes comprehensive AI/ML model lifecycle management (encompassing monitoring, training, inference, and other related processes). Moreover, AI-based CSI measurement, compression, beam management, positioning, mobility, and similar functionalities demand careful consideration and implementation.
Both non-AI/ML-based CSI measurement and AI/ML-based CSI measurement or data collection may exist for one specific radio link. Due to the limitation on the UCI payload, a new priority rule is needed to deal with AI/ML and/or non-AI/ML based CSI reporting, and modification is expected for the legacy priority rules that have been defined in the current NR system.
The solutions include the following methods, and the detailed implementations are described in the following.
1). Multiple priority rules exist in the system (including both AI/ML-based and non-AI/ML-based rules). The priority of a CSI report is determined by a given rule, which can be configured by the base station or derived from other configuration by the UE itself.
2). Only one priority rule exists in the system, but the priority value is different for AI/ML CSI reporting and non-AI/ML CSI reporting; e.g., the k value corresponds to AI/ML CSI reporting or non-AI/ML CSI reporting, where the value can be pre-defined, configured by the base station, or derived by the UE itself.
Embodiment 4.1: Multiple priority rules exist in the system, including AI/ML-specific and non-AI/ML-specific rules.
■ AI-specific priority rule for AI/ML CSI reporting and/or data collection
Here, AI/ML-specific priority rules are defined in the system, which differ from the non-AI/ML-specific priority rules that have been adopted in the current NR system. In one of the AI/ML-specific priority rules, the AI/ML-specific priority is determined based on one or more instances of the following information: reporting configuration ID, serving cell ID, maximum number of CSI reporting configurations, maximum number of serving cells, AI/ML lifecycle management procedure, AI/ML features, AI/ML functionalities, AI/ML model ID, AI/ML CSI reporting contents, and AI/ML CSI reporting behavior, as shown in the following.
AI/ML-specific priority determined based on the AI/ML lifecycle management procedure means that CSI reporting related to different procedures has different priorities. For example, monitoring has the highest priority, inference has a middle priority, and training has the lowest priority, etc.
AI/ML-specific priority determined based on AI/ML functionalities means that CSI reporting related to different functionalities has different priorities. For example, beam management has the highest priority, and CSI measurement has the lowest priority, etc.
AI/ML-specific priority determined based on AI/ML features means that CSI reporting related to different features has different priorities. For example, beam management has the highest priority, and CSI measurement has the lowest priority, etc.
AI/ML-specific priority determined based on AI/ML CSI reporting contents means that different instances of CSI reporting information have different priorities. For example, AI-based CSI part A has the highest priority, and other portions of the CSI information have lower priorities, etc.
Based on the above descriptions, taking as an example the priority rule which includes the AI/ML-specific priority determined based on the AI/ML lifecycle management procedure and the AI/ML CSI reporting contents, the AI-specific CSI reporting formula may be:
PriiCSI (z, y, k, c, s) = Δ + γ1·z + γ2·y + γ3·k + γ4·c + s
where
- z is the priority value for CSI reporting for an AI/ML lifecycle procedure. For example, z=0 for CSI reports for AI/ML model monitoring, and z=1 for CSI reports not for AI/ML model monitoring;
- y=0 for aperiodic AI/ML CSI reports to be carried on PUSCH; y=1 for semi-persistent AI/ML CSI reports to be carried on PUSCH; y=2 for semi-persistent AI/ML CSI reports to be carried on PUCCH; and y=3 for periodic AI/ML CSI reports to be carried on PUCCH;
- k=0 for AI/ML CSI reports carrying CSI part A; and k=1 for AI/ML CSI reports not carrying CSI part A, where the CSI part A is a part of the total CSI reporting information which is configured by RRC and/or activated by MAC-CE or DCI;
- c is the serving cell index;
- s is the AI/ML CSI reporting configuration ID.
- {γi, i=1, 2, 3, 4} are integer values. Each γi may be equal to zero or another integer value and is related to the maximum number of CSI reporting configurations, the maximum number of serving cells, and/or the AI/ML lifecycle management procedure number, and/or the AI/ML functionalities number, and/or the AI/ML features number.
AI/ML lifecycle management procedure number: the number of AI/ML lifecycle management procedures;
AI/ML functionalities number: the number of AI/ML functionalities;
AI/ML features number: the number of AI/ML features.
- Δ is another value, which is determined by the system, by configuration, or by other methods.
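The AI-specific formula above can be sketched as a direct computation of PriiCSI (z, y, k, c, s) = Δ + γ1·z + γ2·y + γ3·k + γ4·c + s. The γ weights below are illustrative; the text only requires them to be integers related to, e.g., the number of reporting configurations, serving cells, or lifecycle procedures.

```python
# Sketch of the AI-specific CSI priority formula
#   Pri_iCSI(z,y,k,c,s) = delta + g1*z + g2*y + g3*k + g4*c + s
# with illustrative gamma weights (a lower value means higher priority).
def ai_pri_csi(z, y, k, c, s, gammas, delta=0):
    g1, g2, g3, g4 = gammas
    return delta + g1 * z + g2 * y + g3 * k + g4 * c + s
```

For example, with weights (100, 10, 5, 2), a model-monitoring report (z=0) obtains a lower priority value than any non-monitoring report (z=1) and is therefore prioritized, since γ1 dominates the remaining terms.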
Furthermore, when the AI/ML functionalities are also included in the priority rule, the AI-specific CSI reporting formula may be changed, for example, to another formula which includes the AI/ML functionalities; such a formula should also be covered by the disclosure.
PriiCSI (w, z, y, k, c, s) = Δ + γ1·w + γ2·z + γ3·y + γ4·k + γ5·c + s
where
- w is the priority value for CSI reporting for AI/ML features and/or functionalities. For example, w=0 for CSI reports for beam management, and w=1 for CSI reports for CSI compression, etc.
- z is the priority value for CSI reporting for AI/ML lifecycle procedure, for example, z=0 for CSI reports for AI/ML model monitoring, and z=1 for CSI reports not for AI/ML model monitoring;
- y=0 for aperiodic AI/ML CSI reports to be carried on PUSCH, y=1 for semi-persistent AI/ML CSI reports to be carried on PUSCH, y=2 for semi-persistent AI/ML CSI reports to be carried on PUCCH, and y=3 for periodic AI/ML CSI reports to be carried on PUCCH;
- k=0 for AI/ML CSI reports carrying CSI part A, and k=1 for AI/ML CSI reports not carrying CSI part A, where the CSI part A is a part of the total CSI reporting information which is configured by RRC and/or activated by MAC-CE or DCI;
- c is the serving cell index;
- s is the AI/ML CSI reporting configuration ID.
- {γi, i=1, 2, 3, 4, 5} are integer values. γi may be equal to zero or another integer value and is related to the maximum number of CSI reporting configurations, and/or the maximum number of serving cells, and/or the AI/ML procedure number, and/or the AI/ML functionalities number, and/or the AI/ML features number.
- Δ is another value which is determined by the system, configuration, or other methods.
■ Relationship between the priority of non-AI/ML CSI rule and the priority of AI/ML CSI rule
Considering that AI/ML CSI reporting and non-AI/ML CSI reporting may both exist in a system, when a resource collision happens, the priority for AI/ML CSI reporting and the priority for non-AI/ML CSI reporting should be determined. The determination method may be as shown in the following:
Option A: The priority of AI/ML CSI reporting is always higher than the priority of non-AI/ML CSI reporting. That is, the non-AI/ML CSI can only be transmitted after the AI/ML CSI. The priority of a CSI is determined based on the specific priority value of the type of the CSI, which can be AI/ML CSI reporting or non-AI/ML CSI reporting.
Option B: The priority of AI/ML CSI reporting is always lower than the priority of non-AI/ML CSI reporting. That is, the AI/ML CSI can only be transmitted after the non-AI/ML CSI. The priority of a CSI is determined based on the specific priority value of the type of the CSI, which can be AI/ML CSI reporting or non-AI/ML CSI reporting.
Option C: If the AI/ML CSI and the non-AI/ML CSI share the same priority value, the AI/ML CSI will be transmitted first; otherwise, the transmission priority is determined by the priority value.
Option D: If the AI/ML CSI and the non-AI/ML CSI share the same priority value, the non-AI/ML CSI will be transmitted first; otherwise, the transmission priority is determined by the priority value.
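The four options can be expressed as a single comparator. The sketch below is illustrative only; the function name, option encoding, and return labels are assumptions, and priority values follow the legacy NR convention that a lower value means a higher priority.

```python
def first_to_transmit(ai_pri, non_ai_pri, option="A"):
    """Return 'ai' or 'non_ai': which colliding CSI report is sent first.

    ai_pri / non_ai_pri are the priority values of the colliding
    AI/ML and non-AI/ML CSI reports (lower value = higher priority).
    """
    if option == "A":                       # AI/ML CSI always wins
        return "ai"
    if option == "B":                       # non-AI/ML CSI always wins
        return "non_ai"
    if option == "C":                       # tie broken in favour of AI/ML
        if ai_pri == non_ai_pri:
            return "ai"
    elif option == "D":                     # tie broken in favour of non-AI/ML
        if ai_pri == non_ai_pri:
            return "non_ai"
    # Options C and D fall back to the priority values when there is no tie.
    return "ai" if ai_pri < non_ai_pri else "non_ai"

assert first_to_transmit(5, 5, "C") == "ai"
assert first_to_transmit(5, 5, "D") == "non_ai"
assert first_to_transmit(7, 5, "C") == "non_ai"
```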
■ UE obtains the priority rule for CSI reporting:
Two implementation options may be considered here:
Option A: The priority rules may be predefined by the system. For example, the priority rules for CSI reporting may be defined in the specification as “Relationship between the priority of non-AI/ML CSI rule and the priority of AI/ML CSI rule” .
Option B: The CSI reporting priority rules are configured by the base station. For example, the priority rules are carried by an RRC configuration in an RRC message, a MAC-CE, and/or DCI. One or more instances of the following information may be included in the configuration:
■ a CSI reporting priority rule, which describes which priority rule may be used by the UE for CSI reporting.
■ CSI reporting priority rule parameters, including coefficients of one or more of the following items. The items include one or more of a reporting configuration ID, a serving cell ID, a maximum number of CSI reporting configurations, a maximum number of serving cells, an AI/ML lifecycle management procedure, AI/ML features, AI/ML functionalities, an AI/ML model ID, AI/ML CSI reporting contents, AI/ML CSI reporting behavior, etc.
Option C: The UE derives the CSI priority rule from the RRC configuration. For example, when the RRC configuration is dedicated for non-AI/ML CSI reporting, the non-AI/ML priority rule may be used. If the RRC configuration is dedicated for AI/ML CSI reporting, the AI/ML priority rule may be used. The AI/ML dedicated RRC configuration may be represented by an AI/ML model ID, and/or AI/ML features, and/or AI/ML functionalities, and/or AI/ML-specific CSI information, and/or an RRC dedicated ID (including a CSI measurement configuration ID, a CSI reporting ID, a CSI-RS resource setting/set/resource ID, etc. ) 
Embodiment 4.2: One priority rule in the system for both AI/ML and non-AI/ML CSI reporting.
Another method uses only one priority rule for both AI/ML and non-AI/ML CSI reporting. The CSI reporting priority rule is determined based on one or more instances of the following information: reporting configuration ID, serving cell ID, maximum number of CSI reporting configurations, maximum number of serving cells, AI/ML lifecycle management procedure, AI/ML features, AI/ML functionalities, AI/ML model ID, AI/ML CSI reporting contents, and AI/ML CSI reporting behavior, as shown in the following:
The CSI reporting priority rule determined based on the AI/ML lifecycle management procedure means that the CSI reporting related to different procedures has different priorities. For example, monitoring has the highest priority, inference has the middle priority, and training has the lowest priority, etc.
The CSI reporting priority rule determined based on AI/ML functionalities means that the CSI reporting related to different functionalities has different priorities. For example, beam management has the highest priority, and CSI measurement has the lowest priority, etc.
The CSI reporting priority rule determined based on AI/ML features means that the CSI reporting related to different features has different priorities. For example, beam management has the highest priority, and CSI measurement has the lowest priority, etc.
The CSI reporting priority rule determined based on AI/ML CSI reporting contents means that different portions of the CSI have different priorities. For example, the AI-based CSI part A has the highest priority, and other portions of the CSI have lower priority.
Based on above descriptions, the CSI reporting formula may be:
PriiCSI (w, z, y, k, c, s) =Δ+γ1·w+γ2·z+γ3·y+γ4·k+γ5·c+s
- w is the priority value for CSI reporting for AI/ML features and/or functionalities. For example, w=0 for CSI reports for beam management, w=1 for CSI reports for CSI compression, etc.
- z is the priority value for CSI reporting for AI/ML lifecycle procedure. For example, z=0 for CSI reports used for AI/ML model monitoring, and z=1 for CSI reports not used for AI/ML model monitoring;
- y=0 for aperiodic CSI reports to be carried on PUSCH, y=1 for semi-persistent CSI reports to be carried on PUSCH, y=2 for semi-persistent CSI reports to be carried on PUCCH, and y=3 for periodic CSI reports to be carried on PUCCH;
- k is the priority value for CSI reporting determined by the CSI reporting contents, which may comprise AI/ML and/or non-AI/ML CSI information. For example, k=0 for CSI reports carrying CSI part A; k=1 for CSI reports carrying CSI part B; k=2 for CSI reports carrying CSI part C; and k=3 for CSI reports carrying CSI part D. For example, the CSI parts can be as shown in the following:
Option A: The CSI part A can be non-AI/ML based CSI reporting information, such as L1-RSRP or L1-SINR; the CSI part B is related to CSI information which is used for AI/ML monitoring, such as accuracy, input and/or output distribution, performance of the wireless system, or other CSI information, etc.;
Option B: The CSI part A can be non-AI/ML based CSI reporting information, such as L1-RSRP or L1-SINR; the CSI part B is non-AI/ML based CSI information not carrying L1-RSRP or L1-SINR; the CSI part C is related to CSI information which is used for AI/ML monitoring, while the CSI part D is related to other CSI information for AI/ML;
Option C: The CSI part A is related to CSI information which is used for AI/ML monitoring; the CSI part B can be non-AI/ML based CSI reporting information, such as L1-RSRP or L1-SINR.
- c is the serving cell index;
- s is the CSI reporting configuration ID.
- {γi, i=1, 2, 3, 4, 5} are integer values. γi may be equal to zero or any other integer value and is related to the maximum number of CSI reporting configurations, and/or the maximum number of serving cells, and/or the AI/ML procedure number, and/or the AI/ML functionalities number, and/or the AI/ML features number.
- Δ is another value which is determined by the system, configuration, or other methods.
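For illustration only, the content-dependent term k of this unified rule can be derived from the report contents. The sketch below implements the Option B partition (four content parts); the function name and argument names are assumptions.

```python
def content_priority_k(is_ai_report, carries_l1_measurement, for_monitoring=False):
    """Map CSI report contents to the k term of the unified priority rule.

    Option B partition: part A = non-AI/ML L1-RSRP/L1-SINR (k=0),
    part B = other non-AI/ML CSI (k=1), part C = AI/ML monitoring
    CSI (k=2), part D = other AI/ML CSI (k=3).
    """
    if not is_ai_report:
        return 0 if carries_l1_measurement else 1   # parts A / B
    return 2 if for_monitoring else 3               # parts C / D

assert content_priority_k(False, True) == 0
assert content_priority_k(True, False, for_monitoring=True) == 2
```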
Embodiment 5: CSI computation time for AI/ML model
When a CSI request field in a DCI triggers CSI report (s) on PUSCH, the UE shall provide a valid CSI report for the n-th triggered report,
- if the first uplink symbol to carry the corresponding CSI report (s) , including the effect of the timing advance, starts no earlier than at symbol Zref, and
- if the first uplink symbol to carry the n-th CSI report, including the effect of the timing advance, starts no earlier than at symbol Z'ref (n) ,
where Zref is defined as the next uplink symbol with its cyclic prefix (CP) starting Tproc,CSI = (Z) · (2048+144) · κ · 2^(−μ) · TC + Tswitch after the end of the last symbol of the PDCCH triggering the CSI report (s) , and where Z'ref (n) is defined as the next uplink symbol with its CP starting T′proc,CSI = (Z′) · (2048+144) · κ · 2^(−μ) · TC after the end of the last symbol in time of the latest of: the aperiodic CSI-RS resource for channel measurement, the aperiodic CSI-IM used for interference measurement, and the aperiodic NZP CSI-RS for interference measurement, when aperiodic CSI-RS is used for channel measurement for the n-th triggered CSI report, and where Tswitch is defined in TS 38.214 clause 6.4 and is applied only if z1 of TS 38.214 table 5.4-1 is applied.
The CSI computation time is strongly related to the CSI reporting information and the subcarrier spacing. Generally, AI/ML based CSI measurement or processing may require a longer computation time than non-AI/ML based processing, so how to support the CSI computation time for both non-AI/ML models and AI/ML models should be addressed.
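For reference, the legacy Tproc,CSI term above can be evaluated numerically. The sketch below uses the TS 38.211 constants κ = 64 and TC = 1/(480000·4096) seconds; the function name and the example symbol counts are assumptions for illustration.

```python
KAPPA = 64
T_C = 1 / (480_000 * 4_096)       # basic time unit in seconds (TS 38.211)

def t_proc_csi(z_symbols, mu, t_switch=0.0):
    """Legacy CSI processing time in seconds for numerology mu.

    (2048 + 144) * KAPPA * 2**-mu * T_C is exactly one OFDM symbol
    duration (with normal CP) at subcarrier spacing 15 kHz * 2**mu.
    """
    symbol = (2048 + 144) * KAPPA * 2 ** -mu * T_C
    return z_symbols * symbol + t_switch

# One symbol at mu=0 (15 kHz SCS) lasts ~71.35 microseconds.
assert abs(t_proc_csi(1, 0) - 71.35e-6) < 0.1e-6
```

Since the symbol duration halves with each numerology step, a Z budget expressed in symbols corresponds to half the absolute time at the next higher subcarrier spacing.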
The solutions include:
Determining delay requirements of AI/ML model specific CSI computation; and
A procedure for determining AI/ML model specific CSI computation delay requirements.
If the AI model is used for CSI computation, the delay requirements for CSI computation need to be determined so that the gNB knows exactly when to receive the CSI measurement feedback on uplink resources. The detailed implementations are described in the following.
AI/ML model specific CSI computation delay requirement:
In addition to the legacy CSI computation delay requirement for a non-AI/ML model’s CSI, another CSI computation delay requirement should be defined for the AI/ML model, and the CSI computation delay requirement may be determined by one or more of the following:
■ Subcarrier spacing: the subcarrier spacing may be related to the PDCCH, PUSCH, and AI/ML dedicated CSI-RS subcarrier spacings. For example, the subcarrier spacing may be equal to the maximum of the subcarrier spacings of the PDCCH, PUSCH, and AI/ML dedicated CSI-RS resource.
■ AI/ML functionalities: Different AI/ML functionalities may have different CSI computation delay requirements. For example, the beam management needs CSI computation delay of K (e.g., in units of symbols or milliseconds) , and CSI compression needs CSI computation delay of M (e.g., in units of symbols or milliseconds) , where the K and M may be different. The AI/ML functionalities may be represented by AI/ML model ID or RRC configuration or other methods.
■ AI/ML features: Different AI/ML features may have different CSI computation delay requirements. For example, beam management needs a CSI computation delay of K (e.g., in units of symbols or milliseconds) , and CSI compression needs a CSI computation delay of M (e.g., in units of symbols or milliseconds) , where K and M may be different. The AI/ML features may be represented by an AI/ML model ID or an RRC configuration or other methods.
■ AI/ML lifecycle management procedure: Different LCM procedures have different CSI computation delay requirements. For example, AI/ML monitoring needs a CSI computation delay of S (e.g., in units of symbols or milliseconds) , while AI/ML inference needs a CSI computation delay of W (e.g., in units of symbols or milliseconds) , etc.
■ UE capability information: UE capability information includes one or more of AI/ML model reporting time, AI/ML model switching time.
The AI/ML model reporting time signifies the count of OFDM symbols between the termination of the last symbol of SSB/CSI-RS and the commencement of the first symbol of the transmission channel incorporating the AI/ML CSI report. The UE offers the capability to specify the band number for which the report is provided, denoting the location where the measurement is performed. The UE includes this field to indicate supported sub-carrier spacing, and/or supported AI/ML models, functionalities, or features.
The AI/ML model switching time indicates the minimum number of OFDM symbols between the triggering of CSI-RS and CSI-RS transmission. The count of OFDM symbols is measured from the termination of the last symbol containing the indication to the commencement of the first symbol of CSI-RS. The UE includes this field for supported sub-carrier spacing, and/or for supported AI/ML model or functionalities or features.
Based on the above descriptions, for example, the CSI computation delay requirement may be as shown in the following. Note that this is just an example and is not used to limit the invention; there may be multiple implementation methods. The Zi may be predefined or calculated according to the above-mentioned parameters.
Table 3
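Since the content of Table 3 is not reproduced here, the following sketch only illustrates the structure such a delay-requirement table could take: a lookup of the symbol budget Z by numerology and AI/ML functionality, with a lifecycle-dependent component. All numeric values, keys, and names are hypothetical placeholders.

```python
# Hypothetical (mu, functionality) -> Z symbols; values are placeholders only.
AI_CSI_DELAY_TABLE = {
    (0, "beam_management"): 22,
    (0, "csi_compression"): 40,
    (1, "beam_management"): 33,
    (1, "csi_compression"): 72,
}

def z_requirement(mu, functionality, lcm_extra=0):
    """Look up the AI/ML CSI computation delay Z (in symbols).

    lcm_extra models the lifecycle-management-dependent component,
    e.g. an additional symbol budget for model monitoring.
    """
    return AI_CSI_DELAY_TABLE[(mu, functionality)] + lcm_extra

assert z_requirement(1, "csi_compression") == 72
```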
Embodiment 6: Rules for discarding the active model/functionality.
UE memory usage is a key factor of the UE internal conditions, which may affect the applicable or active models/functionalities. On the other hand, activating or discarding models/functionalities can adjust memory usage. Generally speaking, based on the definitions of applicable/active models/functionalities, when the current UE memory usage causes the memory capacity to decrease to a certain low level, the UE discards one or more selected applicable/active models/functionalities; and, when the current UE memory usage causes the memory capacity to increase to a certain high level, the UE activates one or more applicable/active models/functionalities. Discarding one or more models means that the model entities (including functionality-related model entities) of the models are removed from the UE. Activating one or more models means that the model entities (including functionality-related model entities) of the models are downloaded to the UE.
Case 1: UE memory usage causes the memory capacity to decrease to a certain level.
When the current UE memory usage in a UE causes the memory capacity to decrease to a certain level, the UE may deactivate and discard some applicable/active models/functionalities. More specifically, when the current UE memory usage in a UE causes the memory capacity to decrease to a certain level, the UE performs the following operations:
Option 1:
The UE releases some inactive models which are deployed at the UE but not active. More specifically, in discarding the one or more selected applicable/active models/functionalities, the UE initially discards inactive models/functionalities that are not used for the ongoing AI-based feature/feature group and subsequently discards inactive models/functionalities associated with the ongoing AI-based feature/feature group. Optionally, the UE can discard some inactive models randomly.
Since not all of the inactive models need to be discarded, a new time threshold may be introduced, so that in discarding the one or more selected applicable/active models/functionalities, if the inactive time of one or more inactive models exceeds the time threshold, the UE discards those inactive models. The threshold can be defined by the UE itself or configured in a configuration by the NW.
Since the UE cannot discard AI models all the time, a new threshold for prohibiting discarding may be introduced, so that if the model discarding exceeds the threshold, which is defined based on discarding time, UE memory usage, a number of discarded AI models, or a number of deactivated AI models, the model discarding is turned off. The threshold can be defined by the UE itself or configured in a configuration by the NW.
The UE will use the mechanism of AI-related UE capability reporting, which is defined in the foregoing embodiments, to report the updated applicable models/functionalities to the NW. The updated applicable models/functionalities are the models/functionalities in the UE after the model discarding.
Option 2:
When the current UE memory usage of the UE causes the memory capacity to decrease to a certain low level, the UE or a life cycle management (LCM) network device performs model selection or model switching to select one or more AI/ML models with a smaller size among the applicable models in the UE and discards one or more active AI models with a large size in the UE.
Since a large-size AI model can occupy more UE memory, while the applicable models are models that meet the current requirements in the system, the UE can select some smaller-size applicable models to maintain the ongoing AI-based (sub) use case/FG in this process. The model selection or model switching may also include model activation, which may cause the large-size applicable models not to be activated.
The model selection or model switching may be performed at the UE side or the NW side for a UE-side model or a two-sided model.
■ If the model selection or model switching is performed at the UE side, a corresponding procedure of the model selection or model switching depends on the UE implementation. However, as aforementioned, since the UE cannot discard AI models all the time, the new threshold for prohibiting discarding may be introduced.
■ If the model selection or model switching is performed at NW side, the changing UE memory usage needs to be reported to the NW. The NW performs the model selection or model switching and assists UE to select one/or more suitable AI models. The UE memory usage reporting used in the AI-related UE capability reporting has been detailed in the foregoing embodiments.
The UE will use the mechanism of AI-related UE capability reporting, which is defined in the foregoing embodiments, to report the updated applicable models/functionalities to the NW. The updated applicable models/functionalities are the models/functionalities in the UE after the model selection or model switching.
Option 3:
In discarding the one or more selected applicable/active models/functionalities, the UE may perform the discarding based on model monitoring at the UE. The UE performs model monitoring and discards some active models according to the result of the model monitoring (e.g., system performance or system resource consumption) . The UE then selects some suitable AI/ML models for model selection or model switching, similar to the operations in Option 2.
As aforementioned, when the change of UE memory usage causes the memory capacity to decrease to a certain level, the UE will discard some AI models which are located at the UE side. Various internal conditions of the UE which can initiate model discarding can be monitored as a portion of the model monitoring. A threshold for model monitoring can be the same as or different from Threshold1.
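Option 1 above can be sketched as the following routine: inactive models whose inactive time exceeds the time threshold are discarded, with models unrelated to the ongoing AI-based feature/feature group discarded first, and the prohibit-discard threshold capping the number of discards. The record layout, names, and units are assumptions for illustration.

```python
import time

def discard_inactive_models(models, ongoing_feature, inactive_threshold_s,
                            max_discards, now=None):
    """Discard inactive models per Option 1.

    models: list of dicts with keys 'name', 'active', 'feature',
            'last_used' (epoch seconds).
    Inactive models outside the ongoing feature are discarded first,
    then inactive models of the ongoing feature; max_discards plays
    the role of the prohibit-discard threshold.
    """
    now = time.time() if now is None else now
    candidates = [m for m in models
                  if not m["active"] and now - m["last_used"] > inactive_threshold_s]
    # Models unrelated to the ongoing AI-based feature/feature group go first.
    candidates.sort(key=lambda m: m["feature"] == ongoing_feature)
    discarded = candidates[:max_discards]
    kept = [m for m in models if m not in discarded]
    return kept, [m["name"] for m in discarded]

models = [
    {"name": "bm_small", "active": True,  "feature": "beam", "last_used": 90},
    {"name": "bm_old",   "active": False, "feature": "beam", "last_used": 10},
    {"name": "csi_old",  "active": False, "feature": "csi",  "last_used": 10},
]
kept, gone = discard_inactive_models(models, "beam", 30, 1, now=100)
assert gone == ["csi_old"]          # unrelated inactive model removed first
```

The resulting `kept` list would then be reported to the NW as the updated applicable models via the AI-related UE capability reporting mechanism.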
FIG. 13 is a block diagram of an example system 700 for wireless communication according to an embodiment of the present disclosure. Embodiments described herein may be implemented into the system using any suitably configured hardware and/or software. FIG. 13 illustrates the system 700 including a radio frequency (RF) circuitry 710, a baseband circuitry 720, a processing unit 730, a memory/storage 740, a display 750, a camera 760, a sensor 770, and an input/output (I/O) interface 780, coupled with each other as illustrated.
The processing unit 730 may include circuitry, such as, but not limited to, one or more single-core or multi-core processors. The processors may include any combinations of general-purpose processors and dedicated processors, such as graphics processors and application processors. The processors may be coupled with the memory/storage and configured to execute instructions stored in the memory/storage to enable various applications and/or operating systems running on the system.
The radio control functions may include, but are not limited to, signal modulation, encoding, decoding, radio frequency shifting, etc. In some embodiments, the baseband circuitry may provide for communication compatible with one or more radio technologies. For example, in some embodiments, the baseband circuitry may support communication with 5G NR, LTE, an evolved universal terrestrial radio access network (EUTRAN) and/or other wireless metropolitan area networks (WMAN) , a wireless local area network (WLAN) , a wireless personal area network (WPAN) . Embodiments in which the baseband circuitry is configured to support radio communications of more than one wireless protocol may be referred to as multi-mode baseband circuitry. In various embodiments, the baseband circuitry 720 may include circuitry to operate with signals that are not strictly considered as being in a baseband frequency. For example, in some embodiments, baseband circuitry may include circuitry to operate with signals having an intermediate frequency, which is between a baseband frequency and a radio frequency.
In various embodiments, the system 700 may be a mobile computing device such as, but not limited to, a laptop computing device, a tablet computing device, a netbook, an ultrabook, a smartphone, etc. In various embodiments, the system may have more or fewer components, and/or different architectures. Where appropriate, the methods described herein may be implemented as a computer program. The computer program may be stored on a storage medium, such as a non-transitory storage medium.
The embodiment of the present disclosure is a combination of techniques/processes that can be adopted in 3GPP specification to create an end product.
If the software function unit is realized, used, and sold as a product, it can be stored in a readable storage medium in a computer. Based on this understanding, the technical plan proposed by the present disclosure can be essentially or partially realized in the form of a software product. Or, one part of the technical plan beneficial to the conventional technology can be realized in the form of a software product. The software product is stored in a storage medium, including a plurality of commands for a computational device (such as a personal computer, a server, or a network device) to run all or some of the steps disclosed by the embodiments of the present disclosure. The storage medium includes a USB disk, a mobile hard disk, a read-only memory (ROM) , a random-access memory (RAM) , a floppy disk, or other kinds of media capable of storing program codes.
This disclosure provides a mechanism of AI-related UE capability reporting, including the possible procedure, signaling and elements.
This disclosure also provides rules of discarding the active model/functionality, including deactivate unsuitable model/functionality, and/or activate suitable model/functionality for a certain feature/FG. The disclosure enhances AI/ML models for wireless communication systems.
While the present disclosure has been described in connection with what is considered the most practical and preferred embodiments, it is understood that the present disclosure is not limited to the disclosed embodiments but is intended to cover various arrangements made without departing from the scope of the broadest interpretation of the appended claims.

Claims (61)

  1. A method for processing artificial intelligence (AI) -related user equipment (UE) capability reporting executable in a user equipment (UE) , comprising:
    performing AI-related UE capability reporting to report AI-related UE capability information of the UE.
  2. The method for processing AI-related UE capability reporting of claim 1, wherein the AI-related UE capability information comprises change of AI-related UE capability information of the UE.
  3. The method for processing AI-related UE capability reporting of claim 1, wherein the AI-related UE capability reporting is performed in response to a message from a network device, and the message comprises a request or configuration message for AI-related UE capability reporting.
  4. The method for processing AI-related UE capability reporting of claim 1, wherein the AI-related UE capability information is transmitted in a UAI message, RRCReconfigurationComplete message, or RRCResumeComplete message, Medium Access Control (MAC) control element (CE) , or dedicated message.
  5. The method for processing AI-related UE capability reporting of claim 4, wherein a defined purpose of the UAI message is used to indicate that the UAI message carries the AI-related UE capability information.
  6. The method for processing AI-related UE capability reporting of claim 4, wherein configuration of the UAI message is conveyed in a RRCReconfiguration message.
  7. The method for processing AI-related UE capability reporting of claim 4, wherein configuration of the UAI message comprises one or more of:
    applicable model configuration for the UE to report AI-related UE capability information of changes in applicable model detected by the UE;
    applicable functionality configuration for the UE to report AI-related UE capability information of changes in applicable functionality detected by the UE;
    memory usage configuration for the UE to report AI-related UE capability information of changes in memory usage detected by the UE;
    UE battery configuration for the UE to report AI-related UE capability information of changes in a UE battery status detected by the UE;
    scenario configuration for the UE to report AI-related UE capability information of changes in a scenario detected by the UE;
    site configuration for the UE to report AI-related UE capability information of changes in a site detected by the UE;
    dataset configuration for the UE to report AI-related UE capability information of changes in a dataset detected by the UE;
    cell configuration for the UE to report AI-related UE capability information of changes in a cell detected by the UE; and
    zone configuration for the UE to report AI-related UE capability information of changes in a zone detected by the UE.
  8. The method for processing AI-related UE capability reporting of claim 7, wherein the configuration of the UAI message comprises a prohibit timer for each type of AI-related UE capability information.
  9. The method for processing AI-related UE capability reporting of claim 7, wherein the applicable model configuration comprises one or both of a maximum number of applicable models and a minimum number  of applicable models; and
    the applicable functionality configuration comprises one or both of a maximum number of applicable functionalities and a minimum number of applicable functionalities.
  10. The method for processing AI-related UE capability reporting of claim 4, wherein the UE receives configuration of AI-related UE capability reporting; and
    the configuration of AI-related UE capability reporting comprises one or more instances of the following information:
    AI-related capability configured ID: information used to identify an AI-related capability configuration;
    requested AI-related UE capability: information that indicates one or more instances of AI-related UE capability information requested by a network;
    AI-related UE capability reporting configuration ID: information used to identify an AI-related UE capability reporting configuration;
    AI-related UE capability reporting type: information used to indicate one of AI-related UE capability reporting types;
    reference signaling type: information used to indicate a type of reference signaling;
    time to trigger: information that specifies time during which specific criteria for the trigger event needs to be met in order to trigger an AI-related UE capability reporting;
    activating time offset: information that specifies a time offset according to which an AI-related operation is activated a time offset after a trigger event;
    duration time of AI-related UE capability reporting: information that specifies a duration of time during which the UE is to report at least one requested AI-related UE capability;
    server identification: information used to identify a server which is capable of AI model management;
    cause of requesting AI-related UE capability: information that indicates a cause of requesting AI-related UE capability information; and
    threshold: information that specifies a defined threshold for detecting change of memory usage of the UE.
  11. The method for processing AI-related UE capability reporting of claim 10, wherein each UE capability in the requested AI-related UE capability is a capability element and identified at element-based granularity; or
    a set/group of UE capabilities in the requested AI-related UE capability is a capability set/group and identified at set/group-based granularity.
  12. The method for processing AI-related UE capability reporting of claim 1, wherein the AI-related UE capability reporting types comprise periodic AI-related information reporting and event-triggered AI-related information reporting.
  13. The method for processing AI-related UE capability reporting of claim 1, wherein the UE receives AI-related capability reporting activation/deactivation, the AI-related UE capability reporting is activated in response to the received activation or deactivated in response to the received deactivation; and
    the AI-related capability reporting activation/deactivation comprises one or more of:
    AI-related capability reporting threshold;
    requested AI-related UE capability;
    an activating time;
    a deactivating time;
    a duration time of AI-related UE capability reporting; and
    a server identification.
  14. The method for processing AI-related UE capability reporting of claim 1, wherein the UE transmits a scheduling request (SR) for radio resource for the AI-related UE capability reporting.
  15. The method for processing AI-related UE capability reporting of claim 14, wherein the scheduling request is configured with a new scheduling request instance that indicates an SR configuration corresponding to AI-specific data/information.
  16. The method for processing AI-related UE capability reporting of claim 14, wherein the scheduling request is a legacy scheduling request with a parameter used to indicate that data/information of new transmission associated with the scheduling request is AI-specific data/information for AI-related UE capability reporting.
  17. The method for processing AI-related UE capability reporting of claim 14, wherein the scheduling request is an AI-specific SR which is used for requesting uplink radio resources for AI-specific data/information.
  18. The method for processing AI-related UE capability reporting of claim 14, wherein a logical channel (LCH) or logical channel group (LCG) is corresponded with an SR configuration or AI-specific SR configuration for the transmitting data information, the LCH or LCG indicates the transmitting data information is AI-specific data/information implicitly.
  19. The method for processing AI-related UE capability reporting of claim 1, wherein a logical channel (LCH) or logical channel group (LCG) is included in an AI-specific buffer status reporting (BSR) Medium Access Control (MAC) control element (CE) for the transmitting data information, the LCH or LCG indicates the transmitting data information is AI-specific data/information implicitly.
  20. The method for processing AI-related UE capability reporting of claim 1, wherein an AI-specific indicator and a logical channel (LCH) or logical channel group (LCG) is included in an AI-specific buffer status reporting (BSR) Medium Access Control (MAC) control element (CE) for the transmitting data information, the LCH or LCG indicates the transmitting data information is AI-specific data/information implicitly.
  21. The method for processing AI-related UE capability reporting of claim 12, wherein for a reporting type of periodic reporting, configuration of AI-related UE capability reporting comprises one or more of:
    information that indicates an interval between two adjacent periodical AI-related UE capability reports;
    information that indicates a number of AI-related UE capability reports;
    information that indicates a maximum number of AI/ML models included in the AI-related UE capability reporting;
    information that indicates the maximum number of AI/ML functionalities included in the AI-related UE capability reporting;
    information that indicates whether AI/ML models enumerated in an allow-list in an associated requested AI-related UE capability information element are applicable to the AI-related UE capability reporting; and
    information that indicates whether AI/ML functionalities enumerated in an allow-list in the associated requested AI-related UE capability information element are applicable to the AI-related UE capability reporting.
  22. The method for processing AI-related UE capability reporting of claim 12, wherein for a reporting type of event-triggered reporting, a trigger event triggers the AI-related UE capability reporting; the trigger event occurs when the UE detects a change to the applicable model(s) and/or applicable functionality/functionalities of the UE, or after the applicable model(s) and/or applicable functionality/functionalities have changed and the change persists for a specified period, or when the applicable model(s) and/or applicable functionality/functionalities have changed, the change persists for the specified duration, and a number of changed applicable model(s) and/or applicable functionality/functionalities exceeds a maximum/minimum number of applicable model(s) or applicable functionality/functionalities.
  23. The method for processing AI-related UE capability reporting of claim 12, wherein for a reporting type of event-triggered AI-related information reporting, a trigger event triggers the AI-related UE capability reporting, and the trigger event occurs when a change of UE memory usage becomes greater than a memory usage threshold.
  24. The method for processing AI-related UE capability reporting of claim 12, wherein for a reporting type of event-triggered AI-related information reporting, a trigger event triggers the AI-related UE capability reporting;
    an entering condition of the event is satisfied when a UE-detected change of UE memory usage minus a hysteresis parameter is less than a memory usage threshold; and
    a leaving condition of the event is satisfied when the UE-detected change of UE memory usage plus the hysteresis parameter is greater than the memory usage threshold.
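The entering and leaving conditions of claim 24 can be written directly as two inequalities on the detected change in memory usage, a hysteresis parameter, and the configured threshold. The sketch below simply transcribes the claim language; the function and variable names are illustrative.

```python
# Entering/leaving tests for the event-triggered memory-usage report
# (claim 24). Names are illustrative; the inequalities follow the claim.

def entering_condition(change: float, hysteresis: float, threshold: float) -> bool:
    # Entering: detected change minus hysteresis is less than the threshold.
    return change - hysteresis < threshold

def leaving_condition(change: float, hysteresis: float, threshold: float) -> bool:
    # Leaving: detected change plus hysteresis is greater than the threshold.
    return change + hysteresis > threshold
```

The hysteresis term keeps the two conditions from toggling on small fluctuations: a change hovering near the threshold satisfies neither condition until it moves by at least the hysteresis amount.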
  25. The method for processing AI-related UE capability reporting of claim 1, wherein when current memory usage of the UE causes available memory capacity to decrease to a certain low level, the UE discards one or more selected applicable/active models/functionalities; and
    when current memory usage of the UE causes available memory capacity to increase to a certain high level, the UE activates one or more applicable/active models/functionalities.
  26. The method for processing AI-related UE capability reporting of claim 25, wherein in discarding of the one or more selected applicable/active models/functionalities, the UE initially discards inactive models/functionalities that are not used for an ongoing AI-based feature/feature group and subsequently discards inactive models/functionalities associated with the ongoing AI-based (sub) use case/feature group.
  27. The method for processing AI-related UE capability reporting of claim 25, wherein in discarding of the one or more selected applicable/active models/functionalities, if the inactive time of one or more inactive models exceeds a time threshold, the UE discards those inactive models.
  28. The method for processing AI-related UE capability reporting of claim 25, wherein in discarding of the one or more selected applicable/active models/functionalities, if the discarding exceeds a threshold that is defined based on discarding time, UE memory usage, a number of discarded AI models, or a number of deactivated AI models, the discarding is turned off.
  29. The method for processing AI-related UE capability reporting of claim 25, wherein when current memory usage of the UE causes available memory capacity to decrease to a certain low level, the UE or a life cycle management (LCM) network device performs model selection or model switching to select one or more AI/ML models with a smaller size among applicable models in the UE and discard one or more active AI models with a large size in the UE.
  30. The method for processing AI-related UE capability reporting of claim 25, wherein in discarding of the one or more selected applicable/active models/functionalities, the UE performs the discarding based on model monitoring at the UE.
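Claims 26 and 27 together define a discard ordering: inactive models not tied to any ongoing AI-based feature go first, inactive models associated with an ongoing feature go second, and within each class a model whose inactive time exceeds the time threshold is discarded ahead of the others. A minimal sketch of that policy, with an assumed `Model` record and assumed function names, could look like this:

```python
# Illustrative discard-selection policy for claims 26-27. The Model record
# and all names are assumptions; only the ordering mirrors the claims.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    active: bool
    used_by_ongoing_feature: bool
    inactive_time: float  # time since last use, in seconds

def select_models_to_discard(models: list[Model], needed: int,
                             inactive_time_threshold: float) -> list[str]:
    """Return up to `needed` model names to discard, in priority order."""
    inactive = [m for m in models if not m.active]
    # Priority 1: inactive and not used by any ongoing AI-based feature.
    first = [m for m in inactive if not m.used_by_ongoing_feature]
    # Priority 2: inactive but associated with an ongoing feature.
    second = [m for m in inactive if m.used_by_ongoing_feature]
    # Within each class, models idle beyond the threshold come first,
    # then longer-idle models before shorter-idle ones.
    key = lambda m: (m.inactive_time <= inactive_time_threshold,
                     -m.inactive_time)
    ordered = sorted(first, key=key) + sorted(second, key=key)
    return [m.name for m in ordered[:needed]]
```

Active models are never candidates here; per claim 29, shrinking the active set is instead handled by model selection or switching toward smaller models.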
  31. A user equipment (UE) comprising:
    a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the method of any of claims 1 to 30.
  32. A chip, comprising:
    a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the method of any of claims 1 to 30.
  33. A computer-readable storage medium, in which a computer program is stored, wherein the computer program causes a computer to execute the method of any of claims 1 to 30.
  34. A computer program product, comprising a computer program, wherein the computer program causes a computer to execute the method of any of claims 1 to 30.
  35. A computer program, wherein the computer program causes a computer to execute the method of any of claims 1 to 30.
  36. A method for processing artificial intelligence (AI)-related user equipment (UE) capability reporting executable in a base station, comprising:
    receiving AI-related UE capability information of a user equipment (UE) reported through AI-related UE capability reporting.
  37. The method for processing AI-related UE capability reporting of claim 36, wherein the AI-related UE capability information comprises change of AI-related UE capability information of the UE.
  38. The method for processing AI-related UE capability reporting of claim 36, wherein the AI-related UE capability reporting is performed in response to a message transmitted from the base station, and the message comprises a request or configuration message for AI-related UE capability reporting.
  39. The method for processing AI-related UE capability reporting of claim 36, wherein the AI-related UE capability information is included in a UE assistance information (UAI) message, an RRCReconfigurationComplete message, an RRCResumeComplete message, a Medium Access Control (MAC) control element (CE), or a dedicated message.
  40. The method for processing AI-related UE capability reporting of claim 39, wherein a defined purpose of the UAI message is used to indicate that the UAI message carries the AI-related UE capability information.
  41. The method for processing AI-related UE capability reporting of claim 39, wherein configuration of the UAI message is transmitted by the base station and conveyed in an RRCReconfiguration message.
  42. The method for processing AI-related UE capability reporting of claim 39, wherein configuration of the UAI message comprises one or more of:
    applicable model configuration for the UE to report AI-related UE capability information of changes in applicable model detected by the UE;
    applicable functionality configuration for the UE to report AI-related UE capability information of changes in applicable functionality detected by the UE;
    memory usage configuration for the UE to report AI-related UE capability information of changes in memory usage detected by the UE;
    UE battery configuration for the UE to report AI-related UE capability information of changes in a UE battery status detected by the UE;
    scenario configuration for the UE to report AI-related UE capability information of changes in a scenario detected by the UE;
    site configuration for the UE to report AI-related UE capability information of changes in a site detected by the UE;
    dataset configuration for the UE to report AI-related UE capability information of changes in a dataset detected by the UE;
    cell configuration for the UE to report AI-related UE capability information of changes in a cell detected by the UE; and
    zone configuration for the UE to report AI-related UE capability information of changes in a zone detected by the UE.
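The UAI configuration in claim 42 is essentially a bundle of per-category reporting switches, one for each kind of detected change. A compact representation, with names paraphrased from the claim rather than taken from any specification, might be:

```python
# Illustrative per-category switches for the UAI configuration of claim 42.
# Category names paraphrase the claim; they are not IE names from a spec.

from enum import Enum, auto

class UaiReportCategory(Enum):
    APPLICABLE_MODEL = auto()
    APPLICABLE_FUNCTIONALITY = auto()
    MEMORY_USAGE = auto()
    UE_BATTERY = auto()
    SCENARIO = auto()
    SITE = auto()
    DATASET = auto()
    CELL = auto()
    ZONE = auto()

def configured_categories(config: dict[str, bool]) -> set[UaiReportCategory]:
    """Map a name->enabled dict to the set of categories the UE reports."""
    return {UaiReportCategory[name] for name, on in config.items() if on}
```

Claim 43 then attaches a prohibit timer to each enabled category, which would naturally hang off each enum member in a fuller model.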
  43. The method for processing AI-related UE capability reporting of claim 42, wherein the configuration of the UAI message comprises a prohibit timer for each type of AI-related UE capability information.
  44. The method for processing AI-related UE capability reporting of claim 42, wherein the applicable model configuration comprises one or both of a maximum number of applicable models and a minimum number of applicable models; and
    the applicable functionality configuration comprises one or both of a maximum number of applicable functionalities and a minimum number of applicable functionalities.
  45. The method for processing AI-related UE capability reporting of claim 39, wherein the base station transmits configuration of AI-related UE capability reporting; and
    the configuration of AI-related UE capability reporting comprises one or more instances of the following information:
    AI-related capability configuration ID: information used to identify an AI-related capability configuration;
    requested AI-related UE capability: information that indicates one or more instances of AI-related UE capability information requested by a network;
    AI-related UE capability reporting configuration ID: information used to identify an AI-related UE capability reporting configuration;
    AI-related UE capability reporting type: information used to indicate one of AI-related UE capability reporting types;
    reference signaling type: information used to indicate a type of reference signaling;
    time to trigger: information that specifies the time during which specific criteria for a trigger event need to be met in order to trigger AI-related UE capability reporting;
    activating time offset: information that specifies a time offset after a trigger event at which an AI-related operation is activated;
    duration time of AI-related UE capability reporting: information that specifies a duration of time during which the UE is to report at least one requested AI-related UE capability;
    server identification: information used to identify a server which is capable of AI model management;
    cause of requesting AI-related UE capability: information that indicates a cause of requesting AI-related UE capability information; and
    threshold: information that specifies a defined threshold for detecting a change of memory usage of the UE.
  46. The method for processing AI-related UE capability reporting of claim 45, wherein each UE capability in the requested AI-related UE capability is a capability element and is identified at element-based granularity; or
    a set/group of UE capabilities in the requested AI-related UE capability is a capability set/group and is identified at set/group-based granularity.
  47. The method for processing AI-related UE capability reporting of claim 36, wherein the AI-related UE capability reporting types comprise periodic AI-related information reporting and event-triggered AI-related information reporting.
  48. The method for processing AI-related UE capability reporting of claim 36, wherein the base station transmits an AI-related capability reporting activation/deactivation, and the AI-related UE capability reporting is activated in response to the received activation or deactivated in response to the received deactivation; and
    the AI-related capability reporting activation/deactivation comprises one or more of:
    AI-related capability reporting threshold;
    requested AI-related UE capability;
    an activating time;
    a deactivating time;
    a duration time of AI-related UE capability reporting; and
    a server identification.
  49. The method for processing AI-related UE capability reporting of claim 36, wherein the base station receives a scheduling request (SR) for radio resource for the AI-related UE capability reporting.
  50. The method for processing AI-related UE capability reporting of claim 49, wherein the scheduling request is configured with a new scheduling request instance that indicates an SR configuration corresponding to AI-specific data/information.
  51. The method for processing AI-related UE capability reporting of claim 49, wherein the scheduling request is a legacy scheduling request with a parameter used to indicate that data/information of new transmission associated with the scheduling request is AI-specific data/information for AI-related UE capability reporting.
  53. The method for processing AI-related UE capability reporting of claim 49, wherein a logical channel (LCH) or logical channel group (LCG) corresponds to an SR configuration or an AI-specific SR configuration for transmitting data/information, and the LCH or LCG implicitly indicates that the transmitted data/information is AI-specific data/information.
  54. The method for processing AI-related UE capability reporting of claim 36, wherein a logical channel (LCH) or logical channel group (LCG) is included in an AI-specific buffer status reporting (BSR) Medium Access Control (MAC) control element (CE) for transmitting data/information, and the LCH or LCG implicitly indicates that the transmitted data/information is AI-specific data/information.
  55. The method for processing AI-related UE capability reporting of claim 36, wherein an AI-specific indicator and a logical channel (LCH) or logical channel group (LCG) are included in an AI-specific buffer status reporting (BSR) Medium Access Control (MAC) control element (CE) for transmitting data/information, and the LCH or LCG implicitly indicates that the transmitted data/information is AI-specific data/information.
  56. The method for processing AI-related UE capability reporting of claim 47, wherein for a reporting type of periodic reporting, configuration of AI-related UE capability reporting comprises one or more of:
    information that indicates an interval between two adjacent periodical AI-related UE capability reports;
    information that indicates a number of AI-related UE capability reports;
    information that indicates a maximum number of AI/ML models included in the AI-related UE capability reporting;
    information that indicates the maximum number of AI/ML functionalities included in the AI-related UE capability reporting;
    information that indicates whether AI/ML models enumerated in an allow-list in an associated requested AI-related UE capability information element are applicable to the AI-related UE capability reporting; and
    information that indicates whether AI/ML functionalities enumerated in an allow-list in the associated requested AI-related UE capability information element are applicable to the AI-related UE capability reporting.
  57. A base station comprising:
    a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the method of any of claims 36 to 56.
  58. A chip, comprising:
    a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the method of any of claims 36 to 56.
  59. A computer-readable storage medium, in which a computer program is stored, wherein the computer program causes a computer to execute the method of any of claims 36 to 56.
  60. A computer program product, comprising a computer program, wherein the computer program causes a computer to execute the method of any of claims 36 to 56.
  61. A computer program, wherein the computer program causes a computer to execute the method of any of claims 36 to 56.
PCT/CN2023/111600 2023-08-07 2023-08-07 User equipment, base station, and method for processng artificial intelligence-related user equipment capability reporting Pending WO2025030349A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2023/111600 WO2025030349A1 (en) 2023-08-07 2023-08-07 User equipment, base station, and method for processng artificial intelligence-related user equipment capability reporting


Publications (1)

Publication Number Publication Date
WO2025030349A1 (en) 2025-02-13

Family

ID=94533289

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/111600 Pending WO2025030349A1 (en) 2023-08-07 2023-08-07 User equipment, base station, and method for processng artificial intelligence-related user equipment capability reporting

Country Status (1)

Country Link
WO (1) WO2025030349A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114556822A (en) * 2019-10-17 2022-05-27 高通股份有限公司 Configuration of CSI reference resources and CSI target resources for predictive estimation of channel state information
CN115250502A (en) * 2021-04-01 2022-10-28 英特尔公司 Apparatus and method for RAN intelligent network
US20230164817A1 (en) * 2021-11-24 2023-05-25 Lenovo (Singapore) Pte. Ltd. Artificial Intelligence Capability Reporting for Wireless Communication



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 23947930
    Country of ref document: EP
    Kind code of ref document: A1