
WO2025059827A1 - AI/ML-based method for processing channel state information and wireless communication device - Google Patents


Info

Publication number
WO2025059827A1
Authority
WO
WIPO (PCT)
Prior art keywords
csi
based csi
processing
model
measurement configuration
Prior art date
Legal status
Pending
Application number
PCT/CN2023/119548
Other languages
French (fr)
Inventor
Yunsheng KUANG
Current Assignee
Shenzhen TCL New Technology Co Ltd
Original Assignee
Shenzhen TCL New Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen TCL New Technology Co Ltd filed Critical Shenzhen TCL New Technology Co Ltd
Priority to PCT/CN2023/119548
Publication of WO2025059827A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 5/00 - Arrangements affording multiple use of the transmission path
    • H04L 5/003 - Arrangements for allocating sub-channels of the transmission path
    • H04L 5/0053 - Allocation of signalling, i.e. of overhead other than pilot signals
    • H04L 5/0057 - Physical resource allocation for CQI
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 - Supervisory, monitoring or testing arrangements
    • H04W 24/10 - Scheduling measurement reports; Arrangements for measurement reports
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 8/00 - Network data management
    • H04W 8/22 - Processing or transfer of terminal data, e.g. status or physical capabilities
    • H04W 8/24 - Transfer of terminal data

Definitions

  • the present disclosure relates to the field of communication systems, and more particularly, to an artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) and a wireless communication device.
  • 3GPP RAN1 TR38.843 presents a comprehensive exploration of the integration of artificial intelligence (AI) and machine learning (ML) into the NR air interface. These technologies are harnessed to enhance CSI feedback, thereby improving the accuracy of time-domain CSI prediction, beam management, and even positioning. Additionally, the application of AI/ML methods in areas such as load balancing and RRM algorithms has been deliberated in RAN3/RAN2. These standardization research endeavors consistently demonstrate remarkable performance improvements when compared to non-AI/ML processing.
  • CSI, as a pivotal component in communication systems, plays a crucial role in supporting multi-input multi-output (MIMO) and Radio Resource Management (RRM) algorithms, contributing significantly to system throughput and spectral efficiency.
  • the incorporation of AI is no exception and is poised to reshape the way CSI is designed.
  • AI-related information, including prediction accuracy, must be incorporated into the traditional CSI reporting process, along with considerations for the time required for CSI measurement and computation.
  • measures are in place to ensure the uniformity of CSI measurement and reporting behaviors between the base station and user equipment (UE) . This involves defining the CSI process unit (CPU) and latency requirements for calculating various types of CSI.
  • An object of the present disclosure is to propose a wireless communication device, such as a user equipment (UE) or a base station, and an artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) based on machine learning.
  • an embodiment of the invention provides an artificial intelligence (AI)/machine learning (ML)-based method for processing channel state information (CSI), executed by a base station, comprising:
  • receiving AI/ML-based CSI processing capability reported by a user equipment (UE); and determining AI/ML-based CSI measurement configuration based on the AI/ML-based CSI processing capability, wherein the AI/ML-based CSI measurement configuration comprises one or more of a configuration for CSI resource scheduling, a configuration for CSI reporting, and a configuration for CSI measurement triggering.
  • an embodiment of the invention provides a base station comprising a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the disclosed method.
  • an embodiment of the invention provides an artificial intelligence (AI)/machine learning (ML)-based method for processing channel state information (CSI), executed by a user equipment (UE), comprising:
  • reporting AI/ML-based CSI processing capability; and receiving AI/ML-based CSI measurement configuration from a base station, wherein the AI/ML-based CSI measurement configuration is based on the AI/ML-based CSI processing capability;
  • wherein the AI/ML-based CSI measurement configuration comprises one or more of a configuration for CSI resource scheduling, a configuration for CSI reporting, and a configuration for CSI measurement triggering.
  • an embodiment of the invention provides a user equipment (UE) comprising a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the disclosed method.
  • an embodiment of the invention provides an artificial intelligence (AI)/machine learning (ML)-based method for processing channel state information (CSI), executable in a user equipment (UE), comprising:
  • receiving AI/ML-based CSI measurement configuration and AI/ML-based CSI reference signals, wherein a time interval from reception of the AI/ML-based CSI measurement configuration to CSI reporting is less than or equal to a first processing time, or a time interval from reception of the AI/ML-based CSI reference signals to the CSI reporting is less than or equal to a second processing time;
  • the first processing time and the second processing time are determined by a UE capability of AI/ML-based CSI processing.
  • an embodiment of the invention provides a user equipment (UE) comprising a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the disclosed method.
  • an embodiment of the invention provides an artificial intelligence (AI)/machine learning (ML)-based method for processing channel state information (CSI), executed by a base station, comprising:
  • transmitting, by the base station, AI/ML-based CSI measurement configuration and AI/ML-based CSI reference signals, so that a user equipment (UE) performs CSI measurement based on the AI/ML-based CSI measurement configuration and the AI/ML-based CSI reference signals, wherein a time interval from reception of the AI/ML-based CSI measurement configuration to CSI reporting is less than or equal to a first processing time, or a time interval from reception of the AI/ML-based CSI reference signals to the CSI reporting is less than or equal to a second processing time, and the first processing time and the second processing time are determined by a UE capability of AI/ML-based CSI processing; and receiving, by the base station, a result of the CSI reporting.
  • an embodiment of the invention provides a base station comprising a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the disclosed method.
  • the disclosed method may be implemented in a chip.
  • the chip may include a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the disclosed method.
  • the disclosed method may be programmed as computer-executable instructions stored in a non-transitory computer-readable medium.
  • the non-transitory computer-readable medium when loaded to a computer, directs a processor of the computer to execute the disclosed method.
  • the non-transitory computer-readable medium may comprise at least one from a group consisting of: a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a Read Only Memory, a Programmable Read Only Memory, an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory, and a Flash memory.
  • the disclosed method may be programmed as a computer program product, which causes a computer to execute the disclosed method.
  • the disclosed method may be programmed as a computer program, which causes a computer to execute the disclosed method.
  • This disclosure presents an artificial intelligence (AI)/machine learning (ML)-based method for processing channel state information (CSI), executed by a base station, comprising: receiving AI/ML-based CSI processing capability reported by a user equipment (UE); and determining AI/ML-based CSI measurement configuration based on the AI/ML-based CSI processing capability, wherein the AI/ML-based CSI measurement configuration comprises one or more of a configuration for CSI resource scheduling, a configuration for CSI reporting, and a configuration for CSI measurement triggering. Thus, the solution can improve the success rate of CSI measurement.
  • This disclosure presents an artificial intelligence (AI)/machine learning (ML)-based method for processing channel state information (CSI), executed by a user equipment (UE), comprising: receiving AI/ML-based CSI measurement configuration and AI/ML-based CSI reference signals; wherein a time interval from reception of the AI/ML-based CSI measurement configuration to CSI reporting is less than or equal to a first processing time, or a time interval from reception of the AI/ML-based CSI reference signals to the CSI reporting is less than or equal to a second processing time; and the first processing time and the second processing time are determined by a UE capability of AI/ML-based CSI processing. Thus, the solution can improve the CSI reporting rate.
  • a user equipment reports AI/ML-based CSI processing capability.
  • a base station receives the AI/ML-based CSI processing capability reported by the UE and determines AI/ML-based CSI measurement configuration based on the AI/ML-based CSI processing capability.
  • the UE receives AI/ML-based CSI measurement configuration from a base station.
  • CSI measurement configuration is configured taking into account the AI/ML-based CSI processing capability of UE.
  • CSI reporting performed by a UE can be enhanced by the AI/ML-based CSI measurement configuration.
  • a UE receives AI/ML-based CSI measurement configuration and AI/ML-based CSI reference signals and performs CSI reporting using the AI/ML-based CSI reference signals based on the AI/ML-based CSI measurement configuration.
  • a time interval from reception of the AI/ML-based CSI measurement configuration to CSI reporting is less than or equal to a first processing time, or a time interval from reception of the AI/ML-based CSI reference signals to the CSI reporting is less than or equal to a second processing time.
  • the first processing time and the second processing time are determined by a UE capability of AI/ML-based CSI processing.
  • timing relationships of CSI reporting, reception of the AI/ML-based CSI measurement configuration, and reception of the AI/ML-based CSI reference signals are limited by the first processing time and the second processing time determined based on the UE capability of AI/ML-based CSI processing.
  • FIG. 1 illustrates a schematic view showing CSI process criteria and CSI measurement and calculation time in NR system.
  • FIG. 2 illustrates a schematic view showing a wireless communication system comprising a user equipment (UE) , a base station, and a network entity.
  • FIG. 3 illustrates a schematic view showing a system with an AI/ML functional framework for executing a model management method using ML models.
  • FIG. 4 illustrates a schematic view showing an overall solution of the disclosed method.
  • FIG. 5 illustrates a schematic view showing an embodiment of the disclosed method.
  • FIG. 6 illustrates a schematic view showing another embodiment of the disclosed method.
  • FIG. 7 illustrates a schematic view showing timing of CSI reporting, AI/ML model activation, and AI/ML model inferencing in an example.
  • FIG. 8 illustrates a schematic view showing timing of CSI reporting, AI/ML model activation, and AI/ML model inferencing in another example.
  • FIG. 9 illustrates a schematic view showing a system for wireless communication according to an embodiment of the present disclosure.
  • Embodiments of the disclosure are related to artificial intelligence (AI) and machine learning (ML) for the new radio (NR) air interface and address problems of AI/ML-specific CSI processing and computation timing delay, so that AI models work properly in the network with minimal signaling interaction between the gNB and the UE.
  • Embodiments of the disclosure introduce a novel framework that leverages AI/ML models for Channel State Information (CSI) measurement.
  • This framework encompasses reference signal configuration, CSI measurement, and reporting.
  • Our solution offers several advantages, including reduced signaling overhead, scalability, efficient resource allocation tailored to various model requirements, and the unification of one-sided and two-sided model procedures into a single process through the use of an AI model-ID.
  • Embodiments of the disclosure provide:
  • a set of CSI process criteria specific to AI/ML models, including the definition of CPU N CPU, AI/ML and occupied CPU O CPU, AI/ML dedicated to AI/ML models, as well as the CSI process criteria for UE when AI/ML-based and non-AI/ML-based CSI processing coexist on the UE.
  • a set of CSI measurement timing and configurations specifically for AI/ML model inference, including T proc, CSI, AI/ML and T′ proc, CSI, AI/ML , encompassing aspects such as activation time and inference time for AI/ML models, and modifications to the current CSI measurement time estimation procedures.
  • the T proc, CSI, AI/ML is a time difference between the last symbol of the PDCCH triggering the CSI report(s) and the next uplink symbol after the end of the last symbol of the PDCCH.
  • T′ proc, CSI, AI/ML is a time difference between the last symbol of the reference signals for CSI measurement and report (s) and the next uplink symbol after the end of the last symbol in time of the reference signals.
  • CSI processing unit: This term refers to the capacity of simultaneous CSI measurements and calculations that the system supports.
  • Global ID: the identifier is used to identify an AI/ML model, which can be a globally unique ID, a PLMN-specific unique ID, an operator-specific unique ID, or an AI/ML management platform-specific unique ID.
  • Logical ID: the identifier is used to identify an AI/ML model used in the network, and it has a certain mapping relationship with the Global ID.
  • the Logical ID can be a globally unique ID, a PLMN-specific unique ID, an operator-specific unique ID, a Cell-specific ID, a Link-specific ID, a TA-specific ID, a CU-specific ID, a DU-specific ID, a UPF-specific ID, an AMF-specific ID, an RRC-specific ID, or a Network-Slicing-specific ID.
  • Cell-specific means that each AI/ML model has a unique ID within a specific cell.
  • Link-specific refers to unique IDs for AI/ML models in specific contexts within the network.
  • the information includes one or more of Global AI/ML model ID, provider, scenario, feature, function, version, accuracy, RRC descriptor, and AI/ML model descriptor.
  • Scenario of an AI/ML model may include one of but not limited to indoor, outdoor, flying, water, mobility speed, etc.
  • Feature of an AI/ML model may include one of but not limited to CSI compression, CSI prediction, beam management, positioning, handover, radio resource management, etc.
  • Function of an AI/ML model may include one of but not limited to CSI compression, CSI prediction, beam management, positioning, handover, radio resource management, and/or the usage of AI/ML models.
  • the usage may be monitoring, inference, and/or training etc.
  • the RRC descriptor includes requirements on reference signal configuration and requirements on CSI measurement for the AI/ML model, which may include at least one of reference signal timing requirement (e.g., period, time step, hopping mechanism, etc. ) , reference signal frequency requirement (e.g., frequency resource, hopping mechanism etc. ) , reference signal antenna port number, CSI contents (e.g., channel matrix, eigen-vector, etc. ) .
  • AI/ML model descriptor: This term refers to the detailed attribute description of the AI/ML model. This description may include, but is not limited to, AI/ML type, accuracy, input data requirement, output data, monitoring method, input data distribution, output data distribution, etc.
  • AI/ML model: For simplicity, the terms AI/ML model, AI model, ML model, and model are used interchangeably in this description.
  • the UE (e.g., UE 10) indicates the number of supported simultaneous CSI measurements and calculations N CPU with the parameter simultaneousCSI-ReportsPerCC in a component carrier, and simultaneousCSI-ReportsAllCC across all component carriers. If a UE supports N CPU simultaneous CSI measurements and calculations, it is said to have N CPU CSI processing units for processing CSI reports. If L CPUs are occupied for calculation of CSI reports in a given OFDM symbol, the UE has N CPU - L unoccupied CPUs.
  • If N requested CSI reports start occupying their respective CPUs on an OFDM symbol on which only N CPU - L CPUs are unoccupied, the UE is not required to update the N - M requested CSI reports with the lowest priority, where 0 ≤ M ≤ N is the largest value such that the CPUs occupied by the M highest-priority reports do not exceed N CPU - L.
  • a UE is not expected to be configured with an aperiodic CSI trigger state containing more than N CPU Reporting settings. Processing of a CSI report occupies a number of CPUs for a number of symbols, as previously defined.
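  • As an illustration only (a minimal sketch in Python; the function and variable names are hypothetical and not part of the disclosure), the rule above for selecting which requested CSI reports the UE must update can be expressed as:

        # Hypothetical sketch: with N_CPU processing units and L already occupied,
        # only the M highest-priority requested reports whose combined CPU demand
        # fits within the unoccupied budget need to be updated.
        def reports_to_update(requested, n_cpu, l_occupied):
            """requested: list of (priority, o_cpu); a lower priority value means higher priority."""
            budget = n_cpu - l_occupied
            updated, used = [], 0
            for priority, o_cpu in sorted(requested, key=lambda r: r[0]):
                if used + o_cpu > budget:
                    break                      # M is the largest prefix that fits the CPU budget
                updated.append((priority, o_cpu))
                used += o_cpu
            return updated                     # the remaining N - M lowest-priority reports need not be updated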
  • After receiving CSIReportConfig in downlink control information (DCI), the UE (e.g., UE 10) starts the process of CSI measurements and calculations, and finally obtains the CSI measurement/calculation results to report to the gNB (e.g., gNB 20) on PUCCH or PUSCH.
  • the overall time duration consumed can be categorized into four parts, including DCI decoding, beam switching (optional) , CSI measurements/calculations and transmission preparation.
  • When the CSI request field in a DCI triggers CSI report(s) on PUSCH, the UE shall provide a valid CSI report for the n-th triggered CSI report only if the report is not transmitted earlier than the symbols Z ref and Z' ref (n) defined below:
  • Z ref is defined as the next uplink symbol with its CP starting T proc, CSI = (Z) (2048 + 144) · 2^(-μ) · T C + T switch after the end of the last symbol of the PDCCH triggering the CSI report(s); and
  • Z' ref (n) is defined as the next uplink symbol with its CP starting T' proc, CSI = (Z') (2048 + 144) · 2^(-μ) · T C after the end of the last symbol in time of the CSI-RS resource.
  • AI/ML models When applying AI/ML models to the CSI measurement process, which includes CSI prediction, channel information feedback, beam management, and positioning, it becomes evident that the resource demands, encompassing computing units, memory, storage, and other hardware resources, for AI/ML models differ significantly from those required by traditional CSI measurement and calculation methods that do not employ AI/ML (known as non-AI/ML-based CSI measurement/calculation) . Consequently, the existing metrics for CPU usage and occupied CPU, originally defined for non-AI/ML models, become inadequate for assessing AI/ML model resource utilization. Therefore, it is imperative to redefine relevant parameters specifically tailored to AI/ML models.
  • AI/ML model activation: e.g., loading an AI/ML model into the UE's memory (RAM).
  • This model activation process should only commence once the UE has received the CSI-RS. Consequently, if the UE initiates AI/ML model inference immediately upon initiating the legacy CSI measurement/calculation as described in TS 38.214 –5.4, where CSI measurement and calculation begin after receiving CSI-RS, the overall time required for CSI measurement and calculation using AI/ML model inference will be extended due to the time needed for AI/ML model activation.
  • a telecommunication system including a UE 10a, a base station 20a, a base station 20b, and a network entity device 30 executes the disclosed method according to an embodiment of the present disclosure.
  • FIG. 2 is shown for illustrative, not limiting, purposes, and the system may comprise more UEs, BSs, and CN entities. Connections between devices and device components are shown as lines and arrows in the FIGs.
  • the UE 10a may include a processor 11a, a memory 12a, and a transceiver 13a.
  • the base station 20a may include a processor 21a, a memory 22a, and a transceiver 23a.
  • the base station 20b may include a processor 21b, a memory 22b, and a transceiver 23b.
  • the network entity device 30 may include a processor 31, a memory 32, and a transceiver 33.
  • Each of the processors 11a, 21a, 21b, and 31 may be configured to implement the proposed functions, procedures, and/or methods described in this description. Layers of radio interface protocol may be implemented in the processors 11a, 21a, 21b, and 31.
  • Each of the memory 12a, 22a, 22b, and 32 operatively stores a variety of programs and information to operate a connected processor.
  • Each of the transceivers 13a, 23a, 23b, and 33 is operatively coupled with a connected processor, and transmits and/or receives a radio signal.
  • Each of the base stations 20a and 20b may be an eNB, a gNB, or one of other radio nodes.
  • Each of the processors 11a, 21a, 21b, and 31 may include a general-purpose central processing unit (CPU) , application-specific integrated circuits (ASICs) , other chipsets, logic circuits and/or data processing devices.
  • Each of the memory 12a, 22a, 22b, and 32 may include read-only memory (ROM) , a random-access memory (RAM) , a flash memory, a memory card, a storage medium and/or other storage devices.
  • Each of the transceivers 13a, 23a, 23b, and 33 may include baseband circuitry and radio frequency (RF) circuitry to process radio frequency signals.
  • the techniques described herein can be implemented with modules, procedures, functions, entities and so on, that perform the functions described herein.
  • the modules can be stored in a memory and executed by the processors.
  • the memory can be implemented within a processor or external to the processor, in which case it can be communicatively coupled to the processor via various means known in the art.
  • the network entity device 30 may be a node in a CN.
  • the CN may include an LTE CN or 5GC, which may include a user plane function (UPF), session management function (SMF), access and mobility management function (AMF), unified data management (UDM), policy control function (PCF), control plane (CP)/user plane (UP) separation (CUPS), authentication server function (AUSF), network slice selection function (NSSF), and network exposure function (NEF).
  • a system 100 for the artificial intelligence (AI)/machine learning (ML)-based method for processing channel state information (CSI) based on machine learning comprises a data collection unit 101, a model training unit 102, an actor 103, and a model inference unit 104.
  • FIG. 3 does not necessarily limit the artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) to the instant example.
  • the artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) is applicable to any design based on machine learning.
  • the general steps comprise data collection and/or model training and/or model inference and/or (an) actor (s) .
  • the data collection unit 101 is a function that provides input data to the model training unit 102 and the model inference unit 104.
  • AI/ML algorithm-specific data preparation includes, e.g., data pre-processing and cleaning, formatting, and transformation.
  • Examples of input data may include measurements from UEs or different network entities, feedback from Actor 103, and output from an AI/ML model.
  • Training data is data needed as input for the AI/ML Model training unit 102.
  • Inference data is data needed as input for the AI/ML Model inference unit 104.
  • the model training unit 102 is a function that performs the ML model training, validation, and testing.
  • the Model training unit 102 is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on training data delivered by the data collection unit 101, if required.
  • Model Deployment/Update between units 102 and 104 involves deployment or update of an AI/ML model (e.g., a trained machine learning model 105a or 105b) to the model inference unit 104.
  • the model training unit 102 uses data units as training data to train a machine learning model 105a and generates a trained machine learning model 105b from the machine learning model 105a.
  • the model inference unit 104 is a function that provides AI/ML model inference output (e.g., predictions or decisions) .
  • the AI/ML model inference output is the output of the machine learning model 105b.
  • the Model inference unit 104 is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on inference data delivered by the data collection unit 101, if required.
  • the output shown between unit 103 and unit 104 is the inference output of the AI/ML model produced by the model inference unit 104.
  • Actor 103 is a function that receives the output from the model inference unit 104 and triggers or performs corresponding actions.
  • the actor 103 may trigger actions directed to other entities or to itself.
  • Feedback between unit 103 and unit 101 is information that may be needed to derive training or inference data or performance feedback.
  • FIG. 4 shows a flowchart of the overall solution for the artificial intelligence (AI)/machine learning (ML)-based method for processing channel state information (CSI).
  • Step 1 The UE (e.g., UE 10) sends the parameters simultaneousCSI-ReportsPerCC and simultaneousCSI-ReportsAllCC to the gNB (e.g., gNB 20) .
  • From the above two parameters, the number of available CSI processing units (CPUs) of the UE, denoted as N CPU, can be calculated.
  • The N CPU is part of the UE capability.
  • Step 2 The gNB provides CSI measurement configuration and performs CSI reporting processing based on the UE capability.
  • Step 3 After receiving a CSI measurement request, the UE decodes the received message. After successful decoding, the UE starts activating an AI/ML model used for inference for CSI measurement/calculation.
  • Step 4 The UE can perform CSI measurement/calculation by AI/ML model inference when receiving CSI-RS. Within the specified time, the UE generates a CSI report and sends the CSI report to the gNB.
  • An example of the UE 10 in the description is the UE 10a.
  • Examples of a gNB 20 in the description include the base station 20a or 20b. Note that even though the gNB is described as an example of a base station in the following, the disclosed method may be implemented in any other type of base station, such as an eNB or a base station for beyond 5G.
  • Uplink (UL) transmission of a control signal or data may be a transmission operation from a UE to a base station.
  • Downlink (DL) transmission of a control signal or data may be a transmission operation from a base station to a UE.
  • the disclosed method is detailed in the following.
  • the UE 10 and a base station, such as a gNB 20, execute the artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) based on machine learning.
  • the UE 10 reports AI/ML-based CSI processing capability (Step S001) .
  • the AI/ML-based CSI processing capability of the UE comprises one or more of:
  • a number of AI/ML-based CSI processing units (CPUs).
  • the AI/ML-based CSI processing capability of the UE is determined based on one or more of:
  • a subcarrier spacing (SCS).
  • the gNB 20 receives the AI/ML-based CSI processing capability (Step S002) .
  • the gNB 20 determines AI/ML-based CSI measurement configuration 402 based on the AI/ML-based CSI processing capability, wherein the AI/ML-based CSI measurement configuration comprises one or more of a configuration for CSI resource scheduling, a configuration for CSI reporting, and a configuration for CSI measurement triggering (Step S003) .
  • the UE 10 receives AI/ML-based CSI measurement configuration 402 from the gNB 20, wherein the AI/ML-based CSI measurement configuration is based on the AI/ML-based CSI processing capability (Step S004) .
  • the UE 10 performs a first operation according to the AI/ML-based CSI measurement configuration 402 (S006) .
  • one or both of the UE 10 and the gNB 20 keep one or more of the following relationships:
  • AI/ML-based CSI processing resources occupied by the AI/ML-based CSI measurement configuration are less than or equal to the AI/ML-based CSI processing capability
  • a weighted value of the AI/ML-based CSI processing resources occupied by the AI/ML-based CSI measurement configuration is less than or equal to the AI/ML-based CSI processing capability
  • the AI/ML-based CSI processing resources occupied by the AI/ML-based CSI measurement configuration and the CSI processing resources occupied by the non-AI/ML-based CSI measurement are less than or equal to the processing capability of the UE.
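  • Written as inequalities (a hypothetical reconstruction in LaTeX notation; the symbols O, w and N total below are illustrative, not symbols taken from the disclosure), the three relationships above could read:

        \sum_{n} O^{(n)}_{CPU,AI/ML} \le N_{CPU,AI/ML}, \qquad
        \sum_{n} w_n \, O^{(n)}_{CPU,AI/ML} \le N_{CPU,AI/ML}, \qquad
        \sum_{n} O^{(n)}_{CPU,AI/ML} + \sum_{m} O^{(m)}_{CPU} \le N_{total}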
  • the AI/ML-based CSI processing resources comprise one or more of:
  • the AI/ML-based CSI processing resources are determined based on one or more of:
  • a subcarrier spacing (SCS).
  • the AI/ML-based CSI processing resources of the UE are obtained through one or more of the following:
  • the AI/ML-based CSI processing resources of the UE occupied by the AI/ML-based CSI measurement configuration are preconfigured or predefined by a base station;
  • the base station receives a message from the UE, wherein the message comprises one or more of:
  • the AI/ML-based CSI processing resources of the UE are obtained through one or more of the following:
  • the AI/ML-based CSI processing resources of the UE occupied by the AI/ML-based CSI measurement configuration are preconfigured or predefined by the base station;
  • the base station receives a message from the UE, wherein the message comprises one or more of:
  • a CSI processing unit (CPU) occupation.
  • the UE when AI/ML-based CSI measurement configuration exceeds the AI/ML-based CSI processing capability of the UE, the UE discards the AI/ML-based CSI measurement configuration that exceeds the AI/ML-based CSI processing capability of the UE; and when the AI/ML-based CSI measurement configuration is less than or equal to the AI/ML-based CSI processing capability of the UE, the UE performs the first operation according to the AI/ML-based CSI measurement configuration.
  • the UE when a weighted sum calculated from AI/ML-based CSI measurement configuration and weights of the AI/ML-based CSI measurement configuration exceeds the AI/ML-based CSI processing capability of the UE, the UE discards the AI/ML-based CSI measurement configuration that exceeds the AI/ML-based CSI processing capability of the UE; and when the weighted sum is less than or equal to the AI/ML-based CSI processing capability of the UE, the UE performs the first operation according to the AI/ML-based CSI measurement configuration.
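  • A minimal sketch of this acceptance check (hypothetical helper and parameter names; the weights default to 1 when no weighting is configured):

        def accept_ai_ml_csi_config(per_report_resources, capability, weights=None):
            """Return True if the UE performs the first operation; False if the
            configuration exceeding the AI/ML-based CSI processing capability is discarded."""
            weights = weights or [1.0] * len(per_report_resources)
            weighted_sum = sum(w * r for w, r in zip(weights, per_report_resources))
            return weighted_sum <= capability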
  • the UE 10 transmits an uplink message to convey the CPU occupation for the AI/ML-based CSI processing.
  • the gNB 20 transmits AI/ML-based CSI measurement configuration 402 and AI/ML-based CSI reference signals 403, so that a user equipment (UE) performs CSI measurement based on the AI/ML-based CSI measurement configuration and the AI/ML-based CSI reference signals, wherein a time interval from reception of the AI/ML-based CSI measurement configuration to CSI reporting is less than or equal to a first processing time, or a time interval from reception of the AI/ML-based CSI reference signals to the CSI reporting is less than or equal to a second processing time, and the first processing time and the second processing time are determined by a UE capability of AI/ML-based CSI processing (Step S011) .
  • the UE 10 receives AI/ML-based CSI measurement configuration 402 and AI/ML-based CSI reference signals 403 (Step S012) .
  • a time interval from reception of the AI/ML-based CSI measurement configuration to CSI reporting is less than or equal to a first processing time, or a time interval from reception of the AI/ML-based CSI reference signals to the CSI reporting is less than or equal to a second processing time.
  • the first processing time and the second processing time are determined by a UE capability of AI/ML-based CSI processing.
  • the UE 10 performs CSI reporting to transmit a CSI report 404 using the AI/ML-based CSI reference signals 403 based on the AI/ML-based CSI measurement configuration 402 (Step S013) .
  • the gNB 20 receives a result (e.g., the CSI report 404) of the CSI reporting (Step S014) .
  • the first processing time comprises one or more of:
  • the second processing time comprises one or more of:
  • the first processing time and/or the second processing time is determined by one or more of:
  • AI/ML model parameters for CSI measurement
  • a subcarrier spacing (SCS).
  • the first processing time and/or the second processing time is obtained through one or more of:
  • a value of the first processing time and/or the second processing time conveyed in an uplink message that is transmitted in a UE capability report, a scheduling request, physical uplink shared channel (PUSCH), or physical uplink control channel (PUCCH).
  • the second processing time comprises one portion or entirety of the duration required for AI/ML model activation
  • the second processing time comprises entirety of the duration required for AI/ML model activation
  • the second processing time comprises one portion of the duration required for AI/ML model activation
  • the second processing time does not include the duration required for AI/ML model activation.
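  • In inequality form (an illustrative restatement; t report, t config and t RS denote the CSI reporting instant and the reception instants of the measurement configuration and the reference signals, and are not symbols used in the disclosure):

        t_{report} - t_{config} \le T_{proc,CSI,AI/ML}
        \quad \text{or} \quad
        t_{report} - t_{RS} \le T'_{proc,CSI,AI/ML}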
  • AI/ML-specific CPU and processing criteria with virtualization are detailed in the following.
  • the CSI measurement requirements configured by the base station to the UE, or the UE-configured CSI measurement requirements, or the CSI measurement update requirements should be determined by at least one of the following parameters:
  • the parameter N CPU, AI/ML means a UE has N CPU, AI/ML AI/ML-specific CSI processing units for processing AI/ML-based CSI reports, where N CPU, AI/ML may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
  • The number of occupied CPUs, denoted as O CPU, AI/ML (n), for the n-th AI/ML-based CSI reporting, where n is an integer variable varying from 0 to N AI/ML - 1:
  • the parameter means a UE will occupy O CPU, AI/ML (n) CPUs for the n-th AI/ML-based CSI reporting, where the value may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
  • a UE is not expected to be configured with an aperiodic CSI trigger state containing more than N CPU, AI/ML reporting settings using AI/ML model inference.
  • CSI processing may comprise either or both of CSI measurement and CSI calculation.
  • the parameter may be determined based on one or more factors, such as the contents of CSI measurement, the AI/ML functionality, the AI/ML model, etc. The detailed method depends on the implementation.
  • the value may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to gNB 20.
  • Option A: the value is predefined by the wireless communication system.
  • the number of occupied CPUs is related to contents of CSI measurement, AI/ML functionality, AI/ML model.
  • O k is the predefined value used to represent the consumed AI/ML-specific CPUs, which may be influenced by process resources, memory, computation capabilities, and power consumption.
  • the variable k is an integer representing a column index.
  • Class A/B/C represents different types of CSI-ReportConfig. For example, Class A means reportQuantity is set to ‘none’ , Class B means set to ‘RSRP’ or ‘SINR’ , Class C means CSI measurements and calculations like codebookType etc. Type A/B/C represents different AI/ML functionality.
  • Type A is AI/ML used for CSI measurements
  • Type B is AI/ML used for beam measurements
  • Type C is AI/ML used for positioning.
  • Model A/B/C represents different AI/ML models.
  • Model A is an AI/ML model based on RNN
  • Model B is an AI/ML model based on CNN
  • Model C is an AI/ML model based on a type of neural network and so on.
  • Option B: the value is indicated by the UE to the gNB.
  • The occupied CPUs of each AI/ML-based CSI report are obtained through UE reporting.
  • After receiving this information through UE reporting, the gNB starts to decide the AI/ML-based CSI measurement configuration that is assigned to the UE, on the condition that the total occupied resources for AI/ML model inference do not exceed the UE capability for AI/ML model inference, that is, a condition of the form sketched below.
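  • A plausible form of the condition referred to above, assuming the per-report occupancy is written O CPU, AI/ML (n) consistently with the earlier definitions (a reconstruction, not the original formula):

        \sum_{n=0}^{N_{AI/ML}-1} O^{(n)}_{CPU,AI/ML} \le N_{CPU,AI/ML}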
  • the information fed back by the UE to the base station should contain one or more of the following parameters: AI/ML model related information and occupied CPU for AI/ML-based CSI.
  • the information fed back by the UE is model-specific metadata and may be included in UE capability reporting, uplink control/data channels like PUSCH/PUCCH, MAC layer control elements, RRC layer signaling, or other appropriate control mechanisms available in the radio protocol stack.
  • the AI/ML model related information includes AI/ML model attribute description information used to identify and differentiate between various AI/ML models that are used by the UE or gNB for CSI measurement and calculation.
  • the occupied CPUs for n-th AI/ML-based CSI measurement and calculation used by various CSI, and/or AI/ML features/functionalities, and/or AI/ML models may be different.
  • the number of occupied resources may include at least one of the parameters representing the process resources, memory, computation capabilities, or power consumption of the n-th CSI measurement and calculation or reporting.
  • the number of occupied resources may also include a weighted sum of these parameters. Definitions of the parameters are illustrated in Embodiment 2.
  • Embodiment 1 AI/ML-specific CPU and processing criteria with one or more pieces of physical information
  • the parameter N CPU, AI/ML, which indicates the number of supported simultaneous AI/ML-based processing operations, is a virtualized parameter that originates from and is influenced by multiple physical factors, including processes, memory, computing power, energy consumption, etc.
  • Therefore, N CPU, AI/ML alone may not fully reflect the UE's capability of simultaneously processing AI/ML-based CSI measurements and calculations.
  • the CSI measurement requirements configured by the base station to the UE, or the UE-configured CSI measurement requirements, or the CSI measurement update requirements should be determined by at least one of the following parameters:
  • The number of supported simultaneous process resources for AI/ML-based CSI measurements, N proc: this represents the process resources currently available on the UE that can be used for AI/ML model computation, where the parameter N proc may be indicated by the UE or defined by system parameters.
  • The number of supported simultaneous occupied memory resources for AI/ML-based CSI measurements, N mem: this represents the available RAM resources on the UE that can be used for AI/ML model inference, where the parameter N mem may be indicated by the UE or defined by system parameters.
  • The number of supported simultaneous computation capabilities for AI/ML-based CSI measurements, N comp: this represents the available computation resources on the UE that can be used for AI/ML model inference, where the parameter N comp may be indicated by the UE or defined by system parameters.
  • The number of supported simultaneous power consumptions for AI/ML-based CSI measurements, N power: this represents the maximum power consumption on the UE that can be used for AI/ML model inference, where the parameter N power may be indicated by the UE or defined by system parameters.
  • The number of occupied process resources for the n-th AI/ML-based CSI reporting, where n is an integer variable varying from 0 to N AI/ML - 1: this means a UE will occupy that number of process resources for the n-th AI/ML-based CSI reporting, where the value may be indicated by the UE or defined by system parameters.
  • The number of occupied memory resources for the n-th AI/ML-based CSI reporting, where n is an integer variable varying from 0 to N AI/ML - 1: this means a UE will occupy that number of memory resources for the n-th AI/ML-based CSI reporting, where the value may be indicated by the UE or defined by system parameters.
  • The number of occupied computation capabilities for the n-th AI/ML-based CSI reporting, where n is an integer variable varying from 0 to N AI/ML - 1: this means a UE will occupy that number of computation capabilities for the n-th AI/ML-based CSI reporting, where the value may be indicated by the UE or defined by system parameters.
  • The number of occupied power consumptions for the n-th AI/ML-based CSI reporting, where n is an integer variable varying from 0 to N AI/ML - 1: this means a UE will occupy that amount of power consumption for the n-th AI/ML-based CSI reporting, where the value may be indicated by the UE or defined by system parameters.
  • the gNB 20 decides to assign an AI/ML-based CSI report configuration (i.e., CSI reporting configuration) to the UE according to an evaluation of the UE capability that satisfies at least one of the following conditions (a plausible form is sketched below):
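  • The condition formulas themselves were not preserved in the extracted text; a plausible reconstruction, assuming O proc (n), O mem (n), O comp (n) and O power (n) denote the per-report occupied resources defined above, is:

        \sum_{n} O^{(n)}_{proc} \le N_{proc}, \quad
        \sum_{n} O^{(n)}_{mem} \le N_{mem}, \quad
        \sum_{n} O^{(n)}_{comp} \le N_{comp}, \quad
        \sum_{n} O^{(n)}_{power} \le N_{power}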
  • The parameters are determined based on one or more factors, such as the contents of CSI measurement, the AI/ML model, the AI/ML functionality, etc.
  • The detailed method depends on the implementation.
  • the values of one or more of these parameters may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
  • Option A: one or more of these parameters are predefined by the wireless communication system.
  • The table below shows an example in which one or more of these parameters are predefined by the wireless communication system.
  • P k , M k , C k , and O k are the predefined values of the consumed resources of process, memory, computation capabilities and power consumption where k is an integer that varies from 1 to 7.
  • the meanings of Class A/B/C representing types of CSI-ReportConfig, Type A/B/C of AI/ML functionality, Model A/B of AI/ML models are the same as illustrated in Embodiment 1.
  • Option B: one or more of these parameters are indicated by the UE to the gNB.
  • These parameters can be carried in a UE capability report, MAC-CE or RRC message over PUSCH/PUCCH and contain at least one of the following:
  • the AI/ML model related information includes AI/ML model attribute description information used to identify an AI/ML model, which can be a globally unique ID, a PLMN-specific unique ID, an operator-specific unique ID, or an AI/ML management platform-specific unique ID.
  • Occupied process resources: the parameter is used to describe the number of process resources consumed by the AI/ML model.
  • Occupied memory: the parameter is used to describe the RAM consumed during AI/ML model inference.
  • Embodiment 2 AI/ML-specific CPU and processing criteria with a weighted parameter derived from one or more pieces of physical information.
  • one or more parameters indicating the UE capability used for supporting AI/ML-based CSI measurements are transmitted to the gNB 20 from the UE (e.g., UE 10), which may incur significant air interface overhead.
  • the CSI measurement requirements configured by the base station to the UE, or the UE-configured CSI measurement requirements, or the CSI measurement update requirements should be determined by at least one of the following parameters:
  • The weighted number of supported simultaneous AI/ML-specific CSI processing units for CSI measurement and calculation, N ALL, means a UE has N ALL AI/ML-specific CSI processing units for processing AI/ML-based CSI reports, where N ALL may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
  • The priority indicates the priority of the memory resource in evaluating the UE capability for AI/ML-based CSI measurement and calculation for the n-th AI/ML-based CSI reporting, where this priority may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
  • The priority indicates the priority of the computation capabilities in evaluating the UE capability for AI/ML-based CSI measurement and calculation for the n-th AI/ML-based CSI reporting, where this priority may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
  • The priority indicates the priority of the power consumption in evaluating the UE capability for AI/ML-based CSI measurement and calculation for the n-th AI/ML-based CSI reporting, where this priority may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
  • the parameter can be calculated using the priority parameters (weight values), as sketched below.
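  • The formula itself is not preserved in the extracted text; one plausible form, with P proc, P mem, P comp and P power denoting the priority (weight) values, is:

        N_{ALL} = P_{proc} \, N_{proc} + P_{mem} \, N_{mem} + P_{comp} \, N_{comp} + P_{power} \, N_{power}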
  • a UE is not expected to be configured with an aperiodic CSI trigger state containing more than N ALL Reporting settings using AI/ML model inference.
  • the parameters are determined based on one or more factors, such as the contents of CSI measurement, the AI/ML model, the AI/ML functionality, etc.
  • The detailed method depends on the implementation.
  • the values of one or more of these parameters may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
  • Values of one part of these parameters can be obtained from predefined system parameters while another part can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
  • Option A: one or more of these parameters are predefined by the wireless communication system.
  • P proc k , P mem k , P comp k , and P power k are the predefined values of the priority (weight value) of process, memory, computation capabilities and power consumption.
  • the variable k is an integer representing a column index.
  • the meanings of Class A/B/C of types of CSI-ReportConfig, Type A/B/C of types of AI/ML functionality, Model A/B of AI/ML models are the same as illustrated in Embodiment 1.
  • Option B: one or more of these parameters are indicated by the UE to the gNB.
  • One or more of these parameters for each AI/ML-based CSI report are obtained through reporting by the UE (e.g., UE 10).
  • the gNB 20 decides to assign an AI/ML-based CSI report configuration (i.e., CSI reporting configuration) to the UE (e.g., UE 10) according to an evaluation of the UE capability that satisfies at least one of the following conditions (a plausible form is sketched below):
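  • As above, the condition formulas were not preserved; a plausible weighted form, written analogously to Embodiment 1, is:

        \sum_{n} \Big( P^{(n)}_{proc} O^{(n)}_{proc} + P^{(n)}_{mem} O^{(n)}_{mem} + P^{(n)}_{comp} O^{(n)}_{comp} + P^{(n)}_{power} O^{(n)}_{power} \Big) \le N_{ALL}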
  • These parameters can be carried in a UE capability report, MAC-CE or RRC message over PUSCH/PUCCH and contain at least one of the following:
  • the AI/ML model related information includes AI/ML model attribute description information used to identify an AI/ML model, which can be a globally unique ID, a PLMN-specific unique ID, an operator-specific unique ID, or an AI/ML management platform-specific unique ID.
  • Priority of consumed process resources: the parameter is used to describe the priority of the process resources consumed by the AI/ML model.
  • Priority of occupied memory: the parameter is used to describe the priority of the RAM consumed during AI/ML model inference.
  • Priority of consumed computation capabilities: the parameter is used to describe the priority of the computation capabilities consumed during AI/ML inference.
  • Priority of power consumption: the parameter is used to describe the priority of the power consumption of AI/ML inference.
  • Embodiment 3 Compatibility between AI/ML-based and non-AI/ML-based CSI measurement and calculation.
  • the UE should be able to support both types of models simultaneously.
  • schemes for compatibility between AI/ML-based and non-AI/ML-based CSI measurement and calculation including conventional schemes or schemes under development or developed in the future, can be incorporated into embodiments of the disclosure. This is because the AI/ML model may switch or fallback to a non-AI/ML model in some scenarios. Therefore, the UE should have the capability to perform CSI measurements using both AI/ML and non-AI/ML models in the same UE.
  • The UE may transmit signaling to the base station to inform the base station how to configure the non-AI/ML-based CSI measurements and AI/ML-based CSI measurements based on the UE's capability.
  • a UE may transmit to the base station both N CPU corresponding to the UE capability for non-AI/ML-based CSI measurements and N CPU, AI/ML corresponding to the UE capability for AI/ML-based CSI measurements.
  • AI/ML-based CSI measurement and non-AI/ML-based CSI measurement running simultaneously may occupy the same hardware and system resources or not.
  • There are two schemes for AI/ML and non-AI/ML models of CSI measurement running on a single UE: one with AI/ML models running on separate micro processing units (MPUs), and another with AI/ML and non-AI/ML models running on a shared MPU.
  • the following information may be indicated by the UE:
  • the AI/ML model related information includes AI/ML model attribute description information used to identify an AI/ML model used by the UE or gNB for CSI measurement and calculation.
  • a CSI calculation resource occupation scheme indicator shows whether the AI/ML-based and non-AI/ML-based CSI measurement and calculation occupy separate CPU pools or share the same CPU pool, where the CPU may be computation process resource, memory, or others.
  • the CPU may include the total CPU number, and/or occupied CPU for one CSI calculation or reporting or measurement.
  • the CSI calculation resource occupation scheme indicator can be signaled implicitly or explicitly from UE (e.g., UE 10) to base station (e.g., gNB 20) .
  • the CSI calculation resource occupation scheme indicator can be signaled or determined implicitly by the relationship information or directly by an indication.
  • An example of the detailed implementation is shown in the following:
  • Option 1 Determined by the relationship information.
  • the relationship information with a valid value means AI/ML-based CSI measurement and calculation or non-AI/ML-based CSI measurement and calculation share the same CPU pool.
  • the relationship information with an invalid value means the AI/ML-based CSI measurement and calculation and non-AI/ML-based CSI measurement and calculation are processed with separated CPU pools.
  • the invalid value can be denoted by NULL, a maximum value, a minimum value, or alternative values.
  • the relationship information may depict a relationship between CPUs used by AI/ML-based CSI measurement and calculation and CPUs used by non-AI/ML-based CSI measurement and calculation.
  • the relationship can be illustrated as follows, while additional methods are not precluded,
  • a scaling coefficient K n representing a relationship between the CPU consumption of AI/ML-based CSI processing and non-AI/ML-based CSI processing for the n-th AI/ML-based CSI reporting, where n is an integer variable varying from 0 to N AI/ML - 1:
  • the scaling coefficient K n means a ratio between the CPUs used by the AI/ML model and the CPUs used by the non-AI/ML model.
  • the K n may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to gNB 20.
  • the conversion coefficient K n can be expressed as sketched below.
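  • A plausible expression, assuming O CPU, AI/ML (n) and O CPU (n) denote the CPUs occupied by the AI/ML-based and non-AI/ML-based processing of the n-th report (a reconstruction, since the original expression was not extracted):

        K_n = \frac{O^{(n)}_{CPU,AI/ML}}{O^{(n)}_{CPU}}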
  • the difference D n means a difference between the CPUs used by the AI/ML model and the CPUs used by the non-AI/ML model.
  • the D n may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to gNB 20.
  • the conversion difference D n can be expressed as sketched below.
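  • Analogously, a plausible form of the omitted expression is:

        D_n = O^{(n)}_{CPU,AI/ML} - O^{(n)}_{CPU}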
  • Option 2 A new indicator needs to be transmitted from the UE (e.g., UE 10) to the gNB (e.g., gNB 20) to indicate whether the AI/ML and non-AI/ML models occupy separate CPU pools or share the same CPU pool.
  • This indicator can take the form of a binary digit, with 0 representing separation and 1 indicating sharing, for example.
  • The gNB (e.g., gNB 20) then provides the CSI measurement configuration (e.g., CSI-RS resource, CSI reporting, CSI triggering, etc.) to the UE (e.g., UE 10).
  • Case 1 AI/ML-based and non-AI/ML-based CSI measurement and calculation use separate CPU pools.
  • the parameter N represents the number of assigned CSI reporting tasks which are implemented by the non-AI/ML model;
  • the parameter N AI/ML represents the number of assigned CSI reporting tasks which are implemented by AI/ML model inference.
  • the number of assigned CSI reporting tasks may be configured in CSI report configuration (i.e., CSI reporting configuration) .
  • The UE (e.g., UE 10) need not transmit additional signaling to instruct the gNB on configuring CSI measurements and calculations for both AI/ML-based and non-AI/ML-based models, as the configuration is determined by the UE's capabilities.
  • In the case that the hardware and system resources of the same micro processing unit (MPU) are shared when running AI/ML models and non-AI/ML models for CSI measurement and calculation computations, if the CSI report configurations (i.e., CSI reporting configurations) issued by the gNB for execution using non-AI/ML models and AI/ML models occupy resources of N CPU and N CPU, AI/ML to saturation, the CPUs occupied by the UE when performing the corresponding CSI measurement and calculation operations are very likely to directly exceed the UE's capabilities, leading to failure in processing some CSI report configurations (i.e., CSI reporting configurations) and a decreased success rate of CSI feedback.
  • the resource consumption of AI/ML models cannot be considered separately from that of non-AI/ML models. Instead, a conversion relationship between the CPU consumption of AI/ML models and non-AI/ML models needs to be considered. Using the conversion relationship, the CPU consumption of one type of CSI measurement and calculation can be converted into the CPU consumption of the other type of CSI measurement and calculation for evaluation.
  • the types of CSI measurement and calculation include AI/ML-based and non-AI/ML-based CSI measurement and calculations.
  • the conversion relationship between the CPU consumption of the AI/ML model and the non-AI/ML model for the CSI measurement and calculations should be determined by at least one of the parameters K n and D n .
  • the overall CPU consumption needs to satisfy a joint constraint across AI/ML-based and non-AI/ML-based CSI processing; an illustrative form of such a constraint is sketched below.
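An illustrative form of such a joint constraint is sketched below, converting the AI/ML-based consumption into shared-pool units via K n . The exact formula used by the disclosure is not reproduced here; this is a hedged sketch under the notation assumed above, with the tilded symbol denoting the CPUs the equivalent non-AI/ML processing of the n-th AI/ML-based report would occupy.

```latex
% Hedged sketch of a shared-pool budget: non-AI/ML occupancy plus K_n-converted
% AI/ML occupancy must not exceed the shared pool of N_CPU processing units.
\sum_{m=0}^{N-1} O^{(m)}_{\mathrm{CPU}}
+ \sum_{n=0}^{N_{\mathrm{AI/ML}}-1} K_n \, \tilde{O}^{(n)}_{\mathrm{CPU}}
\;\le\; N_{\mathrm{CPU}} .
```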
  • Embodiment 4: time consumption for AI/ML-based CSI measurement and calculation, including model activation time and inference time.
  • an example shows timing of CSI reporting, AI/ML model activation, and AI/ML model inferencing.
  • a model activation process is needed to load the AI/ML model into main memory (e.g., memory 11 or a main memory in memory/storage 740) before subsequent model inference operations can be performed.
  • AI/ML models are usually stored as files (e.g., . pkl, . pmml, . mlmodel, . caffemodel format files) in read-only memory (ROM) (e.g., a ROM in memory/storage 740) or a database on the UE side.
  • the AI/ML models are loaded into memory to run.
  • the CSI measurement and calculation time may be determined by at least one of the following parameters:
  • the first timing duration T proc, CSI, AI/ML of AI/ML-based CSI measurement and calculation: Z Ref, AI/ML represents the next uplink symbol after the end of the last symbol of the PDCCH (e.g., a CSI request) triggering the AI/ML-based CSI report (s) , and may be related to the numerology μ.
  • the T proc, CSI, AI/ML is a time difference between the last symbol of the PDCCH triggering the CSI report (s) and next uplink symbol after the end of the last symbol of the PDCCH.
  • the first timing duration may contain the time duration of at least one of the following operations: decoding DCI which contains CSI request, AI/ML model activation, AI/ML model switching, AI/ML model inference and beam forming switching.
  • the first timing duration T proc, CSI, AI/ML and the symbol Z Ref, AI/ML may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to gNB 20.
  • the second timing duration T′ proc, CSI, AI/ML of AI/ML-based CSI measurement and calculation: Z′ Ref, AI/ML represents the next uplink symbol after the end of the last symbol of the reference signal (e.g., CSI-RS, SRS, DMRS, SSB etc. ) for AI/ML-based CSI measurement and calculation, and may be related to the numerology μ.
  • T′ proc, CSI, AI/ML is a time difference between the last symbol of the reference signals for CSI measurement and report (s) and the next uplink symbol after the end of the last symbol in time of the reference signals.
  • the second timing duration may contain the time duration of at least one of the following operations: part or the whole of AI/ML model activation, AI/ML model switching and AI/ML model inference.
  • the second timing duration T′ proc, CSI, AI/ML and the symbol Z′ Ref, AI/ML may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to gNB 20.
  • the time duration required for AI/ML model activation, X load, AI/ML : this corresponds to the duration from the initiation of the CSI request or CSI-RS transmission to the successful activation (loading) of the specific AI/ML model into RAM, and may be related to the numerology μ, where the X load, AI/ML may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to gNB 20.
  • the time duration required for AI/ML model inference, X inf, AI/ML : this indicates the duration from the commencement of inference by the AI/ML model to the completion of the inference, and may be related to the numerology μ, where the X inf, AI/ML may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to gNB 20 (a timing sketch follows this block) .
  • the starting time of AI/ML model inference may be after the end of the last symbol of CSI-RS and completion of AI/ML model activation.
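A minimal Python sketch of these timing conditions follows; the function name and arguments are hypothetical and only illustrate that inference starts after both the last CSI-RS symbol and the completion of model activation, and that the report must be ready before the PUSCH occasion.

```python
def csi_report_is_valid(csi_rs_end, activation_end, x_inf, pusch_first_symbol):
    """Hypothetical timing sketch (all times in the same unit, e.g., symbols).

    Inference may start only after both the last CSI-RS symbol has been
    received and the AI/ML model has been activated (loaded into RAM).
    The report is treated as valid if inference finishes no later than the
    first symbol of the PUSCH occasion carrying the CSI report.
    """
    inference_start = max(csi_rs_end, activation_end)
    inference_end = inference_start + x_inf
    return inference_end <= pusch_first_symbol

# Example: CSI-RS ends at symbol 10, activation ends at symbol 14,
# inference takes 8 symbols, PUSCH starts at symbol 28 -> valid.
print(csi_report_is_valid(10, 14, 8, 28))  # True
```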
  • the UE may receive aperiodic CSI-RS resource for channel measurements, aperiodic CSI-IM used for interference measurements, and aperiodic NZP CSI-RS for interference measurement.
  • Parameter Z′ ref, AI/ML is defined to represent the next uplink symbol after the end of the last symbol of the reference signal (e.g., CSI-RS, SRS, DMRS, SSB etc. ) for AI/ML-based CSI measurement and calculation.
  • Z′ Ref, AI/ML is a version of Z′ Ref for AI/ML-based CSI measurement and calculation.
  • Z Ref, AI/ML is a version of Z Ref for AI/ML-based CSI measurement and calculation.
  • the parameters X load, AI/ML and X inf, AI/ML may be determined based on one or more factors, such as contents of CSI measurement and calculation, AI/ML model, AI/ML functionality, etc. The detailed method depends on the implementation.
  • the X load, AI/ML and X inf, AI/ML values may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to gNB 20.
  • the UE may determine one from multiple parameters X load, AI/ML and one from multiple parameters X inf, AI/ML based on the numerology μ and one or more factors, such as contents of CSI measurement, AI/ML model, AI/ML functionality, etc. (a lookup sketch follows this block) .
  • the following table shows examples of parameters X load, AI/ML and X inf, AI/ML which are related to the numerology μ and determined based on one or more factors, such as contents of CSI measurement, AI/ML model, AI/ML functionality, etc.
  • the parameters X inf, AI/ML can be denoted as X inf, ij
  • the parameters X load, AI/ML can be denoted as X load, ij
  • i is an integer variable representing a row index
  • j is an integer variable representing a column index.
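A minimal Python sketch of such a table lookup follows; the keys and the numeric values are placeholders invented for illustration and are not values from the disclosure.

```python
# Hypothetical sketch of selecting X_load and X_inf by (row, column) index,
# where the row is derived from report class / AI-ML functionality / model
# and the column from the numerology mu. The numbers are placeholders only.
X_LOAD = {("ClassA-TypeA-ModelA", 0): 10, ("ClassA-TypeA-ModelA", 1): 12}
X_INF = {("ClassA-TypeA-ModelA", 0): 20, ("ClassA-TypeA-ModelA", 1): 24}

def select_timing(row_key: str, mu: int):
    """Return the (X_load, X_inf) pair for the given row and numerology."""
    return X_LOAD[(row_key, mu)], X_INF[(row_key, mu)]

print(select_timing("ClassA-TypeA-ModelA", 1))  # (12, 24)
```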
  • the initiation of the AI/ML model's inference time must satisfy two conditions: (1) The UE has received the CSI reference signals, such as CSI-RS, IM-RS, or others, transmitted by the gNB for CSI measurement; (2) The activation of the AI/ML model is finished.
  • the time duration X inf, AI/ML begins when the UE receives the last symbol of CSI reference signals transmitted from gNB and ends when the CSI measurement and calculation using AI/ML model inference has been accomplished with inference results obtained.
  • the inference results include the CSI measurement and calculation obtained from inference of the AI/ML model. That is, X inf, AI/ML constitutes T′ proc, CSI, AI/ML for the n-th triggered CSI report. Therefore, the time duration of T′ proc, CSI, AI/ML equals X inf, AI/ML .
  • When the time of X inf, AI/ML ends earlier than a transmission occasion of the first symbol of a message carrying a CSI report using an uplink channel, such as PUSCH, the UE shall provide a valid CSI report for the n-th triggered CSI report using AI/ML model inference. Otherwise, the UE may ignore the CSI report using the AI/ML model inference if no HARQ-ACK or transport block is multiplexed on the PUSCH.
  • the AI/ML model activation is in progress and the AI/ML model has not yet been successfully loaded into RAM.
  • the time duration X inf, AI/ML begins when the AI/ML model has been successfully loaded into RAM and ends when the CSI measurement and calculation, utilizing AI/ML model inference, has been completed with inference results obtained.
  • the inference results include CSI measurement and calculation obtained from inference of the AI/ML model. That is, X load, AI/ML and X inf, AI/ML constitute T proc, CSI, AI/ML for the n-th triggered CSI report.
  • the time durations of X load, AI/ML and X inf, AI/ML are consecutive, without a time gap between them.
  • the time duration T proc, CSI, AI/ML , measured from the last symbol of the message carrying the CSI request to the completion of the AI/ML model inference, equals X load, AI/ML + X inf, AI/ML .
  • When the time of T proc, CSI, AI/ML ends earlier than a transmission occasion of the first symbol of a message carrying a CSI report using an uplink channel, such as PUSCH, the UE (e.g., UE 10) shall provide a valid CSI report for the n-th triggered CSI report using AI/ML model inference. Otherwise, the UE may ignore the CSI report using the AI/ML model inference if no HARQ-ACK or transport block is multiplexed on the PUSCH.
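Restating the two timing relations of Embodiment 4 compactly (this only repeats the text above in formula form):

```latex
% Activation completed before the last CSI-RS symbol is received:
T'_{\mathrm{proc,CSI,AI/ML}} = X_{\mathrm{inf,AI/ML}}
% Activation and inference run back-to-back after the CSI request:
T_{\mathrm{proc,CSI,AI/ML}} = X_{\mathrm{load,AI/ML}} + X_{\mathrm{inf,AI/ML}}
```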
  • Embodiment 5 is similar to Embodiment 4 and can be understood by referring to it, with the exception of what is detailed in the following.
  • the difference between time consumption of non-AI/ML-based CSI measurement and calculation and time consumption of AI/ML-based CSI measurement and calculation is defined.
  • the difference may be quantified in terms of the number of OFDM symbols.
  • This embodiment addresses a scenario where the completion of AI/ML model activation occurs prior to the UE (e.g., UE 10) receiving all of the CSI reference signals transmitted from the gNB (e.g., gNB 20) , such as CSI-RS, CSI-IM, etc. Therefore, the difference represents the difference between the time consumed by non-AI/ML model execution and the time consumed by AI/ML model inference on the UE (e.g., UE 10) .
  • the CSI measurement and calculation time should be determined by the following parameter,
  • the difference in terms of a number of OFDM symbols, ΔZ, between the duration of AI/ML model inference and the duration of non-AI/ML execution on the UE (e.g., UE 10) .
  • the parameter ΔZ represents the difference between the number of OFDM symbols required by AI/ML model inference and the number of OFDM symbols required by the corresponding non-AI/ML execution.
  • the parameter ΔZ may be related to the numerology μ, where the ΔZ may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to gNB 20.
  • the predefined number of OFDM symbols for the execution of non-AI/ML-based CSI measurement and calculation (denoted here, for illustration only, as Z CSI ) can be regarded as a constant known by both UE and gNB (e.g., UE 10 and gNB 20) .
  • the parameter ΔZ may be determined based on one or more factors, such as contents of CSI measurement, AI/ML model, AI/ML functionality, etc. The detailed method depends on the implementation.
  • the ΔZ value may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to gNB 20.
  • the UE may determine one from multiple parameters ΔZ based on the numerology μ and one or more factors, such as contents of CSI measurement, AI/ML model, AI/ML functionality, etc.
  • the following table shows examples of parameters ΔZ which are related to the numerology μ and determined based on one or more factors, such as contents of CSI measurement, AI/ML model, AI/ML functionality, etc.
  • the parameters ΔZ can be denoted as ΔZ ij , where i is an integer variable representing a row index and j is an integer variable representing a column index.
  • the meaning of Class A/B/C for different types of CSI-ReportConfig, Type A/B/C for different AI/ML functionalities, and Model A/B/C for different AI/ML models is the same as illustrated in the description above (an illustrative relation for ΔZ is sketched below) .
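An illustrative relation for ΔZ follows; the symbol-count names Z′ AI/ML and Z CSI are assumed here for illustration only and are not fixed by the disclosure.

```latex
% Hedged sketch: Delta Z converts the non-AI/ML symbol budget into the AI/ML one.
\Delta Z = Z'_{\mathrm{AI/ML}} - Z_{\mathrm{CSI}}
\quad\Longleftrightarrow\quad
Z'_{\mathrm{AI/ML}} = Z_{\mathrm{CSI}} + \Delta Z .
```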
  • Embodiment 6: time consumption for AI/ML-based CSI measurement and calculation with one overall time duration
  • Embodiment 6 is similar to Embodiment 4 and may be understood with reference to it, except as specified in the following.
  • the AI/ML model activation process initiates upon the UE's reception of CSI reference signals, including CSI-RS, IM-RS, or other signal types, and the AI/ML model begins inference immediately upon successful activation (e.g., loading into RAM) .
  • the time durations of X load, AI/ML and X inf, AI/ML defined in Embodiment 4 are consecutive, with no time gap between them.
  • the parameter X AI represents the overall time duration of the AI/ML model running on the UE (e.g., UE 10) , including the time duration of AI/ML model activation, switching and inference, as well as decoding the DCI which contains the CSI request and beamforming switching, and begins when the UE receives the CSI reference signals, where the X AI may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to gNB 20.
  • the parameter X AI may be determined based on one or more factors, such as contents of CSI measurement, AI/ML model, AI/ML functionality, etc. The detail method depends on implementation.
  • the X AI value may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to gNB 20. If the parameter X AI is predefined by the wireless communication system, the UE (e.g., UE 10) may determine one from multiple parameters X AI based on the numerology μ and one or more factors, such as contents of CSI measurement, AI/ML model, AI/ML functionality, etc.
  • upon reception of the CSI reference signals, the process of AI/ML model activation begins.
  • the time duration X AI starts when the UE (e.g., UE 10) receives the last symbol of the CSI reference signals transmitted from the gNB (e.g., gNB 20) and ends when the AI/ML model has completed its inference for CSI measurement and calculation with an inference result for CSI reporting obtained. That is, the duration X AI equals T′ proc, CSI, AI/ML for the n-th triggered CSI report.
  • When the time of T′ proc, CSI, AI/ML ends earlier than a transmission occasion of the first symbol of a message carrying a CSI report using an uplink channel, such as PUSCH, the UE shall provide a valid CSI report for the n-th triggered CSI report using AI/ML model inference. Otherwise, the UE may ignore the CSI report using the AI/ML model inference if no HARQ-ACK or transport block is multiplexed on the PUSCH.
  • the processing unit 730 may include circuitry, such as, but not limited to, one or more single-core or multi-core processors.
  • the processors may include any combinations of general-purpose processors and dedicated processors, such as graphics processors and application processors.
  • the processors may be coupled with the memory/storage and configured to execute instructions stored in the memory/storage to enable various applications and/or operating systems running on the system.
  • the radio control functions may include, but are not limited to, signal modulation, encoding, decoding, radio frequency shifting, etc.
  • the baseband circuitry may provide for communication compatible with one or more radio technologies.
  • the baseband circuitry may support communication with 5G NR, LTE, an evolved universal terrestrial radio access network (EUTRAN) and/or other wireless metropolitan area networks (WMAN) , a wireless local area network (WLAN) , a wireless personal area network (WPAN) .
  • Embodiments in which the baseband circuitry is configured to support radio communications of more than one wireless protocol may be referred to as multi-mode baseband circuitry.
  • the system 700 may be a mobile computing device such as, but not limited to, a laptop computing device, a tablet computing device, a netbook, an ultrabook, a smartphone, etc.
  • the system may have more or fewer components, and/or different architectures.
  • the methods described herein may be implemented as a computer program.
  • the computer program may be stored on a storage medium, such as a non-transitory storage medium.
  • the embodiment of the present disclosure is a combination of techniques/processes that can be adopted in 3GPP specification to create an end product.
  • if the software function unit is realized, used, and sold as a product, it can be stored in a computer-readable storage medium.
  • the technical solution proposed by the present disclosure can be realized, essentially or partially, in the form of a software product.
  • the part of the technical solution that is beneficial over the conventional technology can be realized in the form of a software product.
  • the software product is stored in a storage medium in a computer and includes a plurality of commands for a computational device (such as a personal computer, a server, or a network device) to run all or some of the steps disclosed by the embodiments of the present disclosure.
  • the storage medium includes a USB disk, a mobile hard disk, a read-only memory (ROM) , a random-access memory (RAM) , a floppy disk, or other kinds of media capable of storing program codes.
  • This disclosure presents a new framework that uses AI/ML models to measure Channel State Information (CSI) .
  • the framework covers reference signal configuration, CSI measurement, and reporting.
  • Our solution has several benefits, such as lower signaling overhead, scalability, efficient resource allocation for different model requirements, and the integration of one-sided and two-sided model procedures into one process with an AI model-ID.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The disclosure provides an artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI). A base station receives AI/ML-based CSI processing capability reported by a user equipment (UE). The base station determines AI/ML-based CSI measurement configuration based on the AI/ML-based CSI processing capability. The AI/ML-based CSI measurement configuration comprises one or more of a configuration for CSI resource scheduling, a configuration for CSI reporting, and a configuration for CSI measurement triggering.

Description

AI/ML-BASED METHOD FOR PROCESSING CHANNEL STATE INFORMATION AND WIRELESS COMMUNICATION DEVICE
Technical Field
The present disclosure relates to the field of communication systems, and more particularly, to an artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) and a wireless communication device.
Background Art
3GPP RAN1 TR38.843 presents a comprehensive exploration of the integration of artificial intelligence (AI) and machine learning (ML) into the NR air interface. These technologies are harnessed to enhance CSI feedback, thereby improving the accuracy of time-domain CSI prediction, beam management, and even positioning. Additionally, the application of AI/ML methods in areas such as load balancing and RRM algorithms has been deliberated in RAN3/RAN2. These standardization research endeavors consistently demonstrate remarkable performance improvements when compared to non-AI/ML processing.
Technical Problem
CSI, as a pivotal component in communication systems, plays a crucial role in supporting multi-input multi-output (MIMO) and Radio Resource management (RRM) algorithms, contributing significantly to system throughput and spectral efficiency. The incorporation of AI is no exception and is poised to reshape the way CSI is designed. For instance, AI-related information, including prediction accuracy, must be incorporated into the traditional CSI reporting process, along with considerations for the time required for CSI measurement and computation. In conventional communication systems, measures are in place to ensure the uniformity of CSI measurement and reporting behaviors between the base station and user equipment (UE) . This involves defining the CSI process unit (CPU) and latency requirements for calculating various types of CSI.
Technical Solution
An object of the present disclosure is to propose a wireless communication device, such as a user equipment (UE) or a base station, and an artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) based on machine learning.
In a first aspect, an embodiment of the invention provides an artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) , executed by a base station, comprising:
receiving AI/ML-based CSI processing capability reported by a user equipment (UE) ; and
determining AI/ML-based CSI measurement configuration based on the AI/ML-based CSI processing capability, wherein the AI/ML-based CSI measurement configuration comprises one or more of a configuration for CSI resource scheduling, a configuration for CSI reporting, and a configuration for CSI measurement triggering.
In a second aspect, an embodiment of the invention provides a base station comprising a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the disclosed method.
In a third aspect, an embodiment of the invention provides an artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) , executed by a user equipment (UE) , comprising:
reporting AI/ML-based CSI processing capability by the user equipment (UE) ;
receiving AI/ML-based CSI measurement configuration from a base station, wherein the AI/ML-based CSI measurement configuration is based on the AI/ML-based CSI processing capability; and
performing a first operation according to the AI/ML-based CSI measurement configuration;
wherein the AI/ML-based CSI measurement configuration comprises one or more of a configuration for CSI resource scheduling, a configuration for CSI reporting, and a configuration for CSI measurement triggering.
In a fourth aspect, an embodiment of the invention provides a user equipment (UE) comprising a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the disclosed method.
In a fifth aspect, an embodiment of the invention provides an artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) , executable in a user equipment (UE) , comprising:
receiving AI/ML-based CSI measurement configuration and AI/ML-based CSI reference signals;
wherein a time interval from reception of the AI/ML-based CSI measurement configuration to CSI reporting is less than or equal to a first processing time, or a time interval from reception of the AI/ML-based CSI reference signals to the CSI reporting is less than or equal to a second processing time; and
the first processing time and the second processing time are determined by a UE capability of AI/ML-based CSI processing.
In a sixth aspect, an embodiment of the invention provides a user equipment (UE) comprising a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the disclosed method.
In a seventh aspect, an embodiment of the invention provides an artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) , executed by a base station, comprising:
transmitting, by the base station, AI/ML-based CSI measurement configuration and AI/ML-based CSI reference signals, so that a user equipment (UE) performs CSI measurement based on the AI/ML-based CSI measurement configuration and the AI/ML-based CSI reference signals, wherein a time interval from reception of the AI/ML-based CSI measurement configuration to CSI reporting is less than or equal to a first processing time, or a time interval from reception of the AI/ML-based CSI reference signals to the CSI reporting is less than or equal to a second processing time, and the first processing time and the second processing time are determined by a UE capability of AI/ML-based CSI processing; and
receiving a result of the CSI reporting.
In an eighth aspect, an embodiment of the invention provides a base station comprising a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the disclosed method.
The disclosed method may be implemented in a chip. The chip may include a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the disclosed method.
The disclosed method may be programmed as computer-executable instructions stored in a non-transitory computer-readable medium. The non-transitory computer-readable medium, when loaded to a computer, directs a processor of the computer to execute the disclosed method.
The non-transitory computer-readable medium may comprise at least one from a group consisting of: a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a Read Only Memory, a Programmable Read Only Memory, an Erasable Programmable Read Only Memory, EPROM, an Electrically Erasable Programmable Read Only Memory and a Flash memory.
The disclosed method may be programmed as a computer program product, which causes a computer to execute the disclosed method.
The disclosed method may be programmed as a computer program, which causes a computer to execute the disclosed method.
Advantageous Effects
This disclosure presents an artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) , executed by a base station, comprising: receiving AI/ML-based CSI processing capability reported by a user equipment (UE) ; and determining AI/ML-based CSI measurement configuration based on the AI/ML-based CSI processing capability, wherein the AI/ML-based CSI measurement configuration comprises one or more of a configuration for CSI resource scheduling, a configuration for CSI reporting, and a configuration for CSI measurement triggering. Thus, the solution can improve the success rate of CSI measurement.
This disclosure presents an artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) , executed by a user equipment (UE) , comprising: receiving AI/ML-based CSI measurement configuration and AI/ML-based CSI reference signals; wherein a time interval from reception of the AI/ML-based CSI measurement configuration to CSI reporting is less than or equal to a first processing time, or a time interval from reception of the AI/ML-based CSI reference signals to the CSI reporting is less than or equal to a second processing time; and the first processing time and the second processing time are determined by a UE capability of AI/ML-based CSI processing. Thus, the solution can improve the CSI reporting rate.
A user equipment (UE) reports AI/ML-based CSI processing capability. A base station receives the AI/ML-based CSI processing capability reported by the UE and determines AI/ML-based CSI measurement configuration based on the AI/ML-based CSI processing capability. The UE receives AI/ML-based CSI measurement configuration from a base station. Thus, CSI measurement configuration is configured taking into account the AI/ML-based CSI processing capability of UE. CSI reporting performed by a UE can be enhanced by the AI/ML-based CSI measurement configuration.
A UE receives AI/ML-based CSI measurement configuration and AI/ML-based CSI reference signals and performs CSI reporting using the AI/ML-based CSI reference signals based on the AI/ML-based CSI measurement configuration. A time interval from reception of the AI/ML-based CSI measurement configuration to CSI reporting is less than or equal to a first processing time, or a time interval from reception of the AI/ML-based CSI reference signals to the CSI reporting is less than or equal to a second processing time. The first processing time and the second processing time are determined by a UE capability of AI/ML-based CSI processing. Thus, timing relationships of CSI reporting, reception of the AI/ML-based CSI measurement configuration, and reception of the AI/ML-based CSI reference signals are limited by the first processing time and the second processing time determined based on the UE capability of AI/ML-based  CSI processing.
Description of Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the related art, the figures used in the description of the embodiments are briefly introduced below. It is obvious that the drawings are merely some embodiments of the present disclosure, and a person having ordinary skill in this field can obtain other figures according to these figures without creative effort.
FIG. 1 illustrates a schematic view showing CSI process criteria and CSI measurement and calculation time in NR system.
FIG. 2 illustrates a schematic view showing a wireless communication system comprising a user equipment (UE) , a base station, and a network entity.
FIG. 3 illustrates a schematic view showing a system with an AI/ML functional framework for executing a model management method using ML models.
FIG. 4 illustrates a schematic view showing an overall solution of the disclosed method.
FIG. 5 illustrates a schematic view showing an embodiment of the disclosed method.
FIG. 6 illustrates a schematic view showing another embodiment of the disclosed method.
FIG. 7 illustrates a schematic view showing timing of CSI reporting, AI/ML model activation, and AI/ML model inferencing in an example.
FIG. 8 illustrates a schematic view showing timing of CSI reporting, AI/ML model activation, and AI/ML model inferencing in another example.
FIG. 9 illustrates a schematic view showing a system for wireless communication according to an embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Embodiments of the disclosure are described in detail with the technical matters, structural features, achieved objects, and effects with reference to the accompanying drawings as follows. Specifically, the terminologies in the embodiments of the present disclosure are merely for describing the purpose of the certain embodiment, but not to limit the disclosure.
Abbreviations used in the description are listed in the following:
Table 1


Embodiments of the disclosure are related to artificial intelligence (AI) and machine learning (ML) for the new radio (NR) air interface and address problems of AI/ML-specific CSI processing and computation timing delay, to make the AI models work normally in the network with minimal signaling interaction between the gNB and the UE.
Embodiments of the disclosure introduce a novel framework that leverages AI/ML models for Channel State Information (CSI) measurement. This framework encompasses reference signal configuration, CSI measurement, and reporting. Our solution offers several advantages, including reduced signaling overhead, scalability, efficient resource allocation tailored to various model requirements, and the unification of one-sided and two-sided model procedures into a single process through the use of an AI model-ID.
Embodiments of the disclosure provides:
● a set of CSI process criteria specific to AI/ML models, including the definition of CPU NCPU, AI/ML and occupied CPU OCPU, AI/ML dedicated to AI/ML models, as well as the CSI process criteria for UE when AI/ML-based and non-AI/ML-based CSI processing coexist on the UE.
● a set of CSI measurement timing and configurations specifically for AI/ML model inference, including Tproc, CSI, AI/ML and T′proc, CSI, AI/ML, encompassing aspects such as activation time and inference time for AI/ML models, and modifications to the current CSI measurement time estimation procedures.
ZRef, AI/ML is defined as the next uplink symbol with its cyclic prefix (CP) starting Tproc, CSI, AI/ML = (Z) (2048+144) ·κ·2^ (-μ) ·Tc + Tswitch after the end of the last symbol of the PDCCH triggering the CSI report (s) . Z′Ref, AI/ML is defined as the next uplink symbol with its CP starting T′proc, CSI, AI/ML = (Z′) (2048+144) ·κ·2^ (-μ) ·Tc after the end of the last symbol in time of the latest of: aperiodic CSI-RS resource for channel measurements, aperiodic CSI-IM used for interference measurements, and aperiodic NZP CSI-RS for interference measurement, when aperiodic CSI-RS is used for channel measurement for the n-th triggered CSI report, and where Z, Z’, and Tswitch are defined in 3GPP TS 38.214.
The Tproc, CSI, AI/ML is a time difference between the last symbol of the PDCCH triggering the CSI report (s) and next uplink symbol after the end of the last symbol of the PDCCH, and where T′proc, CSI, AI/ML is a time difference between the last symbol of the reference signals for CSI measurement and report (s) and the next uplink symbol after the end of the last symbol in time of the reference signals.
Definitions that may be used in the description includes:
● CSI processing unit (CPU) : This term refers to the capacity of simultaneous CSI measurements and calculations that the system supports.
● Global AI/ML model ID: The identifier is used to identify an AI/ML model, which can be a globally unique ID, a PLMN-specific unique ID, an operator-specific unique ID, or an AI/ML management platform-specific unique ID.
● Logical AI/ML model ID: The identifier is used to identify an AI/ML model used in the network, and it has a certain mapping relationship with the Global ID. The Logical ID can be a globally unique ID, a  PLMN-specific unique ID, an operator-specific unique ID, a Cell-specific ID, a Link-specific ID, a TA-specific ID, a CU-specific ID, a DU-specific ID, a UPF-specific ID, an AMF-specific ID, an RRC-specific ID, or a Network-Slicing-specific ID. The term "Cell-specific" means that each AI/ML model has a unique ID within a specific cell. Similarly, "Link-specific" , "TA-specific" , "CU-Specific" , "DU-specific" , "UPF-specific" , "AMF-specific" , and "Network-Slicing-specific" refer to unique IDs for AI/ML models in specific contexts within the network.
● AI/ML model related information: The information includes one or more of Global AI/ML model ID, provider, scenario, feature, function, version, accuracy, RRC descriptor, and AI/ML model descriptor.
■ Scenario: Scenario of an AI/ML model may include one of but not limited to indoor, outdoor, flying, water, mobility speed, etc.
■ Feature: Feature of an AI/ML model may include one of but not limited to CSI compression, CSI prediction, beam management, positioning, handover, radio resource management, etc.
■ Function: Function of an AI/ML model may include one of but not limited to CSI compression, CSI prediction, beam management, positioning, handover, radio resource management, and/or the usage of AI/ML models. The usage may be monitoring, inference, and/or training etc.
■ RRC descriptor: The RRC descriptor includes requirements on reference signal configuration and requirements on CSI measurement for the AI/ML model, which may include at least one of reference signal timing requirement (e.g., period, time step, hopping mechanism, etc. ) , reference signal frequency requirement (e.g., frequency resource, hopping mechanism etc. ) , reference signal antenna port number, CSI contents (e.g., channel matrix, eigen-vector, etc. ) .
■ AI/ML model descriptor: This term refers to the detailed attribute description of the AI/ML model. This description may include, but is not limited to, AI/ML type, accuracy, input data requirement, output data, monitoring method, input data distribution, out data distribution etc.
For simplicity, an AI/ML model, AI model, ML model, and model are interchangeably used in the description.
1. For Non-AI/ML CSI processing criteria
With reference to FIG. 1, the UE (e.g., UE 10) indicates the number of supported simultaneous CSI measurements and calculations NCPU with the parameter simultaneousCSI-ReportsPerCC in a component carrier, and simultaneousCSI-ReportsAllCC across all component carriers. If a UE supports NCPU simultaneous CSI measurements and calculations, it is said to have NCPU CSI processing units for processing CSI reports. If L CPUs are occupied for calculation of CSI reports in a given OFDM symbol, the UE has NCPU-L unoccupied CPUs. If N CSI reports start occupying their respective CPUs on the same OFDM symbol on which NCPU-L CPUs are unoccupied, where each n-th CSI report uses OCPU (n) occupied CPUs and n is an integer variable varying from 0 to N-1, that is, n=0, …, N-1, the UE is not required to update the N-M requested CSI reports with lowest priority, where 0≤M≤N is the largest value such that the M highest-priority reports together occupy no more than the NCPU-L unoccupied CPUs (see the sketch below) . A UE is not expected to be configured with an aperiodic CSI trigger state containing more than NCPU reporting settings. Processing of a CSI report occupies a number of CPUs for a number of symbols, as has been defined.
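Written compactly, and using the per-report notation OCPU (n) assumed in the reconstructed sentence above, the non-AI/ML priority rule can be sketched as:

```latex
% M is the largest value with 0 <= M <= N such that the M highest-priority
% reports fit into the currently unoccupied CPUs.
\sum_{n=0}^{M-1} O^{(n)}_{\mathrm{CPU}} \;\le\; N_{\mathrm{CPU}} - L .
```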
2. For Non-AI/ML CSI measurement and calculation delay
After receiving CSIReportConfig in downlink control information (DCI) , UE (e.g., UE 10) starts the process of CSI measurements and calculations, and finally obtains the CSI measurement/calculation results to report to gNB (e.g., gNB 20) on PUCCH or PUSCH. The overall time duration consumed can be categorized into four parts, including DCI decoding, beam switching (optional) , CSI measurements/calculations and transmission preparation. When the CSI request field in a DCI triggers a CSI report (s) on PUSCH, the UE shall provide a valid CSI report for the n-th triggered CSI report,
- if the first uplink symbol to carry the corresponding CSI report (s) including the effect of the timing advance, starts no earlier than at symbol Zref, and
- if the first uplink symbol to carry the n-th CSI report including the effect of the timing advance, starts no earlier than at symbol Z'ref (n) ,
where Zref is defined as the next uplink symbol with its CP starting Tproc, CSI = (Z) (2048+144) ·κ·2^ (-μ) ·TC + Tswitch after the end of the last symbol of the PDCCH triggering the CSI report (s) , and where Z'ref (n) is defined as the next uplink symbol with its CP starting T'proc, CSI = (Z') (2048+144) ·κ·2^ (-μ) ·TC after the end of the last symbol in time of the CSI-RS resource (a numerical sketch of this expression follows) .
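For illustration, the hedged Python snippet below evaluates this expression numerically, assuming the NR constants κ = 64 and T_C = 1/(480000·4096) s; it is a numerical aid only and not part of the disclosure.

```python
# Hedged numerical sketch of T_proc,CSI = Z * (2048 + 144) * kappa * 2**(-mu) * T_C
KAPPA = 64
T_C = 1.0 / (480_000 * 4096)  # seconds, NR basic time unit

def t_proc_csi_seconds(z_symbols: int, mu: int, t_switch: float = 0.0) -> float:
    """Convert a symbol count Z into a processing time in seconds."""
    return z_symbols * (2048 + 144) * KAPPA * (2 ** -mu) * T_C + t_switch

# Example: Z = 22 symbols at mu = 1 (30 kHz SCS)
print(f"{t_proc_csi_seconds(22, 1) * 1e6:.1f} us")  # ~785 us
```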
On the UE side, since the requirements of AI/ML models on CSI processing units (CPU) during operation are completely different from those of non-AI/ML models, it is necessary to redefine a set of CSI process criteria and CSI measurement and calculation delay suitable for AI/ML models.
1. The requirement of AI CPU is quite different from Legacy CPU
When applying AI/ML models to the CSI measurement process, which includes CSI prediction, channel information feedback, beam management, and positioning, it becomes evident that the resource demands, encompassing computing units, memory, storage, and other hardware resources, for AI/ML models differ significantly from those required by traditional CSI measurement and calculation methods that do not employ AI/ML (known as non-AI/ML-based CSI measurement/calculation) . Consequently, the existing metrics for CPU usage and occupied CPU, originally defined for non-AI/ML models, become inadequate for assessing AI/ML model resource utilization. Therefore, it is imperative to redefine relevant parameters specifically tailored to AI/ML models.
2. CSI measurement and calculation time estimation needs to be reconstructed for AI/ML
Unlike non-AI/ML algorithms used in CSI measurement and calculation, AI/ML model activation (e.g., loading an AI/ML model into the UE's memory (RAM) ) is a necessary step prior to inference and requires a specific time duration. This model activation process should only commence once the UE has received the CSI-RS. Consequently, if the UE initiates AI/ML model inference immediately upon initiating the legacy CSI measurement/calculation as described in TS 38.214 –5.4, where CSI measurement and calculation begin after receiving CSI-RS, the overall time required for CSI measurement and calculation using AI/ML model inference will be extended due to the time needed for AI/ML model activation.
Diverse AI capabilities can result in varying inference times, making the conventional fixed definition of CSI measurement and calculation time no longer applicable. Moreover, even when considering the same AI functionality, the utilization of different AI algorithms, such as RNN, CNN, and others, can introduce differences in computation latency.
With reference to FIG. 2, a telecommunication system including a UE 10a, a base station 20a, a base station 20b, and a network entity device 30 executes the disclosed method according to an embodiment of the present disclosure. FIG. 2 is shown for illustrative, not limiting, and the system may comprise more UEs, BSs, and CN entities. Connections between devices and device components are shown as lines and arrows in the FIGs. The UE 10a may include a processor 11a, a memory 12a, and a transceiver 13a. The base station 20a may include a processor 21a, a memory 22a, and a transceiver 23a. The base station 20b may include a processor 21b, a memory 22b, and a transceiver 23b. The network entity device 30 may include a processor 31, a memory 32, and a transceiver 33. Each of the processors 11a, 21a, 21b, and 31 may be configured to implement the proposed functions, procedures, and/or methods described in this description. Layers of radio interface protocol may be implemented in the processors 11a, 21a, 21b, and 31. Each of the memory 12a, 22a, 22b, and 32 operatively stores a variety of programs and information to operate a connected processor. Each of the transceivers 13a, 23a, 23b, and 33 is operatively coupled with a connected processor, and transmits and/or receives a radio signal. Each of the base stations 20a and 20b may be an eNB, a gNB, or one of other radio nodes.
Each of the processors 11a, 21a, 21b, and 31 may include a general-purpose central processing unit (CPU) , application-specific integrated circuits (ASICs) , other chipsets, logic circuits and/or data processing devices. Each of the memory 12a, 22a, 22b, and 32 may include read-only memory (ROM) , a random-access memory (RAM) , a flash memory, a memory card, a storage medium and/or other storage devices. Each of the transceivers 13a, 23a, 23b, and 33 may include baseband circuitry and radio frequency (RF) circuitry to process radio frequency signals. When the embodiments are implemented in software, the techniques described herein can be implemented with modules, procedures, functions, entities and so on, that perform the functions described herein. The modules can be stored in a memory and executed by the processors. The memory can be implemented within a processor or external to the processor, in which those can be communicatively coupled to the processor via various means are known in the art.
The network entity device 30 may be a node in a CN. CN may include LTE CN or 5GC which may include user plane function (UPF) , session management function (SMF) , access and mobility management function (AMF) , unified data management (UDM) , policy control function (PCF) , control plane (CP) /user plane (UP) separation (CUPS) , authentication server (AUSF) , network slice selection function (NSSF) , and the network exposure function (NEF) .
With reference to FIG. 3, a system 100 for the artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) based on machine learning comprises units of data collection 101, model training unit 102, actor 103, and model inference 104. Please note that FIG. 3 does not necessarily limit the artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) to the instant example. The artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) is applicable to any design based on machine learning. The general steps comprise data collection and/or model training and/or model inference and/or (an) actor (s) .
The data collection unit 101 is a function that provides input data to the model training unit 102 and the model inference unit 104. AI/ML algorithm-specific data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) is not carried out in the data collection unit 101.
Examples of input data may include measurements from UEs or different network entities, feedback from Actor 103, and output from an AI/ML model.
Training data is data needed as input for the AI/ML Model training unit 102.
Inference data is data needed as input for the AI/ML Model inference unit 104.
The model training unit 102 is a function that performs the ML model training, validation, and testing. The Model training unit 102 is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on training data delivered by the data collection unit 101, if required.
Model Deployment/Update between units 102 and 104 involves deployment or update of an AI/ML model (e.g., a trained machine learning model 105a or 105b) to the model inference unit 104. The model training unit 102 uses data units as training data to train a machine learning model 105a and generates a trained machine learning model 105b from the machine learning model 105a.
The model inference unit 104 is a function that provides AI/ML model inference output (e.g., predictions or decisions) . The AI/ML model inference output is the output of the machine learning model 105b. The Model inference unit 104 is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on inference data delivered by the data collection unit 101, if required.
The output shown between unit 103 and unit 104 is the inference output of the AI/ML model produced by the model inference unit 104.
Actor 103 is a function that receives the output from the model inference unit 104 and triggers or performs corresponding actions. The actor 103 may trigger actions directed to other entities or to itself.
Feedback between unit 103 and unit 101 is information that may be needed to derive training or inference data or performance feedback.
FIG. 4 shows a flowchart of the overall solution for artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) . With reference to FIG. 4, the overall solution of the disclosed method is detailed in the following.
Step 1: The UE (e.g., UE 10) sends the parameters simultaneousCSI-ReportsPerCC and simultaneousCSI-ReportsAllCC to the gNB (e.g., gNB 20) . From the above two parameters, the number of available CSI processing units (CPUs) of the UE, denoted as NCPU, can be calculated. The NCPU is part of the UE capability.
Step 2: The gNB provides CSI measurement configuration and performs CSI reporting processing based on the UE capability.
Step 3: After receiving a CSI measurement request, the UE decodes the received message. After successful decoding, the UE starts activating an AI/ML model used for inference for CSI measurement/calculation.
Step 4: The UE can perform CSI measurement/calculation by AI/ML model inference when receiving the CSI-RS. Within the specified time, the UE generates a CSI report and sends the CSI report to the gNB.
With reference to FIG. 5, an example of a UE 10 in the description may include the UE 10a. Examples of a gNB 20 in the description may include the base station 20a or 20b. Note that even though the gNB is described as an example of a base station in the following, the disclosed method may be implemented in any other types of base stations, such as an eNB or a base station for beyond 5G. Uplink (UL) transmission of a control signal or data may be a transmission operation from a UE to a base station. Downlink (DL) transmission of a control signal or data may be a transmission operation from a base station to a UE. The disclosed method is detailed in the following. The UE 10 and a base station, such as a gNB 20, execute the artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) based on machine learning.
The UE 10 reports AI/ML-based CSI processing capability (Step S001) . In an embodiment, the AI/ML-based CSI processing capability of the UE comprises one or more of:
a number of AI/ML-based CSI processing units (CPUs) of the UE,
a number of AI/ML-based CSI processing processes of the UE,
a number of AI/ML-based CSI memory units of the UE,
a number of AI/ML-based CSI computation units of the UE, and
a number of AI/ML-based CSI power consumption units of the UE.
In an embodiment, the AI/ML-based CSI processing capability of the UE is determined based on one or more of:
subcarrier spacing (SCS) ,
AI/ML-based CSI parameter information, and
AI/ML-based CSI model features.
The gNB 20 receives the AI/ML-based CSI processing capability (Step S002) .
The gNB 20 determines AI/ML-based CSI measurement configuration 402 based on the AI/ML-based CSI processing capability, wherein the AI/ML-based CSI measurement configuration comprises one or more of a configuration for CSI resource scheduling, a configuration for CSI reporting, and a configuration for CSI measurement triggering (Step S003) .
The UE 10 receives AI/ML-based CSI measurement configuration 402 from the gNB 20, wherein the AI/ML-based CSI measurement configuration is based on the AI/ML-based CSI processing capability (Step S004) .
The UE 10 performs a first operation according to the AI/ML-based CSI measurement configuration 402 (S006) .
In determining AI/ML-based CSI measurement configuration based on the AI/ML-based CSI processing capability, one or both of the UE 10 and the gNB 20 keep one or more of the following relationships:
AI/ML-based CSI processing resources occupied by the AI/ML-based CSI measurement configuration are less than or equal to the AI/ML-based CSI processing capability;
a weighted value of the AI/ML-based CSI processing resources occupied by the AI/ML-based CSI measurement configuration is less than or equal to the AI/ML-based CSI processing capability; or
the AI/ML-based CSI processing resources occupied by the AI/ML-based CSI measurement configuration and CSI processing resources occupied by the non-AI/ML-based CSI measurement are less than or equal to processing capability of the UE.
In an embodiment, the AI/ML-based CSI processing resources comprise one or more of:
a number of AI/ML-based CSI processing units (CPUs) of the UE occupied by the AI/ML-based CSI  measurement configuration;
a number of AI/ML-based CSI processing processes of the UE occupied by the AI/ML-based CSI measurement configuration;
a number of AI/ML-based CSI memory units of the UE occupied by the AI/ML-based CSI measurement configuration;
a number of AI/ML-based CSI computation units of the UE occupied by the AI/ML-based CSI measurement configuration; and
a number of AI/ML-based CSI power consumption units of the UE occupied by the AI/ML-based CSI measurement configuration.
In an embodiment, the AI/ML-based CSI processing resources are determined based on one or more of:
subcarrier spacing (SCS) ;
AI/ML-based CSI parameter information; and
CSI AI/ML model features.
In an embodiment, the AI/ML-based CSI processing resources of the UE are obtained through one or more of the following:
the AI/ML-based CSI processing resources of the UE occupied by the AI/ML-based CSI measurement configuration are preconfigured or predefined by a base station; and
the base station receives a message from the UE, wherein the message comprises one or more of:
the AI/ML-based CSI processing resources of the UE;
an indicator showing that the AI/ML-based CSI processing resources and non-AI/ML-based CSI processing resources are separated.
Alternatively, the AI/ML-based CSI processing resources of the UE are obtained through one or more of the following:
the AI/ML-based CSI processing resources of the UE occupied by the AI/ML-based CSI measurement configuration are preconfigured or predefined by the base station; and
the base station receives a message from the UE, wherein the message comprises one or more of:
the AI/ML-based CSI processing resources of the UE;
an indicator showing that the AI/ML-based CSI processing resources and non-AI/ML-based CSI processing resources are one common CSI processing unit (CPU) pool.
In some embodiments of the disclosure, when AI/ML-based CSI measurement configuration exceeds the AI/ML-based CSI processing capability of the UE, the UE discards the AI/ML-based CSI measurement configuration that exceeds the AI/ML-based CSI processing capability of the UE; and when the AI/ML-based CSI measurement configuration is less than or equal to the AI/ML-based CSI processing capability of the UE, the UE performs the first operation according to the AI/ML-based CSI measurement configuration.
In some embodiments of the disclosure, when a weighted sum calculated from AI/ML-based CSI measurement configuration and weights of the AI/ML-based CSI measurement configuration exceeds the AI/ML-based CSI processing capability of the UE, the UE discards the AI/ML-based CSI measurement configuration that exceeds the AI/ML-based CSI processing capability of the UE; and when the weighted  sum is less than or equal to the AI/ML-based CSI processing capability of the UE, the UE performs the first operation according to the AI/ML-based CSI measurement configuration.
With reference to FIG. 6, in an embodiment of the disclosed method, the UE 10 transmits an uplink message to convey the CPU occupation for the AI/ML-based CSI processing.
The gNB 20 transmits AI/ML-based CSI measurement configuration 402 and AI/ML-based CSI reference signals 403, so that a user equipment (UE) performs CSI measurement based on the AI/ML-based CSI measurement configuration and the AI/ML-based CSI reference signals, wherein a time interval from reception of the AI/ML-based CSI measurement configuration to CSI reporting is less than or equal to a first processing time, or a time interval from reception of the AI/ML-based CSI reference signals to the CSI reporting is less than or equal to a second processing time, and the first processing time and the second processing time are determined by a UE capability of AI/ML-based CSI processing (Step S011) .
The UE 10 receives AI/ML-based CSI measurement configuration 402 and AI/ML-based CSI reference signals 403 (Step S012) . A time interval from reception of the AI/ML-based CSI measurement configuration to CSI reporting is less than or equal to a first processing time, or a time interval from reception of the AI/ML-based CSI reference signals to the CSI reporting is less than or equal to a second processing time. The first processing time and the second processing time are determined by a UE capability of AI/ML-based CSI processing.
The UE 10 performs CSI reporting to transmit a CSI report 404 using the AI/ML-based CSI reference signals 403 based on the AI/ML-based CSI measurement configuration 402 (Step S013) .
The gNB 20 receives a result (e.g., the CSI report 404) of the CSI reporting (Step S014) .
In some embodiments of the disclosure, the first processing time comprises one or more of:
a duration required for decoding the AI/ML-based CSI measurement configuration;
a duration required for AI/ML model activation;
a duration required for AI/ML model switching;
a duration required for AI/ML model inferring; and
a duration required for UE antenna switching.
In some embodiments of the disclosure, the second processing time comprises one or more of:
one portion or entirety of a duration required for AI/ML model activation;
a duration required for AI/ML model switching; and
a duration required for AI/ML model inferring.
In some embodiments of the disclosure, the first processing time and/or the second processing time is determined by one or more of:
AI/ML-based CSI measurement data;
AI/ML-based CSI measurement features or function;
AI/ML-based CSI measurement time;
AI/ML model parameters for CSI measurement;
a duration required for AI/ML model activation;
a duration required for AI/ML model switching;
a duration required for AI/ML model inferring; and
subcarrier spacing (SCS) .
In some embodiments of the disclosure, the first processing time and/or the second processing time is obtained through one or more of:
predefinition or pre-configuration;
an empirical value obtained from statistics of time spent on performing said AI/ML-based CSI measurement configuration; and
a value of the first processing time and/or the second processing time conveyed in an uplink message that is transmitted in a UE capability report, a scheduling request, a physical uplink shared channel (PUSCH) , or a physical uplink control channel (PUCCH) .
In some embodiments of the disclosure, the second processing time comprises one portion or entirety of the duration required for AI/ML model activation;
if the AI/ML model activation begins after reception of the AI/ML-based CSI reference signals, the second processing time comprises entirety of the duration required for AI/ML model activation;
if the AI/ML model activation begins before the reception of the AI/ML-based CSI reference signals and completes after the reception of the AI/ML-based CSI reference signals, the second processing time comprises one portion of the duration required for AI/ML model activation; and
if the AI/ML model activation completes before the reception of the AI/ML-based CSI reference signals, the second processing time does not include the duration required for AI/ML model activation.
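As an illustration of the three cases above, the following Python sketch computes the portion of the model-activation duration counted into the second processing time from abstract time stamps; the time unit (e.g., OFDM symbols) and all names are assumptions for illustration, not part of the disclosure.

```python
# Hypothetical sketch of the second-processing-time composition described above.
def activation_part(activation_start: float, activation_end: float,
                    rs_reception_end: float) -> float:
    """Portion of the AI/ML model activation duration included in the second processing time."""
    if activation_start >= rs_reception_end:
        # Activation begins after reception of the CSI reference signals: count all of it.
        return activation_end - activation_start
    if activation_end > rs_reception_end:
        # Activation straddles the reference-signal reception: count only the tail portion.
        return activation_end - rs_reception_end
    # Activation completed before the reference signals were received: count nothing.
    return 0.0

def second_processing_time(activation_start: float, activation_end: float,
                           rs_reception_end: float, switching: float, inference: float) -> float:
    return activation_part(activation_start, activation_end, rs_reception_end) + switching + inference
```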
AI/ML specific CPU and processing criteria with virtualization are detailed in the following.
In the process of performing CSI measurement and calculation using AI/ML models at the UE, the CSI measurement requirements configured by the base station to the UE, or the UE-configured CSI measurement requirements, or the CSI measurement update requirements (including CSI-RS resource configuration, CSI reporting configuration, CSI measurement configuration, CSI triggering etc. ) , should be determined by at least one of the following parameters:
● The number of supported simultaneous AI/ML-based processing, denoted as NCPU, AI/ML: The parameter means that a UE has NCPU, AI/ML AI/ML specific CSI processing units for processing AI/ML-based CSI reports, where NCPU, AI/ML may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
● The number of occupied CPUs, denoted as OCPU, AI/ML (n) , for the n-th AI/ML-based CSI reporting, where n is an integer variable varying from 0 to NAI/ML-1: The parameter means that a UE will occupy OCPU, AI/ML (n) CPUs for the n-th AI/ML-based CSI reporting, where OCPU, AI/ML (n) may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
For example, if LAI/ML AI/ML specific CPUs are occupied for calculation of AI/ML-based CSI reports in a given OFDM symbol, the UE has NCPU, AI/ML-LAI/ML unoccupied AI/ML specific CPUs. If NAI/ML AI/ML-based CSI reports start occupying their respective AI/ML specific CPUs on the same OFDM symbol on which NCPU, AI/ML-LAI/ML AI/ML specific CPUs are unoccupied, where each AI/ML-based CSI report n=0, …, NAI/ML-1 corresponds to OCPU, AI/ML (n) CPUs, the UE is not required to update the NAI/ML-MAI/ML requested AI/ML-based CSI reports with lowest priority (MAI/ML<LAI/ML) , where 0≤MAI/ML≤NAI/ML is the largest value such that Σn=0, …, MAI/ML-1 OCPU, AI/ML (n) ≤ NCPU, AI/ML-LAI/ML holds. A UE is not expected to be configured with an aperiodic CSI trigger state containing more than NCPU, AI/ML reporting settings using AI/ML model inference. That is, the CPU occupation for AI/ML-based CSI processing follows the formula:
Σn=0, …, NAI/ML-1 OCPU, AI/ML (n) ≤ NCPU, AI/ML
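A minimal Python sketch of this occupation rule is given below, assuming that a smaller priority value means a higher priority; the variable and function names are illustrative and the sketch is not intended as a normative implementation.

```python
# Hypothetical sketch: with L_AI/ML CPUs already busy, keep the largest prefix of the
# requested reports (ordered by priority) whose occupation fits N_CPU,AI/ML - L_AI/ML;
# the remaining lowest-priority reports are not required to be updated.
from typing import List, Tuple

def updatable_reports(requests: List[Tuple[int, int]],  # (priority, occupied CPUs O(n))
                      n_cpu_aiml: int, l_aiml: int) -> List[Tuple[int, int]]:
    free = n_cpu_aiml - l_aiml
    selected, used = [], 0
    for prio, o_n in sorted(requests, key=lambda r: r[0]):  # highest priority first
        if used + o_n > free:
            break  # the rest of the requested reports need not be updated
        selected.append((prio, o_n))
        used += o_n
    return selected

# Example: 5 AI/ML specific CPUs, 2 already occupied, three requested reports.
print(updatable_reports([(0, 2), (1, 2), (2, 1)], n_cpu_aiml=5, l_aiml=2))  # [(0, 2)]
```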
CSI processing may comprise either or both of CSI measurement and CSI calculation. Furthermore, the parameter OCPU, AI/ML (n) may be determined based on one or more factors, such as contents of CSI measurement, AI/ML functionality, AI/ML model, etc. The detailed method depends on the implementation. The OCPU, AI/ML (n) value may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
1. Option A: OCPU, AI/ML (n) is predefined by the wireless communication system
Examples of the number of occupied CPUs OCPU, AI/ML (n) are shown in the table below. The number of occupied CPUs OCPU, AI/ML (n) is related to the contents of CSI measurement, the AI/ML functionality, and the AI/ML model. Ok is the predefined value of OCPU, AI/ML (n) used to represent the consumed AI/ML specific CPUs, which may be influenced by process resources, memory, computation capabilities and power consumption. The variable k is an integer representing a column index. Class A/B/C represents different types of CSI-ReportConfig. For example, Class A means that reportQuantity is set to ‘none’ , Class B means that it is set to ‘RSRP’ or ‘SINR’ , and Class C means CSI measurements and calculations such as codebookType, etc. Type A/B/C represents different AI/ML functionalities. For example, Type A is AI/ML used for CSI measurements, Type B is AI/ML used for beam measurements, and Type C is AI/ML used for positioning. Model A/B/C represents different AI/ML models. For example, Model A is an AI/ML model based on an RNN, Model B is an AI/ML model based on a CNN, and Model C is an AI/ML model based on another type of neural network, and so on.
Table 2: Example for the value determination
Here is an example showing how to determine the value of Ok under the condition that the parameter OCPU, AI/ML (n) is determined only based on the contents of CSI measurement. The example takes three cases for demonstration according to the different types of CSI-ReportConfig.
■ Case 1: when the type of CSI-ReportConfig is Class A, 
■ Case 2: when the type of CSI-ReportConfig is Class B, 
■ Case 3: when the type of CSI-ReportConfig is Class C, the CSI measurement/calculation metrics are relatively complex, and the measurement task is heavy. As a result, the AI/ML models used for the CSI measurement/calculation also have high complexity and resource overhead. Hence, the value of the parameter OCPU, AI/ML (n) is set equal to the maximum UE capability, that is, OCPU, AI/ML (n) = NCPU, AI/ML.
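The table-driven determination of Option A could be organized as in the following Python sketch; the numeric values are placeholders rather than entries of Table 2, and Class C falls back to the full UE capability as described in Case 3 above.

```python
# Hypothetical sketch of Option A: O(n) is read from a system-predefined table indexed by
# CSI-ReportConfig class, AI/ML functionality and AI/ML model. Values are placeholders.
PREDEFINED_O = {
    ("ClassA", "TypeA", "ModelA"): 1,
    ("ClassB", "TypeA", "ModelA"): 2,
    ("ClassB", "TypeB", "ModelB"): 3,
}

def occupied_cpus(report_class: str, functionality: str, model: str, n_cpu_aiml: int) -> int:
    if report_class == "ClassC":
        return n_cpu_aiml  # Case 3: complex reports consume the maximum UE capability
    return PREDEFINED_O.get((report_class, functionality, model), 1)

print(occupied_cpus("ClassB", "TypeA", "ModelA", n_cpu_aiml=4))  # 2
print(occupied_cpus("ClassC", "TypeA", "ModelA", n_cpu_aiml=4))  # 4
```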
2. Option B: OCPU, AI/ML (n) is indicated by the UE to the gNB
In Option B, the OCPU, AI/ML (n) of each AI/ML-based CSI report is obtained through UE reporting. After receiving this information through UE reporting, the gNB starts to decide the AI/ML-based CSI measurement configuration that is assigned to the UE under the condition that the total occupied resources for AI/ML model inference do not exceed the UE capability for AI/ML model inference, that is, Σn=0, …, NAI/ML-1 OCPU, AI/ML (n) ≤ NCPU, AI/ML.
In order for the base station to accurately identify the UE's AI/ML CPU occupation under different situations (i.e., measurements, AI/ML functionalities, AI/ML models) , so as to maximize the UE's computing power and CSI feedback capability, the information fed back by the UE to the base station should contain one or more of the following parameters: AI/ML model related information and the occupied CPUs OCPU, AI/ML (n) for AI/ML-based CSI. The information fed back by the UE is model-specific metadata and may be included in UE capability reporting, uplink control/data channels like PUSCH/PUCCH, MAC layer control elements, RRC layer signaling, or other appropriate control mechanisms available in the radio protocol stack.
■ AI/ML model related information: The AI/ML model related information includes AI/ML model attribute description information used to identify and differentiate between various AI/ML models that are used by the UE or gNB for CSI measurement and calculation.
■ Occupied CPUs OCPU, AI/ML (n) for AI/ML-based CSI: The occupied CPUs for the n-th AI/ML-based CSI measurement and calculation used by various CSI contents, and/or AI/ML features/functionalities, and/or AI/ML models may be different. Furthermore, the number of occupied resources may include at least one of the parameters Oproc (n) , Omem (n) , Ocomp (n) , or Opower (n) , which represent the process resources, memory, computation capabilities, or power consumption of the n-th CSI measurement and calculation or reporting. Alternatively, the number of occupied resources may include a weighted sum of the parameters Oproc (n) , Omem (n) , Ocomp (n) , and Opower (n) . Definitions of these parameters and of their weights are illustrated in Embodiment 1 and Embodiment 2.
Embodiment 1: AI/ML specific CPU and processing criteria with one or more physical info (s)
In Embodiment 1, the parameter NCPU, AI/ML, which indicates the number of supported simultaneous AI/ML-based processing, is a virtualized parameter that originates from and is influenced by multiple physical factors, including processes, memory, computing power, energy consumption, etc. Using just one parameter NCPU, AI/ML may not fully reflect the UE's capability of simultaneously processing AI/ML-based CSI measurements and calculations. In this embodiment, the CSI measurement requirements configured by the base station to the UE, or the UE-configured CSI measurement requirements, or the CSI measurement update requirements (including CSI-RS resource configuration, CSI reporting configuration, CSI measurement configuration, CSI triggering etc. ) , should be determined by at least one of the following parameters:
■ The number of supported simultaneous process resources for AI/ML-based CSI measurements Nproc: which represents the process resources currently available on the UE that can be used for AI/ML model computation, where the parameter Nproc may be indicated by UE or defined by system parameters.
■ The number of supported simultaneous occupied memory resources for AI/ML-based CSI measurements Nmem: which represents the available RAM resources on UE that can be used for AI/ML model inference, where the parameter Nmem may be indicated by UE or defined by system parameters.
■ The number of supported simultaneous computation capabilities for AI/ML-based CSI measurements Ncomp: which represents the available computation resources on UE that can be used for AI/ML model inference, where the parameter Ncomp may be indicated by UE or defined by system parameters.
■ The number of supported simultaneous power consumptions for AI/ML-based CSI measurements Npower: which represents the maximum power consumptions on UE that can be used for AI/ML model inference, where the parameter Npower may be indicated by UE or defined by system parameters.
Accordingly, the number of occupied process resources, memory resources, computation capabilities and power consumptions for the n-th AI/ML-based CSI reporting (n=0, 1, …, NAI/ML-1) should be defined as follows,
■ The number of occupied process resources Oproc (n) for the n-th AI/ML-based CSI reporting, where n is an integer variable varying from 0 to NAI/ML-1: which means that a UE will occupy Oproc (n) process resources for the n-th AI/ML-based CSI reporting, where Oproc (n) may be indicated by the UE or defined by system parameters.
■ The number of occupied memory resources Omem (n) for the n-th AI/ML-based CSI reporting, where n is an integer variable varying from 0 to NAI/ML-1: which means that a UE will occupy Omem (n) memory resources for the n-th AI/ML-based CSI reporting, where Omem (n) may be indicated by the UE or defined by system parameters.
■ The number of occupied computation capabilities Ocomp (n) for the n-th AI/ML-based CSI reporting, where n is an integer variable varying from 0 to NAI/ML-1: which means that a UE will occupy Ocomp (n) computation capabilities for the n-th AI/ML-based CSI reporting, where Ocomp (n) may be indicated by the UE or defined by system parameters.
■ The number of occupied power consumptions Opower (n) for the n-th AI/ML-based CSI reporting, where n is an integer variable varying from 0 to NAI/ML-1: which means that a UE will occupy Opower (n) power consumptions for the n-th AI/ML-based CSI reporting, where Opower (n) may be indicated by the UE or defined by system parameters.
For example, after receiving the four parameters, the gNB 20 decides to assign the AI/ML-based CSI report configuration (i.e., CSI reporting configuration) to the UE according to an evaluation of the UE capability which satisfies at least one of the following conditions (a sketch of this check is given after the list) :
Σn=0, …, NAI/ML-1 Oproc (n) ≤ Nproc;
Σn=0, …, NAI/ML-1 Omem (n) ≤ Nmem;
Σn=0, …, NAI/ML-1 Ocomp (n) ≤ Ncomp; and
Σn=0, …, NAI/ML-1 Opower (n) ≤ Npower.
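The sketch below illustrates this per-resource evaluation in Python; for simplicity it checks every dimension (a conservative variant of the "at least one of the following conditions" wording above), and all names are illustrative assumptions.

```python
# Hypothetical sketch of the gNB-side evaluation against the four capability dimensions.
from dataclasses import dataclass
from typing import List

@dataclass
class ReportCost:
    proc: int   # O_proc(n)
    mem: int    # O_mem(n)
    comp: int   # O_comp(n)
    power: int  # O_power(n)

@dataclass
class UeCapability:
    n_proc: int
    n_mem: int
    n_comp: int
    n_power: int

def fits_capability(reports: List[ReportCost], cap: UeCapability) -> bool:
    """Conservative check: the configured reports stay within every capability dimension."""
    return (sum(r.proc for r in reports) <= cap.n_proc
            and sum(r.mem for r in reports) <= cap.n_mem
            and sum(r.comp for r in reports) <= cap.n_comp
            and sum(r.power for r in reports) <= cap.n_power)
```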
Furthermore, the parameters Oproc (n) , Omem (n) , Ocomp (n) , and Opower (n) are determined based on one or more factors, such as contents of CSI measurement, AI/ML model, AI/ML functionality, etc. The detailed method depends on the implementation. The values of one or more of Oproc (n) , Omem (n) , Ocomp (n) , and Opower (n) may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20. In addition, the values of one part of Oproc (n) , Omem (n) , Ocomp (n) , and Opower (n) can be obtained from predefined system parameters while another part can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
1. Option A: one or more of Oproc (n) , Omem (n) , Ocomp (n) , and Opower (n) are predefined by the wireless communication system.
The table below shows an example in which one or more of Oproc (n) , Omem (n) , Ocomp (n) , and Opower (n) are predefined by the wireless communication system. In the table, Pk, Mk, Ck, and Ok are the predefined values of the consumed resources of process, memory, computation capabilities and power consumption, where k is an integer that varies from 1 to 7. The meanings of Class A/B/C representing types of CSI-ReportConfig, Type A/B/C of AI/ML functionality, and Model A/B of AI/ML models are the same as illustrated in the above description.
Table 3
Here is an example showing how to determine the values of Pk, Mk, Ck, and Ok under the condition that the parameters Oproc (n) , Omem (n) , Ocomp (n) , and Opower (n) are determined only based on the contents of CSI measurement. The example takes three cases for demonstration according to the different types of CSI-ReportConfig.
● Case 1: when the type of CSI-ReportConfig is Class A, 
● Case 2: when the type of CSI-ReportConfig is Class B, 
● Case 3: when the type of CSI-ReportConfig is Class C, the CSI measurement/calculation metrics are relatively complex and the measurement task is heavy. As a result, the AI/ML models used for the CSI measurement/calculation also have high complexity and resource overhead. Hence, the values of the parameters are set equal to the maximum UE capability, that is, Oproc (n) = Nproc, Omem (n) = Nmem, Ocomp (n) = Ncomp, and Opower (n) = Npower.
2. Option B: one or more of Oproc (n) , Omem (n) , Ocomp (n) , and Opower (n) are indicated by the UE to the gNB
These parameters can be carried in a UE capability report, MAC-CE or RRC message over PUSCH/PUCCH and contain at least one of the following:
■ AI/ML model related information: The AI/ML model related information includes AI/ML model attribute description information used to identify an AI/ML model, which can be a globally unique ID, a PLMN-specific unique ID, an operator-specific unique ID, or an AI/ML management platform-specific unique ID.
■ Number of processes Oproc (n) : The parameter is used to describe the number of processes consumed by the AI/ML model.
■ Occupied memory Omem (n) : The parameter is used to describe the RAM consumed in the process of AI/ML model inferring.
■ Consumed computation capabilities Ocomp (n) : The parameter is used to describe the computation capabilities consumed in the process of AI/ML inferring.
■ Power consumption Opower (n) : The parameter is used to describe the power consumption of AI/ML inferring.
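As an illustration of the reporting content listed above, a UE report for Option B might be structured as in the following sketch; the field names and example values are assumptions, and the actual encoding (UE capability report, MAC-CE, RRC message, etc.) is left open by the disclosure.

```python
# Hypothetical sketch of an Option B report carrying the model identity and the
# per-report occupied resources. Field names and values are illustrative only.
from dataclasses import dataclass, asdict

@dataclass
class AimlCsiResourceReport:
    model_id: str   # AI/ML model related information (e.g., a globally unique ID)
    o_proc: int     # number of occupied processes
    o_mem: int      # occupied memory for model inference
    o_comp: int     # consumed computation capabilities
    o_power: int    # power consumption of model inference

report = AimlCsiResourceReport(model_id="model-0001", o_proc=1, o_mem=64, o_comp=8, o_power=2)
print(asdict(report))
```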
Embodiment 2: AI/ML specific CPU and processing criteria with a weighted parameter originated from one or more physical info (s) .
In Embodiment 2, one or more of the parameters Nproc, Nmem, Ncomp, and Npower, which indicate the UE capability used for supporting AI/ML-based CSI measurements, are transmitted to the gNB 20 from the UE (e.g., UE 10) , which may incur significant air interface overhead. In this embodiment, the CSI measurement requirements configured by the base station to the UE, or the UE-configured CSI measurement requirements, or the CSI measurement update requirements (including CSI-RS resource configuration, CSI reporting configuration, CSI measurement configuration, CSI triggering etc. ) , should be determined by at least one of the following parameters:
■ The weighted number of supported simultaneous AI/ML specific CSI processing units for CSI measurement and calculation NALL: The parameter NALL means that a UE has NALL AI/ML specific CSI processing units for processing AI/ML-based CSI reports, where NALL may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
■ The priority (weight value) Pproc (n) of occupied process resources for the n-th AI/ML-based CSI reporting, where n is an integer variable varying from 0 to NAI/ML-1: The priority Pproc (n) indicates a priority of the process resource in evaluating the UE capability for AI/ML-based CSI measurement and calculation for the n-th AI/ML-based CSI reporting, where Pproc (n) may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
■ The priority (weight value) Pmem (n) of occupied memory resources for the n-th AI/ML-based CSI reporting, where n is an integer variable varying from 0 to NAI/ML-1: The priority Pmem (n) indicates a priority of the memory resource in evaluating the UE capability for AI/ML-based CSI measurement and calculation for the n-th AI/ML-based CSI reporting, where Pmem (n) may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
■ The priority (weight value) Pcomp (n) of computation capabilities for the n-th AI/ML-based CSI reporting, where n is an integer variable varying from 0 to NAI/ML-1: The priority Pcomp (n) indicates the priority of computation capabilities in evaluating the UE capability for AI/ML-based CSI measurement and calculation for the n-th AI/ML-based CSI reporting, where Pcomp (n) may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
■ The priority (weight value) Ppower (n) of power consumption for the n-th AI/ML-based CSI reporting, where n is an integer variable varying from 0 to NAI/ML-1: The priority Ppower (n) indicates the priority of power consumption in evaluating the UE capability for AI/ML-based CSI measurement and calculation for the n-th AI/ML-based CSI reporting, where Ppower (n) may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
For example, the weighted parameter can be calculated using the priority parameters (weight values) Pproc (n) , Pmem (n) , Pcomp (n) , and Ppower (n) and the occupied resources Oproc (n) , Omem (n) , Ocomp (n) , and Opower (n) , as shown below,
where the parameters NAI/ML, Nproc, Nmem, Ncomp, and Npower are defined above.
At the same time, a UE is not expected to be configured with an aperiodic CSI trigger state containing more than NALL reporting settings using AI/ML model inference.
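Purely as an illustration of this weighted evaluation, the sketch below assumes that each report's weighted occupation is the sum of its four resource components multiplied by their priorities and that the total must not exceed NALL; the names and the assumed form of the weighting are illustrative, not the normative formula of the disclosure.

```python
# Hypothetical sketch of the weighted-capability check of Embodiment 2 (assumed form).
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class WeightedReport:
    o: Dict[str, float]  # occupied resources, e.g. {"proc": 1, "mem": 2, "comp": 4, "power": 1}
    p: Dict[str, float]  # priorities (weights), e.g. {"proc": 0.1, "mem": 0.4, "comp": 0.4, "power": 0.1}

def weighted_occupation(r: WeightedReport) -> float:
    """Weighted occupation of one AI/ML-based CSI report (assumed form)."""
    return sum(r.p[k] * r.o[k] for k in r.o)

def fits_weighted_capability(reports: List[WeightedReport], n_all: float) -> bool:
    return sum(weighted_occupation(r) for r in reports) <= n_all
```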
Furthermore, the parameters Pproc (n) , Pmem (n) , Pcomp (n) , and Ppower (n) are determined based on one or more factors, such as contents of CSI measurement, AI/ML model, AI/ML functionality, etc. The detailed method depends on the implementation. The values of one or more of Pproc (n) , Pmem (n) , Pcomp (n) , and Ppower (n) may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20. In addition, the values of one part of Pproc (n) , Pmem (n) , Pcomp (n) , and Ppower (n) can be obtained from predefined system parameters while another part can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
1. Option A: one or more of Pproc (n) , Pmem (n) , Pcomp (n) , and Ppower (n) are predefined by the wireless communication system.
The table below shows an example in which Pproc (n) , Pmem (n) , Pcomp (n) , and Ppower (n) are predefined by the wireless communication system. In the table, Pproc k, Pmem k, Pcomp k, and Ppower k are the predefined values of the priority (weight value) of process, memory, computation capabilities and power consumption. The variable k is an integer representing a column index. The meanings of Class A/B/C for types of CSI-ReportConfig, Type A/B/C for types of AI/ML functionality, and Model A/B for AI/ML models are the same as illustrated in the above description.
Table 4
Here is an example showing how to determine the values of Pproc k, Pmem k, Pcomp k, and Ppower k under the condition that the parameters Pproc (n) , Pmem (n) , Pcomp (n) , and Ppower (n) are determined only based on the contents of CSI measurement. The example takes three cases for demonstration according to the different types of CSI-ReportConfig.
● Case 1: when the type of CSI-ReportConfig is Class A, 
● Case 2: when the type of CSI-ReportConfig is Class B, 
● Case 3: when the type of CSI-ReportConfig is Class C, the CSI measurement/calculation metrics are relatively complex and the measurement task is heavy. As a result, the AI/ML models used for the CSI measurement/calculation also have high complexity and resource overhead. Hence, the priorities of the memory resource and the computation capabilities are set higher than those of the process resource and the power consumption, that is, Pmem (n) , Pcomp (n) > Pproc (n) , Ppower (n) .
2. Option B: one or more of Pproc (n) , Pmem (n) , Pcomp (n) , and Ppower (n) are indicated by the UE to the gNB.
One or more of Pproc (n) , Pmem (n) , Pcomp (n) , and Ppower (n) of each AI/ML-based CSI report are obtained through UE reporting by the UE (e.g., UE 10) . After receiving these parameters, the gNB 20 decides to assign the AI/ML-based CSI report configuration (i.e., CSI reporting configuration) to the UE (e.g., UE 10) according to an evaluation of the UE capability which satisfies at least one of the following conditions:
These parameters can be carried in a UE capability report, MAC-CE or RRC message over PUSCH/PUCCH and contain at least one of the following:
■ AI/ML model related information: The AI/ML model related information includes AI/ML model attribute description information used to identify an AI/ML model, which can be a globally unique ID, a PLMN-specific unique ID, an operator-specific unique ID, or an AI/ML management platform-specific unique ID.
■ Priority of process Pproc (n) : The parameter is used to describe the priority of the consumed processes of the AI/ML model.
■ Priority of occupied memory Pmem (n) : The parameter is used to describe the priority of the RAM consumed in the process of AI/ML model inferring.
■ Priority of consumed computation capabilities Pcomp (n) : The parameter is used to describe the priority of the computation capabilities consumed in the process of AI/ML inferring.
■ Priority of power consumption Ppower (n) : The parameter is used to describe the priority of the power consumption of AI/ML inferring.
Embodiment 3: Compatibility between AI/ML-based and non-AI/ML-based CSI measurement and calculation.
To ensure the compatibility of CSI measurements with both AI/ML and non-AI/ML models, the UE should be able to support both types of models simultaneously, because the AI/ML model may switch or fall back to a non-AI/ML model in some scenarios. Note that other schemes for compatibility between AI/ML-based and non-AI/ML-based CSI measurement and calculation, including conventional schemes or schemes under development or developed in the future, can be incorporated into embodiments of the disclosure. Therefore, the UE should have the capability to perform CSI measurements using both AI/ML and non-AI/ML models in the same UE, and the UE (e.g., UE 10) may transmit signaling to the base station to inform the base station how to configure the non-AI/ML-based CSI measurements and AI/ML-based CSI measurements based on the UE's capability. A UE may transmit to the base station both NCPU corresponding to the UE capability for non-AI/ML-based CSI measurements and NCPU, AI/ML corresponding to the UE capability for AI/ML-based CSI measurements. AI/ML-based CSI measurement and non-AI/ML-based CSI measurement running simultaneously may or may not occupy the same hardware and system resources. There are two scenarios for AI/ML and non-AI/ML models of CSI measurement running on one single UE: one with AI/ML models running on separate micro processing units (MPUs) , and another with AI/ML and non-AI/ML models running on a shared MPU. The following information may be indicated by the UE,
■ AI/ML model related information: The AI/ML model related information includes AI/ML model attribute description information used to identify an AI/ML model used by the UE or gNB for CSI measurement and calculation.
■ A CSI calculation resource occupation scheme indicator: The indicator shows whether the AI/ML-based and non-AI/ML-based CSI measurement and calculation occupy separate CPU pools or share the same CPU pool, where the CPU may be computation process resource, memory, or others.
■ Relationship information that indicates a relationship between the CPUs used by AI/ML-based CSI measurement and calculation and the CPUs used by non-AI/ML-based CSI measurement and calculation: The CPU may include the total CPU number, and/or the occupied CPUs for one CSI calculation, reporting, or measurement.
The CSI calculation resource occupation scheme indicator can be signaled implicitly or explicitly from the UE (e.g., UE 10) to the base station (e.g., gNB 20) ; it can be determined implicitly from the relationship information or indicated directly by an explicit indication. An example of the detailed implementation is shown in the following:
(1) Option 1: Determined by the relationship information. The relationship information with a valid value means that AI/ML-based CSI measurement and calculation and non-AI/ML-based CSI measurement and calculation share the same CPU pool. The relationship information with an invalid value means that the AI/ML-based CSI measurement and calculation and non-AI/ML-based CSI measurement and calculation are processed with separate CPU pools. The invalid value can be denoted by NULL, a maximum value, a minimum value or alternative values.
Furthermore, the relationship information may depict a relationship between CPUs used by AI/ML-based CSI measurement and calculation and CPUs used by non-AI/ML-based CSI measurement and calculation. For example, the relationship can be illustrated as follows, while additional methods are not precluded,
■ A scaling coefficient Kn representing a relationship between the CPU consumption of AI/ML-based CSI processing and non-AI/ML-based CSI processing for the n-th AI/ML-based CSI reporting, where n is an integer variable varying from 0 to NAI/ML-1: The scaling coefficient Kn means a ratio of the CPUs used by the AI/ML model to the CPUs used by the non-AI/ML model. Kn may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20. The conversion coefficient Kn can be expressed as Kn = OCPU, AI/ML (n) /OCPU (n) , where OCPU (n) denotes the CPUs used by the corresponding non-AI/ML-based CSI processing.
■ A difference Dn between the CPU consumption of AI/ML-based CSI processing and non-AI/ML-based CSI processing for the n-th AI/ML-based CSI reporting, where n is an integer variable varying from 0 to NAI/ML-1: The difference Dn means a difference between the CPUs used by the AI/ML model and the CPUs used by the non-AI/ML model. Dn may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20. The conversion difference Dn can be expressed as Dn = OCPU, AI/ML (n) -OCPU (n) .
(2) Option 2: A new indicator needs to be transmitted from the UE (e.g., UE 10) to the gNB (e.g., gNB 20) to indicate whether the AI/ML and non-AI/ML models occupy separate CPU pools or share the same CPU pool. This indicator can take the form of a binary digit, with 0 representing separation and 1 indicating sharing, for example.
Once the gNB (e.g., gNB 20) knows whether the AI/ML models and non-AI/ML models occupy separate MPU resources or share the same MPU resources, as well as the relationship between the CPUs used for AI/ML-based and non-AI/ML-based CSI measurement and calculation, the gNB can assign a CSI measurement configuration (CSI-RS resource, CSI reporting, CSI triggering etc. ) to the UE (e.g., UE 10) that matches the UE capabilities for both AI/ML-based and non-AI/ML-based CSI measurement and calculations according to the following two cases:
(1) Case 1: AI/ML-based and non-AI/ML-based CSI measurement and calculation use separate CPU pools.
Each CPU pool may comprise hardware and system resources. This case indicates that the hardware and system resources occupied for performing CSI measurement or CSI calculation using AI/ML models and non-AI/ML models are separate and do not affect each other. Thus, in accordance with the technical solutions outlined in Embodiment 1, when the base station (e.g., gNB 20) delivers CSI measurement configurations (CSI-RS resource, CSI reporting, CSI triggering, etc. ) for both AI/ML models and non-AI/ML models to the UE (e.g., UE 10) , the base station ensures that the configurations align with the respective UE capabilities, meeting the conditions outlined below.
Σn=0, …, N-1 OCPU (n) ≤ NCPU and Σn=0, …, NAI/ML-1 OCPU, AI/ML (n) ≤ NCPU, AI/ML
where the parameter N represents the number of assigned CSI reporting tasks which are implemented by the non-AI/ML model, and the parameter NAI/ML represents the number of assigned CSI reporting tasks which are implemented by AI/ML model inferring. The number of assigned CSI reporting tasks may be configured in the CSI report configuration (i.e., CSI reporting configuration) .
In this case, the UE (e.g., UE 10) need not transmit additional signaling to instruct the gNB on configuring CSI measurements and calculations for both AI/ML-based and non-AI/ML-based models, as the configuration is determined by the UE's capabilities.
(2) Case 2: AI/ML-based and non-AI/ML-based CSI measurement and calculation share the same CPU pool.
In this case, the AI/ML models and non-AI/ML models share the hardware and system resources of the same micro processing unit (MPU) when running CSI measurement and calculation computations, so there is a significant risk of saturating these resources. If the CSI report configurations (i.e., CSI reporting configurations) issued by the gNB for execution using non-AI/ML models and AI/ML models occupy the NCPU and NCPU, AI/ML resources to saturation, the CPUs occupied by the UE when performing the corresponding CSI measurement and calculation operations are very likely to exceed the UE's capabilities, leading to failure in processing some CSI report configurations (i.e., CSI reporting configurations) and a decreased success rate of CSI feedback.
In order to avoid this situation, the resource consumption of AI/ML models cannot be considered separately from that of non-AI/ML models. Instead, a conversion relationship between the CPU consumption of AI/ML models and non-AI/ML models needs to be considered. Using the conversion relationship, the CPU consumption of one type of CSI measurement and calculation can be converted into the CPU consumption of the other type of CSI measurement and calculation for evaluation. The types of  CSI measurement and calculation include AI/ML-based and non-AI/ML-based CSI measurement and calculations.
The conversion relationship between the CPU consumption of the AI/ML model and the non-AI/ML model for the CSI measurement and calculations should be determined by at least one of the parameters Kn and Dn. For example, the overall CPU consumption needs to satisfy the following formula,
Σn=0, …, N-1 OCPU (n) +Σn=0, …, NAI/ML-1 OCPU, AI/ML (n) /Kn ≤ NCPU
In another example, the overall CPU consumption needs to satisfy the following formula,
Σn=0, …, N-1 OCPU (n) +Σn=0, …, NAI/ML-1 (OCPU, AI/ML (n) -Dn) ≤ NCPU
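The shared-pool evaluation with the conversion coefficient Kn could be sketched as follows; the direction of the conversion (dividing the AI/ML occupation by Kn to obtain its non-AI/ML equivalent) and all names are assumptions for illustration.

```python
# Hypothetical sketch of the shared-CPU-pool check using K_n to convert AI/ML CPU
# occupation into non-AI/ML-equivalent units before comparing against N_CPU.
from typing import List

def shared_pool_ok(non_aiml_costs: List[float],  # O_CPU(n) of non-AI/ML reports
                   aiml_costs: List[float],      # O_CPU,AI/ML(n) of AI/ML reports
                   k: List[float],               # K_n for each AI/ML report
                   n_cpu: float) -> bool:
    converted = [c / kn for c, kn in zip(aiml_costs, k)]  # non-AI/ML equivalents
    return sum(non_aiml_costs) + sum(converted) <= n_cpu

# Example: two non-AI/ML reports of 1 CPU each, one AI/ML report of 4 CPUs with K_n = 2,
# evaluated against a common budget of 5 CPUs: 1 + 1 + 4/2 = 4 <= 5.
print(shared_pool_ok([1, 1], [4], [2], n_cpu=5))  # True
```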
Embodiment 4: time consumption for AI/ML-based CSI measurement and calculation including model activation time and inferring time.
With reference to FIG. 7, an example shows timing of CSI reporting, AI/ML model activation, and AI/ML model inferencing. For AI/ML-based CSI measurement and calculation, a model activation process is needed to load the AI/ML model into main memory (e.g., memory 11 or a main memory in memory/storage 740) before subsequent model inference operations can be performed. In general, AI/ML models are usually stored as files (e.g., .pkl, .pmml, .mlmodel, .caffemodel format files) in read-only memory (ROM) (e.g., a ROM in memory/storage 740) or a database on the UE side. The AI/ML models are loaded into memory to run. As such, compared to traditional non-AI/ML models, using AI/ML models for CSI measurements and calculations requires additional time for model activation. In the process of performing CSI measurement and calculation using AI/ML models at the UE, the CSI measurement and calculation time may be determined by at least one of the following parameters:
■ The first timing duration Tproc, CSI, AI/ML of AI/ML-based CSI measurement and calculation: The ZRef, AI/ML represents the next uplink symbol after the end of the last symbol of the PDCCH (e.g., a CSI request) triggering the AI/ML-based CSI report (s) , and may be related to the numerology –μ. The Tproc, CSI, AI/ML is a time difference between the last symbol of the PDCCH triggering the CSI report (s) and next uplink symbol after the end of the last symbol of the PDCCH. The first timing duration may contain the time duration of at least one of the following operations: decoding DCI which contains CSI request, AI/ML model activation, AI/ML model switching, AI/ML model inference and beam forming switching. The first timing duration Tproc, CSI, AI/ML and the symbol ZRef, AI/ML may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to gNB 20.
■ The second timing duration T′proc, CSI, AI/ML of AI/ML-based CSI measurement and calculation: The Z′Ref, AI/ML represents the next uplink symbol after the end of the last symbol of the reference signal (e.g., CSI-RS, SRS, DMRS, SSB etc. ) for AI/ML-based CSI measurement and calculation, and may be related to the numerology –μ. T′proc, CSI, AI/ML is a time difference between the last symbol of the reference signals for CSI measurement and report (s) and the next uplink symbol after the end of  the last symbol in time of the reference signals. The second timing duration may contain the time duration of at least one of the following operations: part or the whole of AI/ML model activation, AI/ML model switching and AI/ML model inference. The second timing duration T′proc, CSI, AI/ML and the symbol Z′Ref, AI/ML may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to gNB 20.
■ The time duration required for AI/ML model activation Xload, AI/ML: This corresponds to the duration from the initiation of the CSI request or CSI-RS transmission to the successful activation of the specific AI/ML model into RAM, and may be related to the numerology -μ, where the Xload, AI/ML may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to gNB 20.
■ The time duration required for AI/ML model inference Xinf, AI/ML: This indicates the duration from the commencement of inference by the AI/ML model to completion of the inference, and may be related to the numerology -μ, where the Xinf, AI/ML may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to gNB 20.
For example, when aperiodic CSI-RS is used for channel measurement for the n-th triggered CSI report, the starting time of AI/ML model inference may be after the end of the last symbol of CSI-RS and completion of AI/ML model activation. The UE (e.g., UE 10) may receive aperiodic CSI-RS resource for channel measurements, aperiodic CSI-IM used for interference measurements, and aperiodic NZP CSI-RS for interference measurement. Parameter Z′ref, AI/ML is defined to represent the next uplink symbol after the end of the last symbol of the reference signal (e.g., CSI-RS, SRS, DMRS, SSB etc. ) for AI/ML-based CSI measurement and calculation. Z′Ref, AI/ML is a version of Z′Ref for AI/ML-based CSI measurement and calculation. ZRef, AI/ML is a version of ZRef for AI/ML-based CSI measurement and calculation.
Furthermore, the parameters Xload, AI/ML and Xinf, AI/ML may be determined based on one or more factors, such as contents of CSI measurement and calculation, AI/ML model, AI/ML functionality, etc. The detailed method depends on the implementation. The Xload, AI/ML and Xinf, AI/ML values may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20. If the parameters Xload, AI/ML and Xinf, AI/ML are predefined by the wireless communication system, the UE (e.g., UE 10) may determine one from multiple parameters Xload, AI/ML and one from multiple parameters Xinf, AI/ML based on the numerology μ and one or more factors, such as contents of CSI measurement, AI/ML model, AI/ML functionality, etc. The following table shows examples of the parameters Xload, AI/ML and Xinf, AI/ML which are related to the numerology μ and determined based on one or more factors, such as contents of CSI measurement, AI/ML model, AI/ML functionality, etc. The parameters Xinf, AI/ML can be denoted as Xinfij, and the parameters Xload, AI/ML can be denoted as Xloadij, where i is an integer variable representing a row index and j is an integer variable representing a column index. In the table, the meanings of Class A/B/C for different types of CSI-ReportConfig, Type A/B/C for different AI/ML functionalities, and Model A/B/C for different AI/ML models are the same as illustrated in the above description.
Table 5
The initiation of the AI/ML model's inference time must satisfy two conditions: (1) The UE has received the CSI reference signals, such as CSI-RS, IM-RS, or others, transmitted by the gNB for CSI measurement; (2) The activation of the AI/ML model is finished.
Once the activation of the AI/ML model is completed, considering the specific time duration Xload, AI/ML for AI/ML model activation, the inference time Xinf, AI/ML, and the timing of CSI reference signal transmission from the gNB to the UE, the relationship between these three time durations can result in two scenarios, denoted as case 1 and case 2:
■ Case 1: The AI/ML model activation process has been completed before the UE (e.g., UE 10) receives CSI reference signals.
With reference to FIG. 7, in this case, when the UE (e.g., UE 10) receives the last symbol of CSI reference signals which include CSI-RS, IM-RS, or other types of signals, the process of AI/ML model activation has been accomplished and the model has been successfully loaded into RAM. The time duration Xinf, AI/ML begins when the UE receives the last symbol of CSI reference signals transmitted from the gNB and ends when the CSI measurement and calculation using AI/ML model inference has been accomplished with inference results obtained. The inference results include CSI measurement and calculation obtained from inference of the AI/ML model. That is, Xinf, AI/ML constitutes T′proc, CSI, AI/ML for the n-th triggered CSI report. Therefore, the time duration T′proc, CSI, AI/ML equals Xinf, AI/ML.
When the time of Xinf, AI/ML ends earlier than a transmission occasion of the first symbol of a message carrying a CSI report using an uplink channel, such as PUSCH, the UE shall provide a valid CSI  report for the n-th triggered CSI report using AI/ML model inference. Otherwise, the UE may ignore the CSI report using the AI/ML model inference if no HARQ-ACK or transport block is multiplexed on the PUSCH.
■ Case 2: The AI/ML model activation process has been completed after the UE (e.g., UE 10) receives CSI reference signals.
With reference to FIG. 8, in this case, when the UE receives the last symbol of CSI reference signals which include CSI-RS, IM-RS, or other types of signals, the AI/ML model activation is in progress and the model has not yet been loaded into RAM successfully. The time duration Xinf, AI/ML begins when the AI/ML model has been successfully loaded into RAM and ends when the CSI measurement and calculation, utilizing AI/ML model inference, has been completed with inference results obtained. The inference results include CSI measurement and calculation obtained from inference of the AI/ML model. That is, Xload, AI/ML and Xinf, AI/ML constitute Tproc, CSI, AI/ML for the n-th triggered CSI report, and the time durations Xload, AI/ML and Xinf, AI/ML are consecutive without a time gap between them. The time duration Tproc, CSI, AI/ML, measured from the last symbol of the message carrying the CSI request, thus equals Xload, AI/ML+Xinf, AI/ML.
When the time of Xload, AI/ML+Xinf, AI/ML ends earlier than a transmission occasion of the first symbol of a message carrying a CSI report using an uplink channel, such as PUSCH, the UE (e.g., UE 10) shall provide a valid CSI report for the n-th triggered CSI report using AI/ML model inference. Otherwise, the UE may ignore the CSI report using the AI/ML model inference if no HARQ-ACK or transport block is multiplexed on the PUSCH.
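Cases 1 and 2 can be summarized by the small timing check below, which treats all times as abstract numbers (e.g., OFDM symbol indices) and ignores the HARQ-ACK/transport-block multiplexing condition; every name is an illustrative assumption rather than a normative definition.

```python
# Hypothetical sketch: the AI/ML-based CSI report is valid only if model inference,
# started after both the last CSI-RS symbol and the end of model activation, completes
# before the first symbol of the PUSCH carrying the report.
def csi_report_valid(rs_end: float, activation_end: float,
                     x_inf: float, pusch_first_symbol: float) -> bool:
    inference_start = max(rs_end, activation_end)  # Case 1: rs_end; Case 2: activation_end
    return inference_start + x_inf <= pusch_first_symbol

print(csi_report_valid(rs_end=10, activation_end=8, x_inf=5, pusch_first_symbol=16))   # Case 1: True
print(csi_report_valid(rs_end=10, activation_end=14, x_inf=5, pusch_first_symbol=16))  # Case 2: False
```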
Embodiment 5: time consumption for AI/ML-based CSI measurement and calculation including model activation and inferring duration compared to time consumption for non-AI/ML-based CSI measurement and calculation.
Embodiment 5 is similar to Embodiment 4 and can be understood by referring to it, with the exception of what is detailed in the following. In this embodiment, the difference between the time consumption of non-AI/ML-based CSI measurement and calculation and the time consumption of AI/ML-based CSI measurement and calculation is defined. The difference may be quantified in terms of the number of OFDM symbols. This embodiment addresses a scenario where the completion of AI/ML model activation occurs prior to the UE (e.g., UE 10) receiving all of the CSI reference signals transmitted from the gNB (e.g., gNB 20) , such as CSI-RS, CSI-IM, etc. Therefore, the difference represents the gap between the time consumed by non-AI/ML model execution and the time consumed by AI/ML model inferring on the UE (e.g., UE 10) .
In the process of performing CSI measurement and calculation using AI/ML models at the UE (e.g., UE 10) , the CSI measurement and calculation time should be determined by the following parameter,
■ The difference in terms of a number of OFDM symbols ΔZ between duration of AI/ML model inferring and duration of non-AI/ML execution on UE (e.g., UE 10) : The parameter ΔZ represents the difference between a number of OFDM symbols required by AI/ML model inferring and a number of OFDM symbols required by the corresponding non-AI/ML execution. The parameter ΔZ may be related to the numerology -μ, where the ΔZ may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to gNB 20.
For example, the parameter ΔZ can be calculated according to the following formula:
ΔZ= T′proc, CSI, AI/ML-T′proc, CSI            (16)
where parameter T′proc, CSI, AI/ML is defined in Embodiment 4 and parameter T′proc, CSI represents the predefined number of OFDM symbols for the execution of non-AI/ML-based CSI measurement and calculation and can be regarded as a constant known by both UE and gNB (e.g., UE 10 and gNB 20) .
Furthermore, the parameter ΔZ may be determined based on one or more factors, such as contents of CSI measurement, AI/ML model, AI/ML functionality, etc. The detailed method depends on the implementation. The ΔZ value may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20.
The UE (e.g., UE 10) may determine one from multiple parameters ΔZ based on the numerology μ and one or more factors, such as contents of CSI measurement, AI/ML model, AI/ML functionality, etc. The following table shows examples of the parameter ΔZ which are related to the numerology μ and determined based on one or more factors, such as contents of CSI measurement, AI/ML model, AI/ML functionality, etc. The parameters ΔZ can be denoted as ΔZij, where i is an integer variable representing a row index and j is an integer variable representing a column index. In the table, the meanings of Class A/B/C for different types of CSI-ReportConfig, Type A/B/C for different AI/ML functionalities, and Model A/B/C for different AI/ML models are the same as illustrated in the above description.
Table 6
● Case 1: when the AI/ML functionality corresponds to Class A and the numerology is 0, the AI/ML model is simple, with less time consumption for model activation and inference, and the time difference may be zero, that is, ΔZ04=0.
● Case 2: when the AI/ML functionality corresponds to Class B and the numerology is 0, the CSI measurement/calculation metrics are relatively complex, with long time consumption for model activation and inference. Hence, the value of the parameter ΔZ is set to a larger value, that is, ΔZ05=20.
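Following formula (16), the required AI/ML processing time can be derived from the non-AI/ML requirement plus a table-driven ΔZ, as in the sketch below; the table entries are placeholders that merely mirror the two cases above, not values from Table 6.

```python
# Hypothetical sketch: T'_proc,CSI,AI/ML = T'_proc,CSI + deltaZ(mu, column), per formula (16).
DELTA_Z = {
    (0, 4): 0,    # simple model, no extra OFDM symbols needed (Case 1)
    (0, 5): 20,   # complex model needing 20 extra OFDM symbols (Case 2)
}

def aiml_processing_symbols(t_proc_csi: int, mu: int, col: int) -> int:
    """Number of OFDM symbols required for AI/ML-based CSI measurement and calculation."""
    return t_proc_csi + DELTA_Z.get((mu, col), 0)

print(aiml_processing_symbols(t_proc_csi=11, mu=0, col=5))  # 11 + 20 = 31 symbols
```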
Embodiment 6: Time consumption for AI/ML-based CSI measurement and calculation with one overall time duration
Embodiment 6 is similar to Embodiment 4 and may be understood with reference to it, except for what is specified in the following. In this embodiment, the AI/ML model activation process initiates upon the UE's reception of CSI reference signals, including CSI-RS, IM-RS, or other signal types, and the AI/ML model begins inference immediately upon successful activation (e.g., loading into RAM) . The time durations Xload, AI/ML and Xinf, AI/ML defined in Embodiment 4 are consecutive with no time gap between them. Therefore, there is no need to distinguish the end time of Xload, AI/ML and the start time of Xinf, AI/ML, and these two durations can be merged into one single duration that begins when the UE receives CSI reference signals and ends when the AI/ML model has completed its inference for CSI measurements with an inference result for CSI reporting obtained. The merged time duration contains the time duration of AI/ML model activation and inference.
In the process of performing CSI measurement and calculation using AI/ML models at the UE, the CSI measurement and calculation time should be determined by at least one of the following parameters:
■ The time duration consumed by the AI/ML model running on UE XAI: This parameter represents the overall time duration of AI/ML model running on UE (e.g., UE 10) , including the time duration of AI/ML model activation, switching and inferring, as well as decoding DCI which contains CSI request and beamforming switching, and begins when the UE receives CSI reference signals, where the XAI may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to gNB 20.
The parameter XAI may be determined based on one or more factors, such as contents of CSI measurement, AI/ML model, AI/ML functionality, etc. The detailed method depends on the implementation. The XAI value may be obtained from predefined system parameters or can be indicated by the UE 10 via a reporting message sent from the UE 10 to the gNB 20. If the parameter XAI is predefined by the wireless communication system, the UE (e.g., UE 10) may determine one from multiple parameters XAI based on the numerology μ and one or more factors, such as contents of CSI measurement, AI/ML model, AI/ML functionality, etc. The following table shows examples of the parameter XAI which are related to the numerology μ and determined based on one or more factors, such as contents of CSI measurement, AI/ML model, AI/ML functionality, etc. The parameters XAI can be denoted as XAIij, where i is an integer variable representing a row index and j is an integer variable representing a column index. In the table, the meanings of Class A/B/C for different types of CSI-ReportConfig, Type A/B/C for different AI/ML functionalities, and Model A/B/C for different AI/ML models are the same as illustrated in the above description.
Table 7
In this embodiment, when the UE (e.g., UE 10) receives the last symbol of CSI reference signals which include CSI-RS, IM-RS and other types of signals, the process of AI/ML model activation begins. The time duration XAI starts when the UE (e.g., UE 10) receives the last symbol of CSI reference signals transmitted from the gNB (e.g., gNB 20) and ends when the AI/ML model has completed its inference for CSI measurement and calculation with an inference result for CSI reporting obtained. That is, the duration XAI equals T′proc, CSI, AI/ML for the n-th triggered CSI report.
When the time of T′proc, CSI, AI/ML ends earlier than a transmission occasion of the first symbol of a message carrying a CSI report using an uplink channel, such as PUSCH, the UE shall provide a valid CSI report for the n-th triggered CSI report using AI/ML model inference. Otherwise, the UE may ignore the CSI report using the AI/ML model inference if no HARQ-ACK or transport block is multiplexed on the PUSCH.
FIG. 9 is a block diagram of an example system 700 for wireless communication according to an embodiment of the present disclosure. Embodiments described herein may be implemented into the system using any suitably configured hardware and/or software. FIG. 9 illustrates the system 700 including a radio frequency (RF) circuitry 710, a baseband circuitry 720, a processing unit 730, a memory/storage 740, a display 750, a camera 760, a sensor 770, and an input/output (I/O) interface 780, coupled with each other as illustrated.
The processing unit 730 may include circuitry, such as, but not limited to, one or more single-core or multi-core processors. The processors may include any combinations of general-purpose processors and dedicated processors, such as graphics processors and application processors. The processors may be coupled with the memory/storage and configured to execute instructions stored in the memory/storage to enable various applications and/or operating systems running on the system.
The radio control functions may include, but are not limited to, signal modulation, encoding, decoding, radio frequency shifting, etc. In some embodiments, the baseband circuitry may provide for communication compatible with one or more radio technologies. For example, in some embodiments, the baseband circuitry may support communication with 5G NR, LTE, an evolved universal terrestrial radio access network (EUTRAN) and/or other wireless metropolitan area networks (WMAN) , a wireless local area network (WLAN) , a wireless personal area network (WPAN) . Embodiments in which the baseband  circuitry is configured to support radio communications of more than one wireless protocol may be referred to as multi-mode baseband circuitry. In various embodiments, the baseband circuitry 720 may include circuitry to operate with signals that are not strictly considered as being in a baseband frequency. For example, in some embodiments, baseband circuitry may include circuitry to operate with signals having an intermediate frequency, which is between a baseband frequency and a radio frequency.
In various embodiments, the system 700 may be a mobile computing device such as, but not limited to, a laptop computing device, a tablet computing device, a netbook, an ultrabook, a smartphone, etc. In various embodiments, the system may have more or fewer components, and/or different architectures. Where appropriate, the methods described herein may be implemented as a computer program. The computer program may be stored on a storage medium, such as a non-transitory storage medium.
The embodiment of the present disclosure is a combination of techniques/processes that can be adopted in 3GPP specification to create an end product.
If the software function unit is realized and is used and sold as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, the technical solution proposed by the present disclosure can be realized essentially or partially in the form of a software product, or the part of the technical solution that is beneficial over the conventional technology can be realized in the form of a software product. The software product is stored in a storage medium and includes a plurality of commands for a computational device (such as a personal computer, a server, or a network device) to run all or some of the steps disclosed by the embodiments of the present disclosure. The storage medium includes a USB disk, a mobile hard disk, a read-only memory (ROM) , a random-access memory (RAM) , a floppy disk, or other kinds of media capable of storing program codes.
This disclosure presents a new framework that uses AI/ML models to measure Channel State Information (CSI) . The framework covers reference signal configuration, CSI measurement, and reporting. Our solution has several benefits, such as lower signaling overhead, scalability, efficient resource allocation for different model requirements, and the integration of one-sided and two-sided model procedures into one process with an AI model-ID.
While the present disclosure has been described in connection with what is considered the most practical and preferred embodiments, it is understood that the present disclosure is not limited to the disclosed embodiments but is intended to cover various arrangements made without departing from the scope of the broadest interpretation of the appended claims.

Claims (49)

  1. An artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) , executed by a base station, comprising:
    receiving AI/ML-based CSI processing capability reported by a user equipment (UE) ; and
    determining AI/ML-based CSI measurement configuration based on the AI/ML-based CSI processing capability, wherein the AI/ML-based CSI measurement configuration comprises one or more of a configuration for CSI resource scheduling, a configuration for CSI reporting, and a configuration for CSI measurement triggering.
  2. The method of claim 1, wherein determining AI/ML-based CSI measurement configuration based on the AI/ML-based CSI processing capability comprises:
    AI/ML-based CSI processing resources occupied by the AI/ML-based CSI measurement configuration are less than or equal to the AI/ML-based CSI processing capability;
    a weighted value of the AI/ML-based CSI processing resources occupied by the AI/ML-based CSI measurement configuration is less than or equal to the AI/ML-based CSI processing capability; or
    the AI/ML-based CSI processing resources occupied by the AI/ML-based CSI measurement configuration and CSI processing resources occupied by the non-AI/ML-based CSI measurement are less than or equal to processing capability of the UE.
  3. The method of claim 1, wherein the AI/ML-based CSI processing capability of the UE comprises one or more of:
    a number of AI/ML-based CSI processing units (CPUs) of the UE,
    a number of AI/ML-based CSI processing processes of the UE,
    a number of AI/ML-based CSI memory units of the UE,
    a number of AI/ML-based CSI computation units of the UE, and
    a number of AI/ML-based CSI power consumption units of the UE.
  4. The method of claim 3, wherein the AI/ML-based CSI processing capability of the UE is determined based on one or more of:
    subcarrier spacing (SCS) ,
    AI/ML-based CSI parameter information, and
    AI/ML-based CSI model features.
  5. The method of claim 2, wherein the AI/ML-based CSI processing resources comprise one or more of:
    a number of AI/ML-based CSI processing units (CPUs) of the UE occupied by the AI/ML-based CSI measurement configuration;
    a number of AI/ML-based CSI processing processes of the UE occupied by the AI/ML-based CSI measurement configuration;
    a number of AI/ML-based CSI memory units of the UE occupied by the AI/ML-based CSI measurement configuration;
    a number of AI/ML-based CSI computation units of the UE occupied by the AI/ML-based CSI measurement configuration; and
    a number of AI/ML-based CSI power consumption units of the UE occupied by the AI/ML-based CSI measurement configuration.
  6. The method of claim 2, wherein the AI/ML-based CSI processing resources are determined based on one  or more of:
    subcarrier spacing (SCS) ;
    AI/ML-based CSI parameter information; and
    CSI AI/ML model features.
  7. The method of claim 2, wherein the AI/ML-based CSI processing resources of the UE are obtained through one or more of the following:
    the AI/ML-based CSI processing resources of the UE occupied by the AI/ML-based CSI measurement configuration are preconfigured or predefined by the base station; and
    the base station receives a message from the UE, wherein the message comprises one or more of:
    the AI/ML-based CSI processing resources of the UE;
    an indicator showing that the AI/ML-based CSI processing resources and non-AI/ML-based CSI processing resources are separated.
  8. The method of claim 2, wherein the AI/ML-based CSI processing resources of the UE are obtained through one or more of the following:
    the AI/ML-based CSI processing resources of the UE occupied by the AI/ML-based CSI measurement configuration are preconfigured or predefined by the base station; and
    the base station receives a message from the UE, wherein the message comprises one or more of:
    the AI/ML-based CSI processing resources of the UE;
    an indicator showing that the AI/ML-based CSI processing resources and non-AI/ML-based CSI processing resources are one common CSI processing unit (CPU) pool.
  9. A base station comprising:
    a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the method of any of claims 1 to 8.
  10. A chip, comprising:
    a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the method of any of claims 1 to 8.
  11. A computer-readable storage medium, in which a computer program is stored, wherein the computer program causes a computer to execute the method of any of claims 1 to 8.
  12. A computer program product, comprising a computer program, wherein the computer program causes a computer to execute the method of any of claims 1 to 8.
  13. An artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) , executed by a user equipment (UE) , comprising:
    reporting AI/ML-based CSI processing capability by the user equipment (UE) ;
    receiving AI/ML-based CSI measurement configuration from a base station, wherein the AI/ML-based CSI measurement configuration is based on the AI/ML-based CSI processing capability; and
    performing a first operation according to the AI/ML-based CSI measurement configuration;
    wherein the AI/ML-based CSI measurement configuration comprises one or more of a configuration for CSI resource scheduling, a configuration for CSI reporting, and a configuration for CSI measurement triggering.
  14. The method of claim 13, wherein the AI/ML-based CSI measurement configuration being based on the AI/ML-based CSI processing capability comprises:
    AI/ML-based CSI processing resources occupied by the AI/ML-based CSI measurement configuration are less than or equal to the AI/ML-based CSI processing capability; or
    a weighted value of the AI/ML-based CSI processing resources occupied by the AI/ML-based CSI measurement configuration is less than or equal to the AI/ML-based CSI processing capability; or
    the AI/ML-based CSI processing resources occupied by the AI/ML-based CSI measurement configuration and CSI processing resources occupied by the non-AI/ML-based CSI measurement are less than or equal to CSI processing capability of the UE.
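As a non-normative illustration of the three alternative conditions in claim 14, the short Python sketch below expresses each comparison; modeling the resources and capabilities as plain floating-point quantities is an assumption made here for readability, not something the claim specifies.

```python
def fits_plain(occupied_ai_ml: float, ai_ml_capability: float) -> bool:
    # Alternative 1: occupied AI/ML-based CSI resources do not exceed the capability.
    return occupied_ai_ml <= ai_ml_capability

def fits_weighted(occupied_ai_ml: float, weight: float, ai_ml_capability: float) -> bool:
    # Alternative 2: a weighted value of the occupied resources is compared instead.
    return weight * occupied_ai_ml <= ai_ml_capability

def fits_combined(occupied_ai_ml: float, occupied_non_ai_ml: float,
                  total_csi_capability: float) -> bool:
    # Alternative 3: AI/ML and non-AI/ML occupancy are checked jointly against
    # the UE's overall CSI processing capability.
    return occupied_ai_ml + occupied_non_ai_ml <= total_csi_capability
```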
  15. The method of claim 13, wherein the AI/ML-based CSI processing capability of the UE comprises one or more of:
    a number of AI/ML-based CSI processing units (CPUs) of the UE;
    a number of AI/ML-based CSI processing processes of the UE;
    a number of AI/ML-based CSI memory units of the UE;
    a number of AI/ML-based CSI computation units of the UE; and
    a number of AI/ML-based CSI power consumption units of the UE.
  16. The method of claim 15, wherein the AI/ML-based CSI processing capability of the UE is determined based on one or more of:
    subcarrier spacing (SCS) ;
    AI/ML-based CSI parameter information; and
    CSI AI/ML model features.
  17. The method of claim 14, wherein the AI/ML-based CSI processing resources comprise one or more of:
    a number of AI/ML-based CSI processing units (CPUs) of the UE occupied by the AI/ML-based CSI measurement configuration;
    a number of AI/ML-based CSI processing processes of the UE occupied by the AI/ML-based CSI measurement configuration;
    a number of AI/ML-based CSI memory units of the UE occupied by the AI/ML-based CSI measurement configuration;
    a number of AI/ML-based CSI computation units of the UE occupied by the AI/ML-based CSI measurement configuration; and
    a number of AI/ML-based CSI power consumption units of the UE occupied by the AI/ML-based CSI measurement configuration.
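Purely as an illustration of the resource types enumerated in claims 15 and 17, here is a small Python container for the five counters; the field names and integer typing are assumptions, not claim language.

```python
from dataclasses import dataclass

@dataclass
class AiMlCsiResources:
    """Counters for the AI/ML-based CSI processing resources of claims 15 and 17."""
    processing_units: int         # AI/ML-based CSI processing units (CPUs)
    processing_processes: int     # concurrent AI/ML-based CSI processing processes
    memory_units: int             # AI/ML-based CSI memory units
    computation_units: int        # AI/ML-based CSI computation units
    power_consumption_units: int  # AI/ML-based CSI power consumption units
```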
  18. The method of claim 17, wherein the AI/ML-based CSI processing resources are determined based on one or more of:
    subcarrier spacing (SCS) ;
    AI/ML-based CSI parameter information; and
    CSI AI/ML model features.
  19. The method of claim 17, wherein the AI/ML-based CSI processing resources of the UE are obtained through one or more of the following:
    the AI/ML-based CSI processing resources of the UE occupied by the AI/ML-based CSI measurement configuration are preconfigured or predefined; and
    a base station receives a message from the UE, wherein the message comprises one or more of:
    the AI/ML-based CSI processing resources of the UE;
    an indicator showing that the AI/ML-based CSI processing resources and non-AI/ML-based CSI processing resources are separated.
  20. The method of claim 17, wherein the AI/ML-based CSI processing resources of the UE are obtained through one or more of the following:
    the AI/ML-based CSI processing resources of the UE occupied by the AI/ML-based CSI measurement configuration are preconfigured or predefined; and
    a base station receives a message from the UE, wherein the message comprises one or more of:
    the AI/ML-based CSI processing resources of the UE;
    an indicator showing that the AI/ML-based CSI processing resources and non-AI/ML-based CSI processing resources are one common CSI processing unit (CPU) pool.
  21. The method of claim 13, wherein, when the AI/ML-based CSI measurement configuration exceeds the AI/ML-based CSI processing capability of the UE, the UE discards the AI/ML-based CSI measurement configuration that exceeds the AI/ML-based CSI processing capability of the UE; and
    when the AI/ML-based CSI measurement configuration is less than or equal to the AI/ML-based CSI processing capability of the UE, the UE performs the first operation according to the AI/ML-based CSI measurement configuration.
  22. The method of claim 13, wherein, when a weighted sum calculated from the AI/ML-based CSI measurement configuration and weights of the AI/ML-based CSI measurement configuration exceeds the AI/ML-based CSI processing capability of the UE, the UE discards the AI/ML-based CSI measurement configuration that exceeds the AI/ML-based CSI processing capability of the UE; and
    when the weighted sum is less than or equal to the AI/ML-based CSI processing capability of the UE, the UE performs the first operation according to the AI/ML-based CSI measurement configuration.
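The UE-side behavior of claims 21 and 22 can be pictured with the following hedged Python sketch, which treats each configuration as a scalar processing demand with an associated weight; how a real UE would quantize demands and weights is an assumption here, not something the claims specify.

```python
def select_configs(demands: list[float], weights: list[float],
                   capability: float) -> list[int]:
    """Return indices of AI/ML-based CSI measurement configurations the UE acts on;
    any configuration whose (weighted) demand would push the total past the
    AI/ML-based CSI processing capability is discarded (claims 21 and 22).
    Pass weights of 1.0 for the unweighted comparison of claim 21."""
    accepted: list[int] = []
    used = 0.0
    for i, (demand, weight) in enumerate(zip(demands, weights)):
        weighted = weight * demand
        if used + weighted > capability:
            continue           # exceeds the UE capability: configuration discarded
        used += weighted
        accepted.append(i)     # within capability: the first operation is performed
    return accepted
```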
  23. A user equipment (UE) comprising:
    a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the method of any of claims 13 to 22.
  24. A chip, comprising:
    a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the method of any of claims 13 to 22.
  25. A computer-readable storage medium, in which a computer program is stored, wherein the computer program causes a computer to execute the method of any of claims 13 to 22.
  26. A computer program product, comprising a computer program, wherein the computer program causes a computer to execute the method of any of claims 13 to 22.
  27. A computer program, wherein the computer program causes a computer to execute the method of any of claims 13 to 22.
  28. An artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) , executed by a user equipment (UE) , comprising:
    receiving AI/ML-based CSI measurement configuration and AI/ML-based CSI reference signals;
    wherein a time interval from reception of the AI/ML-based CSI measurement configuration to CSI reporting is less than or equal to a first processing time, or a time interval from reception of the AI/ML-based CSI reference signals to the CSI reporting is less than or equal to a second processing time; and
    the first processing time and the second processing time are determined by a UE capability of AI/ML-based CSI processing.
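To make the two timing constraints of claim 28 concrete, here is a minimal Python sketch that computes the reporting instants implied by the first and second processing times; representing time as floating-point values and reading the claim's "or" as "either constraint suffices" are illustrative assumptions.

```python
def csi_report_deadlines(t_config_rx: float, t_rs_rx: float,
                         first_processing_time: float,
                         second_processing_time: float) -> tuple[float, float]:
    """Latest reporting instants under each constraint of claim 28:
    (a) interval from configuration reception to reporting <= first processing time;
    (b) interval from reference-signal reception to reporting <= second processing time."""
    deadline_from_config = t_config_rx + first_processing_time
    deadline_from_rs = t_rs_rx + second_processing_time
    return deadline_from_config, deadline_from_rs
```

Reporting no later than either returned instant satisfies the corresponding constraint.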
  29. The method of claim 28, wherein the first processing time comprises one or more of:
    a duration required for decoding the AI/ML-based CSI measurement configuration;
    a duration required for AI/ML model activation;
    a duration required for AI/ML model switching;
    a duration required for AI/ML model inferring; and
    a duration required for UE antenna switching.
  30. The method of claim 28, wherein the second processing time comprises one or more of:
    a portion or the entirety of a duration required for AI/ML model activation;
    a duration required for AI/ML model switching; and
    a duration required for AI/ML model inferring.
  31. The method of claim 28, wherein the first processing time and/or the second processing time is determined by one or more of:
    AI/ML-based CSI measurement data;
    AI/ML-based CSI measurement features or functions;
    AI/ML-based CSI measurement time;
    AI/ML model parameters for CSI measurement;
    a duration required for AI/ML model activation;
    a duration required for AI/ML model switching;
    a duration required for AI/ML model inferring; and
    subcarrier spacing (SCS) .
  32. The method of claim 28, wherein the first processing time and/or the second processing time is obtained through one or more of:
    predefinition or pre-configuration;
    an empirical value obtained from statistics of time spent on performing said AI/ML-based CSI measurement configuration; and
    a value of the first processing time and/or the second processing time conveyed in an uplink message that is transmitted in a UE capability report, a scheduling request, physical uplink shared channel (PUSCH) , or physical uplink control channel (PUCCH) .
  33. The method of claim 30, wherein the second processing time comprises a portion or the entirety of the duration required for AI/ML model activation, wherein:
    if the AI/ML model activation begins after the reception of the AI/ML-based CSI reference signals, the second processing time comprises the entirety of the duration required for AI/ML model activation;
    if the AI/ML model activation begins before the reception of the AI/ML-based CSI reference signals and completes after the reception of the AI/ML-based CSI reference signals, the second processing time comprises a portion of the duration required for AI/ML model activation; and
    if the AI/ML model activation completes before the reception of the AI/ML-based CSI reference signals, the second processing time does not include the duration required for AI/ML model activation.
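The three cases of claim 33 reduce to a simple overlap computation. The Python sketch below (timestamps as floats, function and argument names hypothetical) shows how much of the AI/ML model activation duration would be counted inside the second processing time.

```python
def activation_time_counted(activation_start: float, activation_end: float,
                            rs_reception: float) -> float:
    """Portion of the AI/ML model activation duration included in the second
    processing time, relative to reception of the AI/ML-based CSI reference signals."""
    if activation_start >= rs_reception:
        # Activation begins after the reference signals: the entire duration counts.
        return activation_end - activation_start
    if activation_end > rs_reception:
        # Activation straddles the reference-signal reception: only the tail counts.
        return activation_end - rs_reception
    # Activation completed before the reference signals: nothing is counted.
    return 0.0
```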
  34. A user equipment (UE) comprising:
    a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the method of any of claims 28 to 33.
  35. A chip, comprising:
    a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the method of any of claims 28 to 33.
  36. A computer-readable storage medium, in which a computer program is stored, wherein the computer program causes a computer to execute the method of any of claims 28 to 33.
  37. A computer program product, comprising a computer program, wherein the computer program causes a computer to execute the method of any of claims 28 to 33.
  38. A computer program, wherein the computer program causes a computer to execute the method of any of claims 28 to 33.
  39. An artificial intelligence (AI) /machine learning (ML) -based method for processing channel state information (CSI) , executed by a base station, comprising:
    transmitting, by the base station, AI/ML-based CSI measurement configuration and AI/ML-based CSI reference signals, so that a user equipment (UE) performs CSI measurement based on the AI/ML-based CSI measurement configuration and the AI/ML-based CSI reference signals, wherein a time interval from reception of the AI/ML-based CSI measurement configuration to CSI reporting is less than or equal to a first processing time, or a time interval from reception of the AI/ML-based CSI reference signals to the CSI reporting is less than or equal to a second processing time, and the first processing time and the second processing time are determined by a UE capability of AI/ML-based CSI processing; and
    receiving a result of the CSI reporting.
  40. The method of claim 39, wherein the first processing time comprises one or more of:
    a duration required for decoding the AI/ML-based CSI measurement configuration;
    a duration required for AI/ML model activation;
    a duration required for AI/ML model switching;
    a duration required for AI/ML model inferring; and
    a duration required for UE antenna switching.
  41. The method of claim 39, wherein the second processing time comprises one or more of:
    a portion or the entirety of a duration required for AI/ML model activation;
    a duration required for AI/ML model switching; and
    a duration required for AI/ML model inferring.
  42. The method of claim 39, wherein the first processing time and/or the second processing time is determined by one or more of:
    AI/ML-based CSI measurement data;
    AI/ML-based CSI measurement features or functions;
    AI/ML-based CSI measurement time;
    AI/ML model parameters for CSI measurement;
    a duration required for AI/ML model activation;
    a duration required for AI/ML model switching;
    a duration required for AI/ML model inferring; and
    subcarrier spacing (SCS) .
  43. The method of claim 39, wherein the first processing time and/or the second processing time is obtained through one or more of:
    predefinition or pre-configuration;
    an empirical value obtained from statistics of time spent on performing said AI/ML-based CSI measurement configuration; and
    a value of the first processing time and/or the second processing time conveyed in an uplink message that is transmitted in a UE capability report, a scheduling request, physical uplink shared channel (PUSCH) , or physical uplink control channel (PUCCH) .
  44. The method of claim 41, wherein the second processing time comprises a portion or the entirety of the duration required for AI/ML model activation, wherein:
    if the AI/ML model activation begins after the reception of the AI/ML-based CSI reference signals, the second processing time comprises the entirety of the duration required for AI/ML model activation;
    if the AI/ML model activation begins before the reception of the AI/ML-based CSI reference signals and completes after the reception of the AI/ML-based CSI reference signals, the second processing time comprises a portion of the duration required for AI/ML model activation; and
    if the AI/ML model activation completes before the reception of the AI/ML-based CSI reference signals, the second processing time does not include the duration required for AI/ML model activation.
  45. A base station comprising:
    a processor configured to call and run a computer program stored in a memory, to cause a device in which the processor is installed to execute the method of any of claims 39 to 44.
  46. A chip, comprising:
    a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the method of any of claims 39 to 44.
  47. A computer-readable storage medium, in which a computer program is stored, wherein the computer program causes a computer to execute the method of any of claims 39 to 44.
  48. A computer program product, comprising a computer program, wherein the computer program causes a computer to execute the method of any of claims 39 to 44.
  49. A computer program, wherein the computer program causes a computer to execute the method of any of claims 39 to 44.
PCT/CN2023/119548 2023-09-18 2023-09-18 Ai/ml-based method for processing channel state information and wireless communication device Pending WO2025059827A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2023/119548 WO2025059827A1 (en) 2023-09-18 2023-09-18 Ai/ml-based method for processing channel state information and wireless communication device

Publications (1)

Publication Number Publication Date
WO2025059827A1 (en) 2025-03-27

Family

ID=95073189

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/119548 Pending WO2025059827A1 (en) 2023-09-18 2023-09-18 Ai/ml-based method for processing channel state information and wireless communication device

Country Status (1)

Country Link
WO (1) WO2025059827A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180062813A1 (en) * 2012-09-28 2018-03-01 Huawei Technologies Co., Ltd. Channel-state information process processing method, network device, and user equipment
US20210328630A1 (en) * 2020-04-16 2021-10-21 Qualcomm Incorporated Machine learning model selection in beamformed communications
US20220302977A1 (en) * 2020-10-14 2022-09-22 Apple Inc. UE Capability-Based CSI Report Configuration
US20230128145A1 (en) * 2021-10-21 2023-04-27 Apple Inc. Predictive csi enhancements for high speed scenarios
WO2023081187A1 (en) * 2021-11-03 2023-05-11 Interdigital Patent Holdings, Inc. Methods and apparatuses for multi-resolution csi feedback for wireless systems

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application
Ref document number: 23952493
Country of ref document: EP
Kind code of ref document: A1