
WO2025060324A1 - AI/ML framework for communication - Google Patents


Info

Publication number
WO2025060324A1
Authority
WO
WIPO (PCT)
Prior art keywords
function
sensing
model
data
receiving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2024/075274
Other languages
French (fr)
Inventor
Hao Tang
Jianglei Ma
Peiying Zhu
Wen Tong
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of WO2025060324A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/16 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, using machine learning or artificial intelligence

Definitions

  • Example embodiments of the present disclosure generally relate to the field of communications, and in particular, to an artificial intelligence/machine learning (AI/ML) functional framework for communication.
  • Existing communication techniques, which rely on classical analytical modeling of channels, have enabled wireless communications to take place close to the theoretical Shannon limit.
  • In some scenarios, however, existing techniques may be unsatisfactory.
  • AI is expected to help address this challenge.
  • Other aspects of wireless communication may benefit from the use of AI, particularly in future generations of wireless technologies, such as technologies in advanced 5G and future 6G systems, and beyond.
  • The network is expected to provide AI service(s).
  • Some embodiments of the disclosure propose designs for an AI/ML framework with sensing functionalities, including sensing for AI/ML (sensing improves AI/ML performance) and AI/ML for sensing (AI/ML improves sensing performance).
  • example embodiments of the present disclosure provide a solution for an AI/ML functional framework for communication, especially for a 6G AI/ML framework with sensing.
  • a method comprising: performing at least one operation based on an artificial intelligence/machine learning (AI/ML) functional framework, wherein the AI/ML functional framework comprises: a first function configured to perform model training of at least one of an AI/ML model, an AI/ML sub-model, an AI/ML functionality or an AI/ML sub-functionality; a second function configured to perform management of the AI/ML model; a third function configured to perform inference of the AI/ML model to obtain inference results; a fourth function configured to store the AI/ML model; and at least one function configured to operate based on sensing data.
  • Such a framework provides sensing functionalities, including sensing for AI/ML and AI/ML for sensing; a hypothetical sketch of the core functions follows below.
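For illustration only: the source defines no code, so the following Python sketch of the first through fourth functions plus a sensing-based function uses entirely hypothetical class, method, and field names.

```python
# Hypothetical sketch of the AI/ML functional framework; not from the source.
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class TrainingFunction:
    """First function: trains an AI/ML model, sub-model, or (sub-)functionality."""

    def train(self, training_data: List[Any]) -> Dict[str, Any]:
        # A real implementation would run an ML training loop here.
        return {"model_id": "m1", "version": 1, "weights": []}


@dataclass
class ManagementFunction:
    """Second function: controls training/inference and monitors model output."""
    threshold: float = 0.9  # performance level below this triggers retraining

    def needs_retraining(self, performance_level: float) -> bool:
        return performance_level < self.threshold


@dataclass
class InferenceFunction:
    """Third function: runs the AI/ML model to obtain inference results."""
    model: Dict[str, Any] = field(default_factory=dict)

    def infer(self, sample: Any) -> Dict[str, Any]:
        return {"input": sample, "output": None}  # placeholder result


@dataclass
class StorageFunction:
    """Fourth function: stores AI/ML models for later retrieval."""
    models: Dict[str, Dict[str, Any]] = field(default_factory=dict)

    def save(self, model: Dict[str, Any]) -> None:
        self.models[model["model_id"]] = model


@dataclass
class SensingDataFunction:
    """A function that operates based on sensing data (e.g. collects it)."""

    def collect(self) -> List[float]:
        return []  # e.g. RF sensing measurements
```

A deployment would wire these together, for example with training output flowing into storage and storage feeding inference, as the bullets below describe.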
  • the first function is further configured to perform at least one of the following: validation of the AI/ML model; testing of the AI/ML model; or data preparation based on data received by the first function.
  • the first function can provide a more accurate AI/ML model, which in turn can provide more accurate inference results, thus the reliability of the AI/ML model can be improved.
  • the second function is further configured to at least one of the following: perform control of the model training of the at least one of AI/ML model, AI/ML sub-model, AI/ML functionality or AI/ML sub-functionality; perform control of the inference of the AI/ML model; or monitor output of the AI/ML model.
  • the second function can facilitate the first function to provide a more accurate AI/ML model, which in turn can provide more accurate inference results, thus the reliability of the AI/ML model can be improved.
  • the third function is further configured to at least one of the following: perform an action based on the inference results; or perform data preparation based on data received by the third function. In this way, the third function can perform the action based on the inference results of the AI/ML model, improving the processing efficiency and reliability with the AI/ML model.
  • the at least one operation comprises at least one of the following operations performed by the first function: transmitting the trained AI/ML model to the fourth function, receiving AI/ML assistance information from the second function, or receiving, from the second function, a performance level of the AI/ML model and a request to retrain the AI/ML model.
  • the first function can provide a more accurate (re) trained AI/ML model based on the AI/ML assistance information and/or the performance level of the AI/ML model.
  • the (re) trained AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the (re) trained AI/ML model can be improved.
  • the at least one operation comprises the following operations performed by the second function: receiving the inference results from the third function.
  • the second function can facilitate the first function to provide a more accurate retrained/updated AI/ML model based on the inference results of the current AI/ML model.
  • the retrained/updated AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the AI/ML model can be improved.
  • the at least one operation further comprises the following operations performed by the second function: determining that a performance level of the AI/ML model is below a threshold level based on the inference results received from the third function; and based on determining that the performance level is below the threshold level, transmitting, to the first function, the performance level of the AI/ML model and a request to retrain the AI/ML model.
  • the second function can request the first function to retrain the AI/ML model in response to the performance level of the currently used AI/ML model falling below a threshold level.
  • the second function can facilitate the first function to provide a more accurate retrained/updated AI/ML model based on the inference results of the current AI/ML model.
  • the retrained/updated AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model.
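As a hedged sketch of the monitoring flow just described: the metric, threshold value, and function names below are assumptions for illustration, not taken from the source.

```python
# Illustrative only: the second function derives a performance level from the
# third function's inference results and, when it drops below a threshold,
# sends that level together with a retraining request to the first function.
from typing import List


def performance_level(results: List[float], truth: List[float]) -> float:
    """Toy metric: fraction of inference results within 0.1 of ground truth."""
    hits = sum(abs(r - t) <= 0.1 for r, t in zip(results, truth))
    return hits / max(len(truth), 1)


def monitor(results: List[float], truth: List[float],
            threshold: float, request_retrain) -> float:
    level = performance_level(results, truth)
    if level < threshold:
        # Transmit the performance level and a retrain request (first function).
        request_retrain(level)
    return level


# Example: only 1 of 3 results is close enough, so 0.33 < 0.9 triggers a retrain.
monitor([0.95, 0.40, 0.10], [1.0, 1.0, 1.0], 0.9,
        lambda lvl: print(f"retrain requested, performance={lvl:.2f}"))
```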
  • the at least one operation comprises at least one of the following operations performed by the second function: transmitting AI/ML assistance information to the first function, transmitting, to the third function, a switching indication to switch from the AI/ML model to another AI/ML model; transmitting, to the third function, a fallback indication to apply a non-AI/ML model instead of the AI/ML model; transmitting, to the third function, an activating indication to activate one or more of a plurality of candidate AI/ML models; or transmitting, to the third function, a deactivating indication to deactivate one or more of the plurality of candidate AI/ML models.
  • the second function can provide the AI/ML assistance information to the first function to obtain a more accurate (re) trained AI/ML model based on the AI/ML assistance information.
  • the retrained/updated AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model.
  • the second function can change/switch/(de)select a desired AI/ML model for future use, improving the flexibility in management of the third function and further the whole AI/ML functional framework.
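The four indications (switching, fallback, activation, deactivation) could be modeled as below; the message shape and handler logic are assumptions for illustration.

```python
# Hypothetical encoding of the management indications sent from the second
# function to the third function; field names are not from the source.
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Optional


class IndicationType(Enum):
    SWITCH = auto()       # switch to another AI/ML model
    FALLBACK = auto()     # apply a non-AI/ML (classical) model instead
    ACTIVATE = auto()     # activate candidate AI/ML model(s)
    DEACTIVATE = auto()   # deactivate candidate AI/ML model(s)


@dataclass
class ManagementIndication:
    kind: IndicationType
    target_model_id: Optional[str] = None            # for SWITCH
    candidate_model_ids: Optional[List[str]] = None  # for (DE)ACTIVATE


def apply_indication(ind: ManagementIndication, active: set) -> set:
    """Toy handler at the third function for an indication from the second."""
    if ind.kind is IndicationType.SWITCH and ind.target_model_id:
        return {ind.target_model_id}
    if ind.kind is IndicationType.FALLBACK:
        return {"non-ai-ml"}  # classical processing path
    if ind.kind is IndicationType.ACTIVATE:
        return active | set(ind.candidate_model_ids or [])
    if ind.kind is IndicationType.DEACTIVATE:
        return active - set(ind.candidate_model_ids or [])
    return active


active = apply_indication(
    ManagementIndication(IndicationType.ACTIVATE, candidate_model_ids=["m2"]),
    {"m1"},
)
assert active == {"m1", "m2"}
```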
  • the at least one operation comprises the following operation performed by the second function: transmitting, to the fourth function, a request that the fourth function transmits the AI/ML model to the third function.
  • the second function can cause the (re)trained AI/ML model to be transmitted to the third function for future use, while the retrained/updated AI/ML model can provide more accurate inference results than the currently used AI/ML model at the third function. Therefore, the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model.
  • the retrained/updated AI/ML model can, in turn, provide more accurate inference results as compared with the currently used AI/ML model at the third function, thus the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model.
  • the RF sensing is one of: 3rd generation partnership project (3GPP) defined RF sensing, or non-3GPP defined RF sensing.
  • In this way, sensing data can be collected through RF sensing, for example, either 3GPP defined RF sensing or non-3GPP defined RF sensing.
  • the at least one function further comprises: a ninth function configured to collect the sensing data; and a tenth function configured to obtain fused data based on the non-sensing data and the sensing data.
  • In this way, fused data can be obtained that is more accurate than either the non-sensing data or the sensing data alone, and smaller in quantity than the sum of the two.
  • the at least one function further comprises at least one of the following: an eleventh function configured to obtain a sensing model or a sensing result; a twelfth function configured to perform management of the sensing model or sensing result; or a thirteenth function configured to assist communication or determine an event based on the sensing model or sensing result.
  • the at least one function further comprises: a fourteenth function configured to store the sensing model or the sensing result.
  • the sensing model can be stored in the fourteenth function which is separate from the fourth function, and the operations involving the storage and retrieval of the AI/ML model and the sensing model can be performed separately in a decoupled manner.
  • the at least one operation comprises at least one of the following operations performed by the first function: receiving first input data from at least one of the fifth function, the ninth function or the tenth function.
  • a (re) trained AI/ML model can be (re) trained with the first input data as the training data. Since the first input data is from at least one of the fifth function, the ninth function or the tenth function, which implies the first input data may include non-sensing data and sensing data, AI/ML functionalities of the AI/ML functional framework can be enhanced by the non-sensing data and the sensing data.
  • the training process of the (re)trained AI/ML model can be shortened and the (re)trained AI/ML model can be more accurate.
  • the at least one operation comprises the following operation performed by the second function: receiving second input data from at least one of the fifth function, the ninth function or the tenth function.
  • the second function can perform management of the AI/ML model based on the second input data. Since the second input data is from at least one of the fifth function, the ninth function or the tenth function, which implies the second input data may include non-sensing data and sensing data, AI/ML functionalities of the AI/ML functional framework can be enhanced by the non-sensing data and the sensing data. At the same time, with the large-quantity sensing data, the management of the AI/ML model can be more efficient and accurate.
  • the at least one operation comprises the following operation performed by the third function: receiving third input data from at least one of the fifth function, the ninth function or the tenth function.
  • the third function can perform inference of the AI/ML model based on the third input data. Since the third input data is from at least one of the fifth function, the ninth function or the tenth function, it may include non-sensing data and sensing data, which can be utilized by the third function to perform inference of the AI/ML model more accurately and reliably.
  • the at least one operation comprises the following operation performed by the fifth function: transmitting the non-sensing data to at least one of the first function, the second function or the third function, and at least one of the eleventh function, the twelfth function or the thirteenth function.
  • the non-sensing data can be utilized by the first function to train the AI/ML model to obtain a more accurate AI/ML model.
  • the non-sensing data can help the second function to manage the AI/ML model more reliably and help the third function to perform inference of the AI/ML model more accurately and thus reliably.
  • the non-sensing data can be utilized by the eleventh function to train the sensing model to obtain a more accurate sensing model.
  • the non-sensing data can help the twelfth function to manage the sensing model more reliably and help the thirteenth function to perform inference of the sensing model more accurately and thus reliably.
  • the at least one operation comprises the following operation performed by the ninth function: transmitting the sensing data to at least one of the first function, the second function or the third function, and at least one of the eleventh function, the twelfth function or the thirteenth function.
  • the sensing data can be utilized by the first function to train the AI/ML model to obtain a more accurate AI/ML model.
  • the sensing data can help the second function to manage the AI/ML model more reliably and help the third function to perform inference of the AI/ML model more accurately and thus reliably.
  • the sensing data can facilitate the first function, second function and third function to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework. Further, the sensing data can be utilized by the eleventh function to train the sensing model to obtain a more accurate sensing model. At the same time, the sensing data can help the twelfth function to manage the sensing model more reliably and help the thirteenth function to perform inference of the sensing model more accurately and thus reliably.
  • the at least one operation comprises the following operations performed by the tenth function: receiving the non-sensing data from the sixth function, receiving the sensing data from the ninth function, and performing data processing on the received non-sensing data and sensing data to obtain the fused data.
  • In this way, fused data can be obtained that is more accurate than either the non-sensing data or the sensing data alone, and smaller in quantity than the sum of the two.
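A minimal sketch of the tenth function's fusion step follows, assuming records are aligned by timestamp; the alignment rule and field names are illustrative assumptions.

```python
# Illustrative fusion at the tenth function: non-sensing and sensing records
# sharing a timestamp are merged into fewer, richer records.
from typing import Dict, List


def fuse(non_sensing: List[Dict], sensing: List[Dict]) -> List[Dict]:
    """Merge records by timestamp; output size <= len(non_sensing) + len(sensing)."""
    by_ts = {rec["ts"]: dict(rec) for rec in non_sensing}
    for rec in sensing:
        merged = by_ts.setdefault(rec["ts"], {"ts": rec["ts"]})
        merged.update(rec)  # sensing fields enrich the record
    return sorted(by_ts.values(), key=lambda r: r["ts"])


fused = fuse(
    [{"ts": 1, "cqi": 12}, {"ts": 2, "cqi": 9}],               # non-sensing data
    [{"ts": 1, "range_m": 40.2}, {"ts": 3, "range_m": 41.0}],  # sensing data
)
assert len(fused) <= 2 + 2  # fused data is less in quantity than the sum
```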
  • the at least one operation further comprises the following operation performed by the tenth function: transmitting the fused data to at least one of the first function, the second function or the third function, and at least one of the eleventh function, the twelfth function or the thirteenth function.
  • the fused data then can be utilized by the first function to train the AI/ML model to obtain a more accurate AI/ML model.
  • the fused data can help the second function to manage the AI/ML model more reliably and help the third function to perform inference of the AI/ML model more accurately and thus reliably.
  • the fused data can facilitate the first function, second function and third function to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework. Further, the fused data then can be utilized by the eleventh function to train the sensing model to obtain a more accurate sensing model. At the same time, the fused data can help the twelfth function to manage the sensing model more reliably and help the thirteenth function to perform inference of the sensing model more accurately and thus reliably.
  • the eleventh function is further configured to at least one of the following: perform data processing based on fourth input data obtained from at least two of the fifth function, the ninth function or the tenth function. In this way, based on the fourth input data as the training data for the sensing model, the eleventh function can train the sensing model more accurately.
  • the model training of the at least one of sensing model, sensing sub-model, sensing functionality or sensing sub-functionality comprises at least one of the following: environment reconstruction, channel reconstruction, target reconstruction, digital twin, or object detection. In this way, the sensing model can be trained more accurately.
  • the twelfth function is further configured to at least one of the following: perform control of the model training of the at least one of sensing model, sensing sub-model, sensing functionality or sensing sub-functionality; perform control of the inference of the sensing model; or monitor output of the sensing model.
  • the twelfth function can facilitate the eleventh function to provide a more accurate sensing model, which can produce more accurate sensing inference results, thus the reliability of the sensing model can be improved.
  • the thirteenth function is further configured to at least one of the following: perform data preparation based on sixth input data obtained from at least one of the fifth function, the ninth function or the tenth function. In this way, data used in processing by the thirteenth function can be more organized as compared with the case where the sixth input data is used in the processing without data preparation, thus the processing by the thirteenth function can be more accurate with a higher speed.
  • the at least one operation comprises at least one of the following operations performed by the eleventh function: receiving the fourth input data from at least one of the fifth function, the ninth function or the tenth function; receiving, from the twelfth function, a performance level of the sensing model and a request to retrain the sensing model; receiving the sensing inference results from the thirteenth function, receiving sensing information from the twelfth function, or transmitting the trained or retrained sensing model to the fourteenth function.
  • the eleventh function can provide a more accurate (re) trained sensing model based on the fourth input data and/or the performance level of the sensing model and/or the sensing information and/or the sensing inference results.
  • the (re) trained sensing model can, in turn, provide more accurate inference results, thus the reliability of the (re) trained sensing model can be improved.
  • the at least one operation further comprises the following operation performed by the eleventh function: receiving the inference results from the third function.
  • the inference results of the AI/ML model can help the eleventh function to improve the accuracy and performance of the (re) trained AI/ML model and further the AI/ML functional framework.
  • the at least one operation comprises the following operations performed by the twelfth function: receiving fifth input data from at least one of the fifth function, the ninth function or the tenth function; and receiving the sensing inference results from the thirteenth function.
  • the twelfth function can facilitate the eleventh function to provide a more accurate sensing model, which in turn can provide more accurate sensing inference results, thus the reliability of the sensing model can be improved.
  • the at least one operation further comprises the following operations performed by the twelfth function: determining that a performance level of the sensing model is below a threshold level based on the sensing inference results received from the thirteenth function; and based on determining that the performance level is below the threshold level, transmitting, to the eleventh function, the performance level of the sensing model and a request to retrain the sensing model.
  • the twelfth function can request the eleventh function to retrain the sensing model in response to the performance level of the currently used sensing model falling below a threshold level.
  • the twelfth function can facilitate the eleventh function to provide a more accurate retrained/updated sensing model based on the sensing inference results of the current sensing model.
  • the retrained/updated sensing model can, in turn, provide more accurate sensing inference results, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
  • the at least one operation comprises at least one of the following operations performed by the twelfth function: transmitting sensing information to the eleventh function, transmitting, to the thirteenth function, a switching indication to switch from the sensing model to another sensing model; transmitting, to the thirteenth function, a fallback indication to apply a non-sensing model instead of the sensing model; transmitting, to the thirteenth function, an activating indication to activate one or more of a plurality of candidate sensing models; or transmitting, to the thirteenth function, a deactivating indication to deactivate one or more of the plurality of candidate sensing models.
  • the twelfth function can provide the sensing information to the eleventh function to obtain a more accurate (re) trained sensing model based on the sensing information.
  • the retrained/updated sensing model can, in turn, provide more accurate sensing inference results, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
  • the twelfth function can change/switch/(de)select a desired sensing model for future use, improving the flexibility in management of the thirteenth function and further the whole AI/ML functional framework.
  • the at least one operation comprises the following operation performed by the twelfth function: transmitting, to the fourteenth function, a request that the fourteenth function transmits the sensing model to the thirteenth function.
  • the twelfth function can request the fourteenth function to transmit the (re) trained sensing model to the thirteenth function for future use, while the retrained/updated sensing model can provide more accurate sensing inference results than the currently used sensing model at the thirteenth function. Therefore, the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
  • the at least one operation comprises the following operation performed by the twelfth function: receiving the inference results from the third function.
  • the inference results can facilitate the twelfth function to improve sensing functionalities of the sensing model and further the AI/ML functional framework.
  • the at least one operation comprises the following operations performed by the thirteenth function: receiving sixth input data from at least one of the fifth function, the ninth function or the tenth function; and transmitting the sensing inference results to the twelfth function.
  • the thirteenth function can determine the sensing inference results, and send the sensing inference results to the twelfth function.
  • the twelfth function can determine whether the performance level of the sensing model is below a threshold level based on the sensing inference results received from the thirteenth function. If so, the twelfth function can request the eleventh function to retrain the sensing model accordingly.
  • the thirteenth function can help the twelfth function to facilitate the eleventh function to provide a more accurate retrained/updated sensing model based on the sensing inference results.
  • the retrained/updated sensing model can, in turn, provide more accurate sensing inference results as compared with the currently used sensing model at the thirteenth function, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
  • the at least one operation further comprises at least one of the following operations performed by the thirteenth function: transmitting the sensing inference results to at least one of the first function, the second function or the third function, or receiving the sensing model from the fourteenth function.
  • the sensing inference results can facilitate the first function, the second function or the third function to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework.
  • the at least one operation comprises at least one of the following operations performed by the thirteenth function: receiving, from the twelfth function, a switching indication to switch from the sensing model to another sensing model; receiving, from the twelfth function, a fallback indication to apply a non-sensing model instead of the sensing model; receiving, from the twelfth function, an activating indication to activate one or more of a plurality of candidate sensing models; or receiving, from the twelfth function, a deactivating indication to deactivate one or more of the plurality of candidate sensing models.
  • the thirteenth function can switch to a desired sensing model indicated by the twelfth function, improving the flexibility in management of the thirteenth function and further the whole AI/ML functional framework.
  • the at least one operation comprises at least one of the following operations performed by the fourteenth function: receiving the trained sensing model from the eleventh function; or based on receiving, from the twelfth function, a request that the fourteenth function transmits the sensing model to the thirteenth function, transmitting the sensing model to the thirteenth function.
  • the fourteenth function can provide the sensing model to the thirteenth function, such that the thirteenth function can use the (re) trained sensing model to provide more accurate sensing inference results as compared with the currently used sensing model at the thirteenth function, thus the reliability of the (re) trained sensing model can be improved as compared with the currently used sensing model.
  • the request comprises at least one of the following: a model ID of the requested sensing model, a sensing functionality ID for the requested sensing functionality, or a sensing performance requirement indicating the requested sensing performance.
  • a sensing model desired by the twelfth function to be used at the thirteenth function can be requested using various parameters, improving the flexibility and usability of the AI/ML functional framework.
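The request parameters listed above could be carried in a structure like the following sketch; the field names and lookup logic are assumptions, since the source only names the three parameters.

```python
# Hypothetical request message and lookup at the storage (fourteenth) function.
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class SensingModelRequest:
    model_id: Optional[str] = None                   # ID of the requested model
    sensing_functionality_id: Optional[str] = None   # ID of the functionality
    min_performance: Optional[float] = None          # required sensing performance


def select_model(request: SensingModelRequest,
                 catalog: Dict[str, Dict]) -> Optional[Dict]:
    """Return the first stored model satisfying every parameter present."""
    for model in catalog.values():
        if request.model_id and model["id"] != request.model_id:
            continue
        if (request.sensing_functionality_id
                and model["functionality"] != request.sensing_functionality_id):
            continue
        if request.min_performance and model["perf"] < request.min_performance:
            continue
        return model
    return None


catalog = {"m1": {"id": "m1", "functionality": "positioning", "perf": 0.92}}
assert select_model(SensingModelRequest(min_performance=0.9), catalog)["id"] == "m1"
```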
  • the at least one function further comprises: a fifteenth function configured to perform sensing inference to obtain a sensing result, wherein the first function is further configured to perform model training of at least one of a sensing model, a sensing sub-model, a sensing functionality or a sensing sub-functionality, and the second function is further configured to perform management of the sensing model.
  • the first function can not only train an AI/ML model, but also can train a sensing model
  • the second function can monitor not only the AI/ML model but also the sensing model.
  • the fifteenth function which is in charge of sensing inference of the sensing model is separate from the third function which is in charge of model inference of the AI/ML model.
  • the at least one function further comprises: a sixteenth function configured to obtain fused data.
  • the fused data may be obtained by processing on non-sensing data and sensing data. In this way, the fused data, which is less in quantity than the sum of the non-sensing data and the sensing data, can be used in future processing to improve data accuracy and decrease data processing volume.
  • the first function is further configured to at least one of the following: perform data preparation based on seventh input data obtained from the sixteenth function. In this way, data used in processing by the first function can be more organized as compared with the case where the seventh input data is used in the processing without data preparation, thus the processing by the first function can be more accurate with a higher speed.
  • the second function is further configured to at least one of the following: perform control of the model training of the at least one of sensing model, sensing sub-model, sensing functionality or sensing sub-functionality; perform control of the sensing inference of the sensing model; or monitor output of the sensing model.
  • the second function, which performs management of the AI/ML model, can also perform management of the sensing model (including model training and inference of the sensing model).
  • the at least one operation comprises at least one of the following operations performed by the first function: receiving the seventh input data from the sixteenth function; receiving, from the second function, a performance level of the sensing model and a request to retrain the sensing model; receiving sensing information from the second function, or transmitting the trained or retrained sensing model to the fourth function.
  • the first function can provide a more accurate (re) trained sensing model based on the seventh input data and/or the performance level of the sensing model and/or the sensing information.
  • the (re) trained sensing model can, in turn, provide more accurate inference results, thus the reliability of the (re) trained sensing model can be improved.
  • the at least one operation comprises the following operations performed by the second function: receiving eighth input data from the sixteenth function; and receiving the sensing inference results from the fifteenth function.
  • the second function can facilitate the first function to provide a more accurate retrained/updated AI/ML model and/or sensing model based on the eighth input data and/or the sensing inference results of the current sensing model.
  • the sensing inference results can facilitate the second function to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework. More specifically, the retrained/updated AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the AI/ML model can be improved. Also, the retrained/updated sensing model can, in turn, provide more accurate inference results, thus the reliability of the sensing model can be improved.
  • the at least one operation further comprises the following operations performed by the second function: determining that a performance level of the sensing model is below a threshold level based on the sensing inference results received from the fifteenth function; and based on determining that the performance level is below the threshold level, transmitting, to the first function, the performance level of the sensing model and a request to retrain the sensing model.
  • the second function can request the first function to retrain the sensing model in response to the performance level of the currently used sensing model falling below a threshold level.
  • the second function can facilitate the first function to provide a more accurate retrained/updated sensing model based on the inference results of the current sensing model.
  • the retrained/updated sensing model can, in turn, provide more accurate inference results, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
  • the at least one operation comprises at least one of the following operations performed by the second function: transmitting sensing information to the first function, transmitting, to the fifteenth function, a switching indication to switch from the sensing model to another sensing model; transmitting, to the fifteenth function, a fallback indication to apply a non-sensing model instead of the sensing model; transmitting, to the fifteenth function, an activating indication to activate one or more of a plurality of candidate sensing models; or transmitting, to the fifteenth function, a deactivating indication to deactivate one or more of the plurality of candidate sensing models.
  • the second function can provide the sensing information to the first function to obtain a more accurate (re) trained sensing model based on the sensing information.
  • the retrained/updated sensing model can, in turn, provide more accurate sensing inference results, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
  • the second function can change/switch/(de)select a desired sensing model for future use, improving the flexibility in management of the fifteenth function and further the whole AI/ML functional framework.
  • the at least one operation comprises the following operation performed by the second function: transmitting, to the fourth function, a request that the fourth function transmits the sensing model to the fifteenth function.
  • the second function can cause the (re)trained sensing model to be transmitted to the fifteenth function for future use, while the retrained/updated sensing model can provide more accurate inference results than the currently used sensing model at the fifteenth function. Therefore, the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
  • the at least one operation comprises the following operation performed by the third function: receiving ninth input data from the sixteenth function.
  • the third function can provide more accurate sensing inference result (s) based on the ninth input data.
  • the at least one operation comprises at least one of the following operations performed by the third function: transmitting the inference results to the fifteenth function, or receiving the sensing result (or sensing inference result) from the fifteenth function.
  • the inference results can facilitate the fifteenth function to improve sensing functionalities of the sensing model.
  • the sensing result can facilitate the third function to improve inference results of the AI/ML model and further the AI/ML functional framework.
  • the at least one operation comprises the following operations performed by the fifteenth function: receiving tenth input data from the sixteenth function; and receiving the sensing model from the fourth function. In this way, with the tenth input data and the sensing model, the fifteenth function can perform sensing inference and obtain the sensing result.
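As a toy illustration of this sensing-inference step: the linear model form below is purely an assumption, standing in for whatever sensing model the fourth function delivers.

```python
# Hypothetical sensing inference at the fifteenth function: tenth input data
# plus a sensing model from storage yield a sensing result.
from typing import Dict, List


def sensing_inference(model: Dict[str, float],
                      input_data: List[float]) -> Dict[str, float]:
    """Toy linear sensing model: result = scale * mean(input) + bias."""
    mean = sum(input_data) / max(len(input_data), 1)
    return {"sensing_result": model["scale"] * mean + model["bias"]}


result = sensing_inference({"scale": 2.0, "bias": 0.5}, [1.0, 2.0, 3.0])
assert result == {"sensing_result": 4.5}
```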
  • the at least one operation further comprises at least one of the following operations performed by the fifteenth function: receiving the inference results from the second function, or transmitting the sensing results to the second function.
  • the inference results can facilitate the fifteenth function to improve sensing functionalities of the sensing model.
  • the sensing result can facilitate the second function to improve management of the AI/ML model and further the AI/ML functional framework.
  • the at least one operation comprises at least one of the following operations performed by the fifteenth function: receiving, from the second function, a switching indication to switch from the sensing model to another sensing model; receiving, from the second function, a fallback indication to apply a non-sensing model instead of the sensing model; receiving, from the second function, an activating indication to activate one or more of a plurality of candidate sensing models; or receiving, from the second function, a deactivating indication to deactivate one or more of the plurality of candidate sensing models.
  • the fifteenth function can change/switch to a desired sensing model as indicated by the second function for future use, improving the flexibility in managing the sensing model and further the whole AI/ML functional framework.
  • the at least one function further comprises: a seventeenth function configured to collect non-sensing data, and an eighteenth function configured to collect sensing data. In this way, both non-sensing data and sensing data can be utilized in the AI/ML functional framework, thus accuracy and performance of the AI/ML model and the sensing model can be improved.
  • the at least one operation comprises the following operations performed by the sixteenth function: receiving the non-sensing data from the seventeenth function, receiving the sensing data from the eighteenth function, and performing data processing on the received non-sensing data and sensing data to obtain the fused data.
  • the fused data can be obtained by processing on the non-sensing data from the seventeenth function and the sensing data from the eighteenth function.
  • the fused data, which is less in quantity than the sum of the non-sensing data and the sensing data, can be used in future processing to improve data accuracy and decrease data processing volume.
  • the at least one operation further comprises the following operation performed by the sixteenth function: transmitting the fused data to at least one of the first function, the second function, the third function or the fifteenth function.
  • the fused data then can be utilized by the first function to train the AI/ML model and/or the sensing model to obtain a more accurate AI/ML model and/or sensing model.
  • the fused data can help the second function to manage the AI/ML model and/or the sensing model more reliably, help the third function to perform inference of the AI/ML model more accurately and thus reliably, and help the fifteenth function to perform inference of the sensing model more accurately and thus reliably.
  • the at least one function further comprises at least two of: a nineteenth function configured to provide ground-truth sensing data, a twentieth function configured to provide non-ground-truth sensing data, or a twenty-first function configured to provide non-sensing ground-truth data.
  • the at least one operation comprises the following operations performed by the sixteenth function: receiving at least two of: ground-truth sensing data from the nineteenth function, the non-ground-truth sensing data from the twentieth function, or the non-sensing ground-truth data from the twenty-first function, and performing data processing on the received data to obtain the fused data.
  • the fused data then can be utilized by the first function to train the AI/ML model and/or the sensing model to obtain a more accurate AI/ML model and/or sensing model.
  • the fused data can help the second function to manage the AI/ML model and/or the sensing model more reliably, help the third function to perform inference of the AI/ML model more accurately and thus reliably, and help the fifteenth function to perform sensing inference of the sensing model more accurately and thus reliably.
  • the fused data can facilitate the first function, second function and third function to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework.
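One plausible reading of this fusion, sketched below, is pairing non-ground-truth measurements with ground-truth records to form supervised training examples; pairing by a shared identifier is an assumption not stated in the source.

```python
# Hypothetical assembly of training pairs at the sixteenth function from
# non-ground-truth sensing data and ground-truth data.
from typing import Dict, List, Tuple


def make_training_pairs(measurements: List[Dict],
                        ground_truth: List[Dict]) -> List[Tuple[Dict, Dict]]:
    """Pair each measurement with the ground-truth record sharing its ID."""
    truth_by_id = {g["id"]: g for g in ground_truth}
    return [(m, truth_by_id[m["id"]])
            for m in measurements if m["id"] in truth_by_id]


pairs = make_training_pairs(
    [{"id": "ue1", "est_pos": (10.1, 4.9)}],   # non-ground-truth sensing data
    [{"id": "ue1", "true_pos": (10.0, 5.0)}],  # ground-truth data
)
assert len(pairs) == 1
```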
  • the at least one operation further comprises the following operation performed by the sixteenth function: transmitting the fused data to at least one of the first function, the second function, the third function or the fifteenth function.
  • the first function can utilize the fused data to train the AI/ML model and/or the sensing model to obtain a more accurate AI/ML model and/or sensing model.
  • the second function can utilize the fused data to manage the AI/ML model and/or the sensing model more reliably.
  • the third function can utilize the fused data to perform inference of the AI/ML model more accurately and thus reliably.
  • the fifteenth function can utilize the fused data to perform sensing inference of the sensing model more accurately and thus reliably.
  • the first function, second function and third function can utilize the fused data to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework.
  • the data processing comprises at least one of the following: data pre-processing, data cleaning, data formatting, data transformation, or data integration.
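A minimal sketch of such a processing chain follows; which concrete operation represents each listed step is an assumption for illustration.

```python
# Illustrative data-processing chain: integration -> cleaning -> transformation.
from typing import Dict, List


def clean(records: List[Dict]) -> List[Dict]:
    """Data cleaning: drop records with missing values."""
    return [r for r in records if all(v is not None for v in r.values())]


def transform(records: List[Dict]) -> List[Dict]:
    """Data transformation: normalize the numeric field 'x' to [0, 1]."""
    vals = [r["x"] for r in records]
    lo, hi = min(vals), max(vals)
    span = (hi - lo) or 1.0
    return [{**r, "x": (r["x"] - lo) / span} for r in records]


def integrate(*sources: List[Dict]) -> List[Dict]:
    """Data integration: concatenate records from multiple sources."""
    return [r for src in sources for r in src]


processed = transform(clean(integrate(
    [{"x": 3.0}, {"x": None}],  # source with a bad record to be cleaned out
    [{"x": 7.0}],
)))
assert [r["x"] for r in processed] == [0.0, 1.0]
```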
  • At least one of the first function, the second function, the third function, the fourth function, the fifth function, the sixth function, the seventh function, the eighth function, the ninth function, the tenth function, the eleventh function, the twelfth function, the thirteenth function, the fourteenth function, the fifteenth function, the sixteenth function, the seventeenth function, the eighteenth function, the nineteenth function, the twentieth function or the twenty-first function is implemented in one of the following: a terminal device, an access network device, a core network device, or a third party device.
  • each function may be implemented in one of the terminal device, access network device, core network device or third party device in a “distributed” manner, improving the flexibility of implementation and enabling dynamic implementation with various modules where each module may, by itself or in combination with other module (s) , implement one or more functions as described here.
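The distributed placement could be expressed as a simple deployment map; the particular assignments below are illustrative choices, not mandated by the disclosure.

```python
# Hypothetical mapping of framework functions to host device types.
DEPLOYMENT = {
    "first_function (training)":               "core network device",
    "second_function (management)":            "core network device",
    "third_function (inference)":              "access network device",
    "fourth_function (storage)":               "third party device",
    "ninth_function (sensing data collection)": "terminal device",
}


def host_of(function_name: str) -> str:
    return DEPLOYMENT.get(function_name, "unspecified")
```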
  • an apparatus comprising: a transceiver; and a processor communicatively coupled with the transceiver, wherein the processor is configured to perform at least one operation based on an artificial intelligence/machine learning (AI/ML) functional framework, wherein the AI/ML functional framework comprises: a first function configured to perform model training of at least one of an AI/ML model, an AI/ML sub-model, an AI/ML functionality or an AI/ML sub-functionality; a second function configured to perform management of the AI/ML model; a third function configured to perform inference of the AI/ML model; a fourth function configured to store the AI/ML model; and at least one function configured to operate based on sensing data.
  • a non-transitory computer-readable storage medium comprising a computer program stored thereon.
  • the computer program, when executed on at least one processor, causes the at least one processor to perform the method of the first aspect.
  • In this way, a non-transitory computer-readable storage medium can be provided with which integrated AI and sensing can be achieved for high accuracy to facilitate communication.
  • a chip comprising at least one processing circuit configured to perform the method of the first aspect.
  • In this way, a chip can be provided with which integrated AI and sensing can be achieved for high accuracy to facilitate communication.
  • a computer program product tangibly stored on a computer-readable medium and comprising computer-executable instructions which, when executed, cause an apparatus to perform a method of the first aspect.
  • In this way, the computer program product can be provided with which integrated AI and sensing can be achieved for high accuracy to facilitate communication.
  • FIG. 1A illustrates an example of a network environment in which some example embodiments of the present disclosure may be implemented
  • FIG. 1B illustrates an example communication system in which some example embodiments of the present disclosure may be implemented
  • FIG. 1C illustrates an example of an electronic device and a base station in accordance with some example embodiments of the present disclosure
  • FIG. 1D illustrates units or modules in a device in accordance with some example embodiments of the present disclosure
  • FIG. 1E illustrates an example sensing system in accordance with some example embodiments of the present disclosure
  • FIG. 1F illustrates an example apparatus that may implement the methods and teachings in accordance with some example embodiments of the present disclosure
  • FIG. 1G illustrates a schematic diagram of an example model in accordance with some example embodiments of the present disclosure
  • FIG. 2 is a flowchart illustrating an example communication process in accordance with some example embodiments of the present disclosure
  • FIG. 3 illustrates a schematic diagram of an example AI/ML functional framework in accordance with some embodiments of the present disclosure
  • FIG. 4 illustrates a schematic diagram of an example AI/ML functional framework and the flowchart of operations in the AI/ML functional framework in accordance with some embodiments of the present disclosure
  • FIG. 5 illustrates a schematic diagram of another example AI/ML functional framework and the flowchart of operations in the AI/ML functional framework in accordance with some embodiments of the present disclosure
  • FIG. 6 illustrates a schematic diagram of a third example AI/ML functional framework and the flowchart of operations in the AI/ML functional framework in accordance with some embodiments of the present disclosure
  • FIG. 7 illustrates a schematic diagram of a fourth example AI/ML functional framework and the flowchart of operations in the AI/ML functional framework in accordance with some embodiments of the present disclosure
  • FIG. 8 illustrates a block diagram of an electronic device that may be used for implementing devices and methods in accordance with some embodiments of the present disclosure.
  • FIG. 9 illustrates a schematic diagram of a structure of an apparatus in accordance with some embodiments of the present disclosure.
  • references in the present disclosure to “one embodiment,” “an embodiment,” “an example embodiment,” and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • The terms “first” and “second” etc. may be used herein to describe various elements, but these elements should not be limited by these terms. These terms are used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments.
  • the term “and/or” includes any and all combinations of one or more of the listed terms.
  • the term “communication network” refers to a network following any suitable communication standards, such as Long Term Evolution (LTE) , LTE-Advanced (LTE-A) , Wideband Code Division Multiple Access (WCDMA) , High-Speed Packet Access (HSPA) , Narrow Band Internet of Things (NB-IoT) , Wireless Fidelity (WiFi) and so on.
  • the communications between a terminal device and a network device in the communication network may be performed according to any suitable generation communication protocols, including, but not limited to, the fourth generation (4G) , 4.5G, the future fifth generation (5G) , IEEE 802.11 communication protocols, and/or any other protocols either currently known or to be developed in the future.
  • Embodiments of the present disclosure may be applied in various communication systems. Given the rapid development in communications, there will of course also be future type communication technologies and systems with which the present disclosure may be embodied. It should not be seen as limiting the scope of the present disclosure to only the aforementioned system.
  • the term “network device” refers to a node in a communication network via which a terminal device accesses the network and receives services therefrom.
  • the network device may refer to a base station (BS) or an access point (AP) , for example, a node B (NodeB or NB) , an evolved NodeB (eNodeB or eNB) , a NR NB (also referred to as a gNB) , a Remote Radio Unit (RRU) , a radio header (RH) , a remote radio head (RRH) , a WiFi device, a relay, a low power node such as a femto, a pico, and so forth, depending on the applied terminology and technology.
  • the term “terminal device” refers to any end device that may be capable of wireless communication.
  • a terminal device may also be referred to as a communication device, user equipment (UE) , a Subscriber Station (SS) , a Portable Subscriber Station, a Mobile Station (MS) , a station (STA) or station device, or an Access Terminal (AT) .
  • the terminal device may include, but is not limited to, a mobile phone, a cellular phone, a smart phone, voice over IP (VoIP) phones, wireless local loop phones, a tablet, a wearable terminal device, a personal digital assistant (PDA), portable computers, desktop computers, image capture terminal devices such as digital cameras, gaming terminal devices, music storage and playback appliances, vehicle-mounted wireless terminal devices, wireless endpoints, mobile stations, laptop-embedded equipment (LEE), laptop-mounted equipment (LME), USB dongles, smart devices, wireless customer-premises equipment (CPE), an Internet of Things (IoT) device, a watch or other wearable, a VR (virtual reality) device, an XR (eXtended reality) device, a head-mounted display (HMD), a vehicle, a drone, a medical device and applications (for example, remote surgery), and an industrial device and applications (for example, a robot and/or other wireless devices operating in an industrial and/or an automated processing chain).
  • the communication system 100A comprises a radio access network 120.
  • the radio access network 120 may be a next generation (e.g. sixth generation (6G) or later) radio access network, or a legacy (e.g. 5G, 4G, 3G or 2G) radio access network.
  • One or more communication user equipment (UE, also referred to as electronic device (ED)) 110a-110j (generically referred to as 110) may be interconnected to one another or connected to one or more network nodes (170a, 170b, generically referred to as 170) in the radio access network 120.
  • a core network 130 may be a part of the communication system 100A and may be dependent or independent of the radio access technology used in the communication system 100A.
  • the communication system 100A comprises a public switched telephone network (PSTN) 180, the internet 185, and other networks 160.
  • the other networks 160 may include a multi-access edge computing (MEC) platform.
  • FIG. 1B illustrates an example communication system 100B.
  • the communication system 100B enables multiple wireless or wired elements to communicate data and other content.
  • the purpose of the communication system 100B may be to provide content, such as voice, data, video, and/or text, via broadcast, multicast and unicast, etc.
  • the communication system 100B may operate by sharing resources, such as carrier spectrum bandwidth, between its constituent elements.
  • the communication system 100B may include a terrestrial communication system and/or a non-terrestrial communication system.
  • the communication system 100B may provide a wide range of communication services and applications (such as earth monitoring, remote sensing, passive sensing and positioning, navigation and tracking, autonomous delivery and mobility, etc. ) .
  • the communication system 100B may provide a high degree of availability and robustness through a joint operation of the terrestrial communication system and the non-terrestrial communication system.
  • integrating a non-terrestrial communication system (or components thereof) into a terrestrial communication system can result in what may be considered a heterogeneous network comprising multiple layers.
  • the heterogeneous network may achieve better overall performance through efficient multi-link joint operation, more flexible functionality sharing, and faster physical layer link switching between terrestrial networks and non-terrestrial networks.
  • the terrestrial communication system and the non-terrestrial communication system could be considered sub-systems of the communication system 100B.
  • the communication system 100B includes electronic devices (ED) 110a -110d (generically referred to as ED 110) , radio access networks (RANs) 120a -120b, non-terrestrial communication network 120c, a core network 130, a public switched telephone network (PSTN) 180, the internet 185, and other networks 160.
  • the RANs 120a-120b include respective base stations (BSs) 170a-170b, which may be generically referred to as terrestrial transmit and receive points (T-TRPs) 170a-170b.
  • the non-terrestrial communication network 120c includes an access node, which may be generically referred to as a non-terrestrial transmit and receive point (NT-TRP) 172.
  • the other networks 160 may include a multi-access edge computing (MEC) platform.
  • Any ED 110 may be alternatively or additionally configured to interface, access, or communicate with any other T-TRP 170a-170b and NT-TRP 172, the internet 185, the core network 130, the PSTN 180, the other networks 160, or any combination of the preceding.
  • ED 110a may communicate an uplink and/or downlink transmission over an interface 190a with T-TRP 170a.
  • the EDs 110a, 110b and 110d may also communicate directly with one another via one or more sidelink air interfaces 190b.
  • ED 110d may communicate an uplink and/or downlink transmission over an interface 190c with NT-TRP 172.
  • the air interfaces 190a and 190b may use similar communication technology, such as any suitable radio access technology.
  • the communication system 100B may implement one or more channel access methods, such as code division multiple access (CDMA) , time division multiple access (TDMA) , frequency division multiple access (FDMA) , orthogonal FDMA (OFDMA) , or single-carrier FDMA (SC-FDMA) in the air interfaces 190a and 190b.
  • CDMA code division multiple access
  • TDMA time division multiple access
  • FDMA frequency division multiple access
  • OFDMA orthogonal FDMA
  • SC-FDMA single-carrier FDMA
  • the air interfaces 190a and 190b may utilize other higher dimension signal spaces, which may involve a combination of orthogonal and/or non-orthogonal dimensions.
  • the air interface 190c can enable communication between the ED 110d and one or multiple NT-TRPs 172 via a wireless link or simply a link.
  • the link is a dedicated connection for unicast transmission, a connection for broadcast transmission, or a connection between a group of EDs and one or multiple NT-TRPs for multicast transmission.
  • the RANs 120a and 120b are in communication with the core network 130 to provide the EDs 110a, 110b, and 110c with various services such as voice, data, and other services.
  • the RANs 120a and 120b and/or the core network 130 may be in direct or indirect communication with one or more other RANs (not shown) , which may or may not be directly served by core network 130, and may or may not employ the same radio access technology as RAN 120a, RAN 120b or both.
  • the core network 130 may also serve as a gateway access between (i) the RANs 120a and 120b or EDs 110a, 110b, and 110c or both, and (ii) other networks (such as the PSTN 180, the internet 185, and the other networks 160) .
  • the EDs 110a, 110b, and 110c may include functionality for communicating with different wireless networks over different wireless links using different wireless technologies and/or protocols. Instead of wireless communication (or in addition thereto) , the EDs 110a, 110b, and 110c may communicate via wired communication channels to a service provider or switch (not shown) , and to the internet 185.
  • PSTN 180 may include circuit switched telephone networks for providing plain old telephone service (POTS) .
  • Internet 185 may include a network of computers and subnets (intranets) or both, and incorporate protocols, such as Internet Protocol (IP) , Transmission Control Protocol (TCP) , User Datagram Protocol (UDP) .
  • IP Internet Protocol
  • TCP Transmission Control Protocol
  • UDP User Datagram Protocol
  • EDs 110a, 110b, and 110c may be multimode devices capable of operation according to multiple radio access technologies, and incorporate multiple transceivers necessary to support such.
  • FIG. 1C illustrates another example of an ED 110 and a base station 170a, 170b and/or 170c.
  • the ED 110 is used to connect persons, objects, machines, etc.
  • the ED 110 may be widely used in various scenarios, for example, cellular communications, device-to-device (D2D) , vehicle to everything (V2X) , peer-to-peer (P2P) , machine-to-machine (M2M) , machine-type communications (MTC) , internet of things (IOT) , virtual reality (VR) , augmented reality (AR) , industrial control, self-driving, remote medical, smart grid, smart furniture, smart office, smart wearable, smart transportation, smart city, drones, robots, remote sensing, passive sensing, positioning, navigation and tracking, autonomous delivery and mobility, etc.
  • D2D device-to-device
  • V2X vehicle to everything
  • P2P peer-to-peer
  • M2M machine-to-machine
  • Each ED 110 represents any suitable end user device for wireless operation and may include such devices (or may be referred to) as a user equipment/device (UE) , a wireless transmit/receive unit (WTRU) , a mobile station, a fixed or mobile subscriber unit, a cellular telephone, a station (STA) , a machine type communication (MTC) device, a personal digital assistant (PDA) , a smartphone, a laptop, a computer, a tablet, a wireless sensor, a consumer electronics device, a smart book, a vehicle, a car, a truck, a bus, a train, an IoT device, an industrial device, or apparatus (e.g. communication module, modem, or chip) in the foregoing devices, among other possibilities.
  • the base stations 170a and 170b are each a T-TRP and will hereafter be referred to as T-TRP 170. Also shown in FIG. 1C, an NT-TRP will hereafter be referred to as NT-TRP 172.
  • Each ED 110 connected to T-TRP 170 and/or NT-TRP 172 can be dynamically or semi-statically turned-on (i.e., established, activated, or enabled) , turned-off (i.e., released, deactivated, or disabled) and/or configured in response to one or more of: connection availability and connection necessity.
  • the ED 110 includes a transmitter 201 and a receiver 203 coupled to one or more antennas 204. Only one antenna 204 is illustrated. One, some, or all of the antennas may alternatively be panels.
  • the transmitter 201 and the receiver 203 may be integrated, e.g. as a transceiver.
  • the transceiver is configured to modulate data or other content for transmission by at least one antenna 204 or network interface controller (NIC) .
  • NIC network interface controller
  • the transceiver is also configured to demodulate data or other content received by the at least one antenna 204.
  • Each transceiver includes any suitable structure for generating signals for wireless or wired transmission and/or processing signals received wirelessly or by wire.
  • Each antenna 204 includes any suitable structure for transmitting and/or receiving wireless or wired signals.
  • the ED 110 includes at least one memory 208.
  • the memory 208 stores instructions and data used, generated, or collected by the ED 110.
  • the memory 208 could store software instructions or modules configured to implement some or all of the functionality and/or embodiments described herein and that are executed by the processing unit (s) 210.
  • Each memory 208 includes any suitable volatile and/or non-volatile storage and retrieval device (s) . Any suitable type of memory may be used, such as random access memory (RAM) , read only memory (ROM) , hard disk, optical disc, subscriber identity module (SIM) card, memory stick, secure digital (SD) memory card, on-processor cache, and the like.
  • RAM random access memory
  • ROM read only memory
  • SIM subscriber identity module
  • SD secure digital
  • the ED 110 may further include one or more input/output devices (not shown) or interfaces (such as a wired interface to the internet 185 in FIG. 1A) .
  • the input/output devices permit interaction with a user or other devices in the network.
  • Each input/output device includes any suitable structure for providing information to or receiving information from a user, such as a speaker, microphone, keypad, keyboard, display, or touch screen, including network interface communications.
  • the ED 110 further includes a processor 210 for performing operations including those related to preparing a transmission for uplink transmission to the NT-TRP 172 and/or T-TRP 170, those related to processing downlink transmissions received from the NT-TRP 172 and/or T-TRP 170, and those related to processing sidelink transmission to and from another ED 110.
  • Processing operations related to preparing a transmission for uplink transmission may include operations such as encoding, modulating, transmit beamforming, and generating symbols for transmission.
  • Processing operations related to processing downlink transmissions may include operations such as receive beamforming, demodulating and decoding received symbols.
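  • For illustration only, the following minimal Python sketch shows a generic symbol-level modulate/demodulate pair (here Gray-coded QPSK) of the kind referred to above; it is a sketch under assumed parameters, not the processing chain of any particular embodiment.

```python
# Illustrative QPSK modulation/demodulation (generic sketch, not from the disclosure).
import numpy as np

def qpsk_modulate(bits: np.ndarray) -> np.ndarray:
    """Map pairs of bits to Gray-coded QPSK symbols (generating symbols for transmission)."""
    b = bits.reshape(-1, 2)
    i = 1 - 2 * b[:, 0]          # bit 0 -> +1, bit 1 -> -1 on the I branch
    q = 1 - 2 * b[:, 1]          # likewise on the Q branch
    return (i + 1j * q) / np.sqrt(2)

def qpsk_demodulate(symbols: np.ndarray) -> np.ndarray:
    """Hard-decision demodulation of received symbols back to bits."""
    bits = np.empty((symbols.size, 2), dtype=int)
    bits[:, 0] = (symbols.real < 0).astype(int)
    bits[:, 1] = (symbols.imag < 0).astype(int)
    return bits.reshape(-1)

rng = np.random.default_rng(0)
tx_bits = rng.integers(0, 2, 64)
noise = 0.05 * (rng.normal(size=32) + 1j * rng.normal(size=32))
rx_bits = qpsk_demodulate(qpsk_modulate(tx_bits) + noise)
assert np.array_equal(rx_bits, tx_bits)  # low noise, so all bits are recovered
```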
  • a downlink transmission may be received by the receiver 203, possibly using receive beamforming, and the processor 210 may extract signaling from the downlink transmission (e.g. by detecting and/or decoding the signaling) .
  • An example of signaling may be a reference signal transmitted by NT-TRP 172 and/or T-TRP 170.
  • the processor 276 implements the transmit beamforming and/or receive beamforming based on the indication of beam direction, e.g. beam angle information (BAI) , received from T-TRP 170.
  • the processor 210 may perform operations relating to network access (e.g. initial access) and/or downlink synchronization.
  • the processor 210 may perform channel estimation, e.g. using a reference signal received from the NT-TRP 172 and/or T-TRP 170.
  • the processor 210 may form part of the transmitter 201 and/or receiver 203.
  • the memory 208 may form part of the processor 210.
  • the processor 210, and the processing components of the transmitter 201 and receiver 203 may each be implemented by the same or different one or more processors that are configured to execute instructions stored in a memory (e.g. in memory 208) .
  • some or all of the processor 210, and the processing components of the transmitter 201 and receiver 203 may be implemented using dedicated circuitry, such as a programmed field-programmable gate array (FPGA) , a graphical processing unit (GPU) , or an application-specific integrated circuit (ASIC) .
  • FPGA field-programmable gate array
  • GPU graphical processing unit
  • ASIC application-specific integrated circuit
  • the T-TRP 170 may be known by other names in some implementations, such as a base station, a base transceiver station (BTS) , a radio base station, a network node, a network device, a device on the network side, a transmit/receive node, a Node B, an evolved NodeB (eNodeB or eNB) , a Home eNodeB, a next Generation NodeB (gNB) , a transmission point (TP) , a site controller, an access point (AP) , a wireless router, a relay station, a remote radio head, a terrestrial node, a terrestrial network device, a terrestrial base station, a base band unit (BBU) , a remote radio unit (RRU) , an active antenna unit (AAU) , a remote radio head (RRH) , a central unit (CU) , a distributed unit (DU) , or a positioning node, among other possibilities.
  • BBU base band unit
  • RRU remote radio unit
  • the T-TRP 170 may be a macro BS, a pico BS, a relay node, a donor node, or the like, or combinations thereof.
  • the T-TRP 170 may refer to a device or apparatus (e.g. communication module, modem, or chip) in the foregoing devices.
  • the parts of the T-TRP 170 may be distributed.
  • some of the modules of the T-TRP 170 may be located remote from the equipment housing the antennas of the T-TRP 170, and may be coupled to the equipment housing the antennas over a communication link (not shown) sometimes known as front haul, such as common public radio interface (CPRI) .
  • the term T-TRP 170 may also refer to modules on the network side that perform processing operations, such as determining the location of the ED 110, resource allocation (scheduling) , message generation, and encoding/decoding, and that are not necessarily part of the equipment housing the antennas of the T-TRP 170.
  • the modules may also be coupled to other T-TRPs.
  • the T-TRP 170 may actually be a plurality of T-TRPs that are operating together to serve the ED 110, e.g. through coordinated multipoint transmissions.
  • the T-TRP 170 includes at least one transmitter 252 and at least one receiver 254 coupled to one or more antennas 256. Only one antenna 256 is illustrated. One, some, or all of the antennas may alternatively be panels. The transmitter 252 and the receiver 254 may be integrated as a transceiver.
  • the T-TRP 170 further includes a processor 260 for performing operations including those related to: preparing a transmission for downlink transmission to the ED 110, processing an uplink transmission received from the ED 110, preparing a transmission for backhaul transmission to NT-TRP 172, and processing a transmission received over backhaul from the NT-TRP 172.
  • Processing operations related to preparing a transmission for downlink or backhaul transmission may include operations such as encoding, modulating, precoding (e.g. MIMO precoding) , transmit beamforming, and generating symbols for transmission.
  • Processing operations related to processing received transmissions in the uplink or over backhaul may include operations such as receive beamforming, and demodulating and decoding received symbols.
  • the processor 260 may also perform operations relating to network access (e.g. initial access) and/or downlink synchronization, such as generating the content of synchronization signal blocks (SSBs) , generating the system information, etc.
  • the processor 260 also generates the indication of beam direction, e.g. BAI, which may be scheduled for transmission by scheduler 253.
  • the processor 260 performs other network-side processing operations described herein, such as determining the location of the ED 110, determining where to deploy NT-TRP 172, etc.
  • the processor 260 may generate signaling, e.g. to configure one or more parameters of the ED 110 and/or one or more parameters of the NT-TRP 172. Any signaling generated by the processor 260 is sent by the transmitter 252.
  • “signaling” may alternatively be called control signaling.
  • Dynamic signaling may be transmitted in a control channel, e.g. a physical downlink control channel (PDCCH) , and static or semi-static higher layer signaling may be included in a packet transmitted in a data channel, e.g. in a physical downlink shared channel (PDSCH) .
  • PDCCH physical downlink control channel
  • PDSCH physical downlink shared channel
  • a scheduler 253 may be coupled to the processor 260.
  • the scheduler 253 may be included within or operated separately from the T-TRP 170; the scheduler 253 may schedule uplink, downlink, and/or backhaul transmissions, including issuing scheduling grants and/or configuring scheduling-free ( “configured grant” ) resources.
  • the T-TRP 170 further includes a memory 258 for storing information and data.
  • the memory 258 stores instructions and data used, generated, or collected by the T-TRP 170.
  • the memory 258 could store software instructions or modules configured to implement some or all of the functionality and/or embodiments described herein and that are executed by the processor 260.
  • the processor 260 may form part of the transmitter 252 and/or receiver 254. Also, although not illustrated, the processor 260 may implement the scheduler 253. Although not illustrated, the memory 258 may form part of the processor 260.
  • the processor 260, the scheduler 253, and the processing components of the transmitter 252 and receiver 254 may each be implemented by the same or different one or more processors that are configured to execute instructions stored in a memory, e.g. in memory 258.
  • some or all of the processor 260, the scheduler 253, and the processing components of the transmitter 252 and receiver 254 may be implemented using dedicated circuitry, such as a FPGA, a GPU, or an ASIC.
  • the NT-TRP 172 is illustrated as a drone only as an example, the NT-TRP 172 may be implemented in any suitable non-terrestrial form. Also, the NT-TRP 172 may be known by other names in some implementations, such as a non-terrestrial node, a non-terrestrial network device, or a non-terrestrial base station.
  • the NT-TRP 172 includes a transmitter 272 and a receiver 274 coupled to one or more antennas 280. Only one antenna 280 is illustrated. One, some, or all of the antennas may alternatively be panels.
  • the transmitter 272 and the receiver 274 may be integrated as a transceiver.
  • the NT-TRP 172 further includes a processor 276 for performing operations including those related to: preparing a transmission for downlink transmission to the ED 110, processing an uplink transmission received from the ED 110, preparing a transmission for backhaul transmission to T-TRP 170, and processing a transmission received over backhaul from the T-TRP 170.
  • Processing operations related to preparing a transmission for downlink or backhaul transmission may include operations such as encoding, modulating, precoding (e.g. MIMO precoding) , transmit beamforming, and generating symbols for transmission.
  • Processing operations related to processing received transmissions in the uplink or over backhaul may include operations such as receive beamforming, and demodulating and decoding received symbols.
  • the processor 276 implements the transmit beamforming and/or receive beamforming based on beam direction information (e.g. BAI) received from T-TRP 170. In some embodiments, the processor 276 may generate signaling, e.g. to configure one or more parameters of the ED 110.
  • the NT-TRP 172 implements physical layer processing, but does not implement higher layer functions such as functions at the medium access control (MAC) or radio link control (RLC) layer. As this is only an example, more generally, the NT-TRP 172 may implement higher layer functions in addition to physical layer processing.
  • MAC medium access control
  • RLC radio link control
  • the NT-TRP 172 further includes a memory 278 for storing information and data.
  • the processor 276 may form part of the transmitter 272 and/or receiver 274.
  • the memory 278 may form part of the processor 276.
  • the processor 276 and the processing components of the transmitter 272 and receiver 274 may each be implemented by the same or different one or more processors that are configured to execute instructions stored in a memory, e.g. in memory 278. Alternatively, some or all of the processor 276 and the processing components of the transmitter 272 and receiver 274 may be implemented using dedicated circuitry, such as a programmed FPGA, a GPU, or an ASIC. In some embodiments, the NT-TRP 172 may actually be a plurality of NT-TRPs that are operating together to serve the ED 110, e.g. through coordinated multipoint transmissions.
  • the T-TRP 170, the NT-TRP 172, and/or the ED 110 may include other components, but these have been omitted for the sake of clarity.
  • FIG. 1D illustrates units or modules in a device, such as in ED 110, in T-TRP 170, or in NT- TRP 172.
  • a signal may be transmitted by a transmitting unit or a transmitting module.
  • a signal may be received by a receiving unit or a receiving module.
  • a signal may be processed by a processing unit or a processing module.
  • Other steps may be performed by an artificial intelligence (AI) or machine learning (ML) module.
  • the respective units or modules may be implemented using hardware, one or more components or devices that execute software, or a combination thereof.
  • one or more of the units or modules may be an integrated circuit, such as a programmed FPGA, a GPU, or an ASIC.
  • the modules may be retrieved by a processor, in whole or in part as needed, individually or together for processing, in single or multiple instances, and the modules themselves may include instructions for further deployment and instantiation.
  • An air interface generally includes a number of components and associated parameters that collectively specify how a transmission is to be sent and/or received over a wireless communications link between two or more communicating devices.
  • an air interface may include one or more components defining the waveform (s) , frame structure (s) , multiple access scheme (s) , protocol (s) , coding scheme (s) and/or modulation scheme (s) for conveying information (e.g. data) over a wireless communications link.
  • the wireless communications link may support a link between a radio access network and user equipment (e.g. a “Uu” link) , and/or the wireless communications link may support a link between device and device, such as between two user equipments (e.g. a “sidelink” ) , and/or the wireless communications link may support a link between a non-terrestrial (NT) -communication network and user equipment (UE) .
  • NT non-terrestrial
  • UE user equipment
  • a waveform component may specify a shape and form of a signal being transmitted.
  • Waveform options may include orthogonal multiple access waveforms and non-orthogonal multiple access waveforms.
  • Non-limiting examples of such waveform options include Orthogonal Frequency Division Multiplexing (OFDM) , Filtered OFDM (f-OFDM) , Time windowing OFDM, Filter Bank Multicarrier (FBMC) , Universal Filtered Multicarrier (UFMC) , Generalized Frequency Division Multiplexing (GFDM) , Wavelet Packet Modulation (WPM) , Faster Than Nyquist (FTN) Waveform, and low Peak to Average Power Ratio Waveform (low PAPR WF) .
  • OFDM Orthogonal Frequency Division Multiplexing
  • f-OFDM Filtered OFDM
  • FBMC Filter Bank Multicarrier
  • UFMC Universal Filtered Multicarrier
  • GFDM Generalized Frequency Division Multiplexing
  • WPM Wavelet Packet Modulation
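  • For illustration of the OFDM-family waveforms listed above, the following is a minimal sketch of CP-OFDM baseband symbol generation; the FFT size, CP length, and subcarrier mapping are illustrative assumptions rather than parameters from the disclosure.

```python
# Sketch of one CP-OFDM baseband symbol: subcarrier mapping, IFFT, cyclic prefix.
import numpy as np

def ofdm_symbol(qam: np.ndarray, fft_size: int = 64, cp_len: int = 16) -> np.ndarray:
    freq = np.zeros(fft_size, dtype=complex)
    freq[: qam.size] = qam                         # naive subcarrier mapping for illustration
    time = np.fft.ifft(freq) * np.sqrt(fft_size)
    return np.concatenate([time[-cp_len:], time])  # CP guards against multipath delay

rng = np.random.default_rng(1)
qam = (rng.choice([-1, 1], 48) + 1j * rng.choice([-1, 1], 48)) / np.sqrt(2)
print(ofdm_symbol(qam).size)  # fft_size + cp_len = 80 samples per symbol
```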
  • a frame structure component may specify a configuration of a frame or group of frames.
  • the frame structure component may indicate one or more of a time, frequency, pilot signature, code, or other parameter of the frame or group of frames. More details of frame structure will be discussed below.
  • a multiple access scheme component may specify multiple access technique options, including technologies defining how communicating devices share a common physical channel, such as: Time Division Multiple Access (TDMA) , Frequency Division Multiple Access (FDMA) , Code Division Multiple Access (CDMA) , Single Carrier Frequency Division Multiple Access (SC-FDMA) , Low Density Signature Multicarrier Code Division Multiple Access (LDS-MC-CDMA) , Non-Orthogonal Multiple Access (NOMA) , Pattern Division Multiple Access (PDMA) , Lattice Partition Multiple Access (LPMA) , Resource Spread Multiple Access (RSMA) , and Sparse Code Multiple Access (SCMA) .
  • multiple access technique options may include: scheduled access vs. non-scheduled access (also known as grant-free access) ; non-orthogonal multiple access vs. orthogonal multiple access, e.g., via a dedicated channel resource (e.g., no sharing between multiple communicating devices) ; contention-based vs. non-contention-based shared channel resources; and cognitive radio-based access.
  • a hybrid automatic repeat request (HARQ) protocol component may specify how a transmission and/or a re-transmission is to be made.
  • Non-limiting examples of transmission and/or re-transmission mechanism options include those that specify a scheduled data pipe size, a signaling mechanism for transmission and/or re-transmission, and a re-transmission mechanism.
  • a coding and modulation component may specify how information being transmitted may be encoded/decoded and modulated/demodulated for transmission/reception purposes.
  • Coding may refer to methods of error detection and forward error correction.
  • Non-limiting examples of coding options include turbo trellis codes, turbo product codes, fountain codes, low-density parity check codes, and polar codes.
  • Modulation may refer, simply, to the constellation (including, for example, the modulation technique and order) , or more specifically to various types of advanced modulation methods such as hierarchical modulation and low PAPR modulation.
  • the air interface may be a “one-size-fits-all concept” .
  • the components within the air interface cannot be changed or adapted once the air interface is defined.
  • only limited parameters or modes of an air interface, such as a cyclic prefix (CP) length or a multiple input multiple output (MIMO) mode, can be configured.
  • an air interface design may provide a unified or flexible framework to support below 6 GHz and beyond 6 GHz frequency (e.g., mmWave) bands for both licensed and unlicensed access.
  • flexibility of a configurable air interface provided by a scalable numerology and symbol duration may allow for transmission parameter optimization for different spectrum bands and for different services/devices.
  • a unified air interface may be self-contained in a frequency domain, and a frequency domain self-contained design may support more flexible radio access network (RAN) slicing through channel resource sharing between different services in both frequency and time.
  • RAN radio access network
  • a frame structure is a feature of the wireless communication physical layer that defines a time domain signal transmission structure, e.g. to allow for timing reference and timing alignment of basic time domain transmission units.
  • Wireless communication between communicating devices may occur on time-frequency resources governed by a frame structure.
  • the frame structure may sometimes instead be called a radio frame structure.
  • the frame structure may support frequency division duplex (FDD) communication, time-division duplex (TDD) communication, and/or full duplex (FD) communication.
  • FDD frequency division duplex
  • TDD time-division duplex
  • FD full duplex
  • FDD communication is when transmissions in different directions (e.g. uplink vs. downlink) occur in different frequency bands.
  • TDD communication is when transmissions in different directions (e.g. uplink vs. downlink) occur over different time durations.
  • FD communication is when transmission and reception occurs on the same time-frequency resource, i.e. a device can both transmit and receive on the same frequency resource concurrently in time.
  • a frame structure is a frame structure in long-term evolution (LTE) having the following specifications: each frame is 10 ms in duration; each frame has 10 subframes, which are each 1 ms in duration; each subframe includes two slots, each of which is 0.5 ms in duration; each slot is for transmission of 7 OFDM symbols (assuming normal CP) ; each OFDM symbol has a symbol duration and a particular bandwidth (or partial bandwidth or bandwidth partition) related to the number of subcarriers and subcarrier spacing; the frame structure is based on OFDM waveform parameters such as subcarrier spacing and CP length (where the CP has a fixed length or limited length options) ; and the switching gap between uplink and downlink in TDD has to be an integer multiple of the OFDM symbol duration.
  • LTE long-term evolution
  • a frame structure is a frame structure in new radio (NR) having the following specifications: multiple subcarrier spacings are supported, each subcarrier spacing corresponding to a respective numerology; the frame structure depends on the numerology, but in any case the frame length is set at 10 ms, and consists of ten subframes of 1 ms each; a slot is defined as 14 OFDM symbols, and slot length depends upon the numerology.
  • the NR frame structure for normal CP 15 kHz subcarrier spacing ( “numerology 1” ) and the NR frame structure for normal CP 30 kHz subcarrier spacing ( “numerology 2” ) are different.
  • for 15 kHz subcarrier spacing, the slot length is 1 ms, and for 30 kHz subcarrier spacing, the slot length is 0.5 ms.
  • the NR frame structure may have more flexibility than the LTE frame structure.
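  • The frame-structure arithmetic above can be illustrated with a short sketch: the NR slot length halves each time the subcarrier spacing doubles from 15 kHz, while the 10 ms frame is fixed (the constants below follow the specifications recited above; the loop values are illustrative).

```python
# NR slot length vs. subcarrier spacing (normal CP, 14 symbols per slot).
SYMBOLS_PER_SLOT = 14
FRAME_MS = 10.0

def slot_length_ms(scs_khz: int) -> float:
    return 1.0 / (scs_khz / 15)   # 15 kHz -> 1 ms, 30 kHz -> 0.5 ms, ...

for scs in (15, 30, 60, 120):
    slots = int(FRAME_MS / slot_length_ms(scs))
    print(f"{scs} kHz SCS: {slot_length_ms(scs)} ms slots, {slots} slots per 10 ms frame")
```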
  • a frame structure is an example flexible frame structure, e.g. for use in a 6G network or later.
  • a symbol block may be defined as the minimum duration of time that may be scheduled in the flexible frame structure.
  • a symbol block may be a unit of transmission having an optional redundancy portion (e.g. CP portion) and an information (e.g. data) portion.
  • An OFDM symbol is an example of a symbol block.
  • a symbol block may alternatively be called a symbol.
  • Embodiments of flexible frame structures include different parameters that may be configurable, e.g. frame length, subframe length, symbol block length, etc.
  • a non-exhaustive list of possible configurable parameters in some embodiments of a flexible frame structure includes:
  • each frame includes one or multiple downlink synchronization channels and/or one or multiple downlink broadcast channels, and each synchronization channel and/or broadcast channel may be transmitted in a different direction by different beamforming.
  • the frame length may take more than one possible value and be configured based on the application scenario. For example, autonomous vehicles may require relatively fast initial access, in which case the frame length may be set as 5 ms for autonomous vehicle applications. As another example, smart meters on houses may not require fast initial access, in which case the frame length may be set as 20 ms for smart meter applications.
  • a subframe might or might not be defined in the flexible frame structure, depending upon the implementation.
  • a frame may be defined to include slots, but no subframes.
  • the duration of the subframe may be configurable.
  • a subframe may be configured to have a length of 0.1 ms or 0.2 ms or 0.5 ms or 1 ms or 2 ms or 5 ms, etc.
  • the subframe length may be defined to be the same as the frame length or not defined.
  • slot configuration: a slot might or might not be defined in the flexible frame structure, depending upon the implementation. In frames in which a slot is defined, the definition of a slot (e.g. in time duration and/or in number of symbol blocks) may be configurable.
  • the slot configuration is common to all UEs or a group of UEs.
  • the slot configuration information may be transmitted to UEs in a broadcast channel or common control channel (s) .
  • the slot configuration may be UE specific, in which case the slot configuration information may be transmitted in a UE-specific control channel.
  • the slot configuration signaling can be transmitted together with frame configuration signaling and/or subframe configuration signaling.
  • the slot configuration can be transmitted independently from the frame configuration signaling and/or subframe configuration signaling.
  • the slot configuration may be system common, base station common, UE group common, or UE specific.
  • subcarrier spacing (SCS) is one parameter of scalable numerology, which may allow the SCS to range from 15 kHz to 480 kHz.
  • the SCS may vary with the frequency of the spectrum and/or maximum UE speed to minimize the impact of the Doppler shift and phase noise.
  • there may be separate transmission and reception frames and the SCS of symbols in the reception frame structure may be configured independently from the SCS of symbols in the transmission frame structure.
  • the SCS in a reception frame may be different from the SCS in a transmission frame.
  • the SCS of each transmission frame may be half the SCS of each reception frame.
  • the difference does not necessarily have to scale by a factor of two, e.g. if more flexible symbol durations are implemented using inverse discrete Fourier transform (IDFT) instead of fast Fourier transform (FFT) .
  • IDFT inverse discrete Fourier transform
  • FFT fast Fourier transform
  • the basic transmission unit may be a symbol block (alternatively called a symbol) , which in general includes a redundancy portion (referred to as the CP) and an information (e.g. data) portion, although in some embodiments the CP may be omitted from the symbol block.
  • the CP length may be flexible and configurable.
  • the CP length may be fixed within a frame or flexible within a frame, and the CP length may possibly change from one frame to another, or from one group of frames to another group of frames, or from one subframe to another subframe, or from one slot to another slot, or dynamically from one scheduling to another scheduling.
  • the information (e.g. data) portion may be flexible and configurable.
  • a symbol block length may be adjusted according to: channel conditions (e.g. multi-path delay, Doppler) ; and/or latency requirements; and/or available time duration.
  • a symbol block length may be adjusted to fit an available time duration in the frame.
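  • As a hedged sketch of this adaptation, the following chooses a CP long enough to absorb an assumed multipath delay spread and a subcarrier spacing large enough to tolerate an assumed Doppler shift; the 10x Doppler margin and the candidate SCS values are illustrative assumptions only.

```python
# Illustrative symbol-block parameter selection from channel conditions.
def choose_parameters(delay_spread_us: float, doppler_hz: float):
    cp_us = delay_spread_us                       # CP must cover the multipath delay spread
    min_scs_hz = 10 * doppler_hz                  # assumed margin: keep SCS well above Doppler
    scs_khz = next(s for s in (15, 30, 60, 120, 240, 480) if s * 1e3 >= min_scs_hz)
    symbol_us = 1e6 / (scs_khz * 1e3)             # useful symbol duration = 1 / SCS
    return cp_us, scs_khz, symbol_us

print(choose_parameters(delay_spread_us=2.0, doppler_hz=1500.0))
# -> (2.0, 15, ~66.7): 2 us CP, 15 kHz SCS, ~66.7 us useful symbol duration
```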
  • a frame may include both a downlink portion for downlink transmissions from a base station, and an uplink portion for uplink transmissions from UEs.
  • a gap may be present between each uplink and downlink portion, which is referred to as a switching gap.
  • the switching gap length (duration) may be configurable.
  • a switching gap duration may be fixed within a frame or flexible within a frame, and a switching gap duration may possibly change from one frame to another, or from one group of frames to another group of frames, or from one subframe to another subframe, or from one slot to another slot, or dynamically from one scheduling to another scheduling.
  • a device such as a base station, may provide coverage over a cell.
  • Wireless communication with the device may occur over one or more carrier frequencies.
  • a carrier frequency will be referred to as a carrier.
  • a carrier may alternatively be called a component carrier (CC) .
  • CC component carrier
  • a carrier may be characterized by its bandwidth and a reference frequency, e.g. the center or lowest or highest frequency of the carrier.
  • a carrier may be on licensed or unlicensed spectrum.
  • Wireless communication with the device may also or instead occur over one or more bandwidth parts (BWPs) .
  • BWPs bandwidth parts
  • a carrier may have one or more BWPs. More generally, wireless communication with the device may occur over spectrum.
  • the spectrum may comprise one or more carriers and/or one or more BWPs.
  • a cell may include one or multiple downlink resources and optionally one or multiple uplink resources, or a cell may include one or multiple uplink resources and optionally one or multiple downlink resources, or a cell may include both one or multiple downlink resources and one or multiple uplink resources.
  • a cell might only include one downlink carrier/BWP, or only include one uplink carrier/BWP, or include multiple downlink carriers/BWPs, or include multiple uplink carriers/BWPs, or include one downlink carrier/BWP and one uplink carrier/BWP, or include one downlink carrier/BWP and multiple uplink carriers/BWPs, or include multiple downlink carriers/BWPs and one uplink carrier/BWP, or include multiple downlink carriers/BWPs and multiple uplink carriers/BWPs.
  • a cell may instead or additionally include one or multiple sidelink resources, including sidelink transmitting and receiving resources.
  • a BWP is a set of contiguous or non-contiguous frequency subcarriers on a carrier, or a set of contiguous or non-contiguous frequency subcarriers on multiple carriers; that is, a BWP may span one or more carriers.
  • a carrier may have one or more BWPs, e.g. a carrier may have a bandwidth of 20 MHz and consist of one BWP, or a carrier may have a bandwidth of 80 MHz and consist of two adjacent contiguous BWPs, etc.
  • a BWP may have one or more carriers, e.g. a BWP may have a bandwidth of 40 MHz and consist of two adjacent contiguous carriers, where each carrier has a bandwidth of 20 MHz.
  • a BWP may comprise non-contiguous spectrum resources consisting of multiple non-contiguous carriers, where the first carrier of the non-contiguous multiple carriers may be in the mmW band, the second carrier may be in a low band (such as the 2 GHz band) , the third carrier (if it exists) may be in the THz band, and the fourth carrier (if it exists) may be in the visible light band.
  • Resources in one carrier which belong to the BWP may be contiguous or non-contiguous.
  • a BWP has non-contiguous spectrum resources on one carrier.
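  • A possible data structure for such a BWP, given as a sketch only (the type and field names are assumptions for illustration, not terms from the disclosure), is shown below.

```python
# Illustrative representation of a BWP built from contiguous or non-contiguous
# segments on one or more carriers.
from dataclasses import dataclass

@dataclass
class CarrierSegment:
    carrier_id: int
    start_subcarrier: int
    num_subcarriers: int

@dataclass
class Bwp:
    segments: list[CarrierSegment]

    def total_subcarriers(self) -> int:
        return sum(s.num_subcarriers for s in self.segments)

# a BWP composed of non-contiguous resources on two different carriers
bwp = Bwp([CarrierSegment(carrier_id=0, start_subcarrier=0, num_subcarriers=1200),
           CarrierSegment(carrier_id=3, start_subcarrier=600, num_subcarriers=600)])
print(bwp.total_subcarriers())  # 1800
```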
  • Wireless communication may occur over an occupied bandwidth.
  • the occupied bandwidth may be defined as the width of a frequency band such that, below the lower and above the upper frequency limits, the mean powers emitted are each equal to a specified percentage β/2 of the total mean transmitted power; for example, the value of β/2 is taken as 0.5%.
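  • The definition above can be computed directly from a power spectral density, as in the following sketch (the Gaussian-shaped spectrum is an assumed toy input; β/2 = 0.5% follows the example value above).

```python
# Occupied bandwidth: the band outside of which beta/2 of the mean power lies
# on each side (beta/2 = 0.005 here).
import numpy as np

def occupied_bandwidth(freqs, psd, half_beta=0.005):
    cdf = np.cumsum(psd / psd.sum())
    f_low = freqs[np.searchsorted(cdf, half_beta)]
    f_high = freqs[np.searchsorted(cdf, 1.0 - half_beta)]
    return f_low, f_high

freqs = np.linspace(-10e6, 10e6, 2001)            # Hz
psd = np.exp(-0.5 * (freqs / 2e6) ** 2)           # toy Gaussian-shaped spectrum
lo, hi = occupied_bandwidth(freqs, psd)
print((hi - lo) / 1e6, "MHz")                     # ~10.3 MHz (central 99% of the power)
```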
  • the carrier, the BWP, or the occupied bandwidth may be signaled by a network device (e.g. base station) dynamically, e.g. in physical layer control signaling such as downlink control information (DCI) , or semi-statically, e.g. in radio resource control (RRC) signaling or in the medium access control (MAC) layer; or be predefined based on the application scenario; or be determined by the UE as a function of other parameters that are known by the UE; or may be fixed, e.g. by a standard.
  • a network device e.g. base station
  • RRC radio resource control
  • MAC medium access control
  • frame timing and synchronization is established based on synchronization signals, such as a primary synchronization signal (PSS) and a secondary synchronization signal (SSS) .
  • PSS primary synchronization signal
  • SSS secondary synchronization signal
  • known frame timing and synchronization strategies involve adding a timestamp, e.g., (xx0: yy0: zz) , to a frame boundary, where xx0, yy0, zz in the timestamp may represent a time format such as hour, minute, and second, respectively.
  • the present disclosure relates generally to mobile, wireless communication and, in particular embodiments, to frame timing alignment/realignment, where the frame timing alignment/realignment may comprise a timing alignment/realignment in terms of a boundary of a symbol, a slot or a sub-frame within a frame, or of a frame itself (thus the frame timing alignment/realignment here is more general, and not limited to cases where a timing alignment/realignment is from a frame boundary only) .
  • relative timing to a frame or frame boundary should be interpreted in a more general sense, i.e., the frame boundary means a timing point of a frame element with the frame such as (starting or ending of) a symbol, a slot or subframe within a frame, or a frame.
  • the phrases “ (frame) timing alignment or timing realignment” and “relative timing to a frame boundary” are used in the more general sense described above.
  • aspects of the present application relate to a network device, such as a base station 170, referenced hereinafter as a TRP 170, transmitting signaling that carries a timing realignment indication message.
  • the timing realignment indication message includes information allowing a receiving UE 110 to determine a timing reference point.
  • transmission of frames by the UE 110 may be aligned.
  • the frames that become aligned are in different sub-bands of one carrier frequency band.
  • the frames that become aligned are found in neighboring carrier frequency bands.
  • aspects of the present application relate to use of one or more types of signaling to indicate the timing realignment (or/and timing correction) message.
  • Two example types of signaling are provided here to show the schemes.
  • the first example type of signaling may be referenced as cell-specific signaling, examples of which include group common signaling and broadcast signaling.
  • the second example type of signaling may be referenced as UE-specific signaling.
  • One of these two types of signaling or a combination of the two types of signaling may be used to transmit a timing realignment indication message.
  • the timing realignment indication message may be shown to notify one or more UEs 110 of a configuration of a timing reference point.
  • references, hereinafter, to the term “UE 110” may be understood to represent reference to a broad class of generic wireless communication devices within a cell (i.e., a network receiving node, such as a wireless device, a sensor, a gateway, a router, etc. ) being served by the TRP 170.
  • a timing reference point is a timing reference instant and may be expressed in terms of a relative timing, in view of a timing point in a frame, such as (starting or ending boundary of) a symbol, a slot or a sub-frame within a frame; or a frame.
  • the term “a frame boundary” is used to represent a boundary of possibly a symbol, a slot or a sub-frame within a frame; or a frame.
  • the timing reference point may be expressed in terms of a relative timing, in view of a current frame boundary, e.g., the start of the current frame.
  • the timing reference point may be expressed in terms of an absolute timing based on certain standards timing reference such as a GNSS (e.g., GPS) , Coordinated Universal Time ( “UTC” ) , etc.
  • GNSS e.g., GPS
  • UTC Coordinated Universal Time
  • the timing reference point may be shown to allow for timing adjustments to be implemented at the UEs 110.
  • the timing adjustments may be implemented for improvement of accuracy for a clock at the UE 110.
  • the timing reference point may be shown to allow for adjustments to be implemented in future transmissions made from the UEs 110.
  • the adjustments may be shown to cause realignment of transmitted frames at the timing reference point.
  • the realignment of transmitted frames at the timing reference point may comprise the timing realignment from (the starting boundary of) a symbol, a slot or a sub-frame within a frame; or a frame at the timing reference point for one or more UEs and one or more BSs (in a cell or a group of cells) , which applies across the application below.
  • the UE 110 may monitor for the timing realignment indication message. Responsive to receiving the timing realignment indication message, the UE 110 may obtain the timing reference point and take steps to cause frame realignment at the timing reference point. Those steps may, for example, include commencing transmission of a subsequent frame at the timing reference point.
  • the UE 110 may cause the TRP 170 to transmit the timing realignment indication message by transmitting, to the TRP 170, a request for a timing realignment, that is, a timing realignment request message.
  • the TRP 170 may transmit, to the UE 110, a timing realignment indication message including information on a timing reference point, thereby allowing the UE 110 to implement a timing realignment (or/and a timing adjustment including clock timing error correction) , wherein the timing realignment is in terms of (e.g., a starting boundary of) a symbol, a slot or a sub-frame within a frame; or a frame for UEs and base station (s) in a cell (or a group of cells) .
  • a TRP 170 associated with a given cell may transmit a timing realignment indication message.
  • the timing realignment indication message may include enough information to allow a receiver of the message to obtain a timing reference point.
  • the timing reference point may be used, by one or more UEs 110 in the given cell, when performing a timing realignment (or/and a timing adjustment including clock timing error correction) .
  • the timing reference point may be expressed, within the timing realignment indication message, relative to a frame boundary (where, as previously described and to be applicable below across the application, a frame boundary can be a boundary of a symbol, a slot or a sub-frame with a frame; or a frame) .
  • the timing realignment indication message may include a relative timing indication, Δt. It may be shown that the relative timing indication, Δt, expresses the timing reference point as occurring a particular duration, i.e., Δt, subsequent to a frame boundary for a given frame. Since the frame boundary is important to allowing the UE 110 to determine the timing reference point, it is important that the UE 110 be aware of the given frame that has the frame boundary of interest. Accordingly, the timing realignment indication message may also include a system frame number (SFN) for the given frame.
  • SFN system frame number
  • the SFN is a value in the range from 0 to 1023, inclusive. Accordingly, 10 bits may be used to represent an SFN.
  • the SFN may be indicated in a Master Information Block (MIB) transmitted on a Physical Broadcast Channel (PBCH) .
  • MIB Master Information Block
  • PBCH Physical Broadcast Channel
  • the timing realignment indication message may include other parameters.
  • the other parameters may, for example, include a minimum time offset.
  • the minimum time offset may establish a duration of time preceding the timing reference point.
  • the UE 110 may rely upon the minimum time offset as an indication that DL signaling, including the timing realignment indication message, will allow the UE 110 enough time to detect the timing realignment indication message to obtain information on the timing reference point.
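  • For illustration, the following sketch resolves a timing reference point from the message fields described above (SFN, relative timing indication Δt, minimum time offset); the 10 ms frame length and the function/field names are assumptions for the sketch.

```python
# Resolve the timing reference point = boundary of frame SFN + delta_t, and
# check that the minimum time offset leaves the UE enough lead time.
FRAME_MS = 10.0     # assumed frame length
SFN_WRAP = 1024     # SFN is 10 bits, 0..1023

def timing_reference_ms(sfn: int, delta_t_ms: float, now_ms: float,
                        min_offset_ms: float) -> float:
    ref = (sfn % SFN_WRAP) * FRAME_MS + delta_t_ms
    if ref - now_ms < min_offset_ms:
        raise ValueError("not enough lead time to apply the realignment")
    return ref

# frame 100 boundary at 1000 ms; reference point 2.5 ms later, 2 ms lead time required
print(timing_reference_ms(sfn=100, delta_t_ms=2.5, now_ms=1000.0, min_offset_ms=2.0))
```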
  • UE position information is often used in cellular communication networks to improve various performance metrics for the network.
  • performance metrics may, for example, include capacity, agility, and efficiency.
  • the improvement may be achieved when elements of the network exploit the position, the behavior, the mobility pattern, etc., of the UE in the context of a priori information describing a wireless environment in which the UE is operating.
  • a sensing system may be used to help gather UE pose information, including its location in a global coordinate system, its velocity and direction of movement in the global coordinate system, orientation information, and the information about the wireless environment. “Location” is also known as “position” and these two terms may be used interchangeably herein. Examples of well-known sensing systems include RADAR (Radio Detection and Ranging) and LIDAR (Light Detection and Ranging) . While the sensing system can be separate from the communication system, it could be advantageous to gather the information using an integrated system, which reduces the hardware (and cost) in the system as well as the time, frequency, or spatial resources needed to perform both functionalities.
  • the difficulty of the problem relates to factors such as the limited resolution of the communication system, the dynamicity of the environment, and the huge number of objects whose electromagnetic properties and position are to be estimated.
  • integrated sensing and communication also known as integrated communication and sensing
  • integrated communication and sensing is a desirable feature in existing and future communication systems
  • any or all of the EDs 110 and BS 170 may be sensing nodes in the communication system 100E as illustrated in FIG. 1E, which is an example sensing system in accordance with some example embodiments of the present disclosure.
  • Sensing nodes are network entities that perform sensing by transmitting and receiving sensing signals. Some sensing nodes are communication equipment that perform both communications and sensing. However, it is possible that some sensing nodes do not perform communications, and are instead dedicated to sensing.
  • FIG. 1E differs from FIG. 1B in that there is a sensing agent 195 in the communication system 100E, which is absent in FIG. 1B.
  • the sensing agent 195 is an example of a sensing node that is dedicated to sensing.
  • the sensing agent 195 does not transmit or receive communication signals. However, the sensing agent 195 may communicate configuration information, sensing information, signaling information, or other information within the communication system 100E. The sensing agent 195 may be in communication with the core network 130 to communicate information with the rest of the communication system 100E. By way of example, the sensing agent 195 may determine the location of the ED 110a, and transmit this information to the base station 170a via the core network 130. Although only one sensing agent 195 is shown in FIG. 1E, any number of sensing agents may be implemented in the communication system 100E. In some embodiments, one or more sensing agents may be implemented at one or more of the RANs 120.
  • a sensing node may combine sensing-based techniques with reference signal-based techniques to enhance UE pose determination.
  • This type of sensing node may also be known as a sensing management function (SMF) .
  • the SMF may also be known as a location management function (LMF) .
  • the SMF may be implemented as a physically independent entity located at the core network 130 with connection to the multiple BSs 170.
  • the SMF may be implemented as a logical entity co-located inside a BS 170 through logic carried out by the processor 260.
  • FIG. 1F illustrates an example apparatus 100F that may implement the methods and teachings according to this disclosure.
  • FIG. 1F illustrates an example SMF 176, which may be implemented in a UE 110, a system node 120, or a network node 130.
  • the SMF 176 may be specialized, or include specialized components, to support training and/or execution of AI models (e.g., training and/or execution of neural networks) .
  • the SMF 176 when implemented as a physically independent entity, includes at least one processor 290, at least one transmitter 282, at least one receiver 284, one or more antennas 286, and at least one memory 288.
  • a transceiver, not shown, may be used instead of the transmitter 282 and receiver 284.
  • a scheduler 283 may be coupled to the processor 290. The scheduler 283 may be included within or operated separately from the SMF 176.
  • the processor 290 implements various processing operations of the SMF 176, such as signal coding, data processing, power control, input/output processing, or any other functionality.
  • the processor 290 can also be configured to implement some or all of the functionality and/or embodiments described in more detail above.
  • Each processor 290 includes any suitable processing or computing device configured to perform one or more operations.
  • Each processor 290 could, for example, include a microprocessor, microcontroller, digital signal processor, field programmable gate array, or application specific integrated circuit.
  • a reference signal-based pose determination technique belongs to an “active” pose estimation paradigm.
  • the enquirer of pose information (i.e., the UE) may transmit or receive (or both) a signal specific to the pose determination process.
  • Positioning techniques based on a global navigation satellite system (GNSS) such as Global Positioning System (GPS) are other examples of the active pose estimation paradigm.
  • GNSS global navigation satellite system
  • GPS Global Positioning System
  • a sensing technique based on radar, for example, may be considered as belonging to a “passive” pose determination paradigm.
  • in a passive pose determination paradigm, the target is oblivious to the pose determination process.
  • By integrating sensing and communications in one system, the system need not operate according to only a single paradigm. Thus, the combination of sensing-based techniques and reference signal-based techniques can yield enhanced pose determination.
  • the enhanced pose determination may, for example, include obtaining UE channel sub-space information, which is particularly useful for UE channel reconstruction at the sensing node, especially for a beam-based operation and communication.
  • the UE channel sub-space is a subset of the entire algebraic space, defined over the spatial domain, in which the entire channel from the TP to the UE lies. Accordingly, the UE channel sub-space defines the TP-to-UE channel with very high accuracy.
  • the signals transmitted over other sub-spaces result in a negligible contribution to the UE channel.
  • Knowledge of the UE channel sub-space helps to reduce the effort needed for channel measurement at the UE and channel reconstruction at the network-side.
  • sensing-based techniques and reference signal-based techniques may enable the UE channel reconstruction with much less overhead as compared to traditional methods.
  • Sub-space information can also facilitate sub-space based sensing to reduce sensing complexity and improve sensing accuracy.
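  • The benefit of sub-space knowledge can be illustrated with a generic linear-algebra sketch (not a method from the disclosure): if the TP-to-UE channel lies in the span of a known basis, it can be reconstructed from far fewer coefficients than antenna elements.

```python
# Channel reconstruction from a k-dimensional sub-space basis U (k << N).
import numpy as np

rng = np.random.default_rng(2)
N, k = 64, 4                                            # antennas, sub-space dimension
U, _ = np.linalg.qr(rng.normal(size=(N, k)) + 1j * rng.normal(size=(N, k)))
h = U @ (rng.normal(size=k) + 1j * rng.normal(size=k))  # channel inside the sub-space

coeffs = U.conj().T @ h     # only k coefficients need to be measured/fed back
h_hat = U @ coeffs          # reconstruction at the network side
print(np.linalg.norm(h - h_hat))  # ~0: the sub-space captures the channel
```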
  • a same radio access technology (RAT) is used for sensing and communication. This avoids the need to multiplex two different RATs under one carrier spectrum, or to use two different carrier spectrums for the two different RATs.
  • a first set of channels may be used to transmit a sensing signal
  • a second set of channels may be used to transmit a communications signal.
  • each channel in the first set of channels and each channel in the second set of channels is a logical channel, a transport channel, or a physical channel.
  • communication and sensing may be performed via separate physical channels.
  • a first physical downlink shared channel, PDSCH-C, is defined for data communication, while a second physical downlink shared channel, PDSCH-S, is defined for sensing.
  • separate physical uplink shared channels (PUSCH) , PUSCH-C and PUSCH-S could be defined for uplink communication and sensing.
  • control channel (s) and data channel (s) for sensing can have the same or different channel structures (formats) , and can occupy the same or different frequency bands or bandwidth parts.
  • a common physical downlink control channel (PDCCH) and a common physical uplink control channel (PUCCH) are used to carry control information for both sensing and communication.
  • separate physical layer control channels may be used to carry separate control information for communication and sensing.
  • PUCCH-S and PUCCH-C could be used for uplink control for sensing and communication, respectively, and PDCCH-S and PDCCH-C for downlink control for sensing and communication, respectively.
  • RADAR originates from the phrase Radio Detection and Ranging; however, expressions with different forms of capitalization (i.e., Radar and radar) are equally valid and now more common.
  • Radar is typically used for detecting a presence and a location of an object.
  • a radar system radiates radio frequency energy and receives echoes of the energy reflected from one or more targets. The system determines the pose of a given target based on the echoes returned from the given target.
  • the radiated energy can be in the form of an energy pulse or a continuous wave, which can be expressed or defined by a particular waveform. Examples of waveforms used in radar include frequency modulated continuous wave (FMCW) and ultra-wideband (UWB) waveforms.
  • FMCW frequency modulated continuous wave
  • UWB ultra-wideband
  • Radar systems can be monostatic, bi-static, or multi-static.
  • a monostatic radar system the radar signal transmitter and receiver are co-located, such as being integrated in a transceiver.
  • a bi-static radar system the transmitter and receiver are spatially separated, and the distance of separation is comparable to, or larger than, the expected target distance (often referred to as the range) .
  • a multi-static radar system two or more radar components are spatially diverse but with a shared area of coverage.
  • a multi-static radar is also referred to as a multisite or netted radar.
  • Terrestrial radar applications encounter challenges such as multipath propagation and shadowing impairments. Another challenge is the problem of identifiability because terrestrial targets have similar physical attributes. Integrating sensing into a communication system is likely to suffer from these same challenges, and more.
  • Communication nodes can be either half-duplex or full-duplex.
  • a half-duplex node cannot both transmit and receive using the same physical resources (time, frequency, etc. ) ; conversely, a full-duplex node can transmit and receive using the same physical resources.
  • Existing commercial wireless communications networks are all half-duplex. Even if full-duplex communications networks become practical in the future, it is expected that at least some of the nodes in the network will still be half-duplex nodes because half-duplex devices are less complex, and have lower cost and lower power consumption. In particular, full-duplex implementation is more challenging at higher frequencies (e.g. in the millimeter wave bands) , and very challenging for small and low-cost devices, such as femtocell base stations and UEs.
  • half-duplex nodes in the communications network presents further challenges toward integrating sensing and communications into the devices and systems of the communications network.
  • both half-duplex and full-duplex nodes can perform bi-static or multi-static sensing, but monostatic sensing typically requires that the sensing node have full-duplex capability.
  • a half-duplex node may perform monostatic sensing with certain limitations, such as in a pulsed radar with a specific duty cycle and ranging capability.
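  • The pulsed-radar limitation above reduces to simple arithmetic, sketched below: during a pulse of width τ the half-duplex node cannot receive, so nearby echoes are lost, and the pulse repetition interval bounds the unambiguous range (the example numbers are assumptions for illustration).

```python
# Blind range, maximum unambiguous range, and duty cycle of a pulsed radar.
C = 3e8  # speed of light, m/s

def pulsed_radar_ranges(tau_s: float, pri_s: float):
    blind_range_m = C * tau_s / 2        # echo returns while still transmitting
    max_unambiguous_m = C * pri_s / 2    # echo must return before the next pulse
    return blind_range_m, max_unambiguous_m, tau_s / pri_s

print(pulsed_radar_ranges(tau_s=1e-6, pri_s=100e-6))
# -> (150.0, 15000.0, 0.01): 150 m blind range, 15 km unambiguous range, 1% duty cycle
```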
  • Sensing signal waveform and frame structure will now be described.
  • Properties of a sensing signal, or a signal used for both sensing and communication include the waveform of the signal and the frame structure of the signal.
  • the frame structure defines the time-domain boundaries of the signal.
  • the waveform describes the shape of the signal as a function of time and frequency. Examples of waveforms that can be used for a sensing signal include ultra-wide band (UWB) pulse, Frequency-Modulated Continuous Wave (FMCW) or “chirp” , orthogonal frequency-division multiplexing (OFDM) , cyclic prefix (CP) -OFDM, and Discrete Fourier Transform spread (DFT-s) -OFDM.
  • UWB ultra-wide band
  • FMCW Frequency-Modulated Continuous Wave
  • OFDM orthogonal frequency-division multiplexing
  • CP cyclic prefix
  • DFT-s Discrete Fourier Transform spread
  • the sensing signal is a linear chirp signal with bandwidth B and time duration T.
  • a linear chirp signal is generally known from its use in FMCW radar systems.
  • such a linear chirp signal can be presented in the baseband representation as a function of the bandwidth B and the time duration T; a commonly used form is shown below.
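  • for illustration, a commonly used baseband form of such a linear chirp (a standard FMCW expression, given here as an assumed example rather than a form taken from this disclosure) is:

$$x(t) = \exp\!\left(j\pi \frac{B}{T}\, t^{2}\right), \qquad -\frac{T}{2} \le t \le \frac{T}{2}$$

  • with this form, the instantaneous frequency f (t) = (B/T) t sweeps linearly from −B/2 to B/2, i.e., across the full bandwidth B over the duration T.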
  • Precoding as used herein may refer to any coding operation (s) or modulation (s) that transform an input signal into an output signal. Precoding may be performed in different domains, and typically transforms an input signal in a first domain into an output signal in a second domain. Precoding may include linear operations.
  • a terrestrial communication system may also be referred to as a land-based or ground-based communication system, although a terrestrial communication system can also, or instead, be implemented on or in water.
  • the non-terrestrial communication system may bridge the coverage gaps for underserved areas by extending the coverage of cellular networks through non-terrestrial nodes, which will be key to ensuring global seamless coverage and providing mobile broadband services to unserved/underserved regions where it is hardly possible to deploy terrestrial access-point/base-station infrastructure, such as oceans, mountains, forests, or other remote areas.
  • the terrestrial communication system may be a wireless communication system using 5G technology and/or later generation wireless technology (e.g., 6G or later) .
  • the terrestrial communication system may also accommodate some legacy wireless technology (e.g., 3G or 4G wireless technology) .
  • the non-terrestrial communication system may be a communication system using satellite constellations, such as Geo-Stationary Orbit (GEO) satellites, which broadcast public/popular contents to a local server; Low Earth Orbit (LEO) satellites, which establish a better balance between large coverage area and propagation path-loss/delay; satellites in Very Low Earth Orbit (VLEO) , whose enabling technologies substantially reduce the costs of launching satellites to lower orbits; High Altitude Platforms (HAPs) , which provide a low path-loss air interface for users with a limited power budget; or Unmanned Aerial Vehicles (UAVs) (or unmanned aerial systems (UAS) ) , such as airborne platforms, balloons, quadcopters and drones, which can achieve a dense deployment since their coverage can be limited to a local area.
  • networks of GEO satellites, LEO satellites, UAVs, HAPs and VLEOs may be horizontal and two-dimensional, while emerging 3D vertical networks, in which UAVs, HAPs and VLEOs are coupled to integrate satellite communications into cellular networks, consist of many moving (other than geostationary satellites) and high-altitude access points such as UAVs, HAPs and VLEOs.
  • the above ED 110, T-TRP 170, and/or NT-TRP 172 may use multiple-input multiple-output (MIMO) to communicate over the wireless resource blocks.
  • MIMO utilizes multiple antennas at the transmitter and/or receiver to transmit wireless resource blocks over parallel wireless signals.
  • MIMO may beamform parallel wireless signals for reliable multipath transmission of a wireless resource block.
  • MIMO may bond parallel wireless signals that transport different data to increase the data rate of the wireless resource block.
  • the T-TRP 170 and/or NT-TRP 172 is generally configured with more than ten antenna units (such as 128 or 256) , and simultaneously serves dozens of EDs 110 (such as 40) .
  • a large number of antenna units of the T-TRP 170 and NT-TRP 172 can greatly increase the spatial degrees of freedom of wireless communication, greatly improve the transmission rate, spectrum efficiency and power efficiency, and largely eliminate inter-cell interference.
  • each antenna unit can also be made in a smaller size at a lower cost.
  • the T-TRP 170, and NT-TRP 172 of each cell can communicate with many ED 110 in the cell on the same time-frequency resource at the same time, thus greatly increasing the spectrum efficiency.
  • a large number of antenna units of the T-TRP 170 and/or NT-TRP 172 also enables each user to have better spatial directivity for uplink and downlink transmission, so that the transmitting power of the T-TRP 170, and/or NT-TRP 172 and an ED 110 is noticeably reduced, and the power efficiency is greatly increased.
  • when the number of antennas of the T-TRP 170 and/or NT-TRP 172 is sufficiently large, the random channels between each ED 110 and the T-TRP 170 and/or NT-TRP 172 can approach orthogonality, so that inter-cell and inter-user interference and the effect of noise can be largely eliminated.
  • the plurality of advantages described above give large-scale MIMO a promising application prospect.
  • a MIMO system may include a receiver connected to a receive (Rx) antenna, a transmitter connected to transmit (Tx) antenna, and a signal processor connected to the transmitter and the receiver.
  • Each of the Rx antenna and the Tx antenna may include a plurality of antennas.
  • the Rx antenna may have a uniform linear array (ULA) in which the plurality of antennas are arranged in a line at even intervals.
  • a non-exhaustive list of possible units or possible configurable parameters in some embodiments of a MIMO system includes:
  • Panel: a unit of an antenna group, antenna array, or antenna sub-array that can control its Tx or Rx beam independently.
  • a beam is formed by performing amplitude and/or phase weighting on data transmitted or received by at least one antenna port, or may be formed by using another method, for example, adjusting a related parameter of an antenna unit.
  • the beam may include a Tx beam and/or a Rx beam.
  • the transmit beam indicates distribution of signal strength formed in different directions in space after a signal is transmitted through an antenna.
  • the receive beam indicates the distribution, in different directions in space, of the signal strength of a wireless signal received by an antenna.
  • the beam information may be a beam identifier, or antenna port (s) identifier, or CSI-RS resource identifier, or SSB resource identifier, or SRS resource identifier, or other reference signal resource identifier.
  • Artificial Intelligence technologies can be applied in communication, including artificial intelligence or machine learning (AI/ML) based communication in the physical layer and/or AI/ML based communication in the higher layer, e.g., medium access control (MAC) layer.
  • the AI/ML based communication may aim to optimize component design and/or improve the algorithm performance.
  • the AI/ML based communication may aim to utilize the AI/ML capability for learning, prediction, and/or making a decision to solve a complicated optimization problem with a possibly better strategy and/or optimal solution, e.g. to optimize functionality in the MAC layer, such as intelligent TRP management, intelligent beam management, intelligent channel resource allocation, intelligent power control, intelligent spectrum utilization, intelligent modulation and coding scheme (MCS) , intelligent hybrid automatic repeat request (HARQ) strategy, intelligent transmit/receive (Tx/Rx) mode adaption, etc.
  • Data is a critically important component of AI/ML techniques.
  • Data collection is a process of collecting data by the network nodes, management entity, or UE for the purpose of AI/ML model training, data analytics and inference.
  • AI/ML model training is a process to train an AI/ML Model by learning the input/output relationship in a data driven manner and obtain the trained AI/ML Model for inference.
  • AI/ML model inference is a process of using a trained AI/ML model to produce a set of outputs based on a set of inputs.
  • validation is a sub-process of training used to evaluate the quality of an AI/ML model using a dataset different from the one used for model training. Validation can help select model parameters that generalize beyond the dataset used for model training. The model parameters obtained after training can be further adjusted by the validation process.
  • testing is also a sub-process of training, and it is used to evaluate the performance of a final AI/ML model using a dataset different from the ones used for model training and validation. Unlike AI/ML model validation, testing does not assume subsequent tuning of the model.
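  • as a minimal sketch of the training/validation/testing split described above (the toy ridge-regression model and all names below are illustrative assumptions, not part of this disclosure) :

```python
# Minimal sketch of the training / validation / testing split described above.
# The toy ridge-regression model and all names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                              # collected features
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=1000)    # labels

# Three disjoint datasets: training, validation, testing.
X_train, y_train = X[:600], y[:600]
X_val, y_val = X[600:800], y[600:800]
X_test, y_test = X[800:], y[800:]

def fit_ridge(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Validation selects the hyper-parameter that generalizes best beyond the
# dataset used for model training.
best_lam = min([0.01, 0.1, 1.0, 10.0],
               key=lambda lam: mse(fit_ridge(X_train, y_train, lam), X_val, y_val))
w = fit_ridge(X_train, y_train, best_lam)

# Testing evaluates the final model on held-out data, with no further tuning.
print("selected lambda:", best_lam, "test MSE:", mse(w, X_test, y_test))
```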
  • Online training means an AI/ML training process where the model being used for inference is typically continuously trained in (near) real-time with the arrival of new training samples.
  • Offline training means an AI/ML training process where the model is trained based on a collected dataset, and where the trained model is later used or delivered for inference.
  • the lifecycle management (LCM) of AI/ML models is essential for sustainable operation of AI/ML in NR air-interface.
  • Life cycle management covers the whole procedure of AI/ML technologies applied on one or more nodes.
  • it includes at least one of the following sub-processes: data collection, model training, model identification, model registration, model deployment, model configuration, model inference, model selection, model activation, model deactivation, model switching, model fallback, model monitoring, model update, model transfer/delivery and UE capability report.
  • Model monitoring can be based on inference accuracy, including metrics related to intermediate key performance indicators (KPIs) , and it can also be based on system performance, including metrics related to system performance KPIs, e.g., accuracy and relevance, overhead, complexity (computation and memory cost) , latency (timeliness of the monitoring result, from model failure to action) and power consumption.
  • data distribution may shift after deployment due to environment changes; thus, monitoring the model based on its input or output data distribution should also be considered.
  • the goal of supervised learning algorithms is to train a model that maps feature vectors (inputs) to labels (output) , based on the training data which includes the example feature-label pairs.
  • the supervised learning can analyze the training data and produce an inferred function, which can be used for mapping the inference data.
  • Supervised learning can be further divided into two types: Classification and Regression.
  • Classification is used when the output of the AI/ML model is categorical i.e. with two or more classes.
  • Regression is used when the output of the AI/ML model is a real or continuous value.
  • the unsupervised methods learn concise representations of the input data without the labelled data, which can be used for data exploration or to analyze or generate new data.
  • One typical unsupervised learning method is clustering, which explores the hidden structure of input data and provides classification results for the data.
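  • the difference between the two learning styles can be illustrated with a short sketch: a nearest-centroid classifier learned from feature-label pairs (supervised classification) , and k-means clustering that finds structure without labels (unsupervised) ; the toy data and all names are illustrative assumptions only:

```python
# Illustrative contrast between supervised and unsupervised learning on toy
# 2-D data; all names and data are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),   # class 0 blob
               rng.normal(5.0, 1.0, (50, 2))])  # class 1 blob
labels = np.array([0] * 50 + [1] * 50)          # labels exist only for supervised use

# Supervised classification: a nearest-centroid classifier learned from
# example feature-label pairs.
centroids = np.array([X[labels == c].mean(axis=0) for c in (0, 1)])
def classify(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

# Unsupervised clustering (k-means): hidden structure found without labels.
centers = X[rng.choice(len(X), size=2, replace=False)]
for _ in range(10):
    assign = np.argmin(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2), axis=1)
    centers = np.array([X[assign == c].mean(axis=0) for c in (0, 1)])

print("classified:", classify(np.array([4.8, 5.2])), "cluster centers:", centers)
```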
  • Reinforcement learning is used to solve sequential decision-making problems.
  • Reinforcement learning is a process of training the actions of an intelligent agent from an input (state) and a feedback signal (reward) in an environment.
  • an intelligent agent interacts with an environment by taking actions to maximize the cumulative reward. Whenever the intelligent agent takes an action, the current state of the environment may transfer to a new state, and the new state resulting from the action brings an associated reward. The intelligent agent can then take the next action based on the received reward and the new state of the environment.
  • the agent interacts with the environment to collect experience. The environment is often mimicked by a simulator, since it is expensive to interact directly with the real system.
  • the agent can use the optimal decision-making rule learned from the training phase to achieve the maximal accumulated reward.
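  • a minimal sketch of this agent-environment loop (tabular Q-learning on a toy chain environment; the environment and all names are illustrative assumptions, not part of this disclosure) :

```python
# Minimal sketch of the agent-environment loop described above, using tabular
# Q-learning on a toy chain environment; all names are illustrative assumptions.
import random

N_STATES, ACTIONS = 5, (0, 1)            # action 0 = move left, 1 = move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1        # learning rate, discount, exploration

def env_step(state, action):
    """Simulated environment: reward 1.0 only when the right end is reached."""
    new_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    return new_state, 1.0 if new_state == N_STATES - 1 else 0.0

for _ in range(500):                     # training phase: collect experience
    state = 0
    for _ in range(20):
        # the agent takes an action (epsilon-greedy on the current state)
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        new_state, reward = env_step(state, action)   # environment feedback
        # update the decision rule from (state, action, reward, new_state)
        best_next = max(Q[(new_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = new_state

# after training, the agent follows the learned greedy policy
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)])
```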
  • Federated learning is a machine learning technique that is used to train an AI/ML model by a central node (e.g., server) and a plurality of decentralized edge nodes (e.g., UEs, next Generation NodeBs, “gNBs” ) .
  • a server may provide, to an edge node, a set of model parameters (e.g., weights, biases, gradients) that describe a global AI/ML model.
  • the edge node may initialize a local AI/ML model with the received global AI/ML model parameters.
  • the edge node may then train the local AI/ML model using local data samples to, thereby, produce a trained local AI/ML model.
  • the edge node may then provide, to the server, a set of AI/ML model parameters that describe the trained local AI/ML model.
  • the server may aggregate the local AI/ML model parameters reported from the plurality of edge nodes (e.g., UEs) and, based on such aggregation, update the global AI/ML model. A subsequent iteration progresses much like the first iteration.
  • the server may transmit the aggregated global model to the plurality of edge nodes. The above procedure is performed for multiple iterations until the global AI/ML model is considered finalized, e.g., the AI/ML model has converged or the training stopping conditions are satisfied.
  • the wireless FL technique does not involve exchange of local data samples. Indeed, the local data samples remain at respective edge nodes.
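  • a minimal sketch of this federated averaging loop, assuming a simple linear model and synthetic local datasets (all names are illustrative; only parameters, never local data samples, move between nodes) :

```python
# Minimal sketch of the federated averaging procedure described above; node
# count, data and the linear model are illustrative assumptions. Only model
# parameters are exchanged; local data samples never leave the edge nodes.
import numpy as np

rng = np.random.default_rng(2)
true_w = rng.normal(size=4)

local_data = []                            # local datasets stay at edge nodes
for _ in range(3):
    X = rng.normal(size=(100, 4))
    local_data.append((X, X @ true_w + 0.05 * rng.normal(size=100)))

global_w = np.zeros(4)                     # global AI/ML model at the server
for _ in range(20):                        # iterate until convergence/stop
    local_ws = []
    for X, y in local_data:                # each edge node:
        w = global_w.copy()                #   initialize from global parameters
        for _ in range(5):                 #   local training (gradient steps)
            w -= 0.01 * (2.0 / len(y)) * X.T @ (X @ w - y)
        local_ws.append(w)                 #   report local parameters to server
    global_w = np.mean(local_ws, axis=0)   # server aggregates and updates

print("parameter error:", np.linalg.norm(global_w - true_w))
```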
  • AI technologies may be applied in communication, including AI-based communication in the physical layer and/or AI-based communication in the MAC layer.
  • the AI communication may aim to optimize component design and/or improve the algorithm performance.
  • AI may be applied in relation to the implementation of: channel coding, channel modelling, channel estimation, channel decoding, modulation, demodulation, MIMO, waveform, multiple access, physical layer element parameter optimization and update, beam forming, tracking, sensing, and/or positioning, etc.
  • the AI communication may aim to utilize the AI capability for learning, prediction, and/or making a decision to solve a complicated optimization problem with possible better strategy and/or optimal solution, e.g. to optimize the functionality in the MAC layer.
  • AI may be applied to implement: intelligent TRP management, intelligent beam management, intelligent channel resource allocation, intelligent power control, intelligent spectrum utilization, intelligent MCS, intelligent HARQ strategy, and/or intelligent transmission/reception mode adaption, etc.
  • An AI architecture may involve multiple nodes, where the multiple nodes may possibly be organized in one of two modes, i.e., centralized and distributed, both of which may be deployed in an access network, a core network, or an edge computing system or third party network.
  • a centralized training and computing architecture may be restricted by potentially large communication overhead and strict user data privacy requirements.
  • a distributed training and computing architecture may comprise several frameworks, e.g., distributed machine learning and federated learning.
  • an AI architecture may comprise an intelligent controller which can perform as a single agent or a multi-agent, based on joint optimization or individual optimization. New protocols and signaling mechanisms are desired so that the corresponding interface link can be personalized with customized parameters to meet particular requirements while minimizing signaling overhead and maximizing the whole system spectrum efficiency by personalized AI technologies.
  • New protocols and signaling mechanisms are provided for operating within and switching between different modes of operation, including between AI and non-AI modes, and for measurement and feedback to accommodate the different possible measurements and information that may need to be fed back, depending upon the implementation.
  • An air interface that uses AI as part of the implementation, e.g. to optimize one or more components of the air interface, will be referred to herein as an “AI enabled air interface” .
  • there may be two types of AI operation in an AI enabled air interface: both the network and the UE implement learning, or learning is only applied by the network.
  • AI-related communications between the system node 120 and one or more UEs 110 may be via an interface such as the Uu link in 5G and 4G network systems, or may be via an AI-dedicated air interface (e.g., using an AI-related protocol on an AI-related logical layer, as discussed herein) .
  • AI-related communications between a system node 120 and a UE 110 served by the system node 120 may be over an AI-dedicated air interface, whereas non-AI-related communications may be over a 5G or 4G Uu link.
  • FIG. 1G illustrates a schematic diagram of an example model 100G in accordance with some example embodiments of the present disclosure.
  • the pre-trained big model is also referred to as a global model, or a foundation model.
  • the pre-trained big model may be deployed at the core network (CN) or a third party to support multiple tasks.
  • the pre-trained big model 100G is utilized here as a basis for AI tasks at the radio access network (RAN) side.
  • for a RAN node (e.g. a BS) , fragmented per-task AI models are too expensive (because individual hardware would need to be prepared for each AI model) and not efficient.
  • the RAN side can obtain a basic customized model from the global model (e.g., the customized model is a smaller model than the global model) , and perform fine-tuning on the local model. This is the basic technical concept of some embodiments of this disclosure, and will be described later in more detail with reference to FIGS. 2-8.
  • To support the use of AI in a wireless network, an appropriate AI framework is needed.
  • 5G only considers AI use cases that improve network performance; it does not support the network providing AI services to the UE.
  • the network (NW) , which has data and computing capability, could provide AI services through distributed training/inference.
  • sensing functionality is not considered in the current 5G AI framework.
  • FIG. 2 illustrates a flowchart of an example method 200 implemented in an AI/ML functional framework in accordance with some example embodiments of the present disclosure. Only for the purpose of discussion, the method 200 will be described with reference to FIGS. 1A-1G. The method 200 may involve an AI/ML functional framework, which will be described in more detail with reference to FIG. 3.
  • the method 200 includes, at 210, performing at least one operation based on an AI/ML functional framework, for example, the AI/ML function framework 300 as illustrated in FIG. 3.
  • the AI/ML function framework 300 may be implemented to provide integrated AI and sensing.
  • FIG. 3 illustrates a schematic AI/ML functional framework 300 in accordance with some example embodiments of the present disclosure.
  • the AI/ML functional framework 300 may include a first function 340, a second function 345, a third function 350, a fourth function 355 and at least one function 360 configured to operate based on sensing data.
  • the first function 340 may be configured to perform model training of at least one of an AI/ML model, an AI/ML sub-model, an AI/ML functionality or an AI/ML sub-functionality.
  • the second function 345 may be configured to perform management of the AI/ML model, AI/ML sub-model, AI/ML functionality or AI/ML sub-functionality.
  • the third function 350 may be configured to perform inference of the AI/ML model to obtain inference results.
  • the fourth function 355 may be configured to store the AI/ML model.
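  • the split of responsibilities among these functions can be pictured with a short sketch; the class and method names below are hypothetical illustrations of the framework 300, not definitions from this disclosure:

```python
# Hypothetical sketch of the functional split in the framework 300 of FIG. 3;
# all class, method and field names are illustrative assumptions, not
# definitions from this disclosure.
from dataclasses import dataclass, field
from typing import Any, Dict, List

class ModelTraining:                         # first function 340
    def train(self, data: List[Any]) -> Any:
        """Train an AI/ML model (or sub-model/functionality) on input data."""
        return {"weights": len(data)}        # placeholder trained model

class ModelManagement:                       # second function 345
    def monitor(self, performance: float, threshold: float) -> bool:
        """Return True (i.e., request retraining) when performance degrades."""
        return performance < threshold

class ModelInference:                        # third function 350
    def __init__(self, model: Any) -> None:
        self.model = model
    def infer(self, inputs: Any) -> Any:
        return {"output": inputs}            # placeholder inference results

@dataclass
class ModelStorage:                          # fourth function 355
    models: Dict[str, Any] = field(default_factory=dict)
    def put(self, model_id: str, model: Any) -> None:
        self.models[model_id] = model
    def get(self, model_id: str) -> Any:
        return self.models[model_id]

class SensingDataCollection:                 # among the function(s) 360
    def collect(self) -> List[Any]:
        return []                            # placeholder sensing data

# Example flow: collect sensing data -> train -> store -> retrieve -> infer.
storage = ModelStorage()
storage.put("m1", ModelTraining().train(SensingDataCollection().collect()))
print(ModelInference(storage.get("m1")).infer([0.1, 0.2]))
```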
  • where a specific function is described as being configured to do something, it may mean that the specific function is configured, for example, by a base station device or a core network device, to do something, or that the specific function is pre-defined, for example, in a specification (for example, in a 3GPP specification) , to do something.
  • sensing data collection can also be referred to as data collection, 3GPP sensing data collection, 3GPP and non-3GPP sensing data collection, data measurement, sensing measurement, etc.
  • Sensing modeling can also be referred to as sensing results processing, sensing information processing, sensing data processing, sensing measurement processing, environment information processing, object information processing, or environment and object information processing.
  • Sensing results storage can also be referred to as sensing storage, RAN storage, local RAN storage, or RAN and core network storage.
  • Sensing management can also be referred to as sensing control, sensing results management, or simply management.
  • Sensing application can also be referred to as sensing action, sensing in RAN, sensing usage, sensing use cases, sensing assisted communication, sensing service, sensing assisted communication and sensing service, etc.
  • the first function 340 may be further configured to perform validation or testing of the AI/ML model, AI/ML sub-model, AI/ML functionality or AI/ML sub-functionality. Alternatively or in addition, the first function 340 may be further configured to perform data preparation based on data received by the first function 340. In this way, the first function 340 can provide a more accurate AI/ML model, which in turn can provide more accurate inference results, thus the reliability of the AI/ML model can be improved.
  • the second function 345 may be further configured to perform control of the model training of the at least one of AI/ML model, AI/ML sub-model, AI/ML functionality or AI/ML sub-functionality. Alternatively or in addition, the second function 345 may be further configured to perform control of the inference of the AI/ML model. Alternatively or in addition, the second function 345 may be further configured to monitor output of the AI/ML model. In this way, the second function 345 can facilitate the first function to provide a more accurate AI/ML model, which in turn can provide more accurate inference results, thus the reliability of the AI/ML model can be improved.
  • the third function 350 may be further configured to perform an action based on the inference results. Alternatively or in addition, the third function 350 may be further configured to perform data preparation based on data received by the third function 350. In this way, the third function 350 can perform the action based on the inference results of the AI/ML model, improving the processing efficiency and reliability with the AI/ML model.
  • the first function 340 may transmit the trained AI/ML model to the fourth function 355. Alternatively or in addition, the first function 340 may receive AI/ML assistance information from the second function 345. Alternatively or in addition, the first function 340 may receive, from the second function 345, a performance level of the AI/ML model and a request to retrain the AI/ML model. In this way, the first function 340 can provide a more accurate (re) trained AI/ML model based on the AI/ML assistance information and/or the performance level of the AI/ML model. The (re) trained AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the (re) trained AI/ML model can be improved.
  • the second function 345 may receive the inference results from the third function 350. In this way, the second function 345 can facilitate the first function to provide a more accurate retrained/updated AI/ML model based on the inference results of the current AI/ML model.
  • the retrained/updated AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the AI/ML model can be improved.
  • the second function 345 may determine that a performance level of the AI/ML model is below a threshold level based on the inference results received from the third function 350. Based on determining that the performance level is below the threshold level, the second function 345 may further transmit, to the first function 340, the performance level of the AI/ML model and a request to retrain the AI/ML model. In this way, the second function 345 can request the first function 340 to retrain the AI/ML model in response to the performance level of the currently used AI/ML model becoming below a threshold level. In this sense, the second function 345 can facilitate the first function 340 to provide a more accurate retrained/updated AI/ML model based on the inference results of the current AI/ML model. The retrained/updated AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model.
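  • a minimal sketch of this monitoring behavior, assuming accuracy over labelled inference results as the performance level (all names below are illustrative assumptions) :

```python
# Minimal sketch of the monitoring behaviour described above: the management
# function compares a performance level derived from inference results against
# a threshold and requests retraining; all names are illustrative assumptions.
def monitor_and_maybe_retrain(inference_results, threshold, request_retrain):
    """Call request_retrain(performance) only when performance < threshold."""
    correct = sum(1 for r in inference_results if r["predicted"] == r["actual"])
    performance = correct / max(len(inference_results), 1)
    if performance < threshold:
        request_retrain(performance)   # report performance level with request
    return performance

results = [{"predicted": 1, "actual": 1}, {"predicted": 0, "actual": 1}]
monitor_and_maybe_retrain(results, threshold=0.9,
                          request_retrain=lambda p: print("retrain, perf =", p))
```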
  • the second function 345 may transmit AI/ML assistance information to the first function 340. Alternatively or in addition, the second function 345 may transmit, to the third function 350, a switching indication to switch from the AI/ML model to another AI/ML model.
  • the second function 345 may transmit, to the third function 350, a fallback indication to apply a non-AI/ML model instead of the AI/ML model.
  • the second function 345 may transmit, to the third function 350, an activating indication to activate one or more of a plurality of candidate AI/ML models.
  • the second function 345 may transmit, to the third function 350, a deactivating indication to deactivate one or more of the plurality of candidate AI/ML models.
  • the second function 345 can provide the AI/ML assistance information to the first function 340 to obtain a more accurate (re) trained AI/ML model based on the AI/ML assistance information.
  • the retrained/updated AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model.
  • the second function 345 can change/switch/ (de) select a desired AI/ML model for future use, improving the flexibility in management on the third function 350 and further the whole AI/ML functional framework 300.
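  • the four indications can be summarized with a small sketch (the enum and handler below are illustrative assumptions, not a signaling definition from this disclosure) :

```python
# Sketch of the four indications the management function (345) may send to the
# inference function (350); the enum and handler are illustrative assumptions,
# not a signaling definition from this disclosure.
from enum import Enum, auto

class Indication(Enum):
    SWITCH = auto()       # switch from the current AI/ML model to another one
    FALLBACK = auto()     # apply a non-AI/ML method instead of the AI/ML model
    ACTIVATE = auto()     # activate one or more candidate AI/ML models
    DEACTIVATE = auto()   # deactivate one or more candidate AI/ML models

def handle(indication: Indication, active: set, model_id: str = "") -> set:
    if indication is Indication.SWITCH:
        active.clear()
        active.add(model_id)
    elif indication is Indication.FALLBACK:
        active.clear()                  # empty set means non-AI/ML fallback
    elif indication is Indication.ACTIVATE:
        active.add(model_id)
    elif indication is Indication.DEACTIVATE:
        active.discard(model_id)
    return active

print(handle(Indication.ACTIVATE, set(), "model-A"))   # {'model-A'}
```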
  • the second function 345 may transmit, to the fourth function 355, a request that the fourth function 355 transmit the AI/ML model to the third function 350.
  • in this way, the fourth function 355 can transmit the (re) trained AI/ML model to the third function 350 for future use, and the retrained/updated AI/ML model can provide more accurate inference results than the currently used AI/ML model at the third function 350. Therefore, the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model.
  • the third function 350 may transmit the inference results to the second function 345.
  • the second function 345 can determine whether the performance level of the AI/ML model is below a threshold level based on the inference results received from the third function 350. If so, the second function 345 can request the first function 340 to retrain the AI/ML model accordingly. In this sense, the third function 350 can help the second function 345 to facilitate the first function 340 to provide a more accurate retrained/updated AI/ML model based on the inference results of the current AI/ML model.
  • the retrained/updated AI/ML model can, in turn, provide more accurate inference results as compared with the currently used AI/ML model at the third function 350, thus the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model.
  • the third function 350 may receive, from the second function 345, a switching indication to switch from the AI/ML model to another AI/ML model. Alternatively or in addition, the third function 350 may receive, from the second function 345, a fallback indication to apply a non-AI/ML model instead of the AI/ML model. Alternatively or in addition, the third function 350 may receive, from the second function 345, an activating indication to activate one or more of a plurality of candidate AI/ML models. Alternatively or in addition, the third function 350 may receive, from the second function 345, a deactivating indication to deactivate one or more of the plurality of candidate AI/ML models. In this way, the third function 350 can turn to use a desired AI/ML model indicated by the second function 345, improving the flexibility in management on the third function 350 and further the whole AI/ML functional framework 300.
  • the third function 350 may receive the AI/ML model from the fourth function 355.
  • the third function 350 can use the retrained/updated AI/ML model to provide more accurate inference results as compared with the currently used AI/ML model at the third function 350, thus the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model.
  • the AI/ML functional framework 300 may further comprise a fifth function configured to collect non-sensing data.
  • the first function 340 can obtain a more accurate AI/ML model, and the second function 345 and third function 350 can also work more accurately.
  • the at least one function 360 may further comprise a sixth function configured to collect radio frequency (RF) sensing data, a seventh function configured to collect non-RF sensing data, and an eighth function configured to obtain fused data based on the RF sensing data and the non-RF sensing data.
  • RF sensing may be 3rd generation partnership project (3GPP) defined RF sensing or non-3GPP defined RF sensing. In this way, sensing data can be collected through RF sensing, for example, either 3GPP defined RF sensing or non-3GPP defined RF sensing.
  • the seventh function may be further configured to collect the non-RF sensing data using at least one of light detection and ranging (LIDAR) , non-3GPP defined RF sensing, wireless fidelity (WiFi) sensing, camera (s) , video (s) , or sensor (s) .
  • the non-RF sensing data can be collected in various ways like LIDAR, non-3GPP defined RF sensing, WiFi sensing, camera (s) , video (s) , or sensor (s) . Therefore, it becomes easier and faster to obtain enough non-RF sensing data to be used by the first function, second function and third function.
  • the first function 340 may further receive first input data from at least one of the fifth function, the sixth function, the seventh function or the eighth function.
  • a (re) trained AI/ML model can be (re) trained with the first input data as the training data. Since the first input data is from at least one of the fifth function, the sixth function, the seventh function or the eighth function, which implies the first input data may include sensing data, AI/ML functionalities of the AI/ML functional framework 300 can be enhanced by the sensing data.
  • the training process of the (re) trained AI/ML model can be shortened and the accuracy of the (re) trained AI/ML model can be more accurate.
  • the second function 345 may receive second input data from at least one of the fifth function, the sixth function, the seventh function or the eighth function. In this way, the second function 345 can perform management of the AI/ML model based on the second input data. Since the second input data is from at least one of the fifth function, the sixth function, the seventh function or the eighth function, which implies the second input data may include sensing data, AI/ML functionalities of the AI/ML functional framework 300 can be enhanced by the sensing data.
  • the management of the AI/ML model can be more efficient and accurate.
  • the third function 350 may receive third input data from at least one of the fifth function, the sixth function, the seventh function or the eighth function. In this way, the third function can perform inference of the AI/ML model based on the third input data. Since the third input data is from at least one of the fifth function, the sixth function, the seventh function or the eighth function, which implies the third input data may include sensing data, thus AI/ML functionalities of the AI/ML functional framework 300 can be enhanced by the sensing data.
  • the fifth function may transmit the non-sensing data to at least one of the first function 340, the second function 345 or the third function 350.
  • the non-sensing data can be utilized by the first function 340 to train the AI/ML model to obtain a more accurate AI/ML model.
  • the non-sensing data can help the second function 345 to manage the AI/ML model more reliably and help the third function 350 to perform inference of the AI/ML model more accurately and thus reliably.
  • the sixth function may transmit the RF sensing data to at least one of the first function 340, the second function 345 or the third function 350.
  • the RF sensing data can be utilized by the first function 340 to train the AI/ML model to obtain a more accurate AI/ML model.
  • the RF sensing data can help the second function 345 to manage the AI/ML model more reliably and help the third function 350 to perform inference of the AI/ML model more accurately and thus reliably.
  • the RF sensing data can facilitate the first function 340, second function 345 and third function 350 to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework 300.
  • the seventh function may transmit the non-RF sensing data to at least one of the first function 340, the second function 345 or the third function 350.
  • the non-RF sensing data can be utilized by the first function 340 to train the AI/ML model to obtain a more accurate AI/ML model.
  • the non-RF sensing data can help the second function 345 to manage the AI/ML model more reliably and help the third function 350 to perform inference of the AI/ML model more accurately and thus reliably.
  • the non-RF sensing data can facilitate the first function 340, second function 345 and third function 350 to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework 300.
  • the eighth function may receive the RF sensing data from the sixth function and receive the non-RF sensing data from the seventh function. Alternatively or in addition, the eighth function may perform data processing on the received RF sensing data and non-RF sensing data to obtain the fused data. In this way, fused data can be obtained that is more accurate than either the RF sensing data or the non-RF sensing data alone, and is smaller in quantity than the sum of the RF sensing data and the non-RF sensing data.
  • the eighth function may transmit the fused data to at least one of the first function 340, the second function 345 or the third function 350.
  • the fused data then can be utilized by the first function 340 to train the AI/ML model to obtain a more accurate AI/ML model.
  • the fused data can help the second function 345 to manage the AI/ML model more reliably and help the third function 350 to perform inference of the AI/ML model more accurately and thus reliably.
  • the fused data can facilitate the first function 340, second function 345 and third function 350 to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework 300.
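  • one simple (assumed, illustrative) way the eighth function could realize such fusion is an inverse-variance weighted average of per-target estimates, which yields a single dataset smaller than the two inputs combined:

```python
# One simple, assumed way the eighth function could fuse RF and non-RF sensing
# data: an inverse-variance weighted average of per-target position estimates.
# All names, values and the fusion rule are illustrative assumptions.
import numpy as np

def fuse(rf_positions, non_rf_positions, rf_var=1.0, non_rf_var=4.0):
    """Weight each source by the inverse of its (assumed) error variance; the
    fused output is a single dataset, smaller than the two inputs combined."""
    rf = np.asarray(rf_positions, dtype=float)
    non_rf = np.asarray(non_rf_positions, dtype=float)
    w_rf, w_non = 1.0 / rf_var, 1.0 / non_rf_var
    return (w_rf * rf + w_non * non_rf) / (w_rf + w_non)

rf_data = [[10.2, 4.9], [3.1, 7.8]]    # e.g., from 3GPP defined RF sensing
cam_data = [[10.6, 5.3], [2.7, 8.1]]   # e.g., from a camera (non-RF sensing)
print(fuse(rf_data, cam_data))
```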
  • the at least one function may comprise a ninth function configured to collect the sensing data, and a tenth function configured to obtain fused data based on the non-sensing data and the sensing data.
  • the fused data can be obtained which is more accurate than either one of the non-sensing data and the sensing data, and is less in quantity than the sum of the non-sensing data and the sensing data.
  • the at least one function 360 may comprise at least one of an eleventh function configured to obtain a sensing model or a sensing result, a twelfth function configured to perform management of the sensing model or sensing result, or a thirteenth function configured to assist communication or determine an event based on the sensing model or sensing result. In this way, a sensing model can be obtained and used to assist communication or determine an event based on the sensing model.
  • the at least one function 360 may further comprise a fourteenth function configured to store the sensing model or the sensing result.
  • the sensing model can be stored in the fourteenth function which is separate from the fourth function 355, and the operations involving the storage and retrieval of the AI/ML model and the sensing model can be performed separately in a decoupled manner.
  • the first function 340 may further receive first input data from at least one of the fifth function, the ninth function or the tenth function.
  • a (re) trained AI/ML model can be (re) trained with the first input data as the training data. Since the first input data is from at least one of the fifth function, the ninth function or the tenth function, which implies the first input data may include non-sensing data and sensing data, AI/ML functionalities of the AI/ML functional framework can be enhanced by the non-sensing data and the sensing data.
  • the training process of the (re) trained AI/ML model can be shortened and the accuracy of the (re) trained AI/ML model can be more accurate.
  • the second function 345 may further receive second input data from at least one of the fifth function, the ninth function or the tenth function. In this way, the second function 345 can perform management of the AI/ML model based on the second input data. Since the second input data is from at least one of the fifth function, the ninth function or the tenth function, which implies the second input data may include non-sensing data and sensing data, AI/ML functionalities of the AI/ML functional framework can be enhanced by the non-sensing data and the sensing data. At the same time, with the large-quantity sensing data, the management of the AI/ML model can be more efficient and accurate.
  • the third function may further receive third input data from at least one of the fifth function, the ninth function or the tenth function. In this way, the third function can perform inference of the AI/ML model based on the third input data. Since the third input data is from at least one of the fifth function, the ninth function or the tenth function, which implies the third input data may include non-sensing data and sensing data, where the non-sensing data can be utilized by the third function to perform inference of the AI/ML model more accurately and reliably.
  • the fifth function may transmit the non-sensing data to at least one of the first function 340, the second function 345 or the third function 350, and at least one of the eleventh function, the twelfth function or the thirteenth function.
  • the non-sensing data can be utilized by the first function 340 to train the AI/ML model to obtain a more accurate AI/ML model.
  • the non-sensing data can help the second function 345 to manage the AI/ML model more reliably and help the third function 350 to perform inference of the AI/ML model more accurately and thus reliably.
  • the non-sensing data can be utilized by the eleventh function to train the sensing model to obtain a more accurate sensing model.
  • the non-sensing data can help the twelfth function to manage the sensing model more reliably and help the thirteenth function to perform inference of the sensing model more accurately and thus reliably.
  • the ninth function may transmit the sensing data to at least one of the first function 340, the second function 345 or the third function 350, and at least one of the eleventh function, the twelfth function or the thirteenth function.
  • the sensing data can be utilized by the first function 340 to train the AI/ML model to obtain a more accurate AI/ML model.
  • the sensing data can help the second function 345 to manage the AI/ML model more reliably and help the third function 350 to perform inference of the AI/ML model more accurately and thus reliably.
  • the sensing data can facilitate the first function 340, second function 345 and third function 350 to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework. Further, the sensing data can be utilized by the eleventh function to train the sensing model to obtain a more accurate sensing model. At the same time, the sensing data can help the twelfth function to manage the sensing model more reliably and help the thirteenth function to perform inference of the sensing model more accurately and thus reliably.
  • the tenth function may receive the non-sensing data from the fifth function. Alternatively or in addition, the tenth function may receive the sensing data from the ninth function. Alternatively or in addition, the tenth function may perform data processing on the received non-sensing data and sensing data to obtain the fused data. In this way, fused data can be obtained that is more accurate than either the non-sensing data or the sensing data alone, and is smaller in quantity than the sum of the non-sensing data and the sensing data.
  • the tenth function may transmit the fused data to at least one of the first function 340, the second function 345 or the third function 350, and at least one of the eleventh function, the twelfth function or the thirteenth function.
  • the fused data then can be utilized by the first function 340 to train the AI/ML model to obtain a more accurate AI/ML model.
  • the fused data can help the second function 345 to manage the AI/ML model more reliably and help the third function 350 to perform inference of the AI/ML model more accurately and thus reliably.
  • the fused data can facilitate the first function 340, second function 345 and third function 350 to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework. Further, the fused data then can be utilized by the eleventh function to train the sensing model to obtain a more accurate sensing model. At the same time, the fused data can help the twelfth function to manage the sensing model more reliably and help the thirteenth function to perform inference of the sensing model more accurately and thus reliably.
  • the eleventh function may be further configured to perform data processing based on fourth input data obtained from at least two of the fifth function, the ninth function or the tenth function. In this way, based on the fourth input data as the training data for the sensing model, the eleventh function can train the sensing model more accurately.
  • the model training of the at least one of a sensing model, sensing sub-model, sensing functionality or sensing sub-functionality may comprise at least one of environment reconstruction, channel reconstruction, target reconstruction, digital twin or object detection. In this way, the sensing model can be trained more accurately.
  • the twelfth function may be further configured to perform control of the model training of the at least one of sensing model, sensing sub-model, sensing functionality or sensing sub-functionality. In some cases, the twelfth function may be further configured to perform control of the inference of the sensing model, or monitor output of the sensing model. In this way, the twelfth function can facilitate the eleventh function to provide a more accurate sensing model, which can produce more accurate sensing inference results, thus the reliability of the sensing model can be improved.
  • the thirteenth function may be further configured to perform data preparation based on sixth input data obtained from at least one of the fifth function, the ninth function or the tenth function. In this way, data used in processing by the thirteenth function can be more organized as compared with the case where the sixth input data is used in the processing without data preparation, thus the processing by the thirteenth function can be more accurate with a higher speed.
  • the eleventh function may receive the fourth input data from at least one of the fifth function, the ninth function or the tenth function. Alternatively or in addition, the eleventh function may receive from the twelfth function, a performance level of the sensing model and a request to retrain the sensing model. Alternatively or in addition, the eleventh function may receive the sensing inference results from the thirteenth function. Alternatively or in addition, the eleventh function may receive sensing information from the twelfth function. Alternatively or in addition, the eleventh function may transmit the trained or retrained sensing model to the fourteenth function.
  • the eleventh function can provide a more accurate (re) trained sensing model based on the fourth input data and/or the performance level of the sensing model and/or the sensing information and/or the sensing inference results.
  • the (re) trained sensing model can, in turn, provide more accurate inference results, thus the reliability of the (re) trained sensing model can be improved.
  • the eleventh function may receive the inference results from the third function 350.
  • the inference results of the AI/ML model can help the eleventh function to improve the accuracy and performance of the (re) trained sensing model and further the AI/ML functional framework.
  • the twelfth function may receive fifth input data from at least one of the fifth function, the ninth function or the tenth function. Alternatively or in addition, the twelfth function may receive the sensing inference results from the thirteenth function. In this way, the twelfth function can facilitate the eleventh function to provide a more accurate sensing model, which in turn can provide more accurate sensing inference results, thus the reliability of the sensing model can be improved.
  • the twelfth function may determine that a performance level of the sensing model is below a threshold level based on the sensing inference results received from the thirteenth function. Alternatively or in addition, based on determining that the performance level is below the threshold level, the twelfth function may transmit, to the eleventh function, the performance level of the sensing model and a request to retrain the sensing model. In this way, the twelfth function can request the eleventh function to retrain the sensing model in response to the performance level of the currently used sensing model becoming below a threshold level.
  • the twelfth function can facilitate the eleventh function to provide a more accurate retrained/updated sensing model based on the sensing inference results of the current sensing model.
  • the retrained/updated sensing model can, in turn, provide more accurate sensing inference results, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
  • the twelfth function may transmit sensing information to the eleventh function.
  • the twelfth function may transmit, to the thirteenth function, a switching indication to switch from the sensing model to another sensing model.
  • the twelfth function may transmit, to the thirteenth function, a fallback indication to apply a non-sensing model instead of the sensing model.
  • the twelfth function may transmit, to the thirteenth function, an activating indication to activate one or more of a plurality of candidate sensing models.
  • the twelfth function may transmit, to the thirteenth function, a deactivating indication to deactivate one or more of the plurality of candidate sensing models.
  • the twelfth function can provide the sensing information to the eleventh function to obtain a more accurate (re) trained sensing model based on the sensing information.
  • the retrained/updated sensing model can, in turn, provide more accurate sensing inference results, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
  • the twelfth function can change/switch/ (de) select a desired sensing model for future use, improving the flexibility in management on the thirteenth function and further the whole AI/ML functional framework.
  • the twelfth function may transmit, to the fourteenth function, a request that the fourteenth function transmits the sensing model to the thirteenth function.
  • the twelfth function can request the fourteenth function to transmit the (re) trained sensing model to the thirteenth function for future use, while the retrained/updated sensing model can provide more accurate sensing inference results than the currently used sensing model at the thirteenth function. Therefore, the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
  • the twelfth function may receive the inference results from the third function 350.
  • the inference results can facilitate the twelfth function to improve sensing functionalities of the sensing model and further the AI/ML functional framework 300.
  • the thirteenth function may receive sixth input data from at least one of the fifth function, the ninth function or the tenth function. Alternatively or in addition, the thirteenth function may transmit the sensing inference results to the twelfth function. In this way, with the sixth input data, the thirteenth function can determine the sensing inference results, and send the sensing inference results to the twelfth function. With the sensing inference results, the twelfth function can determine whether the performance level of the sensing model is below a threshold level based on the sensing inference results received from the thirteenth function. If so, the twelfth function can request the eleventh function to retrain the sensing model accordingly.
  • the thirteenth function can help the twelfth function to facilitate the eleventh function to provide a more accurate retrained/updated sensing model based on the sensing inference results.
  • the retrained/updated sensing model can, in turn, provide more accurate sensing inference results as compared with the currently used sensing model at the thirteenth function, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
  • the thirteenth function may transmit the sensing inference results to at least one of the first function 340, the second function 345 or the third function 350. Alternatively or in addition, the thirteenth function may receive the sensing model from the fourteenth function. In this way, in the sense of sensing for AI/ML, the sensing inference results can facilitate the first function 340, the second function 345 or the third function 350 to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework.
  • the thirteenth function may receive, from the twelfth function, a switching indication to switch from the sensing model to another sensing model.
  • the thirteenth function may receive, from the twelfth function, a fallback indication to apply a non-sensing model instead of the sensing model.
  • the thirteenth function may receive, from the twelfth function, an activating indication to activate one or more of a plurality of candidate sensing models.
  • the thirteenth function may receive, from the twelfth function, a deactivating indication to deactivate one or more of the plurality of candidate sensing models. In this way, the thirteenth function can turn to use a desired sensing model indicated by the twelfth function, improving the flexibility in management on the thirteenth function and further the whole AI/ML functional framework.
  • the fourteenth function may receive the trained sensing model from the eleventh function. Alternatively or in addition, based on receiving, from the twelfth function, a request that the fourteenth function transmits the sensing model to the thirteenth function, the fourteenth function may transmit the sensing model to the thirteenth function. In this way, the fourteenth function can provide the sensing model to the thirteenth function, such that the thirteenth function can use the (re) trained sensing model to provide more accurate sensing inference results as compared with the currently used sensing model at the thirteenth function, thus the reliability of the (re) trained sensing model can be improved as compared with the currently used sensing model.
  • the request may comprise at least one of a model ID of the requested sensing model, a sensing functionality ID for the requested sensing functionality, or a sensing performance requirement indicating the requested sensing performance.
  • a sensing model desired by the twelfth function to be used at the thirteenth function can be requested using various parameters, improving the flexibility and usability of the AI/ML functional framework.
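  • the request can be pictured as a small message structure; the field names below are illustrative assumptions mirroring the three parameters listed above:

```python
# Sketch of the request from the twelfth function to the fourteenth function;
# the field names below are illustrative assumptions mirroring the three
# parameters listed above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensingModelRequest:
    model_id: Optional[str] = None           # model ID of the requested sensing model
    functionality_id: Optional[str] = None   # ID of the requested sensing functionality
    performance_requirement: Optional[float] = None  # e.g., minimum accuracy

    def is_valid(self) -> bool:
        """At least one of the three parameters should be present."""
        return any(v is not None for v in (self.model_id, self.functionality_id,
                                           self.performance_requirement))

req = SensingModelRequest(functionality_id="environment-reconstruction",
                          performance_requirement=0.95)
assert req.is_valid()
```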
  • the AI/ML functional framework 300 may further comprise a fifteenth function configured to perform sensing inference to obtain a sensing result.
  • the first function 340 may be further configured to perform model training of at least one of a sensing model, a sensing sub-model, a sensing functionality or a sensing sub-functionality.
  • the second function 345 may be further configured to perform management of the at least one of the sensing model, sensing sub-model, sensing functionality or sensing sub-functionality.
  • the first function 340 can not only train an AI/ML model, but can also train a sensing model.
  • the second function 345 can monitor not only the AI/ML model but also the sensing model.
  • the fifteenth function, which is in charge of sensing inference of the sensing model, is separate from the third function 350, which is in charge of model inference of the AI/ML model.
  • the at least one function 360 may further comprise a sixteenth function configured to obtain fused data.
  • the fused data may be obtained by processing on non-sensing data and sensing data. In this way, the fused data, which is less in quantity than the sum of the non-sensing data and the sensing data, can be used in future processing to improve data accuracy and decrease data processing volume.
  • the first function 340 may be further configured to perform data preparation based on seventh input data obtained from the sixteenth function. In this way, data used in processing by the first function 340 can be more organized as compared with the case where the seventh input data is used in the processing without data preparation, thus the processing by the first function 340 can be more accurate with a higher speed.
  • the second function 345 may be further configured to perform control of the model training of the at least one of sensing model, sensing sub-model, sensing functionality or sensing sub-functionality. Alternatively or in addition, the second function 345 may perform control of the sensing inference of the sensing model. Alternatively or in addition, the second function 345 may monitor output of the sensing model. In this way, the second function 345, which performs management of the AI/ML model, can also perform management of the sensing model (including model training and inference of the sensing model) .
  • the first function 340 may further receive the seventh input data from the sixteenth function. Alternatively or in addition, the first function 340 may receive, from the second function 345, a performance level of the sensing model and a request to retrain the sensing model. Alternatively or in addition, the first function 340 may receive sensing information from the second function 345. Alternatively or in addition, the first function 340 may transmit the trained or retrained sensing model to the fourth function 355. In this way, the first function 340 can provide a more accurate (re) trained sensing model based on the seventh input data and/or the performance level of the sensing model and/or the sensing information. The (re) trained sensing model can, in turn, provide more accurate inference results, thus the reliability of the (re) trained sensing model can be improved.
  • the second function 345 may further receive eighth input data from the sixteenth function. Alternatively or in addition, the second function 345 may receive the sensing inference results from the fifteenth function. In this way, the second function 345 can facilitate the first function 340 to provide a more accurate retrained/updated AI/ML model and/or sensing model based on the eighth input data and/or the sensing inference results of the current sensing model.
  • the sensing inference results can facilitate the second function 345 to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework. More specifically, the retrained/updated AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the AI/ML model can be improved. Also, the retrained/updated sensing model can, in turn, provide more accurate inference results, thus the reliability of the sensing model can be improved.
  • the second function 345 may further determine that a performance level of the sensing model is below a threshold level based on the sensing inference results received from the fifteenth function. Alternatively or in addition, based on determining that the performance level is below the threshold level, the second function 345 may transmit, to the first function 340, the performance level of the sensing model and a request to retrain the sensing model. In this way, the second function 345 can request the first function 340 to retrain the sensing model in response to the performance level of the currently used sensing model becoming below a threshold level. In this sense, the second function 345 can facilitate the first function 340 to provide a more accurate retrained/updated sensing model based on the inference results of the current sensing model. The retrained/updated sensing model can, in turn, provide more accurate inference results, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
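  • As a non-limiting illustration of the above monitoring step, the following Python sketch shows how a management function might compare the performance level of a sensing model against a threshold and request retraining; all names (evaluate_performance, monitor_sensing_model, request_retraining, THRESHOLD_LEVEL) are hypothetical and are not defined by this disclosure.

```python
# Hypothetical sketch only; the disclosure does not define this API.

THRESHOLD_LEVEL = 0.9  # assumed minimum acceptable performance level

def evaluate_performance(inference_results, ground_truth):
    """Estimate a performance level, e.g., the fraction of correct results."""
    correct = sum(1 for r, g in zip(inference_results, ground_truth) if r == g)
    return correct / len(inference_results)

def monitor_sensing_model(inference_results, ground_truth, first_function):
    """If the performance level is below the threshold, send the level and a
    retraining request to the first function (model training)."""
    level = evaluate_performance(inference_results, ground_truth)
    if level < THRESHOLD_LEVEL:
        first_function.request_retraining(model="sensing_model",
                                          performance_level=level)
    return level
```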
  • the second function 345 may further transmit sensing information to the first function 340. Alternatively or in addition, the second function 345 may transmit, to the fifteenth function, a switching indication to switch from the sensing model to another sensing model. Alternatively or in addition, the second function 345 may transmit, to the fifteenth function, a fallback indication to apply a non-sensing model instead of the sensing model. Alternatively or in addition, the second function 345 may transmit, to the fifteenth function, an activating indication to activate one or more of a plurality of candidate sensing models. Alternatively or in addition, the second function 345 may transmit, to the fifteenth function, a deactivating indication to deactivate one or more of the plurality of candidate sensing models.
  • the second function 345 can provide the sensing information to the first function 340 to obtain a more accurate (re) trained sensing model based on the sensing information.
  • the retrained/updated sensing model can, in turn, provide more accurate sensing inference results, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
  • the second function 345 can change/switch/ (de) select a desired sensing model for future use, improving the flexibility in management of the fifteenth function and further of the whole AI/ML functional framework.
  • the second function 345 may further transmit, to the fourth function 355, a request that the fourth function 355 transmits the sensing model to the fifteenth function.
  • the second function 345 can cause the (re) trained sensing model to be transmitted to the fifteenth function for future use, while the retrained/updated sensing model can provide more accurate inference results than the currently used sensing model at the fifteenth function. Therefore, the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
  • the third function 350 may further receive ninth input data from the sixteenth function. In this way, the third function 350 can provide more accurate sensing inference result (s) based on the ninth input data.
  • the third function 350 may further transmit the inference results to the fifteenth function.
  • the third function 350 may receive the sensing result (or, sensing inference result) from the fifteenth function.
  • the inference results can facilitate the fifteenth function to improve sensing functionalities of the sensing model.
  • the sensing result can facilitate the third function 350 to improve inference results of the AI/ML model and further the AI/ML functional framework 300.
  • the fifteenth function may receive tenth input data from the sixteenth function.
  • the fifteenth function may receive the sensing model from the fourth function 355. In this way, with the tenth input data and the sensing model, the fifteenth function can perform sensing inference and obtain the sensing result.
  • the fifteenth function may further receive the inference results from the second function 345. Alternatively or in addition, the fifteenth function may transmit the sensing results to the second function 345. In this way, on one hand, in the sense of AI/ML for sensing, the inference results can facilitate the fifteenth function to improve sensing functionalities of the sensing model. On the other hand, in the sense of sensing for AI/ML, the sensing result can facilitate the second function 345 to improve management of the AI/ML model and further the AI/ML functional framework 300.
  • the fifteenth function may further receive, from the second function 345, a switching indication to switch from the sensing model to another sensing model. Alternatively or in addition, the fifteenth function may further receive, from the second function 345, a fallback indication to apply a non-sensing model instead of the sensing model. Alternatively or in addition, the fifteenth function may receive, from the second function 345, an activating indication to activate one or more of a plurality of candidate sensing models. Alternatively or in addition, the fifteenth function may receive, from the second function 345, a deactivating indication to deactivate one or more of the plurality of candidate sensing models. In this way, the fifteenth function can change/switch to a desired sensing model as indicated by the second function 345 for future use, improving the flexibility in managing the sensing model and further the whole AI/ML functional framework 300.
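  • A minimal sketch of how the fifteenth function might act on the switching, fallback, activating and deactivating indications is given below; the class and field names are assumptions made purely for illustration.

```python
# Hypothetical handler for model-lifecycle indications received from the
# second function (management); all names are illustrative only.

class SensingInferenceFunction:
    def __init__(self, candidate_models):
        self.candidate_models = candidate_models  # model_id -> model object
        self.active_ids = set()                   # currently activated models
        self.current_model = None
        self.use_non_sensing_fallback = False

    def handle_indication(self, kind, model_id=None):
        if kind == "switch":        # switch to another sensing model
            self.current_model = self.candidate_models[model_id]
            self.use_non_sensing_fallback = False
        elif kind == "fallback":    # apply a non-sensing model instead
            self.use_non_sensing_fallback = True
        elif kind == "activate":    # activate one of the candidate models
            self.active_ids.add(model_id)
        elif kind == "deactivate":  # deactivate one of the candidate models
            self.active_ids.discard(model_id)
```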
  • the AI/ML functional framework 300 may further comprise a seventeenth function configured to collect non-sensing data, and the at least one function 360 may further comprise an eighteenth function configured to collect sensing data. In this way, both non-sensing data and sensing data can be utilized in the AI/ML functional framework 300, thus accuracy and performance of the AI/ML model and the sensing model can be improved.
  • the sixteenth function may further receive the non-sensing data from the seventeenth function. Alternatively or in addition, the sixteenth function may further receive the sensing data from the eighteenth function. Alternatively or in addition, the sixteenth function may perform data processing on the received non-sensing data and sensing data to obtain the fused data. In this way, the fused data can be obtained by processing on the non-sensing data from the seventeenth function and the sensing data from the eighteenth function. With the fused data, which is less in quantity than the sum of the non-sensing data and the sensing data, data accuracy can be improved and data processing volume can be decreased.
  • the sixteenth function may further transmit the fused data to at least one of the first function 340, the second function 345, the third function 350 or the fifteenth function.
  • the fused data then can be utilized by the first function 340 to train the AI/ML model and/or the sensing model to obtain a more accurate AI/ML model and/or sensing model.
  • the fused data can help the second function 345 to manage the AI/ML model and/or the sensing model more reliably, help the third function 350 to perform inference of the AI/ML model more accurately and thus reliably, and help the fifteenth function to perform inference of the sensing model more accurately and thus reliably.
  • the AI/ML functional framework 300 may further comprise at least two of: a nineteenth function configured to provide ground-truth sensing data, a twentieth function configured to provide non-ground-truth sensing data, or a twenty-first function configured to provide non-sensing ground-truth data.
  • the sixteenth function may further receive at least two of: ground-truth sensing data from the nineteenth function, the non-ground-truth sensing data from the twentieth function, or the non-sensing ground-truth data from the twenty-first function.
  • the sixteenth function may perform data processing on the received data to obtain the fused data. In this way, the fused data then can be utilized by the first function 340 to train the AI/ML model and/or the sensing model to obtain a more accurate AI/ML model and/or sensing model.
  • the fused data can help the second function 345 to manage the AI/ML model and/or the sensing model more reliably, help the third function 350 to perform inference of the AI/ML model more accurately and thus reliably, and help the fifteenth function to perform sensing inference of the sensing model more accurately and thus reliably.
  • the fused data can facilitate the first function 340, second function 345 and third function 350 to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework 300.
  • the sixteenth function may further transmit the fused data to at least one of the first function 340, the second function 345, the third function 350 or the fifteenth function.
  • the first function 340 can utilize the fused data to train the AI/ML model and/or the sensing model to obtain a more accurate AI/ML model and/or sensing model.
  • the second function 345 can utilize the fused data to manage the AI/ML model and/or the sensing model more reliably.
  • the third function 350 can utilize the fused data to perform inference of the AI/ML model more accurately and thus reliably.
  • the fifteenth function can utilize the fused data to perform sensing inference of the sensing model more accurately and thus reliably.
  • the first function 340, second function 345 and third function 350 can utilize the fused data to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework 300.
  • the data processing may comprise at least one of data pre-processing, data cleaning, data formatting, data transformation, or data integration.
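  • The listed data processing operations could be chained as in the following illustrative Python sketch; the concrete steps inside each stage are placeholders, since the disclosure does not fix any particular implementation.

```python
# Hypothetical data-processing pipeline; each stage stands for one of the
# operations listed above, with placeholder logic.

def pre_process(records):
    return [r for r in records if r is not None]       # drop missing entries

def clean(records):
    return [r.strip() if isinstance(r, str) else r for r in records]

def format_records(records):
    return [str(r) for r in records]                   # unify representation

def transform(records):
    return [r.lower() for r in records]                # example transformation

def integrate(*sources):
    merged = []
    for source in sources:                             # combine data sources
        merged.extend(source)
    return merged

def fuse(non_sensing_data, sensing_data):
    """Produce fused data from non-sensing data and sensing data."""
    combined = integrate(non_sensing_data, sensing_data)
    return transform(format_records(clean(pre_process(combined))))
```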
  • At least one of the first function 340, the second function 345, the third function 350, the fourth function 355, the fifth function, the sixth function, the seventh function, the eighth function, the ninth function, the tenth function, the eleventh function, the twelfth function, the thirteenth function, the fourteenth function, the fifteenth function, the sixteenth function, the seventeenth function, the eighteenth function, the nineteenth function, the twentieth function or the twenty-first function may be implemented in one of a terminal device, an access network device, a core network device, or a third party device.
  • each function may be implemented in one of the terminal device, access network device, core network device or third party device in a “distributed” manner, improving the flexibility of implementation and enabling dynamic implementation with various modules where each module may, by itself or in combination with other module (s) , implement one or more functions as described here.
  • the AI/ML functional framework 300 for integrated AI and sensing can be defined for high-accuracy purposes to facilitate communication.
  • FIG. 4 illustrates a schematic diagram of an example AI/ML functional framework 400 and the flowchart of operations in the AI/ML functional framework 400 in accordance with some embodiments of the present disclosure.
  • the AI/ML functional framework 400 as shown in FIG. 4 includes 8 parts, i.e., a model training function 440, a management function 445, an inference function 450, a model storage function 455, an RF sensing data collection 460-1, a non-RF sensing data collection 460-2, a non-sensing data collection 465 and a data fusion function 470.
  • the model training function 440, management function 445, inference function 450 and model storage function 455 may each be an example of the first function 340, second function 345, third function 350 and fourth function 355 as illustrated in FIG. 3, respectively.
  • the RF sensing data collection 460-1, non-RF sensing data collection 460-2 and data fusion function 470 may each be an example of the function 375 which is configured to operate based on sensing data as illustrated in FIG. 3, respectively.
  • the non-sensing data collection function 465 is a function that provides input data (in FIG. 4, the non-sensing data 401) to the model training function 440, management function 445 and inference function 450. The input data is collected by non-sensing schemes, e.g. by measurement of reference signal (s).
  • the non-sensing data collection function 465 may include interfaces for training data, monitoring data and inference data.
  • Training data is data needed as input for the model training function 440, e.g. data for model training, which may include assistance information.
  • Monitoring data is data needed as input for the management function 445.
  • Inference data is data needed as input for the inference function 450.
  • the RF Sensing data collection 460-1 is a function that provides input data (here, in FIG. 4, sensing data 403) to the model training function 440, management function 445 and inference function 450.
  • Such input data is collected by RF sensing.
  • RF sensing means that a transmitter sends an RF signal and obtains the surrounding information by receiving and processing either this RF signal or the echoed (reflected) RF signal.
  • the RF sensing can be 3GPP defined RF sensing and/or non-3GPP defined RF sensing.
  • for 3GPP defined RF sensing, the transmitter sends a 3GPP defined RF signal and the receiver detects the sensing results.
  • for non-3GPP defined RF sensing, the transmitter sends a non-3GPP defined RF signal, e.g. radar sensing, WIFI sensing.
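  • As a simple worked illustration of echo-based RF sensing, the sketch below derives the range of a reflecting object from the measured echo delay using the textbook round-trip relation d = c·τ/2; it is an assumption for illustration only and not part of the claimed framework.

```python
# Round-trip ranging: the transmitter sends an RF signal, receives the
# echoed (reflected) signal, and infers distance from the echo delay.

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def range_from_echo_delay(echo_delay_s: float) -> float:
    """Distance to the reflecting object: d = c * tau / 2."""
    return SPEED_OF_LIGHT * echo_delay_s / 2.0

# Example: an echo delay of 1 microsecond corresponds to roughly 150 m.
print(range_from_echo_delay(1e-6))  # ~149.9 m
```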
  • the RF Sensing data collection 460-1 may include interfaces for training data, monitoring data, inference data as well as input to data fusion function 470.
  • Training data is data needed as input for the model training function 440, e.g. data for model training, which may include assistance information.
  • Monitoring data is data needed as input for the management function 445.
  • Inference data is data needed as input for the inference function 450.
  • the non-RF sensing data collection function 460-2 is a function that provides input data to the model training function 440, management function 445 and the data fusion function 470.
  • the non-RF sensing data collection function 460-2 may also provide input data to the inference function 450 (not shown in FIG. 4) .
  • Such input data is collected by non-RF sensing.
  • the sensing results are obtained not by radio frequency signal detection as in the RF sensing data collection function 460-1, but by, e.g., LIDAR (light detection and ranging), camera, video, sensor, etc.
  • the non-RF sensing may also include non-3GPP defined RF sensing, e.g. by WIFI sensing.
  • the non-RF sensing data collection function 460-2 may include interfaces for training data, monitoring data, inference data as well as input to data fusion function 470.
  • the training data is data needed as input for the model training function 440, e.g. data for model training, which may include assistance information.
  • the monitoring data is data needed as input for the management function 445.
  • the inference data is data needed as input for the inference function 450.
  • RF sensing data may comprise only 3GPP RF sensing data, and non-3GPP RF sensing data may be regarded as non-RF sensing data.
  • non-3GPP RF sensing data may also be regarded as RF sensing data, i.e., the RF sensing data includes both 3GPP RF sensing data and non-3GPP RF sensing data.
  • the data fusion function 470 is a function that provides input data to the model training function 440, management function 445 and inference function 450. It should be noted that the data fusion function 470 could also be called a data collection function.
  • the data fusion function 470 receives input from the RF sensing data collection function 460-1 and the non-RF sensing data collection function 460-2.
  • the data fusion function 470 is responsible for data processing. Data processing may include data pre-processing and cleaning, formatting, and transformation, integrating multiple data sources to produce more useful information than that provided by any individual data source (here, the RF sensing data collection function 460-1 and the non-RF sensing data collection function 460-2). For example, the data fusion function 470 may combine RF sensing data from the RF sensing data collection function 460-1 and non-RF sensing data from the non-RF sensing data collection function 460-2 such that the resulting information has less uncertainty than when the RF sensing data or non-RF sensing data is used individually.
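  • One common way (assumed here purely for illustration) to combine two estimates of the same quantity so that the result has less uncertainty than either input is inverse-variance weighting, sketched below; the disclosure does not mandate any particular fusion rule.

```python
# Hypothetical fusion of an RF sensing estimate and a non-RF sensing
# estimate (e.g., of a target's range) by inverse-variance weighting.

def inverse_variance_fusion(x_rf, var_rf, x_non_rf, var_non_rf):
    w_rf, w_non_rf = 1.0 / var_rf, 1.0 / var_non_rf
    fused = (w_rf * x_rf + w_non_rf * x_non_rf) / (w_rf + w_non_rf)
    fused_var = 1.0 / (w_rf + w_non_rf)  # smaller than both input variances
    return fused, fused_var

# Example: RF estimate 100.0 m (variance 4.0) and LIDAR estimate 101.0 m
# (variance 1.0) fuse to 100.8 m with variance 0.8.
print(inverse_variance_fusion(100.0, 4.0, 101.0, 1.0))
```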
  • the data fusion function 470 may include interfaces for training data, monitoring data and inference data.
  • the training data is data needed as input for the model training function 440, e.g. data for model training, which may include assistance information.
  • the monitoring data is data needed as input for the management function 445.
  • the inference data is data needed as input for the inference function 450.
  • the model training function 440 is a function that performs the ML model training, validation, and testing, which may generate model performance metrics as part of the model testing procedure.
  • the model training function 440 is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation). Interaction between the model training function and other functions includes training data and the trained/updated model.
  • the model training function 440 may receive training data from at least one of the non-sensing data collection function 465, RF sensing data collection function 460-1, non-RF sensing data collection function 460-2 or data fusion function 470.
  • the model training function 440 may send a trained AI model (e.g., trained by distributed learning) to the model storage function 455. Alternatively, the model training function 440 may deliver an updated AI model to the model storage function 455.
  • the management function 445 is a function that is responsible for performing model control on the model training function 440 and the inference function 450, and it monitors the model output (i.e., inference output 410). Based on the model output, the management function 445 may determine whether the model qualities are applicable. If it is determined that the model qualities are no longer applicable, the management function 445 may request the model training function 440 to re-train the model, and will indicate to the inference function 450 to switch the model.
  • the management function 445 receives the monitoring data from the data collection function (i.e., receives data from the non-sensing data collection function 465, RF sensing data collection function 460-1, non-RF sensing data collection function 460-2 and data fusion function 470).
  • the management function 445 receives the ground truth data from the data collection function.
  • the management function 445 may compare the AI/ML model output and the ground truth. After comparing the AI/ML model output and the ground truth, the model performance can be evaluated.
  • the management function 445 also receives the output of the model inference function; the output includes the performance of the model inference.
  • a performance feedback/re-model request may be applied.
  • when the management function 445 observes that the performance of the current AI model is not good enough, the management function 445 will send the current AI performance to the model training function 440, including the current AI model output and its accuracy, etc. In addition, the management function 445 also requests the model training function 440 to retrain the model, so as to get an updated AI model.
  • when the management function 445 observes that the performance of the current AI model is not good enough, it can send model switching signalling to the inference function 450 to switch to another AI model, or send fallback signalling to indicate to the inference function 450 to use a non-AI mode.
  • the management function 445 can indicate to the inference function 450 which AI model to use, and activate or de-activate one or multiple of the candidate AI models.
  • the management function 445 may send an AI model transfer request to the model storage function 455 to request a model for the inference function 450.
  • the request may be an initial model transfer request for an initially trained AI/ML model or an updated model transfer request for an updated AI/ML model obtained by re-training an existing model.
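  • The model transfer request could be represented as a small message carrying its type, as in the hypothetical sketch below; the field names are assumptions made for illustration, not signalling defined by this disclosure.

```python
# Hypothetical representation of the model transfer request 406 sent from
# the management function 445 to the model storage function 455.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelTransferRequest:
    request_type: str                 # "initial" or "updated"
    model_id: Optional[str] = None    # set when a specific model is requested
    destination: str = "inference_function_450"

initial_request = ModelTransferRequest(request_type="initial")
updated_request = ModelTransferRequest(request_type="updated", model_id="m-42")
```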
  • the inference function 450 is a function that provides inference results.
  • the inference function 450 is also responsible for performing actions according to inference results. For example, the inference function 450 may trigger or perform corresponding actions according to inference decision and it may trigger actions directed to other entities or to itself.
  • the inference function 450 may also be responsible for data preparation (e.g. data pre-processing and cleaning, formatting, and transformation) based on inference data delivered by a data collection function (i.e., receive data from at least one of the non-sensing data collection function 465, RF sensing data collection function 460-1, non-RF sensing data collection function 460-2 or data fusion function 470) .
  • the inference function 450 may send inference output 410 to management function 445 to monitor the performance of the AI model.
  • the model storage function 455 is a function that stores the models.
  • the storage location can be within RAN (e.g. BS and/or UE side) , or outside RAN (e.g. core network or the third party) . It receives the model from the model training function 440.
  • the model stored at the model storage function 455 may be the first trained model, or the re-trained/updated model.
  • the model storage function 455 may receive a model transfer request 406 from the management function 445. In response to reception of the model transfer request 406, the model storage function 455 will send the corresponding model to the inference function 450.
  • the model transfer request 406 may indicate the requested model ID, then the model storage function 455 may send the model with the requested ID to the inference function 450.
  • the model transfer request 406 may indicate the requested/desired AI functionality ID and/or AI performance requirement (e.g. AI accuracy, AI complexity, AI model size) , then the model storage function may deliver a model satisfying the indicated AI functionality and the performance requirement.
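  • A possible lookup at the model storage function, selecting a model either by the requested model ID or by AI functionality ID plus performance requirements, is sketched below; the record fields are illustrative assumptions only.

```python
# Hypothetical model selection at the model storage function 455. A request
# carries either a model ID, or a functionality ID with requirements on
# accuracy, complexity and model size.

def select_model(stored_models, model_id=None, functionality_id=None,
                 min_accuracy=0.0, max_complexity=float("inf"),
                 max_size=float("inf")):
    """Return the first stored model satisfying the request, or None."""
    for model in stored_models:
        if model_id is not None:
            if model["id"] == model_id:
                return model
            continue
        if (model["functionality_id"] == functionality_id
                and model["accuracy"] >= min_accuracy
                and model["complexity"] <= max_complexity
                and model["size"] <= max_size):
            return model
    return None
```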
  • for an AI function in FIG. 4, it can be located at a UE, BS, core network, or the 3rd party. Different AI functions may be located at the same physical entity or at different physical entities.
  • an AI/ML functional framework (including the signaling interface among functions) can be defined to support integrated AI and sensing, including sensing to improve AI performance and AI to improve sensing performance.
  • FIG. 5 illustrates a schematic diagram of another example AI/ML functional framework 500 and the flowchart of operations in the AI/ML functional framework 500 in accordance with some embodiments of the present disclosure.
  • the AI/ML functional framework 500 as shown in FIG. 5 includes 11 parts, i.e., a model training function 540, a management function 545, an inference function 550, a model storage function 555, a sensing data collection function 560, a non-sensing data collection function 565, a data fusion function 570, a sensing modeling function 582, a sensing management function 584, a sensing application function 586 and sensing results storage function 588.
  • the model training function 540, management function 545, inference function 550 and model storage function 555 may each be an example of the first function 340, second function 345, third function 350, fourth function 355 as illustrated in FIG. 3, respectively, and may be the same or similar to the model training function 440, the management function 445, the inference function 450 and the model storage function 455 as illustrated in FIG. 4, respectively.
  • the sensing data collection 560 and data fusion function 570 may each be an example of the function 375 which is configured to operate based on sensing data as illustrated in FIG. 3, respectively.
  • the sensing modeling function 582, sensing management function 584, sensing application function 586 and sensing results storage function 588 may each, in logic, be similar to the model training function 540, management function 545, inference function 550 and model storage function 555, respectively, with the difference that the sensing modeling function 582, sensing management function 584, sensing application function 586 and sensing results storage function 588 focus on sensing, while the model training function 540, management function 545, inference function 550 and model storage function 555 focus on AI.
  • the trained/updated model 522, performance feedback /re-model request 524, sensing model transfer request 526 and inference output 530 in FIG. 5 may have similar meanings as the trained/updated model 402, performance feedback /retraining request 404, model transfer request 406, model selection/activation/deactivation/switching/fallback 408, inference output 410 and model transfer 412 as illustrated in FIG. 4.
  • the AI/ML functional framework 500 differs from the AI/ML functional framework 400 as shown in FIG. 4 mainly in the following aspects.
  • there are a sensing modeling function 582, a sensing management function 584, a sensing application function 586 and a sensing results storage function 588 dedicated to sensing in the AI/ML functional framework 500, while in the AI/ML functional framework 400, such functions are incorporated in the corresponding AI functions (i.e., the model training function 440, management function 445, inference function 450 and model storage function 455).
  • the data collection function in the AI/ML functional framework 500 includes a sensing data collection 560 for collecting sensing data, while in the AI/ML functional framework 400, RF sensing data collection function 460-1 and non-RF sensing data collection function 460-2 are included to provide RF sensing data and non-RF sensing data, respectively.
  • in the AI/ML functional framework 500, the data fusion function 570 takes the output from the sensing data collection function 560 as well as the non-sensing data collection function 565 as input, while in the AI/ML functional framework 400, the data fusion function 470 takes the output from the RF sensing data collection function 460-1 and the non-RF sensing data collection function 460-2, but not from the non-sensing data collection function 465, as input to perform data fusion.
  • sensing data includes RF sensing data (provided by an RF sensing data collection function like the RF sensing data collection function 460-1) and non-RF sensing data (provided by a non-RF sensing data collection function like the non-RF sensing data collection function 460-2).
  • the AI/ML functional framework 500 and the flowchart of operations in the AI/ML functional framework 500 as shown in FIG. 5 support sensing for AI and AI for sensing.
  • the sensing results delivered by the sensing application function 586 can provide input data for model training function 540, management function 545 and inference function 550.
  • the sensing results 526 can provide extra data for the AI model, and may also provide approximate ground-truth to the AI model, e.g. location or channel information obtained by a sensing function like the sensing data collection function 560.
  • the inference results 506 delivered by the inference function 550 can provide input data for sensing modeling function 582, sensing management function 584 and sensing application function 586.
  • the AI model can provide predicted channel information (for example, in the inference results 506) for better sensing.
  • the non-sensing data collection function 565 is a function for collecting non-sensing data.
  • the sensing data collection function 560 is a function for collecting sensing data, including RF sensing data and non-RF sensing data collection; therefore, the sensing data collection function 560 may also be considered to include an RF sensing data collection function like the RF sensing data collection function 460-1 in FIG. 4 and a non-RF sensing data collection function like the non-RF sensing data collection function 460-2 in FIG. 4.
  • the data fusion function 570 is responsible for data processing.
  • the data processing may include data pre-processing and cleaning, formatting, and transformation, integrating multiple data sources to produce more useful information than that provided by any individual data source.
  • the sensing modelling function 582 is a function that reconstructs the physical world (that is, gets a model for the physical world) .
  • the sensing modelling function 582 may be responsible for environment reconstruction, channel reconstruction (by a ray tracing scheme, for example) , target reconstruction, digital twin, and so on. Other features or functions that may be supported or provided in the context of physical world reconstruction may include target detection and/or target tracking.
  • the sensing modelling function 582 should be able to request specific information to be used to train the sensing model and to avoid reception of unnecessary information.
  • the sensing modelling function 582 may train a sensing model in some embodiments and training is one way to obtain a sensing model. Generally, a sensing model may be trained or otherwise obtained.
  • the trained/updated model 522 in FIG. 5 indicates that, the sensing modelling function 582 may send a trained sensing model to the sensing results storage function 588. Alternatively, the sensing modelling function 582 may deliver an updated sensing model to the sensing results storage function 588.
  • the sensing management function 584 is a function that is responsible for performing sensing control on the sensing modelling function 582 and the sensing application function 586.
  • the sensing management function 584 may also monitor the sensing output. Based on the sensing output, the sensing management function 584 may determine whether the sensing results are applicable, for example, by comparing the sensing results with a pre-determined or pre-defined threshold. If it is determined that the sensing results are no longer applicable, the sensing management function 584 may request the sensing modelling function 582 to re-train the (sensing) model, and may indicate to the sensing application function 586 to switch the (sensing) model.
  • the sensing management function can also be referred to as sensing control function, sensing results management function, or simply management function.
  • sensing manager is also used herein as a general term for a sensing management element in a sensing system.
  • the sensing management function 584 receives the monitoring data, e.g. the ground truth data, from the data fusion function. With the monitoring data, the sensing management function 584 can compare the sensing output and the ground truth to determine the performance of the sensing model. After comparing the sensing output and the ground truth (i.e., after determining the performance of the sensing model), the sensing performance can be evaluated.
  • the sensing management function 584 may also receive the inference output 530 from the sensing application function 586.
  • the inference output 530 may include the performance of the sensing application function 586.
  • the performance feedback /re-model request 524 may be applied, for example, when the sensing management function 584 observes that the sensing performance of the current sensing model is not good enough. For example, during channel construction, a sensing model is generated according to a static environment map, but when there are many moving targets in the environment, causing too much signal reflection, the channel construction model may be inapplicable.
  • the sensing management function 584 may send current sensing performance to the sensing modelling function 582, including current sensing output and its accuracy, resolution, etc.
  • the sensing management function 584 may also request the sensing modelling function 582 to retrain the model, and request the sensing application function 586 to get an updated sensing model from the sensing results storage function 588.
  • the sensing management function 584 may send a sensing model transfer request 526 to the sensing results storage function 588 to request a model for the sensing application function 586.
  • the sensing model transfer request 526 may be an initial model transfer request for an initially trained sensing model or an updated model transfer request for an updated sensing model obtained by re-training an existing model.
  • the sensing modelling function 582 may obtain the sensing results (e.g. environment map) 526; application functions may then use the environment map to assist communication (e.g. beam prediction), where the data for the application functions may be a reference signal (RS) with low density for beam management.
  • the data for the application functions is optional, which means the application functions may depend entirely on the sensing results. Therefore, the sensing results 526 are beneficial to reduce the RS overhead.
  • the sensing modelling function 582 may obtain the sensing results 526 (e.g. whether there is an object and object information (for intruder detection, it is intruder information)).
  • the sensing results 526 may also be provided to the 3rd party.
  • the object information may include location, shape, etc.
  • the sensing application can use the sensing results 526 for beam management.
  • the action data is the RS for beam management.
  • in some embodiments, it is the sensing modelling function 582 that determines whether there is an object and also the object information.
  • alternatively, application functions may determine whether there is an object.
  • in that case, the sensing modelling function 582 may obtain the sensing model for object data (e.g. according to the received sensing signal to determine the object information).
  • the application functions may obtain the object information.
  • the action data may be the received sensing signal.
  • the inference output 530 is the output of the sensing model produced by the sensing application function 586.
  • the sensing application function 586 should signal the inference output 530 to nodes that have explicitly requested it (e.g. via subscription), or to nodes that are subject to actions based on the output from the sensing application function 586.
  • the sensing results storage function 588 is a function that stores the sensing models, for example, the reconstructed physical world (environment map, target and its location, for example) .
  • the storage location may be within RAN (at BS and/or UE side, for example) , or outside RAN (at the core network or the third party, for example) .
  • the sensing results storage function 588 may receive the sensing model from the sensing modelling function 582.
  • the sensing model may be an initially trained sensing model, or a re-trained/updated sensing model.
  • the sensing results storage function can also be referred to as sensing storage function, RAN storage function, local RAN storage function, or RAN and Core Network storage function.
  • the name “storage subsystem” is also used herein as a general term for a sensing results storage element in a sensing system.
  • a model is one type of sensing result shown in FIG. 1. More generally, the sensing results storage function 588 may store sensing results, which may, but need not necessarily, include a (sensing) model.
  • the sensing results storage function 588 may receive the sensing model transfer request 526 from the sensing management function 584, as described before in connection with the sensing management function 584. In response to reception of the sensing model transfer request 526, the sensing results storage function 588 may send the corresponding model to the sensing application function 586. For example, the sensing model transfer request 526 may indicate the requested model ID. Then, in response to the sensing model transfer request 526, the sensing results storage function 588 may send the model with the requested ID. Alternatively, the sensing model transfer request 526 may indicate the required (or desired) sensing functionality ID and/or sensing performance requirement (e.g. sensing accuracy, sensing distance/speed/angle resolution) . Then, in response to the sensing model transfer request 526, the sensing results storage function may deliver a model satisfying the indicated sensing functionality and the performance requirement.
  • for a function among the 11 parts of the AI/ML functional framework 500 as illustrated in FIG. 5, it can be located at a UE, BS, core network, or the 3rd party. Different sensing functions may be located at the same physical entity or at different physical entities.
  • an AI/ML functional framework 500 (including the signaling interface among functions) can be defined to support integrated AI and sensing, including sensing to improve AI performance and AI to improve sensing performance.
  • FIG. 6 illustrates a schematic diagram of a third example AI/ML functional framework 600 and the flowchart of operations in the AI/ML functional framework 600 in accordance with some embodiments of the present disclosure.
  • the AI/ML functional framework 600 as shown in FIG. 6 includes 8 parts, i.e., a model training function 640, a management function 645, a model inference function 650, a sensing application function 652, a model storage function 655, a sensing data collection function 660, a non-sensing data collection function 665 and a data fusion function 670.
  • the model training function 640, management function 645, model inference function 650 and model storage function 655 may each be an example of the first function 340, second function 345, third function 350, fourth function 355 as illustrated in FIG. 3, respectively, and may be the same or similar to the model training function 440, the management function 445, the inference function 450 and the model storage function 455 as illustrated in FIG. 4, respectively, and may be the same or similar to the model training function 540, the management function 545, the inference function 550 and the model storage function 555 as illustrated in FIG. 5, respectively.
  • the sensing data collection 660 and data fusion function 670 may each be an example of the function 375 which is configured to operate based on sensing data as illustrated in FIG. 3, respectively.
  • the sensing data collection function 660, non-sensing data collection function 665 and data fusion function 670 may be the same as or similar to the sensing data collection function 560, non-sensing data collection function 565 and data fusion function 570, respectively.
  • the AI/ML functional framework 600 may be considered as a combination of the left part of FIG. 5 (the non-sensing data collection function 565, sensing data collection 560 and the data fusion function 570) and the right part of FIG. 4 (that is, the model training function 440, management function 445, inference function 450 and the model storage function 455) plus a separate sensing application function 652.
  • the difference will be described, and the elements similar to those of FIGS. 4-5 may refer to the corresponding description and thus may be omitted for simplicity.
  • the non-sensing data collection function 665 is a function for collecting non-sensing data.
  • the sensing data collection function 660 is a function for collecting sensing data.
  • the sensing data may include RF sensing data and/or non-RF sensing data.
  • the data fusion function 670 is responsible for data processing.
  • the data processing may include data pre-processing and cleaning, formatting, and transformation, integrating multiple data sources to produce more useful information than that provided by any individual data source.
  • the model training function 640 is a function that performs the ML model training, validation, and testing which may generate model performance metrics as part of the model testing procedure.
  • the model training function 640 is also responsible for sensing model training.
  • the model training function 640 is also responsible for data preparation (e.g. data pre-processing and cleaning, formatting, and transformation). Interaction between the model training function and other functions may include training data 601, the trained/updated model 602 and sensing information and/or AI assistance information 607.
  • the training data 601 is training data received by the model training function 640 from at least one of the non-sensing data collection function 665, sensing data collection function 660, RF sensing data collection function, non-RF sensing data collection function or data fusion function 670.
  • the sensing data collection function 660 may also be considered to include an RF sensing data collection function like the RF sensing data collection function 460-1 in FIG. 4 and a non-RF sensing data collection function like the non-RF sensing data collection function 460-2 in FIG. 4.
  • the sensing data collection function 660 in FIG. 6 may be replaced by an RF sensing data collection (like the RF sensing data collection function 460-1 in FIG. 4) and a non-RF sensing data collection function (like the non-RF sensing data collection function 460-2 in FIG. 4) .
  • interaction of the model training function 640 may involve the trained/updated model 602 and/or the sensing information and/or AI assistance information 607.
  • the model training function 640 may send an (initially) trained AI model (e.g. by distributed learning) to the model storage function 655.
  • the model training function 640 may deliver an updated (i.e., re-trained) AI model to the model storage function 655.
  • the model training function 640 may receive the sensing information and/or assistance information 607 from the management function 645.
  • the management function 645 is a function that is responsible for performing model control on model training function 640, model inference function 650 and sensing application function 652.
  • the management function 645 may also monitor the model output of the AI and/or sensing model, and determine whether the AI or sensing model qualities are applicable. If it is determined that the AI or sensing model qualities are no longer applicable, the management function 645 may request the model training function 640 to re-train the AI and/or sensing model accordingly, and indicate to the model inference function 650 and/or sensing application function 652 to switch the AI or sensing model.
  • the management function 645 may receive the monitoring data 603 from the data fusion function 670.
  • the management function 645 may also receive the output of the model inference function 650 and/or sensing application function 652.
  • the output of the model inference function 650 is illustrated as output 610
  • the output of the sensing application function 652 is illustrated as output 611.
  • the output 610 may include the performance of the model inference.
  • the output 611 may include the performance of the sensing inference results.
  • if certain information derived from the model inference function 650 and/or the sensing application function 652, or information derived from the management function 645, is suitable for improvement of the AI/sensing model trained in the model training function 640, the performance feedback /re-model (retraining) request 604 is applied.
  • the management function 645 may send current AI/sensing performance to the model training function 640, including current AI/sensing model output and its accuracy, etc.
  • the management function 645 may also request the model training function 640 to retrain the AI/sensing model and send the retrained AI/sensing model to the model storage function 655, and request the model inference function 650 to get an updated AI model, and request the sensing application function 652 to get an updated sensing model.
  • the management function 645 may send a model switching signalling to the model inference function 650 to switch to another AI model, or send a fallback signalling to indicate the model inference function 650 to use non-AI mode.
  • the management function 645 may send a model switching signalling to the sensing application function 652 to switch to another sensing model, or send a fallback signalling to indicate the sensing application function 652 to use non-sensing mode.
  • when there are multiple candidate AI models, the management function 645 may indicate to the model inference function 650 which AI model to use, and activate or de-activate one or multiple of the candidate AI models. Also, when there are multiple candidate sensing models, the management function 645 may indicate to the sensing application function 652 which sensing model to use, and activate or de-activate one or multiple of the candidate sensing models.
  • the management function 645 may send an AI model transfer request 606 to the model storage function 655 to request an AI model for the model inference function 650.
  • the AI model transfer request 606 may be an initial AI model transfer request for an initially trained AI/ML model or an updated AI model transfer request for an updated AI/ML model obtained by re-training an existing AI/ML model.
  • the management function 645 may send a sensing model transfer request 606 to the model storage function 655 to request a sensing model for the sensing application function 652.
  • the sensing model transfer request 606 may be an initial sensing model transfer request for an initially trained sensing model or an updated sensing model transfer request for an updated sensing model obtained by re-training an existing sensing model.
  • Model inference function 650 is a function that provides inference results.
  • the model inference function 650 is also responsible for performing actions according to inference results. For example, the model inference function 650 may trigger or perform corresponding actions according to inference decision and it may trigger actions directed to other entities or to itself.
  • the model inference function 650 may also be responsible for data preparation (e.g. data pre-processing and cleaning, formatting, and transformation) based on inference data delivered by a data collection function (data received from the non-sensing data collection function 665, sensing data collection function 660, RF sensing data collection function, non-RF sensing data collection function or data fusion function 670) .
  • the model inference function 650 may send the model inference results to sensing application function 652 to assist sensing inference.
  • Sensing application function 652 is a function that provides sensing decision output or sensing inference output (e.g. predictions or detections, for example, target detection, channel prediction, etc. ) .
  • the sensing application function 652 is also responsible for performing actions according to sensing results. For example, the sensing application function 652 may trigger or perform corresponding actions according to sensing decision or prediction, and it may trigger actions directed to other entities or to itself.
  • the sensing application function 652 may also be responsible for data preparation (e.g. data pre-processing and cleaning, formatting, and transformation) .
  • the sensing application function 652 may send the sensing results to model inference function 650 to assist model inference for better model inference results.
  • Model storage function 655 is a function that stores the AI/sensing models.
  • the storage location can be within RAN (e.g. BS and/or UE side) , or outside RAN (e.g. core network or the third party) .
  • an AI/ML functional framework 600 (including the signaling interface among functions) can be defined to support integrated AI and sensing, including sensing to improve AI performance and AI to improve sensing performance.
  • FIG. 7 illustrates a schematic diagram of a fourth example AI/ML functional framework 700 and the flowchart of operations in the AI/ML functional framework 700 in accordance with some embodiments of the present disclosure.
  • the AI/ML functional framework 700 as shown in FIG. 7 includes 10 parts, i.e., a model training function 740, a management function 745, a model inference function 750, a sensing application function 752, a model storage function 755, an anchor management function 766, an AI anchor data collection function 762, a sensing anchor data collection function 760, a non-anchor data collection function 764 and a data fusion function 770.
  • the model training function 740, management function 745, model inference function 750 and model storage function 755 may each be an example of the first function 340, second function 345, third function 350, fourth function 355 as illustrated in FIG. 3, respectively.
  • the AI anchor data collection function 762, sensing anchor data collection function 760, non-anchor data collection function 764 and data fusion function 770 may each be an example of the function 375 which is configured to operate based on sensing data as illustrated in FIG. 3.
  • the model training function 740, management function 745, model inference function 750, sensing application function 752 and model storage function 755 may each be the same as or similar to the model training function 640, management function 645, model inference function 650, sensing application function 652 and model storage function 655 as illustrated in FIG. 6.
  • the AI/ML functional framework 700 differs from the AI/ML functional framework 600 as shown in FIG. 6 mainly in the data collection function, i.e., the left part of FIG. 7.
  • the difference will be described, and the elements similar to those of FIG. 6 may refer to the corresponding description and thus may be omitted for simplicity.
  • Anchor management function 766 is a function that is responsible for performing control on AI anchors, sensing anchors and non-anchors.
  • the anchor management function 766 can configure which node is an AI anchor, a sensing anchor or a non-anchor, and indicate a specific anchor to perform data collection with a corresponding data type.
  • the anchor management function 766 may also indicate non-anchor to perform data collection with a corresponding collected data type.
  • An anchor may be a node which can report ground truth to other functions.
  • the anchor is deployed by the network operator at a known location, and the anchor performs measurement and reports the collected data to the network, including the measurement data and the ground truth.
  • the ground truth includes the label data information for an AI model.
  • the anchor may include an AI anchor and a sensing anchor; the sensing anchor is deployed for sensing data collection, and the AI anchor is deployed for AI data collection, training, monitoring and/or inference.
  • an anchor may be a passive object.
  • an anchor may be an object with known information such as shape, size, orientation, speed, location, distances or relative motion between objects. Such anchor information can be indicated from a base station to a UE for example, in which case the UE can perform sensing measurement and compare its sensing results with the anchor information, so as to calibrate its sensing results.
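  • A minimal sketch of such anchor-based calibration is given below, assuming two-dimensional positions; the offset estimated at the anchor is applied to subsequent sensing results, and all names are hypothetical.

```python
# Hypothetical calibration of UE sensing results against an anchor whose
# position is known (e.g., indicated by the base station).

def estimate_bias(sensed_anchor_xy, known_anchor_xy):
    """Offset between the sensed and the known anchor position."""
    return (known_anchor_xy[0] - sensed_anchor_xy[0],
            known_anchor_xy[1] - sensed_anchor_xy[1])

def calibrate(sensing_result_xy, bias):
    """Apply the anchor-derived offset to a new sensing result."""
    return (sensing_result_xy[0] + bias[0], sensing_result_xy[1] + bias[1])

bias = estimate_bias(sensed_anchor_xy=(10.2, 4.9), known_anchor_xy=(10.0, 5.0))
print(calibrate((20.4, 7.3), bias))  # approximately (20.2, 7.4)
```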
  • the sensing anchor data collection function 760 may be a function that collects sensing anchor data from passive sensing anchors.
  • the AI anchor data collection function 762 may be a function that collects AI anchor data from passive AI anchors.
  • the collected sensing anchor data and/or AI anchor data may be used by the model training function 740 and/or management function 745 and/or model inference function 750 and/or sensing application function 752 to process other data (for example, for data calibration) .
  • the model training function 740 may use the sensing anchor data and/or AI anchor data to perform data preparation.
  • the management function 745 may use the sensing anchor data and/or AI anchor data to perform model performance monitoring.
  • the management function 745 may be aware that the model performance of the sensing model currently used degrades, and then transmits a performance feedback and/or retraining request 704 to the model training function 740 to, for example, re-train (and update) the sensing model for better model performance.
  • the sensing application function 752 may also use the sensing anchor data and/or AI anchor data, for example, to perform data preparation and/or self-check of its inference results.
  • the sensing application function 752 may check whether its inference results are precise enough using the sensing anchor data and/or AI anchor data, for example, by comparing an object shape derived from its inference results and the actual object shape as indicated in the sensing anchor data and/or AI anchor data to confirm whether the difference between the two is within a pre-defined or pre-configured or required threshold.
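  • In the simplest case, the self-check described above reduces to a threshold comparison, as in the following illustrative sketch; the metric and threshold value are assumptions made for illustration.

```python
# Hypothetical self-check at the sensing application function 752: compare a
# shape metric derived from inference results with the actual value from the
# anchor data, and accept only if the difference is within a threshold.

def self_check(derived_metric, anchor_metric, threshold):
    """True if the inference result is precise enough w.r.t. anchor data."""
    return abs(derived_metric - anchor_metric) <= threshold

# Example: derived object width 2.05 m vs. anchor-reported width 2.00 m,
# with a required threshold of 0.1 m.
print(self_check(2.05, 2.00, 0.1))  # True
```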
  • Sensing anchor data collection function 760 is a function that provides input data to data fusion function 770. Specifically, the sensing anchor data collection function 760 may use sensing anchors to collect sensing anchor data and then provide the sensing anchor data as input data to the data fusion function 770 for data fusion.
  • the input data may include ground truth information. Ground truth refers to the true answer to a specific problem or question. For example, for channel prediction by AI, the ground truth is the exact channel information. Examples of input data may include measurements from UEs or different network entities. Here, the measurement may be RF sensing measurement, non-RF sensing (LIDAR (Light Detection and Ranging) , camera, video, sensor, etc. ) measurement.
  • AI anchor data collection function 762 is a function that provides input data to data fusion function. Specifically, the AI anchor data collection function 762 may use AI anchors to collect AI anchor data and then provide the AI anchor data as input data to the data fusion function 770 for data fusion.
  • the input data may include ground truth information. Examples of input data may include measurement results from UEs or different network entities. Here, the measurement results are not obtained by sensing; for example, they may be obtained by measurement of a reference signal.
  • Non-anchor data collection function 764 is a function that provides input data to data fusion function 770. Specifically, the non-anchor data collection function 764 may use non-anchors to collect non-anchor data and then provide the non-anchor data as input data to the data fusion function 770 for data fusion.
  • the input data does not include the ground truth information. Examples of input data may include measurements from UEs or different network entities. Here, the measurement may be an RF sensing measurement, a non-RF sensing measurement (LIDAR (Light Detection and Ranging), camera, video, sensor, etc.), or non-sensing data.
  • the data fusion function 770 is responsible for data processing.
  • the data processing may include data pre-processing and cleaning, formatting, and transformation, integrating multiple data sources to produce more useful information than that provided by any individual data source.
  • the data fusion function 770 combines the input from sensing anchor data collection function 760, AI anchor data collection function 762 and non-anchor data collection function 764, so as to derive fused data to be used in the training data 701, the monitoring data 703 and the inference data 705.
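  • The following Python sketch illustrates one way such a fusion step could look: it merges records from the three collection sources, tags whether each record carries ground truth, and applies trivial cleaning and formatting. All function and field names are illustrative assumptions, not the disclosed implementation.

```python
# Illustrative-only sketch of the data fusion function 770: combine records
# from the sensing anchor, AI anchor and non-anchor collection functions
# into one cleaned, uniformly formatted data set.

def fuse(sensing_anchor_data, ai_anchor_data, non_anchor_data):
    fused = []
    for source, records, has_ground_truth in (
        ("sensing_anchor", sensing_anchor_data, True),
        ("ai_anchor", ai_anchor_data, True),
        ("non_anchor", non_anchor_data, False),
    ):
        for record in records:
            if record.get("value") is None:       # cleaning: drop empty records
                continue
            fused.append({
                "source": source,
                "ground_truth": has_ground_truth,
                "value": float(record["value"]),  # formatting: one numeric type
            })
    return fused

# The fused output can then feed training data 701, monitoring data 703
# and inference data 705.
fused = fuse([{"value": 1.0}], [{"value": 2}], [{"value": None}, {"value": 3}])
print(fused)
```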
  • Model training function 740 is a function that performs the ML model training, validation, and testing which may generate model performance metrics as part of the model testing procedure.
  • the model training function 740 may also be responsible for sensing model training.
  • the model training function 740 may also be responsible for data preparation (e.g. data pre-processing and cleaning, formatting, and transformation) . Interaction between model training function 740 and other functions may include the training data 701, trained/updated model 702 and sensing information and/or AI assistance information 707.
  • the model training function 740 receives training data 701 from at least one of the AI anchor data collection function 762, sensing anchor data collection function 760, non-anchor data collection function 764 or data fusion function 770.
  • the model training function 740 receives training data 701 from the data fusion function 770.
  • output of at least one of the AI anchor data collection function 762, the sensing anchor data collection function 760, or the non-anchor data collection function 764 may also be sent to the model training function 740 as the training data 701.
  • the model training function 740 may send the trained/updated (re-trained) model 702 to the model storage function 755.
  • the model training function 740 may receive sensing information and/or assistance information 707 from management function 745.
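  • As a minimal illustration of the training workflow described above (data preparation, a train/test split, and performance metrics generated as part of testing), consider the following Python sketch. The mean-predictor "model" and all names are placeholders chosen only to keep the sketch self-contained; they do not represent the actual model training function 740.

```python
# Hypothetical skeleton of the model training function 740: prepare the
# training data 701, run train/test, and hand the trained/updated model 702
# to a stand-in for the model storage function 755. The "model" is a
# trivial mean predictor so that the sketch stays self-contained.

def prepare(raw):                        # data preparation: cleaning + formatting
    return [float(x) for x in raw if x is not None]

def train(training_data):
    data = prepare(training_data)
    split = int(0.8 * len(data))
    train_set, test_set = data[:split], data[split:]
    model = sum(train_set) / len(train_set)                # "trained" parameter
    test_error = sum(abs(x - model) for x in test_set) / max(len(test_set), 1)
    return model, {"test_error": test_error}               # model performance metrics

model, metrics = train([1.0, None, 2.0, 3.0, 2.5, 2.0])
model_storage = {"latest": model}                          # stand-in for function 755
print(metrics)
```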
  • the management function 745 is a function that is responsible for performing model control on model training function 740, model inference function 750 and sensing application function 752.
  • the management function 745 may also monitor the model output of the AI and/or sensing model, and determine whether the quality of the AI or sensing model remains applicable. If it is determined that the AI or sensing model quality is no longer applicable, the management function 745 may request the model training function 740 to re-train the AI and/or sensing model accordingly, and indicate to the model inference function 750 and/or sensing application function 752 to switch the AI or sensing model.
  • the management function 745 may receive the monitoring data 703 from the data fusion function 770.
  • the management function 745 may also receive the output of the model inference function 750 and/or sensing application function 752.
  • the output of the model inference function 750 is illustrated as output 710, and the output of the sensing application function 752 is illustrated as output 711.
  • the output 710 may include the performance of the model inference, and the output 711 may include the performance of the sensing inference results.
  • If certain information derived from the model inference function 750 and/or the sensing application function 752, or information derived from the management function 745, is suitable for improvement of the AI/sensing model trained in the model training function 740, the performance feedback/retraining request 704 is applied.
  • the management function 745 may send current AI/sensing performance to the model training function 740, including current AI/sensing model output and its accuracy, etc.
  • the management function 745 may also request the model training function 740 to retrain the AI/sensing model and send the retrained AI/sensing model to the model storage function 755, and request the model inference function 750 to get an updated AI model, and request the sensing application function 752 to get an updated sensing model.
  • the management function 745 may send a model switching signalling to the model inference function 750 to switch to another AI model, or send a fallback signalling to indicate the model inference function 750 to use non-AI mode.
  • the management function 745 may send a model switching signalling to the sensing application function 752 to switch to another sensing model, or send a fallback signalling to indicate the sensing application function 752 to use non-sensing mode.
  • the management function 745 may indicate which AI model the model inference function 750 is to use, and activate or de-activate one or multiple of the candidate AI models. Also, when there are multiple candidate sensing models, the management function 745 may indicate which sensing model the sensing application function 752 is to use, and activate or de-activate one or multiple of the candidate sensing models.
  • the management function 745 may send an AI model transfer request 706 to the model storage function 755 to request an AI model for the model inference function 750.
  • the AI model transfer request 706 may be an initial AI model transfer request for an initially trained AI/ML model or an updated AI model transfer request for an updated AI/ML model obtained by re-training an existing AI/ML model.
  • the management function 745 may send a sensing model transfer request 706 to the model storage function 755 to request a sensing model for the sensing application function 752.
  • the sensing model transfer request 706 may be an initial sensing model transfer request for an initially trained sensing model or an updated sensing model transfer request for an updated sensing model obtained by re-training an existing sensing model. The overall control flow of the management function 745 is sketched below.
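  • The control behavior described in the preceding bullets (retraining request 704, model switching, fallback, (de)activation, and model transfer request 706) could be organized as in the following hypothetical Python sketch; the threshold value, the message tuples, and the function names are assumptions made for illustration only.

```python
# Hypothetical sketch of the model control performed by the management
# function 745: monitor reported performance (outputs 710/711), request
# retraining via feedback 704 when quality degrades, and signal the
# inference/sensing functions to switch, fall back, or (de)activate models.

ACCURACY_THRESHOLD = 0.9   # assumed pre-configured quality threshold

def manage(reported_accuracy, candidate_models, active_model):
    signals = []
    if reported_accuracy < ACCURACY_THRESHOLD:
        # performance feedback / retraining request 704 to function 740
        signals.append(("retrain_request", {"accuracy": reported_accuracy}))
        alternatives = [m for m in candidate_models if m != active_model]
        if alternatives:
            # model switching / (de)activation signalling to function 750 / 752
            signals.append(("switch_model", alternatives[0]))
            signals.append(("activate", alternatives[0]))
            signals.append(("deactivate", active_model))
            # model transfer request 706 to the model storage function 755
            signals.append(("model_transfer_request", alternatives[0]))
        else:
            # fallback signalling: use non-AI / non-sensing mode
            signals.append(("fallback", None))
    return signals

print(manage(0.8, ["model_a", "model_b"], active_model="model_a"))
```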
  • Model inference function 750 is a function that provides inference results.
  • the model inference function 750 is also responsible for performing actions according to inference results.
  • the model inference function 750 may trigger or perform corresponding actions according to an inference decision, and it may trigger actions directed to other entities or to itself.
  • the model inference function 750 may also be responsible for data preparation (e.g. data pre-processing and cleaning, formatting, and transformation) based on inference data delivered by a data collection function (data received from the non-sensing data collection function 765, sensing data collection function 760, RF sensing data collection function, non-RF sensing data collection function or data fusion function 770) .
  • the model inference function 750 may send the model inference results to sensing application function 752 to assist sensing inference.
  • Sensing application function 752 is a function that provides sensing decision output or sensing inference output (e.g. predictions or detections, for example, target detection, channel prediction, etc. ) .
  • the sensing application function 752 is also responsible for performing actions according to sensing results. For example, the sensing application function 752 may trigger or perform corresponding actions according to sensing decision or prediction, and it may trigger actions directed to other entities or to itself.
  • the sensing application function 752 may also be responsible for data preparation (e.g. data pre-processing and cleaning, formatting, and transformation) .
  • the sensing application function 752 may send the sensing results to the model inference function 750 to assist model inference for better model inference results; this mutual assistance is sketched below.
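  • A toy Python sketch of this two-way assistance follows; the weighting scheme and all names are invented for illustration and are not part of the disclosure.

```python
# Hypothetical sketch of the mutual assistance between the model inference
# function 750 and the sensing application function 752: each may feed its
# results to the other to refine the other's output.

def model_inference(inference_data, sensing_hint=None):
    estimate = sum(inference_data) / len(inference_data)
    if sensing_hint is not None:                 # sensing results assist inference
        estimate = 0.7 * estimate + 0.3 * sensing_hint
    return estimate

def sensing_application(sensing_data, inference_hint=None):
    detection = max(sensing_data)
    if inference_hint is not None:               # inference results assist sensing
        detection = 0.7 * detection + 0.3 * inference_hint
    return detection

inference_out = model_inference([1.0, 2.0, 3.0])
sensing_out = sensing_application([2.5, 2.8], inference_hint=inference_out)
refined_inference = model_inference([1.0, 2.0, 3.0], sensing_hint=sensing_out)
print(inference_out, sensing_out, refined_inference)
```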
  • Model storage function 755 is a function that stores the AI/sensing models.
  • the storage location can be within the RAN (e.g. BS and/or UE side), or outside the RAN (e.g. core network or a third party).
  • the eight parts of the AI/ML functional framework 700 as illustrated in FIG. 7 may be located at a UE, a BS, the core network, or a third party.
  • they may be located at the same physical entity or different physical entities.
  • an AI/ML functional framework (including the signaling interface among functions) can be defined to support integrated AI and sensing, including sensing to improve AI performance and AI to improve sensing performance.
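  • For reference, the signaling interface among the functions of FIG. 7, as described in the bullets above, can be summarized programmatically. The following Python sketch merely tabulates the signals by their reference numerals; the `Signal` dataclass is an illustrative construct, not part of the framework definition.

```python
# Hypothetical tabulation of the signaling interface among the framework
# functions, using the reference numerals of FIG. 7 purely as labels.
from dataclasses import dataclass

@dataclass
class Signal:
    numeral: int      # reference numeral in FIG. 7
    name: str
    source: str       # producing function
    sink: str         # consuming function

INTERFACE = [
    Signal(701, "training data", "data fusion 770", "model training 740"),
    Signal(702, "trained/updated model", "model training 740", "model storage 755"),
    Signal(703, "monitoring data", "data fusion 770", "management 745"),
    Signal(704, "performance feedback / retraining request", "management 745", "model training 740"),
    Signal(705, "inference data", "data fusion 770", "model inference 750"),
    Signal(706, "model transfer request", "management 745", "model storage 755"),
    Signal(707, "sensing/AI assistance information", "management 745", "model training 740"),
    Signal(710, "inference output/performance", "model inference 750", "management 745"),
    Signal(711, "sensing output/performance", "sensing application 752", "management 745"),
]

for s in INTERFACE:
    print(f"{s.numeral}: {s.name} ({s.source} -> {s.sink})")
```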
  • FIG. 8 illustrates a block diagram of an electronic device (ED) 800 that may be used for implementing the devices and methods disclosed herein.
  • the electronic device 800 may be an element of communications network infrastructure, such as a base station (for example, a NodeB, an evolved NodeB (eNodeB or eNB), or a next generation NodeB (sometimes referred to as a gNodeB or gNB)), a home subscriber server (HSS), a gateway (GW) such as a packet gateway (PGW) or a serving gateway (SGW), or various other nodes or functions within a core network (CN) or a Public Land Mobile Network (PLMN).
  • the electronic device may be a device that connects to the network infrastructure over a radio interface, such as a mobile phone, smart phone or other such device that may be classified as a User Equipment (UE) .
  • ED 800 may be a Machine Type Communications (MTC) device (also referred to as a machine-to-machine (M2M) device) , or another such device that may be categorized as a UE despite not providing a direct service to a user.
  • ED 800 may be a road side unit (RSU) , a vehicle UE (V-UE) , pedestrian UE (P-UE) or an infrastructure UE (I-UE) .
  • an ED may also be referred to as a mobile device, a term intended to reflect devices that connect to a mobile network, regardless of whether the device itself is designed for, or capable of, mobility.
  • Specific devices may utilize all of the components shown or only a subset of the components, and levels of integration may vary from device to device.
  • a device may contain multiple instances of a component, such as multiple processors, memories, transmitters, receivers, etc.
  • the electronic device 800 typically includes a processor 802, such as a Central Processing Unit (CPU) , and may further include specialized processors such as a Graphics Processing Unit (GPU) or other such processor, a memory 804, a network interface 806 and a bus 808 to connect the components of ED 800.
  • ED 800 may optionally also include components such as a mass storage device 810, a video adapter 812, and an I/O interface 816 (shown in dashed lines) .
  • the memory 804 may comprise any type of non-transitory system memory, readable by the processor 802, such as static random access memory (SRAM) , dynamic random access memory (DRAM) , synchronous DRAM (SDRAM) , read-only memory (ROM) , or a combination thereof.
  • the memory 804 may include more than one type of memory, such as ROM for use at boot-up, and DRAM for program and data storage for use while executing programs.
  • the bus 808 may be one or more of any type of several bus architectures including a memory bus or memory controller, a peripheral bus, or a video bus.
  • the electronic device 800 may also include one or more network interfaces 806, which may include at least one of a wired network interface and a wireless network interface. As illustrated in FIG. 8, network interface 806 may include a wired network interface to connect to a network 822, and also may include a radio access network interface 820 for connecting to other devices over a radio link. When ED 800 is a network infrastructure element, the radio access network interface 820 may be omitted for nodes or functions acting as elements of the PLMN other than those at the radio edge (e.g., an eNB). When ED 800 is infrastructure at the radio edge of a network, both wired and wireless network interfaces may be included.
  • the radio access network interface 820 may be present and may be supplemented by other wireless interfaces such as WiFi network interfaces.
  • the network interfaces 806 allow the electronic device 800 to communicate with remote entities such as those connected to network 822.
  • the mass storage 810 may comprise any type of non-transitory storage device configured to store data, programs, and other information and to make the data, programs, and other information accessible via the bus 808.
  • the mass storage 810 may comprise, for example, one or more of a solid state drive, hard disk drive, a magnetic disk drive, or an optical disk drive.
  • the mass storage 810 may be remote to the electronic device 800 and accessible through use of a network interface such as interface 806.
  • the mass storage 810 is distinct from memory 804 where it is included, and may generally perform storage tasks compatible with higher latency, but may generally provide lesser or no volatility.
  • the mass storage 810 may be integrated with a heterogeneous memory 804.
  • the optional video adapter 812 and the I/O interface 816 provide interfaces to couple the electronic device 800 to external input and output devices.
  • input and output devices include a display 814 coupled to the video adapter 812 and an I/O device 818 such as a touch-screen coupled to the I/O interface 816.
  • Other devices may be coupled to the electronic device 800, and additional or fewer interfaces may be utilized.
  • a serial interface such as Universal Serial Bus (USB) (not shown) may be used to provide an interface for an external device.
  • the embodiments of the present disclosure may be implemented by means of a software program so that the electronic device 800 may perform any process of the embodiments of the disclosure as discussed with reference to FIGS. 2-8.
  • the embodiments of the present disclosure may also be implemented by hardware or by a combination of software and hardware.
  • the software program may be tangibly contained in a computer-readable medium which may be included in the electronic device 800 (such as in the memory 804 or mass storage 810) or other storage devices that are accessible by the electronic device 800.
  • the electronic device 800 may load the software program from the computer-readable medium to the memory 804 for execution.
  • the computer-readable medium may include any types of tangible non-volatile storage, such as ROM, EPROM, a flash memory, a hard disk, CD, DVD, and the like.
  • various embodiments of the present disclosure may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While various aspects of embodiments of the present disclosure are illustrated and described as block diagrams, flowcharts, or using some other pictorial representations, it is to be understood that the block, apparatus, system, technique or method described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
  • FIG. 9 illustrates a schematic diagram of a structure of an apparatus 900 in accordance with some embodiments of the present disclosure.
  • the apparatus 900 includes a performing unit 902.
  • the apparatus 900 may be applied to the communication system as shown in FIG. 1, and may implement any of the methods provided in the foregoing embodiments.
  • a physical representation form of the apparatus 900 may comprise a communication device (for example, a network device, or a UE, or a core network device, or a 3rd party device) , or a part of the communication device.
  • the apparatus 900 may be another apparatus that can implement a function of the communication device, for example, a processor or a chip inside the communication device.
  • the apparatus 900 may be a programmable chip such as a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), an application-specific integrated circuit (ASIC), or a system on a chip (SoC).
  • the performing unit 902 may be configured to perform at least one operation based on an AI/ML functional framework.
  • the AI/ML functional framework may comprise at least one of a first function configured to determine first one or more devices for participating in a training process of an AI/ML model, a second function configured to determine second one or more devices for performing model monitoring or functionality monitoring of the AI/ML model, or a third function configured to determine third one or more devices for performing model inference based on the AI/ML model.
  • the AI/ML functional framework may further comprise at least one of a fourth function configured to perform model training of the AI/ML model based on the training process, a fifth function configured to perform model management of the AI/ML model, a sixth function configured to provide at least one inference result of the model inference, a seventh function configured to provide first input data to the first function, provide second input data to the second function, and provide third input data to the third function, or an eighth function configured to store the AI/ML model.
  • the apparatus 900 can include various other units or modules which may be configured to perform various operations or functions as described in connection with the foregoing method embodiments. The details can be obtained by referring to the detailed description of the foregoing method embodiments and are not described herein again.
  • division into the units or modules in the foregoing embodiments of the present disclosure is an example, and is merely logical function division. In actual implementation, there may be another division manner.
  • function units in embodiments of the present disclosure may be integrated into one performing unit, or may exist alone physically, or two or more units may be integrated into one unit.
  • the integrated unit may be implemented in a form of hardware, or may be implemented in a form of a software function unit.
  • the present disclosure also provides at least one computer program product tangibly stored on a non-transitory computer-readable storage medium.
  • the computer program product includes computer-executable instructions, such as those included in program modules, being executed in a device on a target real or virtual processor, to carry out the method 200 or the flowchart as described above with reference to FIGS. 2-8.
  • program modules include routines, programs, libraries, objects, classes, components, data structures, or the like that perform particular tasks or implement particular abstract data types.
  • the functionality of the program modules may be combined or split between program modules as desired in various embodiments.
  • Machine-executable instructions for program modules may be executed within a local or distributed device. In a distributed device, program modules may be located in both local and remote storage media.
  • Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
  • the program code may execute entirely on a machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
  • the computer program codes or related data may be carried by any suitable carrier to enable the device, apparatus or processor to perform various processes and operations as described above.
  • Examples of the carrier include a signal, computer-readable medium, and the like.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • a computer-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the computer-readable storage medium would include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.


Abstract

Example embodiments of the present disclosure relate to an artificial intelligence/machine learning (AI/ML) framework for communication. An example method includes performing at least one operation based on an AI/ML functional framework, which may comprise at least one of a first function, a second function, a third function, a fourth function or at least one function configured to operate based on sensing data. The first function is configured to perform model training of at least one of an AI/ML model, an AI/ML sub-model, an AI/ML functionality or an AI/ML sub-functionality. The second function is configured to perform management of the AI/ML model. The third function is configured to perform inference of the AI/ML model to obtain inference results. The fourth function is configured to store the AI/ML model. In this way, an AI/ML framework with sensing functionalities (including sensing for AI/ML and AI/ML for sensing) can be implemented.

Description

AI/ML FRAMEWORK FOR COMMUNICATION
FIELD
Example embodiments of the present disclosure generally relate to the field of communications, and in particular, to an artificial intelligence /machine learning (AI/ML) functional framework for communication.
BACKGROUND
Artificial intelligence (AI) , and in particular deep machine learning (ML) , is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. It is expected that the introduction of AI will create a paradigm shift in virtually every sector of the tech industry and AI is expected to play a role in advancement of network technologies. For example, existing communication techniques, which rely on classical analytical modeling of channels, have enabled wireless communications to take place at close to the theoretical Shannon limit. To further maximize efficient use of the signal space, existing techniques may be unsatisfactory. AI is expected to help address this challenge. Other aspects of wireless communication may benefit from the use of AI, particularly in future generations of wireless technologies, such as technologies in advanced 5G and future 6G systems, and beyond.
However, considering the massive devices in the network which have data and computing capability, network is expected to provide AI service (s) .
SUMMARY
Some embodiments of the disclosure will propose designs on the AI/ML framework with sensing functionalities, including sensing for AI/ML (sensing improves AI/ML performance) and AI/ML for sensing (AI/ML improves sensing performance) . In general, example embodiments of the present disclosure provide a solution for an AI/ML functional framework for communication, especially for a 6G AI/ML framework with sensing.
In a first aspect, there is provided a method. The method comprises: performing at least one operation based on an artificial intelligence/machine learning (AI/ML) functional framework, wherein the AI/ML functional framework comprises: a first function configured to perform model training of at least one of an AI/ML model, an AI/ML sub-model, an AI/ML functionality or an AI/ML sub-functionality; a second function configured to perform management of the AI/ML model; a third function configured to perform inference of the AI/ML model to obtain inference results; a fourth function configured to store the AI/ML model; and at least one function configured to operate based on sensing data. In this way, an AI/ML framework with sensing functionalities (including sensing for AI/ML and AI/ML for sensing) can be implemented.
In some example embodiments, the first function is further configured to perform at least one of the following: validation of the AI/ML model; testing of the AI/ML model; or data preparation based on data received by the first function. In this way, the first function can provide a more accurate AI/ML model, which in turn can provide more accurate inference results, thus the reliability of the AI/ML model can be improved.
In some example embodiments, the second function is further configured to at least one of the following: perform control of the model training of the at least one of AI/ML model, AI/ML sub-model, AI/ML functionality or AI/ML sub-functionality; perform control of the inference of the AI/ML model; or monitor output of the AI/ML model. In this way, the second function can facilitate the first function to provide a more accurate AI/ML model, which in turn can provide more accurate inference results, thus the reliability of the AI/ML model can be improved.
In some example embodiments, the third function is further configured to at least one of the following: perform an action based on the inference results; or perform data preparation based on data received by the third function. In this way, the third function can perform the action based on the inference results of the AI/ML model, improving the processing efficiency and reliability with the AI/ML model.
In some example embodiments, the at least one operation comprises at least one of the following operations performed by the first function: transmitting the trained AI/ML model to the fourth function, receiving AI/ML assistance information from the second function, or receiving, from the second function, a performance level of the AI/ML model and a request to retrain the AI/ML model. In this way, the first function can provide a more accurate (re) trained AI/ML model based on the AI/ML assistance information and/or the performance level of the AI/ML model. The (re) trained AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the (re) trained AI/ML model can be improved.
In some example embodiments, the at least one operation comprises the following operations performed by the second function: receiving the inference results from the third function. In this way, the second function can facilitate the first function to provide a more accurate retrained/updated AI/ML model based on the inference results of the current AI/ML model. The retrained/updated AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the AI/ML model can be improved.
In some example embodiments, the at least one operation further comprises the following operations performed by the second function: determining that a performance level of the AI/ML model is below a threshold level based on the inference results received from the third function; and based on determining that the performance level is below the threshold level, transmitting, to the first function, the performance level of the AI/ML model and a request to retrain the AI/ML model. In this way, the second function can request the first function to retrain the AI/ML model in response to the performance level of the currently used AI/ML model becoming below a threshold level. In this sense, the second function can facilitate the first function to provide a more accurate retrained/updated AI/ML model based on the inference results of the current AI/ML model. The retrained/updated AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model.
In some example embodiments, the at least one operation comprises at least one of the following operations performed by the second function: transmitting AI/ML assistance information to the first function, transmitting, to the third function, a switching indication to switch from the AI/ML model to another AI/ML model; transmitting, to the third function, a fallback indication to apply a non-AI/ML model instead of the AI/ML model; transmitting, to the third function, an activating indication to activate one or more of a plurality of candidate AI/ML models; or transmitting, to the third function, a deactivating indication to deactivate one or more of the plurality of candidate AI/ML models. In this way, the second function can provide the AI/ML assistance information to the first function to obtain a more accurate (re) trained AI/ML model based on the AI/ML assistance information. The retrained/updated AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model. Also, the second function can change/switch/ (de) select a desired AI/ML model for future use, improving the flexibility in management on the third function and further the whole AI/ML functional framework.
In some example embodiments, the at least one operation comprises the following operation performed by the second function: transmitting, to the fourth function, a request that the fourth function transmits the AI/ML model to the third function. In this way, the second function can transmit the (re) trained AI/ML model to the third function for future use, while the retrained/updated AI/ML model can provide more accurate inference results than the currently used AI/ML model at the third function. Therefore, the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model.
In some example embodiments, the at least one operation comprises the following operations performed by the third function: transmitting the inference results to the second function. In this way, the second function can determine whether the performance level of the AI/ML model is below a threshold level based on the inference results received from the third function. If so, the second function can request the first function to retrain the AI/ML model accordingly. In this sense, the third function can help the second function to facilitate the first function to provide a more accurate retrained/updated AI/ML model based on the inference results of the current AI/ML model. The retrained/updated AI/ML model can, in turn, provide more accurate inference results as compared with the currently used AI/ML model at the third function, thus the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model.
In some example embodiments, the at least one operation comprises at least one of the following operations performed by the third function: receiving, from the second function, a switching indication to switch from the AI/ML model to another AI/ML model; receiving, from the second function, a fallback indication to apply a non-AI/ML model instead of the AI/ML model; receiving, from the second function, an activating indication to activate one or more of a plurality of candidate AI/ML models; or receiving, from the second function, a deactivating indication to deactivate one or more of the plurality of candidate AI/ML models. In this way, the third function can turn to use a desired AI/ML model indicated by the second function, improving the flexibility in management on the third function and further the whole AI/ML functional framework.
In some example embodiments, the at least one operation comprises the following operation performed by the third function: receiving the AI/ML model from the fourth function. In this way, the third function can use the retrained/updated AI/ML model to provide more accurate inference results as compared with the currently used AI/ML model at the third function, thus the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model.
In some example embodiments, the at least one function further comprises: a fifth function configured to collect non-sensing data. In this way, with the collected non-sensing data, the first function can obtain a more accurate AI/ML model, and the second function and third function can also work more accurately.
In some example embodiments, the at least one function further comprises: a sixth function configured to collect radio frequency (RF) sensing data; a seventh function configured to collect non-RF sensing data; and an eighth function configured to obtain fused data based on the RF sensing data and the non-RF sensing data. In this way, with sensing data, the first function can obtain a more accurate AI/ML model, AI/ML functionalities of the AI/ML functional framework can be enhanced by the sensing data, and the second function and third function can also work more accurately.
In some example embodiments, the RF sensing is one of: 3rd generation partnership project (3GPP) defined RF sensing, or non-3GPP defined RF sensing. In this way, sensing data can be collected through RF sensing, for example, either 3GPP defined RF sensing or non-3GPP defined RF sensing.
In some example embodiments, the seventh function is further configured to collect the non-RF sensing data using at least one of light detection and ranging (LIDAR) , non-3GPP defined RF sensing, wireless fidelity (WiFi) sensing, camera (s) , video (s) , or sensor (s) . In this way, the non-RF sensing data can be collected in various ways like LIDAR, non-3GPP defined RF sensing, WiFi sensing, camera (s) , video (s) , or sensor (s) . Therefore, it becomes easier and faster to obtain enough non-RF sensing data to be used by the first function, second function and third function.
In some example embodiments, the at least one operation comprises the following operation performed by the first function: receiving first input data from at least one of the fifth function, the sixth function, the seventh function or the eighth function. In this way, a (re) trained AI/ML model can be (re) trained with the first input data as the training data. Since the first input data is from at least one of the fifth function, the sixth function, the seventh function or the eighth function, which implies the first input data may include sensing data, AI/ML functionalities of the AI/ML functional framework can be enhanced by the sensing data. At the same time, with the large-quantity sensing data (including RF sensing data and/or non-RF sensing data, where the RF sensing data may include 3GPP defined RF sensing data and/or non-3GPP defined RF sensing data), the training process of the (re) trained AI/ML model can be shortened and the (re) trained AI/ML model can be more accurate.
In some example embodiments, the at least one operation comprises the following operation performed by the second function: receiving second input data from at least one of the fifth function, the sixth function, the seventh function or the eighth function. In this way, the second function can perform management of the AI/ML model based on the second input data. Since the second input data is from at least one of the fifth function, the sixth function, the seventh function or the eighth function, which implies the second input data may include sensing data, AI/ML functionalities of the AI/ML functional framework can be enhanced by the sensing data. At the same time, with the large-quantity sensing data (including RF sensing data and/or non-RF sensing data, where the RF sensing data may include 3GPP defined RF sensing data and/or non-3GPP defined RF sensing data) , the management of the AI/ML model can be more efficient and accurate.
In some example embodiments, the at least one operation comprises the following operation performed by the third function: receiving third input data from at least one of the fifth function, the sixth function, the seventh function or the eighth function. In this way, the third function can perform inference of the AI/ML model based on the third input data. Since the third input data is from at least one of the fifth function, the sixth function, the seventh function or the eighth function, which implies the third input data may include sensing data, AI/ML functionalities of the AI/ML functional framework can be enhanced by the sensing data.
In some example embodiments, the at least one operation comprises the following operation performed by the fifth function: transmitting the non-sensing data to at least one of the first function, the second function or the third function. In this way, the non-sensing data can be utilized by the first function to train the AI/ML model to obtain a more accurate AI/ML model. At the same time, the non-sensing data can help the second function to manage the AI/ML model more reliably and help the third function to perform inference of the AI/ML model more accurately and thus reliably.
In some example embodiments, the at least one operation comprises the following operation performed by the sixth function: transmitting the RF sensing data to at least one of the first function, the second function or the third function. In this way, the RF sensing data can be utilized by the first function to train the AI/ML model to obtain a more accurate AI/ML model. At the same time, the RF sensing data can help the second function to manage the AI/ML model more reliably and help the third function to perform inference of the AI/ML model more accurately and thus reliably. Meanwhile, in the sense of sensing for AI/ML, the RF sensing data can facilitate the first function, second function and third function to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework.
In some example embodiments, the at least one operation comprises the following operation performed by the seventh function: transmitting the non-RF sensing data to at least one of the first function, the second function or the third function. In this way, the non-RF sensing data can be utilized by the first function to train the AI/ML model to obtain a more accurate AI/ML model. At the same time, the non-RF sensing data can help the second function to manage the AI/ML model more reliably and help the third function to perform inference of the AI/ML model more accurately and thus reliably. Meanwhile, in the sense of sensing for AI/ML, the non-RF sensing data can facilitate the first function, second function and third function to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework.
In some example embodiments, the at least one operation comprises the following operations performed by the eighth function: receiving the RF sensing data from the sixth function, receiving the non-RF sensing data from the seventh function, and performing data processing on the received RF sensing data and non-RF sensing data to obtain the fused data. In this way, the fused data can be obtained which is more accurate than either one of the RF sensing data and the non-RF sensing data, and is less in quantity than the sum of the RF sensing data and the non-RF sensing data.
In some example embodiments, the at least one operation further comprises the following operation performed by the eighth function: transmitting the fused data to at least one of the first function, the second function or the third function. In this way, the fused data then can be utilized by the first function to train the AI/ML model to obtain a more accurate  AI/ML model. At the same time, the fused data can help the second function to manage the AI/ML model more reliably and help the third function to perform inference of the AI/ML model more accurately and thus reliably. Meanwhile, in the sense of sensing for AI/ML, the fused data can facilitate the first function, second function and third function to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework.
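One plausible realization of the fusion performed by the eighth function is timestamp alignment of RF and non-RF sensing records, as in the hypothetical Python sketch below; the alignment strategy, the skew bound, and all names are assumptions made only for illustration.

```python
# Hypothetical sketch of the eighth function: align RF sensing records
# (sixth function) with non-RF sensing records (seventh function, e.g.
# LIDAR or camera) by timestamp and merge them into fused samples.

def fuse_rf_nonrf(rf_records, nonrf_records, max_skew=0.05):
    """rf_records / nonrf_records: lists of (timestamp_s, value)."""
    fused = []
    for t_rf, v_rf in rf_records:
        # nearest non-RF record in time
        t_nr, v_nr = min(nonrf_records, key=lambda r: abs(r[0] - t_rf))
        if abs(t_nr - t_rf) <= max_skew:          # only fuse well-aligned pairs
            fused.append({"t": t_rf, "rf": v_rf, "non_rf": v_nr})
    return fused

rf = [(0.00, 1.1), (0.10, 1.3)]
nonrf = [(0.01, 5.0), (0.09, 5.2), (0.50, 9.9)]
print(fuse_rf_nonrf(rf, nonrf))   # fused data for the first, second or third function
```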
In some example embodiments, the at least one function further comprises: a ninth function configured to collect the sensing data; and a tenth function configured to obtain fused data based on the non-sensing data and the sensing data. In this way, the fused data can be obtained which is more accurate than either one of the non-sensing data and the sensing data, and is less in quantity than the sum of the non-sensing data and the sensing data.
In some example embodiments, the at least one function further comprises at least one of the following: an eleventh function configured to obtain a sensing model or a sensing result; a twelfth function configured to perform management of the sensing model or sensing result; or a thirteenth function configured to assist communication or determine an event based on the sensing model or sensing result. In this way, a sensing model can be obtained and used to assist communication or determine an event based on the sensing model.
In some example embodiments, the at least one function further comprises: a fourteenth function configured to store the sensing model or the sensing result. In this way, the sensing model can be stored in the fourteenth function which is separate from the fourth function, and the operations involving the storage and retrieval of the AI/ML model and the sensing model can be performed separately in a decoupled manner.
In some example embodiments, the at least one operation comprises at least one of the following operations performed by the first function: receiving first input data from at least one of the fifth function, the ninth function or the tenth function. In this way, a (re) trained AI/ML model can be (re) trained with the first input data as the training data. Since the first input data is from at least one of the fifth function, the ninth function or the tenth function, which implies the first input data may include non-sensing data and sensing data, AI/ML functionalities of the AI/ML functional framework can be enhanced by the non-sensing data and the sensing data. At the same time, with the large-quantity sensing data, the training process of the (re) trained AI/ML model can be shortened and the (re) trained AI/ML model can be more accurate.
In some example embodiments, the at least one operation comprises the following operation performed by the second function: receiving second input data from at least one of the fifth function, the ninth function or the tenth function. In this way, the second function can perform management of the AI/ML model based on the second input data. Since the second input data is from at least one of the fifth function, the ninth function or the tenth function, which implies the second input data may include non-sensing data and sensing data, AI/ML functionalities of the AI/ML functional framework can be enhanced by the non-sensing data and the sensing data. At the same time, with the large-quantity sensing data, the management of the AI/ML model can be more efficient and accurate.
In some example embodiments, the at least one operation comprises the following operation performed by the third function: receiving third input data from at least one of the fifth function, the ninth function or the tenth function. In this way, the third function can perform inference of the AI/ML model based on the third input data. Since the third input data is from at least one of the fifth function, the ninth function or the tenth function, which implies the third input data may include non-sensing data and sensing data, where the non-sensing data can be utilized by the third function to perform inference of the AI/ML model more accurately and reliably.
In some example embodiments, the at least one operation comprises the following operation performed by the fifth function: transmitting the non-sensing data to at least one of the first function, the second function or the third function, and at least one of the eleventh function, the twelfth function or the thirteenth function. In this way, the non-sensing data can be utilized by the first function to train the AI/ML model to obtain a more accurate AI/ML model. At the same time, the non-sensing data can help the second function to manage the AI/ML model more reliably and help the third function to perform  inference of the AI/ML model more accurately and thus reliably. Further, the non-sensing data can be utilized by the eleventh function to train the sensing model to obtain a more accurate sensing model. At the same time, the non-sensing data can help the twelfth function to manage the sensing model more reliably and help the thirteenth function to perform inference of the sensing model more accurately and thus reliably.
In some example embodiments, the at least one operation comprises the following operation performed by the ninth function: transmitting the sensing data to at least one of the first function, the second function or the third function, and at least one of the eleventh function, the twelfth function or the thirteenth function. In this way, the sensing data can be utilized by the first function to train the AI/ML model to obtain a more accurate AI/ML model. At the same time, the sensing data can help the second function to manage the AI/ML model more reliably and help the third function to perform inference of the AI/ML model more accurately and thus reliably. Meanwhile, in the sense of sensing for AI/ML, the sensing data can facilitate the first function, second function and third function to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework. Further, the sensing data can be utilized by the eleventh function to train the sensing model to obtain a more accurate sensing model. At the same time, the sensing data can help the twelfth function to manage the sensing model more reliably and help the thirteenth function to perform inference of the sensing model more accurately and thus reliably.
In some example embodiments, the at least one operation comprises the following operations performed by the tenth function: receiving the non-sensing data from the fifth function, receiving the sensing data from the ninth function, and performing data processing on the received non-sensing data and sensing data to obtain the fused data. In this way, the fused data can be obtained which is more accurate than either one of the non-sensing data and the sensing data, and is less in quantity than the sum of the non-sensing data and the sensing data.
In some example embodiments, the at least one operation further comprises the following operation performed by the tenth function: transmitting the fused data to at least one of the first function, the second function or the third function, and at least one of the eleventh function, the twelfth function or the thirteenth function. In this way, the fused data then can be utilized by the first function to train the AI/ML model to obtain a more accurate AI/ML model. At the same time, the fused data can help the second function to manage the AI/ML model more reliably and help the third function to perform inference of the AI/ML model more accurately and thus reliably. Meanwhile, in the sense of sensing for AI/ML, the fused data can facilitate the first function, second function and third function to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework. Further, the fused data then can be utilized by the eleventh function to train the sensing model to obtain a more accurate sensing model. At the same time, the fused data can help the twelfth function to manage the sensing model more reliably and help the thirteenth function to perform inference of the sensing model more accurately and thus reliably.
In some example embodiments, the eleventh function is further configured to at least one of the following: perform data processing based on fourth input data obtained from at least two of the fifth function, the ninth function or the tenth function. In this way, based on the fourth input data as the training data for the sensing model, the eleventh function can train the sensing model more accurately.
In some example embodiments, the model training of the at least one of sensing model, sensing sub-model, sensing functionality or sensing sub-functionality comprises at least one of the following: environment reconstruction, channel reconstruction, target reconstruction, digital twin, or object detection. In this way, the sensing model can be trained more accurately.
In some example embodiments, the twelfth function is further configured to at least one of the following: perform control of the model training of the at least one of sensing model, sensing sub-model, sensing functionality or sensing sub-functionality; perform control of the inference of the sensing model; or monitor output of the sensing model. In this way, the  twelfth function can facilitate the eleventh function to provide a more accurate sensing model, which can produce more accurate sensing inference results, thus the reliability of the sensing model can be improved.
In some example embodiments, the thirteenth function is further configured to at least one of the following: perform data preparation based on sixth input data obtained from at least one of the fifth function, the ninth function or the tenth function. In this way, data used in processing by the thirteenth function can be more organized as compared with the case where the sixth input data is used in the processing without data preparation, thus the processing by the thirteenth function can be more accurate with a higher speed.
In some example embodiments, the at least one operation comprises at least one of the following operations performed by the eleventh function: receiving the fourth input data from at least one of the fifth function, the ninth function or the tenth function; receiving, from the twelfth function, a performance level of the sensing model and a request to retrain the sensing model; receiving the sensing inference results from the thirteenth function, receiving sensing information from the twelfth function, or transmitting the trained or retrained sensing model to the fourteenth function. In this way, the eleventh function can provide a more accurate (re) trained sensing model based on the fourth input data and/or the performance level of the sensing model and/or the sensing information and/or the sensing inference results. The (re) trained sensing model can, in turn, provide more accurate inference results, thus the reliability of the (re) trained sensing model can be improved.
In some example embodiments, the at least one operation further comprises the following operation performed by the eleventh function: receiving the inference results from the third function. In this way, in the sense of AI/ML for sensing, the inference results of the AI/ML model can help the eleventh function to improve the accuracy and performance of the (re) trained AI/ML model and further the AI/ML functional framework.
In some example embodiments, the at least one operation comprises the following operations performed by the twelfth function: receiving fifth input data from at least one of the fifth function, the ninth function or the tenth function; and receiving the sensing inference results from the thirteenth function. In this way, the twelfth function can facilitate the eleventh function to provide a more accurate sensing model, which in turn can provide more accurate sensing inference results, thus the reliability of the sensing model can be improved.
In some example embodiments, the at least one operation further comprises the following operations performed by the twelfth function: determining that a performance level of the sensing model is below a threshold level based on the sensing inference results received from the thirteenth function; and based on determining that the performance level is below the threshold level, transmitting, to the eleventh function, the performance level of the sensing model and a request to retrain the sensing model. In this way, the twelfth function can request the eleventh function to retrain the sensing model in response to the performance level of the currently used sensing model becoming below a threshold level. In this sense, the twelfth function can facilitate the eleventh function to provide a more accurate retrained/updated sensing model based on the sensing inference results of the current sensing model. The retrained/updated sensing model can, in turn, provide more accurate sensing inference results, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
In some example embodiments, the at least one operation comprises at least one of the following operations performed by the twelfth function: transmitting sensing information to the eleventh function, transmitting, to the thirteenth function, a switching indication to switch from the sensing model to another sensing model; transmitting, to the thirteenth function, a fallback indication to apply a non-sensing model instead of the sensing model; transmitting, to the thirteenth function, an activating indication to activate one or more of a plurality of candidate sensing models; or transmitting, to the thirteenth function, a deactivating indication to deactivate one or more of the plurality of candidate sensing models. In this way, the twelfth function can provide the sensing information to the eleventh function to obtain a more accurate (re) trained sensing model based on the sensing information. The retrained/updated sensing model can, in turn, provide more accurate  sensing inference results, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model. Also, the twelfth function can change/switch/ (de) select a desired sensing model for future use, improving the flexibility in management on the thirteenth function and further the whole AI/ML functional framework.
In some example embodiments, the at least one operation comprises the following operation performed by the twelfth function: transmitting, to the fourteenth function, a request that the fourteenth function transmits the sensing model to the thirteenth function. In this way, the twelfth function can request the fourteenth function to transmit the (re) trained sensing model to the thirteenth function for future use, while the retrained/updated sensing model can provide more accurate sensing inference results than the currently used sensing model at the thirteenth function. Therefore, the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
In some example embodiments, the at least one operation comprises the following operation performed by the twelfth function: receiving the inference results from the third function. In this way, in the sense of AI/ML for sensing, the inference results can facilitate the twelfth function to improve sensing functionalities of the sensing model and further the AI/ML functional framework.
In some example embodiments, the at least one operation comprises the following operations performed by the thirteenth function: receiving sixth input data from at least one of the fifth function, the ninth function or the tenth function; and transmitting the sensing inference results to the twelfth function. In this way, with the sixth input data, the thirteenth function can determine the sensing inference results, and send the sensing inference results to the twelfth function. With the sensing inference results, the twelfth function can determine whether the performance level of the sensing model is below a threshold level based on the sensing inference results received from the thirteenth function. If so, the twelfth function can request the eleventh function to retrain the sensing model accordingly. In this sense, the thirteenth function can help the twelfth function to facilitate the eleventh function to provide a more accurate retrained/updated sensing model based on the sensing inference results. The retrained/updated sensing model can, in turn, provide more accurate sensing inference results as compared with the currently used sensing model at the thirteenth function, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
In some example embodiments, the at least one operation further comprises at least one of the following operations performed by the thirteenth function: transmitting the sensing inference results to at least one of the first function, the second function or the third function; or receiving the sensing model from the fourteenth function. In this way, in the sense of sensing for AI/ML, the sensing inference results can facilitate the first function, the second function or the third function to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework.
In some example embodiments, the at least one operation comprises at least one of the following operations performed by the thirteenth function: receiving, from the twelfth function, a switching indication to switch from the sensing model to another sensing model; receiving, from the twelfth function, a fallback indication to apply a non-sensing model instead of the sensing model; receiving, from the twelfth function, an activating indication to activate one or more of a plurality of candidate sensing models; or receiving, from the twelfth function, a deactivating indication to deactivate one or more of the plurality of candidate sensing models. In this way, the thirteenth function can switch to a desired sensing model as indicated by the twelfth function, improving the flexibility in managing the thirteenth function and further the whole AI/ML functional framework.
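The four indication types above could be modeled as follows; this is a sketch under assumed names (Indication, the engine methods), not a specified message format.

```python
from enum import Enum, auto

class Indication(Enum):
    SWITCH = auto()      # switch from the current sensing model to another
    FALLBACK = auto()    # apply a non-sensing model instead
    ACTIVATE = auto()    # activate one or more candidate sensing models
    DEACTIVATE = auto()  # deactivate one or more candidate sensing models

def handle_indication(kind, engine, payload):
    """Thirteenth-function handler; the 'engine' methods are hypothetical."""
    if kind is Indication.SWITCH:
        engine.load_model(payload["target_model_id"])
    elif kind is Indication.FALLBACK:
        engine.use_non_sensing_fallback()
    elif kind is Indication.ACTIVATE:
        for model_id in payload["model_ids"]:
            engine.activate(model_id)
    elif kind is Indication.DEACTIVATE:
        for model_id in payload["model_ids"]:
            engine.deactivate(model_id)
```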
In some example embodiments, the at least one operation comprises at least one of the following operations performed by the fourteenth function: receiving the trained sensing model from the eleventh function; or based on receiving, from the twelfth function, a request that the fourteenth function transmits the sensing model to the thirteenth function, transmitting the sensing model to the thirteenth function. In this way, the fourteenth function can provide the sensing model to the thirteenth function, such that the thirteenth function can use the (re) trained sensing model to provide more accurate sensing inference results as compared with the currently used sensing model at the thirteenth function; thus the reliability of the (re) trained sensing model can be improved as compared with the currently used sensing model.
In some example embodiments, the request comprises at least one of the following: a model ID of the requested sensing model, a sensing functionality ID for the requested sensing functionality, or a sensing performance requirement indicating the requested sensing performance. In this way, a sensing model desired by the twelfth function to be used at the thirteenth function can be requested using various parameters, improving the flexibility and usability of the AI/ML functional framework.
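A sketch of how such a request might be represented is given below; the field names are hypothetical, and any subset may be populated.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensingModelRequest:
    model_id: Optional[str] = None             # model ID of the requested sensing model
    functionality_id: Optional[str] = None     # ID of the requested sensing functionality
    performance_requirement: Optional[float] = None  # e.g., minimum required accuracy

    def is_valid(self):
        """At least one selector should be present, per the embodiment above."""
        return any(field is not None for field in
                   (self.model_id, self.functionality_id, self.performance_requirement))
```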
In some example embodiments, the at least one function further comprises: a fifteenth function configured to perform sensing inference to obtain a sensing result, wherein the first function is further configured to perform model training of at least one of a sensing model, a sensing sub-model, a sensing functionality or a sensing sub-functionality, and the second function is further configured to perform management of the sensing model. In this way, the first function can not only train an AI/ML model but also train a sensing model, and the second function can monitor not only the AI/ML model but also the sensing model. Meanwhile, the fifteenth function, which is in charge of sensing inference of the sensing model, is separate from the third function, which is in charge of model inference of the AI/ML model.
In some example embodiments, the at least one function further comprises: a sixteenth function configured to obtain fused data. The fused data may be obtained by processing on non-sensing data and sensing data. In this way, the fused data, which is less in quantity than the sum of the non-sensing data and the sensing data, can be used in future processing to improve data accuracy and decrease data processing volume.
In some example embodiments, the first function is further configured to perform data preparation based on seventh input data obtained from the sixteenth function. In this way, data used in processing by the first function can be more organized as compared with the case where the seventh input data is used in the processing without data preparation, thus the processing by the first function can be more accurate and faster.
In some example embodiments, the second function is further configured to perform at least one of the following: control of the model training of the at least one of the sensing model, sensing sub-model, sensing functionality or sensing sub-functionality; control of the sensing inference of the sensing model; or monitoring of output of the sensing model. In this way, the second function, which performs management of the AI/ML model, can also perform management of the sensing model (including model training and inference of the sensing model) .
In some example embodiments, the at least one operation comprises at least one of the following operations performed by the first function: receiving the seventh input data from the sixteenth function; receiving, from the second function, a performance level of the sensing model and a request to retrain the sensing model; receiving sensing information from the second function; or transmitting the trained or retrained sensing model to the fourth function. In this way, the first function can provide a more accurate (re) trained sensing model based on the seventh input data and/or the performance level of the sensing model and/or the sensing information. The (re) trained sensing model can, in turn, provide more accurate inference results, thus the reliability of the (re) trained sensing model can be improved.
In some example embodiments, the at least one operation comprises the following operations performed by the second function: receiving eighth input data from the sixteenth function; and receiving the sensing inference results from the fifteenth function. In this way, the second function can facilitate the first function to provide a more accurate retrained/updated AI/ML model and/or sensing model based on the eighth input data and/or the sensing inference results of the current sensing model. In the sense of sensing for AI/ML, the sensing inference results can facilitate the second function to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework. More specifically, the retrained/updated AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the AI/ML model can be improved. Also, the retrained/updated sensing model can, in turn, provide more accurate inference results, thus the reliability of the sensing model can be improved.
In some example embodiments, the at least one operation further comprises the following operations performed by the second function: determining that a performance level of the sensing model is below a threshold level based on the sensing inference results received from the fifteenth function; and based on determining that the performance level is below the threshold level, transmitting, to the first function, the performance level of the sensing model and a request to retrain the sensing model. In this way, the second function can request the first function to retrain the sensing model in response to the performance level of the currently used sensing model falling below the threshold level. In this sense, the second function can facilitate the first function to provide a more accurate retrained/updated sensing model based on the sensing inference results of the current sensing model. The retrained/updated sensing model can, in turn, provide more accurate inference results, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
In some example embodiments, the at least one operation comprises at least one of the following operations performed by the second function: transmitting sensing information to the first function; transmitting, to the fifteenth function, a switching indication to switch from the sensing model to another sensing model; transmitting, to the fifteenth function, a fallback indication to apply a non-sensing model instead of the sensing model; transmitting, to the fifteenth function, an activating indication to activate one or more of a plurality of candidate sensing models; or transmitting, to the fifteenth function, a deactivating indication to deactivate one or more of the plurality of candidate sensing models. In this way, the second function can provide the sensing information to the first function to obtain a more accurate (re) trained sensing model based on the sensing information. The retrained/updated sensing model can, in turn, provide more accurate sensing inference results, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model. Also, the second function can change/switch/ (de) select a desired sensing model for future use, improving the flexibility in managing the fifteenth function and further the whole AI/ML functional framework.
In some example embodiments, the at least one operation comprises the following operation performed by the second function: transmitting, to the fourth function, a request that the fourth function transmits the sensing model to the fifteenth function. In this way, the second function can request the fourth function to transmit the (re) trained sensing model to the fifteenth function for future use, while the retrained/updated sensing model can provide more accurate inference results than the currently used sensing model at the fifteenth function. Therefore, the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
In some example embodiments, the at least one operation comprises the following operation performed by the third function: receiving ninth input data from the sixteenth function. In this way, the third function can provide more accurate inference result (s) based on the ninth input data.
In some example embodiments, the at least one operation comprises at least one of the following operations performed by the third function: transmitting the inference results to the fifteenth function; or receiving the sensing result (or, sensing inference result) from the fifteenth function. In this way, on one hand, in the sense of AI/ML for sensing, the inference results can facilitate the fifteenth function to improve sensing functionalities of the sensing model. On the other hand, in the sense of sensing for AI/ML, the sensing result can facilitate the third function to improve inference results of the AI/ML model and further the AI/ML functional framework.
In some example embodiments, the at least one operation comprises the following operations performed by the fifteenth function: receiving tenth input data from the sixteenth function; and receiving the sensing model from the fourth function. In this way, with the tenth input data and the sensing model, the fifteenth function can perform sensing inference and obtain the sensing result.
In some example embodiments, the at least one operation further comprises at least one of the following operations performed by the fifteenth function: receiving the inference results from the second function; or transmitting the sensing results to the second function. In this way, on one hand, in the sense of AI/ML for sensing, the inference results can facilitate the fifteenth function to improve sensing functionalities of the sensing model. On the other hand, in the sense of sensing for AI/ML, the sensing results can facilitate the second function to improve management of the AI/ML model and further the AI/ML functional framework.
In some example embodiments, the at least one operation comprises at least one of the following operations performed by the fifteenth function: receiving, from the second function, a switching indication to switch from the sensing model to another sensing model; receiving, from the second function, a fallback indication to apply a non-sensing model instead of the sensing model; receiving, from the second function, an activating indication to activate one or more of a plurality of candidate sensing models; or receiving, from the second function, a deactivating indication to deactivate one or more of the plurality of candidate sensing models. In this way, the fifteenth function can change/switch to a desired sensing model as indicated by the second function for future use, improving the flexibility in managing the sensing model and further the whole AI/ML functional framework.
In some example embodiments, the at least one function further comprises: a seventeenth function configured to collect non-sensing data, and an eighteenth function configured to collect sensing data. In this way, both non-sensing data and sensing data can be utilized in the AI/ML functional framework, thus accuracy and performance of the AI/ML model and the sensing model can be improved.
In some example embodiments, the at least one operation comprises the following operations performed by the sixteenth function: receiving the non-sensing data from the seventeenth function; receiving the sensing data from the eighteenth function; and performing data processing on the received non-sensing data and sensing data to obtain the fused data. In this way, the fused data can be obtained by processing the non-sensing data from the seventeenth function and the sensing data from the eighteenth function; the fused data, which is less in quantity than the sum of the non-sensing data and the sensing data, can then be used in future processing to improve data accuracy and decrease data processing volume.
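As a sketch only, one simple fusion strategy consistent with the description above is to align the two data sources on a shared key and merge matching records; the timestamp key and dictionary-style records are assumptions for illustration.

```python
def fuse(non_sensing_records, sensing_records):
    """Sixteenth-function sketch: join both sources on timestamp and merge fields.
    Unmatched records are dropped, so the fused set is smaller than the sum."""
    sensing_by_ts = {rec["timestamp"]: rec for rec in sensing_records}
    fused = []
    for rec in non_sensing_records:
        match = sensing_by_ts.get(rec["timestamp"])
        if match is not None:
            fused.append({**rec, **match})  # merged (fused) record
    return fused
```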
In some example embodiments, the at least one operation further comprises the following operation performed by the sixteenth function: transmitting the fused data to at least one of the first function, the second function, the third function or the fifteenth function. In this way, the fused data then can be utilized by the first function to train the AI/ML model and/or the sensing model to obtain a more accurate AI/ML model and/or sensing model. At the same time, the fused data can help the second function to manage the AI/ML model and/or the sensing model more reliably, help the third function to perform inference of the AI/ML model more accurately and thus reliably, and help the fifteenth function to perform inference of the sensing model more accurately and thus reliably.
In some example embodiments, the at least one function further comprises at least two of: a nineteenth function configured to provide ground-truth sensing data, a twentieth function configured to provide non-ground-truth sensing data, or a twenty-first function configured to provide non-sensing ground-truth data. In this way, ground-truth sensing data, non-ground-truth sensing data and non-sensing ground-truth data can be utilized in the AI/ML functional framework, thus the accuracy and performance of the AI/ML model and the sensing model can be improved, and the AI/ML model and the sensing model can be used more flexibly.
In some example embodiments, the at least one operation comprises the following operations performed by the sixteenth function: receiving at least two of: the ground-truth sensing data from the nineteenth function, the non-ground-truth sensing data from the twentieth function, or the non-sensing ground-truth data from the twenty-first function; and performing data processing on the received data to obtain the fused data. In this way, the fused data can then be utilized by the first function to train the AI/ML model and/or the sensing model to obtain a more accurate AI/ML model and/or sensing model. At the same time, the fused data can help the second function to manage the AI/ML model and/or the sensing model more reliably, help the third function to perform inference of the AI/ML model more accurately and thus reliably, and help the fifteenth function to perform sensing inference of the sensing model more accurately and thus reliably. In the sense of sensing for AI/ML, the fused data can facilitate the first function, second function and third function to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework.
In some example embodiments, the at least one operation further comprises the following operation performed by the sixteenth function: transmitting the fused data to at least one of the first function, the second function, the third function or the fifteenth function. In this way, the first function can utilize the fused data to train the AI/ML model and/or the sensing model to obtain a more accurate AI/ML model and/or sensing model. At the same time, the second function can utilize the fused data to manage the AI/ML model and/or the sensing model more reliably. The third function can utilize the fused data to perform inference of the AI/ML model more accurately and thus reliably. The fifteenth function can utilize the fused data to perform sensing inference of the sensing model more accurately and thus reliably. In the sense of sensing for AI/ML, the first function, second function and third function can utilize the fused data to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework.
In some example embodiments, the data processing comprises at least one of the following: data pre-processing, data cleaning, data formatting, data transformation, or data integration. In this way, data obtained through data processing can be more organized as compared with the case where data is used without data processing, thus future processing on the data can be more accurate and efficient.
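The stages listed above could be chained as a simple pipeline, sketched below with placeholder stage implementations; each real stage is deployment-specific, and the record format is an assumption.

```python
def pre_process(data):    return [d for d in data if d is not None]          # drop empty records
def clean(data):          return [d for d in data if d.get("valid", True)]   # discard flagged records
def format_records(data): return [{k.lower(): v for k, v in d.items()} for d in data]
def transform(data):      return data   # e.g., unit conversion or normalization
def integrate(data):      return data   # e.g., merging with other data sources

def process(data):
    """Apply the data-processing stages in order."""
    for stage in (pre_process, clean, format_records, transform, integrate):
        data = stage(data)
    return data
```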
In some example embodiments, at least one of the first function, the second function, the third function, the fourth function, the fifth function, the sixth function, the seventh function, the eighth function, the ninth function, the tenth function, the eleventh function, the twelfth function, the thirteenth function, the fourteenth function, the fifteenth function, the sixteenth function, the seventeenth function, the eighteenth function, the nineteenth function, the twentieth function or the twenty-first function is implemented in one of the following: a terminal device, an access network device, a core network device, or a third party device. In this way, each function may be implemented in one of the terminal device, access network device, core network device or third party device in a “distributed” manner, improving the flexibility of implementation and enabling dynamic implementation with various modules where each module may, by itself or in combination with other module (s) , implement one or more functions as described here.
In this way, according to the first aspect and its example embodiments, an AI/ML functional framework for integrated AI and sensing can be defined for high-accuracy purposes to facilitate communication.
In a second aspect, there is provided an apparatus. The apparatus comprises: a transceiver; and a processor communicatively coupled with the transceiver, wherein the processor is configured to perform at least one operation based on an artificial intelligence/machine learning (AI/ML) functional framework, wherein the AI/ML functional framework comprises: a first function configured to perform model training of at least one of an AI/ML model, an AI/ML sub-model, an AI/ML functionality or an AI/ML sub-functionality; a second function configured to perform management of the AI/ML model; a third function configured to perform inference of the AI/ML model; a fourth function configured to store the AI/ML model; and at least one function configured to operate based on sensing data. In this way, an apparatus capable of implementing integrated AI and sensing can be obtained for high-accuracy purposes to facilitate communication.
In a third aspect, there is provided a non-transitory computer-readable storage medium comprising a computer program stored thereon. The computer program, when executed on at least one processor, causes the at least one processor to perform the method of the first aspect. In this way, a non-transitory computer-readable storage medium comprising a computer program can be provided to implement integrated AI and sensing for high-accuracy purposes to facilitate communication.
In a fourth aspect, there is provided a chip comprising at least one processing circuit configured to perform the method of the first aspect. In this way, a chip can be provided to implement integrated AI and sensing for high-accuracy purposes to facilitate communication.
In a fifth aspect, there is provided a computer program product tangibly stored on a computer-readable medium and comprising computer-executable instructions which, when executed, cause an apparatus to perform the method of the first aspect. In this way, the computer program product can be provided to implement integrated AI and sensing for high-accuracy purposes to facilitate communication.
It is to be understood that the summary section is not intended to identify key or essential features of embodiments of the present disclosure, nor is it intended to be used to limit the scope of the present disclosure. Other features of the present disclosure will become easily comprehensible through the following description.
BRIEF DESCRIPTION OF THE DRAWINGS
Some example embodiments will now be described with reference to the accompanying drawings, in which:
FIG. 1A illustrates an example of a network environment in which some example embodiments of the present disclosure may be implemented;
FIG. 1B illustrates an example communication system in which some example embodiments of the present disclosure may be implemented;
FIG. 1C illustrates an example of an electronic device and a base station in accordance with some example embodiments of the present disclosure;
FIG. 1D illustrates units or modules in a device in accordance with some example embodiments of the present disclosure;
FIG. 1E illustrates an example sensing system in accordance with some example embodiments of the present disclosure;
FIG. 1F illustrates an example apparatus that may implement the methods and teachings in accordance with some example embodiments of the present disclosure;
FIG. 1G illustrates a schematic diagram of an example model in accordance with some example embodiments of the present disclosure;
FIG. 2 illustrates a flowchart illustrating an example communication process in accordance with some example embodiments of the present disclosure;
FIG. 3 illustrates a schematic diagram of an example AI/ML functional framework in accordance with some embodiments of the present disclosure;
FIG. 4 illustrates a schematic diagram of an example AI/ML functional framework and the flowchart of operations in the AI/ML functional framework in accordance with some embodiments of the present disclosure;
FIG. 5 illustrates a schematic diagram of another example AI/ML functional framework and the flowchart of operations in the AI/ML functional framework in accordance with some embodiments of the present disclosure;
FIG. 6 illustrates a schematic diagram of a third example AI/ML functional framework and the flowchart of operations in the AI/ML functional framework in accordance with some embodiments of the present disclosure;
FIG. 7 illustrates a schematic diagram of a fourth example AI/ML functional framework and the flowchart of operations in the AI/ML functional framework in accordance with some embodiments of the present disclosure;
FIG. 8 illustrates a block diagram of an electronic device that may be used for implementing devices and methods in accordance with some embodiments of the present disclosure; and
FIG. 9 illustrates a schematic diagram of a structure of an apparatus in accordance with some embodiments of the present disclosure.
Throughout the drawings, the same or similar reference numerals represent the same or similar elements.
DETAILED DESCRIPTION
Principles of the present disclosure will now be described with reference to some example embodiments. It is to be understood that these embodiments are described for the purpose of illustration and help those skilled in the art to understand and implement the present disclosure, without suggesting any limitation as to the scope of the disclosure. Embodiments of the disclosure described herein can be implemented in various manners other than the ones described below.
In the following description and claims, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skills in the art to which this disclosure belongs.
References in the present disclosure to “one embodiment, ” “an embodiment, ” “an example embodiment, ” and the like indicate that the embodiment described may include a particular feature, structure, or characteristic, but it is not necessary that every embodiment includes the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
It shall be understood that although the terms “first” and “second” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are used to distinguish one element from another. For example, a first element could be termed a second element, and similarly, a second element could be termed a first element, without departing from the scope of example embodiments. As used herein, the term “and/or” includes any and all combinations of one or more of the listed terms.
The terminology used herein is for the purpose of describing particular embodiments and is not intended to be limiting of example embodiments. As used herein, the singular forms “a” , “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” , “comprising” , “has” , “having” , “includes” and/or “including” , when used herein, specify the presence of stated features, elements, and/or components etc., but do not preclude the presence or addition of one or more other features, elements, components and/or combinations thereof.
As used herein, the term “communication network” refers to a network following any suitable communication standards, such as Long Term Evolution (LTE) , LTE-Advanced (LTE-A) , Wideband Code Division Multiple Access (WCDMA) , High-Speed Packet Access (HSPA) , Narrow Band Internet of Things (NB-IoT) , Wireless Fidelity (WiFi) and so on. Furthermore, the communications between a terminal device and a network device in the communication network may be performed according to any suitable generation communication protocols, including, but not limited to, the fourth generation (4G) , 4.5G, the future fifth generation (5G) , IEEE 802.11 communication protocols, and/or any other protocols either currently known or to be developed in the future. Embodiments of the present disclosure may be applied in various communication systems. Given the rapid development in communications, there will of course also be future-type communication technologies and systems with which the present disclosure may be embodied. The scope of the present disclosure should not be seen as limited to only the aforementioned systems.
As used herein, the term “network device” refers to a node in a communication network via which a terminal device accesses the network and receives services therefrom. The network device may refer to a base station (BS) or an access point (AP) , for example, a node B (NodeB or NB) , an evolved NodeB (eNodeB or eNB) , a NR NB (also referred to as a gNB) , a Remote Radio Unit (RRU) , a radio header (RH) , a remote radio head (RRH) , a WiFi device, a relay, a low power  node such as a femto, a pico, and so forth, depending on the applied terminology and technology. In the following description, the terms “network device” , “AP device” , “AP” and “access point” may be used interchangeably.
The term “terminal device” refers to any end device that may be capable of wireless communication. By way of example rather than limitation, a terminal device may also be referred to as a communication device, user equipment (UE) , a Subscriber Station (SS) , a Portable Subscriber Station, a Mobile Station (MS) , a station (STA) or station device, or an Access Terminal (AT) . The terminal device may include, but is not limited to, a mobile phone, a cellular phone, a smart phone, voice over IP (VoIP) phones, wireless local loop phones, a tablet, a wearable terminal device, a personal digital assistant (PDA) , portable computers, desktop computer, image capture terminal devices such as digital cameras, gaming terminal devices, music storage and playback appliances, vehicle-mounted wireless terminal devices, wireless endpoints, mobile stations, laptop-embedded equipment (LEE) , laptop-mounted equipment (LME) , USB dongles, smart devices, wireless customer-premises equipment (CPE) , an Internet of Things (IoT) device, a watch or other wearable, a VR (virtual reality) device, an XR (eXtended reality) device, a head-mounted display (HMD) , a vehicle, a drone, a medical device and applications (for example, remote surgery) , an industrial device and applications (for example, a robot and/or other wireless devices operating in an industrial and/or an automated processing chain contexts) , a consumer electronics device, a device operating on commercial and/or industrial wireless networks, and the like. In the following description, the terms “station” , “station device” , “STA” , “terminal device” , “communication device” , “terminal” , “user equipment” and “UE” may be used interchangeably.
Referring to FIG. 1A, as an illustrative example without limitation, a simplified schematic illustration of a communication system 100A is provided. The communication system 100A comprises a radio access network 120. The radio access network 120 may be a next generation (e.g. sixth generation (6G) or later) radio access network, or a legacy (e.g. 5G, 4G, 3G or 2G) radio access network. One or more communication user equipment (UE, also referred to as electronic device (ED) ) 110a -110j (generically referred to as 110) may be interconnected to one another or connected to one or more network nodes (170a, 170b, generically referred to as 170) in the radio access network 120. A core network 130 may be a part of the communication system 100A and may be dependent or independent of the radio access technology used in the communication system 100A. The communication system 100A also comprises a public switched telephone network (PSTN) 180, the internet 185, and other networks 160. The other networks 160 may include a multi-access edge computing (MEC) platform.
FIG. 1B illustrates an example communication system 100B. In general, the communication system 100B enables multiple wireless or wired elements to communicate data and other content. The purpose of the communication system 100B may be to provide content, such as voice, data, video, and/or text, via broadcast, multicast and unicast, etc. The communication system 100B may operate by sharing resources, such as carrier spectrum bandwidth, between its constituent elements. The communication system 100B may include a terrestrial communication system and/or a non-terrestrial communication system. The communication system 100B may provide a wide range of communication services and applications (such as earth monitoring, remote sensing, passive sensing and positioning, navigation and tracking, autonomous delivery and mobility, etc. ) . The communication system 100B may provide a high degree of availability and robustness through a joint operation of the terrestrial communication system and the non-terrestrial communication system. For example, integrating a non-terrestrial communication system (or components thereof) into a terrestrial communication system can result in what may be considered a heterogeneous network comprising multiple layers. Compared to conventional communication networks, the heterogeneous network may achieve better overall performance through efficient multi-link joint operation, more flexible functionality sharing, and faster physical layer link switching between terrestrial networks and non-terrestrial networks.
The terrestrial communication system and the non-terrestrial communication system could be considered sub-systems of the communication system 100B. In the example shown, the communication system 100B includes electronic devices (ED) 110a -110d (generically referred to as ED 110) , radio access networks (RANs) 120a -120b, non-terrestrial  communication network 120c, a core network 130, a public switched telephone network (PSTN) 180, the internet 185, and other networks 160. The RANs 120a-120b include respective base stations (BSs) 170a-170b, which may be generically referred to as terrestrial transmit and receive points (T-TRPs) 170a-170b. The non-terrestrial communication network 120c includes an access node, which may be generically referred to as a non-terrestrial transmit and receive point (NT-TRP) 172. As described above, the other networks 160 may include a multi-access edge computing (MEC) platform.
Any ED 110 may be alternatively or additionally configured to interface, access, or communicate with any other T-TRP 170a-170b and NT-TRP 172, the internet 185, the core network 130, the PSTN 180, the other networks 160, or any combination of the preceding. In some examples, ED 110a may communicate an uplink and/or downlink transmission over an interface 190a with T-TRP 170a. In some examples, the EDs 110a, 110b and 110d may also communicate directly with one another via one or more sidelink air interfaces 190b. In some examples, ED 110d may communicate an uplink and/or downlink transmission over an interface 190c with NT-TRP 172.
The air interfaces 190a and 190b may use similar communication technology, such as any suitable radio access technology. For example, the communication system 100B may implement one or more channel access methods, such as code division multiple access (CDMA) , time division multiple access (TDMA) , frequency division multiple access (FDMA) , orthogonal FDMA (OFDMA) , or single-carrier FDMA (SC-FDMA) in the air interfaces 190a and 190b. The air interfaces 190a and 190b may utilize other higher dimension signal spaces, which may involve a combination of orthogonal and/or non-orthogonal dimensions.
The air interface 190c can enable communication between the ED 110d and one or multiple NT-TRPs 172 via a wireless link or simply a link. For some examples, the link is a dedicated connection for unicast transmission, a connection for broadcast transmission, or a connection between a group of EDs and one or multiple NT-TRPs for multicast transmission.
The RANs 120a and 120b are in communication with the core network 130 to provide the EDs 110a, 110b, and 110c with various services such as voice, data, and other services. The RANs 120a and 120b and/or the core network 130 may be in direct or indirect communication with one or more other RANs (not shown) , which may or may not be directly served by core network 130, and may or may not employ the same radio access technology as RAN 120a, RAN 120b or both. The core network 130 may also serve as a gateway access between (i) the RANs 120a and 120b or EDs 110a, 110b, and 110c or both, and (ii) other networks (such as the PSTN 180, the internet 185, and the other networks 160) . In addition, some or all of the EDs 110a, 110b, and 110c may include functionality for communicating with different wireless networks over different wireless links using different wireless technologies and/or protocols. Instead of wireless communication (or in addition thereto) , the EDs 110a, 110b, and 110c may communicate via wired communication channels to a service provider or switch (not shown) , and to the internet 185. PSTN 180 may include circuit switched telephone networks for providing plain old telephone service (POTS) . Internet 185 may include a network of computers and subnets (intranets) or both, and incorporate protocols, such as Internet Protocol (IP) , Transmission Control Protocol (TCP) , User Datagram Protocol (UDP) . EDs 110a, 110b, and 110c may be multimode devices capable of operation according to multiple radio access technologies, and incorporate the multiple transceivers necessary to support such operation.
FIG. 1C illustrates another example of an ED 110 and a base station 170a, 170b and/or 170c. The ED 110 is used to connect persons, objects, machines, etc. The ED 110 may be widely used in various scenarios, for example, cellular communications, device-to-device (D2D) , vehicle to everything (V2X) , peer-to-peer (P2P) , machine-to-machine (M2M) , machine-type communications (MTC) , internet of things (IOT) , virtual reality (VR) , augmented reality (AR) , industrial control, self-driving, remote medical, smart grid, smart furniture, smart office, smart wearable, smart transportation, smart city, drones, robots, remote sensing, passive sensing, positioning, navigation and tracking, autonomous delivery and mobility, etc.
Each ED 110 represents any suitable end user device for wireless operation and may include such devices (or may be referred to) as a user equipment/device (UE) , a wireless transmit/receive unit (WTRU) , a mobile station, a fixed or mobile subscriber unit, a cellular telephone, a station (STA) , a machine type communication (MTC) device, a personal digital assistant (PDA) , a smartphone, a laptop, a computer, a tablet, a wireless sensor, a consumer electronics device, a smart book, a vehicle, a car, a truck, a bus, a train, or an IoT device, an industrial device, or apparatus (e.g. communication module, modem, or chip) in the foregoing devices, among other possibilities. Future generation EDs 110 may be referred to using other terms. The base stations 170a and 170b are T-TRPs and will hereafter be referred to as T-TRP 170. Also shown in FIG. 1C, an NT-TRP will hereafter be referred to as NT-TRP 172. Each ED 110 connected to T-TRP 170 and/or NT-TRP 172 can be dynamically or semi-statically turned-on (i.e., established, activated, or enabled) , turned-off (i.e., released, deactivated, or disabled) and/or configured in response to one or more of: connection availability and connection necessity.
The ED 110 includes a transmitter 201 and a receiver 203 coupled to one or more antennas 204. Only one antenna 204 is illustrated. One, some, or all of the antennas may alternatively be panels. The transmitter 201 and the receiver 203 may be integrated, e.g. as a transceiver. The transceiver is configured to modulate data or other content for transmission by at least one antenna 204 or network interface controller (NIC) . The transceiver is also configured to demodulate data or other content received by the at least one antenna 204. Each transceiver includes any suitable structure for generating signals for wireless or wired transmission and/or processing signals received wirelessly or by wire. Each antenna 204 includes any suitable structure for transmitting and/or receiving wireless or wired signals.
The ED 110 includes at least one memory 208. The memory 208 stores instructions and data used, generated, or collected by the ED 110. For example, the memory 208 could store software instructions or modules configured to implement some or all of the functionality and/or embodiments described herein and that are executed by the processing unit (s) 210. Each memory 208 includes any suitable volatile and/or non-volatile storage and retrieval device (s) . Any suitable type of memory may be used, such as random access memory (RAM) , read only memory (ROM) , hard disk, optical disc, subscriber identity module (SIM) card, memory stick, secure digital (SD) memory card, on-processor cache, and the like.
The ED 110 may further include one or more input/output devices (not shown) or interfaces (such as a wired interface to the internet 185 in FIG. 1A) . The input/output devices permit interaction with a user or other devices in the network. Each input/output device includes any suitable structure for providing information to or receiving information from a user, such as a speaker, microphone, keypad, keyboard, display, or touch screen, including network interface communications.
The ED 110 further includes a processor 210 for performing operations including those related to preparing a transmission for uplink transmission to the NT-TRP 172 and/or T-TRP 170, those related to processing downlink transmissions received from the NT-TRP 172 and/or T-TRP 170, and those related to processing sidelink transmission to and from another ED 110. Processing operations related to preparing a transmission for uplink transmission may include operations such as encoding, modulating, transmit beamforming, and generating symbols for transmission. Processing operations related to processing downlink transmissions may include operations such as receive beamforming, demodulating and decoding received symbols. Depending upon the embodiment, a downlink transmission may be received by the receiver 203, possibly using receive beamforming, and the processor 210 may extract signaling from the downlink transmission (e.g. by detecting and/or decoding the signaling) . An example of signaling may be a reference signal transmitted by NT-TRP 172 and/or T-TRP 170. In some embodiments, the processor 210 implements the transmit beamforming and/or receive beamforming based on the indication of beam direction, e.g. beam angle information (BAI) , received from T-TRP 170. In some embodiments, the processor 210 may perform operations relating to network access (e.g. initial access) and/or downlink synchronization, such as operations relating to detecting a synchronization sequence, decoding and obtaining the system information, etc. In some embodiments, the processor 210 may perform channel estimation, e.g. using a reference signal received from the NT-TRP 172 and/or T-TRP 170.
Although not illustrated, the processor 210 may form part of the transmitter 201 and/or receiver 203. Although not illustrated, the memory 208 may form part of the processor 210.
The processor 210, and the processing components of the transmitter 201 and receiver 203 may each be implemented by the same or different one or more processors that are configured to execute instructions stored in a memory (e.g. in memory 208) . Alternatively, some or all of the processor 210, and the processing components of the transmitter 201 and receiver 203 may be implemented using dedicated circuitry, such as a programmed field-programmable gate array (FPGA) , a graphical processing unit (GPU) , or an application-specific integrated circuit (ASIC) .
The T-TRP 170 may be known by other names in some implementations, such as a base station, a base transceiver station (BTS) , a radio base station, a network node, a network device, a device on the network side, a transmit/receive node, a Node B, an evolved NodeB (eNodeB or eNB) , a Home eNodeB, a next Generation NodeB (gNB) , a transmission point (TP) , a site controller, an access point (AP) , or a wireless router, a relay station, a remote radio head, a terrestrial node, a terrestrial network device, or a terrestrial base station, base band unit (BBU) , remote radio unit (RRU) , active antenna unit (AAU) , remote radio head (RRH) , central unit (CU) , distribute unit (DU) , positioning node, among other possibilities. The T-TRP 170 may be macro BSs, pico BSs, relay node, donor node, or the like, or combinations thereof. The T-TRP 170 may refer to the foregoing devices or apparatus (e.g. communication module, modem, or chip) in the foregoing devices.
In some embodiments, the parts of the T-TRP 170 may be distributed. For example, some of the modules of the T-TRP 170 may be located remote from the equipment housing the antennas of the T-TRP 170, and may be coupled to the equipment housing the antennas over a communication link (not shown) sometimes known as front haul, such as common public radio interface (CPRI) . Therefore, in some embodiments, the term T-TRP 170 may also refer to modules on the network side that perform processing operations, such as determining the location of the ED 110, resource allocation (scheduling) , message generation, and encoding/decoding, and that are not necessarily part of the equipment housing the antennas of the T-TRP 170. The modules may also be coupled to other T-TRPs. In some embodiments, the T-TRP 170 may actually be a plurality of T-TRPs that are operating together to serve the ED 110, e.g. through coordinated multipoint transmissions.
The T-TRP 170 includes at least one transmitter 252 and at least one receiver 254 coupled to one or more antennas 256. Only one antenna 256 is illustrated. One, some, or all of the antennas may alternatively be panels. The transmitter 252 and the receiver 254 may be integrated as a transceiver. The T-TRP 170 further includes a processor 260 for performing operations including those related to: preparing a transmission for downlink transmission to the ED 110, processing an uplink transmission received from the ED 110, preparing a transmission for backhaul transmission to NT-TRP 172, and processing a transmission received over backhaul from the NT-TRP 172. Processing operations related to preparing a transmission for downlink or backhaul transmission may include operations such as encoding, modulating, precoding (e.g. MIMO precoding) , transmit beamforming, and generating symbols for transmission. Processing operations related to processing received transmissions in the uplink or over backhaul may include operations such as receive beamforming, and demodulating and decoding received symbols. The processor 260 may also perform operations relating to network access (e.g. initial access) and/or downlink synchronization, such as generating the content of synchronization signal blocks (SSBs) , generating the system information, etc. In some embodiments, the processor 260 also generates the indication of beam direction, e.g. BAI, which may be scheduled for transmission by scheduler 253. The processor 260 performs other network-side processing operations described herein, such as determining the location of the ED 110, determining where to deploy NT-TRP 172, etc. In some embodiments, the processor 260 may generate signaling, e.g. to configure one or more parameters of the ED 110 and/or one or more parameters of the NT-TRP 172. Any signaling generated by the processor 260 is sent by the transmitter 252. Note that “signaling” , as used herein, may alternatively be called control signaling. Dynamic signaling may be transmitted in a control channel, e.g. a physical downlink control channel (PDCCH) , and static or semi-static higher layer signaling may be included in a packet transmitted in a data channel, e.g. in a physical downlink shared channel (PDSCH) .
A scheduler 253 may be coupled to the processor 260. The scheduler 253, which may be included within or operated separately from the T-TRP 170, may schedule uplink, downlink, and/or backhaul transmissions, including issuing scheduling grants and/or configuring scheduling-free ( “configured grant” ) resources. The T-TRP 170 further includes a memory 258 for storing information and data. The memory 258 stores instructions and data used, generated, or collected by the T-TRP 170. For example, the memory 258 could store software instructions or modules configured to implement some or all of the functionality and/or embodiments described herein and that are executed by the processor 260.
Although not illustrated, the processor 260 may form part of the transmitter 252 and/or receiver 254. Also, although not illustrated, the processor 260 may implement the scheduler 253. Although not illustrated, the memory 258 may form part of the processor 260.
The processor 260, the scheduler 253, and the processing components of the transmitter 252 and receiver 254 may each be implemented by the same or different one or more processors that are configured to execute instructions stored in a memory, e.g. in memory 258. Alternatively, some or all of the processor 260, the scheduler 253, and the processing components of the transmitter 252 and receiver 254 may be implemented using dedicated circuitry, such as a FPGA, a GPU, or an ASIC.
Although the NT-TRP 172 is illustrated as a drone only as an example, the NT-TRP 172 may be implemented in any suitable non-terrestrial form. Also, the NT-TRP 172 may be known by other names in some implementations, such as a non-terrestrial node, a non-terrestrial network device, or a non-terrestrial base station. The NT-TRP 172 includes a transmitter 272 and a receiver 274 coupled to one or more antennas 280. Only one antenna 280 is illustrated. One, some, or all of the antennas may alternatively be panels. The transmitter 272 and the receiver 274 may be integrated as a transceiver. The NT-TRP 172 further includes a processor 276 for performing operations including those related to: preparing a transmission for downlink transmission to the ED 110, processing an uplink transmission received from the ED 110, preparing a transmission for backhaul transmission to T-TRP 170, and processing a transmission received over backhaul from the T-TRP 170. Processing operations related to preparing a transmission for downlink or backhaul transmission may include operations such as encoding, modulating, precoding (e.g. MIMO precoding) , transmit beamforming, and generating symbols for transmission. Processing operations related to processing received transmissions in the uplink or over backhaul may include operations such as receive beamforming, and demodulating and decoding received symbols. In some embodiments, the processor 276 implements the transmit beamforming and/or receive beamforming based on beam direction information (e.g. BAI) received from T-TRP 170. In some embodiments, the processor 276 may generate signaling, e.g. to configure one or more parameters of the ED 110. In some embodiments, the NT-TRP 172 implements physical layer processing, but does not implement higher layer functions such as functions at the medium access control (MAC) or radio link control (RLC) layer. However, this is only an example; more generally, the NT-TRP 172 may implement higher layer functions in addition to physical layer processing.
The NT-TRP 172 further includes a memory 278 for storing information and data. Although not illustrated, the processor 276 may form part of the transmitter 272 and/or receiver 274. Although not illustrated, the memory 278 may form part of the processor 276.
The processor 276 and the processing components of the transmitter 272 and receiver 274 may each be implemented by the same or different one or more processors that are configured to execute instructions stored in a memory, e.g. in memory 278. Alternatively, some or all of the processor 276 and the processing components of the transmitter 272 and receiver 274 may be implemented using dedicated circuitry, such as a programmed FPGA, a GPU, or an ASIC. In some embodiments, the NT-TRP 172 may actually be a plurality of NT-TRPs that are operating together to serve the ED 110, e.g. through coordinated multipoint transmissions.
The T-TRP 170, the NT-TRP 172, and/or the ED 110 may include other components, but these have been omitted for the sake of clarity.
One or more steps of the embodiment methods provided herein may be performed by corresponding units or modules, according to FIG. 1D. FIG. 1D illustrates units or modules in a device, such as in ED 110, in T-TRP 170, or in NT-TRP 172. For example, a signal may be transmitted by a transmitting unit or a transmitting module. A signal may be received by a receiving unit or a receiving module. A signal may be processed by a processing unit or a processing module. Other steps may be performed by an artificial intelligence (AI) or machine learning (ML) module. The respective units or modules may be implemented using hardware, one or more components or devices that execute software, or a combination thereof. For instance, one or more of the units or modules may be an integrated circuit, such as a programmed FPGA, a GPU, or an ASIC. It will be appreciated that where the modules are implemented using software for execution by a processor for example, they may be retrieved by a processor, in whole or part as needed, individually or together for processing, in single or multiple instances, and that the modules themselves may include instructions for further deployment and instantiation.
Additional details regarding the EDs 110, T-TRP 170, and NT-TRP 172 are known to those of skill in the art. As such, these details are omitted here.
An air interface generally includes a number of components and associated parameters that collectively specify how a transmission is to be sent and/or received over a wireless communications link between two or more communicating devices. For example, an air interface may include one or more components defining the waveform (s) , frame structure (s) , multiple access scheme (s) , protocol (s) , coding scheme (s) and/or modulation scheme (s) for conveying information (e.g. data) over a wireless communications link. The wireless communications link may support a link between a radio access network and user equipment (e.g. a “Uu” link) , and/or the wireless communications link may support a link between device and device, such as between two user equipments (e.g. a “sidelink” ) , and/or the wireless communications link may support a link between a non-terrestrial (NT) -communication network and user equipment (UE) . The following are some examples of the above components:
A waveform component may specify a shape and form of a signal being transmitted. Waveform options may include orthogonal multiple access waveforms and non-orthogonal multiple access waveforms. Non-limiting examples of such waveform options include Orthogonal Frequency Division Multiplexing (OFDM) , Filtered OFDM (f-OFDM) , Time windowing OFDM, Filter Bank Multicarrier (FBMC) , Universal Filtered Multicarrier (UFMC) , Generalized Frequency Division Multiplexing (GFDM) , Wavelet Packet Modulation (WPM) , Faster Than Nyquist (FTN) Waveform, and low Peak to Average Power Ratio Waveform (low PAPR WF) .
A frame structure component may specify a configuration of a frame or group of frames. The frame structure component may indicate one or more of a time, frequency, pilot signature, code, or other parameter of the frame or group of frames. More details of frame structure will be discussed below.
A multiple access scheme component may specify multiple access technique options, including technologies defining how communicating devices share a common physical channel, such as: Time Division Multiple Access (TDMA) , Frequency Division Multiple Access (FDMA) , Code Division Multiple Access (CDMA) , Single Carrier Frequency Division Multiple Access (SC-FDMA) , Low Density Signature Multicarrier Code Division Multiple Access (LDS-MC-CDMA) , Non-Orthogonal Multiple Access (NOMA) , Pattern Division Multiple Access (PDMA) , Lattice Partition Multiple Access (LPMA) , Resource Spread Multiple Access (RSMA) , and Sparse Code Multiple Access (SCMA) . Furthermore, multiple access technique options may include: scheduled access vs. non-scheduled access, also known as grant-free access; non-orthogonal multiple access vs. orthogonal multiple access, e.g., via a dedicated channel resource (e.g., no sharing between multiple communicating devices) ; contention-based shared channel resources vs. non-contention-based shared channel resources, and cognitive radio-based access.
A hybrid automatic repeat request (HARQ) protocol component may specify how a transmission and/or a re-transmission is to be made. Non-limiting examples of transmission and/or re-transmission mechanism options include those that specify a scheduled data pipe size, a signaling mechanism for transmission and/or re-transmission, and a re-transmission mechanism.
A coding and modulation component may specify how information being transmitted may be encoded/decoded and modulated/demodulated for transmission/reception purposes. Coding may refer to methods of error detection and forward error correction. Non-limiting examples of coding options include turbo trellis codes, turbo product codes, fountain codes, low-density parity check codes, and polar codes. Modulation may refer, simply, to the constellation (including, for example, the modulation technique and order) , or more specifically to various types of advanced modulation methods such as hierarchical modulation and low PAPR modulation.
In some embodiments, the air interface may be a “one-size-fits-all” concept. For example, the components within the air interface cannot be changed or adapted once the air interface is defined. In some implementations, only limited parameters or modes of an air interface, such as a cyclic prefix (CP) length or a multiple input multiple output (MIMO) mode, can be configured. In some embodiments, an air interface design may provide a unified or flexible framework to support below 6 GHz and beyond 6 GHz frequency (e.g., mmWave) bands for both licensed and unlicensed access. As an example, flexibility of a configurable air interface provided by a scalable numerology and symbol duration may allow for transmission parameter optimization for different spectrum bands and for different services/devices. As another example, a unified air interface may be self-contained in a frequency domain, and a frequency domain self-contained design may support more flexible radio access network (RAN) slicing through channel resource sharing between different services in both frequency and time.
A frame structure is a feature of the wireless communication physical layer that defines a time domain signal transmission structure, e.g. to allow for timing reference and timing alignment of basic time domain transmission units. Wireless communication between communicating devices may occur on time-frequency resources governed by a frame structure. The frame structure may sometimes instead be called a radio frame structure.
Depending upon the frame structure and/or configuration of frames in the frame structure, frequency division duplex (FDD) and/or time-division duplex (TDD) and/or full duplex (FD) communication may be possible. FDD communication is when transmissions in different directions (e.g. uplink vs. downlink) occur in different frequency bands. TDD communication is when transmissions in different directions (e.g. uplink vs. downlink) occur over different time durations. FD communication is when transmission and reception occurs on the same time-frequency resource, i.e. a device can both transmit and receive on the same frequency resource concurrently in time.
One example of a frame structure is a frame structure in long-term evolution (LTE) having the following specifications: each frame is 10 ms in duration; each frame has 10 subframes, which are each 1 ms in duration; each subframe includes two slots, each of which is 0.5 ms in duration; each slot is for transmission of 7 OFDM symbols (assuming normal CP); each OFDM symbol has a symbol duration and a particular bandwidth (or partial bandwidth or bandwidth partition) related to the number of subcarriers and subcarrier spacing; the frame structure is based on OFDM waveform parameters such as subcarrier spacing and CP length (where the CP has a fixed length or limited length options); and the switching gap between uplink and downlink in TDD has to be an integer multiple of the OFDM symbol duration.
Another example of a frame structure is a frame structure in new radio (NR) having the following specifications: multiple subcarrier spacings are supported, each subcarrier spacing corresponding to a respective numerology; the frame structure depends on the numerology, but in any case the frame length is set at 10 ms, and consists of ten subframes of 1 ms each; a slot is defined as 14 OFDM symbols, and slot length depends upon the numerology. For example, the NR frame structure for normal CP 15 kHz subcarrier spacing (numerology μ = 0) and the NR frame structure for normal CP 30 kHz subcarrier spacing (numerology μ = 1) are different. For 15 kHz subcarrier spacing a slot length is 1 ms, and for 30 kHz subcarrier spacing a slot length is 0.5 ms. The NR frame structure may have more flexibility than the LTE frame structure.
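By way of a non-limiting illustration only, the relationship between subcarrier spacing, numerology and slot length described above can be sketched in a few lines of Python. The sketch assumes 14 OFDM symbols per slot and a 10 ms frame of ten 1 ms subframes, as stated above; the function name and the SCS-to-numerology mapping shown are illustrative assumptions, not specification text.

```python
# A minimal sketch, assuming a 10 ms frame of ten 1 ms subframes and
# 14 OFDM symbols per slot; the SCS-to-numerology mapping is illustrative.

def nr_slot_timing(scs_khz: int) -> dict:
    """Return numerology index, slot length (ms) and slots per frame."""
    mu = {15: 0, 30: 1, 60: 2, 120: 3, 240: 4}[scs_khz]  # numerology index
    slots_per_subframe = 2 ** mu        # a 1 ms subframe holds 2^mu slots
    slot_ms = 1.0 / slots_per_subframe  # the slot shortens as the SCS grows
    return {"mu": mu, "slot_ms": slot_ms,
            "slots_per_frame": 10 * slots_per_subframe}

print(nr_slot_timing(15))  # {'mu': 0, 'slot_ms': 1.0, 'slots_per_frame': 10}
print(nr_slot_timing(30))  # {'mu': 1, 'slot_ms': 0.5, 'slots_per_frame': 20}
```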
Another example of a frame structure is an example flexible frame structure, e.g. for use in a 6G network or later. In a flexible frame structure, a symbol block may be defined as the minimum duration of time that may be scheduled in the flexible frame structure. A symbol block may be a unit of transmission having an optional redundancy portion (e.g. CP  portion) and an information (e.g. data) portion. An OFDM symbol is an example of a symbol block. A symbol block may alternatively be called a symbol. Embodiments of flexible frame structures include different parameters that may be configurable, e.g. frame length, subframe length, symbol block length, etc. A non-exhaustive list of possible configurable parameters in some embodiments of a flexible frame structure include:
(1) Frame: The frame length need not be limited to 10 ms, and the frame length may be configurable and change over time. In some embodiments, each frame includes one or multiple downlink synchronization channels and/or one or multiple downlink broadcast channels, and each synchronization channel and/or broadcast channel may be transmitted in a different direction by different beamforming. The frame length may take more than one possible value and be configured based on the application scenario. For example, autonomous vehicles may require relatively fast initial access, in which case the frame length may be set as 5 ms for autonomous vehicle applications. As another example, smart meters on houses may not require fast initial access, in which case the frame length may be set as 20 ms for smart meter applications.
(2) Subframe duration: A subframe might or might not be defined in the flexible frame structure, depending upon the implementation. For example, a frame may be defined to include slots, but no subframes. In frames in which a subframe is defined, e.g. for time domain alignment, the duration of the subframe may be configurable. For example, a subframe may be configured to have a length of 0.1 ms or 0.2 ms or 0.5 ms or 1 ms or 2 ms or 5 ms, etc. In some embodiments, if a subframe is not needed in a particular scenario, then the subframe length may be defined to be the same as the frame length or not defined.
(3) Slot configuration: A slot might or might not be defined in the flexible frame structure, depending upon the implementation. In frames in which a slot is defined, the definition of a slot (e.g. in time duration and/or in number of symbol blocks) may be configurable. In one embodiment, the slot configuration is common to all UEs or a group of UEs. For this case, the slot configuration information may be transmitted to UEs in a broadcast channel or common control channel (s). In other embodiments, the slot configuration may be UE specific, in which case the slot configuration information may be transmitted in a UE-specific control channel. In some embodiments, the slot configuration signaling can be transmitted together with frame configuration signaling and/or subframe configuration signaling. In other embodiments, the slot configuration can be transmitted independently from the frame configuration signaling and/or subframe configuration signaling. In general, the slot configuration may be system common, base station common, UE group common, or UE specific.
(4) Subcarrier spacing (SCS): SCS is one parameter of a scalable numerology, which may allow the SCS to range from 15 kHz to 480 kHz. The SCS may vary with the frequency of the spectrum and/or maximum UE speed to minimize the impact of the Doppler shift and phase noise. In some examples, there may be separate transmission and reception frames, and the SCS of symbols in the reception frame structure may be configured independently from the SCS of symbols in the transmission frame structure. The SCS in a reception frame may be different from the SCS in a transmission frame. In some examples, the SCS of each transmission frame may be half the SCS of each reception frame. If the SCS between a reception frame and a transmission frame is different, the difference does not necessarily have to scale by a factor of two, e.g. if more flexible symbol durations are implemented using inverse discrete Fourier transform (IDFT) instead of fast Fourier transform (FFT). Additional examples of frame structures can be used with different SCSs.
(5) Flexible transmission duration of basic transmission unit: The basic transmission unit may be a symbol block (alternatively called a symbol), which in general includes a redundancy portion (referred to as the CP) and an information (e.g. data) portion, although in some embodiments the CP may be omitted from the symbol block. The CP length may be flexible and configurable. The CP length may be fixed within a frame or flexible within a frame, and the CP length may possibly change from one frame to another, or from one group of frames to another group of frames, or from one subframe to another subframe, or from one slot to another slot, or dynamically from one scheduling to another scheduling. The information (e.g. data) portion may be flexible and configurable. Another possible parameter relating to a symbol block that may be defined is the ratio of CP duration to information (e.g. data) duration. In some embodiments, the symbol block length may be adjusted according to: channel condition (e.g. multi-path delay, Doppler); and/or latency requirement; and/or available time duration. As another example, a symbol block length may be adjusted to fit an available time duration in the frame.
(6) Flexible switch gap: A frame may include both a downlink portion for downlink transmissions from a base station, and an uplink portion for uplink transmissions from UEs. A gap may be present between each uplink and downlink portion, which is referred to as a switching gap. The switching gap length (duration) may be configurable. A switching gap duration may be fixed within a frame or flexible within a frame, and a switching gap duration may possibly change from one frame to another, or from one group of frames to another group of frames, or from one subframe to another subframe, or from one slot to another slot, or dynamically from one scheduling to another scheduling.
The concept of cell, carrier, bandwidth parts (BWPs) and occupied bandwidth will be described below.
A device, such as a base station, may provide coverage over a cell. Wireless communication with the device may occur over one or more carrier frequencies. A carrier frequency will be referred to as a carrier. A carrier may alternatively be called a component carrier (CC) . A carrier may be characterized by its bandwidth and a reference frequency, e.g. the center or lowest or highest frequency of the carrier. A carrier may be on licensed or unlicensed spectrum. Wireless communication with the device may also or instead occur over one or more bandwidth parts (BWPs) . For example, a carrier may have one or more BWPs. More generally, wireless communication with the device may occur over spectrum. The spectrum may comprise one or more carriers and/or one or more BWPs.
A cell may include one or multiple downlink resources and optionally one or multiple uplink resources, or a cell may include one or multiple uplink resources and optionally one or multiple downlink resources, or a cell may include both one or multiple downlink resources and one or multiple uplink resources. As an example, a cell might only include one downlink carrier/BWP, or only include one uplink carrier/BWP, or include multiple downlink carriers/BWPs, or include multiple uplink carriers/BWPs, or include one downlink carrier/BWP and one uplink carrier/BWP, or include one downlink carrier/BWP and multiple uplink carriers/BWPs, or include multiple downlink carriers/BWPs and one uplink carrier/BWP, or include multiple downlink carriers/BWPs and multiple uplink carriers/BWPs. In some embodiments, a cell may instead or additionally include one or multiple sidelink resources, including sidelink transmitting and receiving resources.
A BWP is a set of contiguous or non-contiguous frequency subcarriers on a carrier, or a set of contiguous or non-contiguous frequency subcarriers on multiple carriers; that is, a BWP is generally a set of contiguous or non-contiguous frequency subcarriers that may span one or more carriers.
In some embodiments, a carrier may have one or more BWPs, e.g. a carrier may have a bandwidth of 20 MHz and consist of one BWP, or a carrier may have a bandwidth of 80 MHz and consist of two adjacent contiguous BWPs, etc. In other embodiments, a BWP may have one or more carriers, e.g. a BWP may have a bandwidth of 40 MHz and consist of two adjacent contiguous carriers, where each carrier has a bandwidth of 20 MHz. In some embodiments, a BWP may comprise non-contiguous spectrum resources consisting of multiple non-contiguous carriers, where the first carrier of the non-contiguous multiple carriers may be in the mmW band, the second carrier may be in a low band (such as the 2 GHz band), the third carrier (if it exists) may be in the THz band, and the fourth carrier (if it exists) may be in the visible light band. Resources in one carrier which belong to the BWP may be contiguous or non-contiguous. In some embodiments, a BWP has non-contiguous spectrum resources on one carrier.
Wireless communication may occur over an occupied bandwidth. The occupied bandwidth may be defined as the width of a frequency band such that, below the lower and above the upper frequency limits, the mean powers emitted are each equal to a specified percentage β/2 of the total mean transmitted power, for example, the value of β/2 is taken as 0.5%.
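The occupied-bandwidth definition above lends itself to a short numerical illustration. The following Python sketch, offered only as a non-limiting example in which the Gaussian-shaped power spectral density is a hypothetical stand-in, locates the lower and upper frequency limits outside of which β/2 = 0.5% of the total mean power falls.

```python
import numpy as np

# A minimal sketch: estimate occupied bandwidth from a sampled power
# spectral density (PSD) using the beta/2 = 0.5% tail definition above.
freqs = np.linspace(-10e6, 10e6, 4001)        # frequency grid, Hz
psd = np.exp(-0.5 * (freqs / 2e6) ** 2)       # hypothetical PSD shape

cdf = np.cumsum(psd) / np.sum(psd)            # normalized cumulative power
beta_half = 0.005                             # 0.5% of power in each tail
f_low = freqs[np.searchsorted(cdf, beta_half)]       # lower frequency limit
f_high = freqs[np.searchsorted(cdf, 1 - beta_half)]  # upper frequency limit
print(f"occupied bandwidth = {(f_high - f_low) / 1e6:.2f} MHz")
```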
The carrier, the BWP, or the occupied bandwidth may be signaled by a network device (e.g. base station) dynamically, e.g. in physical layer control signaling such as DCI, or semi-statically, e.g. in radio resource control (RRC) signaling or in the medium access control (MAC) layer, or be predefined based on the application scenario; or be determined by the UE as a function of other parameters that are known by the UE, or may be fixed, e.g. by a standard.
In current networks, frame timing and synchronization is established based on synchronization signals, such as a primary synchronization signal (PSS) and a secondary synchronization signal (SSS). Notably, known frame timing and synchronization strategies involve adding a timestamp, e.g., (xx0: yy0: zz0), to a frame boundary, where xx0, yy0 and zz0 in the timestamp may represent a time format such as hour, minute and second, respectively.
It is anticipated that diverse applications and use cases in future networks may involve usage of different periods of frames, slots and symbols to satisfy the different requirements, functionalities and Quality of Service (QoS) types. It follows that usage of different periods of frames to satisfy these applications may present challenges for frame timing alignment among diverse frame structures. Consider, for example, frame timing alignment for a TDD configuration in neighboring carrier frequency bands or among sub-bands (or bandwidth parts) of one channel/carrier bandwidth.
The present disclosure relates, generally, to mobile, wireless communication and, in particular embodiments, to frame timing alignment/realignment, where the frame timing alignment/realignment may comprise a timing alignment/realignment in terms of a boundary of a symbol, a slot or a sub-frame within a frame, or of a frame (thus the frame timing alignment/realignment here is more general, not limited to cases where a timing alignment/realignment is from a frame boundary only). Also, in this application, relative timing to a frame or frame boundary should be interpreted in a more general sense, i.e., the frame boundary means a timing point of a frame element within the frame, such as (the starting or ending of) a symbol, a slot or a sub-frame within a frame, or a frame. In the following, the phrases “ (frame) timing alignment or timing realignment” and “relative timing to a frame boundary” are used in the more general sense described above.
In overview, aspects of the present application relate to a network device, such as a base station 170, referenced hereinafter as a TRP 170, transmitting signaling that carries a timing realignment indication message. The timing realignment indication message includes information allowing a receiving UE 110 to determine a timing reference point. On the basis of the timing reference point, transmission of frames, by the UE 110, may be aligned. In some aspects of the present application, the frames that become aligned are in different sub-bands of one carrier frequency band. In other aspects of the present application, the frames that become aligned are found in neighboring carrier frequency bands.
On the TRP 170 side, aspects of the present application relate to use of one or more types of signaling to indicate the timing realignment (or/and timing correction) message. Two example types of signaling are provided here to show the schemes. The first example type of signaling may be referenced as cell-specific signaling, examples of which include group common signaling and broadcast signaling. The second example type of signaling may be referenced as UE-specific signaling. One of these two types of signaling or a combination of the two types of signaling may be used to transmit a timing realignment indication message. The timing realignment indication message may be shown to notify one or more UEs 110 of a configuration of a timing reference point. References, hereinafter, to the term “UE 110” may be understood to represent reference to a broad class of generic wireless communication devices within a cell (i.e., a network receiving node, such as a wireless device, a sensor, a gateway, a router, etc.), that is, being served by the TRP 170. A timing reference point is a timing reference instant and may be expressed in terms of a relative timing, in view of a timing point in a frame, such as (the starting or ending boundary of) a symbol, a slot or a sub-frame within a frame, or a frame. For a simple description in the following, the term “a frame boundary” is used to represent a boundary of possibly a symbol, a slot or a sub-frame within a frame, or a frame. Thus, the timing reference point may be expressed in terms of a relative timing, in view of a current frame boundary, e.g., the start of the current frame. Alternatively, the timing reference point may be expressed in terms of an absolute timing based on a certain standard timing reference such as a GNSS (e.g., GPS), Coordinated Universal Time (“UTC”), etc. In the absolute timing version of the timing reference point, a timing reference point may be explicitly stated.
The timing reference point may be shown to allow for timing adjustments to be implemented at the UEs 110. The timing adjustments may be implemented for improvement of accuracy for a clock at the UE 110. Alternatively, or additionally, the timing reference point may be shown to allow for adjustments to be implemented in future transmissions made from the UEs 110. The adjustments may be shown to cause realignment of transmitted frames at the timing reference point. Note that the realignment of transmitted frames at the timing reference point may comprise the timing realignment from (the starting boundary of) a symbol, a slot or a sub-frame within a frame; or a frame at the timing reference point for one or more UEs and one or more BSs (in a cell or a group of cells) , which applies across the application below.
At UE 110 side, the UE 110 may monitor for the timing realignment indication message. Responsive to receiving the timing realignment indication message, the UE 110 may obtain the timing reference point and take steps to cause frame realignment at the timing reference point. Those steps may, for example, include commencing transmission of a subsequent frame at the timing reference point.
Furthermore, or alternatively, before monitoring for the timing realignment indication message, the UE 110 may cause the TRP 170 to transmit the timing realignment indication message by transmitting, to the TRP 170, a request for a timing realignment, that is, a timing realignment request message. Responsive to receiving the timing realignment request message, the TRP 170 may transmit, to the UE 110, a timing realignment indication message including information on a timing reference point, thereby allowing the UE 110 to implement a timing realignment (or/and a timing adjustment including clock timing error correction) , wherein the timing realignment is in terms of (e.g., a starting boundary of) a symbol, a slot or a sub-frame within a frame; or a frame for UEs and base station (s) in a cell (or a group of cells) .
According to aspects of the present application, a TRP 170 associated with a given cell may transmit a timing realignment indication message. The timing realignment indication message may include enough information to allow a receiver of the message to obtain a timing reference point. The timing reference point may be used, by one or more UEs 110 in the given cell, when performing a timing realignment (or/and a timing adjustment including clock timing error correction) .
According to aspects of the present application, the timing reference point may be expressed, within the timing realignment indication message, relative to a frame boundary (where, as previously described and applicable below across the application, a frame boundary can be a boundary of a symbol, a slot or a sub-frame within a frame, or of a frame). The timing realignment indication message may include a relative timing indication, Δt. It may be shown that the relative timing indication, Δt, expresses the timing reference point as occurring a particular duration, i.e., Δt, subsequent to a frame boundary for a given frame. Since the frame boundary is what allows the UE 110 to determine the timing reference point, the UE 110 must be aware of the given frame that has the frame boundary of interest. Accordingly, the timing realignment indication message may also include a system frame number (SFN) for the given frame.
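For illustration only, the following Python sketch resolves a timing reference point from an SFN and a relative timing indication Δt, as just described. The 10 ms frame length, the epoch argument and the function name are assumptions made for the example, not part of the signaling design.

```python
# A minimal sketch, assuming 10 ms frames and an arbitrary epoch t0.
FRAME_MS = 10.0

def timing_reference_point(sfn: int, delta_t_ms: float,
                           t0_ms: float = 0.0) -> float:
    """Absolute time (ms) of the reference point: the boundary of the
    frame identified by `sfn`, plus the relative timing indication."""
    frame_boundary = t0_ms + sfn * FRAME_MS
    return frame_boundary + delta_t_ms

# e.g. a reference point 3.25 ms after the start of frame 512:
print(timing_reference_point(sfn=512, delta_t_ms=3.25))  # 5123.25
```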
It is known, in 5G NR, that the SFN is a value in the range from 0 to 1023, inclusive. Accordingly, 10 bits may be used to represent an SFN. When an SFN is carried by an SSB, six of the 10 bits for the SFN may be carried in a Master Information Block (MIB) and the remaining four bits of the 10 bits for the SFN may be carried in a Physical Broadcast Channel (PBCH) payload.
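Purely as an illustration of the six-bit/four-bit split noted above, the following sketch reassembles the 10-bit SFN, assuming the six MIB bits are the most significant bits; the variable names are hypothetical.

```python
# A minimal sketch: rebuild the 10-bit SFN from six MIB bits (assumed to
# be the most significant bits) and four PBCH payload bits.

def assemble_sfn(mib_msbs: int, pbch_lsbs: int) -> int:
    assert 0 <= mib_msbs < 64 and 0 <= pbch_lsbs < 16
    return (mib_msbs << 4) | pbch_lsbs   # resulting value is in 0..1023

print(assemble_sfn(0b101101, 0b0110))    # -> 726
```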
Optionally, the timing realignment indication message may include other parameters. The other parameters may, for example, include a minimum time offset. The minimum time offset may establish a duration of time preceding the timing reference point. The UE 110 may rely upon the minimum time offset as an indication that DL signaling, including the timing realignment indication message, will allow the UE 110 enough time to detect the timing realignment indication message to obtain information on the timing reference point.
A generic background for 6G integrated sensing and communication will now be described. User Equipment (UE) position information is often used in cellular communication networks to improve various performance metrics for the network. Such performance metrics may, for example, include capacity, agility, and efficiency. The improvement may be  achieved when elements of the network exploit the position, the behavior, the mobility pattern, etc., of the UE in the context of a priori information describing a wireless environment in which the UE is operating.
A sensing system may be used to help gather UE pose information, including its location in a global coordinate system, its velocity and direction of movement in the global coordinate system, orientation information, and the information about the wireless environment. “Location” is also known as “position” and these two terms may be used interchangeably herein. Examples of well-known sensing systems include RADAR (Radio Detection and Ranging) and LIDAR (Light Detection and Ranging) . While the sensing system can be separate from the communication system, it could be advantageous to gather the information using an integrated system, which reduces the hardware (and cost) in the system as well as the time, frequency, or spatial resources needed to perform both functionalities. However, using the communication system hardware to perform sensing of UE pose and environment information is a highly challenging and open problem. The difficulty of the problem relates to factors such as the limited resolution of the communication system, the dynamicity of the environment, and the huge number of objects whose electromagnetic properties and position are to be estimated.
Accordingly, integrated sensing and communication (also known as integrated communication and sensing) is a desirable feature in existing and future communication systems.
Any or all of the EDs 110 and BS 170 may be sensing nodes in the communication system 100E as illustrated in FIG. 1E, which is an example sensing system in accordance with some example embodiments of the present disclosure. Sensing nodes are network entities that perform sensing by transmitting and receiving sensing signals. Some sensing nodes are communication equipment that perform both communications and sensing. However, it is possible that some sensing nodes do not perform communications, and are instead dedicated to sensing. FIG. 1E differs from FIG. 1B in that there is a sensing agent 195 in the communication system 100E, which is absent in FIG. 1B. The sensing agent 195 is an example of a sensing node that is dedicated to sensing. Unlike the EDs 110 and BS 170, the sensing agent 195 does not transmit or receive communication signals. However, the sensing agent 195 may communicate configuration information, sensing information, signaling information, or other information within the communication system 100E. The sensing agent 195 may be in communication with the core network 130 to communicate information with the rest of the communication system 100E. By way of example, the sensing agent 195 may determine the location of the ED 110a, and transmit this information to the base station 170a via the core network 130. Although only one sensing agent 195 is shown in FIG. 1E, any number of sensing agents may be implemented in the communication system 100E. In some embodiments, one or more sensing agents may be implemented at one or more of the RANs 120.
A sensing node may combine sensing-based techniques with reference signal-based techniques to enhance UE pose determination. This type of sensing node may also be known as a sensing management function (SMF) . In some networks, the SMF may also be known as a location management function (LMF) . The SMF may be implemented as a physically independent entity located at the core network 130 with connection to the multiple BSs 170. In other aspects of the present application, the SMF may be implemented as a logical entity co-located inside a BS 170 through logic carried out by the processor 260.
FIG. 1F illustrates an example apparatus 100F that may implement the methods and teachings according to this disclosure. In particular, FIG. 1F illustrates an example SMF 176, which may be implemented in a UE 110, a system node 120, or a network node 130. As will be discussed further below, the SMF 176 may be specialized, or include specialized components, to support training and/or execution of AI models (e.g., training and/or execution of neural networks) .
As shown in FIG. 1F, the SMF 176, when implemented as a physically independent entity, includes at least one processor 290, at least one transmitter 282, at least one receiver 284, one or more antennas 286, and at least one memory 288. A transceiver, not shown, may be used instead of the transmitter 282 and receiver 284. A scheduler 283 may be coupled to the processor 290. The scheduler 283 may be included within or operated separately from the SMF 176. The processor 290 implements various processing operations of the SMF 176, such as signal coding, data processing, power control,  input/output processing, or any other functionality. The processor 290 can also be configured to implement some or all of the functionality and/or embodiments described in more detail above. Each processor 290 includes any suitable processing or computing device configured to perform one or more operations. Each processor 290 could, for example, include a microprocessor, microcontroller, digital signal processor, field programmable gate array, or application specific integrated circuit.
A reference signal-based pose determination technique belongs to an “active” pose estimation paradigm. In an active pose estimation paradigm, the enquirer of pose information (i.e., the UE) takes part in the process of determining the pose of the enquirer. The enquirer may transmit or receive (or both) a signal specific to the pose determination process. Positioning techniques based on a global navigation satellite system (GNSS), such as the Global Positioning System (GPS), are other examples of the active pose estimation paradigm.
In contrast, a sensing technique, based on radar for example, may be considered as belonging to a “passive” pose determination paradigm. In a passive pose determination paradigm, the target is oblivious to the pose determination process.
By integrating sensing and communications in one system, the system need not operate according to only a single paradigm. Thus, the combination of sensing-based techniques and reference signal-based techniques can yield enhanced pose determination.
The enhanced pose determination may, for example, include obtaining UE channel sub-space information, which is particularly useful for UE channel reconstruction at the sensing node, especially for a beam-based operation and communication. The UE channel sub-space is a subset of the entire algebraic space, defined over the spatial domain, in which the entire channel from the TP to the UE lies. Accordingly, the UE channel sub-space defines the TP-to-UE channel with very high accuracy. The signals transmitted over other sub-spaces result in a negligible contribution to the UE channel. Knowledge of the UE channel sub-space helps to reduce the effort needed for channel measurement at the UE and channel reconstruction at the network-side. Therefore, the combination of sensing-based techniques and reference signal-based techniques may enable the UE channel reconstruction with much less overhead as compared to traditional methods. Sub-space information can also facilitate sub-space based sensing to reduce sensing complexity and improve sensing accuracy. In some embodiments of integrated sensing and communication, a same radio access technology (RAT) is used for sensing and communication. This avoids the need to multiplex two different RATs under one carrier spectrum, or necessitating two different carrier spectrums for the two different RATs.
In embodiments that integrate sensing and communication under one RAT, a first set of channels may be used to transmit a sensing signal, and a second set of channels may be used to transmit a communications signal. In some embodiments, each channel in the first set of channels and each channel in the second set of channels is a logical channel, a transport channel, or a physical channel.
At the physical layer, communication and sensing may be performed via separate physical channels. For example, a first physical downlink shared channel PDSCH-C is defined for data communication, while a second physical downlink shared channel PDSCH-S is defined for sensing. Similarly, separate physical uplink shared channels (PUSCH), PUSCH-C and PUSCH-S, could be defined for uplink communication and sensing.
In another example, the same PDSCH and PUSCH could also be used for both communication and sensing, with separate logical layer channels and/or transport layer channels defined for communication and sensing. Note also that the control channel (s) and data channel (s) for sensing can have the same or different channel structure (format), and can occupy the same or different frequency bands or bandwidth parts.
In a further example, a common physical downlink control channel (PDCCH) and a common physical uplink control channel (PUCCH) are used to carry control information for both sensing and communication. Alternatively, separate physical layer control channels may be used to carry separate control information for communication and sensing. For example, PUCCH-S and PUCCH-C could be used for uplink control for sensing and communication respectively, and PDCCH-S and PDCCH-C for downlink control for sensing and communication respectively.
Different combinations of shared and dedicated channels for sensing and communication, at each of the physical, transport, and logical layers, are possible.
The term RADAR originates from the phrase Radio Detection and Ranging; however, expressions with different forms of capitalization (i.e., Radar and radar) are equally valid and now more common. Radar is typically used for detecting a presence and a location of an object. A radar system radiates radio frequency energy and receives echoes of the energy reflected from one or more targets. The system determines the pose of a given target based on the echoes returned from the given target. The radiated energy can be in the form of an energy pulse or a continuous wave, which can be expressed or defined by a particular waveform. Examples of waveforms used in radar include frequency modulated continuous wave (FMCW) and ultra-wideband (UWB) waveforms.
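As a brief worked example of the echo-based ranging just described, offered only as an illustration: in a monostatic configuration, the radiated energy travels to the target and back, so a round-trip echo delay τ corresponds to a range R = cτ/2.

```python
# A minimal sketch of monostatic range estimation from echo delay.
C = 299_792_458.0  # speed of light, m/s

def range_from_delay(tau_s: float) -> float:
    """Target range implied by a round-trip echo delay tau_s (seconds)."""
    return C * tau_s / 2.0

print(f"{range_from_delay(1e-6):.1f} m")  # a 1 us delay -> about 149.9 m
```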
Radar systems can be monostatic, bi-static, or multi-static. In a monostatic radar system, the radar signal transmitter and receiver are co-located, such as being integrated in a transceiver. In a bi-static radar system, the transmitter and receiver are spatially separated, and the distance of separation is comparable to, or larger than, the expected target distance (often referred to as the range) . In a multi-static radar system, two or more radar components are spatially diverse but with a shared area of coverage. A multi-static radar is also referred to as a multisite or netted radar.
Terrestrial radar applications encounter challenges such as multipath propagation and shadowing impairments. Another challenge is the problem of identifiability because terrestrial targets have similar physical attributes. Integrating sensing into a communication system is likely to suffer from these same challenges, and more.
Communication nodes can be either half-duplex or full-duplex. A half-duplex node cannot both transmit and receive using the same physical resources (time, frequency, etc. ) ; conversely, a full-duplex node can transmit and receive using the same physical resources. Existing commercial wireless communications networks are all half-duplex. Even if full-duplex communications networks become practical in the future, it is expected that at least some of the nodes in the network will still be half-duplex nodes because half-duplex devices are less complex, and have lower cost and lower power consumption. In particular, full-duplex implementation is more challenging at higher frequencies (e.g. in the millimeter wave bands) , and very challenging for small and low-cost devices, such as femtocell base stations and UEs.
The limitation of half-duplex nodes in the communications network presents further challenges toward integrating sensing and communications into the devices and systems of the communications network. For example, both half-duplex and full-duplex nodes can perform bi-static or multi-static sensing, but monostatic sensing typically requires the sensing node have full-duplex capability. A half-duplex node may perform monostatic sensing with certain limitations, such as in a pulsed radar with a specific duty cycle and ranging capability.
Sensing signal waveform and frame structure will now be described. Properties of a sensing signal, or a signal used for both sensing and communication, include the waveform of the signal and the frame structure of the signal. The frame structure defines the time-domain boundaries of the signal. The waveform describes the shape of the signal as a function of time and frequency. Examples of waveforms that can be used for a sensing signal include ultra-wide band (UWB) pulse, Frequency-Modulated Continuous Wave (FMCW) or “chirp” , orthogonal frequency-division multiplexing (OFDM) , cyclic prefix (CP) -OFDM, and Discrete Fourier Transform spread (DFT-s) -OFDM.
In an embodiment, the sensing signal is a linear chirp signal with bandwidth B and time duration T. Such a linear chirp signal is generally known from its use in FMCW radar systems. A linear chirp signal is defined by an increase in frequency from an initial frequency, f_chirp0, at an initial time, t_chirp0, to a final frequency, f_chirp1, at a final time, t_chirp1, where the relation between the frequency (f) and time (t) can be expressed as the linear relation f − f_chirp0 = α (t − t_chirp0), where α = (f_chirp1 − f_chirp0) / (t_chirp1 − t_chirp0) is defined as the chirp slope. The bandwidth of the linear chirp signal may be defined as B = f_chirp1 − f_chirp0 and the time duration of the linear chirp signal may be defined as T = t_chirp1 − t_chirp0, so that α = B/T. Such a linear chirp signal can be presented as s (t) = exp (jπαt^2), 0 ≤ t ≤ T, in the baseband representation.
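The baseband chirp above is straightforward to synthesize numerically. The following Python sketch, with illustrative parameter values only, generates s (t) = exp (jπαt^2) with chirp slope α = B/T, whose instantaneous frequency f (t) = αt sweeps from 0 to B over the duration T.

```python
import numpy as np

# A minimal sketch: generate the baseband linear chirp described above.
B = 100e6           # sweep bandwidth, Hz (illustrative)
T = 50e-6           # chirp duration, s (illustrative)
fs = 2 * B          # complex sampling rate covering the sweep

alpha = B / T                          # chirp slope, Hz/s
t = np.arange(0, T, 1 / fs)            # time axis over one chirp
s = np.exp(1j * np.pi * alpha * t**2)  # baseband linear chirp samples

# instantaneous frequency f(t) = alpha * t sweeps 0 -> B over [0, T]
print(len(s), "samples,", alpha * T / 1e6, "MHz total sweep")
```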
Precoding as used herein may refer to any coding operation (s) or modulation (s) that transform an input signal into an output signal. Precoding may be performed in different domains, and typically transform the input signal in a first domain to an output signal in a second domain. Precoding may include linear operations.
A terrestrial communication system may also be referred to as a land-based or ground-based communication system, although a terrestrial communication system can also, or instead, be implemented on or in water. The non-terrestrial communication system may bridge coverage gaps for underserved areas by extending the coverage of cellular networks through non-terrestrial nodes, which will be key to ensuring global seamless coverage and providing mobile broadband services to unserved/underserved regions, since it is hardly possible to implement terrestrial access-point/base-station infrastructure in areas like oceans, mountains, forests, or other remote areas.
The terrestrial communication system may be a wireless communication system using 5G technology and/or later generation wireless technology (e.g., 6G or later). In some examples, the terrestrial communication system may also accommodate some legacy wireless technologies (e.g., 3G or 4G wireless technology). The non-terrestrial communication system may be a communication system using satellite constellations, such as Geo-Stationary Orbit (GEO) satellites, which may broadcast public/popular content to a local server; Low Earth Orbit (LEO) satellites, which establish a better balance between large coverage area and propagation path loss/delay; satellites in Very Low Earth Orbit (VLEO), for which enabling technologies substantially reduce the cost of launching satellites to lower orbits; High Altitude Platforms (HAPs), which provide a low path-loss air interface for users with a limited power budget; or Unmanned Aerial Vehicles (UAVs) (or unmanned aerial systems (UAS)), which can achieve dense deployment since their coverage can be limited to a local area (such as airborne platforms, balloons, quadcopters, drones, etc.). In some examples, GEO satellites, LEO satellites, UAVs, HAPs and VLEOs may be horizontal and two-dimensional. In some examples, UAVs, HAPs and VLEOs coupled with satellite communications integrated into cellular networks form emerging 3D vertical networks, which consist of many moving (other than geostationary satellites) and high-altitude access points such as UAVs, HAPs and VLEOs.
Multiple-input multiple-output (MIMO) technology allows an antenna array of multiple antennas to perform signal transmissions and receptions to meet high transmission rate requirements. The ED 110 and T-TRP 170, and/or NT-TRP 172, described above may use MIMO to communicate over the wireless resource blocks. MIMO utilizes multiple antennas at the transmitter and/or receiver to transmit wireless resource blocks over parallel wireless signals. MIMO may beamform parallel wireless signals for reliable multipath transmission of a wireless resource block. MIMO may bond parallel wireless signals that transport different data to increase the data rate of the wireless resource block.
In recent years, MIMO (large-scale MIMO) wireless communication systems, with the above T-TRP 170 and/or NT-TRP 172 configured with a large number of antennas, have gained wide attention from academia and industry. In a large-scale MIMO system, the T-TRP 170 and/or NT-TRP 172 is generally configured with more than ten antenna units (such as 128 or 256), and serves dozens of EDs 110 (such as 40) at the same time. A large number of antenna units at the T-TRP 170 and NT-TRP 172 can greatly increase the degree of spatial freedom of wireless communication, greatly improve the transmission rate, spectrum efficiency and power efficiency, and eliminate inter-cell interference to a large extent. The increase in the number of antennas allows each antenna unit to be made in a smaller size at a lower cost. Using the degree of spatial freedom provided by the large-scale antenna units, the T-TRP 170 and NT-TRP 172 of each cell can communicate with many EDs 110 in the cell on the same time-frequency resource at the same time, thus greatly increasing the spectrum efficiency. A large number of antenna units at the T-TRP 170 and/or NT-TRP 172 also enables each user to have better spatial directivity for uplink and downlink transmission, so that the transmit power of the T-TRP 170, NT-TRP 172 and ED 110 is noticeably reduced and the power efficiency is greatly increased. When the number of antennas at the T-TRP 170 and/or NT-TRP 172 is sufficiently large, the random channels between each ED 110 and the T-TRP 170 and/or NT-TRP 172 approach orthogonality, and inter-cell and inter-user interference, as well as the effect of noise, can be largely eliminated. The advantages described above give large-scale MIMO excellent application prospects.
A MIMO system may include a receiver connected to a receive (Rx) antenna, a transmitter connected to a transmit (Tx) antenna, and a signal processor connected to the transmitter and the receiver. Each of the Rx antenna and the Tx antenna may include a plurality of antennas. For instance, the Rx antenna may have a uniform linear array (ULA) in which the plurality of antennas are arranged in a line at even intervals. When a radio frequency (RF) signal is transmitted through the Tx antenna, the Rx antenna may receive a signal reflected and returned from a forward target.
A non-exhaustive list of possible units or configurable parameters in some embodiments of a MIMO system includes:
Panel: a unit of an antenna group, antenna array, or antenna sub-array that can control its Tx or Rx beam independently.
Beam: A beam is formed by performing amplitude and/or phase weighting on data transmitted or received by at least one antenna port, or may be formed by using another method, for example, adjusting a related parameter of an antenna unit. The beam may include a Tx beam and/or an Rx beam. The transmit beam indicates the distribution of signal strength formed in different directions in space after a signal is transmitted through an antenna. The receive beam indicates the distribution, in different directions in space, of the strength with which a wireless signal is received from an antenna. The beam information may be a beam identifier, or antenna port (s) identifier, or CSI-RS resource identifier, or SSB resource identifier, or SRS resource identifier, or other reference signal resource identifier.
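As a non-limiting illustration of the amplitude/phase weighting mentioned above, the following Python sketch forms Tx beam weights for a uniform linear array (ULA) using a steering vector, i.e., a per-antenna phase ramp. The half-wavelength element spacing, the steering angle and the function names are assumptions for the example.

```python
import numpy as np

# A minimal sketch: ULA beamforming weights via a steering vector.
def ula_steering_vector(n_ant: int, theta_deg: float,
                        d_over_lambda: float = 0.5) -> np.ndarray:
    """Unit-norm weights steering an n_ant-element ULA toward theta_deg."""
    k = np.arange(n_ant)
    phase = 2j * np.pi * d_over_lambda * k * np.sin(np.deg2rad(theta_deg))
    return np.exp(phase) / np.sqrt(n_ant)

def array_gain(w: np.ndarray, theta_deg: float,
               d_over_lambda: float = 0.5) -> float:
    """Power gain of weights w toward the direction theta_deg."""
    k = np.arange(len(w))
    a = np.exp(2j * np.pi * d_over_lambda * k * np.sin(np.deg2rad(theta_deg)))
    return float(abs(np.vdot(w, a)) ** 2)

w = ula_steering_vector(n_ant=8, theta_deg=30.0)
print(array_gain(w, 30.0))  # ~8.0: full array gain in the steered direction
print(array_gain(w, 0.0))   # ~0.0: a null toward broadside for this choice
```

With this particular choice of eight elements and a 30-degree steering angle, the weighted array attains the full array gain in the steered direction while exhibiting a null toward broadside.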
Artificial Intelligence technologies can be applied in communication, including artificial intelligence or machine learning (AI/ML) based communication in the physical layer and/or AI/ML based communication in the higher layer, e.g., medium access control (MAC) layer. For example, in the physical layer, the AI/ML based communication may aim to optimize component design and/or improve the algorithm performance. For the MAC layer, the AI/ML based communication may aim to utilize the AI/ML capability for learning, prediction, and/or making a decision to solve a complicated optimization problem with possible better strategy and/or optimal solution, e.g. to optimize the functionality in the MAC layer, e.g. intelligent TRP management, intelligent beam management, intelligent channel resource allocation, intelligent power control, intelligent spectrum utilization, intelligent modulation and coding scheme (MCS) , intelligent hybrid automatic repeat request (HARQ) strategy, intelligent transmit/receive (Tx/Rx) mode adaption, etc.
The following are some terminologies which are used in AI/ML field:
Data collection
Data is a very important component of AI/ML techniques. Data collection is a process of collecting data by the network nodes, management entity, or UE for the purpose of AI/ML model training, data analytics and inference.
AI/ML model training
AI/ML model training is a process of training an AI/ML model by learning the input/output relationship in a data-driven manner to obtain the trained AI/ML model for inference.
AI/ML model inference
A process of using a trained AI/ML model to produce a set of outputs based on a set of inputs.
AI/ML model validation
As a sub-process of training, validation is used to evaluate the quality of an AI/ML model using a dataset different from the one used for model training. Validation can help select model parameters that generalize beyond the dataset used for model training. The model parameters obtained after training can be adjusted further by the validation process.
AI/ML model testing
Similar to validation, testing is also a sub-process of training, and it is used to evaluate the performance of a final AI/ML model using a dataset different from the ones used for model training and validation. Unlike AI/ML model validation, testing does not assume subsequent tuning of the model.
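The training/validation/testing partition implied by the definitions above can be illustrated with a short sketch; the split ratios and dataset size below are arbitrary assumptions.

```python
import numpy as np

# A minimal sketch: three disjoint index sets over one dataset.
rng = np.random.default_rng(0)
n = 1000
idx = rng.permutation(n)
train_idx = idx[:700]    # used to fit the model parameters
val_idx = idx[700:850]   # used to select/adjust the model during training
test_idx = idx[850:]     # held out; used once, with no subsequent tuning
print(len(train_idx), len(val_idx), len(test_idx))  # 700 150 150
```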
Online training:
Online training means an AI/ML training process where the model being used for inference is typically continuously trained in (near) real-time with the arrival of new training samples.
Offline training:
An AI/ML training process where the model is trained based on a collected dataset, and where the trained model is later used or delivered for inference.
AI/ML model delivery/transfer
A generic term referring to delivery of an AI/ML model from one entity to another entity in any manner. Delivery of an AI/ML model over the air interface includes either parameters of a model structure known at the receiving end or a new model with parameters. Delivery may contain a full model or a partial model.
Life cycle management (LCM)
When the AI/ML model is trained and/or used for inference at one device, it is necessary to monitor and manage the whole AI/ML process to guarantee the performance gain obtained by AI/ML technologies. For example, due to the randomness of wireless channels and the mobility of UEs, the propagation environment of wireless signals changes frequently. As a result, it is difficult for an AI/ML model to maintain optimal performance in all scenarios at all times, and the performance may even deteriorate sharply in some scenarios. Therefore, the lifecycle management (LCM) of AI/ML models is essential for sustainable operation of AI/ML over the NR air interface.
Life cycle management covers the whole procedure of AI/ML technologies as applied at one or more nodes. Specifically, it includes at least one of the following sub-processes: data collection, model training, model identification, model registration, model deployment, model configuration, model inference, model selection, model activation, model deactivation, model switching, model fallback, model monitoring, model update, model transfer/delivery and UE capability report.
Model monitoring can be based on inference accuracy, including metrics related to intermediate key performance indicators (KPIs), and it can also be based on system performance, including metrics related to system performance KPIs, e.g., accuracy and relevance, overhead, complexity (computation and memory cost), latency (timeliness of the monitoring result, from model failure to action) and power consumption. Moreover, the data distribution may shift after deployment due to environment changes; thus, monitoring based on the input or output data distribution should also be considered.
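One simple way to realize the distribution-based monitoring mentioned above, offered purely as a non-limiting illustration, is to compare histograms of an input feature at training time and after deployment, e.g., via the population stability index (PSI); the feature data below are synthetic assumptions.

```python
import numpy as np

# A minimal sketch: flag input-distribution shift with the population
# stability index (PSI) between training-time and post-deployment samples.
def psi(train_feat, live_feat, bins=10, eps=1e-6):
    edges = np.histogram_bin_edges(train_feat, bins=bins)
    p, _ = np.histogram(train_feat, bins=edges)
    q, _ = np.histogram(live_feat, bins=edges)
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum((p - q) * np.log(p / q)))

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 5000)   # training-time feature samples
shifted = rng.normal(0.5, 1.2, 5000)    # post-deployment feature samples
print(f"PSI = {psi(baseline, shifted):.3f}")  # a large PSI flags a shift
```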
Supervised learning:
The goal of supervised learning algorithms is to train a model that maps feature vectors (inputs) to labels (outputs), based on training data that includes example feature-label pairs. Supervised learning can analyze the training data and produce an inferred function, which can be used for mapping the inference data.
Supervised learning can be further divided into two types: classification and regression. Classification is used when the output of the AI/ML model is categorical, i.e., with two or more classes. Regression is used when the output of the AI/ML model is a real or continuous value.
Unsupervised learning:
In contrast to supervised learning, where the AI/ML models learn to map the input to the target output, unsupervised methods learn concise representations of the input data without labelled data, which can be used for data exploration or to analyze or generate new data. One typical unsupervised learning technique is clustering, which explores the hidden structure of the input data and provides classification results for the data.
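As a non-limiting illustration of clustering, the following sketch runs a few iterations of k-means on synthetic two-dimensional data; the data, the choice of k = 2 and the iteration count are assumptions made for the demonstration.

```python
import numpy as np

# A minimal sketch: k-means uncovering structure in unlabelled data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (100, 2)),   # hidden cluster 1
               rng.normal(2, 0.5, (100, 2))])   # hidden cluster 2
k = 2
centers = X[rng.choice(len(X), k, replace=False)]  # random initialization
for _ in range(10):
    # assign each sample to its nearest center, then recompute the centers
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
print(centers)  # two centers near (-2, -2) and (2, 2)
```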
Reinforcement learning:
Reinforcement learning is used to solve sequential decision-making problems. Reinforcement learning is a process of training the action of an intelligent agent from an input (state) and a feedback signal (reward) in an environment. In reinforcement learning, an intelligent agent interacts with an environment by taking actions to maximize the cumulative reward. Whenever the intelligent agent takes an action, the current state of the environment may transition to a new state, and the new state resulting from the action yields an associated reward. The intelligent agent can then take the next action based on the received reward and the new state of the environment. During the training phase, the agent interacts with the environment to collect experience. The environment is often mimicked by a simulator, since it is expensive to interact directly with the real system. In the inference phase, the agent can use the optimal decision-making rule learned in the training phase to achieve the maximal accumulated reward.
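The state/action/reward loop described above can be made concrete with a toy example. The following sketch runs tabular Q-learning on a hypothetical five-state chain environment standing in for a simulator; the environment, hyperparameters and reward are assumptions for illustration only.

```python
import numpy as np

# A minimal sketch: tabular Q-learning on a toy chain environment.
n_states, n_actions = 5, 2           # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1    # learning rate, discount, exploration
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    for _ in range(20):
        # epsilon-greedy action selection by the agent
        a = (int(rng.integers(n_actions)) if rng.random() < eps
             else int(Q[s].argmax()))
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0  # reward at the goal
        # update the action value from the reward and the new state
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # learned policy: move right in every state
```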
Federated learning:
Federated learning (FL) is a machine learning technique that is used to train an AI/ML model by a central node (e.g., server) and a plurality of decentralized edge nodes (e.g., UEs, next Generation NodeBs, “gNBs” ) .
According to the wireless FL technique, a server may provide, to an edge node, a set of model parameters (e.g., weights, biases, gradients) that describe a global AI/ML model. The edge node may initialize a local AI/ML model with the received global AI/ML model parameters. The edge node may then train the local AI/ML model using local data samples to, thereby, produce a trained local AI/ML model. The edge node may then provide, to the server, a set of AI/ML model parameters that describe the local AI/ML model.
Upon receiving, from a plurality of edge nodes, a plurality of sets of AI/ML model parameters that describe the respective local AI/ML models at the plurality of edge nodes, the server may aggregate the local AI/ML model parameters reported from the plurality of edge nodes and, based on such aggregation, update the global AI/ML model. A subsequent iteration progresses much like the first iteration. The server may transmit the aggregated global model to a plurality of edge nodes. The above procedure is performed for multiple iterations until the global AI/ML model is considered to be finalized, e.g., the AI/ML model has converged or the training stopping conditions are satisfied.
Notably, the wireless FL technique does not involve exchange of local data samples. Indeed, the local data samples remain at respective edge nodes.
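The aggregation described above is commonly realized as federated averaging (FedAvg), i.e., a weighted average of the local parameters by local dataset size. The following sketch shows one such round; the stubbed local training function and the synthetic data are assumptions for illustration, and, consistent with the note above, the local samples never leave their nodes.

```python
import numpy as np

# A minimal sketch: one federated-averaging round over three edge nodes.
def local_train(global_params, local_data):
    # stand-in for local training on samples that never leave the node
    return global_params + 0.01 * local_data.mean(axis=0)

global_params = np.zeros(4)   # global AI/ML model parameters
edge_datasets = [np.random.default_rng(i).normal(size=(50 * (i + 1), 4))
                 for i in range(3)]

local_params = [local_train(global_params, d) for d in edge_datasets]
sizes = np.array([len(d) for d in edge_datasets], dtype=float)
weights = sizes / sizes.sum()                 # weight by local dataset size
global_params = sum(w * p for w, p in zip(weights, local_params))
print(global_params)  # aggregated global model for the next iteration
```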
AI technologies (which encompass ML technologies) may be applied in communication, including AI-based communication in the physical layer and/or AI-based communication in the MAC layer. For the physical layer, the AI communication may aim to optimize component design and/or improve the algorithm performance. For example, AI may be applied in relation to the implementation of: channel coding, channel modelling, channel estimation, channel decoding, modulation, demodulation, MIMO, waveform, multiple access, physical layer element parameter optimization and update, beam forming, tracking, sensing, and/or positioning, etc. For the MAC layer, the AI communication may aim to utilize the AI capability for learning, prediction, and/or making a decision to solve a complicated optimization problem with possible better strategy and/or optimal solution, e.g. to optimize the functionality in the MAC layer. For example, AI may be applied to implement: intelligent TRP management, intelligent beam management, intelligent channel resource allocation, intelligent power control, intelligent spectrum utilization, intelligent MCS, intelligent HARQ strategy, and/or intelligent transmission/reception mode adaption, etc.
An AI architecture may involve multiple nodes, where the multiple nodes may possibly be organized in one of two modes, i.e., centralized and distributed, both of which may be deployed in an access network, a core network, or an edge computing system or third party network. A centralized training and computing architecture is restricted by possibly large  communication overhead and strict user data privacy. A distributed training and computing architecture may comprise several frameworks, e.g., distributed machine learning and federated learning. In some embodiments, an AI architecture may comprise an intelligent controller which can perform as a single agent or a multi-agent, based on joint optimization or individual optimization. New protocols and signaling mechanisms are desired so that the corresponding interface link can be personalized with customized parameters to meet particular requirements while minimizing signaling overhead and maximizing the whole system spectrum efficiency by personalized AI technologies.
New protocols and signaling mechanisms are provided for operating within and switching between different modes of operation, including between AI and non-AI modes, and for measurement and feedback to accommodate the different possible measurements and information that may need to be fed back, depending upon the implementation.
An air interface that uses AI as part of the implementation, e.g., to optimize one or more components of the air interface, will be referred to herein as an "AI enabled air interface". In some embodiments, there may be two types of AI operation in an AI enabled air interface: both the network and the UE implement learning; or learning is only applied by the network.
AI-related communications between the system node 120 and one or more UEs 110 may be via an interface such as the Uu link in 5G and 4G network systems, or may be via an AI-dedicated air interface (e.g., using an AI-related protocol on an AI-related logical layer, as discussed herein) . For example, AI-related communications between a system node 120 and a UE 110 served by the system node 120 may be over an AI-dedicated air interface, whereas non-AI-related communications may be over a 5G or 4G Uu link.
FIG. 1G illustrates a schematic diagram of an example model 100G in accordance with some example embodiments of the present disclosure. The pre-trained big model is also referred to as a global model or a foundation model. The pre-trained big model may be deployed at the core network (CN) or a third party to support multiple tasks. The pre-trained big model 100G is utilized here as a basis for AI tasks at the radio access network (RAN) side.
As illustrated in FIG. 1G, the model 100G is pre-trained for a plurality of tasks. When task-1 is input to the pre-trained big model, an inference-1 corresponding to the input task-1 can be obtained. Similarly, when task-2 is input to the pre-trained big model, an inference-2 corresponding to the input task-2 can be obtained. The same applies to each subsequent task: when task-N (N is an integer larger than 2) is input to the pre-trained big model, an inference-N corresponding to the input task-N can be obtained.
More and more AI tasks are expected in the future network. If, for each AI task, a RAN node (e.g., a BS) trains its own model, the resulting fragmented models are too expensive (because individual hardware would need to be prepared for each AI model) and not efficient. In this circumstance, the RAN side can obtain a basic customized model from the global model (e.g., the customized model is a smaller model than the global model) and perform fine-tuning on the local model. This is the basic technical concept of some embodiments of this disclosure, and will be described later in more detail with reference to FIGS. 2-8.
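As a non-normative sketch of this concept, the following example (using PyTorch; all layer sizes and names are assumptions) derives a smaller customized model by reusing a frozen block of the global model as a feature extractor and fine-tuning only a small task-specific head on RAN-local data:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: derive a smaller customized model from a pre-trained
# global (foundation) model and fine-tune it at the RAN, instead of training
# a separate full model per AI task.

global_model = nn.Sequential(  # stands in for the pre-trained big model
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 16),
)

# Customized model: reuse the first (frozen) block of the global model as a
# feature extractor and attach a small task-specific head trained locally.
backbone = nn.Sequential(*list(global_model.children())[:2])
for p in backbone.parameters():
    p.requires_grad = False     # keep the global knowledge fixed
head = nn.Linear(256, 4)        # small task-specific head
custom_model = nn.Sequential(backbone, head)

# Local fine-tuning on RAN-side data (random tensors as placeholders).
opt = torch.optim.SGD(head.parameters(), lr=1e-2)
x, y = torch.randn(32, 64), torch.randint(0, 4, (32,))
for _ in range(5):
    loss = nn.functional.cross_entropy(custom_model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```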
To support the use of AI in a wireless network, an appropriate AI framework is needed. However, 5G considers only AI use cases that improve network performance and does not support the network providing AI services to the UE. Considering that there are massive numbers of devices in the network which have data and computing capability, the network could provide AI services by distributed training/inference. In addition, sensing functionality is not considered in the current 5G AI framework.
The present disclosure defines an AI/ML functional framework for integrated AI and sensing. FIG. 2 illustrates a flowchart of an example method 200 implemented in an AI/ML functional framework in accordance with some example embodiments of the present disclosure. Only for the purpose of discussion, the method 200 will be described with reference to FIGS. 1A-1G. The method 200 may involve an AI/ML functional framework, which will be described in more detail with reference to FIG. 3.
As illustrated in FIG. 2, the method 200 includes, at 210, performing at least one operation based on an AI/ML functional framework, for example, the AI/ML functional framework 300 as illustrated in FIG. 3. In this way, the AI/ML functional framework 300 may be implemented to provide integrated AI and sensing.
FIG. 3 illustrates a schematic AI/ML functional framework 300 in accordance with some example embodiments of the present disclosure. The AI/ML functional framework 300 may include a first function 340, a second function 345, a third function 350, a fourth function 355 and at least one function 360 configured to operate based on sensing data. The first function 340 may be configured to perform model training of at least one of an AI/ML model, an AI/ML sub-model, an AI/ML functionality or an AI/ML sub-functionality. The second function 345 may be configured to perform management of the AI/ML model, AI/ML sub-model, AI/ML functionality or AI/ML sub-functionality. The third function 350 may be configured to perform inference of the AI/ML model to obtain inference results. The fourth function 355 may be configured to store the AI/ML model. Here, when referring to "a specific function is configured to do something", it may mean that the specific function is configured, for example, by a base station device or a core network device, to do something. Alternatively or in addition, it may mean that the specific function is pre-defined, for example, in a specification (for example, in a 3GPP specification) , to do something. Herein, sensing data collection can also be referred to as data collection, 3GPP sensing data collection, 3GPP and non-3GPP sensing data collection, data measurement, sensing measurement, etc. Sensing modeling can also be referred to as sensing results processing, sensing information processing, sensing data processing, sensing measurement processing, environment information processing, object information processing, or environment and object information processing. Sensing results storage can also be referred to as sensing storage, RAN storage, local RAN storage, or RAN and core network storage. Sensing management can also be referred to as sensing control, sensing results management, or simply management. Sensing application can also be referred to as sensing action, sensing in RAN, sensing usage, sensing use cases, sensing assisted communication, sensing service, sensing assisted communication and sensing service, etc.
The first function 340 may be further configured to perform validation or testing of the AI/ML model, AI/ML sub-model, AI/ML functionality or AI/ML sub-functionality. Alternatively or in addition, the first function 340 may be further configured to perform data preparation based on data received by the first function 340. In this way, the first function 340 can provide a more accurate AI/ML model, which in turn can provide more accurate inference results, thus the reliability of the AI/ML model can be improved.
The second function 345 may be further configured to perform control of the model training of the at least one of AI/ML model, AI/ML sub-model, AI/ML functionality or AI/ML sub-functionality. Alternatively or in addition, the second function 345 may be further configured to perform control of the inference of the AI/ML model. Alternatively or in addition, the second function 345 may be further configured to monitor output of the AI/ML model. In this way, the second function 345 can facilitate the first function to provide a more accurate AI/ML model, which in turn can provide more accurate inference results, thus the reliability of the AI/ML model can be improved.
The third function 350 may be further configured to perform an action based on the inference results. Alternatively or in addition, the third function 350 may be further configured to perform data preparation based on data received by the third function 350. In this way, the third function 350 can perform the action based on the inference results of the AI/ML model, improving the processing efficiency and reliability with the AI/ML model.
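To make the interplay among the four core functions concrete, the following illustrative-only sketch wires them together in Python; every class, method and check here is an assumption for exposition, not part of any specification:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

# Illustrative-only sketch of the data path between the four core functions;
# all names are hypothetical.

@dataclass
class ModelStorage:                                   # fourth function
    models: Dict[str, Any] = field(default_factory=dict)

@dataclass
class ModelTraining:                                  # first function
    def train(self, training_data: List[float]) -> Any:
        # data preparation, training and validation/testing would happen here
        return {"weights": sum(training_data) / max(len(training_data), 1)}

@dataclass
class Inference:                                      # third function
    model: Any = None
    def infer(self, inference_data: List[float]) -> float:
        return self.model["weights"] * sum(inference_data)

@dataclass
class Management:                                     # second function
    def monitor(self, inference_result: float) -> bool:
        # crude output check; a real monitor would compare against KPIs
        return abs(inference_result) < 1e6

# Wiring: train -> store -> deliver to inference -> monitor the output.
storage, training = ModelStorage(), ModelTraining()
inference, management = Inference(), Management()
storage.models["m1"] = training.train([0.1, 0.2, 0.3])
inference.model = storage.models["m1"]
performance_ok = management.monitor(inference.infer([1.0, 2.0]))
```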
The first function 340 may transmit the trained AI/ML model to the fourth function 355. Alternatively or in addition, the first function 340 may receive AI/ML assistance information from the second function 345. Alternatively or in addition, the first function 340 may receive, from the second function 345, a performance level of the AI/ML model and a request to retrain the AI/ML model. In this way, the first function 340 can provide a more accurate (re) trained AI/ML model based on the AI/ML assistance information and/or the performance level of the AI/ML model. The (re) trained AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the (re) trained AI/ML model can be improved.
The second function 345 may receive the inference results from the third function 350. In this way, the second function 345 can facilitate the first function to provide a more accurate retrained/updated AI/ML model based on the inference results of the current AI/ML model. The retrained/updated AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the AI/ML model can be improved.
The second function 345 may determine that a performance level of the AI/ML model is below a threshold level based on the inference results received from the third function 350. Based on determining that the performance level is below the threshold level, the second function 345 may further transmit, to the first function 340, the performance level of the AI/ML model and a request to retrain the AI/ML model. In this way, the second function 345 can request the first function 340 to retrain the AI/ML model in response to the performance level of the currently used AI/ML model falling below the threshold level. In this sense, the second function 345 can facilitate the first function 340 in providing a more accurate retrained/updated AI/ML model based on the inference results of the current AI/ML model. The retrained/updated AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model.
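A minimal sketch of this monitoring behaviour, assuming accuracy against ground truth as the performance metric and a hypothetical retrain-request callback, might look as follows:

```python
# Sketch of the monitoring behaviour described above (names assumed): the
# management function scores inference results against ground truth and,
# when the performance level drops below a threshold, issues a retrain
# request toward the training function.

PERFORMANCE_THRESHOLD = 0.9

def monitor_and_maybe_retrain(results, ground_truth, request_retrain):
    correct = sum(r == g for r, g in zip(results, ground_truth))
    performance_level = correct / len(results)
    if performance_level < PERFORMANCE_THRESHOLD:
        # the request carries the measured performance level, as described
        request_retrain(performance_level)
    return performance_level

# Example: 3 of 4 predictions correct -> 0.75 < 0.9, so a retrain is requested.
level = monitor_and_maybe_retrain(
    [1, 0, 1, 1], [1, 0, 0, 1],
    request_retrain=lambda p: print(f"retrain requested, performance={p:.2f}"),
)
```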
The second function 345 may transmit AI/ML assistance information to the first function 340. Alternatively or in addition, the second function 345 may transmit, to the third function 350, a switching indication to switch from the AI/ML model to another AI/ML model. Alternatively or in addition, the second function 345 may transmit, to the third function 350, a fallback indication to apply a non-AI/ML model instead of the AI/ML model. Alternatively or in addition, the second function 345 may transmit, to the third function 350, an activating indication to activate one or more of a plurality of candidate AI/ML models. Alternatively or in addition, the second function 345 may transmit, to the third function 350, a deactivating indication to deactivate one or more of the plurality of candidate AI/ML models. In this way, the second function 345 can provide the AI/ML assistance information to the first function 340 to obtain a more accurate (re) trained AI/ML model based on the AI/ML assistance information. The retrained/updated AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model. Also, the second function 345 can change/switch/ (de) select a desired AI/ML model for future use, improving the flexibility in management of the third function 350 and further the whole AI/ML functional framework 300.
The second function 345 may transmit, to the fourth function 355, a request that the fourth function 355 transmit the AI/ML model to the third function 350. In this way, the second function 345 can cause the (re) trained AI/ML model to be delivered to the third function 350 for future use, while the retrained/updated AI/ML model can provide more accurate inference results than the currently used AI/ML model at the third function 350. Therefore, the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model.
The third function 350 may transmit the inference results to the second function 345. In this way, the second function 345 can determine whether the performance level of the AI/ML model is below a threshold level based on the inference results received from the third function 350. If so, the second function 345 can request the first function 340 to retrain the AI/ML model accordingly. In this sense, the third function 350 can help the second function 345 to facilitate the first function 340 to provide a more accurate retrained/updated AI/ML model based on the inference results of the current AI/ML model. The retrained/updated AI/ML model can, in turn, provide more accurate inference results as compared with the currently used AI/ML model at the third function 350, thus the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model.
The third function 350 may receive, from the second function 345, a switching indication to switch from the AI/ML model to another AI/ML model. Alternatively or in addition, the third function 350 may receive, from the second function 345, a fallback indication to apply a non-AI/ML model instead of the AI/ML model. Alternatively or in addition, the third function 350 may receive, from the second function 345, an activating indication to activate one or more of a plurality of candidate AI/ML models. Alternatively or in addition, the third function 350 may receive, from the second  function 345, a deactivating indication to deactivate one or more of the plurality of candidate AI/ML models. In this way, the third function 350 can turn to use a desired AI/ML model indicated by the second function 345, improving the flexibility in management on the third function 350 and further the whole AI/ML functional framework 300.
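One possible (purely illustrative) encoding of these four indications and their handling at the inference side is sketched below; the enum values and state layout are assumptions:

```python
from enum import Enum, auto

# Hypothetical encoding of the management indications described above, as
# they might be handled at the inference function.

class Indication(Enum):
    SWITCH = auto()      # switch to another AI/ML model
    FALLBACK = auto()    # apply a non-AI/ML (conventional) method instead
    ACTIVATE = auto()    # activate candidate AI/ML model(s)
    DEACTIVATE = auto()  # deactivate candidate AI/ML model(s)

def handle_indication(state, indication, model_ids=()):
    if indication is Indication.SWITCH:
        state["active"] = model_ids[0]
    elif indication is Indication.FALLBACK:
        state["active"] = "non-ai-baseline"
    elif indication is Indication.ACTIVATE:
        state["candidates"].update(model_ids)
    elif indication is Indication.DEACTIVATE:
        state["candidates"].difference_update(model_ids)
    return state

# Usage: the management function tells inference to switch to model-b.
state = {"active": "model-a", "candidates": {"model-a", "model-b"}}
state = handle_indication(state, Indication.SWITCH, ["model-b"])
```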
The third function 350 may receive the AI/ML model from the fourth function 355. In this way, the third function 350 can use the retrained/updated AI/ML model to provide more accurate inference results as compared with the currently used AI/ML model at the third function 350, thus the reliability of the retrained/updated AI/ML model can be improved as compared with the currently used AI/ML model.
In some example embodiments, the AI/ML functional framework 300 may further comprise a fifth function configured to collect non-sensing data. In this way, with the collected non-sensing data, the first function 340 can obtain a more accurate AI/ML model, and the second function 345 and third function 350 can also work more accurately.
The at least one function 360 may further comprise a sixth function configured to collect radio frequency (RF) sensing data, a seventh function configured to collect non-RF sensing data, and an eighth function configured to obtain fused data based on the RF sensing data and the non-RF sensing data. In this way, with sensing data, the first function 340 can obtain a more accurate AI/ML model, AI/ML functionalities of the AI/ML functional framework 300 can be enhanced by the sensing data, and the second function 345 and third function 350 can also work more accurately. The RF sensing may be 3rd generation partnership project (3GPP) defined RF sensing or non-3GPP defined RF sensing. In this way, sensing data can be collected through RF sensing, for example, either 3GPP defined RF sensing or non-3GPP defined RF sensing.
The seventh function may be further configured to collect the non-RF sensing data using at least one of light detection and ranging (LIDAR) , non-3GPP defined RF sensing, wireless fidelity (WiFi) sensing, camera (s) , video (s) , or sensor (s) . In this way, the non-RF sensing data can be collected in various ways like LIDAR, non-3GPP defined RF sensing, WiFi sensing, camera (s) , video (s) , or sensor (s) . Therefore, it becomes easier and faster to obtain enough non-RF sensing data to be used by the first function, second function and third function.
The first function 340 may further receive first input data from at least one of the fifth function, the sixth function, the seventh function or the eighth function. In this way, a (re) trained AI/ML model can be (re) trained with the first input data as the training data. Since the first input data is from at least one of the fifth function, the sixth function, the seventh function or the eighth function, which implies the first input data may include sensing data, AI/ML functionalities of the AI/ML functional framework 300 can be enhanced by the sensing data. At the same time, with the large-quantity sensing data (including RF sensing data and/or non-RF sensing data, where the RF sensing data may include 3GPP defined RF sensing data and/or non-3GPP defined RF sensing data) , the training process of the (re) trained AI/ML model can be shortened and the accuracy of the (re) trained AI/ML model can be more accurate.
The second function 345 may receive second input data from at least one of the fifth function, the sixth function, the seventh function or the eighth function. In this way, the second function 345 can perform management of the AI/ML model based on the second input data. Since the second input data is from at least one of the fifth function, the sixth function, the seventh function or the eighth function, which implies the second input data may include sensing data, AI/ML functionalities of the AI/ML functional framework 300 can be enhanced by the sensing data. At the same time, with the large-quantity sensing data (including RF sensing data and/or non-RF sensing data, where the RF sensing data may include 3GPP defined RF sensing data and/or non-3GPP defined RF sensing data) , the management of the AI/ML model can be more efficient and accurate.
The third function 350 may receive third input data from at least one of the fifth function, the sixth function, the seventh function or the eighth function. In this way, the third function can perform inference of the AI/ML model based on the third input data. Since the third input data is from at least one of the fifth function, the sixth function, the seventh function or the eighth function, which implies the third input data may include sensing data, thus AI/ML functionalities of the AI/ML functional framework 300 can be enhanced by the sensing data.
The fifth function may transmit the non-sensing data to at least one of the first function 340, the second function 345 or the third function 350. In this way, the non-sensing data can be utilized by the first function 340 to train the AI/ML model to obtain a more accurate AI/ML model. At the same time, the non-sensing data can help the second function 345 to manage the AI/ML model more reliably and help the third function 350 to perform inference of the AI/ML model more accurately and thus reliably.
The sixth function may transmit the RF sensing data to at least one of the first function 340, the second function 345 or the third function 350. In this way, the RF sensing data can be utilized by the first function 340 to train the AI/ML model to obtain a more accurate AI/ML model. At the same time, the RF sensing data can help the second function 345 to manage the AI/ML model more reliably and help the third function 350 to perform inference of the AI/ML model more accurately and thus reliably. Meanwhile, in the sense of sensing for AI/ML, the RF sensing data can facilitate the first function 340, second function 345 and third function 350 to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework 300.
The seventh function may transmit the non-RF sensing data to at least one of the first function 340, the second function 345 or the third function 350. In this way, the non-RF sensing data can be utilized by the first function 340 to train the AI/ML model to obtain a more accurate AI/ML model. At the same time, the non-RF sensing data can help the second function 345 to manage the AI/ML model more reliably and help the third function 350 to perform inference of the AI/ML model more accurately and thus reliably. Meanwhile, in the sense of sensing for AI/ML, the non-RF sensing data can facilitate the first function 340, second function 345 and third function 350 to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework 300.
The eighth function may receive the RF sensing data from the sixth function and receive the non-RF sensing data from the seventh function. Alternatively or in addition, the eighth function may perform data processing on the received RF sensing data and non-RF sensing data to obtain the fused data. In this way, the fused data can be obtained, which is more accurate than either one of the RF sensing data and the non-RF sensing data, and is less in quantity than the sum of the RF sensing data and the non-RF sensing data.
The eighth function may transmit the fused data to at least one of the first function 340, the second function 345 or the third function 350. In this way, the fused data then can be utilized by the first function 340 to train the AI/ML model to obtain a more accurate AI/ML model. At the same time, the fused data can help the second function 345 to manage the AI/ML model more reliably and help the third function 350 to perform inference of the AI/ML model more accurately and thus reliably. Meanwhile, in the sense of sensing for AI/ML, the fused data can facilitate the first function 340, second function 345 and third function 350 to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework 300.
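As one possible illustration of such fusion, the sketch below aligns RF and non-RF sensing samples by timestamp and concatenates their features; the alignment tolerance and data layout are assumptions, not prescribed behaviour:

```python
import numpy as np

# Sketch of a simple fusion step (assumed behaviour): align RF and non-RF
# sensing samples by timestamp and concatenate their features, so downstream
# training/management/inference consume one fused record per instant.

def fuse(rf_samples, non_rf_samples, tolerance=0.05):
    # each input: list of (timestamp_seconds, feature_vector)
    fused = []
    for t_rf, f_rf in rf_samples:
        match = min(non_rf_samples, key=lambda s: abs(s[0] - t_rf))
        if abs(match[0] - t_rf) <= tolerance:
            fused.append((t_rf, np.concatenate([f_rf, match[1]])))
    return fused

rf = [(0.00, np.array([1.0, 2.0])), (0.10, np.array([1.1, 2.1]))]
cam = [(0.01, np.array([7.0])), (0.12, np.array([7.5]))]
fused = fuse(rf, cam)  # two fused records, each with 3 features
```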
In some example embodiments, the at least one function may comprise a ninth function configured to collect the sensing data, and a tenth function configured to obtain fused data based on the non-sensing data and the sensing data. In this way, the fused data can be obtained which is more accurate than either one of the non-sensing data and the sensing data, and is less in quantity than the sum of the non-sensing data and the sensing data.
The at least one function 360 may comprise at least one of an eleventh function configured to obtain a sensing model or a sensing result, a twelfth function configured to perform management of the sensing model or sensing result, or a thirteenth function configured to assist communication or determine an event based on the sensing model or sensing result. In this way, a sensing model can be obtained and used to assist communication or determine an event based on the sensing model.
The at least one function 360 may further comprise a fourteenth function configured to store the sensing model or the sensing result. In this way, the sensing model can be stored in the fourteenth function which is separate from the fourth  function 355, and the operations involving the storage and retrieval of the AI/ML model and the sensing model can be performed separately in a decoupled manner.
The first function 340 may further receive first input data from at least one of the fifth function, the ninth function or the tenth function. In this way, a (re) trained AI/ML model can be (re) trained with the first input data as the training data. Since the first input data is from at least one of the fifth function, the ninth function or the tenth function, which implies the first input data may include non-sensing data and sensing data, AI/ML functionalities of the AI/ML functional framework can be enhanced by the non-sensing data and the sensing data. At the same time, with the large-quantity sensing data, the training process of the (re) trained AI/ML model can be shortened and the accuracy of the (re) trained AI/ML model can be more accurate.
The second function 345 may further receive second input data from at least one of the fifth function, the ninth function or the tenth function. In this way, the second function 345 can perform management of the AI/ML model based on the second input data. Since the second input data is from at least one of the fifth function, the ninth function or the tenth function, which implies the second input data may include non-sensing data and sensing data, AI/ML functionalities of the AI/ML functional framework can be enhanced by the non-sensing data and the sensing data. At the same time, with the large-quantity sensing data, the management of the AI/ML model can be more efficient and accurate.
The third function may further receive third input data from at least one of the fifth function, the ninth function or the tenth function. In this way, the third function can perform inference of the AI/ML model based on the third input data. Since the third input data is from at least one of the fifth function, the ninth function or the tenth function, which implies the third input data may include non-sensing data and sensing data, where the non-sensing data can be utilized by the third function to perform inference of the AI/ML model more accurately and reliably.
The fifth function may transmit the non-sensing data to at least one of the first function 340, the second function 345 or the third function 350, and at least one of the eleventh function, the twelfth function or the thirteenth function. In this way, the non-sensing data can be utilized by the first function 340 to train the AI/ML model to obtain a more accurate AI/ML model. At the same time, the non-sensing data can help the second function 345 to manage the AI/ML model more reliably and help the third function 350 to perform inference of the AI/ML model more accurately and thus reliably. Further, the non-sensing data can be utilized by the eleventh function to train the sensing model to obtain a more accurate sensing model. At the same time, the non-sensing data can help the twelfth function to manage the sensing model more reliably and help the thirteenth function to perform inference of the sensing model more accurately and thus reliably.
The ninth function may transmit the sensing data to at least one of the first function 340, the second function 345 or the third function 350, and at least one of the eleventh function, the twelfth function or the thirteenth function. In this way, the sensing data can be utilized by the first function 340 to train the AI/ML model to obtain a more accurate AI/ML model. At the same time, the sensing data can help the second function 345 to manage the AI/ML model more reliably and help the third function 350 to perform inference of the AI/ML model more accurately and thus reliably. Meanwhile, in the sense of sensing for AI/ML, the sensing data can facilitate the first function 340, second function 345 and third function 350 to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework. Further, the sensing data can be utilized by the eleventh function to train the sensing model to obtain a more accurate sensing model. At the same time, the sensing data can help the twelfth function to manage the sensing model more reliably and help the thirteenth function to perform inference of the sensing model more accurately and thus reliably.
The tenth function may receive the non-sensing data from the fifth function. Alternatively or in addition, the tenth function may receive the sensing data from the ninth function. Alternatively or in addition, the tenth function may perform data processing on the received non-sensing data and sensing data to obtain the fused data. In this way, the fused data can be obtained, which is more accurate than either one of the non-sensing data and the sensing data, and is less in quantity than the sum of the non-sensing data and the sensing data.
The tenth function may transmit the fused data to at least one of the first function 340, the second function 345 or the third function 350, and at least one of the eleventh function, the twelfth function or the thirteenth function. In this way, the fused data then can be utilized by the first function 340 to train the AI/ML model to obtain a more accurate AI/ML model. At the same time, the fused data can help the second function 345 to manage the AI/ML model more reliably and help the third function 350 to perform inference of the AI/ML model more accurately and thus reliably. Meanwhile, in the sense of sensing for AI/ML, the fused data can facilitate the first function 340, second function 345 and third function 350 to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework. Further, the fused data then can be utilized by the eleventh function to train the sensing model to obtain a more accurate sensing model. At the same time, the fused data can help the twelfth function to manage the sensing model more reliably and help the thirteenth function to perform inference of the sensing model more accurately and thus reliably.
The eleventh function may be further configured to perform data processing based on fourth input data obtained from at least two of the fifth function, the ninth function or the tenth function. In this way, based on the fourth input data as the training data for the sensing model, the eleventh function can train the sensing model more accurately.
The model training of the at least one of a sensing model, sensing sub-model, sensing functionality or sensing sub-functionality may comprise at least one of environment reconstruction, channel reconstruction, target reconstruction, digital twin, or object detection. In this way, the sensing model can be trained more accurately.
The twelfth function may be further configured to perform control of the model training of the at least one of sensing model, sensing sub-model, sensing functionality or sensing sub-functionality. In some cases, the twelfth function may be further configured to perform control of the inference of the sensing model, or monitor output of the sensing model. In this way, the twelfth function can facilitate the eleventh function to provide a more accurate sensing model, which can produce more accurate sensing inference results, thus the reliability of the sensing model can be improved.
The thirteenth function may be further configured to perform data preparation based on sixth input data obtained from at least one of the fifth function, the ninth function or the tenth function. In this way, data used in processing by the thirteenth function can be more organized as compared with the case where the sixth input data is used in the processing without data preparation, thus the processing by the thirteenth function can be more accurate with a higher speed.
The eleventh function may receive the fourth input data from at least one of the fifth function, the ninth function or the tenth function. Alternatively or in addition, the eleventh function may receive from the twelfth function, a performance level of the sensing model and a request to retrain the sensing model. Alternatively or in addition, the eleventh function may receive the sensing inference results from the thirteenth function. Alternatively or in addition, the eleventh function may receive sensing information from the twelfth function. Alternatively or in addition, the eleventh function may transmit the trained or retrained sensing model to the fourteenth function. In this way, the eleventh function can provide a more accurate (re) trained sensing model based on the fourth input data and/or the performance level of the sensing model and/or the sensing information and/or the sensing inference results. The (re) trained sensing model can, in turn, provide more accurate inference results, thus the reliability of the (re) trained sensing model can be improved.
The eleventh function may receive the inference results from the third function 350. In this way, in the sense of AI/ML for sensing, the inference results of the AI/ML model can help the eleventh function to improve the accuracy and performance of the (re) trained AI/ML model and further the AI/ML functional framework.
The twelfth function may receive fifth input data from at least one of the fifth function, the ninth function or the tenth function. Alternatively or in addition, the twelfth function may receive the sensing inference results from the thirteenth function. In this way, the twelfth function can facilitate the eleventh function to provide a more accurate sensing model, which in turn can provide more accurate sensing inference results, thus the reliability of the sensing model can be improved.
The twelfth function may determine that a performance level of the sensing model is below a threshold level based on the sensing inference results received from the thirteenth function. Alternatively or in addition, based on determining that the performance level is below the threshold level, the twelfth function may transmit, to the eleventh function, the performance level of the sensing model and a request to retrain the sensing model. In this way, the twelfth function can request the eleventh function to retrain the sensing model in response to the performance level of the currently used sensing model falling below the threshold level. In this sense, the twelfth function can facilitate the eleventh function in providing a more accurate retrained/updated sensing model based on the sensing inference results of the current sensing model. The retrained/updated sensing model can, in turn, provide more accurate sensing inference results, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
The twelfth function may transmit sensing information to the eleventh function. Alternatively or in addition, the twelfth function may transmit, to the thirteenth function, a switching indication to switch from the sensing model to another sensing model. Alternatively or in addition, the twelfth function may transmit, to the thirteenth function, a fallback indication to apply a non-sensing model instead of the sensing model. Alternatively or in addition, the twelfth function may transmit, to the thirteenth function, an activating indication to activate one or more of a plurality of candidate sensing models. Alternatively or in addition, the twelfth function may transmit, to the thirteenth function, a deactivating indication to deactivate one or more of the plurality of candidate sensing models. In this way, the twelfth function can provide the sensing information to the eleventh function to obtain a more accurate (re) trained sensing model based on the sensing information. The retrained/updated sensing model can, in turn, provide more accurate sensing inference results, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model. Also, the twelfth function can change/switch/ (de) select a desired sensing model for future use, improving the flexibility in management on the thirteenth function and further the whole AI/ML functional framework.
The twelfth function may transmit, to the fourteenth function, a request that the fourteenth function transmits the sensing model to the thirteenth function. In this way, the twelfth function can request the fourteenth function to transmit the (re) trained sensing model to the thirteenth function for future use, while the retrained/updated sensing model can provide more accurate sensing inference results than the currently used sensing model at the thirteenth function. Therefore, the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
The twelfth function may receive the inference results from the third function 350. In this way, in the sense of AI/ML for sensing, the inference results can facilitate the twelfth function to improve sensing functionalities of the sensing model and further the AI/ML functional framework 300.
The thirteenth function may receive sixth input data from at least one of the fifth function, the ninth function or the tenth function. Alternatively or in addition, the thirteenth function may transmit the sensing inference results to the twelfth function. In this way, with the sixth input data, the thirteenth function can determine the sensing inference results, and send the sensing inference results to the twelfth function. With the sensing inference results, the twelfth function can determine whether the performance level of the sensing model is below a threshold level. If so, the twelfth function can request the eleventh function to retrain the sensing model accordingly. In this sense, the thirteenth function can help the twelfth function to facilitate the eleventh function in providing a more accurate retrained/updated sensing model based on the sensing inference results. The retrained/updated sensing model can, in turn, provide more accurate sensing inference results as compared with the currently used sensing model at the thirteenth function, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
The thirteenth function may transmit the sensing inference results to at least one of the first function 340, the second function 345 or the third function 350. Alternatively or in addition, the thirteenth function may receive the sensing model from the fourteenth function. In this way, in the sense of sensing for AI/ML, the sensing inference results can facilitate the first function 340, the second function 345 or the third function 350 to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework.
The thirteenth function may receive, from the twelfth function, a switching indication to switch from the sensing model to another sensing model. Alternatively or in addition, the thirteenth function may receive, from the twelfth function, a fallback indication to apply a non-sensing model instead of the sensing model. Alternatively or in addition, the thirteenth function may receive, from the twelfth function, an activating indication to activate one or more of a plurality of candidate sensing models. Alternatively or in addition, the thirteenth function may receive, from the twelfth function, a deactivating indication to deactivate one or more of the plurality of candidate sensing models. In this way, the thirteenth function can turn to use a desired sensing model indicated by the twelfth function, improving the flexibility in management on the thirteenth function and further the whole AI/ML functional framework.
The fourteenth function may receive the trained sensing model from the eleventh function. Alternatively or in addition, based on receiving, from the twelfth function, a request that the fourteenth function transmits the sensing model to the thirteenth function, the fourteenth function may transmit the sensing model to the thirteenth function. In this way, the fourteenth function can provide the sensing model to the thirteenth function, such that the thirteenth function can use the (re) trained sensing model to provide more accurate sensing inference results as compared with the currently used sensing model at the thirteenth function, thus the reliability of the (re) trained sensing model can be improved as compared with the currently used sensing model.
The request may comprise at least one of a model ID of the requested sensing model, a sensing functionality ID for the requested sensing functionality, or a sensing performance requirement indicating the requested sensing performance. In this way, a sensing model desired by the twelfth function to be used at the thirteenth function can be requested using various parameters, improving the flexibility and usability of the AI/ML functional framework.
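For illustration, such a request might be represented as the following structure, where all field names are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative structure (field names assumed) for the model delivery request
# sent from the management function to the storage function.

@dataclass
class SensingModelRequest:
    model_id: Optional[str] = None                   # ID of the requested sensing model
    sensing_functionality_id: Optional[str] = None   # ID of the requested functionality
    performance_requirement: Optional[float] = None  # e.g., minimum required accuracy

req = SensingModelRequest(model_id="env-recon-v2", performance_requirement=0.95)
```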
In some example embodiments, the AI/ML functional framework 300 may further comprise a fifteenth function configured to perform sensing inference to obtain a sensing result. At the same time, the first function 340 may be further configured to perform model training of at least a sensing model, a sensing sub-model, a sensing functionality or a sensing sub-functionality, and the second function 345 may be further configured to perform management of the at least sensing model, sensing sub-model, sensing functionality or sensing sub-functionality. In this way, the first function 340 can train not only an AI/ML model but also a sensing model, and the second function 345 can monitor not only the AI/ML model but also the sensing model. Meanwhile, the fifteenth function, which is in charge of sensing inference of the sensing model, is separate from the third function 350, which is in charge of model inference of the AI/ML model.
In some example embodiments, the at least one function 360 may further comprise a sixteenth function configured to obtain fused data. The fused data may be obtained by processing on non-sensing data and sensing data. In this way, the fused data, which is less in quantity than the sum of the non-sensing data and the sensing data, can be used in future processing to improve data accuracy and decrease data processing volume.
In this case, the first function 340 may be further configured to perform data preparation based on seventh input data obtained from the sixteenth function. In this way, data used in processing by the first function 340 can be more organized as compared with the case where the seventh input data is used in the processing without data preparation, thus the processing by the first function 340 can be more accurate with a higher speed.
At the same time, the second function 345 may be further configured to perform control of the model training of the at least one of sensing model, sensing sub-model, sensing functionality or sensing sub-functionality. Alternatively or in addition, the second function 345 may perform control of the sensing inference of the sensing model. Alternatively or in addition, the second function 345 may monitor output of the sensing model. In this way, the second function 345, which performs management of the AI/ML model, can also perform management of the sensing model (including model training and inference of the sensing model) .
The first function 340 may further receive the seventh input data from the sixteenth function. Alternatively or in addition, the first function 340 may receive, from the second function 345, a performance level of the sensing model and a  request to retrain the sensing model. Alternatively or in addition, the first function 340 may receive sensing information from the second function 345. Alternatively or in addition, the first function 340 may transmit the trained or retrained sensing model to the fourth function 355. In this way, the first function 340 can provide a more accurate (re) trained sensing model based on the seventh input data and/or the performance level of the sensing model and/or the sensing information. The (re) trained sensing model can, in turn, provide more accurate inference results, thus the reliability of the (re) trained sensing model can be improved.
The second function 345 may further receive eighth input data from the sixteenth function. Alternatively or in addition, the second function 345 may receive the sensing inference results from the fifteenth function. In this way, the second function 345 can facilitate the first function 340 to provide a more accurate retrained/updated AI/ML model and/or sensing model based on the eighth input data and/or the sensing inference results of the current sensing model. In the sense of sensing for AI/ML, the sensing inference results can facilitate the second function 345 to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework. More specifically, the retrained/updated AI/ML model can, in turn, provide more accurate inference results, thus the reliability of the AI/ML model can be improved. Also, the retrained/updated sensing model can, in turn, provide more accurate inference results, thus the reliability of the sensing model can be improved.
The second function 345 may further determine that a performance level of the sensing model is below a threshold level based on the sensing inference results received from the fifteenth function. Alternatively or in addition, based on determining that the performance level is below the threshold level, the second function 345 may transmit, to the first function 340, the performance level of the sensing model and a request to retrain the sensing model. In this way, the second function 345 can request the first function 340 to retrain the sensing model in response to the performance level of the currently used sensing model becoming below a threshold level. In this sense, the second function 345 can facilitate the first function 340 to provide a more accurate retrained/updated sensing model based on the inference results of the current sensing model. The retrained/updated sensing model can, in turn, provide more accurate inference results, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
The second function 345 may further transmit sensing information to the first function 340. Alternatively or in addition, the second function 345 may transmit, to the fifteenth function, a switching indication to switch from the sensing model to another sensing model. Alternatively or in addition, the second function 345 may transmit, to the fifteenth function, a fallback indication to apply a non-sensing model instead of the sensing model. Alternatively or in addition, the second function 345 may transmit, to the fifteenth function, an activating indication to activate one or more of a plurality of candidate sensing models. Alternatively or in addition, the second function 345 may transmit, to the fifteenth function, a deactivating indication to deactivate one or more of the plurality of candidate sensing models. In this way, the second function 345 can provide the sensing information to the first function 340 to obtain a more accurate (re) trained sensing model based on the sensing information. The retrained/updated sensing model can, in turn, provide more accurate sensing inference results, thus the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model. Also, the second function 345 can change/switch/ (de) select a desired sensing model for future use, improving the flexibility in management on the fifteenth function and further the whole AI/ML functional framework.
The second function 345 may further transmit, to the fourth function 355, a request that the fourth function 355 transmit the sensing model to the fifteenth function. In this way, the second function 345 can cause the (re) trained sensing model to be delivered to the fifteenth function for future use, while the retrained/updated sensing model can provide more accurate inference results than the currently used sensing model at the fifteenth function. Therefore, the reliability of the retrained/updated sensing model can be improved as compared with the currently used sensing model.
The third function 350 may further receive ninth input data from the sixteenth function. In this way, the third function 350 can provide more accurate sensing inference result (s) based on the ninth input data.
The third function 350 may further transmit the inference results to the fifteenth function. Alternatively or in addition, the third function 350 may receive the sensing result (or, sensing inference result) from the fifteenth function. In this way, on one hand, in the sense of AI/ML for sensing, the inference results can facilitate the fifteenth function to improve sensing functionalities of the sensing model. On the other hand, in the sense of sensing for AI/ML, the sensing result can facilitate the third function 350 to improve inference results of the AI/ML model and further the AI/ML functional framework 300.
The fifteenth function may receive tenth input data from the sixteenth function. Alternatively or in addition, the fifteenth function may receive the sensing model from the fourth function 355. In this way, with the tenth input data and the sensing model, the fifteenth function can perform sensing inference and obtain the sensing result.
The fifteenth function may further receive the inference results from the second function 345. Alternatively or in addition, the fifteenth function may transmit the sensing results to the second function 345. In this way, on one hand, in the sense of AI/ML for sensing, the inference results can facilitate the fifteenth function to improve sensing functionalities of the sensing model. On the other hand, in the sense of sensing for AI/ML, the sensing result can facilitate the second function 345 to improve management of the AI/ML model and further the AI/ML functional framework 300.
The fifteenth function may further receive, from the second function 345, a switching indication to switch from the sensing model to another sensing model. Alternatively or in addition, the fifteenth function may further receive, from the second function 345, a fallback indication to apply a non-sensing model instead of the sensing model. Alternatively or in addition, the fifteenth function may receive, from the second function 345, an activating indication to activate one or more of a plurality of candidate sensing models. Alternatively or in addition, the fifteenth function may receive, from the second function 345, a deactivating indication to deactivate one or more of the plurality of candidate sensing models. In this way, the fifteenth function can change/switch to a desired sensing model as indicated by the second function 345 for future use, improving the flexibility in managing the sensing model and further the whole AI/ML functional framework 300.
In some example embodiments, the AI/ML functional framework 300 may further comprise a seventeenth function configured to collect non-sensing data, and the at least one function 360 may further comprise an eighteenth function configured to collect sensing data. In this way, both non-sensing data and sensing data can be utilized in the AI/ML functional framework 300, thus accuracy and performance of the AI/ML model and the sensing model can be improved.
The sixteenth function may further receive the non-sensing data from the seventeenth function. Alternatively or in addition, the sixteenth function may further receive the sensing data from the eighteenth function. Alternatively or in addition, the sixteenth function may perform data processing on the received non-sensing data and sensing data to obtain the fused data. In this way, the fused data can be obtained by processing on the non-sensing data from the seventeenth function and the sensing data from the eighteenth function. With the fused data, which is less in quantity than the sum of the non-sensing data and the sensing data, data accuracy can be improved and data processing volume can be decreased.
The sixteenth function may further transmit the fused data to at least one of the first function 340, the second function 345, the third function 350 or the fifteenth function. In this way, the fused data then can be utilized by the first function 340 to train the AI/ML model and/or the sensing model to obtain a more accurate AI/ML model and/or sensing model. At the same time, the fused data can help the second function 345 to manage the AI/ML model and/or the sensing model more reliably, help the third function 350 to perform inference of the AI/ML model more accurately and thus reliably, and help the fifteenth function to perform inference of the sensing model more accurately and thus reliably.
The AI/ML functional framework 300 may further comprise at least two of: a nineteenth function configured to provide ground-truth sensing data, a twentieth function configured to provide non-ground-truth sensing data, or a twenty-first function configured to provide non-sensing ground-truth data. In this way, ground-truth sensing data, non-ground-truth sensing data and non-sensing ground-truth data can be utilized in the AI/ML functional framework 300, thus accuracy and  performance of the AI/ML model and the sensing model can be improved, and performance of the AI/ML model and sensing model can be more flexible.
The sixteenth function may further receive at least two of: the ground-truth sensing data from the nineteenth function, the non-ground-truth sensing data from the twentieth function, or the non-sensing ground-truth data from the twenty-first function. Alternatively or in addition, the sixteenth function may perform data processing on the received data to obtain the fused data. In this way, the fused data can then be utilized by the first function 340 to train the AI/ML model and/or the sensing model to obtain a more accurate AI/ML model and/or sensing model. At the same time, the fused data can help the second function 345 to manage the AI/ML model and/or the sensing model more reliably, help the third function 350 to perform inference of the AI/ML model more accurately and thus reliably, and help the fifteenth function to perform sensing inference of the sensing model more accurately and thus reliably. In the sense of sensing for AI/ML, the fused data can facilitate the first function 340, second function 345 and third function 350 to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework 300.
The sixteenth function may further transmit the fused data to at least one of the first function 340, the second function 345, the third function 350 or the fifteenth function. In this way, the first function 340 can utilize the fused data to train the AI/ML model and/or the sensing model to obtain a more accurate AI/ML model and/or sensing model. At the same time, the second function 345 can utilize the fused data to manage the AI/ML model and/or the sensing model more reliably. The third function 350 can utilize the fused data to perform inference of the AI/ML model more accurately and thus reliably. The fifteenth function can utilize the fused data to perform sensing inference of the sensing model more accurately and thus reliably. In the sense of sensing for AI/ML, the first function 340, second function 345 and third function 350 can utilize the fused data to improve AI/ML functionalities of the AI/ML model and further the AI/ML functional framework 300.
In some example embodiments, the data processing may comprise at least one of data pre-processing, data cleaning, data formatting, data transformation, or data integration. In this way, data obtained through data processing can be more organized as compared with the case where data is used without data processing, thus future processing on the data can be more accurate and efficient.
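A minimal sketch chaining these stages into one pipeline (each stage implementation is a placeholder, not normative behaviour) might look as follows:

```python
# Sketch of the listed data-processing stages chained into one pipeline;
# the per-stage operations here are placeholders for illustration only.

def pre_process(records):      return [r for r in records if r is not None]
def clean(records):            return [r.strip() for r in records]
def format_(records):          return [r.lower() for r in records]
def transform(records):        return [{"value": r} for r in records]
def integrate(records, extra): return records + extra

def process(records, extra):
    for stage in (pre_process, clean, format_):
        records = stage(records)
    return integrate(transform(records), extra)

fused_input = process([" RF-Sample ", None, "Camera-Sample"], [{"value": "lidar"}])
```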
In some example embodiments, at least one of the first function 340, the second function 345, the third function 350, the fourth function 355, the fifth function, the sixth function, the seventh function, the eighth function, the ninth function, the tenth function, the eleventh function, the twelfth function, the thirteenth function, the fourteenth function, the fifteenth function, the sixteenth function, the seventeenth function, the eighteenth function, the nineteenth function, the twentieth function or the twenty-first function may be implemented in one of a terminal device, an access network device, a core network device, or a third party device. In this way, each function may be implemented in one of the terminal device, access network device, core network device or third party device in a "distributed" manner, improving the flexibility of implementation and enabling dynamic implementation with various modules where each module may, by itself or in combination with other module (s) , implement one or more functions as described here.
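For example, one possible, non-normative mapping of functions to hosting devices could be expressed as a simple configuration:

```python
# One possible (non-normative) mapping of framework functions to nodes; any
# function could equally be hosted by a terminal, access network, core
# network, or third-party device.

DEPLOYMENT = {
    "model_training":        "core_network_device",
    "management":            "access_network_device",
    "inference":             "terminal_device",
    "model_storage":         "third_party_device",
    "rf_sensing_collection": "access_network_device",
    "data_fusion":           "core_network_device",
}

def host_of(function_name: str) -> str:
    return DEPLOYMENT.get(function_name, "unspecified")
```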
In this way, according to the first aspect and its example embodiments, the AI/ML functional framework 300 for integrated AI and sensing can be defined for high-accuracy purposes to facilitate communication.
FIG. 4 illustrates a schematic diagram of an example AI/ML functional framework 400 and the flowchart of operations in the AI/ML functional framework 400 in accordance with some embodiments of the present disclosure. The AI/ML functional framework 400 as shown in FIG. 4 includes 8 parts, i.e., a model training function 440, a management function 445, an inference function 450, a model storage function 455, an RF sensing data collection 460-1, a non-RF sensing data collection 460-2, a non-sensing data collection 465 and a data fusion function 470. The model training function 440, management function 445, inference function 450 and model storage function 455 may be examples of the first function 340, second function 345, third function 350 and fourth function 355 as illustrated in FIG. 3, respectively. The RF sensing data collection 460-1, non-RF sensing data collection 460-2 and data fusion function 470 may each be an example of the at least one function 360 which is configured to operate based on sensing data as illustrated in FIG. 3.
The non-sensing data collection function 465 is a function that provides input data (in FIG. 4, the non-sensing data 401) to the model training function 440, management function 445 and inference function 450. The input data is collected by non-sensing schemes, e.g. by measurement of reference signal (s) . The non-sensing data collection function 465 may include interfaces for training data, monitoring data and inference data. Training data is data needed as input for the model training function 440; e.g., the data for model training may include assistance information. Monitoring data is data needed as input for the management function 445. Inference data is data needed as input for the inference function 450.
The RF sensing data collection function 460-1 is a function that provides input data (here, in FIG. 4, the sensing data 403) to the model training function 440, management function 445 and inference function 450. The input data is collected by RF sensing. RF sensing means that a transmitter sends an RF signal and obtains the surrounding information by receiving and processing either this RF signal or the echoed (reflected) RF signal. The RF sensing can be 3GPP defined RF sensing and/or non-3GPP defined RF sensing. For 3GPP defined RF sensing, the transmitter sends a 3GPP defined RF signal and the receiver detects the sensing results. For non-3GPP defined RF sensing, the transmitter sends a non-3GPP defined RF signal, e.g. in radar sensing or WIFI sensing. The RF sensing data collection function 460-1 may include interfaces for training data, monitoring data and inference data, as well as an input to the data fusion function 470. Training data is data needed as input for the model training function 440; e.g., the data for model training may include assistance information. Monitoring data is data needed as input for the management function 445. Inference data is data needed as input for the inference function 450.
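By way of a non-limiting illustration of echo-based RF sensing, the following minimal sketch derives a target range from the round-trip delay of a reflected RF signal; the function and variable names are hypothetical and not part of any 3GPP specification:

```python
# Minimal sketch: estimating target range from an echoed RF signal.
# Assumes the transmission time and the echo detection time are known;
# all names are illustrative only.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def target_range_from_echo(tx_time_s: float, echo_time_s: float) -> float:
    """Return the estimated target distance in meters.

    The echo travels to the target and back, so the one-way distance
    is half the round-trip delay multiplied by the speed of light.
    """
    round_trip_delay = echo_time_s - tx_time_s
    return SPEED_OF_LIGHT * round_trip_delay / 2.0

# Example: an echo detected 2 microseconds after transmission
# corresponds to a target roughly 300 meters away.
print(target_range_from_echo(0.0, 2e-6))  # ~299.79 m
```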
The non-RF sensing data collection function 460-2 is a function that provides input data to the model training function 440, management function 445 and the data fusion function 470. Optionally, the non-RF sensing data collection function 460-2 may also provide input data to the inference function 450 (not shown in FIG. 4) . Such input data is collected by non-RF sensing. In non-RF sensing, the sensing results are obtained not by radio frequency signal detection as in the RF sensing data collection function 460-1, but by, e.g., LIDAR (light detection and ranging) , camera, video, sensor, etc. In addition, the non-RF sensing may also include non-3GPP defined RF sensing, e.g. WIFI sensing. The non-RF sensing data collection function 460-2 may include interfaces for training data, monitoring data and inference data, as well as an input to the data fusion function 470. The training data is data needed as input for the model training function 440; e.g., the data for model training may include assistance information. The monitoring data is data needed as input for the management function 445. The inference data is data needed as input for the inference function 450.
Here, in some embodiments of this disclosure, RF sensing data may comprise only 3GPP RF sensing data, and non-3GPP RF sensing data may be regarded as non-RF sensing data. Alternatively, non-3GPP RF sensing data may also be regarded as RF sensing data, i.e., the RF sensing data includes both 3GPP RF sensing data and non-3GPP RF sensing data.
The data fusion function 470 is a function that provides input data to the model training function 440, management function 445 and inference function 450. It should be noted that the data fusion function 470 may also be referred to as a data collection function. The data fusion function 470 receives input from the RF sensing data collection function 460-1 and the non-RF sensing data collection function 460-2.
The data fusion function 470 is responsible for data processing. Data processing may include data pre-processing and cleaning, formatting, and transformation, as well as integrating multiple data sources (here, the RF sensing data collection function 460-1 and the non-RF sensing data collection function 460-2) to produce more useful information than that provided by any individual data source. For example, the data fusion function 470 may combine RF sensing data from the RF sensing data collection function 460-1 and non-RF sensing data from the non-RF sensing data collection function 460-2 such that the resulting information has less uncertainty than when the RF sensing data or the non-RF sensing data is used individually. The data fusion function 470 may include interfaces for training data, monitoring data and inference data. The training data is data needed as input for the model training function 440; e.g., the data for model training may include assistance information. The monitoring data is data needed as input for the management function 445. The inference data is data needed as input for the inference function 450.
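As a minimal numeric sketch of why fused data can carry less uncertainty than any individual source, the following hypothetical example combines an RF sensing estimate and a non-RF sensing estimate by inverse-variance weighting; the fused variance is never larger than either input variance. All names and numbers are illustrative assumptions:

```python
# Minimal sketch of data fusion by inverse-variance weighting.
# Two noisy estimates of the same quantity (e.g. one coordinate of a
# target position) are combined; the fused variance is smaller than
# either input variance, illustrating reduced uncertainty after fusion.

def fuse(estimate_a: float, var_a: float,
         estimate_b: float, var_b: float) -> tuple[float, float]:
    """Return the fused estimate and its variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused_estimate = (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused_estimate, fused_var

# RF sensing reports 10.2 m with variance 0.25; a camera reports 9.8 m
# with variance 0.16. The fused variance (~0.098) is below both inputs.
print(fuse(10.2, 0.25, 9.8, 0.16))
```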
The model training function 440 is a function that performs the ML model training, validation, and testing, which may generate model performance metrics as part of the model testing procedure. The model training function 440 is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) . Interactions between the model training function and other functions include the training data and the trained/updated model.
Specifically, the model training function 440 may receive training data from at least one of the non-sensing data collection function 465, RF sensing data collection function 460-1, non-RF sensing data collection function 460-2 or data fusion function 470.
The model training function 440 may send a trained AI model (e.g. trained by distributed learning) to the model storage function 455. Alternatively, the model training function 440 may deliver an updated AI model to the model storage function 455.
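As a minimal sketch of the training/validation/testing flow performed by the model training function 440, the following hypothetical example splits collected data, fits a model, and produces a test metric of the kind the management function 445 could later monitor; the data, model family and metric are illustrative assumptions, not part of the framework definition:

```python
# Minimal sketch of model training: split collected data into
# training/validation/test sets, fit a model, and report a test
# metric. Pure NumPy; the linear model is illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))  # e.g. fused sensing features
y = X @ np.array([0.5, -1.2, 0.3, 2.0]) + rng.normal(scale=0.1, size=1000)

# 70/15/15 split; the validation slice X[i_train:i_val] would be used
# for model selection and is omitted from this sketch for brevity.
i_train, i_val = int(0.7 * len(X)), int(0.85 * len(X))
X_tr, y_tr = X[:i_train], y[:i_train]
X_te, y_te = X[i_val:], y[i_val:]

# "Training": least-squares fit of the model weights.
w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

# "Testing": the mean squared error becomes a model performance metric.
mse = float(np.mean((X_te @ w - y_te) ** 2))
print(f"test MSE (performance metric): {mse:.4f}")
```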
The management function 445 is a function that is responsible for performing model control over the model training function 440 and the inference function 450, and it monitors the model output (i.e., the inference output 410) . Based on the model output, the management function 445 may determine whether the model quality remains applicable. If it is determined that the model quality is no longer applicable, the management function 445 may request the model training function 440 to re-train the model, and may indicate the inference function 450 to switch the model.
For model performance monitoring, the management function 445 receives the monitoring data from the data collection functions (i.e., from the non-sensing data collection function 465, RF sensing data collection function 460-1, non-RF sensing data collection function 460-2 and data fusion function 470) . In other words, the management function 445 receives the ground truth data from the data collection functions. With the ground truth data, the management function 445 may compare the AI/ML model output with the ground truth, and the model performance can thereby be evaluated.
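As a minimal sketch of such monitoring, the following hypothetical example compares model outputs against ground-truth monitoring data and decides whether retraining should be requested; the metric and threshold are illustrative assumptions only:

```python
# Minimal sketch of model performance monitoring: compare the AI/ML
# model output against ground truth and decide whether to request
# retraining. The threshold value is an illustrative assumption.

def mean_absolute_error(outputs: list[float],
                        ground_truth: list[float]) -> float:
    """Average absolute deviation of model output from ground truth."""
    return sum(abs(o - g) for o, g in zip(outputs, ground_truth)) / len(outputs)

RETRAIN_THRESHOLD = 0.5  # hypothetical acceptable error bound

error = mean_absolute_error([1.0, 2.1, 2.9], [1.0, 2.0, 3.0])
if error > RETRAIN_THRESHOLD:
    print("request the model training function to re-train the model")
else:
    print(f"model performance acceptable (MAE={error:.3f})")
```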
The management function 445 also receives the output of the inference function 450; this output includes the performance of the model inference.
If certain information derived from the inference function 450, or information derived from the performance monitoring in the management function 445, is suitable for improvement of the AI model trained in the model training function 440, a performance feedback /retraining request may be applied.
When the management function 445 observes that the performance of the current AI model is not good enough, the management function 445 will send the current AI performance to the model training function 440, including the current AI model output and its accuracy, etc. In addition, the management function 445 also requests the model training function 440 to retrain the model, so as to obtain an updated AI model.
AI model selection/ (de) activation/switching/fallback
When the management function 445 observes that the performance of the current AI model is not good enough, it can send model switching signalling to the inference function 450 to switch to another AI model, or send fallback signalling to indicate the inference function 450 to use a non-AI mode.
When there are multiple candidate AI models available at the inference function 450, the management function 445 can indicate which AI model the inference function 450 is to use, and activate or de-activate one or multiple of the candidate AI models.
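The selection/(de)activation/switching/fallback logic may be illustrated by the following minimal, hypothetical sketch, in which the management role picks the best-performing candidate or signals fallback when none is adequate; the class, names and threshold are assumptions for illustration only:

```python
# Minimal sketch of model control decisions: (de)activate candidates,
# switch to a better candidate, or fall back to a non-AI mode when no
# candidate performs adequately. All names/thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class CandidateModel:
    model_id: str
    accuracy: float   # latest monitored accuracy
    active: bool = False

def select_model(candidates: list[CandidateModel],
                 min_accuracy: float = 0.9) -> str:
    usable = [c for c in candidates if c.accuracy >= min_accuracy]
    if not usable:
        return "FALLBACK_NON_AI_MODE"      # fallback signalling
    best = max(usable, key=lambda c: c.accuracy)
    for c in candidates:                   # (de)activation signalling
        c.active = (c.model_id == best.model_id)
    return best.model_id                   # model switching signalling

models = [CandidateModel("beam-pred-v1", 0.87),
          CandidateModel("beam-pred-v2", 0.94)]
print(select_model(models))  # -> "beam-pred-v2"
```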
The management function 445 may send an AI model transfer request to the model storage function 455 to request a model for the inference function 450. The request may be an initial model transfer request for an initially trained AI/ML model or an updated model transfer request for an updated AI/ML model obtained by re-training an existing model.
The inference function 450 is a function that provides inference results. The inference function 450 is also responsible for performing actions according to the inference results. For example, the inference function 450 may trigger or perform corresponding actions according to an inference decision, and it may trigger actions directed to other entities or to itself. The inference function 450 may also be responsible for data preparation (e.g. data pre-processing and cleaning, formatting, and transformation) based on inference data delivered by a data collection function (i.e., data received from at least one of the non-sensing data collection function 465, RF sensing data collection function 460-1, non-RF sensing data collection function 460-2 or data fusion function 470) .
The inference function 450 may send the inference output 410 to the management function 445 so that the performance of the AI model can be monitored.
The model storage function 455 is a function that stores the models. The storage location can be within the RAN (e.g. at the BS and/or UE side) , or outside the RAN (e.g. at the core network or a third party) . It receives the model from the model training function 440. The model stored at the model storage function 455 may be the initially trained model, or the re-trained/updated model.
The model storage function 455 may receive a model transfer request 406 from the management function 445. In response to reception of the model transfer request 406, the model storage function 455 will send the corresponding model to the inference function 450. For example, the model transfer request 406 may indicate the requested model ID, and the model storage function 455 may then send the model with the requested ID to the inference function 450. Alternatively or in addition, the model transfer request 406 may indicate the requested/desired AI functionality ID and/or an AI performance requirement (e.g. AI accuracy, AI complexity, AI model size) , and the model storage function 455 may then deliver a model satisfying the indicated AI functionality and performance requirement.
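The two addressing modes of such a request may be illustrated by the following minimal sketch, which resolves a model either by model ID or by functionality ID plus a performance requirement; the record fields and stored entries are hypothetical:

```python
# Minimal sketch of model storage resolving a model transfer request
# either by model ID or by AI functionality ID plus a performance
# requirement. All fields and entries are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StoredModel:
    model_id: str
    functionality_id: str
    accuracy: float
    model_size_mb: float

STORE = [
    StoredModel("m1", "csi-compression", accuracy=0.91, model_size_mb=4.0),
    StoredModel("m2", "csi-compression", accuracy=0.95, model_size_mb=12.0),
]

def resolve(model_id: Optional[str] = None,
            functionality_id: Optional[str] = None,
            min_accuracy: float = 0.0,
            max_size_mb: float = float("inf")) -> Optional[StoredModel]:
    if model_id is not None:  # request by model ID
        return next((m for m in STORE if m.model_id == model_id), None)
    # Request by functionality ID and performance requirement; prefer
    # the smallest model that satisfies the requirement.
    matches = [m for m in STORE
               if m.functionality_id == functionality_id
               and m.accuracy >= min_accuracy
               and m.model_size_mb <= max_size_mb]
    return min(matches, key=lambda m: m.model_size_mb) if matches else None

print(resolve(functionality_id="csi-compression", min_accuracy=0.94))
```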
Each AI function in FIG. 4 can be located at a UE, a BS, the core network, or a third party. Different AI functions may be located at the same physical entity or at different physical entities.
With the AI/ML functional framework 400 and the flowchart of operations performed in the AI/ML functional framework 400, an AI/ML functional framework (including the signaling interface among functions) can be defined to support integrated AI and sensing, including sensing to improve AI performance and AI to improve sensing performance.
FIG. 5 illustrates a schematic diagram of another example AI/ML functional framework 500 and the flowchart of operations in the AI/ML functional framework 500 in accordance with some embodiments of the present disclosure. The AI/ML functional framework 500 as shown in FIG. 5 includes 11 parts, i.e., a model training function 540, a management function 545, an inference function 550, a model storage function 555, a sensing data collection function 560, a non-sensing data collection function 565, a data fusion function 570, a sensing modeling function 582, a sensing management function 584, a sensing application function 586 and a sensing results storage function 588. The model training function 540, management function 545, inference function 550 and model storage function 555 may be examples of the first function 340, second function 345, third function 350 and fourth function 355 as illustrated in FIG. 3, respectively, and may be the same as or similar to the model training function 440, the management function 445, the inference function 450 and the model storage function 455 as illustrated in FIG. 4, respectively. The sensing data collection function 560 and data fusion function 570 may each be an example of the function 375 which is configured to operate based on sensing data as illustrated in FIG. 3. The sensing modeling function 582, sensing management function 584, sensing application function 586 and sensing results storage function 588 may be logically similar to the model training function 540, management function 545, inference function 550 and model storage function 555, respectively, with the difference that the sensing modeling function 582, sensing management function 584, sensing application function 586 and sensing results storage function 588 focus on sensing, while the model training function 540, management function 545, inference function 550 and model storage function 555 focus on AI. The trained/updated model 502, performance feedback/retraining request 504, model transfer request 506, model selection/activation/deactivation/switching/fallback 508, inference output 510 and model transfer 512 as illustrated in FIG. 5 may have similar meanings as the trained/updated model 402, performance feedback/retraining request 404, model transfer request 406, model selection/activation/deactivation/switching/fallback 408, inference output 410 and model transfer 412 as illustrated in FIG. 4.
The AI/ML functional framework 500 differs from the AI/ML functional framework 400 as shown in FIG. 4 mainly in the following aspects. First, there are a separate sensing modeling function 582, sensing management function 584, sensing application function 586 and sensing results storage function 588 dedicated to sensing in the AI/ML functional framework 500, while in the AI/ML functional framework 400, such functions are incorporated in the corresponding AI functions (i.e., the model training function 440, management function 445, inference function 450 and model storage function 455) . Second, the data collection function in the AI/ML functional framework 500 includes a sensing data collection function 560 for collecting sensing data, while in the AI/ML functional framework 400, an RF sensing data collection function 460-1 and a non-RF sensing data collection function 460-2 are included to provide RF sensing data and non-RF sensing data, respectively. Third, the data fusion function 570 takes the output from the sensing data collection function 560 as well as the non-sensing data collection function 565 as input, while the data fusion function 470 takes the output from the RF sensing data collection function 460-1 and the non-RF sensing data collection function 460-2, but not the non-sensing data collection function 465, as input to perform data fusion. (It should be noted that sensing data may be considered to include RF sensing data (provided by an RF sensing data collection function like the RF sensing data collection function 460-1) and non-RF sensing data (provided by a non-RF sensing data collection function like the non-RF sensing data collection function 460-2) .) Hereafter, the differences will be described; elements similar to those of FIG. 4 may refer to the corresponding description and are omitted for simplicity.
The AI/ML functional framework 500 and the flowchart of operations in the AI/ML functional framework 500 as shown in FIG. 5 support sensing for AI and AI for sensing.
In the aspect of sensing for AI (also called sensing assisted AI) , the sensing results delivered by the sensing application function 586 can provide input data for the model training function 540, management function 545 and inference function 550. For example, the sensing results 526 can provide extra data for the AI model, and may also provide approximate ground truth to the AI model, e.g. a location or a channel obtained by a sensing function like the sensing data collection function 560.
In the aspect of AI for sensing (also called AI assisted sensing) , the inference results 506 delivered by the inference function 550 can provide input data for the sensing modeling function 582, sensing management function 584 and sensing application function 586. For example, for channel sensing, the AI model can provide predicted channel information (for example, in the inference results 506) for better sensing.
As described herein, the non-sensing data collection function 565 is a function for collecting non-sensing data. The sensing data collection function 560 is a function for collecting sensing data, including RF sensing data and non-RF sensing data. Therefore, the sensing data collection function 560 may also be considered as including an RF sensing data collection function like the RF sensing data collection function 460-1 in FIG. 4 and a non-RF sensing data collection function like the non-RF sensing data collection function 460-2 in FIG. 4.
The data fusion function 570 is responsible for data processing. The data processing may include data pre-processing and cleaning, formatting, and transformation, as well as integrating multiple data sources to produce more useful information than that provided by any individual data source.
The sensing modelling function 582 is a function that reconstructs the physical world (that is, obtains a model of the physical world) . Specifically, the sensing modelling function 582 may be responsible for environment reconstruction, channel reconstruction (by a ray tracing scheme, for example) , target reconstruction, digital twin, and so on. Other features or functions that may be supported or provided in the context of physical world reconstruction may include target detection and/or target tracking. The sensing modelling function 582 should be able to request specific information to be used to train the sensing model and to avoid reception of unnecessary information. The sensing modelling function 582 may train a sensing model in some embodiments; training is one way to obtain a sensing model. Generally, a sensing model may be trained or otherwise obtained.
In addition, the sensing modelling function 582 may also be responsible for data processing. Here, data processing may include data pre-processing and cleaning, formatting, and transformation based on training data delivered by the sensing data collection function 560 and/or the data fusion function 570, if required.
The trained/updated model 522 in FIG. 5 indicates that, the sensing modelling function 582 may send a trained sensing model to the sensing results storage function 588. Alternatively, the sensing modelling function 582 may deliver an updated sensing model to the sensing results storage function 588.
The sensing management function 584 is a function that is responsible for performing sensing control on the sensing modelling function 582 and the sensing application function 586. The sensing management function 584 may also monitor the sensing output. Based on the sensing output, the sensing management function 584 may determine whether the sensing results are applicable, for example, by comparing the sensing results with a pre-determined or pre-defined threshold. If it is determined that the sensing results are no longer applicable, the sensing management function 584 may request the sensing modelling function 582 to re-train the (sensing) model, and may indicate the sensing application function 586 to switch the (sensing) model.
The sensing management function can also be referred to as sensing control function, sensing results management function, or simply management function. The name “sensing manager” is also used herein as a general term for a sensing management element in a sensing system.
For sensing performance monitoring, the sensing management function 584 receives the monitoring data, e.g. the ground truth data, from the data fusion function 570. With the monitoring data, the sensing management function 584 can compare the sensing output with the ground truth to determine the performance of the sensing model, and the sensing performance can thereby be evaluated.
The sensing management function 584 may also receive the inference output 530 from the sensing application function 586. The inference output 530 may include the performance of the sensing application function 586.
If certain information derived from the sensing application function 586, or information derived from the sensing management function 584, is suitable for improvement of the sensing model trained in the sensing modelling function 582, the performance feedback /re-model request 524 may be applied, for example, when the sensing management function 584 observes that the sensing performance of the current sensing model is not good enough. For example, during channel construction, a sensing model may be generated according to a static environment map; when there are many moving targets in the environment, causing too much signal reflection, the channel construction model may become inapplicable. In this case, the sensing management function 584 may send the current sensing performance to the sensing modelling function 582, including the current sensing output and its accuracy, resolution, etc. In addition, the sensing management function 584 may also request the sensing modelling function 582 to retrain the model, and request the sensing application function 586 to get an updated sensing model from the sensing results storage function 588.
When the sensing management function 584 observes that the sensing performance of the current sensing model is not good enough, it may send model switching signalling to the sensing application function 586 to switch to another sensing model. Alternatively, the sensing management function 584 may send fallback signalling to indicate the sensing application function 586 to use a non-sensing mode. Moreover, when there are multiple candidate sensing models, the sensing management function 584 may indicate which sensing model the sensing application function 586 is to use, and activate or de-activate one or multiple of the candidate sensing models.
The sensing management function 584 may send a sensing model transfer request 526 to the sensing results storage function 588 to request a model for the sensing application function 586. The sensing model transfer request 526 may be an initial model transfer request for an initially trained sensing model or an updated model transfer request for an updated sensing model obtained by re-training an existing model.
The sensing application function 586 is a function that provides sensing decision output or sensing inference output (predictions or detections, for example) . Target detection and channel prediction may be some examples of the sensing decision output or sensing inference output. The sensing application function 586 is also responsible for performing actions according to sensing results. For example, the sensing application function 586 may trigger or perform corresponding actions according to sensing decision or prediction, and it may trigger actions directed to other entities or to itself. The sensing application function 586 may also be responsible for data preparation (such as data pre-processing and cleaning, formatting, and transformation) based on action data delivered by a data fusion function (here, for example, the data fusion function 570) .
The sensing application function 586 can also be referred to as a sensing action function, sensing function in RAN, sensing usage function, sensing use cases, sensing service, or sensing assisted communication. The name “output generator” is also used herein as a general term for a sensing application element in a sensing system.
As an example of sensing assisted communication, the sensing modelling function 582 may obtain the sensing results (e.g. an environment map) 526, and application functions may then use the environment map to assist communication (e.g. beam prediction) , where the data for the application functions may be a reference signal (RS) with low density for beam management. In this way, the data for the application functions is optional, which means the application functions may depend entirely on the sensing results. Therefore, the sensing results 526 are beneficial in reducing the RS overhead. As another example, the sensing modelling function 582 may obtain the sensing results (e.g. an environment map) 526 and then generate multiple sensing results (for example, multiple sensing models, among which some may be for a static environment and some may be for a moving-objects environment) , and the management function 545 may indicate which model among the multiple sensing models the application functions are to use.
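As a minimal, hypothetical sketch of such sensing assisted beam prediction, the following example maps a UE position taken from a reconstructed environment map to a beam index, so that only a low-density RS would be needed to confirm the choice; the geometry and beam codebook are illustrative assumptions:

```python
# Minimal sketch of sensing assisted communication: use a UE position
# from an environment map (a sensing result) to predict a serving beam,
# reducing the reference signal overhead needed for beam sweeping.
import math

def predict_beam(bs_xy: tuple[float, float],
                 ue_xy: tuple[float, float],
                 num_beams: int = 16) -> int:
    """Map the BS-to-UE bearing (taken from the map) to a beam index."""
    dx, dy = ue_xy[0] - bs_xy[0], ue_xy[1] - bs_xy[1]
    bearing = math.atan2(dy, dx) % (2 * math.pi)
    return int(bearing / (2 * math.pi / num_beams))

# UE located north-east of the BS according to the reconstructed map:
# the bearing of ~45 degrees falls into beam index 2 of 16.
print(predict_beam((0.0, 0.0), (30.0, 30.0)))
```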
As an example of a sensing service, during an object detection procedure, the sensing modelling function 582 may obtain the sensing results 526 (e.g. whether there is an object and the object information (for intruder detection, it is the intruder information) ) . The sensing results 526 may also be provided to the 3rd party. For intruder detection, the procedure ends here. In addition, the object information (location, shape, etc. ) can also assist communication. In this case, the sensing application can use the sensing results 526 for beam management. Here the action data is the RS for beam management. In this case, it is the sensing modelling function 582 that determines whether there is an object, and also the object information. As another example of a sensing service used in an object detection procedure, application functions may determine whether there is an object. In this case, the sensing modelling function 582 may obtain the sensing model for object detection (e.g. a model that determines the object information according to the received sensing signal) . In doing so, the application functions may obtain the object information. In this case, the action data may be the received sensing signal.
The inference output 530 is the output of the sensing model produced by the sensing application function 586. The sensing application function 586 should signal the inference output 530 to nodes that have explicitly requested it (e.g. via subscription) , or to nodes that are subject to actions based on the output from the sensing application function 586.
The sensing results storage function 588 is a function that stores the sensing models, for example, the reconstructed physical world (environment map, target and its location, for example) . The storage location may be within RAN (at BS and/or UE side, for example) , or outside RAN (at the core network or the third party, for example) . The sensing results storage function 588 may receive the sensing model from the sensing modelling function 582. The sensing model may be an initially trained sensing model, or a re-trained/updated sensing model.
The sensing results storage function can also be referred to as sensing storage function, RAN storage function, local RAN storage function, or RAN and Core Network storage function. The name “storage subsystem” is also used herein as a general term for a sensing results storage element in a sensing system. A model is one type of sensing result shown in FIG. 1. More generally, the sensing results storage function 588 may store sensing results, which may, but need not necessarily, include a (sensing) model.
The sensing results storage function 588 may receive the sensing model transfer request 526 from the sensing management function 584, as described before in connection with the sensing management function 584. In response to reception of the sensing model transfer request 526, the sensing results storage function 588 may send the corresponding model to the sensing application function 586. For example, the sensing model transfer request 526 may indicate the requested model ID; then, in response to the sensing model transfer request 526, the sensing results storage function 588 may send the model with the requested ID. Alternatively, the sensing model transfer request 526 may indicate the required (or desired) sensing functionality ID and/or a sensing performance requirement (e.g. sensing accuracy, sensing distance/speed/angle resolution) ; then, in response to the sensing model transfer request 526, the sensing results storage function 588 may deliver a model satisfying the indicated sensing functionality and the performance requirement.
Each function among the 11 parts of the AI/ML functional framework 500 as illustrated in FIG. 5 can be located at a UE, a BS, the core network, or a third party. Different functions may be located at the same physical entity or at different physical entities.
With the AI/ML functional framework 500 and the flowchart of operations performed in the AI/ML functional framework 500, an AI/ML functional framework 500 (including the signaling interface among functions) can be defined to support integrated AI and sensing, including sensing to improve AI performance and AI to improve sensing performance.
FIG. 6 illustrates a schematic diagram of a third example AI/ML functional framework 600 and the flowchart of operations in the AI/ML functional framework 600 in accordance with some embodiments of the present disclosure. The AI/ML functional framework 600 as shown in FIG. 6 includes 8 parts, i.e., a model training function 640, a management function 645, a model inference function 650, a sensing application function 652, a model storage function 655, a sensing data collection function 660, a non-sensing data collection function 665 and a data fusion function 670. The model training function 640, management function 645, model inference function 650 and model storage function 655 may be examples of the first function 340, second function 345, third function 350 and fourth function 355 as illustrated in FIG. 3, respectively, may be the same as or similar to the model training function 440, the management function 445, the inference function 450 and the model storage function 455 as illustrated in FIG. 4, respectively, and may be the same as or similar to the model training function 540, the management function 545, the inference function 550 and the model storage function 555 as illustrated in FIG. 5, respectively.
The sensing data collection function 660 and data fusion function 670 may each be an example of the function 375 which is configured to operate based on sensing data as illustrated in FIG. 3. The sensing data collection function 660, non-sensing data collection function 665 and data fusion function 670 may be the same as or similar to the sensing data collection function 560, non-sensing data collection function 565 and data fusion function 570, respectively. The training data 601, trained/updated model 602, monitoring data 603, performance feedback/retraining request 604, inference data 605, model transfer request 606, model selection/activation/deactivation/switching/fallback 608, inference output 610 and model transfer 612 as illustrated in FIG. 6 may be the same as or similar to the training data 501, trained/updated model 502, monitoring data 503, performance feedback/retraining request 504, inference data 505, model transfer request 506, model selection/activation/deactivation/switching/fallback 508, inference output 510 and model transfer 512 as illustrated in FIG. 5. The AI/ML functional framework 600 may be considered as a combination of the left part of FIG. 5 (the non-sensing data collection function 565, sensing data collection function 560 and the data fusion function 570) and the right part of FIG. 4 (that is, the model training function 440, management function 445, inference function 450 and the model storage function 455) plus a separate sensing application function 652. Hereafter, the differences will be described; elements similar to those of FIGS. 4-5 may refer to the corresponding description and are omitted for simplicity.
As illustrated in FIG. 6, the non-sensing data collection function 665 is a function for collecting non-sensing data. The sensing data collection function 660 is a function for collecting sensing data. The sensing data may include RF sensing data and/or non-RF sensing data. The data fusion function 670 is responsible for data processing. The data processing may include data pre-processing and cleaning, formatting, and transformation, as well as integrating multiple data sources to produce more useful information than that provided by any individual data source.
The model training function 640 is a function that performs the ML model training, validation, and testing, which may generate model performance metrics as part of the model testing procedure. The model training function 640 is also responsible for sensing model training. The model training function 640 is also responsible for data preparation (e.g. data pre-processing and cleaning, formatting, and transformation) . Interactions between the model training function 640 and other functions may include the training data 601, the trained/updated model 602 and the sensing information and/or AI assistance information 607. The training data 601 is training data received by the model training function 640 from at least one of the non-sensing data collection function 665, sensing data collection function 660, an RF sensing data collection function, a non-RF sensing data collection function or the data fusion function 670. In FIG. 6, only the non-sensing data collection function 665, sensing data collection function 660 and data fusion function 670 are shown. As described above, the sensing data collection function (here, the sensing data collection function 660) may also be considered as including an RF sensing data collection function like the RF sensing data collection function 460-1 in FIG. 4 and a non-RF sensing data collection function like the non-RF sensing data collection function 460-2 in FIG. 4. In other words, in some embodiments, the sensing data collection function 660 in FIG. 6 may be replaced by an RF sensing data collection function (like the RF sensing data collection function 460-1 in FIG. 4) and a non-RF sensing data collection function (like the non-RF sensing data collection function 460-2 in FIG. 4) .
The model training function 640 may involve the trained/updated model 602 and/or the sensing information and/or AI assistance information 607. The model training function 640 may send an (initially) trained AI model (e.g. trained by distributed learning) to the model storage function 655. Alternatively, the model training function 640 may deliver an updated (i.e., re-trained) AI model to the model storage function 655. Also, the model training function 640 may receive the sensing information and/or assistance information 607 from the management function 645.
The management function 645 is a function that is responsible for performing model control on the model training function 640, model inference function 650 and sensing application function 652. The management function 645 may also monitor the model output of the AI and/or sensing model, and determine whether the AI or sensing model quality is applicable. If it is determined that the AI or sensing model quality is no longer applicable, the management function 645 may request the model training function 640 to re-train the AI and/or sensing model accordingly, and indicate the model inference function 650 and/or sensing application function 652 to switch the AI or sensing model.
For the AI and/or sensing model performance monitoring, the management function 645 may receive the monitoring data 603 from the data fusion function 670. The management function 645 may also receive the output of the model inference function 650 and/or the sensing application function 652. In FIG. 6, the output of the model inference function 650 is illustrated as output 610, and the output of the sensing application function 652 is illustrated as output 611. The output 610 may include the performance of the model inference, and/or the output 611 may include the performance of the sensing inference results.
If certain information derived from model inference function 650 and/or sensing application function 652 or information derived from the management function 645 is suitable for improvement of the AI/sensing model trained in the model training function 640, the performance feedback /re-model (retraining) request 604 is applied.
When the management function 645 observes that the performance of the current AI/sensing model is not good enough, the management function 645 may send the current AI/sensing performance to the model training function 640, including the current AI/sensing model output and its accuracy, etc. In addition, the management function 645 may also request the model training function 640 to retrain the AI/sensing model and send the retrained AI/sensing model to the model storage function 655, request the model inference function 650 to get an updated AI model, and request the sensing application function 652 to get an updated sensing model.
When the management function 645 observes that the performance of the current AI model is not good enough, it may send model switching signalling to the model inference function 650 to switch to another AI model, or send fallback signalling to indicate the model inference function 650 to use a non-AI mode.
When the management function 645 observes that the performance of the current sensing model is not good enough, it may send model switching signalling to the sensing application function 652 to switch to another sensing model, or send fallback signalling to indicate the sensing application function 652 to use a non-sensing mode.
When there are multiple candidate AI models, the management function 645 may indicate which AI model the model inference function 650 is to use, and activate or de-activate one or multiple of the candidate AI models. Also, when there are multiple candidate sensing models, the management function 645 may indicate which sensing model the sensing application function 652 is to use, and activate or de-activate one or multiple of the candidate sensing models.
The management function 645 may send an AI model transfer request 606 to the model storage function 655 to request an AI model for the model inference function 650. The AI model transfer request 606 may be an initial AI model transfer request for an initially trained AI/ML model or an updated AI model transfer request for an updated AI/ML model obtained by re-training an existing AI/ML model. Also, the management function 645 may send a sensing model transfer request 606 to the model storage function 655 to request a sensing model for the sensing application function 652. The sensing model transfer request 606 may be an initial sensing model transfer request for an initially trained sensing model or an updated sensing model transfer request for an updated sensing model obtained by re-training an existing sensing model.
The model inference function 650 is a function that provides inference results. The model inference function 650 is also responsible for performing actions according to the inference results. For example, the model inference function 650 may trigger or perform corresponding actions according to an inference decision, and it may trigger actions directed to other entities or to itself. The model inference function 650 may also be responsible for data preparation (e.g. data pre-processing and cleaning, formatting, and transformation) based on inference data delivered by a data collection function (i.e., data received from at least one of the non-sensing data collection function 665, sensing data collection function 660, an RF sensing data collection function, a non-RF sensing data collection function or the data fusion function 670) .
The model inference function 650 may send the model inference results to sensing application function 652 to assist sensing inference.
Sensing application function 652 is a function that provides sensing decision output or sensing inference output (e.g. predictions or detections, for example, target detection, channel prediction, etc. ) . The sensing application function 652 is also responsible for performing actions according to sensing results. For example, the sensing application function 652 may trigger or perform corresponding actions according to sensing decision or prediction, and it may trigger actions directed to other entities or to itself. The sensing application function 652 may also be responsible for data preparation (e.g. data pre-processing and cleaning, formatting, and transformation) .
The sensing application function 652 may send the sensing results to model inference function 650 to assist model inference for better model inference results.
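The mutual assistance between the two functions may be illustrated by the following minimal, hypothetical sketch, in which AI inference results refine sensing inference and sensing results refine the next AI inference; the data fields and correction terms are illustrative assumptions only:

```python
# Minimal sketch of mutual assistance between the model inference
# function and the sensing application function: each one can take the
# other's latest output as an optional hint. Values are illustrative.

def model_inference(inference_data: dict, sensing_hint: dict | None) -> dict:
    predicted_channel = inference_data["channel_estimate"]
    if sensing_hint:  # sensing results assist model inference
        predicted_channel += sensing_hint["reflection_correction"]
    return {"predicted_channel": predicted_channel}

def sensing_application(sensing_data: dict, ai_hint: dict | None) -> dict:
    target_range = sensing_data["raw_range"]
    if ai_hint:       # model inference results assist sensing inference
        target_range -= 0.1 * ai_hint["predicted_channel"]
    return {"target_range": target_range, "reflection_correction": 0.02}

ai_out = model_inference({"channel_estimate": 1.0}, sensing_hint=None)
sensing_out = sensing_application({"raw_range": 25.0}, ai_hint=ai_out)
ai_out = model_inference({"channel_estimate": 1.0}, sensing_hint=sensing_out)
print(sensing_out, ai_out)
```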
Model storage function 655 is a function that stores the AI/sensing models. The storage location can be within RAN (e.g. BS and/or UE side) , or outside RAN (e.g. core network or the third party) .
Each function among the 8 parts of the AI/ML functional framework 600 as illustrated in FIG. 6 may be located at a UE, a BS, the core network, or a third party. Different functions may be located at the same physical entity or at different physical entities.
With the AI/ML functional framework 600 and the flowchart of operations performed in the AI/ML functional framework 600, an AI/ML functional framework 600 (including the signaling interface among functions) can be defined to support integrated AI and sensing, including sensing to improve AI performance and AI to improve sensing performance.
FIG. 7 illustrates a schematic diagram of a fourth example AI/ML functional framework 700 and the flowchart of operations in the AI/ML functional framework 700 in accordance with some embodiments of the present disclosure. The AI/ML functional framework 700 as shown in FIG. 7 includes 10 parts, i.e., a model training function 740, a management function 745, a model inference function 750, a sensing application function 752, a model storage function 755, an anchor management function 766, an AI anchor data collection function 762, a sensing anchor data collection function 760, a non-anchor data collection function 764 and a data fusion function 770. The model training function 740, management function 745, model inference function 750 and model storage function 755 may be examples of the first function 340, second function 345, third function 350 and fourth function 355 as illustrated in FIG. 3, respectively. The AI anchor data collection function 762, sensing anchor data collection function 760, non-anchor data collection function 764 and data fusion function 770 may each be an example of the function 375 which is configured to operate based on sensing data as illustrated in FIG. 3. The model training function 740, management function 745, model inference function 750, sensing application function 752 and model storage function 755 may be the same as or similar to the model training function 640, management function 645, model inference function 650, sensing application function 652 and model storage function 655 as illustrated in FIG. 6, respectively. The training data 701, trained/updated model 702, monitoring data 703, performance feedback/retraining request 704, inference data 705, model transfer request 706, model selection/activation/deactivation/switching/fallback 708, model inference output 710, sensing inference output 711, AI model transfer 712 and sensing model transfer 713 as illustrated in FIG. 7 may be the same as or similar to the training data 601, trained/updated model 602, monitoring data 603, performance feedback/retraining request 604, inference data 605, model transfer request 606, model selection/activation/deactivation/switching/fallback 608, model inference output 610, sensing inference output 611, AI model transfer 612 and sensing model transfer 613 as illustrated in FIG. 6. The AI/ML functional framework 700 differs from the AI/ML functional framework 600 as shown in FIG. 6 mainly in the data collection functions, i.e., the left part of FIG. 7. Hereafter, the differences will be described; elements similar to those of FIG. 6 may refer to the corresponding description and are omitted for simplicity.
The anchor management function 766 is a function that is responsible for performing control on AI anchors, sensing anchors and non-anchors. The anchor management function 766 can configure which node is an AI anchor, a sensing anchor or a non-anchor, and indicate a specific anchor to perform data collection with a corresponding data type. In addition, the anchor management function 766 may also indicate a non-anchor to perform data collection with a corresponding collected data type.
An anchor may be a node which can report ground truth to other functions. For example, an anchor is deployed by the network operator at a known location, and the anchor performs measurement and reports the collected data to the network, including the measurement data and the ground truth. Here, the ground truth includes the label data information for an AI model. Anchors may include AI anchors and sensing anchors; a sensing anchor is deployed for sensing data collection, and an AI anchor is deployed for AI data collection, training, monitoring and/or inference.
As another example, an anchor may be a passive object. For example, an anchor may be an object with known information such as shape, size, orientation, speed, location, distances or relative motion between objects. Such anchor information can be indicated from a base station to a UE, for example, in which case the UE can perform sensing measurement and compare its sensing results with the anchor information, so as to calibrate its sensing results. In this case, the sensing anchor data collection function 760 may be a function that collects sensing anchor data from passive sensing anchors, and the AI anchor data collection function 762 may be a function that collects AI anchor data from passive AI anchors. The collected sensing anchor data and/or AI anchor data may be used by the model training function 740 and/or management function 745 and/or model inference function 750 and/or sensing application function 752 to process other data (for example, for data calibration) . As a specific example, the model training function 740 may use the sensing anchor data and/or AI anchor data to perform data preparation. The management function 745 may use the sensing anchor data and/or AI anchor data to perform model performance monitoring. For example, if a shape of an object obtained based on the inference output 711 from the sensing application function 752 deviates from the shape of the object as indicated in the sensing anchor data and/or AI anchor data, the management function 745 may become aware that the model performance of the currently used sensing model has degraded, and then transmit a performance feedback and/or retraining request 704 to the model training function 740 to, for example, re-train (and update) the sensing model for better model performance. The sensing application function 752 may also use the sensing anchor data and/or AI anchor data, for example, to perform data preparation and/or a self-check of its inference results. For example, the sensing application function 752 may check whether its inference results are precise enough using the sensing anchor data and/or AI anchor data, for example, by comparing an object shape derived from its inference results with the actual object shape as indicated in the sensing anchor data and/or AI anchor data, to confirm whether the difference between the two is within a pre-defined or pre-configured or required threshold.
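Such an anchor-based self-check may be illustrated by the following minimal sketch, which compares a sensed object dimension against the known anchor information and flags degradation when the deviation exceeds a configured threshold; the field names and threshold value are illustrative assumptions:

```python
# Minimal sketch of anchor-based self-checking: compare a sensing
# result against known anchor information (ground truth) and flag
# degradation when the deviation exceeds a configured threshold.

def check_against_anchor(sensed_size_m: float,
                         anchor_size_m: float,
                         threshold_m: float = 0.2) -> bool:
    """Return True if the sensing result is within the allowed deviation."""
    return abs(sensed_size_m - anchor_size_m) <= threshold_m

# The anchor object is known (deployed/indicated) to be 1.50 m long.
if not check_against_anchor(sensed_size_m=1.85, anchor_size_m=1.50):
    print("deviation too large: send performance feedback / retraining request")
```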
The sensing anchor data collection function 760 is a function that provides input data to the data fusion function 770. Specifically, the sensing anchor data collection function 760 may use sensing anchors to collect sensing anchor data and then provide the sensing anchor data as input data to the data fusion function 770 for data fusion. The input data may include ground truth information. Ground truth refers to the true answer to a specific problem or question. For example, for channel prediction by AI, the ground truth is the exact channel information. Examples of input data may include measurements from UEs or different network entities. Here, the measurement may be an RF sensing measurement or a non-RF sensing measurement (LIDAR (Light Detection and Ranging) , camera, video, sensor, etc. ) .
The AI anchor data collection function 762 is a function that provides input data to the data fusion function 770. Specifically, the AI anchor data collection function 762 may use AI anchors to collect AI anchor data and then provide the AI anchor data as input data to the data fusion function 770 for data fusion. The input data may include ground truth information. Examples of input data may include measurement results from UEs or different network entities. Here, the measurement results are not obtained by sensing; e.g., they may be obtained by measurement of a reference signal.
The non-anchor data collection function 764 is a function that provides input data to the data fusion function 770. Specifically, the non-anchor data collection function 764 may use non-anchors to collect non-anchor data and then provide the non-anchor data as input data to the data fusion function 770 for data fusion. The input data does not include ground truth information. Examples of input data may include measurements from UEs or different network entities. Here, the measurement may be an RF sensing measurement, a non-RF sensing measurement (LIDAR (Light Detection and Ranging) , camera, video, sensor, etc. ) , or non-sensing data.
The data fusion function 770 is responsible for data processing. The data processing may include data pre-processing and cleaning, formatting, and transformation, as well as integrating multiple data sources to produce more useful information than that provided by any individual data source. The data fusion function 770 combines the input from the sensing anchor data collection function 760, AI anchor data collection function 762 and non-anchor data collection function 764, so as to derive fused data to be used in the training data 701, the monitoring data 703 and the inference data 705.
The model training function 740 is a function that performs the ML model training, validation, and testing, which may generate model performance metrics as part of the model testing procedure. The model training function 740 may also be responsible for sensing model training. The model training function 740 may also be responsible for data preparation (e.g. data pre-processing and cleaning, formatting, and transformation) . Interactions between the model training function 740 and other functions may include the training data 701, the trained/updated model 702 and the sensing information and/or AI assistance information 707.
Specifically, the model training function 740 receives the training data 701 from at least one of the AI anchor data collection function 762, sensing anchor data collection function 760, non-anchor data collection function 764 or data fusion function 770. In FIG. 7, the model training function 740 receives the training data 701 from the data fusion function 770. However, the output of at least one of the AI anchor data collection function 762, sensing anchor data collection function 760 or non-anchor data collection function 764 may also be sent to the model training function 740 as the training data 701.
After a trained AI model is obtained (e.g. by distributed learning) or updated (re-trained) , the model training function 740 may send the trained/updated (re-trained) model 702 to the model storage function 755.
The model training function 740 may receive sensing information and/or assistance information 707 from management function 745.
The management function 745 is a function that is responsible for performing model control on the model training function 740, model inference function 750 and sensing application function 752. The management function 745 may also monitor the model output of the AI and/or sensing model, and determine whether the AI or sensing model quality is applicable. If it is determined that the AI or sensing model quality is no longer applicable, the management function 745 may request the model training function 740 to re-train the AI and/or sensing model accordingly, and indicate the model inference function 750 and/or sensing application function 752 to switch the AI or sensing model.
For the AI and/or sensing model performance monitoring, the management function 745 may receive the monitoring data 703 from the data fusion function 770. The management function 745 may also receive the output of the model inference function 750 and/or the sensing application function 752. In FIG. 7, the output of the model inference function 750 is illustrated as output 710, and the output of the sensing application function 752 is illustrated as output 711. The output 710 may include the performance of the model inference, and/or the output 711 may include the performance of the sensing inference results.
If certain information derived from model inference function 750 and/or sensing application function 752 or information derived from the management function 745 is suitable for improvement of the AI/sensing model trained in the model training function 740, the performance feedback /re-model (retraining) request 704 is applied.
When the management function 745 observes that the performance of the current AI/sensing model is not good enough, the management function 745 may send the current AI/sensing performance to the model training function 740, including the current AI/sensing model output and its accuracy, etc. In addition, the management function 745 may also request the model training function 740 to retrain the AI/sensing model and send the retrained AI/sensing model to the model storage function 755, request the model inference function 750 to get an updated AI model, and request the sensing application function 752 to get an updated sensing model.
When the management function 745 observes that the performance of the current AI model is not good enough, it may send model switching signalling to the model inference function 750 to switch to another AI model, or send fallback signalling to indicate the model inference function 750 to use a non-AI mode.
When the management function 745 observes that the performance of the current sensing model is not good enough, it may send model switching signalling to the sensing application function 752 to switch to another sensing model, or send fallback signalling to indicate the sensing application function 752 to use a non-sensing mode.
When there are multiple candidate AI models, the management function 745 may indicate which AI model the model inference function 750 is to use, and activate or de-activate one or multiple of the candidate AI models. Also, when there are multiple candidate sensing models, the management function 745 may indicate which sensing model the sensing application function 752 is to use, and activate or de-activate one or multiple of the candidate sensing models.
The management function 745 may send an AI model transfer request 706 to the model storage function 755 to request an AI model for the model inference function 750. The AI model transfer request 706 may be an initial AI model transfer request for an initially trained AI/ML model or an updated AI model transfer request for an updated AI/ML model obtained by re-training an existing AI/ML model. Also, the management function 745 may send a sensing model transfer request 706 to the model storage function 755 to request a sensing model for the sensing application function 752. The sensing model transfer request 706 may be an initial sensing model transfer request for an initially trained sensing model or an updated sensing model transfer request for an updated sensing model obtained by re-training an existing sensing model.
The model inference function 750 is a function that provides inference results. The model inference function 750 is also responsible for performing actions according to the inference results. For example, the model inference function 750 may trigger or perform corresponding actions according to an inference decision, and it may trigger actions directed to other entities or to itself. The model inference function 750 may also be responsible for data preparation (e.g. data pre-processing and cleaning, formatting, and transformation) based on inference data delivered by a data collection function (i.e., data received from at least one of the sensing anchor data collection function 760, AI anchor data collection function 762, non-anchor data collection function 764 or data fusion function 770) .
The model inference function 750 may send the model inference results to the sensing application function 752 to assist sensing inference.
Sensing application function 752 is a function that provides sensing decision output or sensing inference output (e.g., predictions or detections, such as target detection, channel prediction, etc.). The sensing application function 752 is also responsible for performing actions according to sensing results. For example, the sensing application function 752 may trigger or perform corresponding actions according to a sensing decision or prediction, and it may trigger actions directed to other entities or to itself. The sensing application function 752 may also be responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation).
The sensing application function 752 may send the sensing results to the model inference function 750 to assist the model inference in producing better inference results.
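This mutual assistance between the two functions can be pictured, under assumed callable model interfaces, as the following sketch; the 'assist' keyword argument is an illustrative assumption, not a disclosed interface.

```python
def joint_inference(ai_model, sensing_model, inference_data, sensing_data):
    """Sketch of the mutual assistance between functions 750 and 752."""
    # Model inference function 750 produces inference results from prepared data.
    ai_result = ai_model(inference_data)
    # Sensing application function 752 uses the AI result to assist sensing inference.
    sensing_result = sensing_model(sensing_data, assist=ai_result)
    # The sensing result is fed back to assist the next round of model inference.
    refined_ai_result = ai_model(inference_data, assist=sensing_result)
    return refined_ai_result, sensing_result
```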
Model storage function 755 is a function that stores the AI/sensing models. The storage location can be within the RAN (e.g., on the BS and/or UE side) or outside the RAN (e.g., in the core network or at a third party).
Each function among the eight parts of the AI/ML functional framework 700 as illustrated in FIG. 7 may be located at a UE, a BS, the core network, or a third party. Different AI functions may be located at the same physical entity or at different physical entities.
With the AI/ML functional framework 700 and the flowchart of operations performed therein, an AI/ML functional framework (including the signaling interfaces among the functions) can be defined to support integrated AI and sensing, in which sensing improves AI performance and AI improves sensing performance.
FIG. 8 illustrates a block diagram of an electronic device (ED) 800 that may be used for implementing the devices and methods disclosed herein. In some embodiments, the electronic device 800 may be an element of communications network infrastructure, such as a base station (for example, a NodeB, an evolved NodeB (eNodeB or eNB), or a next generation NodeB (sometimes referred to as a gNodeB or gNB)), a home subscriber server (HSS), a gateway (GW) such as a packet gateway (PGW) or a serving gateway (SGW), or various other nodes or functions within a core network (CN) or a Public Land Mobile Network (PLMN). In other embodiments, the electronic device may be a device that connects to the network infrastructure over a radio interface, such as a mobile phone, smart phone or other such device that may be classified as a User Equipment (UE). In some embodiments, ED 800 may be a Machine Type Communications (MTC) device (also referred to as a machine-to-machine (M2M) device), or another such device that may be categorized as a UE despite not providing a direct service to a user. In some embodiments, ED 800 may be a road side unit (RSU), a vehicle UE (V-UE), a pedestrian UE (P-UE) or an infrastructure UE (I-UE). In some scenarios, an ED may also be referred to as a mobile device, a term intended to reflect devices that connect to a mobile network, regardless of whether the device itself is designed for, or capable of, mobility. Specific devices may utilize all of the components shown or only a subset of the components, and levels of integration may vary from device to device. Furthermore, a device may contain multiple instances of a component, such as multiple processors, memories, transmitters, receivers, etc.
The electronic device 800 typically includes a processor 802, such as a Central Processing Unit (CPU) , and may further include specialized processors such as a Graphics Processing Unit (GPU) or other such processor, a memory 804, a network interface 806 and a bus 808 to connect the components of ED 800. ED 800 may optionally also include components such as a mass storage device 810, a video adapter 812, and an I/O interface 816 (shown in dashed lines) .
The memory 804 may comprise any type of non-transitory system memory, readable by the processor 802, such as static random access memory (SRAM) , dynamic random access memory (DRAM) , synchronous DRAM (SDRAM) , read-only memory (ROM) , or a combination thereof. In an embodiment, the memory 804 may include more than one type of memory, such as ROM for use at boot-up, and DRAM for program and data storage for use while executing programs. The bus 808 may be one or more of any type of several bus architectures including a memory bus or memory controller, a peripheral bus, or a video bus.
The electronic device 800 may also include one or more network interfaces 806, which may include at least one of a wired network interface and a wireless network interface. As illustrated in FIG. 8, the network interface 806 may include a wired network interface to connect to a network 822, and may also include a radio access network interface 820 for connecting to other devices over a radio link. When ED 800 is a network infrastructure element, the radio access network interface 820 may be omitted for nodes or functions acting as elements of the PLMN other than those at the radio edge (e.g., an eNB). When ED 800 is infrastructure at the radio edge of a network, both wired and wireless network interfaces may be included. When ED 800 is a wirelessly connected device, such as a User Equipment, the radio access network interface 820 may be present, and it may be supplemented by other wireless interfaces such as WiFi network interfaces. The network interfaces 806 allow the electronic device 800 to communicate with remote entities such as those connected to network 822.
The mass storage 810 may comprise any type of non-transitory storage device configured to store data, programs, and other information and to make the data, programs, and other information accessible via the bus 808. The mass storage 810 may comprise, for example, one or more of a solid state drive, a hard disk drive, a magnetic disk drive, or an optical disk drive. In some embodiments, the mass storage 810 may be remote to the electronic device 800 and accessible through use of a network interface such as interface 806. In the illustrated embodiment, the mass storage 810, where included, is distinct from the memory 804, and may generally perform storage tasks that tolerate higher latency while generally providing lower volatility or no volatility. In some embodiments, the mass storage 810 may be integrated with a heterogeneous memory 804.
The optional video adapter 812 and the I/O interface 816 (shown in dashed lines) provide interfaces to couple the electronic device 800 to external input and output devices. Examples of input and output devices include a display 814 coupled to the video adapter 812 and an I/O device 818 such as a touch-screen coupled to the I/O interface 816. Other devices may be coupled to the electronic device 800, and additional or fewer interfaces may be utilized. For example, a serial interface such as Universal Serial Bus (USB) (not shown) may be used to provide an interface for an external device. Those skilled in the art will appreciate that in embodiments in which ED 800 is part of a data center, the I/O interface 816 and the video adapter 812 may be virtualized and provided through the network interface 806.
The embodiments of the present disclosure may be implemented by means of a software program so that the electronic device 800 may perform any process of the embodiments of the disclosure as discussed with reference to FIGS. 2-8. The embodiments of the present disclosure may also be implemented by hardware or by a combination of software and hardware.
In some example embodiments, the software program may be tangibly contained in a computer-readable medium which may be included in the electronic device 800 (such as in the memory 804 or mass storage 810) or other storage devices that are accessible by the electronic device 800. The electronic device 800 may load the software program from the computer-readable medium to the memory 804 for execution. The computer-readable medium may include any type of tangible non-volatile storage, such as a ROM, an EPROM, a flash memory, a hard disk, a CD, a DVD, and the like.
Generally, various embodiments of the present disclosure may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While various aspects of embodiments of the present disclosure are illustrated and described as block diagrams, flowcharts, or using some other pictorial representations, it is to be understood that the block, apparatus, system, technique or method described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
FIG. 9 illustrates a schematic diagram of a structure of an apparatus 900 in accordance with some embodiments of the present disclosure. As shown in FIG. 9, the apparatus 900 includes a performing unit 902. The apparatus 900 may be applied to the communication system as shown in FIG. 1, and may implement any of the methods provided in the foregoing embodiments. Optionally, a physical representation form of the apparatus 900 may comprise a communication device (for example, a network device, a UE, a core network device, or a third party device), or a part of the communication device. Alternatively, the apparatus 900 may be another apparatus that can implement a function of the communication device, for example, a processor or a chip inside the communication device. Specifically, the apparatus 900 may be a programmable chip such as a field-programmable gate array (FPGA), a complex programmable logic device (CPLD), an application-specific integrated circuit (ASIC), or a system on a chip (SoC).
In some embodiments, the performing unit 902 may be configured to perform at least one operation based on an AI/ML functional framework. The AI/ML functional framework may comprise at least one of a first function configured to determine first one or more devices for participating in a training process of an AI/ML model, a second function configured to determine second one or more devices for performing model monitoring or functionality monitoring of the AI/ML model, or a third function configured to determine third one or more devices for performing model inference based on the AI/ML model. In addition, the AI/ML functional framework may further comprise at least one of a fourth function configured to perform model training of the AI/ML model based on the training process, a fifth function configured to perform model management of the AI/ML model, a sixth function configured to provide at least one inference result of the model inference, a seventh function configured to provide first input data to the first function, provide second input data to the second function, and provide third input data to the third function, or an eighth function configured to store the AI/ML model.
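As a purely illustrative sketch, the performing unit 902 may be pictured as dispatching among registered framework functions, each of which may be placed at a different entity (e.g., a UE, a BS, the core network, or a third party). The class and method names below are assumptions made for illustration only.

```python
from typing import Callable


class PerformingUnitSketch:
    """Illustrative dispatcher: framework functions keyed by name, each with an
    assumed placement label (e.g. "UE", "BS", "core network", "third party")."""

    def __init__(self) -> None:
        self._functions: dict[str, tuple[Callable, str]] = {}

    def register(self, name: str, fn: Callable, location: str) -> None:
        # Record the function together with the entity at which it is hosted.
        self._functions[name] = (fn, location)

    def perform(self, name: str, *args, **kwargs):
        fn, _location = self._functions[name]
        return fn(*args, **kwargs)


# Example: register a model training function hosted at a base station.
unit = PerformingUnitSketch()
unit.register("model_training", lambda data: f"trained on {len(data)} samples", "BS")
print(unit.perform("model_training", [1, 2, 3]))  # -> "trained on 3 samples"
```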
In some other embodiments, the apparatus 900 can include various other units or modules which may be configured to perform various operations or functions as described in connection with the foregoing method embodiments. The details can be obtained referring to the detailed description of the foregoing method embodiments and are not described herein again.
It should be noted that division into the units or modules in the foregoing embodiments of the present disclosure is an example and is merely logical function division; in actual implementation, another division manner may be used. In addition, function units in embodiments of the present disclosure may be integrated into one performing unit, or may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in a form of hardware, or may be implemented in a form of a software function unit.
The present disclosure also provides at least one computer program product tangibly stored on a non-transitory computer-readable storage medium. The computer program product includes computer-executable instructions, such as those included in program modules, being executed in a device on a target real or virtual processor, to carry out the method 200 or the flowchart as described above with reference to FIGS. 2-8. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, or the like that perform particular tasks or implement  particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Machine-executable instructions for program modules may be executed within a local or distributed device. In a distributed device, program modules may be located in both local and remote storage media.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present disclosure, the computer program codes or related data may be carried by any suitable carrier to enable the device, apparatus or processor to perform various processes and operations as described above. Examples of the carrier include a signal, computer-readable medium, and the like.
The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable medium may include but not limited to an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the computer-readable storage medium would include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM) , a read-only memory (ROM) , an erasable programmable read-only memory (EPROM or Flash memory) , an optical fiber, a portable compact disc read-only memory (CD-ROM) , an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are contained in the above discussions, these should not be construed as limitations on the scope of the present disclosure, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the present disclosure has been described in languages specific to structural features and/or methodological acts, it is to be understood that the present disclosure defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (76)

  1. A method comprising:
    performing at least one operation based on an artificial intelligence/machine learning (AI/ML) functional framework, wherein the AI/ML functional framework comprises:
    a first function configured to perform model training of at least one of an AI/ML model, an AI/ML sub-model, an AI/ML functionality or an AI/ML sub-functionality;
    a second function configured to perform management of the AI/ML model;
    a third function configured to perform inference of the AI/ML model to obtain inference results;
    a fourth function configured to store the AI/ML model; and
    at least one function configured to operate based on sensing data.
  2. The method of claim 1, wherein the first function is further configured to perform at least one of the following:
    validation of the AI/ML model;
    testing of the AI/ML model; or
    data preparation based on data received by the first function.
  3. The method of claim 1 or 2, wherein the second function is further configured to at least one of the following:
    perform control of the model training of the at least one of AI/ML model, AI/ML sub-model, AI/ML functionality or AI/ML sub-functionality;
    perform control of the inference of the AI/ML model; or
    monitor output of the AI/ML model.
  4. The method of any of claims 1-3, wherein the third function is further configured to at least one of the following:
    perform an action based on the inference results; or
    perform data preparation based on data received by the third function.
  5. The method of any of claims 1-4, wherein the at least one operation comprises at least one of the following operations performed by the first function:
    transmitting the trained AI/ML model to the fourth function,
    receiving AI/ML assistance information from the second function, or
    receiving, from the second function, a performance level of the AI/ML model and a request to retrain the AI/ML model.
  6. The method of any of claims 1-5, wherein the at least one operation comprises the following operations performed by the second function:
    receiving the inference results from the third function.
  7. The method of claim 6, wherein the at least one operation further comprises the following operations performed by the second function:
    determining that a performance level of the AI/ML model is below a threshold level based on the inference results received from the third function; and
    based on determining that the performance level is below the threshold level, transmitting, to the first function, the performance level of the AI/ML model and a request to retrain the AI/ML model.
  8. The method of any of claims 1-7, wherein the at least one operation comprises at least one of the following operations performed by the second function:
    transmitting AI/ML assistance information to the first function,
    transmitting, to the third function, a switching indication to switch from the AI/ML model to another AI/ML model;
    transmitting, to the third function, a fallback indication to apply a non-AI/ML model instead of the AI/ML model;
    transmitting, to the third function, an activating indication to activate one or more of a plurality of candidate AI/ML models; or
    transmitting, to the third function, a deactivating indication to deactivate one or more of the plurality of candidate AI/ML models.
  9. The method of any of claims 1-8, wherein the at least one operation comprises the following operation performed by the second function:
    transmitting, to the fourth function, a request that the fourth function transmits the AI/ML model to the third function.
  10. The method of any of claims 1-9, wherein the at least one operation comprises the following operations performed by the third function:
    transmitting the inference results to the second function.
  11. The method of any of claims 1-10, wherein the at least one operation comprises at least one of the following operations performed by the third function:
    receiving, from the second function, a switching indication to switch from the AI/ML model to another AI/ML model;
    receiving, from the second function, a fallback indication to apply a non-AI/ML model instead of the AI/ML model;
    receiving, from the second function, an activating indication to activate one or more of a plurality of candidate AI/ML models; or
    receiving, from the second function, a deactivating indication to deactivate one or more of the plurality of candidate AI/ML models.
  12. The method of any of claims 1-11, wherein the at least one operation comprises the following operation performed by the third function:
    receiving the AI/ML model from the fourth function.
  13. The method of any of claims 1-12, wherein the AI/ML functional framework further comprises:
    a fifth function configured to collect non-sensing data.
  14. The method of claim 13, wherein the at least one function comprises:
    a sixth function configured to collect radio frequency (RF) sensing data;
    a seventh function configured to collect non-RF sensing data; and
    an eighth function configured to obtain fused data based on the RF sensing data and the non-RF sensing data.
  15. The method of claim 14, wherein the RF sensing is one of:
    3rd generation partnership project (3GPP) defined RF sensing, or
    non-3GPP defined RF sensing.
  16. The method of claim 14 or 15, wherein the seventh function is further configured to collect the non-RF sensing data using at least one of light detection and ranging (LIDAR) , non-3GPP defined RF sensing, wireless fidelity (WiFi) sensing, camera (s) , video (s) , or sensor (s) .
  17. The method of any of claims 13-16, wherein the at least one operation comprises the following operation performed by the first function:
    receiving first input data from at least one of the fifth function, the sixth function, the seventh function or the eighth function.
  18. The method of any of claims 13-17, wherein the at least one operation comprises the following operation performed by the second function:
    receiving second input data from at least one of the fifth function, the sixth function, the seventh function or the eighth function.
  19. The method of any of claims 13-18, wherein the at least one operation comprises the following operation performed by the third function:
    receiving third input data from at least one of the fifth function, the sixth function, the seventh function or the eighth function.
  20. The method of any of claims 13-19, wherein the at least one operation comprises the following operation performed by the fifth function:
    transmitting the non-sensing data to at least one of the first function, the second function or the third function.
  21. The method of any of claims 13-20, wherein the at least one operation comprises the following operation performed by the sixth function:
    transmitting the RF sensing data to at least one of the first function, the second function or the third function.
  22. The method of any of claims 13-21, wherein the at least one operation comprises the following operation performed by the seventh function:
    transmitting the non-RF sensing data to at least one of the first function, the second function or the third function.
  23. The method of any of claims 13-22, wherein the at least one operation comprises the following operations performed by the eighth function:
    receiving the RF sensing data from the sixth function,
    receiving the non-RF sensing data from the seventh function, and
    performing data processing on the received RF sensing data and non-RF sensing data to obtain the fused data.
  24. The method of claim 23, wherein the at least one operation further comprises the following operation performed by the eighth function:
    transmitting the fused data to at least one of the first function, the second function or the third function.
  25. The method of claim 13, wherein the at least one function comprises:
    a ninth function configured to collect the sensing data; and
    a tenth function configured to obtain fused data based on the non-sensing data and the sensing data.
  26. The method of claim 25, wherein the at least one function further comprises at least one of the following:
    an eleventh function configured to obtain a sensing model or a sensing result;
    a twelfth function configured to perform management of the sensing model or sensing result; or
    a thirteenth function configured to assist communication or determine an event based on the sensing model or sensing result.
  27. The method of claim 26, wherein the at least one function further comprises:
    a fourteenth function configured to store the sensing model or the sensing result.
  28. The method of any of claims 25-27, wherein the at least one operation comprises at least one of the following operations performed by the first function:
    receiving first input data from at least one of the fifth function, the ninth function or the tenth function.
  29. The method of any of claims 25-28, wherein the at least one operation comprises the following operation performed by the second function:
    receiving second input data from at least one of the fifth function, the ninth function or the tenth function.
  30. The method of any of claims 25-29, wherein the at least one operation comprises the following operation performed by the third function:
    receiving third input data from at least one of the fifth function, the ninth function or the tenth function.
  31. The method of any of claims 25-30, wherein the at least one operation comprises the following operation performed by the fifth function:
    transmitting the non-sensing data to at least one of the first function, the second function or the third function, and at least one of the eleventh function, the twelfth function or the thirteenth function.
  32. The method of any of claims 25-31, wherein the at least one operation comprises the following operation performed by the ninth function:
    transmitting the sensing data to at least one of the first function, the second function or the third function, and at least one of the eleventh function, the twelfth function or the thirteenth function.
  33. The method of any of claims 25-32, wherein the at least one operation comprises the following operations performed by the tenth function:
    receiving the non-sensing data from the fifth function,
    receiving the sensing data from the ninth function, and
    performing data processing on the received non-sensing data and sensing data to obtain the fused data.
  34. The method of claim 33, wherein the at least one operation further comprises the following operation performed by the tenth function:
    transmitting the fused data to at least one of the first function, the second function or the third function, and at least one of the eleventh function, the twelfth function or the thirteenth function.
  35. The method of any of claims 26-34, wherein the eleventh function is further configured to:
    perform data processing based on fourth input data obtained from at least two of the fifth function, the ninth function or the tenth function.
  36. The method of any of claims 26-35, wherein the model training of the at least one of sensing model, sensing sub-model, sensing functionality or sensing sub-functionality comprises at least one of the following:
    environment reconstruction, channel reconstruction, target reconstruction, digital twin, or object detection.
  37. The method of any of claims 26-36, wherein the twelfth function is further configured to at least one of the following:
    perform control of the model training of the at least one of sensing model, sensing sub-model, sensing functionality or sensing sub-functionality;
    perform control of the inference of the sensing model; or
    monitor output of the sensing model.
  38. The method of any of claims 26-37, wherein the thirteenth function is further configured to:
    perform data preparation based on sixth input data obtained from at least one of the fifth function, the ninth function or the tenth function.
  39. The method of any of claims 35-38, wherein the at least one operation comprises at least one of the following operations performed by the eleventh function:
    receiving the fourth input data from at least one of the fifth function, the ninth function or the tenth function;
    receiving, from the twelfth function, a performance level of the sensing model and a request to retrain the sensing model;
    receiving the sensing inference results from the thirteenth function,
    receiving sensing information from the twelfth function, or
    transmitting the trained or retrained sensing model to the fourteenth function.
  40. The method of any of claims 26-39, wherein the at least one operation further comprises the following operation performed by the eleventh function:
    receiving the inference results from the third function.
  41. The method of any of claims 26-40, wherein the at least one operation comprises the following operations performed by the twelfth function:
    receiving fifth input data from at least one of the fifth function, the ninth function or the tenth function; and
    receiving the sensing inference results from the thirteenth function.
  42. The method of claim 41, wherein the at least one operation further comprises the following operations performed by the twelfth function:
    determining that a performance level of the sensing model is below a threshold level based on the sensing inference results received from the thirteenth function; and
    based on determining that the performance level is below the threshold level, transmitting, to the eleventh function, the performance level of the sensing model and a request to retrain the sensing model.
  43. The method of any of claims 26-42, wherein the at least one operation comprises at least one of the following operations performed by the twelfth function:
    transmitting sensing information to the eleventh function,
    transmitting, to the thirteenth function, a switching indication to switch from the sensing model to another sensing model;
    transmitting, to the thirteenth function, a fallback indication to apply a non-sensing model instead of the sensing model;
    transmitting, to the thirteenth function, an activating indication to activate one or more of a plurality of candidate sensing models; or
    transmitting, to the thirteenth function, a deactivating indication to deactivate one or more of the plurality of candidate sensing models.
  44. The method of any of claims 26-43, wherein the at least one operation comprises the following operation performed by the twelfth function:
    transmitting, to the fourteenth function, a request that the fourteenth function transmits the sensing model to the thirteenth function.
  45. The method of any of claims 26-44, wherein the at least one operation comprises the following operation performed by the twelfth function:
    receiving the inference results from the third function.
  46. The method of any of claims 26-45, wherein the at least one operation comprises the following operations performed by the thirteenth function:
    receiving sixth input data from at least one of the fifth function, the ninth function or the tenth function; and
    transmitting the sensing inference results to the twelfth function.
  47. The method of any of claims 26-46, wherein the at least one operation further comprises at least one of the following operations performed by the thirteenth function:
    transmitting the sensing inference results to at least one of the first function, the second function or the third function, or
    receiving the sensing model from the fourteenth function.
  48. The method of any of claims 26-47, wherein the at least one operation comprises at least one of the following operations performed by the thirteenth function:
    receiving, from the twelfth function, a switching indication to switch from the sensing model to another sensing model;
    receiving, from the twelfth function, a fallback indication to apply a non-sensing model instead of the sensing model;
    receiving, from the twelfth function, an activating indication to activate one or more of a plurality of candidate sensing models; or
    receiving, from the twelfth function, a deactivating indication to deactivate one or more of the plurality of candidate sensing models.
  49. The method of any of claims 27-46, wherein the at least one operation comprises at least one of the following operations performed by the fourteenth function:
    receiving the trained sensing model from the eleventh function; or
    based on receiving, from the twelfth function, a request that the fourteenth function transmits the sensing model to the thirteenth function, transmitting the sensing model to the thirteenth function.
  50. The method of claim 49, wherein the request comprises at least one of the following:
    a model ID of the requested sensing model,
    a sensing functionality ID for the requested sensing functionality, or
    a sensing performance requirement indicating the requested sensing performance.
  51. The method of any of claims 1-12, wherein the AI/ML functional framework further comprises:
    a fifteenth function configured to perform sensing inference to obtain a sensing result,
    wherein the first function is further configured to perform model training of at least one of a sensing model, a sensing sub-model, a sensing functionality or a sensing sub-functionality, and the second function is further configured to perform management of the sensing model.
  52. The method of claim 51, wherein the at least one function further comprises:
    a sixteenth function configured to obtain fused data.
  53. The method of claim 51 or 52, wherein the first function is further configured to:
    perform data preparation based on seventh input data obtained from the sixteenth function.
  54. The method of any of claims 51-53, wherein the second function is further configured to at least one of the following:
    perform control of the model training of the at least one of sensing model, sensing sub-model, sensing functionality or sensing sub-functionality;
    perform control of the inference of the sensing model; or
    monitor output of the sensing model.
  55. The method of any of claims 51-54, wherein the at least one operation comprises at least one of the following operations performed by the first function:
    receiving the seventh input data from the sixteenth function;
    receiving, from the second function, a performance level of the sensing model and a request to retrain the sensing model;
    receiving sensing information from the second function, or
    transmitting the trained or retrained sensing model to the fourth function.
  56. The method of any of claims 51-55, wherein the at least one operation comprises the following operations performed by the second function:
    receiving eighth input data from the sixteenth function; and
    receiving the sensing inference results from the fifteenth function.
  57. The method of claim 56, wherein the at least one operation further comprises the following operations performed by the second function:
    determining that a performance level of the sensing model is below a threshold level based on the sensing inference results received from the fifteenth function; and
    based on determining that the performance level is below the threshold level, transmitting, to the first function, the performance level of the sensing model and a request to retrain the sensing model.
  58. The method of any of claims 51-57, wherein the at least one operation comprises at least one of the following operations performed by the second function:
    transmitting sensing information to the first function,
    transmitting, to the fifteenth function, a switching indication to switch from the sensing model to another sensing model;
    transmitting, to the fifteenth function, a fallback indication to apply a non-sensing model instead of the sensing model;
    transmitting, to the fifteenth function, an activating indication to activate one or more of a plurality of candidate sensing models; or
    transmitting, to the fifteenth function, a deactivating indication to deactivate one or more of the plurality of candidate sensing models.
  59. The method of any of claims 51-58, wherein the at least one operation comprises the following operation performed by the second function:
    transmitting, to the fourth function, a request that the fourth function transmits the sensing model to the fifteenth function.
  60. The method of any of claims 51-59, wherein the at least one operation comprises the following operation performed by the third function:
    receiving ninth input data from the sixteenth function.
  61. The method of any of claims 51-59, wherein the at least one operation comprises at least one of the following operations performed by the third function:
    transmitting the inference results to the fifteenth function, or
    receiving the sensing result from the fifteenth function.
  62. The method of any of claims 51-61, wherein the at least one operation comprises the following operations performed by the fifteenth function:
    receiving tenth input data from at least one of the sixteenth function or the nineteenth function; and
    receiving the sensing model from the fourth function.
  63. The method of any of claims 51-62, wherein the at least one operation further comprises at least one of the following operations performed by the fifteenth function:
    receiving the inference results from the second function, or
    transmitting the sensing inference results to the second function.
  64. The method of any of claims 51-63, wherein the at least one operation comprises at least one of the following operations performed by the fifteenth function:
    receiving, from the second function, a switching indication to switch from the sensing model to another sensing model;
    receiving, from the second function, a fallback indication to apply a non-sensing model instead of the sensing model;
    receiving, from the second function, an activating indication to activate one or more of a plurality of candidate sensing models; or
    receiving, from the second function, a deactivating indication to deactivate one or more of the plurality of candidate sensing models.
  65. The method of any of claims 52-64, wherein the AI/ML functional framework further comprises:
    a seventeenth function configured to collect non-sensing data, and
    the at least one function further comprises:
    an eighteenth function configured to collect sensing data.
  66. The method of any of claims 51-65, wherein the at least one operation comprises the following operations performed by the sixteenth function:
    receiving the non-sensing data from the seventeenth function,
    receiving the sensing data from the eighteenth function, and
    performing data processing on the received non-sensing data and sensing data to obtain the fused data.
  67. The method of claim 66, wherein the at least one operation further comprises the following operation performed by the sixteenth function:
    transmitting the fused data to at least one of the first function, the second function, the third function or the fifteenth function.
  68. The method of any of claims 52-64, wherein the AI/ML functional framework further comprises at least two of:
    a nineteenth function configured to provide ground-truth sensing data,
    a twentieth function configured to provide non-ground-truth sensing data, or
    a twenty-first function configured to provide non-sensing ground-truth data.
  69. The method of claim 68, wherein the at least one operation comprises the following operations performed by the sixteenth function:
    receiving at least two of: ground-truth sensing data from the nineteenth function, the non-ground-truth sensing data from the twentieth function, or the non-sensing ground-truth data from the twenty-first function, and
    performing data processing on the received data to obtain the fused data.
  70. The method of claim 69, wherein the at least one operation further comprises the following operation performed by the sixteenth function:
    transmitting the fused data to at least one of the first function, the second function, the third function or the fifteenth function.
  71. The method of any of claims 23, 33, 35, 55, 66 and 69, wherein the data processing comprises at least one of the following:
    data pre-processing, data cleaning, data formatting, data transformation, or data integration.
  72. The method of any of claims 1-71, wherein at least one of the first function, the second function, the third function, the fourth function, the fifth function, the sixth function, the seventh function, the eighth function, the ninth function, the tenth function, the eleventh function, the twelfth function, the thirteenth function, the fourteenth function, the fifteenth function, the sixteenth function, the seventeenth function, the eighteenth function, the nineteenth function, the twentieth function or the twenty-first function is implemented in one of the following:
    a terminal device, an access network device, a core network device, or a third party device.
  73. An apparatus comprising:
    a transceiver; and
    a processor communicatively coupled with the transceiver,
    wherein the processor is configured to perform at least one operation based on an artificial intelligence/machine learning (AI/ML) functional framework,
    wherein the AI/ML functional framework comprises:
    a first function configured to perform model training of at least one of an AI/ML model, an AI/ML sub-model, an AI/ML functionality or an AI/ML sub-functionality;
    a second function configured to perform management of the AI/ML model;
    a third function configured to perform inference of the AI/ML model;
    a fourth function configured to store the AI/ML model; and
    at least one function configured to operate based on sensing data.
  74. A non-transitory computer readable medium comprising computer program stored thereon, the computer program, when executed on at least one processor, causing the at least one processor to perform the method of any of claims 1-72.
  75. A chip comprising at least one processing circuit configured to perform the method of any of claims 1-72.
  76. A computer program product tangibly stored on a computer-readable medium and comprising computer-executable instructions which, when executed, cause an apparatus to perform the method of any of claims 1-72.