WO2025156420A1 - Feature predictions for communication devices, apparatus, and computer-readable medium - Google Patents
Info
- Publication number
- WO2025156420A1 (PCT application PCT/CN2024/085767)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- function
- time
- inference
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W64/00—Locating users or terminals or network equipment for network management purposes, e.g. mobility management
- H04W64/006—Locating users or terminals or network equipment for network management purposes, e.g. mobility management with additional information processing, e.g. for direction or speed determination
Definitions
- This disclosure is generally related to wireless communication, and more particularly to positioning prediction and/or beam management.
- Artificial intelligence includes devices, components, software, and modules that have self-learning capabilities, such as machine learning (ML), deep learning, reinforcement learning, transfer learning, deep reinforcement learning, and meta-learning.
- artificial intelligence is implemented by using an artificial intelligence network (or referred to as a neural network) .
- the neural network includes multiple layers, and each layer includes at least one node.
- the neural network includes an input layer, an output layer, and at least one hidden layer.
- each layer of the neural network includes, but is not limited to, at least one of: a fully connected (full connection) layer, a dense layer, a convolutional layer, a transposed convolutional layer, a direct connection layer, an activation function, a normalization layer, or a pooling layer.
- each layer of the neural network may include one sub-neural network, such as a residual network (ResNet) block, a dense network (DenseNet) block, or a recurrent neural network (RNN).
- the artificial intelligence network includes a neural network model and/or a neural network parameter corresponding to the neural network model.
- the neural network model may be referred to as a network model in short, and the neural network parameter may be referred to as a network parameter in short.
- a network model may define an architecture of a network, such as the quantity of layers of the neural network, the size of each layer, an activation function, a link status, a convolution kernel size and convolution stride, and a convolution type (for example, 1D convolution, 2D convolution, 3D convolution, dilated (hollow) convolution, transposed convolution, divided convolution, grouped (packet) convolution, or extended convolution).
- a network parameter is a weight and/or an offset (bias) of each layer in the network model, together with the value of that weight and/or offset.
- One network model may correspond to a plurality of different sets of neural network parameter values to adapt to different scenarios.
- the value of the network parameter may be obtained through offline training and/or online training.
- the neural network parameter is obtained by training the neural network model by inputting at least one sample and a label.
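As a minimal illustration of the distinction between a network model (architecture) and its parameter values, the following PyTorch sketch builds one small model and swaps between two parameter sets; the layer sizes and variable names are illustrative assumptions, not values taken from this disclosure.

```python
import torch
import torch.nn as nn

# Network "model": the architecture (number of layers, layer sizes, activation).
# The sizes below are arbitrary and chosen only for illustration.
model = nn.Sequential(
    nn.Linear(8, 32),   # input layer -> hidden layer
    nn.ReLU(),          # activation function
    nn.Linear(32, 2),   # hidden layer -> output layer
)

# Network "parameters": the weights and offsets (biases) of each layer,
# together with their values.
params_scenario_a = {k: v.clone() for k, v in model.state_dict().items()}

# One model may correspond to several parameter-value sets for different
# scenarios, e.g., each obtained by training on scenario-specific samples/labels.
params_scenario_b = {k: torch.randn_like(v) for k, v in model.state_dict().items()}

# Adapting to a different scenario = loading another parameter set into the same model.
model.load_state_dict(params_scenario_b)
```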
- AI/ML (Artificial Intelligence/Machine Learning) is a promising enhancement direction for a mobile communication system.
- the system operating efficiency is expected to be improved, for example, reducing the overhead of reference signal via AI/ML inference and prediction.
- the accuracy of terminal positioning is expected to be enhanced.
- Other possible benefits are expected from adopting AI/ML into, or fully merging it into, the mobile communication system.
- 'model' is used as a general term describing that a device in a mobile system is capable of performing a processing method, a functionality, a feature, or a feature group.
- a 'model' can be a functionality, function, functionality module, function module, processing method, information processing method, implementation, feature, feature group, configuration, configuration set, dataset (e.g., for model training), or data-driven algorithm.
- a wireless communication method includes providing a first function; providing first data of time T1 and at least one of second data of time T2 and third data of time T3 to the first function, wherein time T2 precedes time T1 and time T3 follows time T1; and performing inference by the first function or performing training of the first function according to the first data and at least one of the second data or the third data to generate a result.
- another wireless communication method includes providing a first function; providing data of beam features to the first function; and performing inference by the first function or performing training of the first function according to the data to obtain a result, the result including a positioning result or an intermediate result.
- Still another embodiment of this disclosure provides a wireless communication apparatus, including one or more memory units storing one or more programs and one or more processors electrically coupled to the one or more memory units and configured to execute the one or more programs to perform any method or step or their combinations in this disclosure.
- Still another embodiment of this disclosure provides a non-transitory computer-readable storage medium storing one or more programs, the one or more programs being configured to, when executed by at least one processor, cause the at least one processor to perform any method or step or their combinations in this disclosure.
- one or more wireless communication methods are further disclosed, the methods include combinations of certain methods, aspects, elements, and steps (either in a generic view or specific view) disclosed in the various embodiments of this disclosure.
- FIG. 1 shows a schematic diagram of various embodiments in the present disclosure.
- FIG. 2A shows a schematic diagram of an exemplary embodiment in the present disclosure.
- FIG. 2B shows a schematic diagram of another exemplary embodiment in the present disclosure.
- FIG. 2C shows a schematic diagram of another exemplary embodiment in the present disclosure.
- FIG. 2D shows a schematic diagram of another exemplary embodiment in the present disclosure.
- FIG. 2E shows a schematic diagram of another exemplary embodiment in the present disclosure.
- FIGS. 3A and 3B show a diagram of an AI positioning model for training and inference with historical data.
- FIGS. 4A and 4B show a diagram of an AI positioning model for training and inference with historical data and prediction data.
- FIG. 5 shows a diagram of sharing of assistance information.
- FIGS. 6A and 6B show a diagram of an AI positioning model for training and inference with historical data of timing measurements and beam measurements.
- FIGS. 7A and 7B show a diagram of an AI positioning model for training and inference with historical data and prediction data of timing measurements and beam measurements.
- FIGS. 8A and 8B show a diagram of an AI positioning and beam management model for training and inference with historical data and prediction data of timing measurements and beam measurements.
- FIGs. 9A-9C together illustrate a block diagram of an exemplary wireless communication system.
- FIG. 1 shows an exemplary schematic for a basic AI/ML (Artificial Intelligence/Machine Learning) framework used in communication systems.
- the general framework may include a data collection 110, a model training 120, a management 130, an inference 140, and/or a model storage 150.
- the data collection includes a data collector.
- the data collection is a function that provides input data to other parts of the framework, such as the model training, the management, and the inference functions.
- the data used as the input for the AI/ML model training function may include training data.
- the data used as the input for the management of AI/ML models or AI/ML functionalities include monitoring data.
- the data used as the input for the AI/ML inference function include inference data.
- the data collection can provide data preparation including data pre-processing and cleaning, formatting, and transformation.
- a data generation function may be performed in advance of the data collection function in some devices and may provide the measurement data or other data to the data collection function.
- the model training may include a model trainer and is a function that performs AI/ML model training, validation, and testing.
- the model training function may also be responsible for data pre-processing and cleaning, formatting, and transformation based on training data delivered by the data collection function when such data preparation is not done in the data collection function. Trained, validated, and tested AI/ML models are delivered to the model storage function.
- the management is a function that oversees the operation (e.g., selection/ (de) activation/switching/fallback) and monitoring (e.g., performance) of AI/ML models or AI/ML functionalities.
- This function is also responsible for making decisions to ensure the proper inference operation based on data received from the data collection function and the inference function.
- Management instructions may include the information needed as input to manage the inference function.
- a model transfer/delivery request may be used to request model(s) from the model storage function.
- Performance feedback/retraining request includes the information needed as input for the model training function, e.g., for model (re) training or updating purposes.
- the inference is a function that provides outputs from the process of applying AI/ML models or AI/ML functionalities, using the data that is provided by the data collection function (i.e., inference data) as an input.
- the inference function is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on inference data delivered by a data collection function, when the above data preparation work is not done in the data collection function.
- Inference output is the final output of the whole AI/ML model or functionality to be used for other parts in mobile system.
- the output data can also be internally used by the management function to monitor the performance of AI/ML models or AI/ML functionalities.
- model storage is a function responsible for storing trained/updated models that can be used to perform the inference function.
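The following plain-Python sketch is one possible way to picture how the five functions above could exchange data; the class and method names are assumptions made for illustration only and do not come from this disclosure.

```python
from typing import Any, Callable, Dict, List

class DataCollection:
    """Provides training, monitoring, and inference data (optionally pre-cleaned)."""
    def __init__(self, source: Callable[[], List[Dict[str, Any]]]):
        self.source = source  # e.g., a data generation function supplying measurements
    def training_data(self) -> List[Dict[str, Any]]: return self.source()
    def monitoring_data(self) -> List[Dict[str, Any]]: return self.source()
    def inference_data(self) -> List[Dict[str, Any]]: return self.source()

class ModelStorage:
    """Stores trained/updated models so the inference function can retrieve them."""
    def __init__(self): self._models: Dict[str, Any] = {}
    def put(self, name: str, model: Any) -> None: self._models[name] = model
    def get(self, name: str) -> Any: return self._models[name]

class ModelTraining:
    """Trains/validates/tests a model and delivers it to the model storage."""
    def train(self, data: List[Dict[str, Any]], storage: ModelStorage, name: str = "default") -> None:
        model = lambda sample: sample  # placeholder standing in for a trained AI/ML model
        storage.put(name, model)

class Inference:
    """Applies a stored model to inference data and produces the inference output."""
    def run(self, data: List[Dict[str, Any]], storage: ModelStorage, name: str = "default"):
        model = storage.get(name)
        return [model(sample) for sample in data]

class Management:
    """Monitors performance and decides on selection/(de)activation/switching/fallback."""
    def monitor(self, monitoring_data, inference_output) -> Dict[str, Any]:
        return {"retraining_request": False}  # e.g., performance feedback to model training
```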
- data cleaning is a process of preparing data for analysis by modifying, adding, or deleting data. This process is also referred to as data preprocessing.
- the quality of the data after pre-processing may directly affect all the conclusions and opinions obtained from the data or the trained model.
- Most real-life data needs preprocessing in many places, such as missing values and non-informative features, so the data may always need to be cleaned before being used to achieve the best results.
- the typical preprocessing ways/means include handling missing values, encoding categorical features, outlier detection, and transformations.
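A minimal NumPy sketch of the preprocessing steps listed above (missing-value handling, outlier detection, transformation, and categorical encoding), assuming a one-dimensional array of RSRP-like samples; the concrete thresholds and helper names are illustrative choices, not defined by this disclosure.

```python
import numpy as np

def clean_measurements(rsrp_dbm: np.ndarray) -> np.ndarray:
    """Illustrative cleaning of a 1-D array of RSRP-like samples (example values only)."""
    x = rsrp_dbm.astype(float)

    # Handle missing values: replace NaNs with the median of the observed samples.
    x = np.where(np.isnan(x), np.nanmedian(x), x)

    # Outlier detection: clip samples more than 3 standard deviations from the mean.
    mu, sigma = x.mean(), x.std()
    x = np.clip(x, mu - 3 * sigma, mu + 3 * sigma)

    # Transformation: normalize to zero mean / unit variance before use as model input.
    return (x - x.mean()) / (x.std() + 1e-9)

def one_hot(category_ids: np.ndarray, num_categories: int) -> np.ndarray:
    """Encode a categorical feature (e.g., a cell or beam index) as one-hot vectors."""
    return np.eye(num_categories)[category_ids]
```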
- the data cleaning is preferably handled in the data collection function.
- the data after cleaning may be used as input to the model training, inference, and/or management functions.
- the data cleaning may also be separately located in the model training, inference, and/or management functions when the data collection function doesn't provide the data after cleaning.
- the data cleaning may also be located in the data generation function, which precedes the data collection function.
- most of the data for training, inference, and management comes from measurements at a base station (BS) or user equipment (UE), for example, received signal strength indicator (RSSI), reference signal received power (RSRP), reference signal received quality (RSRQ), and signal to interference noise ratio (SINR); some data may come from device-specific data in the mobile system, for example, the location information of the UE. Other data sources are not precluded.
- the measurement results may be generated in the data generation function or directly forwarded for data collection, and cleaned before training, inference, and management. When the measurements occur in the BS or UE, the data cleaning is preferably handled in the BS or UE, together with the measurements, data generation, or data collection.
- when the training, inference, or management also occurs in the BS or UE, the data cleaning is handled in the BS or UE together with data generation, data collection, training, inference, or management.
- when training, inference, or management doesn't occur in the BS or UE but in another device, e.g., the core network (CN) or OAM (operations, administration, and maintenance), the data cleaning may be handled in that other device.
- the term UE may be replaced by (or include) a positioning reference unit (PRU).
- various embodiments/implementations in the present disclosure may use position/location as non-limiting examples, and may be applicable to other physical or environmental parameters.
- the AI/ML is intended to enhance the accuracy of UE positioning. Depending on which device performs model training/inference, whether the positioning mode is direct or assisted, and which device outputs the final UE location, AI/ML positioning can be described by some or all of the following cases.
- the location management function (LMF) is a new entity that is central in the 5G positioning architecture.
- the core network may include the LMF, which receives measurements and assistance information from a base station (BS) (e.g., a next generation radio access network (NG-RAN) node) and/or the mobile device (or user equipment (UE) ) to compute the position of the UE.
- the communication to/from LMF may be via an access and mobility management function (AMF) over the NLs interface.
- the application in FIG. 2A includes UE-based positioning with UE-side model, direct AI/ML or AI/ML assisted positioning for model training 210, and/or model inference 215.
- the application in FIG. 2B includes UE-assisted/LMF-based positioning with UE-side model, AI/ML assisted positioning for model training 220, and/or model inference 225.
- the application in FIG. 2C includes UE-assisted/LMF-based positioning with LMF-side model, direct AI/ML positioning for model training 230, and/or model inference 235.
- the application in FIG. 2D includes an NG-RAN node assisted positioning with gNB-side model, AI/ML assisted positioning for model training 240, and/or model inference 245.
- the application in FIG. 2E includes a NG-RAN node assisted positioning with LMF-side model, direct AI/ML positioning for model training 250, and/or model inference 255.
- the AI/ML model is located in the UE, the final position information is from LMF, and/or the mode of positioning is assisted mode, i.e., the model in the UE assists the final calculation and output from LMF.
- the AI/ML model is located in the LMF, which is the same device that performs the final calculation and output of position information, and the mode of positioning is assisted mode, i.e., the measurements from the BS (e.g., NG-RAN) assist the positioning processing.
- the AI/ML may be adopted for beam management, wherein the main purposes are intended to predict the spatial-domain downlink (DL) beam or temporal DL beam.
- the data cleaning or data preprocessing is also needed for AI/ML technology for beam management.
- for data cleaning for AI positioning, separate embodiments are provided based on whether the data cleaning is handled in a UE, a BS, or another device (e.g., the LMF).
- the rules of the data preprocessing or data cleaning may be different, and/or the rules may be individually defined and configured.
- the application of the AI/ML model may collect input from the downlink measurements at the UE side (such as the examples in FIGS. 2A, 2B, 2C) or measurement at the BS side (such as the examples in FIGS. 2D and 2E) .
- the data for model input such as: the DL (downlink) RSTD (reference signal time difference) , UE Rx –Tx time difference, DL PRS RSRP (DL PRS Reference Signal Received Power) , DL RSRQ (Reference Signal Received Quality) , DL RSSI (Received Signal Strength Indicator) , SINR (Signal to Interference &Noise Ratio) , DL PRS-RSRPP (PRS Reference Signal Received Path Power) , DL-AOD (Angle-of-Departure) , DL RSCP (Received Signal Code Power) , can be generated from the measurements in the UE.
- the data for model input can also include the uplink measurements in BS such as UL RTOA (Relative Time of Arrival) , Timing advance, base station Rx-Tx Time Difference, UL (uplink) SRS-RSRP (Sounding Reference Signal Reference Signal Received Power) , UL SRS-RSRPP (Sounding Reference Signal Reference Signal Received Path Power) , UL AOA (Uplink Angle of Arrival) , and UL RSCP (Received Signal Code Power) , CIR (Channel Impulse Response) , PDP (Power Delay Profile) , or DP (Delay Profile) .
- the measurements can be based on the time domain, the power domain, and/or the angle or phase domain.
- the time domain measurements include the DL RSTD, the UE Rx –Tx time difference, and/or the time delay in the CIR/PDP/DP.
- the power domain measurements include the DL PRS RSRP, the DL RSRQ, the DL RSSI, the SINR, the DL PRS-RSRPP, and/or the power in the CIR/PDP.
- the phase/angle domain measurement includes the phase in the DL RSCP or the CIR, or the angle in the DL AOD.
- the AI/ML model may collect the input of labels from the UE, BS, and/or LMF.
- the AI/ML model may output the position of the UE (as shown in FIGS. 2A, 2C, and 2E) or the intermediate features/parameters (FIGS. 2B and 2D) for the next step of position calculation.
- the AI/ML is used for beam management too.
- the main purpose is to predict the spatial-domain DL beam or temporal DL beam.
- the beamforming can be a key technology in mobile communication systems. Through the beamforming, the RF energy is concentrated and propagated between the UE and the BS in a narrow direction.
- Beam management includes but is not limited to beam scanning, beam tracking, and beam recovery.
- the core problem to be solved includes how to obtain accurate beam pairs with the lowest possible resource overhead.
- the beam prediction function via the AI/ML model can be used to reduce the overhead of finding the beam pairs. That is, only the beam parameters corresponding to a subset of M beams are input into the AI model, and the beam parameters corresponding to N beams are predicted by the AI model as output. When M < N, the prediction can reduce the overhead.
- the N beams can include M beams.
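As a hedged illustration of the M-beams-in, N-beams-out prediction described above, the sketch below uses a small PyTorch network; the values of M, N, and the hidden size are arbitrary examples rather than values from this disclosure.

```python
import torch
import torch.nn as nn

M, N = 8, 32  # example values only: M measured beams used to predict N beams, M < N

# Minimal sketch: input the beam parameters (e.g., RSRP) of M measured beams and
# predict the beam parameters of all N beams; the N beams may include the M beams.
beam_predictor = nn.Sequential(
    nn.Linear(M, 64),
    nn.ReLU(),
    nn.Linear(64, N),
)

measured_rsrp = torch.randn(1, M)               # placeholder for M beam measurements
predicted_rsrp = beam_predictor(measured_rsrp)  # predicted parameters for N beams
best_beam_index = int(predicted_rsrp.argmax())  # e.g., choose the predicted strongest beam
```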
- the legacy positioning function, without or with AI/ML assistance, typically uses only the measurement at the current time instance to calculate, infer, or output the position or location of the UE. Due to the delay from the calculation or inference process, the real time instance of the final output position result is later than the time instance of the measurement.
- the delay causes the problem that the output position value may lag behind the actual position when the terminal moves at a very high speed. For example, if the measurement is provided at time T, the processing delay is 200 ms (milliseconds), and the UE moving speed is 100 m/s, then the position is output at T+200 ms and the actual position of the UE is at least 20 meters away from the output position.
- the performance of positioning for a high-speed moving UE should therefore be enhanced, and one possible way is position prediction for a future time instance based on AI/ML.
- the AI model can include a LSTM (Long Short-Term Memory) NN (neural network) , a GRU (Gate Recurrent Unit) NN, and/or other possible NN suitable for processing time sequence.
- the model input not only includes the measurement input of the current time instance but also includes one or more measurement values within the time window N, which precedes the current time instance T.
- the ground truth label for model training can also be generated from the location/position or intermediate parameters at a future time instance after time T.
- FIG. 3A shows exemplary model training according to some embodiments of this disclosure.
- FIG. 3B shows exemplary model inference according to some embodiments of this disclosure.
- the measurements within the time window N are the preceding measurement results in advance of time T.
- the definition of the time window N is not necessarily pre-configured, and the input within the window N can be indicated by a parameter from signaling.
- the indication can be a total number of measurement results, including those at time T and before time T.
- the ground truth label includes the value and parameter at time T+Δt, where Δt can be explicitly or implicitly indicated when the communication node collects the input for model training.
- Δt can be the processing time used for model inference or deduction of the UE position.
- the reference parameter and the value used for model training can be the value or parameter at time T plus the time used for model inference (Δt). If time T+Δt is not exactly an actual label collection time, the nearest actual label collection time can be selected as T+Δt.
- the relative or absolute time index/stamp may be the input of model for training and inference.
- the inputs of model are the measurements or ground truth label combined with the corresponding time index/stamp.
- the time index/stamp may be mandatorily reported with the measurements or the ground truth label when the function of future prediction is enabled.
- the time index/stamp of the input data is the index/stamp of the time at which the measurement or other action happens, whereas the measured time domain information is the actual measurement in the time domain, such as the DL RSTD and/or UL RTOA.
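The following PyTorch sketch is one possible shape for such a model: it consumes the measurements at time T plus those within the preceding window N, each coupled with a time index/stamp, and is trained against a label at T+Δt; the feature dimensions, layer sizes, and class name are assumptions made for illustration, not an implementation defined by this disclosure.

```python
import torch
import torch.nn as nn

class PositionPredictor(nn.Module):
    """Sketch of a time-sequence model (here an LSTM) that maps measurements taken at
    time T and within the preceding window N, each with a relative time index/stamp,
    to the position at time T + dt. Feature sizes are illustrative assumptions."""

    def __init__(self, feat_dim: int = 16, hidden: int = 64):
        super().__init__()
        # +1 input feature for the time index/stamp attached to each measurement
        self.lstm = nn.LSTM(feat_dim + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # e.g., a 2-D position (x, y)

    def forward(self, feats: torch.Tensor, t_idx: torch.Tensor) -> torch.Tensor:
        # feats: (batch, N+1, feat_dim) measurements within window N plus those at time T
        # t_idx: (batch, N+1, 1) relative time stamps of those measurements
        out, _ = self.lstm(torch.cat([feats, t_idx], dim=-1))
        return self.head(out[:, -1])  # position estimate for time T + dt

# Training sketch: the ground-truth label is the position at T + dt (dt being the
# processing/inference delay); if no label exists exactly at T + dt, the label
# collected nearest to that time would be used instead.
model, loss_fn = PositionPredictor(), nn.MSELoss()
```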
- the model input can also include the prediction of future measurement features.
- the DL RSTD or UL RTOA can be based on prediction of future values, instead of a real measurement.
- FIG. 4A shows exemplary model training according to some embodiments of this disclosure.
- FIG. 4B shows exemplary model inference according to some embodiments of this disclosure.
- the predicted values can also be the input for model training and model inference.
- the data for the ground truth label can include the measured or predicted value.
- the input for the AI model can be marked as a measurement input or a prediction input. That is, the input data can include a field specifying whether the data on DL RSTD, UL RTOA, CIR/PDP/DP, or other possible features are measurement values or prediction values. Additionally or alternatively, the time index/stamp coupled with the corresponding input can be marked as a measurement or prediction value. Additionally or alternatively, this time index/stamp field specifies the time instance at which the measurement or the prediction of the DL RSTD, UL RTOA, CIR/PDP/DP, or other possible features is performed. Additionally or alternatively, the input is marked as measurement or prediction directly, without a time index/stamp involved.
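One hypothetical way to lay out such an input record, with a flag distinguishing measurement values from prediction values and an attached time stamp, is sketched below; the field and feature names are assumptions, not a format defined by this disclosure.

```python
from dataclasses import dataclass

@dataclass
class ModelInputRecord:
    """Hypothetical layout of one model input; all field names are assumptions."""
    feature: str          # e.g., "DL-RSTD", "UL-RTOA", "CIR"
    value: float          # the reported value
    time_stamp: float     # time instance at which the value was measured or predicted
    is_prediction: bool   # False: measurement value, True: prediction value

inputs = [
    ModelInputRecord("DL-RSTD", -1.2e-6, time_stamp=10.0, is_prediction=False),
    ModelInputRecord("DL-RSTD", -1.1e-6, time_stamp=10.2, is_prediction=True),
]
```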
- the coupled label as the model input and the corresponding time index/stamp can be applied to deduce the terminal moving speed and moving direction.
- the moving speed can be calculated from the distance between two locations and the time offset between the two time stamps.
- the moving direction can also be deduced from the two locations.
- the moving speed can be a condition for function/model management, such as the operation of function/model update, switch, fallback, and other possible behaviors.
- the model training and inference based on one current time index/stamp is assumed as a baseline function.
- Model training and inference based on the future location/position prediction can be deemed to be another function or supplementary function.
- LCM: life cycle management.
- a moving speed threshold can be configured. When the moving speed exceeds the threshold, the supplementary function for future position prediction can be enabled.
- the baseline function can be disabled correspondingly. Likewise, when the speed drops below the threshold, the baseline function can be enabled again, while the supplementary function is disabled.
- the moving speed is actually the condition of function switching.
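A small sketch, under assumed units and an assumed threshold value, of how the moving speed and direction could be deduced from two time-stamped locations and then used for the baseline/supplementary function switching described above; the function names and the threshold are illustrative only.

```python
import math

def speed_and_heading(p1, t1, p2, t2):
    """Deduce moving speed (m/s) and direction (radians) from two (x, y) locations
    and their time stamps, as described above; meters/seconds are assumed units."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.hypot(dx, dy) / (t2 - t1), math.atan2(dy, dx)

def select_function(speed_mps: float, threshold_mps: float) -> str:
    """Sketch of the switching rule: above the threshold, enable the supplementary
    future-prediction function; below it, fall back to the baseline function."""
    return "supplementary_prediction" if speed_mps > threshold_mps else "baseline"

speed, heading = speed_and_heading((0.0, 0.0), 10.0, (4.0, 3.0), 10.5)  # 10 m/s
active = select_function(speed, threshold_mps=8.0)  # -> "supplementary_prediction"
```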
- the moving speed, the speed threshold, or the LCM decision for function/model management based on the speed can be used as assistance information to be signaled among the different entities, including the network or the UE.
- the assistance information can be shared between different entities. Several examples below show how the signaling is transferred among the entities.
- the entity that derives the metrics or gives the LCM decisions is assumed here to be the core network, such as the LMF (Location Management Function) in FIGS. 2A-2D. Yet, it can also be the BS or the UE that derives the metrics or gives the LCM decisions.
- the model training and inference are done in the UE side, and the metrics derivation, threshold setting or LCM management is in the LMF (in core network) .
- the metrics derivation, threshold setting or LCM management is therefore not in the same entity handling the model processing.
- the assistance information is therefore shared between the LMF and the UE. When the UE gets the moving speed information, the UE can determine whether to switch the model by itself.
- when the UE gets the speed threshold, if the UE can calculate the speed by itself, the UE determines the switching of the model according to the threshold rule, for example, by applying the supplementary model in case of higher speed, or applying the baseline model in case of lower speed. Likewise, when the UE gets the LCM decisions from the LMF, the UE should execute the order from the LCM.
- the model processing is in the UE side in FIG. 2B and in the BS side in FIG. 2D.
- the metrics derivation, threshold setting, or LCM management is in the LMF, one of the entities in the core network.
- the assistance information can be signaled between the LMF and the UE or between the LMF and the BS. For example, in FIG. 2D, if the metrics derivation is done in one LMF based on the outputs of several BSs rather than one, then the assistance information will be signaled to those several BSs.
- the moving speed is not the only condition for function switching; the moving direction and rotation can also be conditions for function switching, if these pieces of information can be derived.
- these conditions can be derived in the network (such as the LMF) or in the UE, and the network can have knowledge of the condition so that the network can decide on function switching. Alternatively, the UE can use these conditions to decide the function without intervention from the network.
- when the moving speed cannot be derived, the model management cannot be based on the speed. If the measurements within the time window N in FIGS. 3A, 3B, 4A, and 4B cannot be the input of the model, the future position prediction cannot be made. Some alternative solutions may be considered to assist the model management or the future position prediction.
- the UE can report the residence time in a small area, for example, if UE stays in a small room for a long time.
- the residence time can partially represent the status of the UE’s moving.
- the UE can report the residence time directly, but the direct residence time report may not be concise for a mobile system signaling.
- a graded report can be provided. For example, if residence time is less than 1s, the UE reports level 1; if residence time is between 1s and 2s, the UE reports level 2, and so on.
- the UE can report a True/False indication of long-time stay.
- the UE can trigger the report of 'True' for long-time stay.
- when the time of stay doesn't exceed the threshold of residence time, the UE may trigger the report of 'False' for long-time stay.
- the UE can also report the residence time by the time index/stamp.
- the time stamp is defined as, or attached to, a time duration field which represents the residence time.
- the time duration field can also be replaced by the condition indication of True/False of long-time stay.
- the threshold of residence time also needs to be indicated to the UE if the True/False report is adopted.
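A brief sketch of the two residence-time reporting options described above, assuming a 1-second grading granularity and an example threshold; both values are illustrative assumptions, not parameters defined by this disclosure.

```python
def residence_time_report(residence_s: float, threshold_s: float, graded: bool = True) -> dict:
    """Sketch of the two reporting options above: a graded level (assumed 1 s steps)
    or a True/False long-stay indication against the indicated threshold."""
    if graded:
        return {"level": int(residence_s) + 1}  # level 1 for < 1 s, level 2 for 1-2 s, ...
    return {"long_stay": residence_s >= threshold_s}

residence_time_report(1.4, threshold_s=5.0)                # -> {'level': 2}
residence_time_report(7.2, threshold_s=5.0, graded=False)  # -> {'long_stay': True}
```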
- the features for measurement include the time domain measurements, such as the DL RSTD, UL RTOA, and so on. Additionally or alternatively, the features for angle, phase, or beam measurements can also be considered as the model inputs for positioning inference or for the training of the model. The details are discussed below.
- the beam measurement has been supported in wireless communication systems to aid with the beam management.
- the downlink beam measurement can be based on DL (downlink) reference signals such as SS Blocks (SSB), a CSI-RS (Channel State Information Reference Signal), a DMRS (demodulation reference signal), or other possible signals for the beam measurement.
- the uplink beam measurement is based on UL (uplink) reference signals such as the SRS, RACH, DMRS, or other possible signals, for example, the downlink reference signal if the channel reciprocity is guaranteed.
- the characteristics that reflect or represent the beam are the beam metric parameters, including but not limited to parameters related to a reference signal, such as a resource ID, a resource set ID, a beam pair ID, an RSRP, an RSRQ, an SINR, a Beam Domain Receive Power Map (BDRPM), a Resource Indicator, an AOA (Angle of Arrival), a ZOA (zenith angle of arrival), an AOD (Angle of Departure), a ZOD (zenith angle of departure), and so on.
- These metric parameters can be regarded as the beam measurements in general.
- the beam measurement can be combined with the mentioned measurements for positioning, such as timing measurement, as an input for model training and inference.
- the measurements in the FIGS. 6A and 6B include time domain measurement and beam domain measurement.
- the beam information used as the input for the positioning AI model can be output from an AI model that makes the prediction of the beam information.
- the prediction of the beam information can also be used as the input of AI positioning model to further enhance the accuracy of positioning.
- the inputs of the AI model for positioning inference include the measurement values within the preceding time window, the measurements at time T, the predicted measurements at time T+Δt, and the beam information, which can be beam measurements at time T or predictions at time T+Δt.
- the ground truth label can be the parameter at time T+Δt, considering the processing time Δt.
- the measurements may include the DL (downlink) RSTD (reference signal time difference), UE Rx-Tx time difference, DL PRS RSRP (DL PRS Reference Signal Received Power), DL RSRQ (Reference Signal Received Quality), DL RSSI (Received Signal Strength Indicator), SINR (Signal to Interference & Noise Ratio), DL PRS-RSRPP (PRS Reference Signal Received Path Power), DL-AOD (Angle-of-Departure), DL RSCP (Received Signal Code Power), UL RTOA (Relative Time Of Arrival), Timing advance, Base station Rx-Tx Time Difference, UL SRS-RSRP (Sounding Reference Signal Reference Signal Received Power), UL SRS-RSRPP (Sounding Reference Signal Reference Signal Received Path Power), UL AOA, UL RSCP (Received Signal Code Power), CIR (Channel Impulse Response), PDP (Power Delay Profile), and/or DP (Delay Profile).
- the output of the AI beam prediction from the beam management model includes a reference signal resource ID, a reference signal resource set ID, a beam pair ID, an RSRP, an RSRQ, an SINR, a Beam Domain Receive Power Map (BDRPM), a Resource Indicator, an AOA, a ZOA, an AOD, a ZOD, and so on.
- the certainty or uncertainty of the prediction of the beam information can also be the output of the AI beam prediction model for further usage.
- the prediction of the beam can be represented by the reference signal resource ID.
- the certainty of the prediction of the reference signal resource ID is the probability or credibility that this ID is the true future reference signal resource ID.
- the certainty can be represented in a percentage format or by a grade.
- when the certainty is lower than a pre-configured threshold or is marked as false, this input into the AI positioning model is not reliable and may negatively affect the position prediction. In such a case, the predicted beam may be disregarded as model input for the positioning model. Additionally or alternatively, the positioning model can deprioritize the corresponding prediction input. Similarly, the time prediction as an AI positioning model input can apply a similar selection principle based on the corresponding certainty threshold or mark.
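A minimal sketch of the certainty-based selection described above: predicted inputs whose certainty falls below a pre-configured threshold are dropped, or alternatively deprioritized via a weight; the record fields and the threshold value are assumptions for illustration.

```python
def filter_predicted_inputs(inputs: list, certainty_threshold: float = 0.5) -> list:
    """Keep measurement inputs as-is; drop predicted inputs whose certainty is below
    the threshold, and weight the remaining predictions by their certainty so the
    positioning model can deprioritize them. Field names and threshold are examples."""
    kept = []
    for item in inputs:  # item: dict with "value", "is_prediction", optional "certainty"
        certainty = item.get("certainty", 1.0)
        if item.get("is_prediction", False) and certainty < certainty_threshold:
            continue  # unreliable prediction: disregard as positioning-model input
        weight = certainty if item.get("is_prediction", False) else 1.0
        kept.append({**item, "weight": weight})  # alternatively, deprioritize by weight
    return kept
```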
- the AI beam prediction would implicitly affect the AI positioning performance.
- the reference signal for positioning (such as the PRS and others) could be optimized according to the predicted beam, as the QCL (Quasi co-location) of the PRS is related to the QCL of the beam.
- AI beam management model and AI positioning model can be merged together as shown in FIGS. 8A and 8B.
- the inputs not only include the time and beam measurements at the current time T but also, optionally, the time and beam measurements within the history measurement time window.
- the time and beam predictions at time T+Δt can also be used as the input of the model training and model inference.
- the ground truth labels on positioning/location and beam measurements at a future time T+Δt can also be used.
- for inference, the inputs include the time and beam measurements at the current time and within the history measurement time window.
- the time and beam predictions at time T+Δt can also be used as the input of the model inference.
- the output of the combined model includes the positioning result or intermediate features for positioning, and the beam or intermediate features for the beam.
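One possible shape for such a merged model is a shared encoder with two output heads, one for the position (or intermediate positioning features) and one for the beam (or intermediate beam features); the PyTorch sketch below uses illustrative sizes that are not taken from this disclosure.

```python
import torch
import torch.nn as nn

class MergedPositioningBeamModel(nn.Module):
    """Sketch of a merged model: a shared encoder over concatenated timing and beam
    inputs, a head for the position (or intermediate positioning features), and a
    head for the beam (or intermediate beam features). Sizes are illustrative."""

    def __init__(self, in_dim: int = 32, hidden: int = 64, num_beams: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.position_head = nn.Linear(hidden, 2)      # e.g., (x, y) position
        self.beam_head = nn.Linear(hidden, num_beams)  # e.g., per-beam RSRP or score

    def forward(self, x: torch.Tensor):
        h = self.encoder(x)
        return self.position_head(h), self.beam_head(h)
```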
- the merged AI model in FIGS. 8A and 8B can include the combination of two virtually separate models for AI positioning and beam management.
- outputting the dependency of the outputs on the inputs can help the management entity judge which part of the input should be adjusted or which part of the model parameters can be adjusted or retrained.
- the dependency can be evaluated by a probability of the output depending on the input.
- there are two kinds of inputs (time and beam measurements) and two kinds of outputs (position and beam, or the possibly related intermediate features). There are then four dependencies: position on time measurement, position on beam measurement, beam on time measurement, and beam on beam measurement.
- the monitoring or decision entity can guide the AI model to adjust the model features like model map, parameters, input or others based on the dependencies.
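This disclosure does not specify how the dependency is computed; as a stand-in, the sketch below estimates it by shuffling one input group at a time (a permutation-style sensitivity check) and measuring how much each output changes, which could be applied, for example, to the merged model sketched above. The group names and column slices are assumptions.

```python
import torch

def output_input_dependency(model, x: torch.Tensor, input_groups: dict, n_trials: int = 20):
    """Estimate how strongly each output (position, beam) depends on each input group
    (e.g., time measurements vs. beam measurements) by shuffling one group across the
    batch and averaging the resulting change in each output."""
    with torch.no_grad():
        base_pos, base_beam = model(x)
        deps = {}
        for name, cols in input_groups.items():  # e.g., {"time_meas": slice(0, 16), ...}
            d_pos = d_beam = 0.0
            for _ in range(n_trials):
                xp = x.clone()
                xp[:, cols] = xp[torch.randperm(x.shape[0]), cols]  # break this group's link
                pos, beam = model(xp)
                d_pos += (pos - base_pos).abs().mean().item()
                d_beam += (beam - base_beam).abs().mean().item()
            deps[("position", name)] = d_pos / n_trials
            deps[("beam", name)] = d_beam / n_trials
    return deps

# Example use with the merged model sketched earlier (column layout is an assumption):
# deps = output_input_dependency(MergedPositioningBeamModel(), torch.randn(64, 32),
#                                {"time_meas": slice(0, 16), "beam_meas": slice(16, 32)})
```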
- assistance information about the guidance is needed, for example, signaling from the LMF as the decision entity to the gNB as the model entity.
- model integration is not limited to the above AI positioning and AI beam management.
- Many models can be considered to be merged together, for example, the AI mobility model and AI positioning model, or AI mobility model and AI beam management.
- FIGS. 9A-9C together illustrate a block diagram of an exemplary wireless communication system 20, in accordance with some embodiments of this disclosure.
- the system 20 may perform the methods/steps and their combinations or sub-combinations disclosed in this disclosure.
- the system 20 may include components and elements configured to support operating features that need not be described in detail herein.
- the system 20 may include at least one base station (BS) 110 (or a RAN node) , at least one user equipment (UE) 120, and at least one core network (CN) 130, including the NF, RNDF, AF, AMF, CREF, or other functions.
- the BS 110 includes a BS transceiver or transceiver module/circuitry 112, a BS antenna system 116, a BS memory or memory module/circuitry 114, a BS processor or processor module/circuitry 113, and a network interface 111.
- the components of BS 110 may be electrically coupled and in communication with one another as necessary via a data communication bus 190.
- the UE 120 includes a UE transceiver or transceiver module/circuitry 122, a UE antenna system 126, a UE memory or memory module/circuitry 124, a UE processor or processor module/circuitry 123, and an I/O interface 121.
- the components of the UE 120 may be electrically coupled and in communication with one another as necessary via a data communication bus 190.
- the UE 120 communicates with the one or more BSs 110 via communication channels there between, which can be any wireless channel or other medium known in the art suitable for transmission of data as described herein.
- the CN 130 includes at least one CN transceiver or transceiver module/circuitry 132, at least one CN antenna system 136, at least one CN memory or memory module/circuitry 134, at least one CN processor or processor module/circuitry 133, and at least one network interface 131.
- the CN can be formed by a distributed system, including multiple devices 130.
- the components of the CN 130 may be electrically coupled and in communication with one another as necessary via a data communication bus 190.
- the CN can communicate with one or more application servers and one or more base stations (RAN node) via wired or wireless communication.
- the processor module/circuitry 113, 123, 133 may be implemented, or realized, with a general-purpose processor, a content addressable memory, a digital signal processor, an application specific integrated circuit, a field programmable gate array, any suitable programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof, designed to perform the functions described herein.
- a processor module/circuitry may be realized as a microprocessor, a controller, a microcontroller, a state machine, or the like.
- a processor module/circuitry may also be implemented as a combination of computing devices, e.g., a combination of a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other such configuration.
- the steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in firmware, in a software module performed by the processor module/circuitry 113, 123, 133, respectively, or in any practical combination thereof.
- the memory module/circuitry 114, 124, 134 may be realized as RAM memory, flash memory, EEPROM memory, registers, ROM memory, EPROM memory, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
- the memory module/circuitry 114, 124, 134 may be coupled to the processor module/circuitry 113, 123, 133, respectively, such that the processor module/circuitry 113, 123, 133 can read information from, and write information to, the memory module/circuitry 114, 124, 134, respectively.
- the memory module/circuitry 114, 124, 134 may also be integrated into their respective processor module/circuitry 113, 123, 133.
- the memory module/circuitry 114, 124, 134 may each include a cache memory for storing temporary variables or other intermediate information during execution of instructions to be performed by the processor module/circuitry 113, 123, 133 respectively.
- the memory module/circuitry 114, 124, 134 may also each include non-volatile memory for storing instructions to be performed by the processor module/circuitry 113, 123, 133, respectively.
- circuitry that includes an instruction processor or controller, such as a Central Processing Unit (CPU) , microcontroller, or a microprocessor; or as an Application Specific Integrated Circuit (ASIC) , Programmable Logic Device (PLD) , or Field Programmable Gate Array (FPGA) ; or as circuitry that includes discrete logic or other circuit components, including analog circuit components, digital circuit components or both; or any combination thereof.
- the circuitry may include discrete interconnected hardware components or may be combined on a single integrated circuit die, distributed among multiple integrated circuit dies, or implemented in a Multiple Chip Module (MCM) of multiple integrated circuit dies in a common package, as examples.
- the circuitry may store or access instructions for execution, or may implement its functionality in hardware alone.
- the instructions may be stored in a tangible storage medium that is other than a transitory signal, such as a flash memory, a Random Access Memory (RAM) , a Read Only Memory (ROM) , an Erasable Programmable Read Only Memory (EPROM) ; or on a magnetic or optical disc, such as a Compact Disc Read Only Memory (CDROM) , Hard Disk Drive (HDD) , or other magnetic or optical disk; or in or on another machine-readable medium.
- a product such as a computer program product, may include a storage medium and instructions stored in or on the medium, and the instructions when performed by the circuitry in a device may cause the device to implement any of the processing described above or illustrated in the drawings.
- the circuitry may include multiple distinct system components, such as multiple processors and memories, and may span multiple distributed processing systems.
- Parameters, databases, and other data structures may be separately stored and managed, may be incorporated into a single memory or database, may be logically and physically organized in many different ways, and may be implemented in many different ways.
- Example implementations include linked lists, program variables, hash tables, arrays, records (e.g., database records) , objects, and implicit storage mechanisms. Instructions may form parts (e.g., subroutines or other code sections) of a single program, may form multiple separate programs, may be distributed across multiple memories and processors, and may be implemented in many different ways.
- Example implementations include stand-alone programs, and as part of a library, such as a shared library like a Dynamic Link Library (DLL) .
- the library may contain shared data and one or more shared programs that include instructions that perform any of the processing described above or illustrated in the drawings, when performed by the circuitry.
- each unit, subunit, and/or module of the system may include a logical component.
- Each logical component may be hardware or a combination of hardware and software.
- each logical component may include an application specific integrated circuit (ASIC) , a Field Programmable Gate Array (FPGA) , a digital logic circuit, an analog circuit, a combination of discrete circuits, gates, or any other type of hardware or combination thereof.
- each logical component may include memory hardware, such as a portion of the memory, for example, that includes instructions executable with the processor or other processors to implement one or more of the features of the logical components.
- each logical component may or may not include the processor.
- each logical component may just be the portion of the memory or other physical memory that includes instructions executable with the processor or other processor to implement the features of the corresponding logical component without the logical component including any other hardware. Because each logical component includes at least some hardware even when the included hardware includes software, each logical component may be interchangeably referred to as a hardware logical component.
- a second action may be said to be “in response to” a first action independent of whether the second action results directly or indirectly from the first action.
- the second action may occur at a substantially later time than the first action and still be in response to the first action.
- the second action may be said to be in response to the first action even if intervening actions take place between the first action and the second action, and even if one or more of the intervening actions directly cause the second action to be performed.
- a second action may be in response to a first action if the first action sets a flag and a third action later initiates the second action whenever the flag is set.
- the phrases "at least one of <A>, <B>, ... and <N>" or "at least one of <A>, <B>, ... <N>, or combinations thereof" or "<A>, <B>, ... and/or <N>" or "at least one of <A>, <B>, ... or <N>" are defined by the Applicant in the broadest sense, superseding any other implied definitions hereinbefore or hereinafter unless expressly asserted by the Applicant to the contrary, to mean one or more elements selected from the group comprising A, B, ... and N.
- the phrases mean any combination of one or more of the elements A, B, ...or N including any one element alone or the one element in combination with one or more of the other elements which may also include, in combination, additional elements not listed.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
A wireless communication method includes providing a first function; providing first data of time T1 and at least one of second data of time T2 and third data of time T3 to the first function, wherein time T2 precedes time T1 and time T3 follows time T1; and performing inference by the first function or performing training of the first function according to the first data and at least one of the second data or the third data to generate a result. Another method includes providing a first function; providing data of beam features to the first function; and performing inference by the first function or performing training of the first function according to the data to obtain a result, the result including a positioning result or an intermediate result.
Description
This disclosure is generally related to wireless communication, and more particularly to positioning prediction and/or beam management.
Artificial intelligence includes devices, components, software, and modules that have self-learning capabilities, such as machine learning (ML), deep learning, reinforcement learning, transfer learning, deep reinforcement learning, and meta-learning. In some cases, artificial intelligence is implemented by using an artificial intelligence network (also referred to as a neural network). The neural network includes multiple layers, and each layer includes at least one node. Typically, the neural network includes an input layer, an output layer, and at least one hidden layer. Each layer of the neural network includes, but is not limited to, at least one of: a fully connected (full connection) layer, a dense layer, a convolutional layer, a transposed convolutional layer, a direct connection layer, an activation function, a normalization layer, or a pooling layer. In some other cases, each layer of the neural network may include one sub-neural network, such as a residual network (ResNet) block, a dense network (DenseNet) block, or a recurrent neural network (RNN).
The artificial intelligence network includes a neural network model and/or a neural network parameter corresponding to the neural network model. The neural network model may be referred to as a network model in short, and the neural network parameter may be referred to as a network parameter in short. A network model may define an architecture of a network, such as the quantity of layers of the neural network, the size of each layer, an activation function, a link status, a convolution kernel size and convolution stride, and a convolution type (for example, 1D convolution, 2D convolution, 3D convolution, dilated (hollow) convolution, transposed convolution, divided convolution, grouped (packet) convolution, or extended convolution).
A network parameter is a weight and/or an offset (bias) of each layer in the network model, together with the value of that weight and/or offset. One network model may correspond to a plurality of different sets of neural network parameter values to adapt to different scenarios. The value of the network parameter may be obtained through offline training and/or online training. For example, the neural network parameter is obtained by training the neural network model by inputting at least one sample and a label.
AI/ML (Artificial Intelligence/Machine Learning) is a promising enhancement direction for a mobile communication system. With the introduction of AI/ML technology into the mobile communication system, the system operating efficiency is expected to be improved, for example, by reducing the overhead of reference signals via AI/ML inference and prediction. The accuracy of terminal positioning is also expected to be enhanced. Other benefits are expected from adopting AI/ML into, or fully merging it into, the mobile communication system.
For a communication system with AI/ML technology, an AI/ML model is adopted. Throughout this application, 'model' is used as a general term describing that a device in a mobile system is capable of performing a processing method, a functionality, a feature, or a feature group. A 'model' can be a functionality, function, functionality module, function module, processing method, information processing method, implementation, feature, feature group, configuration, configuration set, dataset (e.g., for model training), or data-driven algorithm.
This summary is a brief description of certain aspects of this disclosure. It is not intended to limit the scope of this disclosure.
According to some embodiments of this disclosure, a wireless communication method is disclosed. The method includes providing a first function; providing first data of time T1 and at least one of second data of time T2 and third data of time T3 to the first function, wherein time T2 precedes time T1 and time T3 follows time T1; and performing inference by the first function or performing training of the first function according to the first data and at least one of the second data or the third data to generate a result.
According to some embodiments of this disclosure, another wireless communication method is disclosed. The method includes providing a first function; providing data of beam features to the first function; and performing inference by the first function or performing training of the first function according to the data to obtain a result, the result including a positioning result or an intermediate result.
Still another embodiment of this disclosure provides a wireless communication apparatus, including one or more memory units storing one or more programs and one or more processors electrically coupled to the one or more memory units and configured to execute the one or more programs to perform any method or step or their combinations in this disclosure.
Still another embodiment of this disclosure provides a non-transitory computer-readable storage medium storing one or more programs, the one or more programs being configured to, when executed by at least one processor, cause the at least one processor to perform any method or step or their combinations in this disclosure.
According to some embodiments of this disclosure, one or more wireless communication methods are further disclosed, the methods include combinations of certain methods, aspects, elements, and steps (either in a generic view or specific view) disclosed in the various embodiments of this disclosure.
The above and other aspects and their implementations are described in greater detail in the drawings, the descriptions, and the claims.
Various exemplary embodiments of the present disclosure are described in detail below with reference to the following drawings. The drawings are provided for purposes of illustration only and merely depict exemplary embodiments of the present disclosure to facilitate the understanding of the present disclosure. Therefore, the drawings should not be considered as limiting of the breadth, scope, or applicability of the present disclosure. It should be noted that for clarity and ease of illustration these drawings are not necessarily drawn to scale.
FIG. 1 shows a schematic diagram of various embodiments in the present disclosure.
FIG. 2A shows a schematic diagram of an exemplary embodiment in the present disclosure.
FIG. 2B shows a schematic diagram of another exemplary embodiment in the present disclosure.
FIG. 2C shows a schematic diagram of another exemplary embodiment in the present disclosure.
FIG. 2D shows a schematic diagram of another exemplary embodiment in the present disclosure.
FIG. 2E shows a schematic diagram of another exemplary embodiment in the present disclosure.
FIGS. 3A and 3B show a diagram of an AI positioning model for training and inference with historical data.
FIGS. 4A and 4B show a diagram of an AI positioning model for training and inference with historical data and prediction data.
FIG. 5 shows a diagram of sharing of assistance information.
FIGS. 6A and 6B show a diagram of an AI positioning model for training and inference with historical data of timing measurements and beam measurements.
FIGS. 7A and 7B show a diagram of an AI positioning model for training and inference with historical data and prediction data of timing measurements and beam measurements.
FIGS. 8A and 8B show a diagram of an AI positioning and beam management model for training and inference with historical data and prediction data of timing measurements and beam measurements.
FIGs. 9A-9C together illustrate a block diagram of an exemplary wireless communication system.
FIG. 1 shows an exemplary schematic for a basic AI/ML (Artificial Intelligence/Machine Learning) framework used in communication systems. The general framework may
include a data collection 110, a model training 120, a management 130, an inference 140, and/or a model storage 150.
Exemplarily, the data collection includes a data collector. The data collection is a function that provides input data to other parts of the framework, such as the model training, the management, and the inference functions. The data used as the input for the AI/ML model training function may include training data. The data used as the input for the management of AI/ML models or AI/ML functionalities includes monitoring data. The data used as the input for the AI/ML inference function includes inference data. The data collection can provide data preparation including data pre-processing and cleaning, formatting, and transformation. Additionally or alternatively, a data generation function may be performed in advance of the data collection function in some devices and may provide the measurement data or other data to the data collection function.
The model training may include a model trainer and is a function that performs AI/ML model training, validation, and testing. The model training function may also be responsible for data pre-processing and cleaning, formatting, and transformation based on training data delivered by the data collection function when the above data preparation work is not done in the data collection function. Trained, validated, and tested AI/ML models are delivered to the model storage function.
Additionally or alternatively, the management is a function that oversees the operation (e.g., selection/(de)activation/switching/fallback) and monitoring (e.g., performance) of AI/ML models or AI/ML functionalities. This function is also responsible for making decisions to ensure proper inference operation based on data received from the data collection function and the inference function. Management instructions may include the information needed as input to manage the inference function. A model transfer/delivery request may be used to request model(s) from the model storage function. A performance feedback/retraining request includes the information needed as input for the model training function, e.g., for model (re)training or updating purposes.
Additionally or alternatively, the inference is a function that provides outputs from the process of applying AI/ML models or AI/ML functionalities, using the data provided by the data collection function (i.e., inference data) as an input. The inference function is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on inference data delivered by a data collection function, when the above data preparation work is not done in the data collection function. The inference output is the final output of the whole AI/ML model or functionality to be used by other parts of the mobile system. The output data can also be used internally by the management function to monitor the performance of AI/ML models or AI/ML functionalities.
Additionally or alternatively, the model storage is a function responsible for storing trained/updated models that can be used to perform the inference function.
In some implementations, data cleaning is a process of preparing data for analysis by modifying, adding, or deleting data. This process is also referred to as data preprocessing. The quality of the data after pre-processing may directly affect all the conclusions and opinions obtained from the data or the trained model. Most real-life data has many aspects that require preprocessing, such as missing values and non-informative features, so the data may need to be cleaned before being used in order to achieve the best results. In some implementations, typical preprocessing methods include handling missing values, encoding categorical features, outlier detection, and transformations.
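As a non-limiting illustration of such preprocessing, the following minimal Python sketch handles missing values, encodes categorical features, removes outliers, and applies a normalizing transformation; the pandas-based implementation and the hypothetical 'cell_id' column are assumptions for illustration only.

```python
import pandas as pd

def clean_measurements(df: pd.DataFrame) -> pd.DataFrame:
    """Minimal data-cleaning sketch: categorical encoding, missing values,
    simple outlier removal, and a normalizing transformation."""
    df = df.copy()

    # Encode categorical (string) features, e.g., a hypothetical 'cell_id' column.
    cat_cols = df.select_dtypes(include=["object", "category"]).columns
    df = pd.get_dummies(df, columns=list(cat_cols))

    # Handle missing values: fill numeric gaps with the column median.
    num_cols = df.select_dtypes(include="number").columns
    df[num_cols] = df[num_cols].fillna(df[num_cols].median())

    # Outlier detection via the inter-quartile range.
    q1, q3 = df[num_cols].quantile(0.25), df[num_cols].quantile(0.75)
    iqr = q3 - q1
    keep = ~((df[num_cols] < q1 - 1.5 * iqr) | (df[num_cols] > q3 + 1.5 * iqr)).any(axis=1)
    df = df[keep]

    # Transformation: zero-mean / unit-variance scaling of numeric columns.
    df[num_cols] = (df[num_cols] - df[num_cols].mean()) / df[num_cols].std()
    return df
```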
Additionally or alternatively, considering the framework of AI/ML used in a communication system, after the data generation, the data cleaning is preferably handled in the data collection function. The data after cleaning may be used as input to the functions of the model training, inference, and/or management. In some implementations, alternatively, the data cleaning may also be separately located in the functions of the model training, inference, and/or management when the data collection function doesn't provide the data after cleaning. In some implementations, the data cleaning may also be located in the function of data generation, which is before the data collection function.
Additionally or alternatively, for mobile communication systems, most of the data for training, inference, and management comes from measurements in a base station (BS) or user equipment (UE), for example, a received signal strength indicator (RSSI), a reference signal received power (RSRP), a reference signal received quality (RSRQ), and a signal to interference and noise ratio (SINR); some data may come from device-specific data of the device in the mobile system, for example, the location information of the UE. Other data sources are not precluded. The measurement results may be generated in the data generation function or be directly forwarded for data collection and cleaned before training, inference, and management. When the measurements occur in the BS or UE, the data cleaning is typically handled in the BS or UE correspondingly, together with the measurements, data generation, or data collection. When the training, inference, or management also occurs in the BS or UE, it can be expected that the data cleaning is handled in the BS or UE with the data generation, data collection, training, inference, or management. When the training, inference, or management doesn't occur in the BS or UE but in another device, e.g., the core network (CN) or OAM (operations, administration, and maintenance), it is possible that the data cleaning is handled in that other device. In some implementations, the PRU (positioning reference unit) is a kind of special UE that has all the functions of a normal UE and a known position. In the description below, the term UE may be replaced by (or include) the PRU.
Additionally or alternatively, various embodiments/implementations in the present disclosure may use position/location as non-limiting examples, and may be applicable to other physical or environmental parameters. The AI/ML is intended to enhance the accuracy of UE positioning. Depending on the device that performs model training/inference, whether the positioning mode is direct or assisted, and the device that outputs the final UE location, the AI/ML positioning can be described by a portion or all of the following cases. The location management function (LMF) is a new entity that is central in the 5G positioning architecture. The core network may include the LMF, which receives measurements and assistance information from a base station (BS) (e.g., a next generation radio access network (NG-RAN) node) and/or the mobile device (or user equipment (UE)) to compute the position of the UE. In some implementations, the communication to/from the LMF may be via an access and mobility management function (AMF) over the NLs interface.
The application in FIG. 2A includes UE-based positioning with a UE-side model, direct AI/ML or AI/ML assisted positioning for model training 210, and/or model inference 215. Alternatively or additionally, the application in FIG. 2B includes UE-assisted/LMF-based positioning with a UE-side model, AI/ML assisted positioning for model training 220, and/or model inference 225. Alternatively or additionally, the application in FIG. 2C includes UE-assisted/LMF-based positioning with an LMF-side model, direct AI/ML positioning for model training 230, and/or model inference 235. Alternatively or additionally, the application in FIG. 2D includes NG-RAN node assisted positioning with a gNB-side model, AI/ML assisted positioning for model training 240, and/or model inference 245. Alternatively or additionally, the application in FIG. 2E includes NG-RAN node assisted positioning with an LMF-side model, direct AI/ML positioning for model training 250, and/or model inference 255.
In some implementations, for example in FIG. 2B, the AI/ML model is located in the UE, the final position information is from the LMF, and/or the mode of positioning is the assisted mode, i.e., the model in the UE assists the final calculation and output from the LMF. For another example, in the case of FIG. 2E, the AI/ML model is located in the LMF, the device with the model is the same as the device for the final calculation and output of the position information, and the mode of positioning is the assisted mode, i.e., the measurements from the BS (e.g., NG-RAN) assist the positioning processing. In various embodiments, the AI/ML may be adopted for beam management, where the main purpose is to predict the spatial-domain downlink (DL) beam or the temporal DL beam. Data cleaning or data preprocessing is also needed for AI/ML technology for beam management. In various embodiments of data cleaning for AI positioning, separate embodiments are provided based on whether the data cleaning is handled in a UE, a BS, or another device (e.g., the LMF). In some implementations, as the intentions of the functions of training, inference, and monitoring are different, the rules of the data preprocessing or data cleaning may be different, and/or the rules may be individually defined and configured.
In the above examples, the application of the AI/ML model may collect input from the downlink measurements at the UE side (such as the examples in FIGS. 2A, 2B, and 2C) or the measurements at the BS side (such as the examples in FIGS. 2D and 2E). The data for the model input, such as the DL (downlink) RSTD (reference signal time difference), UE Rx-Tx time difference, DL PRS RSRP (DL PRS Reference Signal Received Power), DL RSRQ (Reference Signal Received Quality), DL RSSI (Received Signal Strength Indicator), SINR (Signal to Interference & Noise Ratio), DL PRS-RSRPP (PRS Reference Signal Received Path Power), DL-AOD (Angle-of-Departure), and DL RSCP (Received Signal Code Power), can be generated from the measurements in the UE. The data for the model input can also include the uplink measurements in the BS such as UL RTOA (Relative Time of Arrival), Timing advance, base station Rx-Tx Time Difference, UL (uplink) SRS-RSRP (Sounding Reference Signal Reference Signal Received Power), UL SRS-RSRPP (Sounding Reference Signal Reference Signal Received Path Power), UL AOA (Uplink Angle of Arrival), UL RSCP (Received Signal Code Power), CIR (Channel Impulse Response), PDP (Power Delay Profile), or DP (Delay Profile).
In general, the measurements can be based on the time domain, the power domain, and/or the angle or phase domain. For example, the time domain measurements include the DL RSTD, the UE Rx-Tx time difference, and/or the time delay in the CIR/PDP/DP. The power domain measurements include the DL PRS RSRP, the DL RSRQ, the DL RSSI, the SINR, the DL PRS-RSRPP, and/or the power in the CIR/PDP. The phase/angle domain measurements include the phase in the DL RSCP or the CIR, or the angle in the DL AOD.
Additionally or alternatively, the AI/ML model may collect the input of labels from the UE, BS, and/or LMF. The AI/ML model may output the position of the UE (as shown in FIGS. 2A, 2C, and 2E) or the intermediate features/parameters (FIGS. 2B and 2D) for the next step of position calculation.
Additionally or alternatively, the AI/ML can be used for beam management too. The main purpose is to predict the spatial-domain DL beam or the temporal DL beam. Beamforming can be a key technology in mobile communication systems. Through beamforming, the RF energy is concentrated and propagated between the UE and the BS in a narrow direction.
Beam management includes but is not limited to beam scanning, beam tracking, and beam recovery. The core problem to be solved includes how to obtain accurate beam pairs using the lowest possible resource overhead. The beam prediction function via the AI/ML model can be used to reduce the overhead of finding the beam pairs. That is, only the beam parameters corresponding to a portion of the beams, M beams, are input into the AI model, and the beam parameters corresponding to N beams are predicted by the AI model as output. When M<N, the prediction can reduce the overhead. In a special case, the N beams can include the M beams.
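As a non-limiting illustration of this overhead reduction, the following minimal Python sketch maps the measured parameters of M beams to predictions for N beams; the fully connected architecture, the torch-based implementation, and the values M=8 and N=32 are assumptions for illustration only, not a prescribed model.

```python
import torch
import torch.nn as nn

M, N = 8, 32  # assumed sizes: 8 measured beams are used to predict 32 beams (M < N)

# Minimal sketch of a beam-prediction model: RSRP of M measured beams in,
# predicted RSRP of N beams out, reducing measurement overhead when M < N.
beam_predictor = nn.Sequential(
    nn.Linear(M, 64),
    nn.ReLU(),
    nn.Linear(64, N),
)

measured_rsrp = torch.randn(1, M)               # placeholder measurements for M beams
predicted_rsrp = beam_predictor(measured_rsrp)  # predictions for all N beams
best_beam = int(predicted_rsrp.argmax(dim=-1))  # candidate beam selection
```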
The legacy positioning function, without or with AI/ML assistance, typically uses only the measurement at the current time instance to calculate, infer, or output the position or location of the UE based on the measurement corresponding to that current time instance. Due to the delay of the calculation or inference process, the real time instance of the output final position result is later than the time instance of the measurement. The delay causes the problem that the output position value may lag behind the actual position when the terminal moves at a very high speed. For example, if the measurement is provided at time T, the processing delay is 200 ms (milliseconds), and the UE moving speed is 100 m/s, then the position is output at T+200 ms, and the actual position of the UE is at least 20 meters away from the output position. The performance of positioning for a high-speed moving UE should therefore be enhanced, and one possible way is position prediction for a future time instance based on AI/ML. The AI model can include an LSTM (Long Short-Term Memory) NN (neural network), a GRU (Gate Recurrent Unit) NN, and/or other possible NNs suitable for processing time sequences.
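As a non-limiting illustration, the following Python sketch reproduces the lag arithmetic above and outlines an LSTM-based predictor over a measurement time sequence; the feature dimension, hidden size, and 2-D position output are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Worked example from the text: 200 ms processing delay at a speed of 100 m/s.
delay_s, speed_mps = 0.2, 100.0
lag_m = speed_mps * delay_s   # = 20 meters of lag without prediction

class PositionPredictor(nn.Module):
    """Sketch of an LSTM over a sequence of measurements that outputs a
    predicted 2-D position for a future time instance T + delta_t."""
    def __init__(self, feat_dim: int = 16, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # (x, y) position

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(seq)            # seq: [batch, time, feat_dim]
        return self.head(out[:, -1])       # predict from the last hidden state

pred = PositionPredictor()(torch.randn(1, 10, 16))  # 10 past time instances
```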
Use of Preceding Inputs
Some embodiments in this disclosure are different from the above common positioning function. In the AI/ML positioning model training and model inference, the model input not only includes the measurement input of the current time instance but also includes one or more measurement values within a time window N, which precedes the current time instance T. The ground truth label for model training can also be generated from the location/position or intermediate parameters at a future time instance after time T. FIG. 3A shows exemplary model training according to some embodiments of this disclosure. FIG. 3B shows exemplary model inference according to some embodiments of this disclosure. In FIGS. 3A and 3B, the measurements within the time window N are the preceding measurement results in advance of time T. The definition of the time window N does not necessarily need to be pre-configured, and the input within the window N can be indicated by a parameter from signaling. The indication can be a total number of measurement results, including those at time T and before time T.
The ground truth label includes the value and parameter at time T+Δt, where Δt can be explicitly or implicitly indicated when the communication node collects the input for model training. Δt can be the processing time used for model inference or deduction of the UE position. The reference parameter and the value used for model training can be the value or parameter at the time T plus the time used for model inference (Δt). If the time T+Δt is not exactly an actual label collection time, the nearest actual label collection time can be selected as T+Δt.
As the model training for future prediction heavily depends on the time index/stamp, the relative or absolute time index/stamp may be an input of the model for training and inference. Typically, the inputs of the model are the measurements or ground truth labels combined with the corresponding time index/stamp. The time index/stamp may be required to be reported with the measurements or ground truth label when the function of future prediction is enabled. The time index/stamp of the input data is the index/stamp of the time when the measurement or other action happens, whereas the measured time domain information is the actual measurement in the time domain, such as the DL RSTD and/or UL RTOA.
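As a non-limiting illustration of assembling one training sample from time-stamped data, the following Python sketch collects the measurements within the window N up to time T and selects the ground truth label at, or nearest to, time T+Δt; the array layout and variable names are assumptions for illustration only.

```python
import numpy as np

def build_sample(meas_t, meas_x, label_t, label_xy, T, window, delta_t):
    """Sketch: assemble one training sample from time-stamped measurements.
    meas_t/meas_x hold measurement time stamps and feature vectors;
    label_t/label_xy hold ground-truth time stamps and positions."""
    meas_t, label_t = np.asarray(meas_t), np.asarray(label_t)

    # Model input: measurements at time T and within the preceding window N,
    # each coupled with its time index/stamp.
    in_idx = np.where((meas_t <= T) & (meas_t >= T - window))[0]
    model_input = [(meas_t[i], meas_x[i]) for i in in_idx]

    # Ground-truth label at T + delta_t; if no label exists exactly at that
    # time, the nearest actual label collection time is selected instead.
    j = int(np.argmin(np.abs(label_t - (T + delta_t))))
    label = (label_t[j], label_xy[j])
    return model_input, label
```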
Use of Predicted Input
Additionally or alternatively, besides the real measurement at time T (as well as the measurements in the window N) as the model input, the model input can also include predictions of future measurement features. For example, the DL RSTD or UL RTOA can be based on a prediction of future values instead of a real measurement. FIG. 4A shows exemplary model training according to some embodiments of this disclosure. FIG. 4B shows exemplary model inference according to some embodiments of this disclosure. In FIG. 4A and FIG. 4B, the predicted values can also be the input for model training and model inference. Also, the data for the ground truth label can include measured or predicted values.
The input for the AI model can be marked as measurement input or prediction input. That is, the input data can include a field specifying whether the data on the DL RSTD, UL RTOA, CIR/PDP/DP, or other possible features are measurement values or prediction values. Additionally or alternatively, the time index/stamp coupled with the corresponding input can be marked as a measurement or prediction value. Additionally or alternatively, this time index/stamp field specifies, for the DL RSTD, UL RTOA, CIR/PDP/DP, or other possible features, the time instance at which the measurement or the prediction is performed. Additionally or alternatively, the input is marked as measurement or prediction directly, without a time index/stamp involved.
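As a non-limiting illustration of such marking, the following Python sketch defines one input record carrying a feature value, its time index/stamp, and a flag indicating measurement or prediction; the field names are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ModelInputRecord:
    """Sketch of one model-input record carrying the marking discussed above.
    Field names are illustrative only."""
    feature: str          # e.g., "DL-RSTD", "UL-RTOA", "CIR"
    value: float          # measured or predicted value
    time_stamp: float     # time instance of the measurement or prediction
    is_prediction: bool   # True if the value (and its time stamp) is predicted

record = ModelInputRecord(feature="DL-RSTD", value=1.3e-6,
                          time_stamp=12.40, is_prediction=True)
```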
As explained above, the coupled label as the model input and the corresponding time index/stamp can be applied to deduce the terminal moving speed and moving direction. For example, if the label is a terminal location, the moving speed can be calculated from the distance between two locations and the time offset between the two time stamps. The moving direction can also be deduced from the two locations.
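As a non-limiting illustration, the following Python sketch derives the moving speed and moving direction from two labelled locations and their time stamps; the planar (x, y) coordinate representation is an assumption for illustration only.

```python
import math

def speed_and_heading(p1, t1, p2, t2):
    """Sketch: derive moving speed and direction from two labelled
    locations (x, y) and their time stamps t1 < t2."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dt = t2 - t1
    speed = math.hypot(dx, dy) / dt             # metres per second
    heading = math.degrees(math.atan2(dy, dx))  # moving direction in degrees
    return speed, heading

# e.g., two labels 0.5 s apart: 100 m/s at a heading of 0 degrees
print(speed_and_heading((0.0, 0.0), 10.0, (50.0, 0.0), 10.5))
```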
Model Management
When a network or UE has moving speed information, the moving speed can be a condition for function/model management, such as the operation of function/model update, switch, fallback, and other possible behaviors. For example, in the current framework, the model training and inference based on one current time index/stamp is assumed as a baseline function. Model training and inference based on future location/position prediction can be deemed another function or a supplementary function. These two functions can be switched with each other through an LCM (life cycle management) according to the terminal moving speed. If the baseline function is applied, whether to switch to the supplementary function for future position prediction may depend on the moving speed. A moving speed threshold can be configured. When the moving speed exceeds the threshold, the supplementary function for future position prediction can be enabled, and the baseline function can be disabled correspondingly. Likewise, when the speed drops below the threshold, the baseline function can be enabled again, while the supplementary function is disabled. Here, the moving speed is the condition for function switching.
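As a non-limiting illustration of the threshold-based switching, the following Python sketch returns which function should be active given the moving speed, the configured threshold, and the currently active function; the function names and example values are assumptions for illustration only.

```python
def select_positioning_function(moving_speed: float,
                                threshold: float,
                                current: str) -> str:
    """Sketch of the LCM switching rule described above: enable the
    supplementary (future-prediction) function above the speed threshold,
    fall back to the baseline function below it."""
    if moving_speed > threshold and current == "baseline":
        return "supplementary"   # enable future position prediction
    if moving_speed <= threshold and current == "supplementary":
        return "baseline"        # disable prediction, re-enable baseline
    return current               # no switch needed

active = select_positioning_function(moving_speed=120.0,
                                     threshold=30.0,
                                     current="baseline")
```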
The moving speed, the speed threshold, or the LCM decision for function/model management based on the speed can be used as assistance information to be signaled among the different entities, including the network or UE. Especially when the entity handling the model training and inference is different from the entity that derives the metrics, such as the above-mentioned moving speed or speed threshold, or from the entity that gives the LCM decisions, the assistance information can be shared between the different entities. Several examples show how the signaling can be transferred among the entities.
Under normal circumstances, the entity that derives the metrics or the entity that gives the LCM decisions is assumed to be the core network, such as the LMF (Location Management Function) in FIGS. 2A-2D. Yet, it can be the BS or UE that derives the metrics or gives the LCM decisions. According to FIG. 2A and FIG. 5, the model training and inference are done on the UE side, and the metrics derivation, threshold setting, or LCM management is in the LMF (in the core network). The metrics derivation, threshold setting, or LCM management is therefore not in the same entity handling the model processing. The assistance information therefore needs to be shared between the LMF and the UE. When the UE gets the moving speed information, the UE can determine whether to switch the model by itself. When the UE gets the speed threshold, if the UE can calculate the speed by itself, the UE determines the switching of the model according to the threshold rule, for example, by applying the supplementary model in the case of a higher speed, or applying the baseline model in the case of a lower speed. Likewise, when the UE gets the LCM decisions from the LMF, the UE should execute the LCM decision.
According to FIGS. 2B and 2D, a similar principle applies. The model processing is on the UE side in FIG. 2B and on the BS side in FIG. 2D. The metrics derivation, threshold setting, or LCM management is in the LMF, one of the entities in the core network. The assistance information can be signaled between the LMF and the UE or between the LMF and the BS. For example, in FIG. 2D, if the metrics derivation is done in one LMF based on the outputs of several BSs rather than one, then the assistance information will be signaled to those several BSs.
The moving speed is not the only condition for function switching; the moving direction and rotation can also be conditions for function switching, if these pieces of information can be derived. These conditions can be derived in the network (like the LMF) or the UE, and the network can have knowledge of the condition so that the network can decide on function switching. Alternatively, the UE can use these conditions to decide the function without intervention from the network.
If the moving speed cannot be derived, the model management cannot be based on the speed. If the measurements within the time window N in FIGS. 3A, 3B, 4A, and 4B cannot be the input of the model, the future position prediction cannot be made. Some alternative solutions may be considered to assist the model management or the future position prediction.
Additionally or alternatively, the UE can report the residence time in a small area, for example, if the UE stays in a small room for a long time. The residence time can partially represent the status of the UE's movement. The UE can report the residence time directly, but the direct residence time report may not be concise for mobile system signaling. Some extended solutions based on the residence time are illustrated as follows, with a minimal sketch after the list.
(1) A graded report can be provided. For example, if the residence time is less than 1 s, the UE reports level 1; if the residence time is between 1 s and 2 s, the UE reports level 2, and so on.
(2) The UE can report the Truth/False of a long-time stay. When the time of stay exceeds the threshold of the residence time, the UE can trigger the report of Truth of long-time stay. When the time of stay doesn't exceed the threshold of the residence time, the UE may trigger the report of False of long-time stay.
(3) The UE can also report the residence time by the time index/stamp. The time stamp is defined as, or attached to, a time duration field that represents the residence time. The time duration field can also be replaced by the condition indication of Truth/False of long-time stay. The threshold of the residence time also needs to be indicated to the UE if the Truth/False report is adopted.
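As a non-limiting illustration of the graded and Truth/False residence-time reports above, the following Python sketch maps a residence time to a report; the grade boundaries beyond 1 s and 2 s are assumptions for illustration only.

```python
from typing import Optional

def residence_time_report(residence_s: float,
                          threshold_s: Optional[float] = None,
                          levels: tuple = (1.0, 2.0, 4.0)) -> dict:
    """Sketch of the extended residence-time reports: a graded level and an
    optional Truth/False long-stay indication against a configured threshold."""
    # (1) Graded report: level 1 below 1 s, level 2 between 1 s and 2 s, and so on.
    level = 1 + sum(residence_s >= edge for edge in levels)
    report = {"level": level}

    # (2) Truth/False of long-time stay, if a residence-time threshold is configured.
    if threshold_s is not None:
        report["long_stay"] = residence_s >= threshold_s
    return report

print(residence_time_report(1.5, threshold_s=2.0))  # {'level': 2, 'long_stay': False}
```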
Use of Beam Information
In the examples above, the features for measurement include the time domain measurements, such as the DL RSTD, UL RTOA, and so on. Additionally or alternatively, the features for angle, phase, or beam measurements can also be considered as the model inputs for positioning inference or the training of the model. The details are discussed below.
The beam measurement has been supported in wireless communication systems to aid the beam management. The downlink beam measurement can be based on DL (downlink) reference signals such as SS Blocks (SSB), a CSI-RS (Channel State Information Reference Signal), a DMRS (Demodulation Reference Signal), or other possible signals for the beam measurement. The uplink beam measurement is based on UL (uplink) reference signals such as the SRS, RACH, DMRS, or other possible signals, for example, the downlink reference signal if the channel reciprocity is guaranteed. The characteristics that reflect or represent the beam are the beam metric parameters, including but not limited to parameters related to the reference signal such as a resource ID, a resource set ID, a beam pair ID, an RSRP, an RSRQ, an SINR, a Beam Domain Receive Power Map (BDRPM), a Resource Indicator, an AOA (angle of arrival), a ZOA (zenith angle of arrival), an AOD (angle of departure), a ZOD (zenith angle of departure), and so on. These metric parameters can be regarded as the beam measurements in general.
As shown in FIGS. 6A and 6B, the beam measurement can be combined with the above-mentioned measurements for positioning, such as the timing measurement, as an input for model training and inference. Similarly, during the model training and inference, not only the current measurements or predictions of timing, positioning, and beam information, but also the previous measurements can be used. For example, the measurements in FIGS. 6A and 6B include time domain measurements and beam domain measurements.
Use of Predicted Beam Information
Alternatively or additionally, as shown in FIGS. 7A and 7B, the beam information used as the input for the positioning AI model can be output from an AI model that makes the prediction of the beam information. The prediction of the beam information can also be used as the input of the AI positioning model to further enhance the accuracy of positioning. As shown in FIGS. 7A and 7B, the inputs of the AI model for positioning inference include the measurement values in the preceding time window, the measurements at time T, the predicted measurements at time T+Δt, and the beam information, which can be beam measurements at time T or predictions at time T+Δt. The ground truth label can be the parameter at time T+Δt, considering the processing time Δt. The measurements may include the DL (downlink) RSTD (reference signal time difference), UE Rx-Tx time difference, DL PRS RSRP (DL PRS Reference Signal Received Power), DL RSRQ (Reference Signal Received Quality), DL RSSI (Received Signal Strength Indicator), SINR (Signal to Interference & Noise Ratio), DL PRS-RSRPP (PRS Reference Signal Received Path Power), DL-AOD (Angle-of-Departure), DL RSCP (Received Signal Code Power), UL RTOA (Relative Time Of Arrival), Timing advance, Base station Rx-Tx Time Difference, UL SRS-RSRP (Sounding Reference Signal Reference Signal Received Power), UL SRS-RSRPP (Sounding Reference Signal Reference Signal Received Path Power), UL AOA, UL RSCP (Received Signal Code Power), CIR (Channel Impulse Response), PDP (Power Delay Profile), or DP (Delay Profile).
The output of the AI beam prediction from the beam management model includes a reference signal resource ID, a reference signal resource set ID, a beam pair ID, an RSRP, an RSRQ, an SINR, a Beam Domain Receive Power Map (BDRPM), a Resource Indicator, an AOA, a ZOA, an AOD, a ZOD, and so on. The certainty or uncertainty of the prediction of the beam information can also be an output of the AI beam prediction model for further usage. For example, the prediction of the beam can be represented by the reference signal resource ID. The certainty of the predicted reference signal resource ID is the probability or credibility that this ID is the true future reference signal resource ID. The certainty can be represented in a percentage format, by a grade (e.g., from 1 to N), or by a hard decision ('1' is true, '0' is false). If the certainty is lower than a pre-configured threshold or is marked as false, this input to the AI positioning model is not reliable and may negatively affect the position prediction. In such a case, the predicted beam may be disregarded as the model input of the positioning model. Additionally or alternatively, the positioning model can deprioritize the corresponding input of the prediction. Similarly, the time prediction as an AI positioning model input can apply a similar principle of selection based on a corresponding threshold or certainty mark.
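As a non-limiting illustration of this certainty-based selection, the following Python sketch either disregards or deprioritizes predicted inputs whose certainty falls below a pre-configured threshold or is marked as false; the field names, threshold value, and weight factor are assumptions for illustration only.

```python
def filter_predicted_inputs(inputs, certainty_threshold=0.7, deprioritize=False):
    """Sketch: handle predicted inputs whose certainty is below a pre-configured
    threshold or is marked as false; each input is a dict with 'value',
    'certainty' (a fraction in 0..1 or a True/False hard decision), and 'weight'."""
    kept = []
    for item in inputs:
        certainty = item.get("certainty", 1.0)
        reliable = (certainty if isinstance(certainty, bool)
                    else certainty >= certainty_threshold)
        if reliable:
            kept.append(item)
        elif deprioritize:
            # Keep the unreliable predicted input, but with reduced influence.
            kept.append(dict(item, weight=item.get("weight", 1.0) * 0.1))
        # Otherwise the unreliable predicted input is disregarded entirely.
    return kept

inputs = [{"value": -95.0, "certainty": 0.9}, {"value": -80.0, "certainty": False}]
print(filter_predicted_inputs(inputs))  # only the first, reliable input remains
```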
The AI beam prediction would implicitly affect the AI positioning performance. When the beam is predicted, the reference signal for positioning (such as the PRS and others) could be optimized according to the predicted beam, as the QCL (quasi co-location) of the PRS is related to the QCL of the beam.
Positioning Prediction Based on Hybrid AI Model
Alternatively or additionally, the functions of AI beam management model and AI positioning model can be merged together as shown in FIGS. 8A and 8B.
For the model training, the inputs include not only the time and beam measurements at the current time T but also, optionally, the time and beam measurements within the history measurement time window. Also, the time and beam predictions at time T+Δt can be used as inputs of the model training and model inference. The ground truth label on the positioning/location and beam measurements at a future time T+Δt can also be used. For model inference, the inputs include the time and beam measurements at the current time and within the history measurement time window. Also, the time and beam predictions at time T+Δt can be used as inputs. The output of the combined model includes the positioning or intermediate features of the positioning and the beam or intermediate features for the beam.
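As a non-limiting illustration of the merged model, the following Python sketch concatenates time-domain and beam-domain input features in a shared layer and produces a position output and a beam output from separate heads; the layer sizes, the torch-based implementation, and the two-head structure are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class HybridPositioningBeamModel(nn.Module):
    """Sketch of a merged model: time-domain and beam-domain inputs in,
    a position output and a beam output from shared layers."""
    def __init__(self, time_dim=16, beam_dim=32, hidden=128, n_beams=32):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(time_dim + beam_dim, hidden), nn.ReLU())
        self.pos_head = nn.Linear(hidden, 2)         # (x, y) or intermediate feature
        self.beam_head = nn.Linear(hidden, n_beams)  # beam quality / selection logits

    def forward(self, time_feat, beam_feat):
        h = self.shared(torch.cat([time_feat, beam_feat], dim=-1))
        return self.pos_head(h), self.beam_head(h)

pos, beam = HybridPositioningBeamModel()(torch.randn(1, 16), torch.randn(1, 32))
```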
The merged AI model in FIGS. 8A and 8B can include the combination of two virtually separate models of AI positioning and beam management. For model monitoring, the dependency of the output on the input can help the management entity judge which part of the input should be adjusted or which part of the model parameters can be adjusted or retrained. The dependency can be evaluated by a probability of the output depending on the input. For example, in the AI positioning and AI beam management hybrid model, there are two kinds of inputs, the time and beam measurements, and two kinds of outputs, the position and the beam, or the possible related intermediate features. Then, there are four dependencies: the dependency of the position on the time measurement, the position on the beam measurement, the beam on the time measurement, and the beam on the beam measurement. Additionally or alternatively, if the monitoring or decision entity is different from the entity embedded with the AI model, the monitoring or decision entity can guide the AI model to adjust the model features such as the model map, parameters, input, or others based on the dependencies. Assistance information about the guidance is needed, for example, the signaling from the LMF as the decision entity to the gNB as the model entity.
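As a non-limiting illustration, one possible way (not prescribed by this disclosure) to estimate the four dependencies is an input-perturbation sensitivity check, sketched below in Python and intended to be used with a two-input, two-output model such as the merged-model sketch above; the perturbation scale is an assumption for illustration only.

```python
import torch

def estimate_dependencies(model, time_feat, beam_feat, eps=1e-2):
    """Sketch: estimate how strongly each output (position, beam) depends on each
    input group (time, beam) by perturbing one input group at a time and
    measuring the mean absolute change in the two outputs."""
    with torch.no_grad():
        pos0, beam0 = model(time_feat, beam_feat)
        perturbed = {
            "time_measurement": (time_feat + eps * torch.randn_like(time_feat), beam_feat),
            "beam_measurement": (time_feat, beam_feat + eps * torch.randn_like(beam_feat)),
        }
        deps = {}
        for group, (t, b) in perturbed.items():
            pos1, beam1 = model(t, b)
            deps[("position", group)] = (pos1 - pos0).abs().mean().item()
            deps[("beam", group)] = (beam1 - beam0).abs().mean().item()
    return deps  # the four dependencies, keyed by (output, input group)
```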
Additionally or alternatively, the model integration is not limited to the above AI positioning and AI beam management. Many models can be considered to be merged together, for example, the AI mobility model and AI positioning model, or AI mobility model and AI beam management.
FIGS. 9A-9C together illustrate a block diagram of an exemplary wireless communication system 20, in accordance with some embodiments of this disclosure. The system 20 may perform the methods/steps and their combinations or sub-combinations disclosed in this disclosure. The system 20 may include components and elements configured to support operating features that need not be described in detail herein.
The system 20 may include at least one base station (BS) 110 (or a RAN node) , at least one user equipment (UE) 120, and at least one core network (CN) 130, including the NF, RNDF, AF, AMF, CREF, or other functions. The BS 110 includes a BS transceiver or transceiver module/circuitry 112, a BS antenna system 116, a BS memory or memory module/circuitry 114, a BS processor or processor module/circuitry 113, and a network interface 111. The components of BS 110 may be electrically coupled and in communication with one another as necessary via a data communication bus 190. Likewise, the UE 120
includes a UE transceiver or transceiver module/circuitry 122, a UE antenna system 126, a UE memory or memory module/circuitry 124, a UE processor or processor module/circuitry 123, and an I/O interface 121. The components of the UE 120 may be electrically coupled and in communication with one another as necessary via a data communication bus 190. The UE 120 communicates with the one or more BSs 110 via communication channels there between, which can be any wireless channel or other medium known in the art suitable for transmission of data as described herein.
The CN 130 includes at least one CN transceiver or transceiver module/circuitry 132, at least one CN antenna system 136, at least one CN memory or memory module/circuitry 134, at least one CN processor or processor module/circuitry 133, and at least one network interface 131. The CN can be formed by a distributed system, including multiple devices 130. The components of the CN 130 may be electrically coupled and in communication with one another as necessary via a data communication bus 190. The CN can communicate with one or more application servers and one or more base stations (RAN node) via wired or wireless communication.
The processor module/circuitry 113, 123, 133 may be implemented, or realized, with a general-purpose processor, a content addressable memory, a digital signal processor, an application specific integrated circuit, a field programmable gate array, any suitable programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof, designed to perform the functions described herein. In this manner, a processor module/circuitry may be realized as a microprocessor, a controller, a microcontroller, a state machine, or the like. A processor module/circuitry may also be implemented as a combination of computing devices, e.g., a combination of a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other such configuration.
Furthermore, the steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in firmware, in a software module performed by the processor module/circuitry 113, 123, 133, respectively, or
in any practical combination thereof. The memory module/circuitry 114, 124, 134 may be realized as RAM memory, flash memory, EEPROM memory, registers, ROM memory, EPROM memory, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. In this regard, the memory module/circuitry 114, 124, 134 may be coupled to the processor module/circuitry 113, 123, 133 respectively, such that the processor module/circuitry 113, 123, 133 can read information from, and write information to, the memory module/circuitry 114, 124, 134 respectively. The memory module/circuitry 114, 124, 134 may also be integrated into their respective processor module/circuitry 113, 123, 133. In some embodiments, the memory module/circuitry 114, 124, 134 may each include a cache memory for storing temporary variables or other intermediate information during execution of instructions to be performed by the processor module/circuitry 113, 123, 133 respectively. The memory module/circuitry 114, 124, 134 may also each include non-volatile memory for storing instructions to be performed by the processor module/circuitry 113, 123, 133, respectively.
Various exemplary embodiments of the present disclosure are described herein with reference to the accompanying figures to enable a person of ordinary skill in the art to make and use the present disclosure. The present disclosure is not limited to the exemplary embodiments and applications described and illustrated herein. Additionally, the specific order and/or hierarchy of steps in the methods disclosed herein are merely exemplary approaches. Based upon design preferences, the specific order or hierarchy of steps of the disclosed methods or processes can be re-arranged while remaining within the scope of the present disclosure. Thus, those of ordinary skill in the art would understand that the methods and techniques disclosed herein present various steps or acts in exemplary order (s) , and the present disclosure is not limited to the specific order or hierarchy presented unless expressly stated otherwise.
This disclosure is intended to cover any conceivable variations, uses, combinations, or adaptive changes of this disclosure following the general principles of this disclosure, and includes well-known knowledge and conventional technical means in the art that are undisclosed in this application.
It is to be understood that this disclosure is not limited to the precise structures or operation described above and shown in the accompanying drawings, and various modifications and changes may be made without departing from the scope of this application. The scope of this application is subject only to the appended claims.
The methods, devices, processing, circuitry, and logic described above may be implemented in many different ways and in many different combinations of hardware and software. For example, all or parts of the implementations may be circuitry that includes an instruction processor or controller, such as a Central Processing Unit (CPU) , microcontroller, or a microprocessor; or as an Application Specific Integrated Circuit (ASIC) , Programmable Logic Device (PLD) , or Field Programmable Gate Array (FPGA) ; or as circuitry that includes discrete logic or other circuit components, including analog circuit components, digital circuit components or both; or any combination thereof. The circuitry may include discrete interconnected hardware components or may be combined on a single integrated circuit die, distributed among multiple integrated circuit dies, or implemented in a Multiple Chip Module (MCM) of multiple integrated circuit dies in a common package, as examples.
Accordingly, the circuitry may store or access instructions for execution, or may implement its functionality in hardware alone. The instructions may be stored in a tangible storage medium that is other than a transitory signal, such as a flash memory, a Random Access Memory (RAM) , a Read Only Memory (ROM) , an Erasable Programmable Read Only Memory (EPROM) ; or on a magnetic or optical disc, such as a Compact Disc Read Only Memory (CDROM) , Hard Disk Drive (HDD) , or other magnetic or optical disk; or in or on another machine-readable medium. A product, such as a computer program product, may include a storage medium and instructions stored in or on the medium, and the instructions when performed by the circuitry in a device may cause the device to implement any of the processing described above or illustrated in the drawings.
The implementations may be distributed. For instance, the circuitry may include multiple distinct system components, such as multiple processors and memories, and may
span multiple distributed processing systems. Parameters, databases, and other data structures may be separately stored and managed, may be incorporated into a single memory or database, may be logically and physically organized in many different ways, and may be implemented in many different ways. Example implementations include linked lists, program variables, hash tables, arrays, records (e.g., database records) , objects, and implicit storage mechanisms. Instructions may form parts (e.g., subroutines or other code sections) of a single program, may form multiple separate programs, may be distributed across multiple memories and processors, and may be implemented in many different ways. Example implementations include stand-alone programs, and as part of a library, such as a shared library like a Dynamic Link Library (DLL) . The library, for example, may contain shared data and one or more shared programs that include instructions that perform any of the processing described above or illustrated in the drawings, when performed by the circuitry.
In some examples, each unit, subunit, and/or module of the system may include a logical component. Each logical component may be hardware or a combination of hardware and software. For example, each logical component may include an application specific integrated circuit (ASIC) , a Field Programmable Gate Array (FPGA) , a digital logic circuit, an analog circuit, a combination of discrete circuits, gates, or any other type of hardware or combination thereof. Alternatively or in addition, each logical component may include memory hardware, such as a portion of the memory, for example, that includes instructions executable with the processor or other processors to implement one or more of the features of the logical components. When any one of the logical components includes the portion of the memory that includes instructions executable with the processor, the logical component may or may not include the processor. In some examples, each logical component may just be the portion of the memory or other physical memory that includes instructions executable with the processor or other processor to implement the features of the corresponding logical component without the logical component including any other hardware. Because each logical component includes at least some hardware even when the included hardware includes software, each logical component may be interchangeably referred to as a hardware
logical component.
A second action may be said to be “in response to” a first action independent of whether the second action results directly or indirectly from the first action. The second action may occur at a substantially later time than the first action and still be in response to the first action. Similarly, the second action may be said to be in response to the first action even if intervening actions take place between the first action and the second action, and even if one or more of the intervening actions directly cause the second action to be performed. For example, a second action may be in response to a first action if the first action sets a flag and a third action later initiates the second action whenever the flag is set.
To clarify the use of and to hereby provide notice to the public, the phrases “at least one of <A> , <B> , …and <N> ” or “at least one of <A> , <B> , … <N> , or combinations thereof” or “ <A> , <B> , …and/or <N> ” or “at least one of <A> , <B> , …or <N> ” are defined by the Applicant in the broadest sense, superseding any other implied definitions hereinbefore or hereinafter unless expressly asserted by the Applicant to the contrary, to mean one or more elements selected from the group comprising A, B, …and N. In other words, the phrases mean any combination of one or more of the elements A, B, …or N including any one element alone or the one element in combination with one or more of the other elements which may also include, in combination, additional elements not listed.
Claims (29)
- An inference or training method, comprising: providing a first function; providing first data of time T1 and at least one of: second data of time T2 and third data of time T3 to the first function, wherein time T2 precedes time T1 and time T3 follows time T1; and performing inference by the first function or performing training of the first function according to the first data and at least one of the second data or the third data to generate a result.
- The method according to claim 1, wherein the first data include a first measurement of time T1 and a time indication representing time T1, or the second data include a second measurement of time T2 and a time indication representing time T2.
- The method according to claim 1, wherein the third data include a ground truth label or a prediction of time T3.
- The method according to claim 2 or 3, wherein: the time indication is marked with the label of measurement or prediction; or the data with time indication is marked with the label of measurement or prediction.
- The method according to claim 1, wherein the third data include a prediction of time T3 and the method further comprises providing a ground truth label of time T3.
- The method according to claim 1, wherein providing the first function comprises managing the first function according to at least one of the first data, the second data, or the third data, or a derivation therefrom.
- The method according to claim 6, wherein managing the first function comprises enabling or disabling a prediction function of the first function when the first data, the second data, or the third data, or a derivation therefrom, meets a certain condition.
- The method according to claim 7, wherein the condition comprises at least one of a moving speed, a moving direction information, or a rotation information of a communication node.
- The method according to claim 6, further comprising signaling assistant information to a communication node, wherein the assistant information comprises at least one of speed information, threshold of speed or decisions from a management unit.
- The method according to claim 1, wherein at least one of the first to third data include a feature or measurement from at least one of time, power, beam, angle, or phase data and the result includes a position result of a communication node or intermediate parameters.
- The method according to claim 1 or 10, wherein at least one of the first to third data include data partially or integrally from a second function.
- The method according to claim 10, wherein the time, power, beam, angle, or phase data includes a certainty or confidence information of at least one of the first to third data.
- The method of claim 12, wherein the certainty or confidence information of the data is represented by a percentage, a range, or a grade.
- The method according to claim 12, further comprising determining whether the certainty or confidence information of the data meets a condition before providing at least one of the first to third data to the first function.
- The method of claim 1, wherein: the first function is configured for positioning inference/training and beam management inference/training; and performing inference by the first function or performing training of the first function comprises performing inference/training for the positioning and beam management.
- The method according to claim 1, wherein: the first function performs inference/training for positioning inference and beam management inference, and the result includes a positioning result and a beam management result or a dependency representing a probability of the output depending on the input.
- The method according to claim 16, further comprising adjusting a model map, parameters, or an input based on the dependency.
- The method according to claim 1, further comprising adjusting a model map, parameters, or an input based on assistant information indicated from a monitoring or decision entity to a function entity.
- An inference or training method, comprising: providing a first function; providing data of beam features to the first function; and performing inference by the first function or performing training of the first function according to the data to obtain a result, the result including a positioning result or an intermediate result.
- The method according to claim 19, wherein the data of beam features include angle, phase, or beam measurement data or prediction data.
- The method according to claim 19, wherein providing the data of beam features to the first function comprises receiving the data from a second function and providing the data to the first function.
- The method according to claim 19, wherein the data of beam features include a certainty or confidence information of the data of beam features.
- The method of claim 22, wherein the certainty or confidence information of the data is represented by a percentage, a range, or a grade.
- The method according to claim 22, further comprising determining whether the certainty or confidence information of the data of beam features meets a condition before providing the data to the first function.
- The method according to claim 19, wherein the data include positioning data.
- The method according to claim 19, wherein: the first function is configured for positioning inference and beam management inference, and the result further includes a beam management result.
- The method according to claim 19, further comprising managing the first function according to dependency representing a probability of output depending on the input.
- A wireless communication apparatus, comprising memory circuitry storing one or more programs and one or more processors electrically coupled to the memory circuitry and configured to execute the one or more programs to perform any one of the methods or their combinations or sub-combinations of claims 1 to 27.
- A non-transitory computer-readable storage medium, storing one or more programs, the one or more programs being configured to, when executed by at least one processor, cause the at least one processor to perform any one of the methods or their combinations or sub-combinations of claims 1 to 27.