CN117560046A - Beam tracking method and device, equipment and storage medium - Google Patents
Beam tracking method and device, equipment and storage medium
- Publication number
- CN117560046A (application CN202311452397.3A)
- Authority
- CN
- China
- Prior art keywords
- prediction
- time
- predicted
- hidden state
- beam training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B7/00—Radio transmission systems, i.e. using radiation field
- H04B7/02—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas
- H04B7/04—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas
- H04B7/08—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the receiving station
- H04B7/0837—Diversity systems; Multi-antenna system, i.e. transmission or reception using multiple antennas using two or more spaced independent antennas at the receiving station using pre-detection combining
- H04B7/0842—Weighted combining
- H04B7/086—Weighted combining using weights depending on external parameters, e.g. direction of arrival [DOA], predetermined weights or beamforming
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Software Systems (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Medical Informatics (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Radar Systems Or Details Thereof (AREA)
Abstract
The embodiments of the application disclose a beam tracking method, device, equipment, and storage medium. The method obtains a set of received signals of a target beam training before a prediction time, where the set of received signals corresponds one-to-one to a plurality of candidate beams in a preset codebook, and the target beam training is the beam training closest to the prediction time. The set of received signals and the prediction time are input to a prediction model to obtain a predicted beam corresponding to the prediction time, where the predicted beam is one of the candidate beams, the prediction model is obtained by learning signal features of sample received signals and the hidden states corresponding to those features, and the sample received signals are the received signals of beam trainings before the prediction time. By using the prediction model to learn from and predict with the sample received signals, the method improves the accuracy of beam tracking, reduces the frequency of beam training, lowers the overhead of time-frequency resources, and improves the performance and efficiency of the communication system.
Description
Technical Field
Embodiments of the present application relate to beam prediction technology, and in particular, but not exclusively, to a beam tracking method, device, equipment, and storage medium.
Background
Terahertz communication has broad application prospects in fields such as high-capacity short-range communication, long-distance satellite and space communication, and wireless secure access networks. However, since the path loss in the terahertz band is large, very large-scale antenna arrays and beamforming techniques are required to combat the poor transmission characteristics.
In beam-based communication, the accuracy of the beam direction is critical. However, conventional beam tracking methods face challenges from beam misalignment caused by user movement and the like. To solve this problem, the prior art proposes beam tracking assisted by position information: the time at which a user moves out of the coverage of a base station beam is estimated, and the narrow beam in use is adjusted according to the estimated relative position. This method is limited by its high accuracy requirements on the positioning sensor and algorithm. Alternatively, sensor information of the terminal is used to build a channel evolution model, and the model is corrected to improve the accuracy of beam tracking; however, this approach has limited ability to handle the highly nonlinear relationship between user motion and angular change. Furthermore, a beam tracking algorithm based on a deep neural network may be used, which exploits the strong fitting ability of the deep neural network to predict the optimal beam at discrete time points; however, this approach lacks flexibility.
Therefore, how to ensure that beam tracking can be realized in continuous time, reduce the frequency of beam training, reduce the cost of time-frequency resources, and improve the flexibility and accuracy of beam tracking is a problem to be solved urgently.
Disclosure of Invention
In view of this, the beam tracking method, device, equipment, and storage medium provided in the embodiments of the present application can implement beam tracking in continuous time, reduce the frequency of beam training, reduce the overhead of time-frequency resources, and improve the flexibility and accuracy of beam tracking. The beam tracking method, device, equipment, and storage medium provided by the embodiments of the present application are realized as follows:
the beam tracking method provided by the embodiment of the application comprises the following steps:
acquiring a group of received signals of target beam training before a predicted time, wherein the group of received signals comprises received signals corresponding one-to-one to a plurality of candidate beams in a preset codebook, and the target beam training is the most recent beam training before the predicted time;
and inputting the group of received signals and the prediction time to a prediction model to obtain a prediction beam corresponding to the prediction time, wherein the prediction beam is one of the plurality of candidate beams, the prediction model is obtained by learning signal characteristics of sample received signals and hidden states corresponding to the signal characteristics, and the sample received signals are received signals corresponding to beam training before the prediction time.
In some embodiments, the prediction model includes a preprocessing module, a hidden state acquisition module, and a prediction module, where inputting the set of received signals and the prediction time to the prediction model, obtains a prediction beam corresponding to the prediction time, includes:
extracting feature information corresponding to each received signal through the preprocessing module, acquiring a plurality of feature information, and inputting the feature information into the hidden state acquisition module;
processing the plurality of characteristic information through the hidden state acquisition module to acquire the hidden state of the predicted moment;
and calculating the hidden state of the predicted moment through the prediction module, and determining a predicted beam corresponding to the predicted moment.
In some embodiments, the hidden state obtaining module includes a long short-term memory network and a differential equation solver based on a fully connected network, and the processing, by the hidden state obtaining module, of the plurality of feature information to obtain the hidden state of the predicted time includes:
acquiring, in the long short-term memory network, a target hidden state corresponding to the time of the target beam training from the memory data of the last beam training before the target beam training, the last hidden state of that beam training, and the plurality of feature information, wherein the memory data of the last beam training indicates the dependency relationship among the multiple groups of received signals of the beam trainings before the target beam training;
And obtaining the hidden state of the predicted moment through the differential equation solver based on the fully connected network, in which the fully connected network fits a derivative function of the hidden state with respect to time.
In some embodiments, the derivative function is fitted by a neural network as part of fitting a classification function, the classification function modeling the process of selecting the best beam from the plurality of candidate beams.
In some embodiments, the obtaining, by the fully-connected network-based differential equation solver, the hidden state of the predicted time instant includes:
expanding the derivative function at a preset point through a Taylor formula to obtain a first multi-order polynomial corresponding to the derivative function at the preset point;
obtaining a second multi-order polynomial corresponding to an adjacent point of the preset point according to the distance between the preset point and the adjacent point and the first multi-order polynomial;
and obtaining the hidden state of the prediction moment according to the target hidden state and the second multi-order polynomial.
In some embodiments, the calculating, by the prediction module, the hidden state of the predicted time, and determining a predicted beam corresponding to the predicted time, includes:
acquiring probability vectors of a plurality of candidate beams corresponding to the hidden state of the prediction moment, wherein each probability component in the probability vectors is used for representing the probability that the corresponding candidate beam is the prediction beam corresponding to the prediction moment;
And comparing each probability component in the probability vector, and determining the maximum probability component, wherein the target candidate beam corresponding to the maximum probability component is the predicted beam corresponding to the predicted time.
In some embodiments, the hidden state acquisition module further includes a memory unit and a hidden state storage unit, where the memory unit is configured to record memory data of beam training, and the hidden state storage unit is configured to record a hidden state corresponding to the beam training, and after the acquiring the hidden state at the predicted time, the method further includes:
updating the memory unit according to the hidden state of the predicted moment and the plurality of characteristic information;
and updating the hidden state storage unit according to the hidden state of the predicted moment.
In some embodiments, before the inputting the set of received signals and the predicted time to a prediction model to obtain a predicted beam corresponding to the predicted time, the method includes:
judging whether the difference between the predicted time and the time of the target beam training is smaller than the time interval between two beam trainings;
and when the difference between the predicted time and the time of the target beam training is smaller than the time interval between two beam trainings, inputting the group of received signals and the predicted time into the prediction model to obtain the predicted beam corresponding to the predicted time.
The embodiment of the application provides a beam tracking device, which comprises:
the acquisition module is used for acquiring a group of received signals of target beam training before the prediction time, wherein the group of received signals comprises received signals corresponding one-to-one to a plurality of candidate beams in a preset codebook, and the target beam training is the most recent beam training before the prediction time;
the input module is configured to input the set of received signals and the prediction time to a prediction model to obtain a predicted beam corresponding to the prediction time, where the predicted beam is one of the plurality of candidate beams, the prediction model is obtained by learning signal features of a sample received signal and a hidden state corresponding to the signal features, and the sample received signal is a received signal corresponding to beam training before the prediction time.
The computer device provided by the embodiment of the application comprises a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the processor realizes the method described by the embodiment of the application when executing the program.
The computer readable storage medium provided in the embodiments of the present application stores a computer program thereon, which when executed by a processor implements the method provided in the embodiments of the present application.
According to the beam tracking method, device, computer equipment, and computer-readable storage medium, a set of received signals of the target beam training before the prediction time is obtained, where the set of received signals corresponds one-to-one to a plurality of candidate beams in a preset codebook and the target beam training is the beam training closest to the prediction time; the set of received signals and the prediction time are input into a prediction model to obtain a predicted beam corresponding to the prediction time, where the predicted beam is one of the candidate beams, the prediction model is obtained by learning signal features of sample received signals and the hidden states corresponding to those features, and the sample received signals are the received signals of beam trainings before the prediction time. By using the prediction model to learn from and predict with the sample received signals, the accuracy of beam tracking is improved, beam switching is accelerated, beam tracking overhead is reduced, and the performance and efficiency of the communication system are improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the technical aspects of the application.
Fig. 1 is an application scenario diagram of a beam tracking method disclosed in an embodiment of the present application;
fig. 2A is a schematic flow chart of a beam tracking method disclosed in an embodiment of the present application;
fig. 2B is a flowchart of an implementation of a beam tracking method disclosed in an embodiment of the present application;
FIG. 2C is a flow chart of an implementation of another beam tracking method disclosed in an embodiment of the present application;
FIG. 3 is a general flow chart of a beam tracking method disclosed in an embodiment of the present application;
FIG. 4 is a flow chart of an implementation of another beam tracking method disclosed in an embodiment of the present application;
FIG. 5 is a schematic diagram of a change in beamforming with predicted time according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a change of beam forming with a moving speed according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a beam tracking apparatus according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purposes, technical solutions and advantages of the embodiments of the present application to be more apparent, the specific technical solutions of the present application will be described in further detail below with reference to the accompanying drawings in the embodiments of the present application. The following examples are illustrative of the present application, but are not intended to limit the scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
It should be noted that the term "first/second/third" in reference to the embodiments of the present application is used to distinguish similar or different objects, and does not represent a specific ordering of the objects, it being understood that the "first/second/third" may be interchanged with a specific order or sequence, as permitted, to enable the embodiments of the present application described herein to be implemented in an order other than that illustrated or described herein.
In view of this, the embodiments of the present application provide a beam tracking method, which is applied to an intelligent electronic device. Fig. 1 is an application scenario diagram of a beam tracking method according to one embodiment. As shown in fig. 1, a user may carry, wear, or use an electronic device 10, which may include, but is not limited to, a cell phone, a wearable device (e.g., a smart watch, a smart bracelet, smart glasses, etc.), a tablet computer, a notebook computer, a vehicle-mounted terminal, a PC (Personal Computer), etc. The functions implemented by the method may be performed by a processor in the electronic device, and the corresponding program may be stored in a computer storage medium; the electronic device therefore comprises at least a processor and a storage medium.
Fig. 2A is a schematic implementation flow chart of a beam tracking method according to an embodiment of the present application. As shown in fig. 2A, the method may include the following steps 201 to 202:
in step 201, a set of received signals of the target beam training before the predicted time is obtained, where the set of received signals includes received signals corresponding to a plurality of candidate beams in a preset codebook, and the target beam training is the most recent beam training before the predicted time.
In the embodiment of the application, the time range of target beam training is determined, namely, beam training is performed within a period of time before the predicted time. Optionally, an appropriate receiving device is selected and parameters are set to obtain received signals corresponding to a plurality of candidate beams in a preset codebook. These received signals may come from antenna arrays, microwave receivers, etc. Further, the received signal is sampled and digitized for subsequent predictive model input.
Step 202, inputting a set of received signals and a prediction time to a prediction model, so as to obtain a prediction beam corresponding to the prediction time, wherein the prediction beam is one of a plurality of candidate beams, the prediction model is obtained by learning signal characteristics and hidden states corresponding to the signal characteristics of a sample received signal, and the sample received signal is a received signal corresponding to beam training before the prediction time.
Referring to fig. 2B, a flowchart of an implementation of a beam tracking method according to an embodiment of the present application: as shown in fig. 2B, after the prediction time is determined, the received signal y_n of the last beam training before the prediction time is first obtained, and y_n is input into the prediction model to obtain the predicted beam corresponding to the prediction time.
In the embodiment of the application, a proper prediction model is selected according to the task requirements and data characteristics. Machine learning methods, such as deep neural networks, may be used to learn and predict based on the signal features and hidden states of the sample received signals. Further, a training data set is prepared, comprising the sample received signals and their corresponding hidden states. The hidden state may be information such as the beam direction angle and beam width, and may be obtained by manual labeling or by other beam tracking algorithms. The data are divided into a training set, a validation set, and a test set, and are standardized or normalized. The model is then trained on the training set and tuned on the validation set, selecting a proper parameter configuration and training strategy. The performance of the model is evaluated on the test set, considering indicators such as beam tracking accuracy and robustness.
At the prediction time, the beam feature vectors obtained from earlier inputs of the prediction model are fed into the model together with the input data at the prediction time. These input data may include received signals, time stamps, location information, etc. The prediction model calculates the most likely predicted beam from the input data; a probabilistic prediction output may be used to represent the probability distribution over the plurality of candidate beams. According to the prediction result, the predicted beam with the highest probability is selected as the estimate of the target beam at the prediction time.
As an example, the prediction model includes a preprocessing module, a hidden state acquisition module, and a prediction module, and inputs a set of received signals and a prediction time to the prediction model to obtain a prediction beam corresponding to the prediction time, including: and extracting the characteristic information corresponding to each received signal through the preprocessing module, acquiring a plurality of characteristic information, and inputting the characteristic information into the hidden state acquisition module. Alternatively, a time-frequency analysis method such as short-time fourier transform (STFT) or wavelet transform may be used to convert the received signal to the time-frequency domain and extract the features of energy, spectral shape, etc.
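A minimal sketch of such time-frequency feature extraction, assuming the received samples for one candidate beam form a 1-D complex sequence; the window length and the two example features (frame energy and spectral centroid) are illustrative choices rather than the patent's own:

```python
import numpy as np
from scipy.signal import stft

def extract_features(y, fs=1.0, nperseg=64):
    """STFT-based features for one received-signal sequence y: per-frame energy
    and spectral centroid as simple 'energy' and 'spectral shape' descriptors."""
    f, t, Z = stft(y, fs=fs, nperseg=nperseg)
    power = np.abs(Z) ** 2
    energy = power.sum(axis=0)                                         # energy per time frame
    centroid = (f[:, None] * power).sum(axis=0) / (power.sum(axis=0) + 1e-12)
    return np.stack([energy, centroid], axis=0)
```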
Referring to fig. 2C, which is a flowchart of an implementation of a beam tracking method according to an embodiment of the present application, as shown in fig. 2C, the prediction model includes a preprocessing module 211, a hidden state acquisition module 212, and a prediction module 213. The preprocessing module 211 preprocesses the received signal y_n to obtain the feature information s_n of y_n; the hidden state acquisition module 212 processes s_n to obtain the hidden state h(t_n) corresponding to y_n, from which the hidden state at the prediction time can be obtained; the prediction module 213 then determines the predicted beam corresponding to the prediction time.
According to the importance and the relevance of the features, a feature selection algorithm (such as chi-square test, pearson correlation coefficient and the like) is used for screening the extracted features, and the features most relevant to the prediction target are selected.
Further, the hidden state obtaining module is used for processing the plurality of characteristic information to obtain the hidden state of the prediction moment. Optionally, a deep learning model such as a multi-layer perceptron (MLP, multilayer Perceptron), a convolutional neural network (CNN, convolutional Neural Network) or a recurrent neural network (RNN, recurrent Neural Network) is used to combine and transform the multiple feature information output by the preprocessing module, so as to obtain a higher-level hidden feature representation.
As an example, the hidden state acquisition module includes a long short-term memory network and a differential equation solver based on a fully connected network; processing the plurality of feature information through the hidden state acquisition module to acquire the hidden state at the predicted time includes: acquiring the target hidden state corresponding to the time of the target beam training from the memory data of the last beam training before the target beam training, the last hidden state of that beam training, and the plurality of feature information in the long short-term memory network, where the memory data of the last beam training indicates the dependency relationship among the multiple groups of received signals of the beam trainings before the target beam training. Optionally, the memory data of the last beam training before the target beam training, the last hidden state of that beam training, and the plurality of feature information are used as inputs of an LSTM (Long Short-Term Memory) network. The memory data may include the hidden states and output sequences of multiple previous beam trainings, and is used to represent historical dependencies. The memory data may also be obtained by an additional model, such as an additional LSTM or a fully connected neural network. Further, various LSTM structures, such as LSTM with a gating mechanism or LSTM with residual connections, can be adopted to improve the generalization performance and convergence speed of the network. The time of the target beam training may be the time of the last beam training or the time of the beam training closest to the predicted time.
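A minimal sketch of this discrete-time update using a standard LSTM cell, with assumed feature and hidden dimensions; s_n stands for the feature information of the current beam training, and (h_prev, c_prev) for the hidden state and memory data of the previous one:

```python
import torch
import torch.nn as nn

feature_dim, hidden_dim = 256, 256          # illustrative sizes
lstm_cell = nn.LSTMCell(feature_dim, hidden_dim)

def discrete_update(s_n, h_prev, c_prev):
    """One LSTM step: the previous hidden state and memory (cell) state together
    with the current features give the target hidden state h(t_n) and c(t_n)."""
    h_n, c_n = lstm_cell(s_n, (h_prev, c_prev))
    return h_n, c_n
```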
Further, the hidden state of the predicted time is obtained through a differential equation solver based on a fully connected network. Alternatively, the classical Euler method or the Runge-Kutta method may be used to numerically solve the differential equation.
Further, regularization terms, such as L1 and L2 norms, may be added to the differential equation to prevent overfitting.
The feature-combined result is input into a trained model (such as a long short-term memory network LSTM, a gated recurrent unit GRU, and the like) to obtain the hidden state at the predicted time.
As an example, the derivative function is fitted by a neural network to a classification function that models the process of choosing the best beam from among a plurality of candidate beams. Alternatively, a set of received signals and their corresponding candidate beams may first need to be collected. These received signals may come from signals transmitted by transmitters at known locations and received by the antenna array. The candidate beams may be a selection of different directions or angles of the antenna array.
And inputting each received signal into a preprocessing module for feature extraction, and determining feature information related to the input signal. These characteristics include power, frequency, phase, etc.
The LSTM network is used to process the previous history data and characteristic information to predict the hidden state at the next moment. Through supervised learning of the LSTM network, a proper hidden state change rule is learned from the historical data.
And obtaining the hidden state of the predicted moment through a differential equation solver based on a fully connected network. The classification function of the solver may be generated by training to model the process of selecting the best beam from a plurality of candidate beams. The selection of candidate beams may be determined using maximum likelihood estimation, bayesian inference, and the like.
And calculating a derivative function by using a differential equation solver according to the hidden state at the prediction moment to obtain a predicted value of the hidden state. And predicting by using the hidden state of the prediction moment and the candidate beam information, and selecting the optimal beam.
Further, the hidden state of the predicted time is calculated through the prediction module, and the predicted beam corresponding to the predicted time is determined. Alternatively, a classifier such as a support vector machine (SVM, support Vector Machine), a Random Forest (RF), or a regression model (such as linear regression, neural network, etc.) may be used, and an appropriate prediction model may be selected according to the specific situation.
The features together with the hidden state are used as input, and a model is trained or an inference process is carried out to obtain the predicted beam corresponding to the predicted time.
As an example, obtaining the hidden state of the predicted time by the fully connected network-based differential equation solver includes: expanding the derivative function at a preset point through the Taylor formula to obtain a first multi-order polynomial corresponding to the derivative function at the preset point. Optionally, the derivative function of the hidden state with respect to time is expanded into a polynomial at the preset point: the Taylor formula is used to expand the derivative function at the preset point, giving the first-order and second-order derivative values at that point; these derivative values are then used to form the first multi-order polynomial corresponding to the derivative function at the preset point.
In particular, the derivative dh(t)/dt of the hidden state h(t) with respect to time is obtained, and the initial value problem of the ordinary differential equation, with initial value h(t_n), is solved by the second-order Runge-Kutta method to find the hidden state at the prediction time. The second-order Runge-Kutta method uses a second-order Taylor expansion to numerically solve the differential equation: for a general differential equation dy/dx = f(x, y), the function y(x) is Taylor-expanded at x_n.
Further, according to the distance between the preset point and an adjacent point of the preset point and the first multi-order polynomial, a second multi-order polynomial corresponding to the adjacent point is obtained. Optionally, the second multi-order polynomial corresponding to the adjacent point is calculated from the distance between the preset point and the adjacent point and the first multi-order polynomial of the derivative function at the preset point. The distance between two adjacent points can be calculated from the difference on the time axis. The second multi-order polynomial can be derived from the first one.
In particular, the Taylor expansion gives y(x) = y(x_n) + (x - x_n)·y'(x_n) + ((x - x_n)^2/2)·y''(x_n) + ..., and truncating it at second order yields the approximation y(x) ≈ y(x_n) + (x - x_n)·y'(x_n) + ((x - x_n)^2/2)·y''(x_n).
Further, the hidden state of the predicted time is obtained according to the target hidden state and the second multi-order polynomial. Optionally, the target hidden state and the second multi-order polynomial are used to calculate the hidden state at the predicted time: the coefficients of the second multi-order polynomial are substituted into the polynomial to obtain the second-order derivative value at the prediction time, which is substituted into the differential equation of the derivative function and combined with the target hidden state to obtain the hidden state at the prediction time.
Specifically, in the second-order approximation, let Δx = x_{n+1} - x_n and take x = x_{n+1}, which gives y(x_{n+1}) ≈ y(x_n) + Δx·y'(x_n) + (Δx^2/2)·y''(x_n).
The second derivative y''(x_n) can be represented by the forward difference quotient of y'(x) at x_n, i.e. y''(x_n) ≈ (y'(x_{n+1}) - y'(x_n))/Δx, so
y_{n+1} ≈ y_n + (Δx/2)·(y'(x_n) + y'(x_{n+1})) = y_n + (Δx/2)·(f(x_n, y_n) + f(x_{n+1}, y_{n+1})).
This is an implicit equation in y_{n+1} whose direct solution is computationally expensive, so it is rewritten in the explicit two-stage form k_1 = f(x_n, y_n), k_2 = f(x_n + Δx, y_n + Δx·k_1), y_{n+1} = y_n + (Δx/2)·(k_1 + k_2).
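A minimal sketch of this two-stage second-order Runge-Kutta (Heun) update applied to propagating the hidden state from the last beam training to the prediction time; the step count, and the assumption that the fitted derivative network can be called as f(t, y), are illustrative:

```python
def heun_step(f, t, y, dt):
    """One second-order Runge-Kutta (Heun) step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt, y + dt * k1)
    return y + 0.5 * dt * (k1 + k2)

def solve_hidden_state(f, h_tn, tau, steps=4):
    """Propagate the hidden state from normalized time 0 (the n-th beam training)
    to the normalized prediction time tau with several Heun steps."""
    h, t, dt = h_tn, 0.0, tau / steps
    for _ in range(steps):
        h = heun_step(f, t, h, dt)
        t += dt
    return h
```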
As an example, calculating, by the prediction module, the hidden state of the predicted time, and determining the predicted beam corresponding to the predicted time includes: and obtaining probability vectors of a plurality of candidate beams corresponding to the hidden state of the prediction moment, wherein each probability component in the probability vectors is used for representing the probability that the corresponding candidate beam is the prediction beam corresponding to the prediction moment. Optionally, according to the hidden state of the predicted time, each candidate beam is calculated to obtain a probability component corresponding to the candidate beam. These probability components are used to represent the probability that each candidate beam will be the predicted beam at the predicted time.
Further, each probability component in the probability vectors is compared, the largest probability component is determined, and the target candidate beam corresponding to the largest probability component is the predicted beam corresponding to the predicted time. Alternatively, all probability components are compared, and the component with the largest probability is found as the target. I.e. the candidate beam with the highest probability is determined as the predicted beam at the predicted time instant. And outputting the candidate beam with the highest probability to a beam tracking system for corresponding processing and operation as a predicted beam at the predicted moment.
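A minimal sketch of this selection step, assuming the prediction head outputs one logit per candidate beam; the softmax-plus-argmax form is a common choice, not necessarily the patent's exact one:

```python
import torch

def select_beam(logits):
    """logits: output of the single-layer fully connected prediction head, one
    entry per candidate beam. Returns the predicted beam index and the
    probability vector whose components are the per-beam probabilities."""
    probs = torch.softmax(logits, dim=-1)
    return int(torch.argmax(probs)), probs
```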
As an example, the hidden state obtaining module further includes a memory unit and a hidden state storage unit, where the memory unit is configured to record the memory data of the beam training, and the hidden state storage unit is configured to record the hidden state corresponding to the beam training, and after obtaining the hidden state at the predicted time, the method further includes: and updating the memory unit according to the hidden state of the prediction moment and the plurality of characteristic information. Alternatively, when beam tracking prediction is performed according to the hidden state at the prediction time and a plurality of pieces of characteristic information, these pieces of information may be input into the memory unit for updating. The memory unit is usually implemented by a network structure for long-term short-term memory. Specifically, the memory unit includes three key parameters: input gating: controlling the input of new information; forget gating: controlling the reservation of old information; output gating: and controlling information output in the memory unit. In the different operation processes of new information input, old information forget, information output and the like, the memory unit can adaptively adjust according to the current hidden state, the input characteristic information and the like, and the memory data is combined with the current hidden state to obtain updated memory data. The updating of the memory unit can be realized through a back propagation algorithm, namely, the gradient of the loss function to each weight parameter is calculated, and the gradient is optimized so as to complete the training of the model.
Further, the hidden state storage unit is updated according to the hidden state at the predicted time. Alternatively, a corresponding hidden state is obtained in each beam training. This hidden state may be recorded in a hidden state storage unit for use in the next beam training. When beam tracking is needed at the prediction time, the latest hidden state is firstly read from the hidden state storage unit, and is used as one of the inputs of the prediction model to be added into the neural network for beam tracking prediction. When updating the hidden state storage unit, an optimization algorithm such as gradient descent can be used, and parameters in the hidden state storage unit can be adjusted by calculating the gradient of the loss function to the hidden state so as to update the stored hidden state. Meanwhile, the result of each beam tracking can be evaluated according to the effect of the prediction model.
As an example, before inputting the set of received signals and the predicted time to the prediction model to obtain the predicted beam corresponding to the predicted time, the method includes: judging whether the difference between the predicted time and the target beam training time is smaller than the time interval between two beam trainings. Optionally, the time difference between the target beam training time and the predicted time is first calculated, and the time interval between two beam trainings is acquired. The time interval can be a fixed preset value or can be dynamically adjusted as needed in practical applications.
Further, when the difference between the predicted time and the target beam training time is smaller than the time interval between two times of training, a group of received signals and the predicted time are input into the prediction model to obtain a predicted beam corresponding to the predicted time. Optionally, it is determined whether the time difference is less than the time interval between two exercises. If it is smaller, the next step is performed, and a set of received signals and the predicted time instants are passed as inputs to the prediction model. The prediction model comprises a preprocessing module, a hidden state acquisition module and a prediction module. And in the hidden state acquisition module, the hidden state at the prediction moment is acquired according to the plurality of characteristic information, the memory data of the last beam training before the target beam training and the last hidden state. This can be achieved by long and short term memory networks and differential equation solvers based on fully connected networks. In the prediction module, the hidden state calculation at the prediction moment is utilized, and the prediction beam corresponding to the prediction moment is determined. This can be achieved by obtaining probability vectors for a plurality of candidate beams corresponding to the hidden states at the predicted time, and comparing each probability component in the probability vectors, selecting the target candidate beam with the highest probability as the predicted beam. The predicted beam is returned as an output result and the memory unit and hidden state storage unit are updated for use in the next prediction.
If the difference equals the interval, the predicted beam result of that target-beam-training cycle is taken as the predicted beam.
If the difference is greater than the interval, the target-beam-training cycle index is increased until the difference between the predicted time and the time of the target beam training is less than the time interval between two beam trainings.
Specifically, the prediction time (a specific time instant) is obtained, and the prediction time is normalized to improve the stability and convergence speed of the model and to avoid training difficulties caused by excessively large numerical differences. The normalized prediction time τ is set within the range 0 to 1. The time difference between the target beam training time t_n and the prediction time is calculated; the interval between two beam trainings is T, and the index of the target beam training cycle is n. If the time difference is greater than T, n is updated to n + 1 and the beam prediction flow is repeated; if the time difference is smaller than T, normalization is performed, and τ is calculated as the time difference divided by T, i.e. the normalized prediction time; if the time difference equals T, the predicted beam of that training cycle is output as the predicted beam at the prediction time.
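A small sketch of this timing check, under the assumption that the prediction time t_pred and the training time t_n are expressed in the same units and T is the training interval:

```python
def normalized_prediction_time(t_pred, t_n, T):
    """Decide how to handle a prediction request at time t_pred given the last
    beam-training time t_n and the training interval T:
    'retrain' - advance to the next training cycle (n = n + 1) and repeat;
    'reuse'   - take that training cycle's own result as the predicted beam;
    'predict' - run the prediction model with the normalized time tau."""
    diff = t_pred - t_n
    if diff > T:
        return "retrain", None
    if diff == T:
        return "reuse", 1.0
    return "predict", diff / T
```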
According to the method, a group of received signals of the target beam training before the prediction time is obtained, where the group of received signals corresponds one-to-one to a plurality of candidate beams in a preset codebook and the target beam training is the beam training closest to the prediction time; the group of received signals and the prediction time are input into a prediction model to obtain a predicted beam corresponding to the prediction time, where the predicted beam is one of the plurality of candidate beams, the prediction model is obtained by learning signal features of sample received signals and the hidden states corresponding to those features, and the sample received signals are the received signals of beam trainings before the prediction time. By using the prediction model to learn from and predict with the sample received signals, the method and device improve the accuracy of beam tracking, reduce the frequency of beam training, lower the overhead of time-frequency resources, and improve the performance and efficiency of the communication system.
An exemplary application of the embodiments of the present application in a practical application scenario will be described below.
Considering the uplink scenario, assume that the base station and the user are equipped with M antennas and a single antenna, respectively, and that a uniform linear phased array is used at the base station. At the t-th time slot, the received signal at the base station is y_t = w_t^H (h_t x_t + n_t),
where the uplink pilot x_t satisfies the power constraint |x_t|^2 = P, h_t is the channel gain vector between the base station and the user, n_t is additive white Gaussian noise, and w_t is the base station beamforming vector. It is assumed that the base station has only one radio-frequency link and only analog beamforming is considered. Assume a preset codebook W = {w^(1), w^(2), ..., w^(Q)} consisting of Q candidate beams, where w^(q) is the q-th candidate beam, q ∈ {1, 2, ..., Q}. The goal of beam training is to find the optimal beam w_t^* in W that maximizes the received signal power, i.e. w_t^* = argmax_{w ∈ W} |w^H h_t|^2.
In order to provide uninterrupted high-quality communication to users moving at high speed, frequent beam training is required at both ends of the transceiver to keep the communication link properly connected, which results in a significant loss of communication resources. By exploiting the user's motion information, the optimal beam at any time between two beam trainings can be predicted, and the frequency of beam training is reduced. Consider beam prediction after the n-th beam training. Let T be the interval between two beam trainings and t_n the time of the n-th beam training; for any time t before the (n+1)-th training, define the normalized prediction time τ = (t - t_n)/T.
Since the received signals of the previous beam trainings reflect the state of the user's movement, these received signals can be utilized as inputs for beam prediction. Specifically, assume that the received signal of the n-th beam training is y_n = [y_n^(1), y_n^(2), ..., y_n^(Q)], where y_n^(q) represents the received signal corresponding to the use of the q-th candidate beam. Considering that the number of candidate beams is limited, best-beam prediction can be modeled as a multi-class classification task:
q̂(τ) = F(y_n, τ),
where q̂(τ) is the label corresponding to the best beam at time τ and F represents the classification function.
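The following sketch simulates one beam-training sweep and derives the best-beam label used by the classification task, under simplifying assumptions (a DFT-like codebook, a single-snapshot narrowband single-path channel, unit pilot) that are not stated in the text:

```python
import numpy as np

M, Q = 64, 64   # antennas (M = 64 per the later simulation setup) and codebook size (assumed)
# DFT-like analog codebook: column q is candidate beam w^(q)
codebook = np.exp(-1j * np.pi * np.outer(np.arange(M),
                                         np.linspace(-1, 1, Q, endpoint=False))) / np.sqrt(M)

def sweep_codebook(h, noise_std=0.01, pilot=1.0):
    """Received signal per candidate beam, y^(q) = w_q^H (h * x + n); the label of
    the best beam is the index with the largest received power."""
    y = np.array([
        w.conj() @ (h * pilot + noise_std * (np.random.randn(M) + 1j * np.random.randn(M)))
        for w in codebook.T
    ])
    return y, int(np.argmax(np.abs(y)))

# example: a single-path channel toward direction theta (illustrative)
theta = 0.3
h = np.exp(-1j * np.pi * np.arange(M) * np.sin(theta))
y_n, best_label = sweep_codebook(h)
```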
The relationship between the change of the optimal beam and the user movement is highly nonlinear, and it is difficult to solve for F directly. Given the rapid recent theoretical development of machine learning and deep learning, deep neural networks have excellent ability to fit highly nonlinear functions, so the classification function F is fitted with a deep neural network. To fully extract the user's motion information and predict the optimal beam at any time, the network needs the ability to learn long-term dependency information. To this end, a differential-equation long short-term memory network is constructed, which takes the received signals {y_k}_{k≥1} of periodic beam training and the normalized prediction time τ as input, and directly outputs the label q̂(τ) of the predicted best beam corresponding to τ. The differential-equation long short-term memory network mainly consists of three modules: a preprocessing module, a hidden state acquisition module, and a prediction module. The preprocessing module consists of a batch normalization (BN) layer and convolution layers, and is mainly used to extract features from the received signals of beam training. The hidden state acquisition module consists of two parts, an LSTM network and an ordinary differential equation (ODE) solver based on a fully connected network: the LSTM part uses a memory cell to learn the long-term dependency of the data sequence and dynamically updates the hidden state at discrete times according to the feature information provided by the preprocessing module, while the ODE solver part uses the fully connected network to fit the derivative of the hidden state with respect to time and then solves the initial value problem of the ordinary differential equation to track the evolution of the hidden state over a continuous interval. The prediction module consists of a single-layer fully connected network and predicts the optimal beam from the hidden state output by the ODE solver. The specific steps of beam tracking based on the differential-equation long short-term memory network are as follows:
Fig. 3 is a general flow chart of a beam tracking method according to an embodiment of the present application. As shown in fig. 3, the method includes the following steps 301 to 306:
step 301, initializing relevant parameters, including the beam training index n = 1, the initial memory cell c(0) = 0, the initial hidden state h(0) = 0, and the prediction time.
Step 302, performing beam training: the receiving end measures the received signals y_n corresponding to the Q candidate beams in the preset codebook.
Step 303, extracting the feature information s_n from y_n using the batch normalization layer and the convolution layers.
Step 304, the long short-term memory network obtains the current hidden state h(t_n) from the past memory cell data c(t_{n-1}), the past hidden state h(t_{n-1}), and the feature information of the current beam training.
In step 305, the hidden state of the predicted time is obtained by the differential equation solver based on the fully connected network, and the specific implementation manner is described in step 202 above, which is not described herein.
Step 306: the hidden state at the prediction time is input into the single-layer fully connected network of the prediction module to obtain the probability vector whose components are the probabilities of each candidate beam being the optimal beam; the candidate beam with the highest probability is selected as the optimal beam at the prediction time.
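A condensed, self-contained sketch of steps 301 to 306 under assumed module shapes (a linear layer stands in for the BN/convolution preprocessing, and the derivative network ignores explicit time); it illustrates the flow, not the patent's exact network:

```python
import torch
import torch.nn as nn

Q, F, H = 64, 256, 256                                       # beams, feature dim, hidden dim (assumed)
preprocess = nn.Sequential(nn.Linear(2 * Q, F), nn.ReLU())   # stand-in for BN + convolution layers
lstm_cell  = nn.LSTMCell(F, H)                               # discrete hidden-state update
deriv_net  = nn.Sequential(nn.Linear(H, H), nn.Tanh())       # fully connected fit of dh/dt
predictor  = nn.Linear(H, Q)                                 # single-layer prediction head

def track_once(y_n, h, c, tau, steps=4):
    s = preprocess(y_n)                                      # step 303: feature extraction
    h, c = lstm_cell(s, (h, c))                              # step 304: LSTM update at t_n
    h_tau, dt = h, tau / steps
    for _ in range(steps):                                   # step 305: second-order Runge-Kutta
        k1 = deriv_net(h_tau)
        k2 = deriv_net(h_tau + dt * k1)
        h_tau = h_tau + 0.5 * dt * (k1 + k2)
    probs = torch.softmax(predictor(h_tau), dim=-1)          # step 306: per-beam probabilities
    return int(torch.argmax(probs, dim=-1)[0]), h, c

h0 = torch.zeros(1, H); c0 = torch.zeros(1, H)               # step 301: c(0) = 0, h(0) = 0
y_n = torch.randn(1, 2 * Q)                                  # step 302: stacked real/imag received signals
beam, h0, c0 = track_once(y_n, h0, c0, tau=0.5)
```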
Through the above technical solution, the performance and prediction accuracy of the beamforming system are improved. Extracting feature information from the received signals through the batch normalization layer and convolution layers can effectively reduce signal noise and interference and improve the reliability and accuracy of the received signals. The probability vector of each candidate beam being the best beam is obtained from the hidden state and the time-derivative function through the fully connected network and the prediction module, and the candidate beam with the highest probability is selected as the best beam; this improves the accuracy and efficiency of beam selection, so that the system can better adapt to different communication environments and conditions. The current hidden state is updated in real time by the long short-term memory network in combination with the past memory cells and hidden states, improving the accuracy and prediction performance of beam selection at future times.
As an example, fig. 4 is a flowchart of another implementation of a beam tracking method disclosed in an embodiment of the present application. As shown in fig. 4, the prediction model includes a normalization layer, convolution layers, a long short-term memory network, a differential equation solver, and a fully connected layer. Given the prediction time, the received signal y_n of the beam training t_n closest to the prediction time is acquired and then normalized and convolved to extract the feature information s_n of the received signal. The memory data c(t_{n-1}) of the previous beam training is obtained from the memory unit and the hidden state h(t_{n-1}) of the previous beam training from the hidden state storage unit; the long short-term memory network operates on the feature information s_n, the memory data c(t_{n-1}), and the hidden state h(t_{n-1}) to obtain the target hidden state h(t_n) corresponding to beam training t_n. The hidden state at the prediction time is then obtained by the differential equation solver and input into the single-layer fully connected network to generate the predicted beam.
As an example, parameters of the respective networks in the prediction model shown in fig. 4 are shown in table 1.
TABLE 1
As shown in table 1:
in the normalization layer with the layer number of 1, f i =2 denotes receiving 2 input channels, f o =2 means that it is mapped to 2 output channels.
In the convolution layer with the layer number of 2, f_i = 2 denotes 2 input channels, f_o = 64 means that they are mapped to 64 output channels, (3, 1) describes the convolution kernel parameters, convolution calculation is performed using a 3x3 convolution kernel, and the activation function is ReLU. In addition, the convolution boundary is zero-padded so that the output size is the same as the input size.
Similarly, the parameters of the convolution layers with the number of layers of 3 and the number of layers of 4 have the same meaning as the convolution layers with the number of layers of 2, and are not repeated here.
In the pooling layer with the layer number of 5, f i 256 means that the number of input channels is 256, f o 256 means the number of output channels is 256, and the average pooling is an operation in which the pooling operation averages each small region (e.g., a window of 2x2 or 3x 3) in the input feature map. The method aims to reduce the size of the feature map and extract average feature information in the region.
In LSTM with layer number 6, f i 256 means that the number of input channels is 256, f o 256 means that the output channel number is 256.
In the ODE FC layer with the layer number of 7, f_i = 256 means that the number of input channels is 256, f_o = 256 means that the number of output channels is 256, and tanh is the hyperbolic tangent function, a commonly used activation function for introducing nonlinear transformation. For each element x, tanh is computed as tanh(x) = (exp(x) - exp(-x))/(exp(x) + exp(-x)). Its value range is [-1, 1]; the distribution of the output values is similar to that of the Sigmoid function, but the range is [-1, 1].
The parameters of the ODE FC with the layer number of 8 and the ODE FC with the layer number of 7 have the same meaning, and are not described here again.
In the output layer with the layer number of 9, f_i = 256 means that the layer accepts a 256-dimensional input, and f_o = 64 means that the layer maps the input into a 64-dimensional output vector. Dropout is a common regularization technique that randomly zeroes some neuron outputs during training, thereby reducing the risk of overfitting of the network model; dropout = 0.3 here means that in the output layer each neuron has a 30% probability of being randomly discarded. Softmax is a common activation function used to convert the output of the last layer of the neural network into a probability distribution; it ensures that the output values of the output layer form a valid probability distribution such that all outputs lie in [0, 1] and sum to 1, to facilitate interpretation and comparison of the results.
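Since the table itself is not reproduced, the following is a rough reconstruction of the layer stack from the descriptions above; the channel counts of convolution layers 3 and 4 are not given in the text and are assumed, so this is an illustrative sketch only:

```python
import torch.nn as nn

def build_layers(q_beams=64):
    """Layer stack loosely following the Table 1 commentary (assumed where unstated)."""
    return nn.ModuleDict({
        "bn":      nn.BatchNorm1d(2),                                             # layer 1: 2 -> 2
        "conv2":   nn.Sequential(nn.Conv1d(2, 64, 3, padding=1), nn.ReLU()),      # layer 2: 2 -> 64
        "conv3":   nn.Sequential(nn.Conv1d(64, 128, 3, padding=1), nn.ReLU()),    # layer 3 (assumed)
        "conv4":   nn.Sequential(nn.Conv1d(128, 256, 3, padding=1), nn.ReLU()),   # layer 4 (assumed)
        "pool":    nn.AdaptiveAvgPool1d(1),                                       # layer 5: average pooling
        "lstm":    nn.LSTMCell(256, 256),                                         # layer 6: LSTM
        "ode_fc7": nn.Sequential(nn.Linear(256, 256), nn.Tanh()),                 # layer 7: ODE FC
        "ode_fc8": nn.Sequential(nn.Linear(256, 256), nn.Tanh()),                 # layer 8: ODE FC
        "out":     nn.Sequential(nn.Dropout(0.3), nn.Linear(256, q_beams),
                                 nn.Softmax(dim=-1)),                             # layer 9: output
    })
```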
Through the above flow, the optimal beam at any time can be obtained from the corresponding hidden-state prediction, realizing continuous-time beam tracking.
In order to illustrate the benefits of this solution compared with other solutions, three groups of experiments were performed with different technical solutions: the extended Kalman filter (EKF) algorithm, a beam tracking algorithm based on a conventional long short-term memory network, and a beam tracking algorithm based on a multi-layer long short-term memory network. The specific experiments are as follows:
Simulation experiments are carried out with a base station equipped with M = 64 antennas; the user moves with a given speed and acceleration, where the direction of motion is randomly generated in the range [0, 2π]. A series of data points is selected from the dataset based on the user trajectory to form data samples. In the network training stage, the prediction times are randomly sampled to predict the optimal beam; in the network prediction stage, the prediction times are uniformly sampled. Here f_i denotes the input dimension of a network layer, f_o the output dimension, and (c_1, c_2, c_3) the convolution kernel size, stride, and zero-padding size. The experiments use torch to train the network architecture model for 160 epochs; the Adam optimizer is selected, and the learning rate is set to r = 3x10^-3. For fair comparison, angular velocity and acceleration are introduced into the state of the extended Kalman filter method to handle moving scenarios, whereas the conventional long short-term memory network and the multi-layer long short-term memory network are applied, in contrast to the differential-equation long short-term memory network described above, without the ODE fully connected layers, where the multi-layer network has 9 LSTM layers. The performance evaluation criterion is the normalized beam gain G_N, defined as the beamforming gain achieved by the predicted beam normalized by that of the optimal beam.
To reduce the impact of network initialization on the trained model, the average result of 5 training runs is used as the evaluation index.
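A small sketch of the evaluation metric, using the common definition of normalized beam gain as the predicted beam's gain relative to the best codebook beam; the text does not reproduce the exact formula, so this definition is an assumption:

```python
import numpy as np

def normalized_beam_gain(h, codebook, predicted_idx):
    """G_N = |w_pred^H h|^2 / max_q |w_q^H h|^2, the gain of the predicted beam
    relative to the best beam in the codebook (candidate beams are the columns
    of `codebook`)."""
    gains = np.abs(codebook.conj().T @ h) ** 2
    return gains[predicted_idx] / gains.max()
```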
Fig. 5 is a schematic diagram showing the variation of the beamforming gain with the prediction time according to an embodiment of the present application. As shown in fig. 5, at a given user speed, the different schemes achieve different normalized beam gains G_N at different prediction times. Lacking sufficient ability to model the highly nonlinear relationship between user motion and angular change, the G_N of the EKF scheme degrades significantly as τ increases. In addition, the conventional LSTM scheme exhibits a markedly unimodal performance in the high-speed scenario: because the single-layer network has limited representation capability, it can only accurately predict the optimal beam at a certain time, and it tends to predict the optimal beam at the intermediate time accurately to minimize the overall performance degradation. In contrast, both the multi-layer long short-term memory network and the differential-equation long short-term memory network can accurately predict the optimal beam at any time, but the beamforming gain of the multi-layer network is not as stable as that of the differential-equation network, so the beam tracking based on the differential-equation long short-term memory network performs better.
Fig. 6 is a schematic diagram showing the variation of beamforming gain with moving speed according to an embodiment of the present application. As shown in fig. 6, since the optimal beam changes faster at higher speeds, the achievable G_N of the extended Kalman filtering method and of the conventional long short-term memory network decreases rapidly as the speed increases. In contrast, the achievable G_N of the multi-layer long short-term memory network and of the differential-equation long short-term memory network decreases by a smaller amount; among these, the beam tracking based on the differential-equation long short-term memory network performs better, with a beam gain that is more stable than that of the multi-layer long short-term memory network.
It should be understood that, although the steps in the flowcharts described above are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described above may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and the order of execution of these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with at least a part of the sub-steps or stages of other steps.
Based on the foregoing embodiments, the embodiments of the present application provide a beam tracking apparatus. The modules included in the apparatus, and the units included in each module, may be implemented by a processor; of course, they may also be implemented by specific logic circuits. In implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 7 is a schematic structural diagram of a beam tracking apparatus according to an embodiment of the present application, as shown in fig. 7, an apparatus 700 includes an acquisition module 701 and an input module 702, where:
an obtaining module 701, configured to obtain a set of received signals of target beam training before a prediction time, where the set of received signals includes received signals corresponding one-to-one to a plurality of candidate beams in a preset codebook, and the target beam training is the most recent beam training before the prediction time;
the input module 702 is configured to input a set of received signals and a prediction time to a prediction model, to obtain a predicted beam corresponding to the prediction time, where the predicted beam is one of a plurality of candidate beams, the prediction model is obtained by learning signal features and hidden states corresponding to the signal features of a sample received signal, and the sample received signal is a received signal corresponding to beam training before the prediction time.
In some embodiments, the obtaining module 701 is further configured to extract, by using the preprocessing module, feature information corresponding to each received signal, obtain a plurality of feature information, and input the plurality of feature information to the hidden state obtaining module;
further, the obtaining module 701 is further configured to process the plurality of feature information through the hidden state obtaining module, and obtain a hidden state at the predicted time;
further, the input module 702 is further configured to calculate, by using the prediction module, the hidden state at the predicted time, and determine a predicted beam corresponding to the predicted time.
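For illustration only, the following sketch shows how the three modules described above (preprocessing, hidden state acquisition, and prediction) could be chained to map a set of received signals and a prediction time to a predicted beam index; the wrapper function and the attribute names preprocess, hidden_state, and predict are hypothetical and are not part of the original disclosure.

```python
import torch

def predict_beam(prediction_model, received_signals, prediction_time):
    """Hypothetical wrapper around the three modules described above.

    received_signals: [num_beams, 2] tensor (real/imag of the signal for each candidate beam)
    prediction_time:  time offset from the most recent beam training
    """
    feats = prediction_model.preprocess(received_signals)           # feature information per received signal
    h_pred = prediction_model.hidden_state(feats, prediction_time)  # hidden state at the prediction time
    probs = prediction_model.predict(h_pred)                        # probability per candidate beam
    return int(torch.argmax(probs))                                 # index of the predicted beam
```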
In some embodiments, the obtaining module 701 is further configured to obtain, in the long short-term memory network, a target hidden state corresponding to the time of the target beam training through the memory data of the previous beam training before the target beam training, the previous hidden state of the previous beam training, and the plurality of feature information, where the memory data of the previous beam training is used to indicate a dependency relationship among the plurality of sets of received signals of the plurality of beam trainings before the target beam training;
further, the input module 702 is further configured to obtain the hidden state of the predicted moment through a differential equation solver based on a fully connected network.
In some embodiments, the obtaining module 701 is further configured to model the process of selecting the best beam from the plurality of candidate beams to obtain a classification function, and to fit the classification function through a neural network to obtain a derivative function.
In some embodiments, the obtaining module 701 is further configured to expand the derivative function at a preset point by using a Taylor formula, so as to obtain a first polynomial corresponding to the derivative function at the preset point;
further, the input module 702 is further configured to obtain a second polynomial corresponding to the neighboring point according to the distance between the preset point and the neighboring point of the preset point and the first polynomial;
further, the input module 702 is further configured to obtain a hidden state at the predicted moment according to the target hidden state and the second polynomial.
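The stepping described above amounts, in spirit, to numerically integrating the learned derivative function with a truncated Taylor expansion. The following minimal sketch assumes a first-order truncation (which reduces to Euler stepping) and a small fully connected network as the derivative function; higher-order Taylor terms would simply add further correction terms, and all names below are illustrative.

```python
import torch
import torch.nn as nn

# Assumed fully connected network acting as the learned derivative function dh/dt = f(h).
derivative_fn = nn.Sequential(
    nn.Linear(128, 128),
    nn.Tanh(),
    nn.Linear(128, 128),
)

def evolve_hidden_state(h_target, tau, steps=8):
    """Advance the target hidden state to the prediction time tau by repeated
    first-order Taylor (Euler) steps: h(t + dt) ≈ h(t) + dt * f(h(t))."""
    h = h_target
    dt = tau / steps                     # distance between the preset point and its adjacent point
    for _ in range(steps):
        h = h + dt * derivative_fn(h)    # polynomial evaluated at the adjacent point, first order only
    return h

# Usage: h_pred = evolve_hidden_state(torch.randn(1, 128), tau=0.5)
```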
In some embodiments, the obtaining module 701 is further configured to obtain a probability vector of the plurality of candidate beams corresponding to the hidden state at the predicted time, where each probability component in the probability vector is used to characterize the probability that the corresponding candidate beam is the predicted beam corresponding to the predicted time;
further, the input module 702 is further configured to compare the probability components in the probability vector and determine the largest probability component, where the target candidate beam corresponding to the largest probability component is the predicted beam corresponding to the predicted time.
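For concreteness, the selection of the predicted beam from the probability vector can be sketched as follows, assuming the prediction module outputs unnormalized scores that are converted to probabilities with a softmax; the variable names are illustrative.

```python
import torch

def select_predicted_beam(scores):
    """scores: [num_beams] unnormalized outputs of the prediction module."""
    probs = torch.softmax(scores, dim=-1)        # probability that each candidate beam is optimal
    best_index = torch.argmax(probs).item()      # largest probability component
    return best_index, probs[best_index].item()

# Usage: idx, p = select_predicted_beam(torch.randn(64))
```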
In some embodiments, the input module 702 is further configured to update the memory unit according to the hidden state of the predicted time and the plurality of feature information;
further, the input module 702 is further configured to update the hidden state storage unit according to the hidden state of the predicted time.
In some embodiments, the input module 702 is further configured to determine whether the difference between the predicted time and the time of the target beam training is less than the time interval between two beam trainings;
further, the input module 702 is further configured to input the set of received signals and the predicted time to the prediction model when the difference between the predicted time and the time of the target beam training is smaller than the time interval between two beam trainings, so as to obtain the predicted beam corresponding to the predicted time.
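A minimal sketch of this guard, assuming a known beam training period and the prediction routine from the earlier sketches (the function names are placeholders), is:

```python
def maybe_predict(prediction_time, last_training_time, training_interval, predict_fn):
    """Only predict when the prediction time falls before the next scheduled beam training."""
    if prediction_time - last_training_time < training_interval:
        return predict_fn(prediction_time)    # e.g. the predict_beam wrapper sketched earlier
    return None                               # otherwise wait for the next beam training
```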
The description of the apparatus embodiments above is similar to that of the method embodiments above, with advantageous effects similar to those of the method embodiments. For technical details not disclosed in the apparatus embodiments of the present application, please refer to the description of the method embodiments of the present application.
It should be noted that, in the embodiments of the present application, the division of the beam tracking apparatus shown in fig. 7 into modules is schematic and is merely a division by logical function; other division manners may be adopted in actual implementation. In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in the form of hardware, in the form of software functional units, or in a combination of software and hardware.
It should be noted that, in the embodiments of the present application, if the above method is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer readable storage medium. Based on such an understanding, the part of the technical solutions of the embodiments of the present application that in essence contributes to the related art may be embodied in the form of a computer software product, which is stored in a storage medium and includes several instructions for causing an electronic device to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, an optical disc, or other media capable of storing program code. Thus, the embodiments of the present application are not limited to any specific combination of hardware and software.
The embodiment of the application provides a computer device, which may be a server, and whose internal structure diagram may be as shown in fig. 8. The computer device includes a processor, a memory, and a network interface connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements the above-mentioned method.
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the method provided in the above embodiment.
The present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the method provided by the method embodiments described above.
It will be appreciated by those skilled in the art that the structure shown in fig. 8 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, the beam tracking apparatus provided herein may be implemented in the form of a computer program that is executable on a computer device as shown in fig. 8. The memory of the computer device may store the various program modules that make up the apparatus. The computer program of each program module causes a processor to perform the steps in the methods of each embodiment of the present application described in the present specification.
It should be noted here that the description of the storage medium and apparatus embodiments above is similar to that of the method embodiments described above, with benefits similar to those of the method embodiments. For technical details not disclosed in the storage medium and device embodiments of the present application, please refer to the description of the method embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" or "some embodiments" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" or "in some embodiments" in various places throughout this specification are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application. The foregoing embodiment numbers of the present application are merely for describing, and do not represent advantages or disadvantages of the embodiments. The foregoing description of various embodiments is intended to highlight differences between the various embodiments, which may be the same or similar to each other by reference, and is not repeated herein for the sake of brevity.
The term "and/or" herein merely describes an association relation between associated objects, indicating that three relations may exist; for example, "object A and/or object B" may represent three cases: object A alone, both object A and object B, or object B alone.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments are merely illustrative, and the division of the modules is merely a logical function division; other divisions may be adopted in practice, for example: multiple modules or components may be combined, or may be integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or modules may be electrical, mechanical, or in other forms.
The modules described above as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules; they may be located in one place or distributed over a plurality of network units; some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated in one processing unit, or each module may be separately used as one unit, or two or more modules may be integrated in one unit; the integrated modules may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by a program instructing relevant hardware; the foregoing program may be stored in a computer readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes a removable storage device, a Read Only Memory (ROM), a magnetic disk, an optical disc, or other media capable of storing program code.
Alternatively, the integrated units described above may be stored in a computer readable storage medium if they are implemented in the form of software functional modules and sold or used as an independent product. Based on such an understanding, the part of the technical solutions of the embodiments of the present application that in essence contributes to the related art may be embodied in the form of a computer software product, which is stored in a storage medium and includes several instructions for causing an electronic device to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disc.
The methods disclosed in the several method embodiments provided in the present application may be arbitrarily combined without conflict to obtain a new method embodiment.
The features disclosed in the several product embodiments provided in the present application may be combined arbitrarily without conflict to obtain new product embodiments.
The features disclosed in the several method or apparatus embodiments provided in the present application may be arbitrarily combined without conflict to obtain new method embodiments or apparatus embodiments.
The foregoing is merely an embodiment of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present application, and such changes and substitutions shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (11)
1. A method of beam tracking, the method comprising:
acquiring a group of received signals of target beam training before a prediction time, wherein the group of received signals comprises received signals corresponding one-to-one to a plurality of candidate beams in a preset codebook, and the target beam training is the most recent beam training before the prediction time;
and inputting the group of received signals and the prediction time to a prediction model to obtain a prediction beam corresponding to the prediction time, wherein the prediction beam is one of the plurality of candidate beams, the prediction model is obtained by learning signal characteristics of sample received signals and hidden states corresponding to the signal characteristics, and the sample received signals are received signals corresponding to beam training before the prediction time.
2. The method of claim 1, wherein the prediction model comprises a preprocessing module, a hidden state acquisition module, and a prediction module, and the inputting the group of received signals and the prediction time to the prediction model to obtain a predicted beam corresponding to the prediction time comprises:
extracting, through the preprocessing module, feature information corresponding to each received signal to acquire a plurality of feature information, and inputting the plurality of feature information into the hidden state acquisition module;
processing the plurality of feature information through the hidden state acquisition module to acquire a hidden state of the prediction time;
and calculating the hidden state of the prediction time through the prediction module to determine the predicted beam corresponding to the prediction time.
3. The method according to claim 2, wherein the hidden state acquisition module comprises a long short-term memory network and a differential equation solver based on a fully connected network, and the processing the plurality of feature information through the hidden state acquisition module to acquire the hidden state of the prediction time comprises:
acquiring, in the long short-term memory network, a target hidden state corresponding to the time of the target beam training through the memory data of the last beam training before the target beam training, the last hidden state of the last beam training, and the plurality of feature information, wherein the memory data of the last beam training is used for indicating the dependency relationship among a plurality of groups of received signals of a plurality of beam trainings before the target beam training;
and obtaining the hidden state of the prediction time through the differential equation solver based on the fully connected network.
4. The method according to claim 3, wherein a derivative function is obtained by fitting a classification function through a neural network, and the classification function is obtained by modeling the process of selecting the best beam from the plurality of candidate beams.
5. The method according to claim 3, wherein the obtaining the hidden state of the prediction time through the differential equation solver based on the fully connected network comprises:
expanding the derivative function at a preset point through a Taylor formula to obtain a first polynomial corresponding to the derivative function at the preset point;
obtaining a second polynomial corresponding to an adjacent point of the preset point according to the distance between the preset point and the adjacent point and the first polynomial;
and obtaining the hidden state of the prediction time according to the target hidden state and the second polynomial.
6. The method according to claim 2, wherein the calculating the hidden state of the prediction time through the prediction module to determine the predicted beam corresponding to the prediction time comprises:
acquiring a probability vector of the plurality of candidate beams corresponding to the hidden state of the prediction time, wherein each probability component in the probability vector is used for representing the probability that the corresponding candidate beam is the predicted beam corresponding to the prediction time;
and comparing the probability components in the probability vector to determine the maximum probability component, wherein the target candidate beam corresponding to the maximum probability component is the predicted beam corresponding to the prediction time.
7. The method of claim 3, wherein the hidden state acquisition module further comprises a memory unit and a hidden state storage unit, the memory unit is configured to record memory data of beam training, the hidden state storage unit is configured to record a hidden state corresponding to the beam training, and after the acquiring the hidden state of the prediction time, the method further comprises:
updating the memory unit according to the hidden state of the prediction time and the plurality of feature information;
and updating the hidden state storage unit according to the hidden state of the prediction time.
8. The method of claim 6, wherein before the inputting the group of received signals and the prediction time to the prediction model to obtain the predicted beam corresponding to the prediction time, the method further comprises:
judging whether the difference between the prediction time and the time of the target beam training is smaller than the time interval between two beam trainings;
and when the difference between the prediction time and the time of the target beam training is smaller than the time interval between two beam trainings, inputting the group of received signals and the prediction time to the prediction model to obtain the predicted beam corresponding to the prediction time.
9. A beam tracking apparatus, comprising:
the acquisition module is used for acquiring a group of received signals of target beam training before the prediction time, wherein the group of received signals comprises received signals corresponding one-to-one to a plurality of candidate beams in a preset codebook, and the target beam training is the most recent beam training before the prediction time;
the input module is configured to input the set of received signals and the prediction time to a prediction model to obtain a predicted beam corresponding to the prediction time, where the predicted beam is one of the plurality of candidate beams, the prediction model is obtained by learning signal features of a sample received signal and a hidden state corresponding to the signal features, and the sample received signal is a received signal corresponding to beam training before the prediction time.
10. A computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, characterized in that the processor implements the steps of the method of any of claims 1 to 8 when the program is executed.
11. A computer readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the method according to any one of claims 1 to 8.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311452397.3A CN117560046A (en) | 2023-11-02 | 2023-11-02 | Beam tracking method and device, equipment and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311452397.3A CN117560046A (en) | 2023-11-02 | 2023-11-02 | Beam tracking method and device, equipment and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN117560046A true CN117560046A (en) | 2024-02-13 |
Family
ID=89813814
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311452397.3A Pending CN117560046A (en) | 2023-11-02 | 2023-11-02 | Beam tracking method and device, equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN117560046A (en) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120389773A (en) * | 2025-06-05 | 2025-07-29 | 北京邮电大学 | Beam prediction method, device, equipment and medium based on multimodal large model |
| CN120389773B (en) * | 2025-06-05 | 2025-10-10 | 北京邮电大学 | Beam prediction method, device, equipment and medium based on multi-mode large model |
| CN120343715A (en) * | 2025-06-18 | 2025-07-18 | 中国电信股份有限公司 | Optimal switching beam pair search method and device |
| CN120343715B (en) * | 2025-06-18 | 2025-11-18 | 中国电信股份有限公司 | Optimal switching beam pair searching method and device |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Zhou et al. | Deep-learning-based spatial–temporal channel prediction for smart high-speed railway communication networks | |
| Hsieh et al. | Deep learning-based indoor localization using received signal strength and channel state information | |
| CN119357642B (en) | Embodied Agent Time Series Data Modeling Method Based on Frequency Domain Learning | |
| Khatab et al. | A fingerprint technique for indoor localization using autoencoder based semi-supervised deep extreme learning machine | |
| Pan et al. | Deep stacked autoencoder-based long-term spectrum prediction using real-world data | |
| CN117560046A (en) | Beam tracking method and device, equipment and storage medium | |
| Zhao et al. | DeepCount: Crowd counting with Wi-Fi using deep learning | |
| Liu et al. | Large-scale deep learning framework on FPGA for fingerprint-based indoor localization | |
| Elmezughi et al. | Path loss modeling based on neural networks and ensemble method for future wireless networks | |
| Yoo et al. | Distributed estimation using online semi‐supervised particle filter for mobile sensor networks | |
| Yu et al. | Fingerprint extraction and classification of wireless channels based on deep convolutional neural networks | |
| Chen et al. | ACT‐GAN: Radio map construction based on generative adversarial networks with ACT blocks | |
| Guo et al. | DSIL: An effective spectrum prediction framework against spectrum concept drift | |
| Wang et al. | Deep learning models for spectrum prediction: A review | |
| Radhakrishnan et al. | Performance analysis of long short-term memory-based Markovian spectrum prediction | |
| Sun et al. | Enabling lightweight device-free wireless sensing with network pruning and quantization | |
| Guangliang et al. | Multi-channel multi-step spectrum prediction using transformer and stacked Bi-LSTM | |
| Wandale et al. | Simulated annealing assisted sparse array selection utilizing deep learning | |
| CN116017280B (en) | A fast indoor path tracking method without carrying equipment | |
| Guo et al. | High-precision reconstruction method based on MTS-GAN for electromagnetic environment data in SAGIoT | |
| CN116597656A (en) | Method, equipment and medium for predicting road traffic flow based on big data analysis | |
| Peng et al. | Integration of attention mechanism and CNN-BiGRU for TDOA/FDOA collaborative mobile underwater multi-scene localization algorithm | |
| Trentin et al. | Unsupervised nonparametric density estimation: A neural network approach | |
| CN119442824A (en) | Antenna design method and device | |
| CN116168249A (en) | Search method and device, chip, computer-readable storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||