Base station selection method based on deep reinforcement learning in LTE-V
Technical field
The present invention relates to LTE-V communication technology and deep reinforcement learning (DRL) technology, and in particular to a base station selection method based on continuous decision-making with a neural network, for reducing the LTE-V network congestion rate.
Background technique
LTE-V (Long Term Evolution-Vehicle) is a V2X technology for which China holds independent intellectual property rights. It is an intelligent transportation system (ITS) solution based on Time Division Long Term Evolution (TD-LTE) and belongs to an important application branch of the subsequent evolution of LTE. In February 2015, the standardization research work on LTE-V formally started in the 3GPP working group; the proposal of Release 14 marked the formal inclusion of LTE-V technical standard formulation in the 3GPP work plan, and compatibility and substantial performance improvements are also expected in 5G. The core part of LTE V2V was completed at the end of 2016, and the core part of LTE V2X was completed at the beginning of 2017. V2V is the core of LTE-V and was expected to be finalized by the end of 2018; systems and equipment based on the LTE-V technical standard are expected to begin commercialization after 2020.
During peak hours and on congested road sections, road safety and traffic efficiency applications generate periodic broadcast messages with a very large load. Without a reasonable congestion control scheme, the load caused by these messages will lead to serious message delay and pose a severe test to LTE network capacity. In addition, vehicles select the base station with the best channel conditions through random contention, which easily causes network congestion when the traffic flow is large. Therefore, it is necessary to design an effective and robust eNB (evolved Node B) selection algorithm for LTE-V.
Summary of the invention
The purpose of the present invention is to address the deficiencies in delay performance and network congestion of cellular communication networks that introduce the LTE-V communication technology, and to provide a base station selection method based on deep reinforcement learning in LTE-V.
The purpose of the present invention can be achieved through the following technical solutions:
A base station selection method based on deep reinforcement learning in LTE-V, comprising the following steps:
1) constructing a Q function according to the LTE-V network communication features and base station selection performance indicators;
2) a mobility management entity obtaining the status information of vehicles in the network, constructing a state matrix, and storing it in an experience replay pool;
3) using the experience replay pool as samples and based on the constructed Q function, training with a dueling-double training method to obtain a main DQN for selecting the optimal access base station;
4) processing input information with the trained main DQN and outputting the selected access base station.
Further, the LTE-V network communication features include the communication bandwidth and the signal-to-noise ratio, and the base station selection performance indicators include the user receiving rate and the base station load.
Further, the Q function is specifically constructed as follows:
In the formula, μ denotes the user receiving rate, L denotes the base station load, R denotes the reward function, α denotes the learning rate, Q(s_t, a_t) denotes the expected reward obtained by taking action a in state s at time t, s' denotes the next state entered after taking action a in state s, γ ∈ [0,1] is the discount factor, w_1 and w_2 are weight coefficients, and max_{a'} Q(s_{t+1}, a') denotes the maximum expected reward obtainable by taking different actions in state s at time t+1.
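The Q-function expression itself does not survive in the text above (it appears to have been lost with the original equation image). Based on the surrounding definitions — a learning rate α, a discount factor γ, and weight coefficients w_1 and w_2 combining the user receiving rate μ and the base station load L into the reward R — a plausible reconstruction is the standard Q-learning update below; the specific form of the reward is an assumption, not a formula copied from the original text:

\[ R_t = w_1 \mu - w_2 L \]
\[ Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \big[ R_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \big] \]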
Further, in the dueling-double training method:
a target DQN and a main DQN are established based on the Q function; the base station is selected by the main DQN, and the maximum Q-function value of that base station is calculated and generated by the target DQN.
Further, in the dueling-double training method, whether the loss function has converged is used as the criterion for judging whether training has finished; the loss function is as follows:
In the formula, r_{t+1} denotes the reward harvested at time t+1 after taking action a in state s, Q_target denotes the maximum Q-function value generated by the target DQN, Q_main denotes the maximum Q-function value generated by the main DQN, and γ ∈ [0,1] is the discount factor.
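The loss expression itself is likewise missing from the text above. Based on the stated double-DQN structure (the main DQN selects the action whose value the target DQN then supplies), a plausible reconstruction — an assumption consistent with the definitions of r_{t+1}, Q_target and Q_main, not a formula copied from the original — is the squared temporal-difference error:

\[ Loss = \big( r_{t+1} + \gamma \, Q_{target} - Q_{main} \big)^2 \]

where, in double-DQN fashion, \( Q_{target} = Q_{\theta^-}\big(s_{t+1}, \arg\max_{a'} Q_{\theta}(s_{t+1}, a')\big) \) and \( Q_{main} = Q_{\theta}(s_t, a_t) \), with θ denoting the main-DQN parameters and θ⁻ the target-DQN parameters.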
Further, in the dueling-double training method, the access base station is selected using the ε-greedy algorithm in each training step, while the network parameters are updated using the back-propagation algorithm and the adaptive moment estimation algorithm.
Further, the exploration probability of the ε-greedy algorithm is as follows:
ε_{t+1}(s) = δ × f(s, a, σ) + (1 − δ) × ε_t(s)
In the formula, δ is the total number of actions selectable in the current state, f(s, a, σ) characterizes the uncertainty of the environment, σ ∈ [0,1] indicates direction and sensitivity, and ε_{t+1}(s) denotes the probability of taking the action a generated by the DQN in state s at time t+1.
Further, in the dueling-double training method, the optimal hyperparameters are selected using the cross-validation method.
Further, the capacity of the experience replay pool is T; when the number of stored state matrices exceeds T, the earliest stored state matrix is preferentially deleted.
Compared with the prior art, the present invention jointly considers the delay performance and load-balancing performance of communication, enabling vehicles to communicate in a timely and reliable manner, and has the following advantages:
1) The present invention designs a Q function relevant to the characteristics of LTE-V communication, so as to convert the congestion control problem into an optimal decision-making problem in reinforcement learning, improving the efficiency of base station selection.
2) The present invention uses the MME (Mobility Management Entity) as the agent, designs the reward function by considering both the network congestion probability on the base station side and the receiving rate at the receiving end in the Internet of Vehicles, and models the Q (action-value) function in combination with the characteristics of vehicle communication in LTE-V, thereby proposing a base station (eNB) selection method based on deep reinforcement learning that keeps the network congestion probability below a maximum value and guarantees the load balancing of the whole network.
3) The present invention uses a dueling double deep Q network (Dueling-Double Deep Q Network) to fit the Q function modeled under the LTE-V network, and uses the reception delay and the network congestion probability as base station selection criteria, selecting for each vehicle the base station least susceptible to network congestion, thereby guaranteeing LTE-V network delay performance and load balancing and improving communication performance.
4) In the present invention, the access base station is selected using the ε-greedy algorithm in each training step, while the back-propagation algorithm and the adaptive moment estimation (Adam) algorithm are used to update the network parameters, effectively increasing the richness of the action space.
5) The present invention performs hyperparameter selection using the cross-validation method, so that a better network model can be obtained, improving the accuracy of base station selection.
Brief description of the drawings
Fig. 1 is a schematic diagram of an application scenario of the present invention;
Fig. 2 is a flow diagram of the present invention.
Specific embodiment
The present invention is described in detail below with a specific embodiment in conjunction with the accompanying drawings. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation method and a specific operation process are given, but the protection scope of the present invention is not limited to the following embodiment.
Aiming at the problem that vehicles randomly contend for network access under Long Term Evolution-Vehicle (LTE-V), which easily causes network congestion, the present invention provides a base station selection method based on deep reinforcement learning in LTE-V, which jointly considers the delay performance and load-balancing performance of communication and allows vehicles to communicate in a timely and reliable manner. The application scenario is shown in Fig. 1. The present invention uses the Mobility Management Entity (MME) in the LTE core network as the agent, considers both the network-side load and the receiving rate at the receiving end, and completes the matching of vehicles and eNBs, reducing the network congestion probability and the network delay. A dueling double deep Q network (Dueling-Double Deep Q Network, DQN) is used to fit the target action-value function, completing the conversion from high-dimensional state input to low-dimensional action output.
As shown in Fig. 2, the method includes the following steps:
Step 1: constructing the Q function according to the LTE-V network communication features and base station selection performance indicators.
The LTE-V network communication features include the communication bandwidth (Bandwidth) and the signal-to-noise ratio (SINR), and the base station selection performance indicators include the user receiving rate μ and the base station load L. The Q function is then specifically constructed as follows:
μ = Bandwidth × log2(1 + SINR)
In the formula, μ denotes the user receiving rate, L denotes the base station load, R denotes the reward function, α denotes the learning rate, Q(s_t, a_t) denotes the expected reward obtained by taking action a in state s at time t, s' denotes the next state entered after taking action a in state s, the subscript k denotes the k-th base station, γ ∈ [0,1] is the discount factor, w_1 and w_2 are weight coefficients, and max_{a'} Q(s_{t+1}, a') denotes the maximum expected reward obtainable by taking different actions in state s at time t+1.
Step 2: the MME obtains the status information of the vehicles in the network, constructs a state matrix, and stores it in the experience replay pool. The capacity of the experience replay pool is T; when the number of stored state matrices exceeds T, the earliest stored state matrix is preferentially deleted.
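A minimal sketch of such a first-in-first-out replay pool is given below; the class and method names are illustrative rather than taken from the original text.

```python
import random
from collections import deque

class ReplayPool:
    """FIFO experience replay pool with capacity T."""

    def __init__(self, capacity_T):
        # deque with maxlen automatically drops the earliest stored entry
        # once more than capacity_T transitions are held
        self.pool = deque(maxlen=capacity_T)

    def store(self, state, action, reward, next_state):
        self.pool.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # randomly draw a mini-batch of stored transitions for training
        return random.sample(list(self.pool), min(batch_size, len(self.pool)))
```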
Step 3: in each training step, randomly selecting a portion of samples from the experience replay pool D and feeding them to the DQN for learning. Using the experience replay pool as samples and based on the constructed Q function, a main DQN for selecting the optimal access base station is obtained through training with the dueling-double training method.
The present invention uses a dueling double deep Q network (Dueling-Double Deep Q Network, DQN) to fit the target action-value function, completing the conversion from high-dimensional state input to low-dimensional action output. The dueling-double training method is specifically as follows: a target DQN and a main DQN are established based on the Q function; the main DQN selects the eNB through its maximum Q-function value (Q value for short), and the Q value of that action is then obtained on the target DQN. In this way, the main network is responsible for selecting the eNB, while the Q value of the chosen eNB is generated by the target DQN.
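Under the assumption of a PyTorch-style implementation (the function and variable names are illustrative, not from the original text), the action-selection/evaluation split described above could be sketched as follows:

```python
import torch

def double_dqn_target(main_dqn, target_dqn, rewards, next_states, gamma):
    """Compute the double-DQN training target: the main DQN chooses the
    next action (the eNB), the target DQN evaluates that choice."""
    with torch.no_grad():
        # main DQN selects the eNB with the largest Q value
        next_actions = main_dqn(next_states).argmax(dim=1, keepdim=True)
        # target DQN supplies the Q value of the eNB chosen by the main DQN
        next_q = target_dqn(next_states).gather(1, next_actions).squeeze(1)
    return rewards + gamma * next_q
```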
In the dueling-double training method, whether the loss function has converged is used as the criterion for judging whether training has finished; the loss function is as follows:
In the formula, r_{t+1} denotes the reward harvested at time t+1 after taking action a in state s, Q_target denotes the Q value generated by the target DQN, Q_main denotes the Q value generated by the main DQN, and γ ∈ [0,1] is the discount factor.
In order to increase the richness of the action space, the access base station is selected using the ε-greedy algorithm in each training step, while the network parameters are updated using the back-propagation algorithm and the adaptive moment estimation algorithm.
The ε-greedy algorithm selects, in each state, the action generated by the DQN with probability ε (exploitation) and takes a random action with probability 1 − ε (exploration), with the aim of expanding the space of selectable actions. In the training process of the present invention, whether to explore is judged according to the probability ε: if exploring, a base station is selected at random; otherwise, the base station corresponding to the maximum of the Q function is selected.
The exploration probability of the ε-greedy algorithm is as follows:
ε_{t+1}(s) = δ × f(s, a, σ) + (1 − δ) × ε_t(s)
In the formula, δ is the total number of actions selectable in the current state, f(s, a, σ) characterizes the uncertainty of the environment, σ ∈ [0,1] indicates direction and sensitivity, and ε_{t+1}(s) denotes the probability of taking the action a generated by the DQN in state s at time t+1.
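A minimal sketch of this exploration scheme is given below, following the convention stated above that ε is the probability of taking the DQN-generated action; the uncertainty term f(s, a, σ) is passed in as a precomputed value, since its exact form is not specified here, and the function names are illustrative.

```python
import random

def update_epsilon(eps_t, delta, uncertainty):
    """Adaptive exploration probability:
    eps_{t+1}(s) = delta * f(s, a, sigma) + (1 - delta) * eps_t(s)."""
    return delta * uncertainty + (1.0 - delta) * eps_t

def select_base_station(q_values, eps, num_enbs):
    """With probability eps take the DQN-generated action (exploitation),
    otherwise pick a base station at random (exploration)."""
    if random.random() < eps:
        return max(range(num_enbs), key=lambda k: q_values[k])
    return random.randrange(num_enbs)
```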
The forward propagation of the neural network, i.e., the inference process, calculates the loss function Loss from the input; a deep neural network can be regarded as a multi-level nested function, and back-propagation differentiates each variable of this function using the chain rule and updates the variables using the gradients.
Adam is an adaptive learning rate optimization algorithm: it uses the exponentially weighted average of the first-order derivatives of the variables to correct the update direction and magnitude of the gradient, while the second-order moment moderates the learning rate of each update, so that variable updates slow down when the gradient changes drastically.
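A PyTorch-style sketch of one such update step is shown below; the loss form and the optimizer hyperparameters are illustrative assumptions, not values from the original text.

```python
import torch

def train_step(main_dqn, optimizer, states, actions, targets):
    """One gradient update of the main DQN: forward pass, TD loss,
    back-propagation via the chain rule, and an Adam parameter update."""
    # actions: 1-D long tensor of chosen eNB indices
    q_pred = main_dqn(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = torch.nn.functional.mse_loss(q_pred, targets)
    optimizer.zero_grad()
    loss.backward()    # back-propagation: gradients via the chain rule
    optimizer.step()   # Adam: adaptive moment estimation update
    return loss.item()

# optimizer = torch.optim.Adam(main_dqn.parameters(), lr=1e-3)  # illustrative lr
```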
In the dueling-double training method, the optimal hyperparameters are selected using the cross-validation method. Cross-validation is a hyperparameter selection algorithm, i.e., so-called parameter tuning: the training data is divided into K parts, K − 1 of them are used for training and the remaining part is used as the test set, the test is repeated K times in this way, and the average performance on the test sets is taken as the performance of the model corresponding to the current hyperparameter set. Repeating this M times, the optimal model can be obtained from the M models.
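A sketch of this K-fold selection over M candidate hyperparameter sets follows; train_model and evaluate_model are hypothetical placeholders for the actual DQN training and evaluation routines, and the data is assumed to be a list of samples.

```python
def cross_validate(hyperparam_sets, data, K, train_model, evaluate_model):
    """K-fold cross-validation over M candidate hyperparameter sets."""
    fold_size = len(data) // K
    best_params, best_score = None, float("-inf")
    for params in hyperparam_sets:              # M candidate models
        scores = []
        for k in range(K):                      # K train/test splits
            test = data[k * fold_size:(k + 1) * fold_size]
            train = data[:k * fold_size] + data[(k + 1) * fold_size:]
            model = train_model(train, params)
            scores.append(evaluate_model(model, test))
        avg = sum(scores) / K                   # average test-set performance
        if avg > best_score:
            best_params, best_score = params, avg
    return best_params
```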
Step 4: the trained main DQN processes the input information and outputs the selected access base station.
After the DQN parameters converge, only the main DQN needs to be retained in practical application, and the selected access base station is directly output according to its forward propagation.
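For illustration, inference with the retained main DQN reduces to a single forward pass followed by an argmax over base stations (a PyTorch-style sketch with illustrative names):

```python
import torch

def select_access_enb(main_dqn, state):
    """Inference only: one forward pass of the trained main DQN
    returns the index of the base station to access."""
    with torch.no_grad():
        q_values = main_dqn(state.unsqueeze(0))   # batch of one state
    return int(q_values.argmax(dim=1).item())
```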
The preferred embodiment of the present invention has been described in detail above. It should be appreciated that those skilled in the art can make many modifications and variations according to the concept of the present invention without creative work. Therefore, any technical solution that can be obtained by a person skilled in the art through logical analysis, reasoning, or limited experimentation on the basis of the prior art under the concept of the present invention shall fall within the scope of protection determined by the claims.