
CN111476058A - A Gesture Recognition Method Based on Millimeter Wave Radar - Google Patents


Info

Publication number
CN111476058A
Authority
CN
China
Prior art keywords
gesture
layer
wave radar
trajectory
recognized
Prior art date
Legal status
Granted
Application number
CN201910063997.8A
Other languages
Chinese (zh)
Other versions
CN111476058B (en)
Inventor
吴永乐
郑洪涛
黎淑兰
王卫民
刘元安
Current Assignee
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications
Priority to CN201910063997.8A
Publication of CN111476058A
Application granted
Publication of CN111476058B
Active legal status (current)
Anticipated expiration legal status

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a gesture recognition method based on millimeter wave radar. The method includes: constructing a convolutional neural network model; acquiring trajectory graphs of various gestures as a training set F, and training the convolutional neural network model on the training set F to obtain an optimized recognition model, where a gesture trajectory graph is the movement trajectory, in the range-Doppler coordinate system, of the moving target corresponding to the maximum peak; and inputting the trajectory graph of the gesture to be recognized into the optimized recognition model to recognize its gesture type. In the gesture recognition method provided by the embodiments of the present invention, a convolutional neural network model is trained on the trajectory graphs of various gestures to obtain an optimized recognition model, and the trajectory graph of the gesture to be recognized is input into the optimized recognition model, so that the gesture type can be obtained quickly and accurately. The gesture recognition method is relatively simple, requires little data processing, and involves simple computation.

Description

A Gesture Recognition Method Based on Millimeter Wave Radar

Technical Field

The invention relates to the technical field of gesture recognition, and in particular to a gesture recognition method based on millimeter wave radar.

Background Art

Gesture recognition is generally performed on information collected by a camera in order to classify and recognize different gestures. Gesture recognition has a wide range of applications, such as remotely operating switches, controlling small electronic devices, and automatic sign language translation, which can greatly improve the convenience of daily life. However, camera-based gesture recognition has the following disadvantages:

(1) A camera is easily affected by lighting, which degrades gesture recognition; typically, when the light intensity is halved, the accuracy of gesture recognition drops by about one third.

(2) A camera is also affected by the detection distance: when the distance is large, recognition performance deteriorates. Moreover, the farther the person is from the sensor, the higher the camera resolution that is usually required, which leads to excessive data volume and high cost.

(3) Camera-based gesture recognition algorithms are relatively complex and involve heavy data processing, which results in high power consumption and high demands on computing resources, making them difficult to integrate into small devices.

(4) Once a camera is connected to a network, it can easily be attacked by malicious parties, leading to privacy leakage.

Summary of the Invention

The purpose of the present invention is to provide a gesture recognition method based on millimeter wave radar. In this gesture recognition method, a convolutional neural network model is trained on the trajectory graphs of various gestures to obtain an optimized recognition model, and the trajectory graph of the gesture to be recognized is input into the optimized recognition model, so that the gesture type can be obtained quickly and accurately. The gesture recognition method is relatively simple, requires little data processing, and involves simple computation.

To solve the above problems, a first aspect of the present invention provides a gesture recognition method based on millimeter wave radar. The method includes: constructing a convolutional neural network model; acquiring trajectory graphs of various gestures as a training set F, and training the convolutional neural network model on the training set F to obtain an optimized recognition model; and inputting the trajectory graph of the gesture to be recognized into the optimized recognition model to recognize the gesture type of that gesture.

Further, the method for acquiring the trajectory graph of a gesture includes: acquiring the echo data generated by the millimeter wave radar scanning a gesture; obtaining multiple RD (range-Doppler) images from the echo data; searching the spectral peaks of each RD image and finding the maximum peak of that image, thereby obtaining multiple maximum peak points; and connecting the maximum peak points in sequence in the RD coordinate system to obtain the trajectory graph of the gesture.

Further, the structure of the convolutional neural network model includes, in order: an input layer, a first convolutional layer, a first excitation layer, a first pooling layer, a second convolutional layer, a second excitation layer, a second pooling layer, a first fully connected layer, a second fully connected layer, and an output layer.

Further, the step of recognizing the gesture type of the gesture to be recognized includes: based on the optimized recognition model, obtaining the probability that the trajectory graph of the gesture to be recognized corresponds to each type of gesture in the training set F, where the probabilities over all gesture types sum to 1; and determining the gesture type whose probability is higher than 95% as the type of the recognized gesture.

Further, the different gestures include one or more of a back-and-forth movement, a left-and-right movement, a button, and a flip.

A second aspect of the present invention further provides a gesture recognition system based on millimeter wave radar. The system includes: a modeling module, which constructs a convolutional neural network model; a training module, which acquires trajectory graphs of various gestures as a training set F and trains the convolutional neural network model on the training set F to obtain an optimized recognition model; and a gesture recognition module, which inputs the trajectory graph of the gesture to be recognized into the optimized recognition model to recognize the gesture type of that gesture.

Further, the method by which the training module acquires the trajectory graph of a gesture includes: acquiring the echo data generated by the millimeter wave radar scanning a gesture; obtaining multiple RD images from the echo data; searching the spectral peaks of each RD image and finding the maximum peak of that image, thereby obtaining multiple maximum peak points; and connecting the maximum peak points in sequence in the RD coordinate system to obtain the trajectory graph of the gesture.

Further, the structure of the convolutional neural network model includes, in order: an input layer, a first convolutional layer, a first excitation layer, a first pooling layer, a second convolutional layer, a second excitation layer, a second pooling layer, a first fully connected layer, a second fully connected layer, and an output layer.

Further, the step in which the gesture recognition module recognizes the gesture type of the gesture to be recognized includes: based on the optimized recognition model, obtaining the probability that the gesture to be recognized corresponds to each type of gesture in the training set F, where the probabilities over all gesture types sum to 1; and determining the gesture type whose probability is higher than 95% as the type of the recognized gesture.

Further, the gestures include one or more of a back-and-forth movement, a left-and-right movement, a button, and a flip.

The above technical solutions of the present invention have the following beneficial technical effects:

(1) The gesture trajectory graph acquisition method provided by the embodiments of the present invention processes the echo data generated by the millimeter wave radar scanning a gesture to obtain the trajectory graph of the gesture. Each point on the graph represents the motion information, over a period of time, of the moving target corresponding to the maximum peak; this motion information includes the distance between the moving target and the radar and the radial velocity of the moving target relative to the millimeter wave radar during that period. Using the trajectory graph of the gesture as a feature for gesture recognition greatly reduces the amount of data processing compared with the prior art, and the computation is relatively simple.

(2) In the gesture recognition method and system based on millimeter wave radar provided by the embodiments of the present invention, a convolutional neural network model is trained on the trajectory graphs of various gestures to obtain an optimized recognition model, and the trajectory graph of the gesture to be recognized is input into the optimized recognition model, so that the gesture type can be obtained quickly and accurately. The gesture recognition method is relatively simple, requires little data processing, and involves simple computation.

Brief Description of the Drawings

FIG. 1 is a schematic flowchart of a method for acquiring a gesture trajectory graph according to a first embodiment of the present invention;

FIG. 2 is an RD image of an up-and-down movement gesture according to the first embodiment of the present invention;

FIG. 3 is a schematic flowchart of a gesture recognition method based on millimeter wave radar according to a second embodiment of the present invention;

FIG. 4a is a schematic diagram of a finger button-press gesture;

FIG. 4b is the trajectory graph of the finger button-press gesture;

FIG. 5a is a schematic diagram of a palm flip movement;

FIG. 5b is the trajectory graph of the palm flip movement;

FIG. 6a is a schematic diagram of a left-and-right movement gesture;

FIG. 6b is the trajectory graph of the left-and-right movement gesture;

FIG. 7a is a schematic diagram of an up-and-down movement gesture;

FIG. 7b is the trajectory graph of the up-and-down movement gesture;

FIG. 8 is a schematic structural diagram of a gesture recognition system according to a third embodiment of the present invention.

Detailed Description of the Embodiments

To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to specific embodiments and the accompanying drawings. It should be understood that these descriptions are merely exemplary and are not intended to limit the scope of the invention. In addition, descriptions of well-known structures and techniques are omitted in the following description to avoid unnecessarily obscuring the concepts of the present invention.

FIG. 1 is a schematic flowchart of a method for acquiring the trajectory graph of a gesture according to the first embodiment of the present invention.

As shown in FIG. 1, in an optional embodiment, the method includes steps S102 to S108:

Step S102: acquire the echo data generated by the millimeter wave radar scanning a gesture. The echo data are echo baseband data.

Specifically, while the hand movement is present, the millimeter wave radar transmits a chirp signal toward the target every few milliseconds and collects the echo baseband data of the transmitted signal. For example, if the hand movement lasts 5 seconds, the millimeter wave radar collects 100 frames of data during those 5 seconds, and the data of each frame are collected by the radar over multiple acquisitions. For instance, if the millimeter wave radar acquires data every 5 milliseconds, collecting one frame of data requires 10 acquisitions.

Optionally, the millimeter wave radar is a linear frequency modulated continuous wave (LFMCW) radar; further optionally, a 77 GHz LFMCW millimeter wave radar is used. The 77 GHz LFMCW millimeter wave radar has two transmitting antennas and four receiving antennas; the transmitted signal is a linear frequency modulated continuous wave (LFMCW) with a maximum bandwidth of 4 GHz and a theoretical best range resolution of 3.75 cm, which allows relatively fine finger movements to be detected.
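The 3.75 cm figure follows from the usual FMCW range-resolution relation ΔR = c/(2B); a minimal sketch checking it (the variable names are illustrative):

```python
# Range resolution of an FMCW radar: delta_R = c / (2 * B)
C = 3.0e8      # speed of light, m/s
B_MAX = 4.0e9  # maximum transmit bandwidth stated above, Hz

delta_r = C / (2 * B_MAX)
print(f"theoretical range resolution: {delta_r * 100:.2f} cm")  # prints 3.75 cm
```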

It should be noted that an LFMCW signal is a waveform whose frequency varies periodically with time; within each period the frequency generally increases linearly with time, and this linear sweep is called a chirp. The chirp parameters (such as duration and slope) affect system performance. In the present invention the chirp parameters are designed as follows: the bandwidth B is 3.4404 GHz, the chirp period is 778 us, the number of chirps per frame is 32, the number of frames per second is 50, and each chirp is sampled at 176 points. Transmission uses dual-antenna time-division multiplexing: 20 frames of chirp signals are transmitted per second, with 64 chirps per frame, sent alternately by antennas A and B.

Step S104: obtain multiple RD images from the echo data.

Specifically, a fast Fourier transform (FFT) is applied to the echo data of each acquisition, yielding a one-dimensional matrix in the range dimension each time.

The one-dimensional matrix obtained from the first acquisition of echo data is used as the first row of a two-dimensional matrix N, the one-dimensional matrix obtained from the second acquisition is used as the second row of N, and so on, yielding the two-dimensional matrix N, which corresponds to one frame of gesture data collected by the radar.

A Fourier transform is then applied to the column vectors of the two-dimensional matrix N, giving a new two-dimensional matrix M. The matrix M represents one RD (Range-Doppler) image corresponding to one frame of data collected by the radar.

Similarly, using the above steps, multiple RD images of the same gesture are obtained. For example, if a gesture spans 100 frames of data, 100 RD images are obtained.
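The two FFT passes described above (a range FFT per acquisition forming matrix N, then a Doppler FFT over its columns forming matrix M) can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation; the 32x176 frame shape follows the chirp parameters stated earlier, and the fftshift centering is an assumed convention so that zero Doppler sits in the middle of the image.

```python
import numpy as np

def frame_to_rd_image(frame_iq: np.ndarray) -> np.ndarray:
    """Turn one frame of echo baseband data into a range-Doppler (RD) image.

    frame_iq: complex array of shape (n_chirps, n_samples), one row per acquisition.
    Returns the RD magnitude image (the matrix M in the text).
    """
    # Range FFT along each row: builds the matrix N of the text.
    range_fft = np.fft.fft(frame_iq, axis=1)
    # Doppler FFT over the column vectors of N, shifted so zero Doppler is centered.
    doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
    return np.abs(doppler_fft)

# Example with random data standing in for one 32-chirp, 176-sample frame.
frame = np.random.randn(32, 176) + 1j * np.random.randn(32, 176)
rd_image = frame_to_rd_image(frame)
```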

Step S106: search the spectral peaks of each RD image to obtain multiple maximum peak points.

Specifically, the spectral peaks of each RD image are searched to obtain the row coordinate i, the column coordinate j, and the value Mij corresponding to the hand motion; each Mij is a peak point of that RD image.

Taking i, j and Mij of each peak point Mij in an RD image as a row vector, multiple such row vectors form a new matrix O. The maximum of the third column of the matrix O is found, and the row vector containing this maximum is denoted Vmax; Vmax is the maximum peak point of that RD image. By analogy, the maximum peak point of each of the multiple RD images is obtained.
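A minimal sketch of the peak selection for one RD image; taking the global maximum directly yields the same Vmax as assembling the candidate-peak matrix O and selecting its largest third-column entry (the function name and the `rd_images` list are illustrative):

```python
import numpy as np

def max_peak_point(rd_image: np.ndarray) -> tuple:
    """Return (i, j, Mij) for the strongest cell of an RD image,
    i.e. the row vector Vmax described above."""
    i, j = np.unravel_index(np.argmax(rd_image), rd_image.shape)
    return int(i), int(j), float(rd_image[i, j])

# One Vmax per frame; rd_images is assumed to be the list of RD images of one gesture.
peaks = [max_peak_point(rd) for rd in rd_images]
```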

Step S108: in the RD coordinate system, connect the maximum peak points in sequence to obtain the trajectory graph of the hand movement. The trajectory graph of a gesture is the movement trajectory, in the range-Doppler coordinate system, of the moving target corresponding to the maximum peak.

Specifically, for each maximum row vector Vmax in turn, the first column is taken as the abscissa and the second column as the ordinate and marked with an asterisk in a two-dimensional coordinate system, while the asterisks of adjacent row vectors are connected; the resulting image is the trajectory graph of the gesture.

The trajectory graph of the gesture is a two-dimensional range-Doppler trajectory graph. Each point on the graph represents the motion information, over a period of time, of the moving target corresponding to the maximum peak; this motion information includes the distance between the moving target and the radar and the radial velocity of the moving target relative to the millimeter wave radar during that period. Using the trajectory graph of the gesture as a feature for gesture recognition greatly reduces the amount of data processing compared with the prior art, and the computation is relatively simple.
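Joining the per-frame Vmax points gives the trajectory graph; a minimal matplotlib sketch reusing the `peaks` list from the peak-search sketch above (the axis naming assumes Doppler along the RD image rows and range along its columns):

```python
import matplotlib.pyplot as plt

# peaks: list of (i, j, Mij) tuples, one per frame, in time order.
xs = [p[0] for p in peaks]  # first column of each Vmax -> abscissa (Doppler / radial velocity bin)
ys = [p[1] for p in peaks]  # second column of each Vmax -> ordinate (range bin)

plt.plot(xs, ys, marker='*', linestyle='-')  # asterisks joined in sequence
plt.xlabel('Doppler bin (radial velocity)')
plt.ylabel('Range bin (distance to radar)')
plt.title('Gesture trajectory in the RD coordinate system')
plt.savefig('gesture_trajectory.png')
```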

FIG. 2 is an RD image (range-Doppler image) of an up-and-down movement gesture according to the first embodiment of the present invention. In this figure, the alternating black-and-white cells in the middle represent the moving target; in this application, the moving target refers to the hand or a part of the hand, and the depth of the color represents the echo intensity of the moving target, with darker colors indicating stronger echoes at that position. The blackest cell in the very middle represents the maximum peak of the RD image; there is only one such peak in this figure. This peak corresponds to the position where the echo of the moving target is strongest; the abscissa indicates the radial velocity and the ordinate indicates the distance from the radar.

FIG. 3 is a schematic flowchart of a gesture recognition method according to the second embodiment of the present invention.

As shown in FIG. 3, the method includes steps S201 to S203:

Step S201: construct a convolutional neural network model.

Specifically, the structure of the convolutional neural network model includes, in order: an input layer, a first convolutional layer, a first excitation layer, a first pooling layer, a second convolutional layer, a second excitation layer, a second pooling layer, a first fully connected layer, a second fully connected layer, and an output layer.

More specifically, before the neural network is built, the learning rate of the convolutional neural network is configured and the weights of each layer are initialized, so as to adjust the training speed and recognition performance of the network. For example, the initial parameters of the convolutional neural network are: learning_rate (learning rate) 0.001, beta1 (exponential decay rate of the first-moment estimate) 0.9, beta2 (exponential decay rate of the second-moment estimate) 0.999, and epsilon (a very small number used to prevent division by zero in the implementation) 10E-8. The weights and biases are randomly initialized.
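These parameter names (learning_rate, beta1, beta2, epsilon) are the hyperparameters of the Adam optimizer, which the system embodiment below names explicitly. A minimal PyTorch sketch of that configuration; `model` stands for the network sketched after the training steps, and the epsilon written as 10E-8 in the text is read here as Adam's usual 1e-8:

```python
import torch

optimizer = torch.optim.Adam(
    model.parameters(),  # `model` is the CNN sketched below; weights and biases start from random init
    lr=0.001,            # learning_rate
    betas=(0.9, 0.999),  # beta1, beta2
    eps=1e-8,            # epsilon (written "10E-8" in the text)
)
```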

Step S202: acquire trajectory graphs of various gestures as a training set F, and train the convolutional neural network model on the training set F to obtain an optimized recognition model. Each gesture trajectory graph in the training set F is labeled with its corresponding gesture; that is, the gestures in the training set F are all known.

Optionally, training the convolutional neural network model includes the following steps:

Step 1: since the input layer uses only grayscale information, each gesture sample in the training data set L is set as a 90x90x1 matrix.

Step 2: the input layer data are filtered successively through the first convolutional layer, the first excitation layer and the first pooling layer. The first convolutional layer uses 32 filters with 1 input channel, a 3x3 convolution kernel, a stride of 1, and 'same' padding (i.e., the input and output sizes are equal); the first excitation layer uses the ReLU function, so the output after the first convolution and excitation layers is a 90x90x32 matrix. The first pooling layer uses max pooling with a 2x2 filter and a stride of 2, and its output is 45x45x32.

Step 3: the output of the first pooling layer is filtered successively through the second convolutional layer and the second pooling layer. The second convolutional layer is similar to the first, except that the number of input channels is 32; its excitation function is also ReLU, and its output is a 45x45x32 matrix. The second pooling layer has parameters similar to those of the first pooling layer, and its output is 23x23x32.

Step 4: the output of the second pooling layer is flattened into a one-dimensional vector and then passed successively through the first fully connected layer and the second fully connected layer, giving a 4x1 output matrix.

Step 5: the error between the output value obtained from the layers in steps 1-4 and the target value is computed; when the error is larger than a preset expected value, the error is propagated back through the network, that is, the weights are updated according to the error between the obtained output value and the target value.

The trajectory graphs of the multiple types of gestures in the training set F are input into the convolutional neural network model and trained, finally yielding an optimized recognition model with accurate parameters.
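A minimal PyTorch sketch of the network laid out in steps 1 to 4; it is a sketch under the stated dimensions, not the patented implementation. The width of the first fully connected layer is not given in the text, so the value 128 below is an assumption, and ceil-mode pooling is used so that the 45x45 feature map pools to the 23x23 size stated above.

```python
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    """Sketch of the two-convolution, two-fully-connected network described above."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=1),   # 90x90x1 -> 90x90x32 ('same' padding)
            nn.ReLU(),                                              # first excitation layer
            nn.MaxPool2d(kernel_size=2, stride=2),                  # -> 45x45x32
            nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1),  # -> 45x45x32
            nn.ReLU(),                                              # second excitation layer
            nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True),  # -> 23x23x32 (ceil mode matches the text)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                  # 23x23x32 -> one-dimensional vector
            nn.Linear(23 * 23 * 32, 128),  # first fully connected layer (128 is an assumed width)
            nn.Linear(128, num_classes),   # second fully connected layer -> 4x1 output
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = GestureCNN()
out = model(torch.randn(1, 1, 90, 90))  # one 90x90x1 grayscale trajectory graph -> shape (1, 4)
```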

Step S203: input the trajectory graph of the gesture to be recognized into the optimized recognition model to recognize the gesture type of that gesture.

Specifically, based on the optimized recognition model, the probability that the gesture to be recognized corresponds to each gesture type in the training set F is obtained, where the probabilities over all gesture types sum to 1; the gesture type whose probability is higher than 95% is determined to be the type of the recognized gesture. For example, if there are 4 types of gestures in the training set F, the optimized recognition model computes the probability a% that the recognized gesture is gesture A, b% that it is gesture B, c% that it is gesture C, and d% that it is gesture D, where a% + b% + c% + d% = 1. Finally, the optimized recognition model outputs the gesture type whose probability is higher than 95%.
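A minimal inference sketch of this thresholding step, assuming the PyTorch model sketched above and a softmax on its 4x1 output so the class probabilities sum to 1 (the label strings are illustrative; the fallback output for the case where no class exceeds 95% follows the system embodiment below):

```python
import torch
import torch.nn.functional as F

GESTURE_NAMES = ["up-down", "left-right", "palm flip", "finger button"]  # illustrative labels

def recognize(model: torch.nn.Module, trajectory: torch.Tensor, threshold: float = 0.95) -> str:
    """trajectory: a 1x1x90x90 grayscale trajectory graph of the gesture to be recognized."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(trajectory), dim=1).squeeze(0)  # probabilities over the 4 types, summing to 1
    best = int(torch.argmax(probs))
    if probs[best].item() > threshold:
        return GESTURE_NAMES[best]
    return "the gesture cannot be recognized"  # no type exceeds the 95% threshold

print(recognize(model, torch.randn(1, 1, 90, 90)))
```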

Optionally, the trajectory graphs of the multiple types of gestures in the training set F include four gesture actions: up-and-down movement, left-and-right movement, palm flip movement, and a finger button-press action.

Schematic diagrams of the four gestures and the trajectory graphs obtained according to the first embodiment are given below by way of illustration. FIG. 4a is a schematic diagram of the finger button-press gesture, and FIG. 4b is the trajectory graph of the button-press. FIG. 5a is a schematic diagram of the palm flip movement, and FIG. 5b is the trajectory graph of the palm flip movement. FIG. 6a is a schematic diagram of the left-and-right movement gesture, and FIG. 6b is the trajectory graph of the left-and-right movement gesture. FIG. 7a is a schematic diagram of the up-and-down movement gesture, and FIG. 7b is the trajectory graph of the up-and-down movement gesture.

In the second embodiment of the present invention, the above four gestures were used. Training used 132 gesture trajectory graphs (33 per gesture) and testing used 28 gesture trajectory graphs (7 per gesture); the gesture types corresponding to the RD images of all tested gestures were recognized correctly. It can thus be seen that the method provided by the second embodiment of the present invention has a high recognition rate.

The embodiments of the present invention provide a gesture recognition method and system based on millimeter wave radar. In the gesture recognition method, a convolutional neural network model is trained on the trajectory graphs of various gestures to obtain an optimized recognition model, and the trajectory graph of the gesture to be recognized is input into the optimized recognition model, so that the gesture type can be obtained quickly and accurately. The gesture recognition method is relatively simple, requires little data processing, and involves simple computation.

FIG. 8 is a schematic structural diagram of a gesture recognition system according to the third embodiment of the present invention.

As shown in FIG. 8, the system includes a modeling module, a training module and a gesture recognition module.

The modeling module is used to construct the convolutional neural network model. The structure of the convolutional neural network model includes, in order: an input layer, a first convolutional layer, a first excitation layer, a first pooling layer, a second convolutional layer, a second excitation layer, a second pooling layer, a first fully connected layer, a second fully connected layer, and an output layer.

The training module acquires trajectory graphs of various gestures as the training set F and trains the convolutional neural network model on the training set F to obtain the optimized recognition model. Optionally, the training process of the convolutional neural network model may use the Adam algorithm to estimate the weights and biases of each layer from a training set whose classification results are known; as more training data are fed in, the weights and biases become increasingly accurate.

Optionally, training the convolutional neural network model includes the following steps:

Step 1: since the input layer uses only grayscale information, each gesture sample in the training data set L is set as a 90x90x1 matrix.

Step 2: the input layer data are filtered successively through the first convolutional layer, the first excitation layer and the first pooling layer. The first convolutional layer uses 32 filters with 1 input channel, a 3x3 convolution kernel, a stride of 1, and 'same' padding (i.e., the input and output sizes are equal); the first excitation layer uses the ReLU function, so the output after the first convolution and excitation layers is a 90x90x32 matrix. The first pooling layer uses max pooling with a 2x2 filter and a stride of 2, and its output is 45x45x32.

Step 3: the output of the first pooling layer is filtered successively through the second convolutional layer and the second pooling layer. The second convolutional layer is similar to the first, except that the number of input channels is 32; its excitation function is also ReLU, and its output is a 45x45x32 matrix. The second pooling layer has parameters similar to those of the first pooling layer, and its output is 23x23x32.

Step 4: the output of the second pooling layer is flattened into a one-dimensional vector and then passed successively through the first fully connected layer and the second fully connected layer, giving a 4x1 output matrix.

Step 5: the error between the output value obtained from the layers in steps 1-4 and the target value is computed; when the error is larger than a preset expected value, the error is propagated back through the network, that is, the weights are updated according to the error between the obtained output value and the target value.

The trajectory graphs of the multiple types of gestures in the training set F are input into the convolutional neural network model and trained, finally yielding an optimized recognition model with accurate parameters.

The gesture recognition module inputs the trajectory graph of the gesture to be recognized into the optimized recognition model to recognize the gesture type of that gesture.

Specifically, the step of recognizing the gesture type of the gesture to be recognized includes: based on the optimized recognition model, obtaining the probability that the gesture to be recognized corresponds to each type of gesture in the training set F, where the probabilities over all gesture types sum to 1; and determining the gesture type whose probability is higher than 95% as the type of the recognized gesture.

Optionally, when none of the gesture probabilities computed by the convolutional neural network model is higher than 95%, the model outputs "the gesture cannot be recognized".

Optionally, the trajectory graphs of the multiple types of gestures in the training set F include four gesture actions: up-and-down movement, left-and-right movement, palm flip movement, and a finger button-press action.

The embodiments of the present invention provide a gesture recognition method and system based on millimeter wave radar. In the gesture recognition method, a convolutional neural network model is trained on the trajectory graphs of various gestures to obtain an optimized recognition model, and the trajectory graph of the gesture to be recognized is input into the optimized recognition model, so that the gesture type can be obtained quickly and accurately. The gesture recognition method is relatively simple, requires little data processing, and involves simple computation.

It should be understood that the above specific embodiments of the present invention are merely intended to illustrate or explain the principles of the present invention and do not limit the present invention. Therefore, any modifications, equivalent replacements, improvements and the like made without departing from the spirit and scope of the present invention shall fall within the protection scope of the present invention. Furthermore, the appended claims of the present invention are intended to cover all changes and modifications that fall within the scope and boundaries of the appended claims, or the equivalents of such scope and boundaries.

Claims (5)

1. A gesture recognition method based on millimeter wave radar is characterized by comprising the following steps:
constructing a convolutional neural network model;
acquiring a trajectory graph of various gestures as a training set F, and training the convolutional neural network model based on the training set F to obtain an optimized recognition model; the gesture track graph is a moving track of a moving target corresponding to the maximum peak value in a range-Doppler coordinate system;
inputting a trajectory diagram of a recognized gesture into the optimized recognition model to recognize a gesture type of the recognized gesture.
2. The millimeter wave radar-based gesture recognition method according to claim 1, wherein the gesture trajectory graph acquisition method comprises:
acquiring echo data generated by a millimeter wave radar for scanning a gesture;
obtaining a plurality of RD images based on the echo data;
respectively searching the spectral peak of each RD image and solving the maximum peak value of the image so as to obtain a plurality of maximum peak value points;
and sequentially connecting the maximum peak points in an RD coordinate system to obtain a track graph of the gesture.
3. The millimeter wave radar-based gesture recognition method according to claim 1, wherein the structure of the convolutional neural network model sequentially comprises: an input layer, a first convolution layer, a first excitation layer, a first pooling layer, a second convolution layer, a second excitation layer, a second pooling layer, a first full-connection layer, a second full-connection layer and an output layer.
4. The millimeter wave radar-based gesture recognition method according to claim 1, wherein the step of recognizing the gesture type of the recognized gesture comprises:
based on the optimized recognition model, respectively obtaining probabilities that the recognized gesture is all gesture types in the training set F, wherein the sum of the gesture probabilities of all types is 1;
determining a gesture with a probability higher than 95% as the type of the recognized gesture.
5. The millimeter wave radar-based gesture recognition method according to any one of claims 1 to 4, wherein the gesture includes one or more of a back-and-forth movement, a left-and-right movement, a button, and a flip.
CN201910063997.8A 2019-01-23 2019-01-23 Gesture recognition method based on millimeter wave radar Active CN111476058B (en)

Priority Applications (1)

Application number: CN201910063997.8A (granted as CN111476058B)
Priority date: 2019-01-23
Filing date: 2019-01-23
Title: Gesture recognition method based on millimeter wave radar

Applications Claiming Priority (1)

Application number: CN201910063997.8A (granted as CN111476058B)
Priority date: 2019-01-23
Filing date: 2019-01-23
Title: Gesture recognition method based on millimeter wave radar

Publications (2)

Publication Number / Publication Date
CN111476058A / 2020-07-31
CN111476058B / 2024-05-14

Family

ID=71743314

Family Applications (1)

Application number: CN201910063997.8A (status: Active; granted as CN111476058B)
Title: Gesture recognition method based on millimeter wave radar

Country Status (1)

Country Link
CN (1) CN111476058B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112363156A (en) * 2020-11-12 2021-02-12 苏州矽典微智能科技有限公司 Air gesture recognition method and device and intelligent equipment
CN112415510A (en) * 2020-11-05 2021-02-26 深圳大学 Double-station radar gesture recognition method, device and system and storage medium
CN113267773A (en) * 2021-04-14 2021-08-17 北京航空航天大学 Millimeter wave radar-based accurate detection and accurate positioning method for indoor personnel
TWI756122B (en) * 2021-04-30 2022-02-21 開酷科技股份有限公司 Distance Doppler Radar Angle Sensing Method and Device
US20220082684A1 (en) * 2021-09-07 2022-03-17 Hangzhou Innovation Research Institute of Beijing University of Aeronautics and Astronautics Millimeter wave radar gesture recognition method and device based on trajectory judgment
CN114236492A (en) * 2022-02-23 2022-03-25 南京一淳科技有限公司 Millimeter wave radar micro gesture recognition method
US11474232B2 (en) 2021-03-19 2022-10-18 KaiKuTek Inc. Range doppler angle detection method and range doppler angle detection device
WO2023029390A1 (en) * 2021-09-01 2023-03-09 东南大学 Millimeter wave radar-based gesture detection and recognition method
US12223114B2 (en) 2021-06-24 2025-02-11 Beijing Boe Technology Development Co., Ltd. Interactive control apparatus and interactive system
CN120234618A (en) * 2025-05-29 2025-07-01 大连海事大学 A few-sample wireless gesture recognition method based on meta-action

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740823A (en) * 2016-02-01 2016-07-06 北京高科中天技术股份有限公司 Dynamic gesture trace recognition method based on depth convolution neural network
CN107024685A (en) * 2017-04-10 2017-08-08 北京航空航天大学 A kind of gesture identification method based on apart from velocity characteristic
CN108334814A (en) * 2018-01-11 2018-07-27 浙江工业大学 A kind of AR system gesture identification methods based on convolutional neural networks combination user's habituation behavioural analysis
CN109188414A (en) * 2018-09-12 2019-01-11 北京工业大学 A kind of gesture motion detection method based on millimetre-wave radar

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740823A (en) * 2016-02-01 2016-07-06 北京高科中天技术股份有限公司 Dynamic gesture trace recognition method based on depth convolution neural network
CN107024685A (en) * 2017-04-10 2017-08-08 北京航空航天大学 A kind of gesture identification method based on apart from velocity characteristic
CN108334814A (en) * 2018-01-11 2018-07-27 浙江工业大学 A kind of AR system gesture identification methods based on convolutional neural networks combination user's habituation behavioural analysis
CN109188414A (en) * 2018-09-12 2019-01-11 北京工业大学 A kind of gesture motion detection method based on millimetre-wave radar

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112415510A (en) * 2020-11-05 2021-02-26 深圳大学 Double-station radar gesture recognition method, device and system and storage medium
CN112363156A (en) * 2020-11-12 2021-02-12 苏州矽典微智能科技有限公司 Air gesture recognition method and device and intelligent equipment
US11474232B2 (en) 2021-03-19 2022-10-18 KaiKuTek Inc. Range doppler angle detection method and range doppler angle detection device
CN113267773A (en) * 2021-04-14 2021-08-17 北京航空航天大学 Millimeter wave radar-based accurate detection and accurate positioning method for indoor personnel
CN113267773B (en) * 2021-04-14 2023-02-21 北京航空航天大学 Millimeter wave radar-based accurate detection and accurate positioning method for indoor personnel
TWI756122B (en) * 2021-04-30 2022-02-21 開酷科技股份有限公司 Distance Doppler Radar Angle Sensing Method and Device
US12223114B2 (en) 2021-06-24 2025-02-11 Beijing Boe Technology Development Co., Ltd. Interactive control apparatus and interactive system
WO2023029390A1 (en) * 2021-09-01 2023-03-09 东南大学 Millimeter wave radar-based gesture detection and recognition method
US20220082684A1 (en) * 2021-09-07 2022-03-17 Hangzhou Innovation Research Institute of Beijing University of Aeronautics and Astronautics Millimeter wave radar gesture recognition method and device based on trajectory judgment
CN114236492A (en) * 2022-02-23 2022-03-25 南京一淳科技有限公司 Millimeter wave radar micro gesture recognition method
CN120234618A (en) * 2025-05-29 2025-07-01 大连海事大学 A few-sample wireless gesture recognition method based on meta-action

Also Published As

Publication number Publication date
CN111476058B (en) 2024-05-14

Similar Documents

Publication Publication Date Title
CN111476058A (en) A Gesture Recognition Method Based on Millimeter Wave Radar
CN113837131B (en) Multi-scale feature fusion gesture recognition method based on FMCW millimeter wave radar
Zhao et al. Cubelearn: End-to-end learning for human motion recognition from raw mmwave radar signals
CN109271838B (en) A three-parameter feature fusion gesture recognition method based on FMCW radar
Wang et al. TS-I3D based hand gesture recognition method with radar sensor
CN114661142B (en) A gesture recognition method and device
US9098116B2 (en) Object and movement detection
CN110765974A (en) Micro-motion gesture recognition method based on millimeter wave radar and convolutional neural network
Wu et al. Dynamic hand gesture recognition using FMCW radar sensor for driving assistance
CN107024685A (en) A kind of gesture identification method based on apart from velocity characteristic
CN111541511A (en) Communication jamming signal identification method based on target detection in complex electromagnetic environment
JP2018163096A (en) Information processing method and information processing device
CN114708663B (en) A millimeter wave radar sensing gesture recognition method
CN110262653A (en) A kind of millimeter wave sensor gesture identification method based on convolutional neural networks
WO2023029390A1 (en) Millimeter wave radar-based gesture detection and recognition method
CN115343704A (en) Hand gesture recognition method for FMCW millimeter wave radar based on multi-task learning
CN115937977B (en) Multi-dimensional feature fusion-based few-sample human body action recognition method
CN118486038A (en) Recognition method based on millimeter wave radar and camera fusion
Li et al. Hand gesture recognition using ir-uwb radar with shufflenet v2
CN114527459A (en) Multi-feature image fusion gesture recognition method based on frequency modulation continuous wave radar
CN114168058A (en) Air handwritten character recognition method and device for FMCW single millimeter wave radar
CN110275616A (en) Gesture recognition module, control method and electronic device
Li et al. Dynamic gesture recognition method based on millimeter-wave radar
Song et al. SISO radar-based human movement direction determination using micro-doppler signatures
Hao et al. UltraSonicg: Highly robust gesture recognition on ultrasonic devices

Legal Events

Code / Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant