CN106406516A - Local real-time movement trajectory characteristic extraction and identification method for smartphone - Google Patents
- Publication number
- CN106406516A CN106406516A CN201610732089.XA CN201610732089A CN106406516A CN 106406516 A CN106406516 A CN 106406516A CN 201610732089 A CN201610732089 A CN 201610732089A CN 106406516 A CN106406516 A CN 106406516A
- Authority
- CN
- China
- Prior art keywords
- data
- feature
- feature points
- user
- smart phone
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/08—Feature extraction
- G06F2218/10—Feature extraction by analysing the shape of a waveform, e.g. extracting parameters relating to peaks
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Telephone Function (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
A local, real-time method for extracting and recognizing the motion-trajectory features of a smartphone, divided into a training stage and a recognition stage. In the training stage, the user performs different actions while carrying the smartphone and the phone's three-axis accelerometer data are collected; peak and valley feature points are extracted from the user's gesture behavior and encoded in binary; the encoded feature points are vectorized; actions with the same number of peaks and valleys are arranged into matrices; and multiple samples are collected and trained on a host computer to build a standard library of user action features. In the recognition stage, the standard library is ported to the phone; the user performs actions while carrying the phone, sensor data are collected on the phone, feature points are extracted locally on the smartphone, and the results are matched against the standard library to recognize the user's action. The feature vectors that must be stored to recognize the phone's motion trajectory are binary and small in scale, and the recognition process is simple and requires little computation, so the invention is suitable for resource-constrained embedded devices such as smartphones.
Description
Technical Field
The invention relates to the technical field of smartphone trajectory recognition, and specifically to a local, real-time method for extracting and recognizing smartphone motion-trajectory features.
Background
A smartphone's built-in accelerometer can detect the magnitude and direction of the phone's acceleration and thereby sense its state in three-dimensional space. The accelerometer reports values along the X, Y, and Z axes; when the user performs different actions while carrying the phone, different values are produced along each axis, and the data produced simultaneously on the three axes reflect the user's state. At present, feature extraction and recognition of smartphone motion trajectories is a complex process, while smartphones have weak computing power and little storage, so additional equipment is needed to perform it in a non-real-time manner. This cannot satisfy the requirement of recognizing motion trajectories locally and in real time on the smartphone itself.
Summary of the Invention:
The present invention proposes a local, real-time method for extracting and recognizing smartphone motion-trajectory features. The feature-point vectors that must be stored to recognize the phone's motion trajectory are binary and small in scale, and the recognition process is simple and requires little computation, so the method is suitable for resource-constrained embedded devices such as smartphones.
To this end, the following technical scheme is adopted:
A local, real-time method for extracting and recognizing the motion-trajectory features of a smartphone, divided into a training stage and a recognition stage. In the training stage, the user performs different actions while carrying the smartphone; the phone's three-axis accelerometer data are collected and separated by axis; peak and valley feature points are extracted from the user's gesture behavior; the peak/valley feature points are quantized by binary encoding; the encoded feature points are vectorized; actions with the same number of peaks and valleys are arranged into matrices; and multiple samples are collected and trained on a host computer to build a standard library of user action features. In the recognition stage, the standard library is ported to the smartphone; when the user performs different actions while carrying the phone, sensor data are collected on the phone, a buffer is allocated, motion-trajectory data are extracted, multiple threads extract segmented feature points, and the results are matched against the standard library, so that feature-point extraction and action recognition are performed locally and in real time on the smartphone.
Compared with the current situation, in which feature extraction and recognition of smartphone motion trajectories is a complex process and smartphones, with their weak computing power and small storage, must rely on additional equipment to perform it in a non-real-time manner, the present invention has the following advantages for recognizing smartphone motion trajectories: (1) motion trajectories can be recognized on smartphones or small embedded devices; (2) the feature-point computation is simple and largely avoids normalization of the feature points, which suits smartphones with weak computing power; (3) after the trajectory feature points are quantized and encoded, the stored feature-point vectors are binary and small in scale, which suits embedded devices with limited storage, such as smartphones.
Description of the Drawings:
Fig. 1 is a flow chart of the smartphone motion-trajectory training of the present invention;
Fig. 2 is a flow chart of the smartphone motion-trajectory recognition of the present invention;
Fig. 3 is a waveform diagram of the data produced by the smartphone sensor of the present invention;
Fig. 4 is a diagram of sensor-data separation and feature-extraction encoding of the present invention;
Fig. 5 is a diagram of peak/valley detection using windows in the present invention;
Fig. 6 is a peak/valley feature-vectorization diagram of the present invention;
Fig. 7 is a diagram of the peak/valley feature matrixing of special actions in the present invention.
Detailed Description:
The present invention is described in further detail below with reference to the accompanying drawings.
The training flow of the present invention is shown in Fig. 1:
First, the user performs different actions while carrying a smartphone or small embedded device, and the three-axis accelerometer produces data. The data are separated by X, Y, and Z axis; peak and valley feature points are found along each axis and encoded; the feature points are vectorized; gestures with the same number of peaks and valleys are matrixed; multiple samples are collected; and an artificial neural network is trained on the samples to form the standard library of user action features.
The smartphone accelerometer data are separated along the X, Y, and Z axes. If an action is distinctive along the X-axis direction, it can be recognized from the X axis alone, and the Y and Z axes need not be compared; if it cannot be recognized on the X axis, the Y axis and then the Z axis are processed in turn. This method reduces data processing on the phone and saves the smartphone's computing resources.
Peak and valley feature points are extracted from the user's gesture behavior on a segment-by-segment basis. This provides a way of extracting peak/valley feature points that is simple, requires little computation, and can filter out noise points to find the key feature points.
The encoded peak/valley feature points are expressed as vectors. Vectorization gives the feature points a uniform format, which simplifies further computation and improves computational efficiency.
Actions with the same number of peaks and valleys are matrixed: such actions form a 3×M matrix, where M is the number of peaks and valleys on each of the three coordinate axes. This further improves computational efficiency.
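To make the matrixing step concrete, here is a minimal sketch. It is not the patent's implementation; the function name and the plain list-of-lists stand-in for a matrix type are assumptions.

```python
def matrixize(codes_x, codes_y, codes_z):
    """Stack the three axes' binary peak/valley codes into a 3 x M matrix,
    as described above.  All three axes must contain the same number M of
    extrema for the stacking to be valid."""
    if not (len(codes_x) == len(codes_y) == len(codes_z)):
        raise ValueError("axes must have the same number of peaks/valleys")
    return [list(codes_x), list(codes_y), list(codes_z)]

# Three axes, each with M = 2 extrema, give a 3 x 2 matrix.
print(matrixize([1, 0], [0, 1], [1, 1]))  # [[1, 0], [0, 1], [1, 1]]
```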
The recognition flow of the present invention is shown in Fig. 2:
First, the standard library of user action features is ported into the smartphone's storage. When the user performs different actions while carrying the phone, sensor data are collected on the phone, a buffer is allocated, motion-trajectory data are extracted, multiple threads extract segmented feature points, and the results are matched against the standard library, so that feature-point extraction and action recognition are performed locally and in real time on the smartphone.
The process is described further below.
As shown in Fig. 3, when the user performs an action, the smartphone accelerometer produces data along the X, Y, and Z directions; after processing and concatenation, this yields three waveforms. The data on the phone's coordinate axes reflect the user's motion state.
As shown in Fig. 4, the three sets of accelerometer data for the X, Y, and Z directions are processed separately, and the order in which peaks and valleys occur defines the waveform's feature points. The detected feature points are encoded with a peak as 1 and a valley as 0. Binary encoding saves storage resources.
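The peak-as-1, valley-as-0 encoding can be sketched as follows. The `encode_extrema` name and the (kind, value) pair representation of a time-ordered extrema sequence are illustrative assumptions, not the patent's data structures.

```python
def encode_extrema(extrema):
    """Binary-encode a time-ordered extrema sequence: peak -> 1, valley -> 0.

    `extrema` is a list of ("peak" | "valley", value) pairs, an assumed
    representation for this sketch; only the kind matters for the code."""
    return [1 if kind == "peak" else 0 for kind, _ in extrema]

# Example: peak, valley, peak, peak, valley, in order of occurrence.
codes = encode_extrema([("peak", 9.8), ("valley", -3.1),
                        ("peak", 7.2), ("peak", 6.5), ("valley", -2.0)])
print(codes)  # [1, 0, 1, 1, 0]
```

Because each feature point reduces to a single bit, an entire action's signature along one axis fits in a handful of bytes, which is the storage saving the text describes.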
The peak/valley feature-extraction method:
As shown in Fig. 5, a sliding window is used to identify the peaks and valleys of a segment of data. A fixed number of generated samples forms a fixed-size sliding window that contains the user's motion-state data; this window is divided evenly into equal smaller sliding windows, and peak/valley detection is performed within each small window. Each small window contains peaks and valleys, which are sorted in order of occurrence; the peaks are compared with one another and the valleys with one another, and the few peaks with the largest values and the few valleys with the smallest values are taken as the feature points of each axis. This method filters out some noise points and finds the key feature points.
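A minimal sketch of this windowed detection follows. The parameter names (`num_subwindows`, `keep`) and the tie-breaking choices are assumptions the patent does not specify.

```python
def detect_extrema(samples, num_subwindows=4, keep=2):
    """Split a fixed-size window into equal sub-windows, take each
    sub-window's local max/min as peak/valley candidates, then keep the
    `keep` largest peaks and `keep` smallest valleys, restored to time
    order, as the axis's feature points."""
    n = len(samples) // num_subwindows
    peaks, valleys = [], []
    for w in range(num_subwindows):
        chunk = samples[w * n:(w + 1) * n]
        if not chunk:
            continue
        hi = max(range(len(chunk)), key=lambda i: chunk[i])
        lo = min(range(len(chunk)), key=lambda i: chunk[i])
        peaks.append((w * n + hi, chunk[hi]))      # (time index, value)
        valleys.append((w * n + lo, chunk[lo]))
    peaks = sorted(peaks, key=lambda p: -p[1])[:keep]    # largest peaks
    valleys = sorted(valleys, key=lambda v: v[1])[:keep] # smallest valleys
    return sorted(peaks + valleys, key=lambda p: p[0])   # back in time order

print(detect_extrema([0, 5, 0, -4, 0, 3, 0, -6]))
# [(1, 5), (3, -4), (5, 3), (7, -6)]
```

Small sub-window extrema caused by jitter are discarded by the top-k/bottom-k selection, which matches the noise-filtering behavior claimed above.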
As shown in Fig. 6, the feature points obtained from the peaks and valleys are vectorized. The peaks and valleys produced by the three sets of accelerometer data, taken in time order, form three vectors; the data in each vector represent the feature points along one direction and the order in which those feature points occur.
As shown in Fig. 7, the data formed by special actions are matrixed. The data produced along each sensor direction are separated, and if the peak/valley sequences on the three coordinate axes all have the same length, they can be matrixed to form a single 3×M matrix.
Recognition stage: the smartphone sensors produce data at every moment, so computing the data's peaks and valleys must be kept in step with data production. A dedicated buffer and multiple threads are used to solve this synchronization problem.
The implementation is as follows. During peak/valley detection, a large sliding-window buffer is defined, and the data produced by the sensors are continuously and iteratively stored in a fixed-length matrix of length N, where N is a length value set according to the display-window length. The newest data are added to the matrix while the earliest-stored data are discarded, keeping the matrix length fixed at N. The three fixed-length matrices are then divided into equal parts, and multiple threads are opened over the resulting small sliding-window buffers to detect the peak/valley features.
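The newest-in, oldest-out buffer of fixed length N maps naturally onto a bounded deque. The sketch below, whose names and two-part split are illustrative assumptions, shows one sub-window scan running on a worker thread.

```python
import threading
from collections import deque

N = 8  # buffer length; the text sets N from the display-window length
buffers = {axis: deque(maxlen=N) for axis in "xyz"}  # newest in, oldest out

def feed(axis, value):
    """Append one sensor sample; deque(maxlen=N) drops the oldest."""
    buffers[axis].append(value)

def scan_subwindows(axis, parts, results):
    """Worker: split the axis buffer into equal sub-windows and record
    each sub-window's (min, max) as valley/peak candidates."""
    data = list(buffers[axis])          # snapshot decouples scan from feed
    n = len(data) // parts
    for w in range(parts):
        chunk = data[w * n:(w + 1) * n]
        if chunk:
            results.append((axis, w, min(chunk), max(chunk)))

# Simulate a stream of 12 samples on the x axis; only the last N=8 remain.
for v in range(12):
    feed("x", v)

results = []
t = threading.Thread(target=scan_subwindows, args=("x", 2, results))
t.start(); t.join()
print(results)  # [('x', 0, 4, 7), ('x', 1, 8, 11)]
```

Scanning a snapshot of the buffer on worker threads while the sensor callback keeps appending is one plausible way to realize the producer/consumer synchronization the text describes.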
When comparing the user's motion state with the sample library, the X-axis data are compared first; if an action is distinctive along the X-axis direction, it is recognized immediately and the Y and Z axes need not be compared. Otherwise the Y axis and then the Z axis are processed in turn.
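The axis-by-axis short-circuit comparison could be sketched as follows; the library and candidate structures (axis name to binary peak/valley code) and the action names are illustrative assumptions.

```python
def match_action(candidate, library):
    """Compare axes in X, Y, Z order; stop as soon as at most one
    library action remains, so later axes are consulted only when the
    earlier ones are ambiguous."""
    remaining = list(library)
    for axis in ("x", "y", "z"):
        remaining = [a for a in remaining
                     if library[a][axis] == candidate[axis]]
        if len(remaining) <= 1:
            break  # unique (or no) match; skip the remaining axes
    return remaining[0] if remaining else None

library = {
    "circle": {"x": [1, 0, 1], "y": [1, 1], "z": [0]},
    "shake":  {"x": [1, 0],    "y": [0, 1], "z": [1]},
}
print(match_action({"x": [1, 0, 1], "y": [1, 1], "z": [0]}, library))  # circle
```

Because the X-axis comparison alone already singles out "circle" here, the Y and Z codes are never inspected, which is the computational saving claimed above.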
The effect of the present invention is further illustrated below through specific application scenarios:
Scenario 1: human-computer interaction. Current human-computer interaction requires purchasing dedicated equipment, which is relatively expensive. A smartphone is a highly integrated device with many built-in sensors that produce large amounts of data at every moment; since the phone's computing power is weak and its storage small, only the key points of the generated data are stored, effectively. User actions are recognized and then mapped to other functions for human-computer interaction.
Scenario 2: feature-point normalization. Some waveform-feature processing is subject to considerable interference, such as speed and force, so the collected feature points would normally need normalization. With this method, no up-scaling or down-scaling normalization is required; the peak and valley feature points need only be found in time order.
Scenario 3: feature-point quantization and computation. In some specific domains, feature points cannot be processed computationally once they have been found. By matrixing the feature points as described here, they can be processed further.
In summary, by applying a series of operations to the smartphone accelerometer data (separation, feature extraction, feature encoding, and feature-point vectorization), the present invention effectively solves the problem that motion-trajectory feature extraction on smartphones is complex. Encoding and quantizing the waveform features stores the key information efficiently and greatly saves the smartphone's storage resources; the feature encoding reduces the need to normalize feature points, saving the smartphone's computing resources; and motion-trajectory features can be extracted and recognized locally in real time without additional equipment, making the method suitable for resource-constrained embedded devices such as smartphones.
Claims (4)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610732089.XA CN106406516A (en) | 2016-08-26 | 2016-08-26 | Local real-time movement trajectory characteristic extraction and identification method for smartphone |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN106406516A true CN106406516A (en) | 2017-02-15 |
Family
ID=58004828
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610732089.XA Pending CN106406516A (en) | 2016-08-26 | 2016-08-26 | Local real-time movement trajectory characteristic extraction and identification method for smartphone |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN106406516A (en) |
Citations (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101788861A (en) * | 2009-01-22 | 2010-07-28 | 华硕电脑股份有限公司 | Three-dimensional action recognition method and system |
| CN102246125A (en) * | 2008-10-15 | 2011-11-16 | 因文森斯公司 | Mobile devices with motion gesture recognition |
| CN102772211A (en) * | 2012-08-08 | 2012-11-14 | 中山大学 | Human movement state detection system and detection method |
| CN103345627A (en) * | 2013-07-23 | 2013-10-09 | 清华大学 | Action recognition method and device |
| CN103517118A (en) * | 2012-12-28 | 2014-01-15 | Tcl集团股份有限公司 | Motion recognition method and system for remote controller |
| CN103886323A (en) * | 2013-09-24 | 2014-06-25 | 清华大学 | Behavior identification method based on mobile terminal and mobile terminal |
| CN103984416A (en) * | 2014-06-10 | 2014-08-13 | 北京邮电大学 | Gesture recognition method based on acceleration sensor |
| CN104754111A (en) * | 2013-12-31 | 2015-07-01 | 北京新媒传信科技有限公司 | Control method for mobile terminal application and control device |
| CN104750386A (en) * | 2015-03-20 | 2015-07-01 | 广东欧珀移动通信有限公司 | Gesture recognition method and device |
| CN105159441A (en) * | 2015-07-28 | 2015-12-16 | 东华大学 | Autonomous motion identification technology based private coach smart band |
- 2016-08-26: CN application CN201610732089.XA filed; published as CN106406516A (status: active, pending)
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108737623A (en) * | 2018-05-31 | 2018-11-02 | 南京航空航天大学 | The method for identifying ID of position and carrying mode is carried based on smart mobile phone |
| CN110151187A (en) * | 2019-04-09 | 2019-08-23 | 缤刻普达(北京)科技有限责任公司 | Body-building action identification method, device, computer equipment and storage medium |
| CN110151187B (en) * | 2019-04-09 | 2022-07-05 | 缤刻普达(北京)科技有限责任公司 | Body-building action recognition method and device, computer equipment and storage medium |
| CN113283493A (en) * | 2021-05-19 | 2021-08-20 | Oppo广东移动通信有限公司 | Sample acquisition method, device, terminal and storage medium |
| CN113780447A (en) * | 2021-09-16 | 2021-12-10 | 郑州云智信安安全技术有限公司 | Sensitive data discovery and identification method and system based on flow analysis |
| CN113780447B (en) * | 2021-09-16 | 2023-07-11 | 郑州云智信安安全技术有限公司 | Sensitive data discovery and identification method and system based on flow analysis |
| WO2023178594A1 (en) * | 2022-03-24 | 2023-09-28 | 广东高驰运动科技股份有限公司 | Action counting method and apparatus, device, and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| RJ01 | Rejection of invention patent application after publication | ||
| RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170215 |