
CN111184512B - Method for recognizing rehabilitation training actions of upper limbs and hands of stroke patient - Google Patents


Info

Publication number: CN111184512B
Application number: CN201911394850.3A
Authority: CN (China)
Prior art keywords: layer, attention, data, weight, rehabilitation training
Legal status: Active (assumed; Google has not performed a legal analysis)
Other versions: CN111184512A (Chinese)
Inventors: 刘勇国, 任志扬, 李巧勤, 杨尚明, 刘朗, 陈智
Assignee (current and original): University of Electronic Science and Technology of China
Application filed by University of Electronic Science and Technology of China, with priority to CN201911394850.3A
Publication of application: CN111184512A; application granted; publication of grant: CN111184512B

Classifications

    • G06N3/045 Combinations of networks (G06N3/04 Architecture, e.g. interconnection topology; G06N3/02 Neural networks; G06N Computing arrangements based on specific computational models; G Physics)
    • A61B5/1125 Grasping motions of hands (A61B5/1124 Determining motor skills; A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb; A61B5/103 Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes; A61B5/00 Measuring for diagnostic purposes; A61B Diagnosis; Surgery; Identification; A61 Medical or veterinary science; A Human necessities)
    • A61B5/1126 Measuring movement of the entire body or parts thereof using a particular sensing technique
    • A61B5/389 Electromyography [EMG] (A61B5/316 Modalities, i.e. specific diagnostic methods; A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof)
    • A61B5/4836 Diagnosis combined with treatment in closed-loop systems or methods (A61B5/48 Other medical applications)
    • A61B5/7203 Signal processing for noise prevention, reduction or removal (A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes)
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems, involving training the classification device (A61B5/7264 Classification of physiological signals or data; A61B5/7235 Details of waveform analysis)


Abstract

The invention discloses a method for recognizing upper-limb and hand rehabilitation training actions of stroke patients. A non-negative matrix factorization model performs blind source separation on the electromyographic (EMG) signal data, removing non-stationary muscle activation information and obtaining stable time-varying blind source separation results; the decomposed time-varying results are then used for further pattern recognition, improving the stability and accuracy of recognition. A CNN-RNN model lets the learned features preserve both temporal and spatial characteristics. The CNN-RNN model requires no manual feature extraction or selection: it processes the data directly, extracts features automatically, and completes classification, enabling end-to-end recognition and analysis of rehabilitation training actions. In addition, an attention layer applies attention weighting to the hidden states of the second of the two bidirectional GRU layers, assigning larger weights to data with larger contributions so that they play a greater role, further improving classification accuracy.

Figure 201911394850

Description

A method for recognizing upper-limb and hand rehabilitation training actions of stroke patients

Technical Field

The invention belongs to the field of machine learning, and in particular relates to a method for recognizing upper-limb and hand rehabilitation training actions of stroke patients.

Background Art

As an important treatment in rehabilitation medicine, rehabilitation training uses different exercise regimens to improve functional movement disorders of the affected limbs and to restore the patient's motor function as far as possible, achieving a therapeutic effect. Among stroke patients with limb dysfunction, 80% suffer from upper-limb dysfunction; of those, only 30% ultimately recover upper-limb function, and only 12% achieve good recovery of hand function. Recovery of upper-limb and hand function has a profound impact on stroke patients' quality of life and social participation. Recognition of stroke patients' rehabilitation training actions is widely applicable: clinically, the recognition results can serve as control signals for assistive training devices such as mechanical exoskeletons and prostheses; they can also serve as input signals for serious-game rehabilitation training such as virtual-reality rehabilitation, enable interactive rehabilitation training in community or home settings, or help physicians monitor training remotely. Surface electromyography (sEMG) is non-invasive, easy to record, and rich in motor-control information, so sEMG signals are commonly used to recognize rehabilitation training actions.

At present, machine-learning methods that use sEMG signals for rehabilitation-action recognition comprise three main steps: signal preprocessing, feature extraction, and classification. First, the collected raw EMG signal is preprocessed: noise is removed by notch filtering, band-pass filtering, full-wave rectification, and so on, and the EMG time series is then segmented by a threshold or manual method into the signal segments corresponding to each training action. In the feature-extraction step, features must be selected and set manually and then extracted from the preprocessed EMG signal. Features fall mainly into two categories, time-domain and frequency-domain: time-domain features include peak value, mean, root mean square, kurtosis, and autoregressive coefficients; frequency-domain features include power spectrum, median frequency, centroid frequency, and frequency root mean square. Finally, the feature set extracted in the previous step and the corresponding class labels are used as input, and a machine-learning algorithm is applied to train the classification model. Classifiers commonly used for rehabilitation-action recognition include decision trees, support vector machines (SVM), linear discriminant classifiers (LDC), naive Bayes classifiers (NB), and Gaussian mixture models (GMM). Once model training is complete, newly collected EMG signals of training actions can be recognized after preprocessing and feature extraction.

At present, the typical recognition pipeline for rehabilitation training actions is to set and extract features manually and then apply machine-learning methods for classification.

Such models have limitations. Raw EMG signals are somewhat non-stationary: differences in patients' physical characteristics, in stroke lesions, and in how accurately movements are performed all lead to large variations in EMG data, which in turn affect recognition, and basic preprocessing such as filtering and rectification cannot remove this non-stationarity. Moreover, this kind of feature engineering usually requires domain-specific expertise, which further increases preprocessing cost. The recognition performance of existing models depends heavily on feature selection, and different features affect classification performance differently; manually set and extracted features may be correlated with one another and thus introduce information redundancy; and for physiological time-series data the time-varying information is also important, but manual feature extraction discards it. The machine-learning classifiers currently in use (such as SVM, LDC, and GMM) perform poorly at recognizing and distinguishing the impaired upper-limb and hand movements of stroke patients, as well as similar movements such as those of different fingers.

Summary of the Invention

In view of the above deficiencies of the prior art, the present invention provides a method for recognizing upper-limb and hand rehabilitation training actions of stroke patients, which solves the problems of information redundancy and loss of time-varying information caused by manual feature setting and extraction, and of the poor discrimination performance of machine-learning recognition.

To achieve the above purpose, the technical solution adopted by the present invention is a method for recognizing upper-limb and hand rehabilitation training actions of stroke patients, comprising the following steps:

S1. Collecting EMG signal data of rehabilitation training actions;

S2. Preprocessing the EMG signal data;

S3. Decomposing the preprocessed EMG signal data with a non-negative matrix factorization model to obtain multiple blind source separation result matrices;

S4. Iteratively training a CNN-RNN model with the multiple blind source separation result matrices to obtain a trained CNN-RNN model;

S5. For newly collected EMG data of training actions, repeating steps S1-S3 to obtain multiple blind source separation result matrices, and inputting them into the trained CNN-RNN model to obtain the recognized category of the rehabilitation training action.

Further, step S3 comprises the following steps:

S31. Manually segmenting the preprocessed EMG signal data along the time dimension to obtain an EMG data matrix composed of each piece of data of the corresponding time series;

S32. Decomposing the EMG data matrix with the non-negative matrix factorization model to obtain multiple blind source separation result matrices.

Further, step S4 comprises the following steps:

S41. Building the CNN network model and the RNN network model, and initializing the iteration count m = 0;

S42. Inputting the multiple blind source separation result matrices into the CNN network model and performing feature extraction and pooling dimension reduction to obtain feature vectors;

S43. Inputting the feature vectors into the RNN network model for processing to obtain probability values of the predicted action categories;

S44. Computing the distance Loss_m between the probability values of the predicted and true action categories by cross-entropy;

S45. Judging whether the difference between the m-th value Loss_m and the (m-1)-th value Loss_{m-1} is smaller than a threshold; if so, the trained CNN-RNN model is obtained; if not, updating the weight and bias parameters of the CNN network model and the weight parameters of the RNN network model by batch stochastic gradient descent, incrementing the iteration count m by 1, and returning to step S42.
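The convergence test in steps S44-S45 can be sketched as follows; the loss values and the threshold here are illustrative stand-ins, not values from the patent:

```python
import numpy as np

def cross_entropy(pred, onehot, eps=1e-12):
    """Cross-entropy distance between predicted and true class
    probability vectors, as used in step S44."""
    return -np.sum(onehot * np.log(pred + eps))

# S45: stop when the change in loss between iterations drops below a threshold
threshold = 0.05
losses = [2.0, 1.2, 0.8, 0.6, 0.59]  # stand-in loss curve, one value per epoch
m, prev = 0, None
for loss in losses:
    m += 1
    if prev is not None and abs(prev - loss) < threshold:
        break  # converged: keep the current model
    prev = loss  # otherwise update weights by batch SGD and continue
```

With this stand-in curve the loop stops at the fifth iteration, where the loss change (0.01) falls below the threshold.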

Further, the CNN network model in step S41 comprises three convolutional layers, three pooling layers, and three activation layers.

Further, the input-output relation of the convolutional layers is:

$$x_j^l = \sum_{i=1}^{M_{l-1}} x_i^{l-1} * k_{ij}^l + b_j^l, \qquad 1 \le l \le 3$$

where $x_i^{l-1}$ is the data of the $i$-th input channel of convolutional layer $l-1$, $x_j^l$ is the data of the $j$-th output channel of convolutional layer $l$, $M_{l-1}$ is the number of input channels of convolutional layer $l-1$, $k_{ij}^l$ is the convolution kernel weight of layer $l$, $b_j^l$ is the bias of convolutional layer $l$, and $*$ denotes convolution; the data of the $i$-th input channel of the first convolutional layer is the $i$-th row of the blind source separation result matrix $H_{r \times n}$.

Further, the RNN network model in step S41 comprises two bidirectional GRU layers, an attention layer, and a fully connected layer; each bidirectional GRU layer comprises T GRU units.

Each GRU unit contains an update gate and a reset gate.

The input of the first of the two bidirectional GRU layers is the feature vector, and the output of its second layer is the input of the attention layer.

The output of the attention layer is the input of the fully connected layer.

Further, the state update equations of a GRU unit are:

$$r_t = \sigma(W_r \cdot [h_{t-1}, x_t])$$
$$z_t = \sigma(W_z \cdot [h_{t-1}, x_t])$$
$$\tilde{h}_t = \tanh(W_{\tilde{h}} \cdot [r_t * h_{t-1}, x_t])$$
$$h_t = (1 - z_t) * h_{t-1} + z_t * \tilde{h}_t$$

where $[\,]$ denotes concatenation of two vectors, $\cdot$ denotes the matrix-vector product, $*$ denotes the element-wise product, $\sigma$ is the sigmoid activation function, $\tanh$ is the hyperbolic tangent activation function, $W_r$ is the weight matrix of the reset gate $r_t$, $W_z$ is the weight matrix of the update gate $z_t$, $W_{\tilde{h}}$ is the weight matrix of the candidate state $\tilde{h}_t$, $x_t$ is the feature vector, $h_t$ is the hidden state at time $t$, and $h_{t-1}$ is the hidden state at time $t-1$.

Further, inputting the hidden state $h_t$ of the second of the two bidirectional GRU layers into the attention layer for processing comprises the following steps:

A1. Inputting the hidden state $h_t$ of the second bidirectional GRU layer into the attention layer;

A2. Initializing the weight $W_w$ and bias $b_w$ of the attention layer;

A3. Obtaining the hidden representation $u_t$ of the hidden state $h_t$ from the weight $W_w$ and bias $b_w$ through the tanh hyperbolic tangent activation function;

A4. Randomly initializing a weight vector $u_w$ and applying softmax normalization to the hidden representation $u_t$ to obtain the attention weight $\alpha_t$;

A5. Weighting the hidden state $h_t$ by the attention weight $\alpha_t$ to obtain the attention-weighted representation $q_t$ of the hidden state $h_t$.
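Steps A1-A5 can be sketched in NumPy as follows; all dimensions and random initializations are illustrative assumptions, not values from the patent:

```python
import numpy as np

def softmax(a, axis=0):
    """Numerically stable softmax along one axis."""
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(H, Ww, bw, uw):
    """Attention over GRU hidden states (steps A1-A5).
    H: (T, d) matrix whose rows are the hidden states h_t.
    Returns the weights alpha (T,) and the weighted summary q (d,)."""
    U = np.tanh(H @ Ww + bw)   # A3: hidden representation u_t
    alpha = softmax(U @ uw)    # A4: softmax-normalized attention weights
    q = alpha @ H              # A5: attention-weighted combination of h_t
    return alpha, q

rng = np.random.default_rng(0)
T, d = 5, 8                            # 5 time steps, hidden size 8 (assumed)
H = rng.standard_normal((T, d))
alpha, q = attention(H,
                     rng.standard_normal((d, d)) * 0.1,  # A2: weight W_w
                     np.zeros(d),                        # A2: bias b_w
                     rng.standard_normal(d))             # A4: vector u_w
```

The weights `alpha` sum to 1, so `q` is a convex combination of the hidden states, with larger weights on the states that contribute more.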

Further, processing the attention-weighted representation $q_t$ in the fully connected layer comprises:

B1. Inputting the attention-weighted representation $q_t$ into the fully connected layer and processing it to obtain the attention-weighted outputs $o_k$, $k = 1, \dots, C$, where $C$ is the number of neurons in the fully connected layer;

B2. Applying a random-inactivation (dropout) operation to $o_k$ and performing classification with softmax to obtain the probability values of the predicted action categories.

Further, the probability value of the predicted action category in step B2 is computed as:

$$s_k = \frac{\exp(o_k)}{\sum_{c=1}^{C} \exp(o_c)}$$

where $s_k$ is the predicted probability of the $k$-th action category.
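A sketch of steps B1-B2; the affine form of $o_k$, the dropout rate, and all dimensions are assumptions for illustration (the patent does not spell out the layer's exact parameterization):

```python
import numpy as np

def classify(q, Wf, bf, drop_rate=0.5, train=False, rng=None):
    """Fully connected layer + dropout + softmax (steps B1-B2).
    q: attention-weighted feature vector; returns probabilities s_k."""
    o = Wf @ q + bf                    # B1: one output o_k per neuron/class
    if train:                          # B2: random inactivation (dropout)
        mask = (rng.random(o.shape) >= drop_rate) / (1.0 - drop_rate)
        o = o * mask
    e = np.exp(o - o.max())            # B2: softmax over the C outputs
    return e / e.sum()

rng = np.random.default_rng(0)
q = rng.standard_normal(16)                      # assumed feature size 16
Wf = rng.standard_normal((25, 16)) * 0.1         # C = 25 action categories
probs = classify(q, Wf, np.zeros(25))
```

At inference (`train=False`) dropout is skipped and the output is a valid probability distribution over the 25 action categories.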

The beneficial effects of the present invention are as follows. The method performs blind source separation of the EMG signal data with a non-negative matrix factorization model, removing non-stationary muscle activation information and obtaining stable time-varying blind source separation results; the decomposed time-varying results are used for further pattern recognition, improving the stability and accuracy of recognition. The CNN network model retains the spatial characteristics of the blind-source-separation data, while the RNN network model fuses the feature data and provides time-dimension information that aids the discrimination of the current data; through the CNN-RNN model, the learned features preserve both temporal and spatial characteristics. The CNN-RNN model requires no manual feature extraction or selection: it processes the data directly, extracts features automatically, and completes classification, enabling end-to-end recognition and analysis of rehabilitation training actions. Combined with the attention layer, attention weighting is applied to the hidden states of the second of the two bidirectional GRU layers, assigning larger weights to data with larger contributions so that they play a greater role, further improving classification accuracy.

Description of Drawings

Fig. 1 is a flow chart of the method for recognizing upper-limb and hand rehabilitation training actions of stroke patients;

Fig. 2 is a schematic diagram of the structure of the CNN network model;

Fig. 3 is a structural diagram of a GRU unit;

Fig. 4 is a structural diagram of part of the RNN network model.

Detailed Description

The specific embodiments of the present invention are described below to help those skilled in the art understand the invention, but it should be clear that the invention is not limited to the scope of the specific embodiments. For those of ordinary skill in the art, various changes are obvious as long as they fall within the spirit and scope of the invention as defined and determined by the appended claims, and all inventions and creations using the inventive concept are protected.

As shown in Fig. 1, a method for recognizing upper-limb and hand rehabilitation training actions of stroke patients comprises the following steps:

S1. Collecting EMG signal data of rehabilitation training actions.

Electrodes were placed on the subjects' finger extensors, finger flexors, biceps brachii, triceps brachii, deltoid, thenar muscles, and hypothenar muscles, collecting 8 channels of surface EMG (sEMG) signals at a sampling frequency of 2 kHz; the electrode positions are shown in Table 1.

Table 1. EMG electrode position design

Figure BDA0002346021670000071

In this embodiment, 25 functional actions from occupational and motor rehabilitation training of the upper arm, forearm, and hand are designed, as shown in Table 2. The subject sits relaxed in an armchair. For hand, wrist, and elbow actions, both arms rest in a fixed position on a table; following video or voice instructions, the subject moves from rest to contraction and holds the posture for a total of 5 seconds, each action is repeated 6 times, and the rest between actions is 5 seconds. For shoulder actions, the subject sits upright in a chair with no obstruction in front; following video or voice instructions, the subject moves from rest to muscle contraction, completes the action, and holds the posture for 5 seconds, each action again repeated 6 times with 5 seconds of rest between actions. During the experiment, a video of each movement performed by a healthy person is used as a demonstration to guide the subject to perform (or intend to perform) each movement.

Table 2. Experimental action design

Figure BDA0002346021670000072

S2. Preprocessing the EMG signal data: a 50 Hz notch filter removes power-line interference, a 20-450 Hz band-pass filter removes motion artifacts (<20 Hz) and high-frequency noise (>450 Hz), and full-wave rectification is applied.
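As an illustrative sketch of the preprocessing in step S2 (the filter order, the notch quality factor, and the synthetic test signal are assumptions, not values from the patent):

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

def preprocess_emg(x, fs=2000.0):
    """Preprocess one sEMG channel as in step S2: 50 Hz notch,
    20-450 Hz band-pass, then full-wave rectification."""
    # 50 Hz notch filter to suppress power-line interference (Q assumed)
    b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=fs)
    x = filtfilt(b_notch, a_notch, x)
    # 4th-order Butterworth band-pass, 20-450 Hz (order assumed)
    b_bp, a_bp = butter(4, [20.0, 450.0], btype="bandpass", fs=fs)
    x = filtfilt(b_bp, a_bp, x)
    # full-wave rectification
    return np.abs(x)

# example: 1 s of synthetic signal sampled at 2 kHz
t = np.arange(2000) / 2000.0
raw = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
clean = preprocess_emg(raw)
```

Zero-phase filtering via `filtfilt` avoids shifting the EMG envelope in time, which matters when the series is later segmented per action.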

S3. Decomposing the preprocessed EMG signal data with a non-negative matrix factorization model to obtain multiple blind source separation result matrices.

Step S3 comprises the following steps:

S31. Manually segmenting the preprocessed EMG signal data along the time dimension to obtain an EMG data matrix composed of each piece of data of the corresponding time series;

S32. Decomposing the EMG data matrix with the non-negative matrix factorization model to obtain multiple blind source separation result matrices.

The non-negative matrix factorization model in step S32 is:

$$X_{m \times n} = W_{m \times r} \times H_{r \times n}$$

where $X_{m \times n}$ is the EMG signal data matrix of dimension $m \times n$, $m$ is the number of electrodes, $n$ is the number of measurement values, $W_{m \times r}$ is the muscle activity matrix of dimension $m \times r$, and $H_{r \times n}$ is the blind source separation result matrix of dimension $r \times n$.
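A minimal NumPy sketch of the factorization $X_{m \times n} = W_{m \times r} \times H_{r \times n}$; the multiplicative-update solver and the random test matrix are illustrative assumptions (the patent does not specify a solver):

```python
import numpy as np

def nmf(X, r, n_iter=500, eps=1e-9, seed=0):
    """Factor a non-negative matrix X (m x n) into W (m x r) and H (r x n)
    using multiplicative updates for the Frobenius-norm objective."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)  # update H with W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)  # update W with H fixed
    return W, H

# 8 electrodes x 200 samples of rectified (non-negative) stand-in EMG data
rng = np.random.default_rng(1)
X = rng.random((8, 200))
W, H = nmf(X, r=4)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The multiplicative updates preserve non-negativity of W and H at every step, which is what makes the rows of H usable as non-negative blind-source signals.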

S4. Iteratively training the CNN-RNN model with the multiple blind source separation result matrices to obtain a trained CNN-RNN model.

Step S4 comprises the following steps:

S41. Building the CNN network model and the RNN network model, and initializing the iteration count m = 0.

The CNN network model in step S41 comprises three convolutional layers, three pooling layers, and three activation layers, as shown in Fig. 2.

The input and output of each convolutional layer are computed as:

$$x_j^l = \sum_{i=1}^{M_{l-1}} x_i^{l-1} * k_{ij}^l + b_j^l, \qquad 1 \le l \le 3$$

where $x_i^{l-1}$ is the data of the $i$-th input channel of convolutional layer $l-1$, $x_j^l$ is the data of the $j$-th output channel of convolutional layer $l$, $M_{l-1}$ is the number of input channels of convolutional layer $l-1$, $k_{ij}^l$ is the convolution kernel weight of layer $l$, $b_j^l$ is the bias of convolutional layer $l$, and $*$ denotes convolution; the data of the $i$-th input channel of the first convolutional layer is the $i$-th row of the blind source separation result matrix $H_{r \times n}$.

S42. Inputting the multiple blind source separation result matrices into the CNN network model and performing feature extraction and pooling dimension reduction to obtain feature vectors.

The activation layers introduce nonlinearity into the CNN network model and improve the fitting ability of the neural network. In this embodiment, the activation layers use the ReLU nonlinear function, which makes the network train faster without compromising accuracy and alleviates the vanishing-gradient problem.

The pooling layers reduce the dimensionality of the input data and the number of parameters or weights to be trained, thereby reducing computational cost and controlling overfitting.

S43、将特征向量输入RNN网络模型中进行处理,得到预测动作类别的概率值;S43, input the feature vector into the RNN network model for processing, and obtain the probability value of the predicted action category;

RNN网络模型包括:两层双向GRU层、注意力层和全连接层,每层双向GRU层包括T′个GRU单元;所述GRU单元中包括更新门和重置门;The RNN network model includes: two-layer bidirectional GRU layer, attention layer and fully connected layer, each bidirectional GRU layer includes T' GRU units; the GRU unit includes an update gate and a reset gate;

The input of the first of the two bidirectional GRU layers is the feature vector, and the output of the second layer is the input of the attention layer.

The output of the attention layer is the input of the fully connected layer, as shown in Figure 3.

The state update equations of the GRU unit are as follows:

z_t = σ(W_z · [h_{t−1}, x_t])

r_t = σ(W_r · [h_{t−1}, x_t])

h̃_t = tanh(W_h̃ · [r_t ⊙ h_{t−1}, x_t])

h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t

where [ , ] denotes the concatenation of two vectors, · denotes the vector inner product, σ is the activation function, tanh is the hyperbolic tangent activation function, W_r is the weight matrix of the reset gate r_t, W_z is the weight matrix of the update gate z_t, W_h̃ is the weight matrix of the candidate state h̃_t, x_t is the feature vector, h_t is the hidden state at time t, and h_{t−1} is the hidden state at time t−1.
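The GRU update above can be sketched in NumPy as follows; the vector dimensions and the omission of bias terms are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wz, Wr, Wh):
    """One GRU state update following the equations above.

    x_t:    (d_x,)            input (feature vector)
    h_prev: (d_h,)            hidden state at time t-1
    Wz, Wr: (d_h, d_h + d_x)  update/reset gate weight matrices
    Wh:     (d_h, d_h + d_x)  candidate-state weight matrix
    Bias terms are omitted for brevity.
    """
    concat = np.concatenate([h_prev, x_t])            # [h_{t-1}, x_t]
    z = sigmoid(Wz @ concat)                          # update gate z_t
    r = sigmoid(Wr @ concat)                          # reset gate r_t
    h_cand = np.tanh(Wh @ np.concatenate([r * h_prev, x_t]))  # candidate state
    return (1.0 - z) * h_prev + z * h_cand            # new hidden state h_t

rng = np.random.default_rng(1)
d_x, d_h = 4, 3
x = rng.standard_normal(d_x)
h0 = np.zeros(d_h)
Wz, Wr, Wh = (rng.standard_normal((d_h, d_h + d_x)) for _ in range(3))
h1 = gru_step(x, h0, Wz, Wr, Wh)
print(h1.shape)   # (3,)
```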

As shown in Figure 4, inputting the hidden state h_t of the second of the two bidirectional GRU layers into the attention layer for processing includes the following steps:

A1. Input the hidden state h_t of the second bidirectional GRU layer into the attention layer;

A2. Initialize the weight W_w and bias b_w of the attention layer;

A3. Using the attention-layer weight W_w and bias b_w, obtain the hidden-layer representation u_t of the hidden state h_t through the tanh hyperbolic tangent activation function;

A4. Randomly initialize a weight vector u_w and apply softmax normalization to the hidden-layer representation u_t to obtain the attention weight α_t;

A5. Weight the hidden state h_t by the attention weight α_t to obtain the attention-weighted representation q_t of the hidden state h_t.

The attention-weighted representation q_t in step A5 is calculated as:

u_t = tanh(W_w h_t + b_w)

α_t = exp(u_t^T u_w) / Σ_{t′} exp(u_{t′}^T u_w)

q_t = α_t h_t
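Steps A1–A5 and the three formulas above can be sketched as follows; the sequence length T and state dimension d are illustrative assumptions:

```python
import numpy as np

def attention_pool(H, Ww, bw, uw):
    """Attention weighting of GRU hidden states (steps A1-A5 above).

    H:  (T, d)  hidden states h_t from the second bidirectional GRU layer
    Ww: (d, d), bw: (d,)  attention-layer weight and bias
    uw: (d,)    randomly initialized context (weight) vector
    Returns (alpha, Q): attention weights (T,) and weighted states (T, d).
    """
    U = np.tanh(H @ Ww.T + bw)                      # u_t = tanh(Ww h_t + bw)
    scores = U @ uw                                 # u_t^T u_w
    scores -= scores.max()                          # shift for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()   # softmax over the T timesteps
    Q = alpha[:, None] * H                          # q_t = alpha_t h_t
    return alpha, Q

rng = np.random.default_rng(2)
T, d = 5, 3
H = rng.standard_normal((T, d))
alpha, Q = attention_pool(H, rng.standard_normal((d, d)),
                          np.zeros(d), rng.standard_normal(d))
print(alpha.sum())   # 1.0 (the attention weights are normalized)
```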

The process of inputting the attention-weighted representation q_t into the fully connected layer for processing includes:

B1. Input the attention-weighted representation q_t into the fully connected layer and perform discrete processing to obtain the attention-weighted outputs o_k, k = 1, 2, …, C, where C is the number of neurons in the fully connected layer;

B2. Perform a random inactivation (dropout) operation on the attention-weighted outputs o_k and a classification operation using softmax to obtain the probability values of the predicted action categories.

The probability value of the predicted action category in step B2 is calculated as:

s_k = exp(o_k) / Σ_{k′=1}^{C} exp(o_{k′})

where s_k is the probability value of the predicted k-th action category.
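A sketch of B1–B2, under the assumptions that the "discrete processing" is an affine fully connected map and the random inactivation is standard dropout (both hypothetical readings of the patent text); dropout is disabled here so the output is deterministic:

```python
import numpy as np

def fc_softmax(q, Wo, bo, drop_p=0.5, rng=None, train=False):
    """Fully connected outputs o_k, optional dropout, then softmax s_k."""
    o = Wo @ q + bo                          # o_k, k = 1..C (assumed affine map)
    if train and rng is not None:            # random inactivation (dropout)
        mask = rng.random(o.shape) >= drop_p
        o = o * mask / (1.0 - drop_p)        # inverted-dropout scaling
    o = o - o.max()                          # shift for numerical stability
    e = np.exp(o)
    return e / e.sum()                       # s_k = exp(o_k) / sum exp(o_k')

rng = np.random.default_rng(3)
q = rng.standard_normal(4)
Wo = rng.standard_normal((3, 4))             # C = 3 action classes (illustrative)
s = fc_softmax(q, Wo, np.zeros(3))
print(s.sum())   # 1.0
```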

S44. Calculate the distance Loss_m between the probability values of the predicted action category and the true action category through cross entropy;

The distance Loss_m between the predicted and true action category probability values in step S44 is calculated as:

Loss_m = −(1/Batch) Σ_{n=1}^{Batch} y_n · log(O_n)

where Batch is the number of samples in a batch, n indexes the n-th piece of data, O_n is the predicted action category probability distribution {s_1, s_2, …, s_C} of the n-th piece of data, and y_n is the true action category probability value of the n-th piece of data.
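The batch cross-entropy above can be sketched as follows, assuming the true label y_n is supplied as an integer class index (equivalent to a one-hot y_n in the formula):

```python
import numpy as np

def batch_cross_entropy(O, y):
    """Cross-entropy distance between predicted distributions and true labels.

    O: (Batch, C) predicted class-probability distributions {s_1..s_C}
    y: (Batch,)   integer index of the true class of each sample
    """
    batch = O.shape[0]
    eps = 1e-12                                   # guard against log(0)
    # Pick each sample's predicted probability of its true class, average -log.
    return -np.log(O[np.arange(batch), y] + eps).mean()

O = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1]])
y = np.array([0, 1])
loss = batch_cross_entropy(O, y)
print(round(loss, 4))   # 0.2899
```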

S45. Determine whether the difference between the m-th Loss_m value and the (m−1)-th Loss_{m−1} value is less than the threshold. If so, the trained CNN-RNN model is obtained; if not, update the weight and bias parameters of the CNN network model and the weight parameters of the RNN network model using batch stochastic gradient descent, increase the iteration count m by 1, and jump to step S42.
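The stopping rule of S45 can be sketched as a training loop; `model.step` is a hypothetical interface standing in for one forward pass plus one batch-SGD update, not an API from the patent:

```python
def train_until_converged(model, data, threshold=1e-4, max_iters=1000):
    """Iterate S42-S45: stop when successive losses differ by less than threshold.

    `model.step(data)` is assumed to run one forward pass plus one batch
    stochastic-gradient-descent update and return the current loss.
    """
    prev_loss = None
    for m in range(max_iters):
        loss = model.step(data)
        if prev_loss is not None and abs(prev_loss - loss) < threshold:
            return model, m          # converged: trained CNN-RNN model
        prev_loss = loss
    return model, max_iters

# Toy stand-in whose loss halves each step, so the loop terminates quickly.
class _Toy:
    def __init__(self):
        self.loss = 1.0
    def step(self, _):
        self.loss *= 0.5
        return self.loss

_, iters = train_until_converged(_Toy(), None, threshold=1e-3)
print(iters)   # 9: the first step where successive losses differ by < 1e-3
```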

S5. Repeat steps S1–S3 on newly collected training-action electromyographic data to obtain multiple blind source separation result matrices, and input them into the trained CNN-RNN model to obtain the rehabilitation training action recognition category.

The beneficial effects of the present invention are as follows. The method for recognizing upper limb and hand rehabilitation training actions of stroke patients uses a non-negative matrix factorization model to perform blind source separation of the electromyographic signal data, removing non-stationary muscle activation information and obtaining a stable time-varying blind source separation result; the decomposed time-varying blind source separation result data is used for further pattern recognition, improving the stability and accuracy of recognition. The CNN network model retains the spatial characteristics of the blind source separation result data, while the RNN network model fuses the feature data, providing time-dimension information that facilitates discrimination of the current data; through the CNN-RNN model, the learned features maintain both temporal and spatial characteristics. The CNN-RNN model requires no manual feature extraction or screening: it processes the data directly, automatically extracts features, and completes classification and recognition, enabling end-to-end recognition and analysis of rehabilitation training actions. In combination with the attention layer, attention weighting is applied to the hidden state of the second of the two bidirectional GRU layers, giving data with larger contributions greater weight so that they play a greater role, thereby further improving the accuracy of classification and recognition.

Claims (5)

1. A stroke patient upper limb and hand rehabilitation training action recognition method is characterized by comprising the following steps:
s1, collecting myoelectric signal data of rehabilitation training actions;
s2, preprocessing electromyographic signal data;
s3, decomposing the preprocessed electromyographic signal data by adopting a non-negative matrix decomposition model to obtain a plurality of blind source separation result matrixes;
s4, performing iterative training on the CNN-RNN model by adopting a plurality of blind source separation result matrixes to obtain a trained CNN-RNN model;
s5, repeating the steps S1-S3 on newly collected training action electromyographic data to obtain a plurality of blind source separation result matrixes, and inputting the plurality of blind source separation result matrixes into a trained CNN-RNN model to obtain rehabilitation training action recognition categories;
step S3 includes the following steps:
s31, manually segmenting the preprocessed electromyographic signal data in a time dimension to obtain an electromyographic signal data matrix formed by each piece of data of the corresponding time sequence;
s32, decomposing the electromyographic signal data matrix by adopting a non-negative matrix decomposition model to obtain a plurality of blind source separation result matrixes;
the step S4 includes the following steps:
s41, establishing a CNN network model and an RNN network model, and initializing the iteration number m to be 0;
the RNN network model in step S41 includes: two bidirectional GRU layers, an attention layer and a full connection layer, wherein each bidirectional GRU layer comprises T' GRU units;
the GRU unit comprises an updating gate and a resetting gate;
the input in the first of the two bidirectional GRU layers is a feature vector, and the output of the second layer is the input of the attention layer;
the output of the attention layer is the input of the full connection layer;
the state update equation for the GRU unit is as follows:
z_t = σ(W_z · [h_{t−1}, x_t])
r_t = σ(W_r · [h_{t−1}, x_t])
h̃_t = tanh(W_h̃ · [r_t ⊙ h_{t−1}, x_t])
h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t
wherein [ , ] represents the concatenation of two vectors, · represents the vector inner product, σ is an activation function, tanh is the hyperbolic tangent activation function, W_r is the weight matrix of the reset gate, W_z is the weight matrix of the update gate, W_h̃ is the weight matrix of the candidate state h̃_t, x_t is a feature vector, h_t is the hidden state at time t, and h_{t−1} is the hidden state at time t−1;
the CNN network model in step S41 includes three convolutional layers, three pooling layers and three activation layers; a convolutional layer connected with an activation layer and then with a pooling layer forms one group of network units, and three groups of network units connected in sequence form the CNN network model;
s42, inputting the blind source separation result matrixes into a CNN network model, and performing feature extraction and pooling dimension reduction operation to obtain feature vectors;
s43, inputting the feature vector into an RNN network model for processing to obtain a probability value of the predicted action category;
s44, calculating the distance Loss_m between the probability values of the predicted action category and the real action category through cross entropy;
s45, judging whether the difference between the m-th Loss_m value and the (m−1)-th Loss_{m−1} value is smaller than a threshold value; if so, obtaining the trained CNN-RNN model; otherwise, updating the weight parameters and the bias parameters in the CNN network model and the weight parameters of the RNN network model by adopting a batch stochastic gradient descent method, adding 1 to the iteration number m, and jumping to the step S42.
2. The method for recognizing rehabilitation training actions of upper limbs and hands of stroke patients as claimed in claim 1, wherein the input-output relation of the convolutional layer is:

x_j^(l) = f( Σ_{i=1}^{M_{l−1}} x_i^(l−1) * k_{ij}^(l) + b_j^(l) )

wherein x_i^(l−1) is the data of the i-th input channel of the (l−1)-th convolutional layer, x_j^(l) is the data of the j-th output channel of the l-th convolutional layer, M_{l−1} is the number of input channels of the (l−1)-th convolutional layer, k_{ij}^(l) is the l-th layer convolution kernel weight, b_j^(l) is the bias of the l-th convolutional layer, and 1 ≤ l ≤ 3; the data of the i-th input channel of the 1st convolutional layer is the i-th row of the blind source separation result matrix H_{r×n}.
3. The method of claim 1, wherein inputting the hidden state h_t of the second of the two bidirectional GRU layers into the attention layer for processing comprises the following steps:
a1, inputting the hidden state h_t of the second bidirectional GRU layer into the attention layer;
a2, initializing the weight W_w and bias b_w of the attention layer;
a3, obtaining the hidden-layer representation u_t of the hidden state h_t through the tanh hyperbolic tangent activation function according to the attention-layer weight W_w and bias b_w;
a4, randomly initializing a weight vector u_w and performing softmax normalization on the hidden-layer representation u_t to obtain the attention weight α_t;
a5, weighting the hidden state h_t by the attention weight α_t to obtain the attention-weighted representation q_t of the hidden state h_t.
4. The method of claim 1, wherein the process of inputting the attention-weighted representation q_t into the fully connected layer for processing comprises:
b1, inputting the attention-weighted representation q_t into the fully connected layer and performing discrete processing to obtain the attention-weighted outputs o_k, k = 1, 2, …, C, where C is the number of neurons of the fully connected layer;
b2, performing a random inactivation operation on the attention-weighted outputs o_k and a classification operation using softmax to obtain the probability value of the predicted action category.
5. The method for recognizing rehabilitation training actions of upper limbs and hands of stroke patients according to claim 4, wherein the probability value of the predicted action category in step B2 is calculated by the formula:

s_k = exp(o_k) / Σ_{k′=1}^{C} exp(o_{k′})

wherein s_k is the probability value of the predicted k-th action.
CN201911394850.3A 2019-12-30 2019-12-30 Method for recognizing rehabilitation training actions of upper limbs and hands of stroke patient Active CN111184512B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911394850.3A CN111184512B (en) 2019-12-30 2019-12-30 Method for recognizing rehabilitation training actions of upper limbs and hands of stroke patient


Publications (2)

Publication Number Publication Date
CN111184512A CN111184512A (en) 2020-05-22
CN111184512B true CN111184512B (en) 2021-06-01

Family

ID=70684400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911394850.3A Active CN111184512B (en) 2019-12-30 2019-12-30 Method for recognizing rehabilitation training actions of upper limbs and hands of stroke patient

Country Status (1)

Country Link
CN (1) CN111184512B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111938660B (en) * 2020-08-13 2022-04-12 电子科技大学 An action recognition method for stroke patients' hand rehabilitation training based on array electromyography
CN111950460B (en) * 2020-08-13 2022-09-20 电子科技大学 Muscle strength self-adaptive stroke patient hand rehabilitation training action recognition method
CN112043269B (en) * 2020-09-27 2021-10-19 中国科学技术大学 A method for extracting and identifying muscle spatial activation patterns during gesture actions
CN114081513B (en) * 2021-12-13 2023-04-07 苏州大学 Electromyographic signal-based abnormal driving behavior detection method and system
CN114649079B (en) * 2022-03-25 2025-02-14 南京信息工程大学无锡研究院 A prediction method for encoder-decoder of GCN and bidirectional GRU
CN115281902A (en) * 2022-07-05 2022-11-04 北京工业大学 Myoelectric artificial limb control method based on fusion network
CN115311737B (en) * 2022-07-10 2025-10-14 复旦大学 Unnoticed hand movement recognition method for stroke patients based on deep learning
CN116013548B (en) * 2022-12-08 2024-04-09 广州视声健康科技有限公司 Intelligent ward monitoring method and device based on computer vision
CN115831368B (en) * 2022-12-28 2023-06-16 东南大学附属中大医院 Rehabilitation analysis management system based on cerebral imaging stroke patient data
CN116561518B (en) * 2023-05-24 2025-08-22 电子科技大学 A brain network construction method based on brain region weight correlation

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018119316A1 (en) * 2016-12-21 2018-06-28 Emory University Methods and systems for determining abnormal cardiac activity
CN108388348A (en) * 2018-03-19 2018-08-10 浙江大学 A kind of electromyography signal gesture identification method based on deep learning and attention mechanism
CN109480838A (en) * 2018-10-18 2019-03-19 北京理工大学 A kind of continuous compound movement Intention Anticipation method of human body based on surface layer electromyography signal
CN109924977A (en) * 2019-03-21 2019-06-25 西安交通大学 A kind of surface electromyogram signal classification method based on CNN and LSTM
CN110337269A (en) * 2016-07-25 2019-10-15 开创拉布斯公司 Method and apparatus for inferring user intent based on neuromuscular signals
CN110399846A (en) * 2019-07-03 2019-11-01 北京航空航天大学 A Gesture Recognition Method Based on Correlation of Multi-channel EMG Signals
CN110610172A (en) * 2019-09-25 2019-12-24 南京邮电大学 An EMG gesture recognition method based on RNN-CNN architecture

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108885870A (en) * 2015-12-01 2018-11-23 流利说人工智能公司 System and method for implementing a voice user interface by combining a speech-to-text system with a speech-to-intent system
KR101785500B1 (en) * 2016-02-15 2017-10-16 인하대학교산학협력단 A monophthong recognition method based on facial surface EMG signals by optimizing muscle mixing
US20190121306A1 (en) * 2017-10-19 2019-04-25 Ctrl-Labs Corporation Systems and methods for identifying biological structures associated with neuromuscular source signals
US10709390B2 (en) * 2017-03-02 2020-07-14 Logos Care, Inc. Deep learning algorithms for heartbeats detection
CN109308459B (en) * 2018-09-05 2022-06-24 南京大学 Gesture estimation method based on finger attention model and key point topology model
CN109359619A (en) * 2018-10-31 2019-02-19 浙江工业大学之江学院 A high-density surface electromyography signal decomposition method based on convolution blind source separation
CN109820525A (en) * 2019-01-23 2019-05-31 五邑大学 A driving fatigue recognition method based on CNN-LSTM deep learning model


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on sEMG Gesture Recognition Based on Deep Neural Networks; Zhang Longjiao et al.; Computer Engineering and Applications; 2019-06-05, No. 23, pp. 113–119 *

Also Published As

Publication number Publication date
CN111184512A (en) 2020-05-22

Similar Documents

Publication Publication Date Title
CN111184512B (en) Method for recognizing rehabilitation training actions of upper limbs and hands of stroke patient
Chen et al. Hand gesture recognition based on surface electromyography using convolutional neural network with transfer learning method
CN110765920A (en) Motor imagery classification method based on convolutional neural network
CN111584029B (en) Electroencephalogram self-adaptive model based on discriminant confrontation network and application of electroencephalogram self-adaptive model in rehabilitation
CN109620223A (en) A kind of rehabilitation of stroke patients system brain-computer interface key technology method
US12106204B2 (en) Adaptive brain-computer interface decoding method based on multi-model dynamic integration
CN110619322A (en) Multi-lead electrocardio abnormal signal identification method and system based on multi-flow convolution cyclic neural network
CN113111831A (en) Gesture recognition technology based on multi-mode information fusion
CN107958213A (en) A kind of cospace pattern based on the medical treatment of brain-computer interface recovering aid and deep learning method
CN112120697A (en) Muscle fatigue advanced prediction and classification method based on surface electromyographic signals
CN110333783B (en) An irrelevant gesture processing method and system for robust electromyography control
CN115607169B (en) Electroencephalogram signal identification method based on self-adaptive multi-view deep learning framework
CN111938660B (en) An action recognition method for stroke patients' hand rehabilitation training based on array electromyography
Xu et al. A novel event-driven spiking convolutional neural network for electromyography pattern recognition
CN116522106A (en) Motor imagery electroencephalogram signal classification method based on transfer learning parallel multi-scale filter bank time domain convolution
CN111631908A (en) Active hand training system and method based on brain-computer interaction and deep learning
CN109498370A (en) Joint of lower extremity angle prediction technique based on myoelectricity small echo correlation dimension
CN112043269B (en) A method for extracting and identifying muscle spatial activation patterns during gesture actions
CN112244851B (en) Muscle movement recognition method and surface electromyographic signal acquisition device
CN114841191A (en) Epilepsia electroencephalogram signal feature compression method based on fully-connected pulse neural network
CN114384999B (en) User-independent EMG gesture recognition system based on adaptive learning
CN114548165B (en) A cross-user electromyographic pattern classification method
CN114098768A (en) Cross-individual surface electromyographic signal gesture recognition method based on dynamic threshold and easy TL
Chan et al. Unsupervised domain adaptation for gesture identification against electrode shift
CN115137375B (en) Surface electromyographic signal classification method based on double-branch network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant