CN105676639A - Parallel multi-modal brain control method for complete grabbing operation of artificial hand - Google Patents
Publication number: CN105676639A (application CN201610019136.6A); Authority: CN (China); Legal status: Granted
Classifications
- G05B13/04 — Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion; electric; involving the use of models or simulators
Description
Technical Field
The invention relates to the field of intelligent robots, and in particular to a parallel multi-modal brain-control method for the complete grasping operation of a prosthetic hand.
Background
With China's continued economic development, both the total number of disabled people and their share of the population have risen. Many of them lack limb mobility or muscle control owing to amputation, amyotrophic lateral sclerosis (ALS), and similar conditions, and their families often invest substantial manpower and money in their care. In recent years, prosthetic-hand control strategies based on EEG control sources have gradually been adopted. These use the EEG signal as the source that drives prosthetic-hand movements, establishing a direct link between the two and thereby helping disabled people whose spinal cord or peripheral nerves are damaged but whose central nervous system is intact regain the ability to interact with the outside world. This accords with national policy, lightens the burden on society, and promises considerable economic benefit.
Hybrid brain-control technology is a new research direction built on traditional brain-computer interface (BCI) technology: a new physiological electrical signal is added to a traditional single-paradigm BCI to obtain a hybrid brain-control system. The new physiological signals include electrooculography, electrocardiography, blood-flow changes, and other modalities. Hybrid brain-control methods can be divided, by how the signals are combined, into parallel and serial modes. A parallel-mode hybrid method realizes a cooperative control strategy between two kinds of EEG signals, increasing system reliability while improving the control accuracy of the whole system.
At present, domestic researchers still use EEG signals from a single paradigm as the control source for intelligent prosthetic hands and have not studied prosthetic-hand control based on hybrid brain-control technology in depth. Therefore, exploiting the differences between the EEG features elicited under the scene-animation visually evoked paradigm and the facial-expression-driven paradigm, the present invention proposes a parallel multi-modal brain-control method for the complete grasping operation of a prosthetic hand. By strengthening the separability between the different classes of EEG signals, it addresses key problems of the brain-controlled grasping system such as classification accuracy, information transfer rate, and reliability.
Summary of the Invention
The purpose of the present invention is to overcome the above deficiencies of the prior art. Building on existing methods for controlling an intelligent prosthetic hand with EEG signals, it combines the reliability of the scene-animation visually evoked and expression-driven brain-control paradigms to improve the accuracy of the complete grasping operation, and provides a parallel multi-modal brain-control method for the complete grasping operation of a prosthetic hand.
To achieve the above object, the present invention adopts the following technical solution:
A parallel multi-modal brain-control method for the complete grasping operation of a prosthetic hand, comprising the following steps:
(1) The subject wears the EEG acquisition module, with all electrodes at standard positions of the international 10/20 system. According to the generation mechanism of scene-animation visually evoked EEG, signals are collected from channels O1 and O2 over the occipital region of the cerebral cortex; according to the generation mechanism of expression-driven EEG, signals are collected from channels F7 and F8 over the lateral frontal cortex.
(2) The scene-animation visual evocation module is pre-loaded with the full process of a disabled person's simple daily action of drinking water, performed together with four different facial-expression drives and decomposed into four consecutive action-scene images. In each image the disabled person's face shows a different expression. Each action-scene image is converted to grayscale to yield two sharply contrasting, color-inverted black-and-white pictures, which are presented to the subject alternately for visual evocation. The alternation frequency of the two inverted pictures, i.e. the flicker frequency, equals the pulse-width-modulation frequency of that picture pair; the modulation range is 2-30 Hz.
The four consecutive action-scene images obtained by decomposition are defined as scene animations one to four, and the four expression drives are eyebrow raising, frowning, twitching the mouth to the left, and twitching the mouth to the right. Eyebrow raising appears in scene animation one, frowning in scene animation two, the leftward mouth twitch in scene animation three, and the rightward mouth twitch in scene animation four.
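The flicker stimulus of step (2) — two color-inverted black-and-white pictures alternated at a 2-30 Hz frequency — can be sketched as follows. This is a minimal NumPy illustration; the 128 grayscale threshold, the 60 fps display rate, and the random test image are assumptions, not values from the patent.

```python
import numpy as np

def make_flicker_pair(gray_img: np.ndarray):
    """Threshold a grayscale frame (0-255) to black/white and return it
    together with its color-inverted counterpart, as used for evocation."""
    bw = (gray_img >= 128).astype(np.uint8) * 255
    return bw, 255 - bw

def flicker_schedule(freq_hz: float, duration_s: float, fps: int = 60):
    """For each video frame, which image of the pair (0 or 1) to show,
    so that the pair alternates at freq_hz (valid range 2-30 Hz)."""
    t = np.arange(int(duration_s * fps)) / fps
    return ((t * freq_hz) % 1.0 < 0.5).astype(int)

img = np.random.default_rng(0).integers(0, 256, (64, 64))
a, b = make_flicker_pair(img)
idx = flicker_schedule(10.0, 1.0)  # 10 Hz flicker for 1 s at 60 fps
```

Each pixel of the two pictures sums to 255, i.e. they are exact inversions, and the schedule alternates the pair roughly twenty times per second at 10 Hz.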
(3) The subject expresses control intent by gazing at a given scene-animation visual evocation interface while performing the corresponding expression drive. The EEG acquisition module simultaneously records the subject's four-channel EEG (O1, O2, F7, F8) for offline analysis.
(4) First, the EEG acquisition module amplifies and filters the collected four-channel EEG (O1, O2, F7, F8) and transmits it wirelessly to the signal processing module via the Bluetooth module. The signal processing module performs spectral analysis of the four channels with the power-spectral-density method and extracts the alpha-band energy spectral density. Under simultaneous scene-animation visual evocation and expression drive, it computes the weight ω, the proportion between the alpha-band energy spectral density of the O1/O2 channels (produced by the visual evocation) and that of the F7/F8 channels (produced by the expression drive).
Second, using the parallel control method and according to the characteristics of the EEG produced under the scene-animation steady-state visually evoked and expression-driven paradigms, the time-frequency-domain feature values of the alpha band are extracted for all four channels simultaneously via the power-spectral-density method. The alpha-band feature extraction algorithm for the visually evoked channels is the fast Fourier transform; for the expression-driven channels it is the wavelet-transform modulus mean.
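The alpha-band energy and the weight ω of step (4) can be sketched with a Welch power-spectral-density estimate. Note the patent does not print the exact formula for ω, so the ratio below (occipital alpha energy over total alpha energy) is an illustrative assumption, as are the 128 Hz sampling rate and the 8-13 Hz alpha band.

```python
import numpy as np
from scipy.signal import welch

FS = 128  # Hz, assumed sampling rate of the EEG cap

def alpha_band_energy(sig, fs=FS, band=(8.0, 13.0)):
    """Alpha-band energy from the Welch power-spectral-density estimate."""
    freqs, psd = welch(sig, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].sum())

def alpha_weight(o1, o2, f7, f8, fs=FS):
    """Proportion omega of occipital (O1/O2, visually evoked) alpha energy
    relative to the total alpha energy of both paradigms; this ratio is
    an assumption, the patent text omits the formula."""
    p_occ = alpha_band_energy(o1, fs) + alpha_band_energy(o2, fs)
    p_front = alpha_band_energy(f7, fs) + alpha_band_energy(f8, fs)
    return p_occ / (p_occ + p_front)

t = np.arange(4 * FS) / FS
rng = np.random.default_rng(1)
occ = np.sin(2 * np.pi * 10 * t)              # strong 10 Hz alpha response
frontal = 0.1 * rng.standard_normal(t.size)   # weak frontal background
omega = alpha_weight(occ, occ, frontal, frontal)
```

With a strong occipital 10 Hz component and weak frontal noise, ω approaches 1; balanced paradigms would give values near 0.5.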
(5) The weight ω computed in step (4) determines the number of training samples of the characteristic EEG signals produced under the two brain-control paradigms, which are then fed into a BP neural network classifier for training.
(6) After sample training, the subject again expresses control intent by gazing at a scene-animation visual evocation interface while performing an expression drive. Once the EEG acquisition module has recorded the four-channel EEG (O1, O2, F7, F8), the method returns to step (4) for online target recognition; the extracted EEG features are classified online by the BP neural network using the training results of step (5).
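The classifier of steps (5)-(6) is a BP (backpropagation-trained) neural network. A multilayer perceptron trained with backpropagation can stand in for it; in this sketch the feature vectors are synthetic placeholders, and the layer size and iteration count are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Placeholder 4-D time-frequency feature vectors for the four action
# classes (0: open, 1: grasp, 2: pronate, 3: supinate); real features
# would come from the FFT / wavelet extraction of step (4).
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(40, 4)) for c in range(4)])
y = np.repeat(np.arange(4), 40)

# MLPClassifier trains by backpropagation, matching the "BP neural
# network" role; hyperparameters here are illustrative.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
```

In online use, `clf.predict` would map each new feature vector to one of the four basic hand actions.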
(7) The recognition result of step (6) is transmitted wirelessly via the communication module to the drive control module inside the prosthetic forearm socket, which controls the prosthetic-hand body module to perform the basic actions.
In the above method, the four expressions are eyebrow raising, frowning, the leftward mouth twitch, and the rightward mouth twitch; each expression is repeated at least three times per drive. Eyebrow raising with scene animation one controls hand opening; frowning with scene animation two controls grasping; the leftward mouth twitch with scene animation three controls wrist pronation; the rightward mouth twitch with scene animation four controls wrist supination.
Compared with controlling a prosthetic hand with EEG under the traditional single paradigm, the advantages of the present invention are:
Addressing the limitations of single-paradigm brain-controlled prosthetic hands, the invention proposes a parallel multi-modal brain-control method for the complete grasping operation: the EEG elicited while the subject gazes at different scene animations and the EEG produced by different expression drives serve jointly as control information sources. By improving the separability between the different classes of EEG signals, precise control of the prosthetic hand is achieved, further raising the accuracy and information transfer rate of the brain-controlled prosthetic system.
Brief Description of the Drawings
Fig. 1 is a schematic layout of the EEG acquisition module of the present invention.
Fig. 2 is a schematic diagram of scene-animation-based visual evocation, where
(a) is the prosthetic-hand opening and eyebrow-raising scene;
(b) the grasping and frowning scene;
(c) the wrist-pronation and leftward-mouth-twitch scene;
(d) the wrist-supination and rightward-mouth-twitch scene.
In each scene the action scene is on the left; the middle and right images are the two contrasting, color-inverted black-and-white pictures obtained after grayscale processing.
Fig. 3 shows the scene-animation visually evoked stimulation interface of the intelligent prosthetic hand.
Fig. 4 is a schematic diagram of the control system of the present invention, where (a) is a flow chart of the control method and (b) is a structural block diagram of the control device.
Detailed Description
A parallel multi-modal brain-controlled prosthetic-hand device based on the complete grasping operation comprises: an EEG acquisition module worn on the subject's head, a portable signal processing module worn at the subject's waist, a prosthetic-hand drive control module, a prosthetic-hand body module, a Bluetooth transmission module, a communication module, and a scene-animation visual evocation module placed within the subject's field of view. The visual evocation module plays the decomposed and processed flickering images of the different action scenes to the subject in order to elicit EEG signals.
In the above scheme, the EEG acquisition module is a dedicated portable EMOTIV EEG cap with built-in amplification and filtering, and the F7, F8, O1, and O2 channel signals under the international 10/20 standard are selected. The portable signal processing module uses an embedded microprocessor. The EEG acquisition module and the signal processing module are connected wirelessly through the Bluetooth transmission module; the signal processing module and the prosthetic hand's drive control module are connected wirelessly through the communication module. The scene-animation visual evocation module is one of a computer monitor, a television screen, a mobile phone, or a tablet computer.
Referring to Fig. 1 and Fig. 4(b): in Fig. 1, according to the generation mechanisms of scene-animation visual evocation and expression drive, EEG is collected at positions O1 and O2 over the occipital region of the subject's head and at F7 and F8 over the lateral frontal cortex, with reference electrodes placed behind both ears. The prosthetic-hand control device of the invention includes an EEG acquisition module 310 placed on the subject's head and a scene-animation visual evocation module 370 placed within the subject's field of view. A portable 16-channel wireless EEG acquisition device is preferred, selecting the EEG at the occipital O1/O2 and lateral-frontal F7/F8 positions of the international 10/20 standard. The EEG acquisition module amplifies and filters the collected signals and sends them through the Bluetooth transmission module 320 to the portable signal processing module 330. The signal processing module performs feature extraction and pattern recognition on the EEG; the recognition result is transmitted over TTL serial communication to the drive control module 350 housed in the forearm socket of the prosthetic-hand body module 360, which converts it into motor control commands that complete the target movement of the prosthetic hand.
Referring to Fig. 2, the scene-animation visual evocation unit decomposes the drinking process under the different expression drives into four target action scenes (hand opening with eyebrow raising, grasping with frowning, wrist pronation with the leftward mouth twitch, and wrist supination with the rightward mouth twitch) and pre-loads them into the scene-animation visual evocation module.
In scene one (a), the prosthetic hand is in its initial open state while the eyebrow-raising expression drive controls it to complete the opening action. In scene two (b), the prosthetic hand grasps a cup while the frowning expression drive controls it to complete the grasping action. In scene three (c), the prosthetic wrist pronates while the leftward-mouth-twitch drive controls it to complete pronation; in scene four (d), the prosthetic wrist supinates while the rightward-mouth-twitch drive controls it to complete supination, thereby realizing a complete drinking sequence.
Referring to Fig. 4(a), in the parallel multi-modal brain-control method of the invention, when the subject simultaneously undergoes scene-animation visual evocation and performs an expression drive, the EEG acquisition device collects the occipital and lateral-frontal EEG and preprocesses it by amplification and band-pass filtering. The signal processing device extracts the power spectral density and time-frequency-domain features of the preprocessed EEG and recognizes the different basic prosthetic-hand action classes from those features. Finally, the prosthesis controls the hand to perform the four basic actions according to the recognition result. The method comprises the following steps:
(1) In step S110, the subject gazes at the scene-animation visual evocation module while making one of the four simple expressions (eyebrow raising, frowning, the leftward mouth twitch, or the rightward mouth twitch; Fig. 3), and the O1, O2, F7, and F8 EEG signals are collected synchronously. In this embodiment, the four basic actions of the intelligent prosthetic hand are hand opening, grasping, wrist pronation, and wrist supination.
(2) In step S120, the collected EEG is preprocessed. In this embodiment it is first amplified and then band-pass filtered at 2-50 Hz.
(3) In step S130, the time-frequency-domain features of the EEG are extracted. After the subject, according to his or her needs, gazes at the desired scene-animation visual evocation interface and makes the corresponding one of the four simple expressions, the O1, O2, F7, and F8 EEG signals are extracted and the power-spectral-density method yields the alpha-band time-frequency-domain features of the four channels.
r(k) = E[x(t) x*(t+k)]   (2)
where r(k) is the autocorrelation function of the EEG signal of a given channel and x(t) is that channel's EEG signal;
The weight ω — the proportion between the alpha-band energy spectral density of the characteristic O1/O2 channel EEG produced by scene-animation visual evocation and that of the characteristic F7/F8 channel EEG produced by the expression drive — is then computed,
where PO1(i) and PO2(i) denote the alpha-band energy spectral densities of the O1 and O2 channel EEG produced by the visual evocation, and PF7(i) and PF8(i) those of the F7 and F8 channel EEG produced by the expression drive.
After the weight ω is obtained, the alpha-band time-frequency-domain features of the four channels are computed. In this embodiment, the fast Fourier transform computes the frequency-domain features of the O1/O2 alpha band produced by the visual evocation, and the wavelet-transform modulus-mean method computes the time-frequency features of the F7/F8 alpha band produced by the expression drive. Besides the fast Fourier transform and wavelet transform used in this embodiment, feature-extraction methods such as principal component analysis or common spatial patterns may also be used.
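The two feature extractors named above can be sketched as follows: an FFT-based alpha-band magnitude feature for the O1/O2 channels, and a wavelet modulus mean for the F7/F8 channels. The patent does not name the mother wavelet, so the hand-rolled single-filter Haar decomposition here is an illustrative stand-in, as is the 128 Hz sampling rate.

```python
import numpy as np

FS = 128  # Hz, assumed sampling rate

def fft_alpha_feature(sig, fs=FS, band=(8.0, 13.0)):
    """Mean FFT magnitude inside the alpha band (O1/O2 SSVEP channels)."""
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(spec[mask].mean())

def haar_modulus_mean(sig, levels=3):
    """Mean absolute detail coefficient over a few Haar DWT levels
    (F7/F8 expression channels); Haar is an assumed wavelet choice."""
    x = np.asarray(sig, dtype=float)
    detail_means = []
    for _ in range(levels):
        if len(x) % 2:            # keep an even length per level
            x = x[:-1]
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        detail_means.append(np.abs(detail).mean())
        x = approx
    return float(np.mean(detail_means))

t = np.arange(2 * FS) / FS
alpha_feat = fft_alpha_feature(np.sin(2 * np.pi * 10 * t))  # in-band tone
beta_feat = fft_alpha_feature(np.sin(2 * np.pi * 30 * t))   # out-of-band tone
```

A 10 Hz tone yields a much larger alpha feature than a 30 Hz tone, and the Haar modulus mean vanishes for a constant signal since every detail coefficient is zero.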
(4) In step S140, the alpha-band energy-proportion weight ω under the four control sources determines the number of training samples per channel in the classifier training set, and the classifier is trained to recognize the basic prosthetic-hand action type corresponding to each control source. In this embodiment a BP neural network performs pattern recognition on the feature values. Before formally using the intelligent prosthetic hand, the subject must train the BP neural network classifier with the time-frequency-domain features of the four parallel multi-modal EEG signals; the classifier output then controls the prosthetic hand to perform the four basic actions, whose recognition results are shown in Table 1.
Table 1. EEG recognition results
(5) In step S150, the drive control module in the prosthetic forearm socket performs the four basic actions according to the recognition result.
(6) In step S160, after the prosthetic-hand body module completes the four basic actions, feedback can be realized through visual information and biological perception.
Referring to Fig. 4(b) and based on the method of Fig. 4(a), the present invention correspondingly provides a control device comprising: an EEG signal acquisition module 310, a Bluetooth transmission module 320, a signal processing module 330, a communication module 340, a drive control module 350, a prosthetic-hand body module 360, and a scene-animation visual evocation module 370.
In this device, the EEG signal acquisition module 310 collects the expression-driven characteristic EEG under the international 10/20 standard, with the positions behind both ears as reference. The portable signal processing module 330 uses an embedded microprocessor. The EEG acquisition module and the signal processing module are connected through the Bluetooth transmission module; the signal processing module and the prosthetic hand's drive control module are connected through the communication module. The scene-animation visual evocation module is one of a computer monitor, a television screen, a mobile phone, or a tablet computer.
When the scene-animation visual evocation module 370 is working, the EEG acquisition module 310 collects the O1, O2, F7, and F8 EEG signals; a 16-channel wireless Emotiv EEG cap is used, whose bundled software performs amplification and filtering. The Bluetooth transmission module 320 is connected to the signal processing module 330 and passes it the collected EEG. The signal processing module 330 receives the EEG from the Bluetooth transmission module 320 and performs feature extraction and pattern recognition: the fast Fourier transform and wavelet-transform modulus-mean methods extract the feature vectors, and the BP neural network method recognizes the four basic actions of the prosthetic hand. The signal processing module may be a wearable embedded microprocessor. The communication module 340 consists of a transmitting terminal module and a receiving terminal module, whose initialization is completed with AT commands; the transmitter connects to the signal processing module 330 and the receiver to the prosthetic hand's drive control module 350. The transmitting terminal module sends the recognition result over TTL serial communication, through the receiving terminal module, to the drive control module. The drive control module 350 of the prosthetic hand consists of a motor-control sub-module and a motor-drive sub-module. It receives the basic-action-type recognition result via the communication module; the motor-control sub-module converts the result into 0/1 level control commands for the motor-drive sub-module, which controls the prosthetic-hand body module 360 to perform the four basic actions according to those level commands. If, for example, the recognition result is a grasp, the drive control module drives the prosthetic hand to complete the grasping action.
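The conversion of a recognition result into 0/1 level control commands can be sketched as below. The patent does not specify the byte protocol of the TTL serial link, so the label names, bit assignments, and single-byte encoding are all hypothetical.

```python
# Hypothetical mapping from the classifier's action label to a two-bit
# 0/1 level command packed into one byte for the TTL serial link.
ACTION_TO_BITS = {
    "open": (0, 0),
    "grasp": (0, 1),
    "pronate": (1, 0),    # wrist internal rotation
    "supinate": (1, 1),   # wrist external rotation
}

def command_byte(action: str) -> bytes:
    """Pack the two level bits for the given action into one byte."""
    hi, lo = ACTION_TO_BITS[action]
    return bytes([(hi << 1) | lo])
```

The motor-drive sub-module would then decode the two bits back into one of the four basic actions.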
According to the action commands from the drive control module 350, the prosthetic-hand body module 360 drives the respective motors and finally realizes the four basic actions of the prosthetic hand.
The above embodiment only illustrates the technical concept and characteristics of the present invention; its purpose is to enable those skilled in the art to understand and implement the invention, and it does not limit the scope of protection of the invention.
Claims (2)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610019136.6A CN105676639B (en) | 2016-01-12 | 2016-01-12 | A kind of concurrent multimode state brain prosecutor method for complete grasping manipulation of doing evil through another person |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610019136.6A CN105676639B (en) | 2016-01-12 | 2016-01-12 | A kind of concurrent multimode state brain prosecutor method for complete grasping manipulation of doing evil through another person |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN105676639A true CN105676639A (en) | 2016-06-15 |
| CN105676639B CN105676639B (en) | 2018-12-07 |
Family
ID=56300547
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201610019136.6A Active CN105676639B (en) | A parallel multi-modal brain control method for the complete grasping operation of a prosthetic hand | 2016-01-12 | 2016-01-12 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN105676639B (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112353391A (en) * | 2020-10-22 | 2021-02-12 | 武汉理工大学 | Electroencephalogram signal-based method and device for recognizing sound quality in automobile |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2000279435A (en) * | 1999-03-29 | 2000-10-10 | Shimadzu Corp | Eye-gaze input control system for body assist devices |
| WO2002049534A2 (en) * | 2000-12-19 | 2002-06-27 | Alorman Advanced Medical Technologies, Ltd. | Method for controlling multi-function myoelectric prosthesis |
| CN101057795A (en) * | 2007-05-18 | 2007-10-24 | 天津大学 | Artificial hand using muscle electrical and electroencephalogram cooperative control and controlling method thereof |
| CN102133139A (en) * | 2011-01-21 | 2011-07-27 | 华南理工大学 | Artificial hand control system and method |
| CN102309365A (en) * | 2011-08-30 | 2012-01-11 | 西安交通大学苏州研究院 | Wearable brain-control intelligent prosthesis |
| CN104997581A (en) * | 2015-07-17 | 2015-10-28 | 西安交通大学 | Artificial hand control method and apparatus for driving EEG signals on the basis of facial expressions |
| CN105022486A (en) * | 2015-07-17 | 2015-11-04 | 西安交通大学 | Electroencephalogram identification method based on different expression drivers |
Non-Patent Citations (3)
| Title |
|---|
| LIU C, et al.: "Design on portable brain control system and its application", 《CYBER TECHNOLOGY IN AUTOMATION, CONTROL, AND INTELLIGENT SYSTEMS (CYBER), 2015 IEEE INTERNATIONAL CONFERENCE ON. IEEE》 * |
| SCHWARTZ A B, et al.: "Brain-controlled interfaces: movement restoration with neural prosthetics", 《NEURON》 * |
| LI YAONAN, et al.: "Research on a brain-computer interface driven neural prosthetic hand system" (in Chinese), 《Proceedings of the 7th National Symposium on Rehabilitation Hospital Engineering and Rehabilitation Engineering》 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN105676639B (en) | 2018-12-07 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11602300B2 (en) | Brain-computer interface based robotic arm self-assisting system and method | |
| CN113398422B (en) | Rehabilitation training system and method based on motor imagery-brain-computer interface and virtual reality | |
| CN104398325B (en) | The device and method of brain-myoelectric-controlled prosthesis based on scene stable state vision inducting | |
| CN109992113B (en) | MI-BCI system based on multi-scene induction and control method thereof | |
| CN111544854B (en) | Cerebral apoplexy motor rehabilitation method based on brain myoelectric signal deep learning fusion | |
| CN101711709B (en) | Electric artificial hand control method using electro-oculogram and electro-encephalic information | |
| CN107315478A (en) | A kind of Mental imagery upper limbs intelligent rehabilitation robot system and its training method | |
| CN112114670B (en) | Man-machine co-driving system based on hybrid brain-computer interface and control method thereof | |
| CN113101021B (en) | Mechanical arm control method based on MI-SSVEP hybrid brain-computer interface | |
| CN104997581B (en) | Artificial hand control method and apparatus for driving EEG signals on the basis of facial expressions | |
| CN103793058A (en) | Method and device for classifying active brain-computer interaction system motor imagery tasks | |
| CN107626040A (en) | It is a kind of based on the rehabilitation system and method that can interact virtual reality and nerve electric stimulation | |
| CN104360730A (en) | Man-machine interaction method supported by multi-modal non-implanted brain-computer interface technology | |
| CN108417249A (en) | VR-based audio-visual-tactile multimodal hand function rehabilitation method | |
| CN105411580B (en) | A kind of brain control wheelchair system based on tactile auditory evoked potential | |
| CN115482907A (en) | Active rehabilitation system combining electroencephalogram and myoelectricity and rehabilitation training method | |
| CN111584031B (en) | Brain-controlled intelligent limb rehabilitation system based on portable electroencephalogram acquisition equipment and application | |
| CN110916652A (en) | Data acquisition device and method for controlling robot movement based on motor imagery through electroencephalogram and application of data acquisition device and method | |
| CN113713333A (en) | Dynamic virtual induction method and system for lower limb rehabilitation full training process | |
| CN117873330B (en) | Electroencephalogram-eye movement hybrid teleoperation robot control method, system and device | |
| CN107562191A (en) | The online brain-machine interface method of fine Imaginary Movement based on composite character | |
| CN107510555A (en) | A kind of wheelchair E.E.G control device and control method | |
| CN109901711B (en) | Asynchronous real-time brain control method driven by weak myoelectricity artifact micro-expression electroencephalogram signals | |
| CN113359991A (en) | Intelligent brain-controlled mechanical arm auxiliary feeding system and method for disabled people | |
| CN106476281B (en) | Based on blink identification and vision induced 3D printer control method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||