
CN114818786B - Channel screening method, emotion recognition system and storage medium - Google Patents

Channel screening method, emotion recognition system and storage medium

Info

Publication number
CN114818786B
CN114818786B
Authority
CN
China
Prior art keywords: features, channel, feature, connection, aekm
Prior art date
Legal status: Active
Application number
CN202210354570.5A
Other languages
Chinese (zh)
Other versions
CN114818786A (en)
Inventor
陈创泉
黎真成
庞苗齐
许雷财
王洪涛
Current Assignee
Wuyi University Fujian
Original Assignee
Wuyi University Fujian
Priority date
Filing date
Publication date
Application filed by Wuyi University Fujian
Priority to CN202210354570.5A
Publication of CN114818786A
Application granted
Publication of CN114818786B
Status: Active
Anticipated expiration


Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/24 — Classification techniques
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 2218/00 — Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/02 — Preprocessing
    • G06F 2218/08 — Feature extraction
    • G06F 2218/12 — Classification; Matching
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D 30/00 — Reducing energy consumption in communication networks
    • Y02D 30/70 — Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

This application discloses a channel screening method, an emotion recognition method and system, and a storage medium. The method acquires multi-channel EEG signals and preprocesses them; obtains the frequency-domain feature of each channel and the brain connectivity feature between every two channels, and thereby obtains connection-channel features; inputs each connection-channel feature and the emotion labels into a broad learning system to obtain a recognition result, which is verified to obtain the average test accuracy; and sorts and screens all connection-channel features according to the average test accuracy to obtain a set of key connection-channel features, from which the important channels are obtained. Important frequency-domain features and important brain connectivity features are extracted from the important channels and mapped by the AEKM algorithm to obtain AEKM frequency-domain features and AEKM brain connectivity features, which are spliced into fused features; the fused features and the emotion labels are then input into the broad learning system for cross-subject emotion recognition, to obtain the classification accuracy.

Description

Channel screening method, emotion recognition method, system and storage medium

Technical Field

This application relates to, but is not limited to, the technical field of emotion recognition, and in particular to a channel screening method, an emotion recognition method, a system and a storage medium.

Background

As one of the most basic psychological processes of human beings, emotional change plays a vital role in daily life. With the rapid development of human-computer interaction technology, researchers pay increasing attention to enhancing the ability of computers to recognize, interpret, process and simulate human emotions.

The emotion-related EEG data collection and emotion recognition methods in the related art mainly use all electrode channels of an EEG cap for data acquisition and build models within a single subject using fused features. During data collection, the subject needs to wear an EEG cap containing dozens or even hundreds of electrodes, so acquiring the signal data consumes a large amount of time and manpower.

Summary of the Invention

This application aims to solve at least one of the technical problems existing in the prior art. To this end, this application proposes a channel screening method, an emotion recognition method, a system and a storage medium, which can screen the channels of EEG signals and reduce the number of channels, thereby reducing the time and cost of data collection.

An embodiment of the first aspect of this application provides a channel screening method, including:

acquiring multi-channel EEG signals and preprocessing the EEG signals;

obtaining the frequency-domain feature of each channel from the preprocessed EEG signals;

obtaining the brain connectivity feature between every two channels from the preprocessed EEG signals;

obtaining connection-channel features from the frequency-domain features and the brain connectivity features, where each connection-channel feature includes one brain connectivity feature and two frequency-domain features, and the two channels corresponding to the brain connectivity feature correspond to the two frequency-domain features respectively;

inputting each connection-channel feature together with the emotion labels into a broad learning system, obtaining a recognition result and verifying the recognition result, to obtain the average test accuracy corresponding to each connection-channel feature;

sorting all the connection-channel features according to the average test accuracy and screening them according to a preset proportion threshold, to obtain a set of key connection-channel features;

obtaining the important channels according to the set of key connection-channel features.

The channel screening method according to the embodiments of the first aspect of this application has the following technical effects. The method acquires multi-channel EEG signals and preprocesses them; obtains the frequency-domain feature of each channel from the preprocessed EEG signals; obtains the brain connectivity feature between every two channels from the preprocessed EEG signals; and obtains connection-channel features from the frequency-domain features and the brain connectivity features, where each connection-channel feature includes one brain connectivity feature and two frequency-domain features, and the two channels corresponding to the brain connectivity feature correspond to the two frequency-domain features respectively. Each connection-channel feature and the emotion labels are input into a broad learning system, a recognition result is obtained and verified, and the average test accuracy corresponding to each connection-channel feature is obtained. All connection-channel features are sorted according to the average test accuracy and screened according to a preset proportion threshold to obtain a set of key connection-channel features, from which the important channels are obtained. In this way, the screening of EEG channels is completed and the number of channels is reduced, thereby reducing the time and cost of data collection.

According to some embodiments of the first aspect of this application, preprocessing the EEG signals includes:

filtering the EEG signals with 5 different frequency bands.

According to some embodiments of the first aspect of this application, obtaining the frequency-domain feature of each channel from the preprocessed EEG signals includes:

extracting features from the EEG signal of each frequency band of each channel by a first formula, to obtain the corresponding band differential entropy feature, where each frequency-domain feature includes the band differential entropy features corresponding to the 5 different frequency bands; the first formula is:

$$DE = \frac{1}{2}\log\left(2\pi e\sigma^{2}\right)$$

where $DE$ denotes the band differential entropy feature, $\sigma^{2}$ is the variance of the corresponding EEG signal, $\pi$ is a constant, and $e$ is Euler's number.

According to some embodiments of the first aspect of this application, obtaining the brain connectivity feature between every two channels from the preprocessed EEG signals includes:

calculating the instantaneous phase of the EEG signal of each channel by a second formula; the second formula is:

$$\varphi_{i}(t)=\arctan\frac{\hat{s}_{i}(t)}{s_{i}(t)}$$

where the total number of channels is C, $i=1,2,\dots,C$, $\varphi_{i}(t)$ denotes the instantaneous phase of the EEG signal of the i-th channel, $t=1,2,3,\dots,T$, T is the number of sampling points, $s_{i}(t)$ denotes the time series of the i-th channel, and $\hat{s}_{i}(t)$ is the Hilbert transform of $s_{i}(t)$;

obtaining the phase lag index feature between the EEG signals of every two channels from their instantaneous phases by a third formula, the phase lag index feature serving as the brain connectivity feature; the third formula is:

$$PLI_{i,k}=\left|\frac{1}{T}\sum_{t=1}^{T}\operatorname{sign}\bigl(\sin\bigl(\varphi_{i}(t)-\varphi_{k}(t)\bigr)\bigr)\right|$$

where $PLI_{i,k}$ denotes the phase lag index feature between the EEG signal of the i-th channel and the EEG signal of the k-th channel, $\varphi_{i}(t)$ denotes the instantaneous phase of the EEG signal of the i-th channel, $\varphi_{k}(t)$ denotes the instantaneous phase of the EEG signal of the k-th channel, T is the number of sampling points, $t=1,2,3,\dots,T$, $i=1,2,3,\dots,C$, and $k=1,2,3,\dots,C$.

According to some embodiments of the first aspect of this application, inputting each connection-channel feature and the emotion labels into the broad learning system, obtaining the recognition result and verifying the recognition result to obtain the average test accuracy corresponding to each connection-channel feature, includes:

obtaining mapped features from the connection-channel feature by a fourth formula; the fourth formula is:

$$Z_{i}=\varphi_{i}\bigl(X^{cc}_{k}W_{e_{i}}+\beta_{e_{i}}\bigr),\qquad Z^{p}=[Z_{1},\dots,Z_{p}]$$

where $Z^{p}$ denotes the set of mapped features, $Z_{i}$ denotes the i-th group of mapped features, $\varphi_{i}$ denotes the mapping function, p is the number of feature mappings, $W_{e_{i}}$ is a weight, $\beta_{e_{i}}$ is a bias term, and $X^{cc}_{k}$ denotes the connection-channel feature;

sending the set of mapped features into the enhancement nodes by a fifth formula to obtain enhancement features; the fifth formula is:

$$H_{j}=\xi_{j}\bigl(Z^{p}W_{h_{j}}+\beta_{h_{j}}\bigr),\qquad H^{q}=[H_{1},\dots,H_{q}]$$

where $H^{q}$ denotes the set of enhancement features, $H_{j}$ denotes the j-th group of enhancement features, $\xi_{j}$ denotes the nonlinear activation function, q is the number of feature enhancements, $W_{h_{j}}$ is a weight, $\beta_{h_{j}}$ is a bias term, and $Z^{p}$ denotes the set of mapped features;

concatenating the set of enhancement features with the set of mapped features to obtain the transformed feature set $A=[Z^{p},H^{q}]$, where A denotes the transformed feature set, $Z^{p}$ denotes the set of mapped features, and $H^{q}$ denotes the set of enhancement features;

minimizing the objective function of the broad learning system according to the transformed feature set and the emotion labels, to obtain the label space; the objective function is:

$$\min_{W}\;\lVert AW-Y\rVert_{F}^{2}+\lambda\lVert W\rVert_{2}^{2}$$

where $\lVert\cdot\rVert_{F}$ denotes the Frobenius norm of a matrix, $\lVert AW-Y\rVert_{F}^{2}$ measures the approximation accuracy, $\lambda\lVert W\rVert_{2}^{2}$ is the $\ell_{2}$ regularization term, W denotes the weights of the broad learning system, Y denotes the emotion labels, λ denotes the penalty factor, I denotes the identity matrix, A denotes the transformed feature set, $A=[Z^{p},H^{q}]$, $Z^{p}$ denotes the set of mapped features, and $H^{q}$ denotes the set of enhancement features; solving the objective function gives:

$$W=\bigl(\lambda I+A^{T}A\bigr)^{-1}A^{T}Y$$

performing verification according to the label space, to obtain the average test accuracy corresponding to each connection-channel feature.

An embodiment of the second aspect of this application provides an emotion recognition method, including the channel screening method according to any one of the embodiments of the first aspect of this application;

the emotion recognition method further includes:

extracting important frequency-domain features and important brain connectivity features according to the important channels, and mapping the important frequency-domain features and the important brain connectivity features respectively through the AEKM algorithm, to obtain AEKM frequency-domain features and AEKM brain connectivity features respectively;

splicing the AEKM frequency-domain features with the AEKM brain connectivity features to obtain fused features, and inputting the fused features and the emotion labels into the broad learning system for cross-subject emotion recognition, to obtain the classification accuracy.

According to some embodiments of the second aspect of this application, mapping the important frequency-domain features and the important brain connectivity features respectively through the AEKM algorithm to obtain the AEKM frequency-domain features and the AEKM brain connectivity features respectively includes:

obtaining the important features according to the important channels, specifically:

$$X^{m}=\bigl[x_{1}^{m},x_{2}^{m},\dots,x_{n}^{m}\bigr]^{T},\quad m\in\{FD,BC\}$$

where $m\in\{FD,BC\}$ and $X^{m}$ is the important feature; when $m=FD$, $X^{m}$ is the important frequency-domain feature; when $m=BC$, $X^{m}$ is the important brain connectivity feature; each row of $X^{m}$ consists of $(x_{i}^{m})^{T}$, $x_{i}^{m}$ is the i-th sample of m, and n is the number of samples;

mapping the important frequency-domain features and the brain connectivity features to obtain AEKM features, where

$$g^{m}(x^{m})\in\mathbb{R}^{l\times 1}$$

where $m\in\{FD,BC\}$, $g^{m}(x^{m})$ denotes the AEKM feature and l denotes its dimension; when $m=FD$, $g^{m}(x^{m})$ is the AEKM frequency-domain feature; when $m=BC$, $g^{m}(x^{m})$ is the AEKM brain connectivity feature; when $m=FD$, $x^{m}$ is the important frequency-domain feature; when $m=BC$, $x^{m}$ is the important brain connectivity feature;

mapping the important frequency-domain features and the brain connectivity features to obtain the AEKM features includes:

obtaining a set of landmark points for the important features $X^{m}$ through the k-means algorithm, specifically:

$$V^{m}=\bigl[v_{1}^{m},v_{2}^{m},\dots,v_{l}^{m}\bigr]$$

where $V^{m}$ denotes the set of landmark points, $v_{j}^{m}$ denotes the j-th landmark point, l denotes the dimension, and $j=1,2,\dots,l$;

constructing a first kernel matrix by a sixth formula; the sixth formula is:

$$K_{nl}^{m}(k,j)=\kappa\bigl(x_{k}^{m},v_{j}^{m},\sigma^{m}\bigr)$$

where $k=1,\dots,n$, $j=1,\dots,l$, n is the number of samples, l denotes the dimension, $K_{nl}^{m}$ denotes the first kernel matrix, $\sigma^{m}$ is the kernel parameter, and $\kappa(\cdot,\cdot,\cdot)$ is the kernel function;

constructing a second kernel matrix by a seventh formula; the seventh formula is:

$$K_{ll}^{m}(k,j)=\kappa\bigl(v_{k}^{m},v_{j}^{m},\sigma^{m}\bigr)$$

where $k=1,\dots,l$, $j=1,\dots,l$, n is the number of samples, l denotes the dimension, $K_{ll}^{m}$ denotes the second kernel matrix, $\sigma^{m}$ is the kernel parameter, and $\kappa(\cdot,\cdot,\cdot)$ is the kernel function;

performing eigenvalue decomposition on the second kernel matrix, specifically:

$$K_{ll}^{m}=U^{m}\Lambda^{m}\bigl(U^{m}\bigr)^{T}$$

where l denotes the dimension, $\Lambda^{m}$ is a diagonal matrix whose diagonal elements are the eigenvalues of $K_{ll}^{m}$, and the column vectors of $U^{m}$ are the eigenvectors corresponding to those eigenvalues;

obtaining the AEKM feature matrix $G^{m}$ from the first kernel matrix and the second kernel matrix by an eighth formula; the eighth formula is:

$$G^{m}=K_{nl}^{m}M^{m}\in\mathbb{R}^{n\times l},\qquad M^{m}=U^{m}\bigl(\Lambda^{m}\bigr)^{-\frac{1}{2}}$$

where n is the number of samples, l denotes the dimension, and $G^{m}$ denotes the AEKM feature matrix; when $m=FD$, $G^{m}$ is the AEKM frequency-domain feature matrix; when $m=BC$, $G^{m}$ is the AEKM brain connectivity feature matrix; $\Lambda^{m}$ is a diagonal matrix whose diagonal elements are the eigenvalues of $K_{ll}^{m}$, the column vectors of $U^{m}$ are the eigenvectors corresponding to those eigenvalues, and $M^{m}$ serves as the eigenvalue mapping matrix;

obtaining the fused feature matrix $G^{F}=[G^{BC},G^{FD}]$ from the AEKM frequency-domain feature matrix and the AEKM brain connectivity feature matrix, where $G^{F}$ denotes the fused feature matrix, $G^{FD}$ denotes the AEKM frequency-domain feature matrix, and $G^{BC}$ denotes the AEKM brain connectivity feature matrix.

An embodiment of the third aspect of this application provides a channel screening system, including: at least one first memory;

at least one first processor;

at least one program;

the program is stored in the first memory, and the first processor executes the at least one program to implement:

the channel screening method according to any one of the embodiments of the first aspect of this application.

An embodiment of the fourth aspect of this application provides an emotion recognition system, including: at least one second memory;

at least one second processor;

at least one program;

the program is stored in the second memory, and the second processor executes the at least one program to implement:

the emotion recognition method according to any one of the embodiments of the second aspect of this application.

An embodiment of the fifth aspect of this application provides a computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to execute:

the channel screening method according to any one of the embodiments of the first aspect of this application; or

the emotion recognition method according to any one of the embodiments of the second aspect of this application.

Additional aspects and advantages of this application will be set forth in part in the following description, and in part will become apparent from the following description or be learned through practice of this application.

Brief Description of the Drawings

This application is further described below with reference to the accompanying drawings and embodiments, in which:

FIG. 1 is a flow chart of the steps of a channel screening method according to some embodiments of this application;

FIG. 2 is a flow chart of the steps of obtaining brain connectivity features according to some embodiments of this application;

FIG. 3 is a flow chart of the steps of obtaining the average test accuracy corresponding to each connection-channel feature according to some embodiments of this application;

FIG. 4 is a flow chart of the steps of an emotion recognition method according to some embodiments of this application;

FIG. 5 is a flow chart of the steps of obtaining the fused feature matrix according to some embodiments of this application.

Detailed Description of the Embodiments

Embodiments of this application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are only used to explain this application, and shall not be construed as limiting this application.

In the description of this application, it should be understood that orientation or position relationships indicated by terms such as up, down, front, back, left and right are based on the orientation or position relationships shown in the drawings. They are used only to facilitate and simplify the description of this application, and do not indicate or imply that the referred device or element must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be construed as limiting this application.

In the description of this application, "several" means one or more, "a plurality of" means two or more, and terms such as "greater than", "less than" and "exceeding" are understood as excluding the stated number, while "above", "below", "within" and the like are understood as including the stated number. If "first" and "second" are described, they are used only to distinguish technical features and cannot be understood as indicating or implying relative importance, implicitly indicating the number of the indicated technical features, or implicitly indicating the order of the indicated technical features.

In the description of this application, unless otherwise expressly defined, terms such as "provide", "install" and "connect" should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meanings of these terms in this application in light of the specific content of the technical solution.

In the description of this application, references to the terms "one embodiment", "some embodiments", "illustrative embodiment", "example", "specific example" or "some examples" mean that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of this application. In this specification, schematic expressions of these terms do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials or characteristics may be combined in a suitable manner in any one or more embodiments or examples.

As one of the most basic psychological processes of human beings, emotional change plays a vital role in daily life. With the rapid development of human-computer interaction technology, researchers pay increasing attention to enhancing the ability of computers to recognize, interpret, process and simulate human emotions, and emotion recognition plays a vital role in human-computer interaction. More and more studies address different aspects of emotion, and many types of emotion-related features have been found, such as frequency-domain features and connectivity features of EEG signals. Multi-feature input can reflect different aspects of emotion and provide complementary information; for example, frequency-domain features provide information within a channel, while brain connectivity features characterize the relationships between channels, which is more conducive to building an accurate and robust emotion recognition model. Fusing data from multiple features can improve the accuracy of emotion recognition. In subject-level emotion recognition research, more and more researchers are turning to cross-subject emotion recognition. A cross-subject experiment builds a cross-subject emotion recognition model on the full sample set of all subjects using leave-one-subject-out cross-validation, which means that all samples of one subject are selected as the test set and all samples of the remaining subjects serve as the training set. Compared with the scheme in which a single subject corresponds to a single recognition model, a cross-subject emotion recognition model generalizes better and is more robust: after the data of a new subject are collected, emotion recognition can be performed directly without retraining a model for that subject, which greatly reduces the time cost.

The emotion-related EEG data collection and emotion recognition methods in the related art mainly use all electrode channels of an EEG cap for data acquisition and build models within a single subject using fused features. During data collection, the subject needs to wear an EEG cap containing dozens or even hundreds of electrodes, so acquiring the signal data consumes a large amount of time and manpower.

On this basis, this application proposes a channel screening method, an emotion recognition method, a system and a storage medium, which can screen features, that is, screen the channels of EEG signals and reduce the number of channels, thereby reducing the time and cost of data collection. Reducing the number of channels means reducing the number of electrodes; compared with the acquisition cost of a full-electrode experiment, running the experiment with fewer but more effective electrodes saves a great deal of manpower and material resources and makes the experiment more efficient.

In a first aspect, referring to FIG. 1, an embodiment of this application provides a channel screening method, including but not limited to step S100, step S200, step S300, step S400, step S500, step S600 and step S700.

Step S100: acquiring multi-channel EEG signals and preprocessing the EEG signals;

Step S200: obtaining the frequency-domain feature of each channel from the preprocessed EEG signals;

Step S300: obtaining the brain connectivity feature between every two channels from the preprocessed EEG signals;

Step S400: obtaining connection-channel features from the frequency-domain features and the brain connectivity features, where each connection-channel feature includes one brain connectivity feature and two frequency-domain features, and the two channels corresponding to the brain connectivity feature correspond to the two frequency-domain features respectively;

Step S500: inputting each connection-channel feature and the emotion labels into the broad learning system, obtaining the recognition result and verifying the recognition result, to obtain the average test accuracy corresponding to each connection-channel feature;

Step S600: sorting all connection-channel features according to the average test accuracy and screening them according to a preset proportion threshold, to obtain a set of key connection-channel features;

Step S700: obtaining the important channels according to the set of key connection-channel features.

The channel screening method of the embodiments of this application acquires multi-channel EEG signals and preprocesses them; obtains the frequency-domain feature of each channel from the preprocessed EEG signals; obtains the brain connectivity feature between every two channels from the preprocessed EEG signals; and obtains connection-channel features from the frequency-domain features and the brain connectivity features, where each connection-channel feature includes one brain connectivity feature and two frequency-domain features, and the two channels corresponding to the brain connectivity feature correspond to the two frequency-domain features respectively. Each connection-channel feature and the emotion labels are input into the broad learning system, a recognition result is obtained and verified, and the average test accuracy corresponding to each connection-channel feature is obtained; all connection-channel features are sorted according to the average test accuracy and screened according to a preset proportion threshold to obtain a set of key connection-channel features, from which the important channels are obtained. In this way, the screening of EEG channels is completed and the number of channels is reduced, thereby reducing the time and cost of data collection. Furthermore, the corresponding important frequency-domain features and important brain connectivity features can be obtained from the important channels and used for emotion recognition; compared with unscreened channels, the important frequency-domain features and important brain connectivity features obtained after channel screening yield more accurate emotion recognition results.

It can be understood that the channel screening method of the embodiments of this application collects the EEG signal data of subjects watching movie clips with different emotion labels through an EEG cap, and the EEG signal has multiple channels. In other embodiments, the publicly available emotional EEG dataset SEED is used as the experimental data; the EEG cap used for this dataset is a 62-channel cap of the international 10-20 system, so the acquired EEG signals include 62 channels.

It can be understood that, in step S100, preprocessing the EEG signals includes:

filtering the EEG signals with 5 different frequency bands.

Specifically, the sampling rate of the raw data is reduced to 200 Hz. To remove irrelevant artifacts, all raw EEG signals are filtered with a 1–50 Hz band-pass filter. The EEG signals are then filtered into 5 frequency bands: delta (δ, 1–4 Hz), theta (θ, 4–8 Hz), alpha (α, 8–14 Hz), beta (β, 14–31 Hz) and gamma (γ, 31–50 Hz).
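As an illustration of this preprocessing step, the following is a minimal sketch using NumPy and SciPy. It assumes the raw recording is already a (channels, samples) array resampled to 200 Hz; the function names, the Butterworth filter order and the zero-phase filtering are assumptions of the sketch, not the implementation used in this application.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# The five frequency bands used in this application (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def bandpass(data, low, high, fs=200, order=4):
    """Zero-phase Butterworth band-pass filter applied along the time axis."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, data, axis=-1)

def preprocess(raw, fs=200):
    """Filter a (channels, samples) EEG array into the 5 frequency bands."""
    raw = bandpass(raw, 1, 50, fs)            # remove irrelevant artifacts (1-50 Hz)
    return {name: bandpass(raw, lo, hi, fs)   # one band-limited copy per band
            for name, (lo, hi) in BANDS.items()}
```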

It can be understood that, under different emotions, frequency-domain features reflect the energy differences between different brain regions. Common frequency-domain features include the power spectral density (PSD) feature and the differential entropy (DE) feature. This application uses the differential entropy feature as the frequency-domain feature. Step S200 includes: extracting features from the EEG signal of each frequency band of each channel by the first formula to obtain the corresponding band differential entropy feature, where each frequency-domain feature includes the band differential entropy features corresponding to the 5 different frequency bands; the first formula is:

$$DE = \frac{1}{2}\log\left(2\pi e\sigma^{2}\right)$$

where $DE$ denotes the band differential entropy feature, $\sigma^{2}$ is the variance of the corresponding EEG signal, $\pi$ is a constant, and $e$ is Euler's number. Since the EEG signal includes 62 channels and the signal of each channel is divided into 5 frequency bands, the DE feature dimension of one sample is 310 (62 channels × 5 bands).
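A band differential entropy computed per channel under the Gaussian assumption of the first formula could look like the sketch below; `banded` is the dictionary returned by the hypothetical `preprocess` above.

```python
import numpy as np

def differential_entropy(band_signal):
    """First formula: DE = 0.5 * log(2*pi*e*sigma^2), one value per channel."""
    var = np.var(band_signal, axis=-1)              # sigma^2 per channel
    return 0.5 * np.log(2 * np.pi * np.e * var)

def frequency_domain_features(banded):
    """Concatenate the DE of all bands -> 62 channels x 5 bands = 310 values for SEED."""
    return np.concatenate([differential_entropy(sig) for sig in banded.values()])
```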

It can be understood that emotional responses in the brain involve the coordination of multiple cortical regions. The brain can therefore be regarded as a complex connected network, which makes it possible to explore the interactions between multiple brain regions. Common brain connectivity features include the phase lag index (PLI) feature, the phase locking value (PLV) feature and the partial directed coherence (PDC) feature. This application uses the phase lag index feature as the brain connectivity feature. Referring to FIG. 2, step S300 includes but is not limited to step S310 and step S320.

Step S310: calculating the instantaneous phase of the EEG signal of each channel by the second formula; the second formula is:

$$\varphi_{i}(t)=\arctan\frac{\hat{s}_{i}(t)}{s_{i}(t)}$$

where the total number of channels is C, $i=1,2,\dots,C$, $\varphi_{i}(t)$ denotes the instantaneous phase of the EEG signal of the i-th channel, $t=1,2,3,\dots,T$, T is the number of sampling points, $s_{i}(t)$ denotes the time series of the i-th channel, and $\hat{s}_{i}(t)$ is the Hilbert transform of $s_{i}(t)$. The analytic signal corresponding to the time series $s_{i}(t)$ is $z_{i}(t)=s_{i}(t)+j\hat{s}_{i}(t)$, in which $\hat{s}_{i}(t)$ can be calculated by

$$\hat{s}_{i}(t)=\frac{1}{\pi}\,\mathrm{PV}\!\int_{-\infty}^{+\infty}\frac{s_{i}(\tau)}{t-\tau}\,d\tau$$

where PV denotes the Cauchy principal value and τ denotes the integration variable. It should be noted that, since the EEG signals in SEED include 62 channels, C = 62; this application does not specifically limit the number of channels, and those skilled in the art can set the number of EEG channels according to actual needs.

Step S320: obtaining the phase lag index feature between the EEG signals of every two channels from their instantaneous phases by the third formula, the phase lag index feature serving as the brain connectivity feature; the third formula is:

$$PLI_{i,k}=\left|\frac{1}{T}\sum_{t=1}^{T}\operatorname{sign}\bigl(\sin\bigl(\varphi_{i}(t)-\varphi_{k}(t)\bigr)\bigr)\right|$$

where $PLI_{i,k}$ denotes the phase lag index feature between the EEG signal of the i-th channel and the EEG signal of the k-th channel, $\varphi_{i}(t)$ denotes the instantaneous phase of the EEG signal of the i-th channel, $\varphi_{k}(t)$ denotes the instantaneous phase of the EEG signal of the k-th channel, T is the number of sampling points, $t=1,2,3,\dots,T$, $i=1,2,3,\dots,C$, and $k=1,2,3,\dots,C$. Since the EEG signal of each frequency band contains 62 channels, the connectivity matrix of one PLI feature has dimension 62 × 62. Because the connectivity matrix is symmetric, this application only uses the values in the upper triangle of the matrix and removes the self-connection values on the diagonal (which equal 1). Therefore, the PLI feature dimension of a sample for each frequency band is 62 × (62 − 1) ÷ 2 = 1891, and the PLI feature dimension of one sample is 9455 (1891 connections × 5 bands).
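The instantaneous phase and the PLI of the third formula can be computed with SciPy's Hilbert transform as in the sketch below (a sketch under the same (channels, samples) layout assumed earlier, not the authors' code).

```python
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(band_signal):
    """PLI between every channel pair of a (channels, samples) band-limited array."""
    phase = np.angle(hilbert(band_signal, axis=-1))        # instantaneous phase per channel
    diff = phase[:, None, :] - phase[None, :, :]           # pairwise phase differences
    pli = np.abs(np.mean(np.sign(np.sin(diff)), axis=-1))  # |mean of sign(sin(phase diff))|
    iu = np.triu_indices(band_signal.shape[0], k=1)        # upper triangle, drop self-connections
    return pli[iu]                                         # 62*61/2 = 1891 values for SEED

def brain_connectivity_features(banded):
    """Concatenate the PLI vectors of the 5 bands -> 1891 x 5 = 9455 values."""
    return np.concatenate([phase_lag_index(sig) for sig in banded.values()])
```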

It can be understood that, in step S400, the connection-channel features are obtained from the frequency-domain features and the brain connectivity features, where each connection-channel feature corresponds to one brain connectivity feature and two frequency-domain features. Specifically, let $x_{k_{1},k_{2}}^{BC}$ denote the brain connectivity feature extracted between channels $k_{1}$ and $k_{2}$, and let $x_{k_{1}}^{FD}$ and $x_{k_{2}}^{FD}$ denote the frequency-domain features extracted from channels $k_{1}$ and $k_{2}$ respectively; the connection-channel feature is then $X_{k}^{cc}=[x_{k_{1},k_{2}}^{BC},x_{k_{1}}^{FD},x_{k_{2}}^{FD}]$, where $k=1,2,3,\dots,1891$, $k_{1}=1,2,3,\dots,C$, and $k_{2}=1,2,3,\dots,C$.
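One way to assemble the 1891 connection-channel features from per-sample DE and PLI arrays is sketched below; the array shapes and names are assumptions made for illustration.

```python
import numpy as np

def connection_channel_features(de, pli, n_channels=62):
    """Build one connection-channel feature per channel pair.

    de  : (n_samples, n_channels, n_bands)  DE of every channel
    pli : (n_samples, n_pairs, n_bands)     PLI of every channel pair (upper-triangle order)
    Returns a list of 1891 arrays, each of shape (n_samples, 3 * n_bands).
    """
    pairs = [(i, k) for i in range(n_channels) for k in range(i + 1, n_channels)]
    return [np.concatenate([pli[:, p, :], de[:, i, :], de[:, k, :]], axis=1)
            for p, (i, k) in enumerate(pairs)]
```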

It can be understood that, referring to FIG. 3, step S500 may include but is not limited to step S510, step S520, step S530, step S540 and step S550.

Step S510: obtaining mapped features from the connection-channel feature by the fourth formula; the fourth formula is:

$$Z_{i}=\varphi_{i}\bigl(X^{cc}_{k}W_{e_{i}}+\beta_{e_{i}}\bigr),\qquad Z^{p}=[Z_{1},\dots,Z_{p}]$$

where $Z^{p}$ denotes the set of mapped features, $Z_{i}$ denotes the i-th group of mapped features, $\varphi_{i}$ denotes the mapping function, p is the number of feature mappings, $W_{e_{i}}$ is a weight, $\beta_{e_{i}}$ is a bias term, and $X^{cc}_{k}$ denotes the connection-channel feature. With appropriate dimensions, $W_{e_{i}}$ and $\beta_{e_{i}}$ can be randomly generated, and they can be fine-tuned by a sparse autoencoder to obtain a more suitable set of mapped features $Z^{p}$.

Step S520: sending the set of mapped features into the enhancement nodes by the fifth formula to obtain enhancement features; the fifth formula is:

$$H_{j}=\xi_{j}\bigl(Z^{p}W_{h_{j}}+\beta_{h_{j}}\bigr),\qquad H^{q}=[H_{1},\dots,H_{q}]$$

where $H^{q}$ denotes the set of enhancement features, $H_{j}$ denotes the j-th group of enhancement features, $\xi_{j}$ denotes the nonlinear activation function, q is the number of feature enhancements, $W_{h_{j}}$ is a weight, $\beta_{h_{j}}$ is a bias term, and $Z^{p}$ denotes the set of mapped features;

Step S530: concatenating the set of enhancement features with the set of mapped features to obtain the transformed feature set $A=[Z^{p},H^{q}]$, where A denotes the transformed feature set, $Z^{p}$ denotes the set of mapped features, and $H^{q}$ denotes the set of enhancement features;

Step S540: minimizing the objective function of the broad learning system according to the transformed feature set and the emotion labels, to obtain the label space; the objective function is:

$$\min_{W}\;\lVert AW-Y\rVert_{F}^{2}+\lambda\lVert W\rVert_{2}^{2}$$

where $\lVert\cdot\rVert_{F}$ denotes the Frobenius norm of a matrix, $\lVert AW-Y\rVert_{F}^{2}$ measures the approximation accuracy, $\lambda\lVert W\rVert_{2}^{2}$ is the $\ell_{2}$ regularization term, W denotes the weights of the broad learning system, Y denotes the emotion labels, λ denotes the penalty factor, I denotes the identity matrix, A denotes the transformed feature set, $A=[Z^{p},H^{q}]$, $Z^{p}$ denotes the set of mapped features, and $H^{q}$ denotes the set of enhancement features. The regularization term is used to smooth the distribution of W and to avoid overfitting; $Y=[y_{1},y_{2},\dots,y_{n}]^{T}$, where $y_{i}$ denotes the i-th emotion label. Solving the objective function gives:

$$W=\bigl(\lambda I+A^{T}A\bigr)^{-1}A^{T}Y$$

Step S550: performing verification according to the label space, to obtain the average test accuracy corresponding to each connection-channel feature. Specifically, the average test accuracy is $Acc_{k}=BLS(X^{cc}_{k},Y)$, where BLS denotes the broad learning system, $X^{cc}_{k}$ denotes the connection-channel feature, $k=1,2,3,\dots,1891$, and Y denotes the emotion labels.
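A minimal broad learning system along the lines of the fourth and fifth formulas and the closed-form solution above is sketched below; the random (unfine-tuned) mapping weights, the tanh enhancement activation and all hyperparameter values are assumptions of the sketch, and the sparse-autoencoder fine-tuning step is omitted. With one-hot training labels Y, the predicted class of a test sample is the argmax of the returned row.

```python
import numpy as np

def bls_fit(X, Y, n_map=10, map_dim=20, n_enh=100, lam=1e-3, seed=0):
    """Fit a minimal BLS: mapped features Z^p, enhancement features H^q, ridge solution W."""
    rng = np.random.default_rng(seed)
    We = [rng.standard_normal((X.shape[1], map_dim)) for _ in range(n_map)]
    be = [rng.standard_normal(map_dim) for _ in range(n_map)]
    Z = np.hstack([X @ W + b for W, b in zip(We, be)])          # Z^p = [Z_1, ..., Z_p]
    Wh = rng.standard_normal((Z.shape[1], n_enh))
    bh = rng.standard_normal(n_enh)
    H = np.tanh(Z @ Wh + bh)                                    # H^q = [H_1, ..., H_q]
    A = np.hstack([Z, H])                                       # A = [Z^p, H^q]
    W = np.linalg.solve(lam * np.eye(A.shape[1]) + A.T @ A, A.T @ Y)  # W = (lam*I + A^T A)^-1 A^T Y
    return We, be, Wh, bh, W

def bls_predict(model, X):
    """Reuse the stored random weights to transform X and project into the label space."""
    We, be, Wh, bh, W = model
    Z = np.hstack([X @ Wm + b for Wm, b in zip(We, be)])
    H = np.tanh(Z @ Wh + bh)
    return np.hstack([Z, H]) @ W
```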

It can be understood that, after the average test accuracy corresponding to each connection-channel feature is obtained, it is stored in the array $Acc_{set}$, specifically:

$$Acc_{set}=[Acc_{set},Acc_{k}]$$

where $Acc_{set}$ denotes the array, $Acc_{k}$ denotes the average test accuracy, and $k=1,2,3,\dots,1891$. Sorting the average accuracies in the array from high to low gives an index list that records the connection-channel features arranged in descending order of accuracy. The connection-channel features can then be screened according to the preset proportion threshold based on this index list, yielding the screened connection-channel features, i.e., the set of key connection-channel features. This set contains the top-ranked connection-channel features selected according to the threshold, where C is the number of channels, C = 62, and t is the preset proportion threshold, which those skilled in the art can set according to actual needs. Since each connection-channel feature corresponds to one brain connectivity feature and two frequency-domain features, the important channels can be obtained from the set of key connection-channel features, and then the corresponding screened brain connectivity features, i.e., the important brain connectivity features $X^{BC}$, and the corresponding screened frequency-domain features, i.e., the important frequency-domain features $X^{FD}$, are obtained from the important channels. It should be noted that, because different brain connectivity features in the set of key connection-channel features may be produced by the same channels, duplicate channels appear during screening and need to be removed.
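The ranking and threshold-based screening could be implemented as in the following sketch, where `pairs[k]` holds the two channel indices behind the k-th connection-channel feature; the function name and the rounding rule for the number of kept features are assumptions.

```python
import numpy as np

def screen_channels(acc_set, pairs, t=0.03):
    """Keep the top t fraction of connection-channel features and collect their channels."""
    order = np.argsort(acc_set)[::-1]                  # feature indices, accuracy high -> low
    n_keep = max(1, int(round(t * len(acc_set))))      # preset proportion threshold t
    key_pairs = [pairs[i] for i in order[:n_keep]]     # key connection-channel feature set
    important = sorted({ch for pair in key_pairs for ch in pair})  # duplicate channels removed
    return key_pairs, important
```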

In a second aspect, an embodiment of this application provides an emotion recognition method, including the channel screening method according to any one of the embodiments of the first aspect of this application.

Referring to FIG. 4, the emotion recognition method of the embodiments of this application further includes but is not limited to step S900 and step S1000.

Step S900: extracting important frequency-domain features and important brain connectivity features according to the important channels, and mapping the important frequency-domain features and the important brain connectivity features respectively through the AEKM algorithm, to obtain AEKM frequency-domain features and AEKM brain connectivity features respectively;

Step S1000: splicing the AEKM frequency-domain features with the AEKM brain connectivity features to obtain fused features, inputting the fused features and the emotion labels into the broad learning system, and performing cross-subject emotion recognition to obtain the classification accuracy.

The emotion recognition method of the embodiments of the second aspect of this application screens the EEG channels through the channel screening method of the embodiments of the first aspect to obtain the important channels, extracts the important frequency-domain features and the important brain connectivity features according to the important channels, maps them respectively through the AEKM algorithm to obtain the AEKM frequency-domain features and the AEKM brain connectivity features, splices the two to obtain the fused features, and inputs the fused features and the emotion labels into the broad learning system for cross-subject emotion recognition to obtain the classification accuracy. Because the EEG channels are screened, the number of channels is reduced, which lowers the time and cost of data collection. Reducing the number of channels means reducing the number of electrodes; compared with the acquisition cost of a full-electrode experiment, running the experiment with fewer but more effective electrodes saves a great deal of manpower and material resources. Moreover, since the features are screened, less data needs to be processed, making the experiment more efficient while keeping the recognition accuracy high.

It should be noted that AEKM (Approximate Empirical Kernel Map) is a mapping algorithm that can extract sufficient discriminative information with a small dimension l and provides fast computation for the subsequent classifier.

It can be understood that step S900 may include the following steps:

obtaining the important features according to the important channels, specifically:

$$X^{m}=\bigl[x_{1}^{m},x_{2}^{m},\dots,x_{n}^{m}\bigr]^{T},\quad m\in\{FD,BC\}$$

where $X^{m}$ is the important feature; when $m=FD$, $X^{m}$ is the important frequency-domain feature; when $m=BC$, $X^{m}$ is the important brain connectivity feature; each row of $X^{m}$ consists of $(x_{i}^{m})^{T}$, $i=1,2,\dots,n$, $x_{i}^{m}$ is the i-th sample of m, and n is the number of samples;

mapping the important frequency-domain features and the brain connectivity features to obtain the AEKM features, where

$$g^{m}(x^{m})\in\mathbb{R}^{l\times 1}\quad(m=FD,BC)$$

where $g^{m}(x^{m})$ denotes the AEKM feature and l denotes its dimension; when $m=FD$, $g^{m}(x^{m})$ is the AEKM frequency-domain feature; when $m=BC$, $g^{m}(x^{m})$ is the AEKM brain connectivity feature; when $m=FD$, $x^{m}$ is the important frequency-domain feature; when $m=BC$, $x^{m}$ is the important brain connectivity feature.

It can be understood that, referring to FIG. 5, in step S900 the important frequency-domain features and the brain connectivity features are mapped to obtain the AEKM features, which may specifically include but is not limited to step S910, step S920, step S930, step S940, step S950 and step S960.

Step S910: obtaining a set of landmark points for the important features $X^{m}$ through the k-means algorithm, specifically:

$$V^{m}=\bigl[v_{1}^{m},v_{2}^{m},\dots,v_{l}^{m}\bigr]$$

where $V^{m}$ denotes the set of landmark points, $v_{j}^{m}$ denotes the j-th landmark point, l denotes the dimension, and $j=1,2,\dots,l$. It should be noted that $V^{m}$ can be obtained with a fast approximate k-means sampling algorithm; the l landmark points are the cluster centres of $X^{m}$, which summarize the features of all subjects in the training set and thus produce features that are invariant across subjects.
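The landmark selection can be approximated with a mini-batch k-means, as in the sketch below; scikit-learn's MiniBatchKMeans is used here as a stand-in for the fast approximate k-means sampling mentioned above, and the value of l is an assumption.

```python
from sklearn.cluster import MiniBatchKMeans

def select_landmarks(X_m, l=200, seed=0):
    """Cluster the training samples of one feature type; the l cluster centres are V^m."""
    km = MiniBatchKMeans(n_clusters=l, random_state=seed).fit(X_m)
    return km.cluster_centers_                # shape (l, feature_dim)
```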

Step S920: constructing the first kernel matrix by the sixth formula; the sixth formula is:

$$K_{nl}^{m}(k,j)=\kappa\bigl(x_{k}^{m},v_{j}^{m},\sigma^{m}\bigr)$$

where $k=1,\dots,n$, $j=1,\dots,l$, n is the number of samples, l denotes the dimension, $K_{nl}^{m}$ denotes the first kernel matrix, $\sigma^{m}$ is the kernel parameter, and $\kappa(\cdot,\cdot,\cdot)$ is the kernel function;

Step S930: constructing the second kernel matrix by the seventh formula; the seventh formula is:

$$K_{ll}^{m}(k,j)=\kappa\bigl(v_{k}^{m},v_{j}^{m},\sigma^{m}\bigr)$$

where $k=1,\dots,l$, $j=1,\dots,l$, n is the number of samples, l denotes the dimension, $K_{ll}^{m}$ denotes the second kernel matrix, $\sigma^{m}$ is the kernel parameter, and $\kappa(\cdot,\cdot,\cdot)$ is the kernel function;

Step S940: performing eigenvalue decomposition on the second kernel matrix, specifically:

$$K_{ll}^{m}=U^{m}\Lambda^{m}\bigl(U^{m}\bigr)^{T}$$

where l denotes the dimension, $\Lambda^{m}$ is a diagonal matrix whose diagonal elements are the eigenvalues of $K_{ll}^{m}$, and the column vectors of $U^{m}$ are the eigenvectors corresponding to those eigenvalues;

Step S950: obtaining the AEKM feature matrix $G^{m}$ from the first kernel matrix and the second kernel matrix by the eighth formula; the eighth formula is:

$$G^{m}=K_{nl}^{m}M^{m}\in\mathbb{R}^{n\times l},\qquad M^{m}=U^{m}\bigl(\Lambda^{m}\bigr)^{-\frac{1}{2}}$$

where n is the number of samples, l denotes the dimension, and $G^{m}$ denotes the AEKM feature matrix; when $m=FD$, $G^{m}$ is the AEKM frequency-domain feature matrix; when $m=BC$, $G^{m}$ is the AEKM brain connectivity feature matrix; $\Lambda^{m}$ is a diagonal matrix whose diagonal elements are the eigenvalues of $K_{ll}^{m}$, the column vectors of $U^{m}$ are the eigenvectors corresponding to those eigenvalues, and $M^{m}$ serves as the eigenvalue mapping matrix;

Step S960: obtaining the fused feature matrix from the AEKM frequency-domain feature matrix and the AEKM brain connectivity feature matrix,

$$G^{F}=[G^{BC},G^{FD}]$$

where $G^{F}$ denotes the fused feature matrix, $G^{FD}$ denotes the AEKM frequency-domain feature matrix, and $G^{BC}$ denotes the AEKM brain connectivity feature matrix. Since $m\in\{FD,BC\}$, the AEKM frequency-domain feature matrix $G^{FD}$ and the AEKM brain connectivity feature matrix $G^{BC}$ can be obtained from the AEKM feature matrix $G^{m}$. Specifically, the fused feature matrix is $G^{F}=[G^{BC},G^{FD}]$.
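A sketch of the AEKM mapping of steps S920 to S960, under the assumption of a Gaussian kernel κ and the landmark points from the previous sketch, is given below; the exact kernel used in this application is not specified here, so the kernel choice and the small eigenvalue floor are assumptions of the sketch.

```python
import numpy as np

def aekm_map(X_m, V_m, sigma):
    """Approximate empirical kernel map: G^m = K_nl M^m with M^m = U Lambda^(-1/2)."""
    def gauss(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))          # assumed Gaussian kernel kappa
    K_nl = gauss(X_m, V_m)                             # first kernel matrix, n x l
    K_ll = gauss(V_m, V_m)                             # second kernel matrix, l x l
    vals, vecs = np.linalg.eigh(K_ll)                  # K_ll = U Lambda U^T
    vals = np.clip(vals, 1e-12, None)                  # guard against tiny/negative eigenvalues
    M = vecs / np.sqrt(vals)                           # eigenvalue mapping matrix
    return K_nl @ M                                    # AEKM feature matrix, n x l

def fused_features(X_fd, X_bc, V_fd, V_bc, sigma_fd=1.0, sigma_bc=1.0):
    """G^F = [G^BC, G^FD]: concatenate the two AEKM feature matrices."""
    return np.hstack([aekm_map(X_bc, V_bc, sigma_bc), aekm_map(X_fd, V_fd, sigma_fd)])
```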

After the fusion feature matrix is obtained, the fusion feature matrix and the emotion labels are fed into the broad learning system (BLS) for cross-subject emotion recognition, and the classification accuracy is obtained, specifically:

Acc = BLS(G_F, Y)

where Acc denotes the classification accuracy, BLS denotes the broad learning system, G_F denotes the fusion feature matrix, and Y denotes the emotion labels.
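For orientation only, a minimal broad-learning-style classifier can be sketched as follows, using the ridge solution W = (λI + AᵀA)⁻¹AᵀY that appears in claim 1. The number of mapping and enhancement groups, the activation functions, and the random-weight initialization are assumptions for illustration; this is not the exact BLS implementation of the patent.

```python
import numpy as np

def bls_fit_predict(G_train, Y_train, G_test, p=10, q=10, n_nodes=20, lam=1e-3, seed=0):
    """Tiny broad-learning-style classifier: random feature-mapping nodes,
    random enhancement nodes, then a ridge solution for the output weights."""
    rng = np.random.default_rng(seed)
    d = G_train.shape[1]

    def transform(G, We, Be, Wh, Bh):
        Z = np.hstack([np.tanh(G @ We[i] + Be[i]) for i in range(p)])   # mapped features Z^p
        H = np.hstack([np.tanh(Z @ Wh[j] + Bh[j]) for j in range(q)])   # enhancement features H^q
        return np.hstack([Z, H])                                        # A = [Z^p, H^q]

    We = [rng.standard_normal((d, n_nodes)) for _ in range(p)]
    Be = [rng.standard_normal(n_nodes) for _ in range(p)]
    Wh = [rng.standard_normal((p * n_nodes, n_nodes)) for _ in range(q)]
    Bh = [rng.standard_normal(n_nodes) for _ in range(q)]

    A = transform(G_train, We, Be, Wh, Bh)
    W = np.linalg.solve(lam * np.eye(A.shape[1]) + A.T @ A, A.T @ Y_train)  # W = (lam*I + A^T A)^-1 A^T Y
    A_test = transform(G_test, We, Be, Wh, Bh)
    return np.argmax(A_test @ W, axis=1)  # predicted class indices

# Usage sketch with hypothetical data and one-hot emotion labels Y.
G_F = np.random.randn(100, 64); Y = np.eye(3)[np.random.randint(0, 3, 100)]
pred = bls_fit_predict(G_F[:80], Y[:80], G_F[80:])
acc = (pred == np.argmax(Y[80:], axis=1)).mean()
print(f"Acc = {acc:.2f}")
```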

It can be understood that this application uses the publicly available emotional EEG dataset SEED as experimental data. The EEG cap used for this dataset is a 62-channel cap following the international 10-20 system, so the acquired EEG signals comprise 62 channels. Differential entropy features are used as the frequency-domain features, and the phase lag index between the EEG signals of every pair of channels is used as the brain-connectivity feature. After processing with the emotion recognition method of the above embodiment, and with the preset proportion threshold set to t = 3% in the channel screening method, the 62 channels are screened down to 37 channels. The frequency-domain feature matrix and the brain-connectivity feature matrix are then extracted from the 37 channels, each is mapped by the AEKM algorithm to obtain the AEKM frequency-domain feature matrix G_FD and the AEKM brain-connectivity feature matrix G_BC, the two are concatenated into the fusion feature matrix, and the fusion feature matrix and the emotion labels are fed into the broad learning system for cross-subject emotion recognition to obtain the classification accuracy (a sketch of the screening step follows the discussion of Table 1 below). Table 1 lists the classification accuracies obtained by feeding the different features into the broad learning system.

Table 1: Accuracy obtained by the leave-one-subject-out cross-validation method

Referring to Table 1, the results show that the accuracy of the fusion feature matrix is higher than the accuracy of either single feature, which indicates that the frequency-domain features and the brain-connectivity features are complementary and that their fusion yields better recognition performance for the different emotions. The results in Table 1 also show that the selected 37-channel features use 40.32% fewer channel electrodes than the 62-channel features, while the accuracies of the frequency-domain features, the brain-connectivity features, and the fusion feature matrix all increase compared with the 62-channel case, demonstrating that the channels selected by the method of this application are effective.
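As referenced above, the screening step used in this experiment can be sketched as follows. The Python sketch assumes that the key connection-channel features are the top t = 3% of channel pairs ranked by average test accuracy and that the important channels are the union of the endpoints of those pairs; these two reading choices, along with all variable names and the random accuracies, are illustrative assumptions rather than the literal procedure of the patent.

```python
import numpy as np
from itertools import combinations

def screen_channels(pair_accuracy: dict, t: float = 0.03):
    """pair_accuracy maps a channel pair (i, k) to the average test accuracy of its
    connection-channel feature; keep the top t fraction of pairs and return the
    union of their channels as the important channels."""
    ranked = sorted(pair_accuracy.items(), key=lambda kv: kv[1], reverse=True)
    n_keep = max(1, int(round(t * len(ranked))))
    key_pairs = [pair for pair, _ in ranked[:n_keep]]              # key connection-channel feature set
    important = sorted({ch for pair in key_pairs for ch in pair})  # important channels
    return key_pairs, important

# Usage sketch: random accuracies for all C(62, 2) = 1891 channel pairs.
rng = np.random.default_rng(0)
pairs = list(combinations(range(62), 2))
pair_acc = {p: float(rng.uniform(0.5, 0.9)) for p in pairs}
key_pairs, important_channels = screen_channels(pair_acc)
print(len(key_pairs), len(important_channels))  # ~57 kept pairs and the union of their channels
```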

An embodiment of the third aspect of this application provides a channel screening system, including:

at least one first memory;

at least one first processor;

at least one program;

the program is stored in the first memory, and the first processor executes the at least one program to implement:

the channel screening method described in the embodiment of the first aspect of this application.

The first processor and the first memory may be connected via a bus or in other ways.

As a non-transitory readable storage medium, the memory can be used to store non-transitory software instructions and non-transitory executable instructions. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. It will be appreciated that the memory may optionally include memory located remotely from the processor, and such remote memory may be connected to the processor via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.

By running the non-transitory software instructions and signals stored in the first memory, the first processor carries out various functional applications and data processing, that is, it implements the channel screening method of the first-aspect embodiment described above.

The non-transitory software instructions required to implement the channel screening method of the first-aspect embodiment or the emotion recognition method of the second-aspect embodiment are stored in the memory; when executed by the processor, they perform the channel screening method of the first-aspect embodiment of this application, for example, method steps S100 to S700 in Figure 1, method steps S310 to S320 in Figure 2, and method steps S510 to S550 in Figure 3 described above.

It should be noted that the channel screening system in the above-mentioned embodiments is based on the same inventive concept as the channel screening method in the above-mentioned embodiments. Therefore, the corresponding content of the channel screening method applies equally to the channel screening system in those embodiments, with the same implementation principles and technical effects; to avoid redundancy, it is not described in detail again here.

An embodiment of the fourth aspect of this application provides an emotion recognition system, including:

at least one second memory;

at least one second processor;

at least one program;

the program is stored in the second memory, and the second processor executes the at least one program to implement:

the emotion recognition method described in the embodiment of the second aspect of this application.

The second processor and the second memory may be connected via a bus or in other ways.

By running the non-transitory software instructions and signals stored in the second memory, the second processor carries out various functional applications and data processing, thereby implementing the emotion recognition method of the second-aspect embodiment.

The non-transitory software instructions required to implement the emotion recognition method of the second-aspect embodiment are stored in the memory; when executed by the processor, they perform the channel screening method of the first-aspect embodiment or the emotion recognition method of the second-aspect embodiment of this application, for example, method steps S100 to S700 in Figure 1, method steps S310 to S320 in Figure 2, method steps S510 to S550 in Figure 3, method steps S900 to S1100 in Figure 4, and method steps S910 to S960 in Figure 5 described above.

It should be noted that the emotion recognition system in the above-mentioned embodiments is based on the same inventive concept as the emotion recognition method in the above-mentioned embodiments. Therefore, the corresponding content of the emotion recognition method applies equally to the emotion recognition system in those embodiments, with the same implementation principles and technical effects; to avoid redundancy, it is not described in detail again here.

The device embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of these embodiments.

An embodiment of the fifth aspect of this application provides a computer-readable storage medium. The computer-readable storage medium stores computer-executable signals, and the computer-executable signals are used to execute:

the channel screening method described in the embodiment of the first aspect of this application; or,

the emotion recognition method described in the embodiment of the second aspect of this application.

The computer-readable storage medium stores computer-executable instructions. When the computer-executable instructions are executed by a processor or controller, for example by a processor of the multi-data-source processing system in the above embodiments, they cause the processor to perform the channel screening method of the first-aspect embodiment or the emotion recognition method of the second-aspect embodiment of this application, for example, method steps S100 to S700 in Figure 1, method steps S310 to S320 in Figure 2, method steps S510 to S550 in Figure 3, method steps S900 to S1000 in Figure 4, and method steps S910 to S960 in Figure 5 described above.

From the description of the above implementations, a person of ordinary skill in the art can understand that all or some of the steps and systems of the methods disclosed above may be implemented as software, firmware, hardware, or an appropriate combination thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor, or a microprocessor, or as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable signals, data structures, instruction modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. In addition, it is well known to those of ordinary skill in the art that communication media typically embody computer-readable signals, data structures, instruction modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

The embodiments of this application have been described in detail above with reference to the accompanying drawings, but this application is not limited to the above embodiments; various changes may be made within the scope of knowledge possessed by a person of ordinary skill in the art without departing from the purpose of this application. In addition, the embodiments of this application and the features in the embodiments may be combined with one another where no conflict arises.

Claims (9)

1. A channel screening method, characterized by comprising:
acquiring multi-channel EEG signals and preprocessing the EEG signals;
obtaining a frequency-domain feature of each channel from the preprocessed EEG signals;
obtaining a brain-connectivity feature between every two channels from the preprocessed EEG signals;
obtaining connection-channel features from the frequency-domain features and the brain-connectivity features, wherein each connection-channel feature comprises one brain-connectivity feature and two frequency-domain features, and the two channels corresponding to the brain-connectivity feature correspond respectively to the two frequency-domain features;
inputting each connection-channel feature together with emotion labels into a broad learning system, obtaining recognition results, verifying the recognition results, and obtaining an average test accuracy corresponding to each connection-channel feature;
sorting all the connection-channel features according to the average test accuracy, and screening according to a preset proportion threshold to obtain a key connection-channel feature set; and
obtaining important channels according to the key connection-channel feature set;
wherein inputting each connection-channel feature together with the emotion labels into the broad learning system, obtaining the recognition results, verifying the recognition results, and obtaining the average test accuracy corresponding to each connection-channel feature comprises:
obtaining mapped features from the connection-channel features through a fourth formula, the fourth formula being
Z_i = φ_i(X W_ei + β_ei), Z^p = [Z_1, …, Z_p],
where Z^p denotes the mapped feature set, Z_i denotes the i-th group of mapped features, φ_i denotes the mapping function, p is the number of feature mappings, W_ei are weights, β_ei are bias terms, and X denotes the connection-channel features;
feeding the mapped feature set into enhancement nodes through a fifth formula to obtain enhancement features, the fifth formula being
H_j = ξ_j(Z^p W_hj + β_hj), H^q = [H_1, …, H_q],
where H^q denotes the enhancement feature set, H_j denotes the j-th group of enhancement features, ξ_j denotes a nonlinear activation function, q is the number of feature enhancements, W_hj are weights, β_hj are bias terms, and Z^p denotes the mapped feature set;
concatenating the enhancement feature set with the mapped feature set to obtain a transformed feature set A = [Z^p, H^q], where A denotes the transformed feature set, Z^p denotes the mapped feature set, and H^q denotes the enhancement feature set;
minimizing an objective function of the broad learning system according to the transformed feature set and the emotion labels to obtain a label space, the objective function being
argmin_W ||A W − Y||_F^2 + λ ||W||_2^2,
where ||·||_F denotes the F-norm of a matrix, the first term measures the approximation accuracy, the second term is the l2 regularization term, W denotes the weights of the broad learning system, Y denotes the emotion labels, λ denotes the penalty factor, I denotes the identity matrix, A denotes the transformed feature set, A = [Z^p, H^q], Z^p denotes the mapped feature set, and H^q denotes the enhancement feature set; solving the objective function gives
W = (λI + A^T A)^(−1) A^T Y; and
verifying according to the label space to obtain the average test accuracy corresponding to each connection-channel feature.
2. The channel screening method according to claim 1, characterized in that preprocessing the EEG signals comprises:
filtering the EEG signals through 5 different frequency bands.
3. The channel screening method according to claim 2, characterized in that obtaining the frequency-domain feature of each channel from the preprocessed EEG signals comprises:
extracting features from the EEG signal of each frequency band of each channel through a first formula to obtain the corresponding frequency-band differential entropy feature, each frequency-domain feature comprising the frequency-band differential entropy features corresponding to the 5 different frequency bands, the first formula being
DE = (1/2) ln(2πeσ^2),
where DE denotes the frequency-band differential entropy feature, σ^2 is the variance of the corresponding EEG signal, π is a constant, and e is Euler's constant.
4. The channel screening method according to claim 3, characterized in that obtaining the brain-connectivity feature between every two channels from the preprocessed EEG signals comprises:
computing the instantaneous phase of the EEG signal of each channel through a second formula, the second formula being
φ_i(t) = arctan( ŝ_i(t) / s_i(t) ),
where the total number of channels is C, i = 1, 2, …, C, φ_i(t) denotes the instantaneous phase of the EEG signal of the i-th channel, t = 1, 2, 3, …, T, T is the number of sampling points, s_i(t) denotes the time series of the i-th channel, and ŝ_i(t) is the Hilbert transform of s_i(t); and
obtaining, through a third formula and from the instantaneous phases of the EEG signals of every two channels, the phase lag index feature between the EEG signals of the two channels, the phase lag index feature serving as the brain-connectivity feature, the third formula being
PLI_{i,k} = | (1/T) Σ_{t=1}^{T} sign( φ_i(t) − φ_k(t) ) |,
where PLI_{i,k} denotes the phase lag index feature between the EEG signal of the i-th channel and the EEG signal of the k-th channel, φ_i(t) denotes the instantaneous phase of the EEG signal of the i-th channel, φ_k(t) denotes the instantaneous phase of the EEG signal of the k-th channel, T is the number of sampling points, t = 1, 2, 3, …, T, i = 1, 2, 3, …, C, and k = 1, 2, 3, …, C.
5. An emotion recognition method, characterized by comprising the channel screening method according to any one of claims 1 to 4, the emotion recognition method further comprising:
extracting important frequency-domain features and important brain-connectivity features according to the important channels, and mapping the important frequency-domain features and the important brain-connectivity features respectively through an AEKM algorithm to obtain AEKM frequency-domain features and AEKM brain-connectivity features; and
concatenating the AEKM frequency-domain features with the AEKM brain-connectivity features to obtain fusion features, and inputting the fusion features and the emotion labels into the broad learning system for cross-subject emotion recognition to obtain a classification accuracy.
6. The emotion recognition method according to claim 5, characterized in that mapping the important frequency-domain features and the important brain-connectivity features respectively through the AEKM algorithm to obtain the AEKM frequency-domain features and the AEKM brain-connectivity features comprises:
obtaining important features according to the important channels, specifically
X_m = [x_1^m, x_2^m, …, x_n^m]^T,
where m ∈ {FD, BC}, X_m denotes the important features, x_m is the important frequency-domain feature when m = FD and the important brain-connectivity feature when m = BC, each row of X_m consists of (x_i^m)^T, i = 1, 2, …, n, x_i^m is the i-th sample of m, and n is the number of samples;
mapping the important frequency-domain features and the important brain-connectivity features to obtain AEKM features, where
g_m(x_m) ∈ R^(l×1),
m ∈ {FD, BC}, g_m(x_m) denotes the AEKM feature, l denotes the dimension, g_m(x_m) is the AEKM frequency-domain feature and x_m is the important frequency-domain feature when m = FD, and g_m(x_m) is the AEKM brain-connectivity feature and x_m is the important brain-connectivity feature when m = BC;
wherein mapping the important frequency-domain features and the brain-connectivity features to obtain the AEKM features comprises:
obtaining a set of landmark points for the important features X_m through a sampling algorithm, specifically
V_m = {v_1^m, v_2^m, …, v_l^m},
where m ∈ {FD, BC}, V_m denotes the set of landmark points, v_j^m denotes the j-th landmark point, l denotes the dimension, and j = 1, 2, …, l;
constructing a first kernel matrix through a sixth formula, the sixth formula being
K_nl^m(k, j) = κ(x_k^m, v_j^m, σ_m),
where k = 1, …, n, j = 1, …, l, n is the number of samples, l denotes the dimension, K_nl^m denotes the first kernel matrix, σ_m is the kernel parameter, and κ(·,·,·) is the kernel function;
constructing a second kernel matrix through a seventh formula, the seventh formula being
K_ll^m(k, j) = κ(v_k^m, v_j^m, σ_m),
where k = 1, …, l, j = 1, …, l, n is the number of samples, l denotes the dimension, K_ll^m denotes the second kernel matrix, σ_m is the kernel parameter, and κ(·,·,·) is the kernel function;
performing eigenvalue decomposition on the second kernel matrix, specifically
K_ll^m = U_m Λ_m U_m^T,
where l denotes the dimension, Λ_m is a diagonal matrix whose diagonal elements are the eigenvalues of the second kernel matrix, and the column vectors of U_m are the eigenvectors corresponding to the eigenvalues;
obtaining an AEKM feature matrix from the first kernel matrix and the second kernel matrix through an eighth formula, the eighth formula being
G_m = K_nl^m M_m, with M_m = U_m Λ_m^(−1/2),
where n is the number of samples, l denotes the dimension, G_m denotes the AEKM feature matrix, G_m is the AEKM frequency-domain feature matrix when m = FD and the AEKM brain-connectivity feature matrix when m = BC, Λ_m is a diagonal matrix whose diagonal elements are the eigenvalues of the second kernel matrix, the column vectors of U_m are the eigenvectors corresponding to the eigenvalues, and M_m serves as the eigenvalue mapping matrix; and
obtaining a fusion feature matrix from the AEKM frequency-domain feature matrix and the AEKM brain-connectivity feature matrix:
G_F = [G_BC, G_FD],
where G_F denotes the fusion feature matrix, G_FD denotes the AEKM frequency-domain feature matrix, and G_BC denotes the AEKM brain-connectivity feature matrix.
7. A channel screening system, characterized by comprising:
at least one first memory;
at least one first processor; and
at least one program;
wherein the program is stored in the first memory, and the first processor executes the at least one program to implement the channel screening method according to any one of claims 1 to 4.
8. An emotion recognition system, characterized by comprising:
at least one second memory;
at least one second processor; and
at least one program;
wherein the program is stored in the second memory, and the second processor executes the at least one program to implement the emotion recognition method according to any one of claims 5 to 6.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer-executable signals, and the computer-executable signals are used to execute:
the channel screening method according to any one of claims 1 to 4; or
the emotion recognition method according to any one of claims 5 to 6.
CN202210354570.5A 2022-04-06 2022-04-06 Channel screening method, emotion recognition system and storage medium Active CN114818786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210354570.5A CN114818786B (en) 2022-04-06 2022-04-06 Channel screening method, emotion recognition system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210354570.5A CN114818786B (en) 2022-04-06 2022-04-06 Channel screening method, emotion recognition system and storage medium

Publications (2)

Publication Number Publication Date
CN114818786A CN114818786A (en) 2022-07-29
CN114818786B true CN114818786B (en) 2024-03-01

Family

ID=82533258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210354570.5A Active CN114818786B (en) 2022-04-06 2022-04-06 Channel screening method, emotion recognition system and storage medium

Country Status (1)

Country Link
CN (1) CN114818786B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118133079B (en) * 2024-02-07 2025-05-09 上海脑虎科技有限公司 Electroencephalogram signal channel processing method and device, electronic equipment and storage medium
CN118466748B (en) * 2024-04-11 2024-12-24 北京智冉医疗科技有限公司 Performance evaluation method, device and medium for brain-computer interface electrode
CN118975804B (en) * 2024-07-17 2025-07-15 华南理工大学 A method, device and storage medium for identifying drug addicts

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070105A (en) * 2019-03-25 2019-07-30 中国科学院自动化研究所 Brain electricity Emotion identification method, the system quickly screened based on meta learning example
CN111134666A (en) * 2020-01-09 2020-05-12 中国科学院软件研究所 Emotion recognition method and electronic device based on multi-channel EEG data
CN111427450A (en) * 2020-03-20 2020-07-17 海南大学 A method, system, device and readable storage medium for emotion recognition
CN112057089A (en) * 2020-08-31 2020-12-11 五邑大学 Emotion recognition method, emotion recognition device and storage medium
CN112101152A (en) * 2020-09-01 2020-12-18 西安电子科技大学 An EEG emotion recognition method, system, computer equipment, and wearable device
WO2021046949A1 (en) * 2019-09-11 2021-03-18 五邑大学 Driving fatigue related eeg function connection dynamic characteristic analysis method
CN113011493A (en) * 2021-03-18 2021-06-22 华南理工大学 Electroencephalogram emotion classification method, device, medium and equipment based on multi-kernel width learning
WO2021159571A1 (en) * 2020-02-12 2021-08-19 五邑大学 Method and device for constructing and identifying multiple mood states using directed dynamic functional brain network
CN113627518A (en) * 2021-08-07 2021-11-09 福州大学 A method for implementing multi-channel convolution-recurrent neural network EEG emotion recognition model using transfer learning
CN114041795A (en) * 2021-12-03 2022-02-15 北京航空航天大学 Emotion recognition method and system based on multi-modal physiological information and deep learning
CN114052735A (en) * 2021-11-26 2022-02-18 山东大学 Electroencephalogram emotion recognition method and system based on depth field self-adaption
CN114224342A (en) * 2021-12-06 2022-03-25 南京航空航天大学 Multi-channel electroencephalogram emotion recognition method based on space-time fusion feature network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101618275B1 (en) * 2014-10-23 2016-05-04 숭실대학교산학협력단 Method and System for Analyzing EEG Response to Video Stimulus to Media Facades

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070105A (en) * 2019-03-25 2019-07-30 中国科学院自动化研究所 Brain electricity Emotion identification method, the system quickly screened based on meta learning example
WO2021046949A1 (en) * 2019-09-11 2021-03-18 五邑大学 Driving fatigue related eeg function connection dynamic characteristic analysis method
CN111134666A (en) * 2020-01-09 2020-05-12 中国科学院软件研究所 Emotion recognition method and electronic device based on multi-channel EEG data
WO2021159571A1 (en) * 2020-02-12 2021-08-19 五邑大学 Method and device for constructing and identifying multiple mood states using directed dynamic functional brain network
CN111427450A (en) * 2020-03-20 2020-07-17 海南大学 A method, system, device and readable storage medium for emotion recognition
CN112057089A (en) * 2020-08-31 2020-12-11 五邑大学 Emotion recognition method, emotion recognition device and storage medium
CN112101152A (en) * 2020-09-01 2020-12-18 西安电子科技大学 An EEG emotion recognition method, system, computer equipment, and wearable device
CN113011493A (en) * 2021-03-18 2021-06-22 华南理工大学 Electroencephalogram emotion classification method, device, medium and equipment based on multi-kernel width learning
CN113627518A (en) * 2021-08-07 2021-11-09 福州大学 A method for implementing multi-channel convolution-recurrent neural network EEG emotion recognition model using transfer learning
CN114052735A (en) * 2021-11-26 2022-02-18 山东大学 Electroencephalogram emotion recognition method and system based on depth field self-adaption
CN114041795A (en) * 2021-12-03 2022-02-15 北京航空航天大学 Emotion recognition method and system based on multi-modal physiological information and deep learning
CN114224342A (en) * 2021-12-06 2022-03-25 南京航空航天大学 Multi-channel electroencephalogram emotion recognition method based on space-time fusion feature network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Emotion Specific Network with Multi-dimension Features in Emotion Recognition; Gufeng Jia et al.; IEEE; full text *
Two Problems in Decoding Event-Related Potential (P300) EEG Signals and Their Solutions; Zhang Hongfei et al.; Journal of Wuyi University (Natural Science Edition); Vol. 35, No. 4; full text *

Also Published As

Publication number Publication date
CN114818786A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN114818786B (en) Channel screening method, emotion recognition system and storage medium
CN113254696B (en) Cover image acquisition method and device
CN112949533B (en) Motor imagery electroencephalogram identification method based on relative wavelet packet entropy brain network and improved version lasso
CN110353673A (en) A kind of brain electric channel selection method based on standard mutual information
CN107918821A (en) Teachers ' classroom teaching process analysis method and system based on artificial intelligence technology
CN101383008A (en) Image Classification Method Based on Visual Attention Model
CN108960299A (en) A kind of recognition methods of multiclass Mental imagery EEG signals
CN110782497B (en) Method and device for calibrating external parameters of camera
CN105718597A (en) Data retrieving method and system thereof
CN116416884B (en) A testing device and testing method for a display module
CN114548166B (en) A Riemannian manifold-based method for transferring learning of heterogeneous label spaces of EEG signals
CN111134664A (en) Epileptic discharge identification method and system based on capsule network and storage medium
CN117911710A (en) Multispectral target detection model training method, target detection method and system
CN112465069A (en) Electroencephalogram emotion classification method based on multi-scale convolution kernel CNN
CN115083003B (en) Clustering network training and target clustering method, device, terminal and storage medium
WO2023115790A1 (en) Chemical structure image extraction method and apparatus, storage medium, and electronic device
CN109330613A (en) Human body Emotion identification method based on real-time brain electricity
CN105469117A (en) Image recognition method and device based on robust characteristic extraction
CN118015496A (en) Small target detection method for UAV aerial photography based on YOLOv7 neural network
CN115359353A (en) Flower identification and classification method and device
CN116108889A (en) Gradual change fusion model establishment method and fusion method for multispectral image
Jiang et al. Analytical comparison of two emotion classification models based on convolutional neural networks
Wang et al. Emotion recognition based on phase-locking value brain functional network and topological data analysis
CN113017651B (en) Brain function network analysis method for emotion EEG
CN118070127B (en) Diphase affective disorder feature extraction and classification method based on high-order functional network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant