
WO2016163556A1 - Method for estimating perceptual semantic content by analysis of brain activity - Google Patents


Info

Publication number
WO2016163556A1
WO2016163556A1 (PCT/JP2016/061645, filed as JP2016061645W)
Authority
WO
WIPO (PCT)
Prior art keywords
subject
brain activity
perceptual
training
stimulus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2016/061645
Other languages
French (fr)
Japanese (ja)
Inventor
Shinji Nishimoto
Hideki Kashioka
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Institute of Information and Communications Technology
Original Assignee
National Institute of Information and Communications Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National Institute of Information and Communications Technology filed Critical National Institute of Information and Communications Technology
Priority to CN201680019204.7A priority Critical patent/CN107427250B/en
Priority to EP16776722.7A priority patent/EP3281582A4/en
Priority to US15/564,071 priority patent/US20180092567A1/en
Publication of WO2016163556A1 publication Critical patent/WO2016163556A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/369Electroencephalography [EEG]
    • A61B5/377Electroencephalography [EEG] using evoked responses
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/165Evaluating the state of mind, e.g. depression, anxiety
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/05Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B5/055Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/369Electroencephalography [EEG]
    • A61B5/377Electroencephalography [EEG] using evoked responses
    • A61B5/378Visual stimuli
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients

Definitions

  • The present invention measures the brain activity of a subject under natural perception, such as when watching a video, and analyzes the measured information to estimate the perceptual semantic content perceived by the subject. The present invention relates to such an estimation method.
  • Technologies (brain information decoding technologies) have been developed for estimating perceived content and predicting behavior by analyzing a subject's brain activity. These technologies are expected to serve as elemental technologies for brain-machine interfaces, and as means for preliminary evaluation and purchase prediction of videos and various products.
  • At present, semantic perception estimation based on brain activity is limited to estimating predetermined perceptual semantic content for restricted perceptual objects, such as simple line drawings or still images containing only one or a few items of perceptual semantic content.
  • The procedure for decoding perceptual semantic content from brain activity using the prior art is as follows. First, model training (calibration) is performed to interpret an individual's brain activity: a stimulus set composed of images and the like is presented to the subject, the evoked brain activity is recorded, and the resulting stimulus-brain-activity pairs (training data samples) give the correspondence between perceptual content and brain activity. A new brain activity is then decoded by judging which training sample it most resembles.
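The prior-art calibrate-then-match procedure described above can be sketched as a nearest-neighbor lookup over recorded stimulus-response pairs. The labels, response vectors, and cosine-similarity metric below are illustrative assumptions, not the method of any document cited here:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two response vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical training samples: (perceptual label, recorded response vector).
training = [
    ("face",  [0.9, 0.1, 0.0]),
    ("house", [0.1, 0.8, 0.1]),
    ("word",  [0.0, 0.2, 0.9]),
]

def decode(new_response):
    """Return the label of the most similar training response."""
    return max(training, key=lambda pair: cosine(pair[1], new_response))[0]
```

Note how this scheme can only ever output labels that appeared in the training set, which is exactly the limitation the present invention addresses.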
  • Patent Document 1 discloses the decoding and reproduction of subjective perceptual or cognitive experience.
  • In this disclosure, an initial set of brain activity data generated from a subject in response to an initial sensory stimulus is obtained using a brain imaging device and converted into a corresponding set of predetermined response values.
  • Non-Patent Document 1 describes encoding and decoding with fMRI (functional Magnetic Resonance Imaging). It shows that both encoding and decoding operations are used to explore the common issue of how information is represented in the brain, but argues that focusing on the encoding model has two important advantages over decoding.
  • Non-Patent Document 1 therefore proposes a systematic modeling approach based on estimating voxel-wise encoding models from fMRI scans, and performs decoding using the obtained encoding models.
  • It has been reported that a brain image can be acquired while a certain scene is viewed, and that an approximation of the scene can be reconstructed from that brain image.
  • Non-Patent Document 2 further shows that text corresponding to the mental content reflected in the brain image can also be generated.
  • This starts from brain images collected while a subject reads the name of a specific item (e.g., "apartment") while looking at a line drawing of that item.
  • A mental semantic representation model of specific concepts is constructed from text data, and aspects of such representations are mapped to activation patterns in the corresponding brain images. It has been reported that from this mapping, a collection of semantically related words (e.g., "door" and "window" for "apartment") could be generated.
  • The technique targeted by the present invention aims to estimate arbitrary perceptual semantic content of a subject under natural perception, such as watching a movie.
  • The prior art is limited in at least one of the following respects, and therefore cannot satisfy the above purpose.
  • The prior art is intended for simple line drawings and still images, and cannot be applied to situations where many objects, impressions, and the like arise dynamically, as in natural moving images.
  • The perceptual semantic contents that can be estimated are limited to those included in the training data samples; any other perceptual semantic content cannot be estimated.
  • The technique to which the present invention is directed estimates the perceptual semantic content perceived by the subject by analyzing the measured information as described above. By associating brain activity and perceptual content in an intermediate representation space (semantic space), estimation of arbitrary perceptual content is realized. Details are given below.
  • The method of estimating perceptual semantic content by analyzing brain activity uses a brain activity analysis device comprising: information presenting means for presenting information serving as a stimulus to a subject; cranial nerve activity detecting means for detecting the subject's brain nerve activity signal evoked by the stimulus; data processing means that receives a language description of the stimulus content and the output of the cranial nerve activity detecting means; semantic space information storage means readable by the data processing means; and training result information storage means readable and writable by the data processing means. Using this device, the subject's brain activity is analyzed and the perceptual semantic content perceived by the subject is estimated.
  • Training information is presented to give the subject a training stimulus, and the language description of the perceptual content that the training stimulus evokes in the subject, together with the output of the cranial nerve activity detecting means that has detected the brain activity evoked by the training stimulus, is input to the data processing means.
  • The training information is an image, a moving image, or the like; this information becomes a stimulus to the subject, and the stimulus evokes some perceptual content in the subject.
  • A language description (annotation) of this perceptual content is acquired and input to the data processing means.
  • The output of the cranial nerve activity detecting means, which detects the brain activity as an electroencephalogram or fMRI signal, is also input to the data processing means.
  • The output of the cranial nerve activity detecting means, such as an electroencephalogram or fMRI signal, refers here to the output evoked by the training stimulus, together with the signals and firing patterns extracted from it.
  • The perceptual semantic content for a new stimulus can be obtained as a linear combination of the language descriptions (annotations) corresponding to the training information.
  • A probability distribution in the semantic space, representing the perceptual semantic content of the new stimulus, can be obtained from the linear combination coefficients and the association obtained in (2) above.
  • A highly probable perceptual semantic content is estimated from the probability distribution obtained in (3) above.
  • Dispersion of the estimation results can be suppressed by setting a threshold on the probability used for estimation in the probability distribution, or by setting a threshold on the number of high-probability perceptual semantic contents.
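The thresholding idea in this paragraph can be sketched as follows; the word-probability values are hypothetical:

```python
def suppress_dispersion(word_probs, p_min=None, top_k=None):
    """Filter estimated contents by a probability threshold (p_min)
    and/or by keeping only the top_k most probable contents."""
    items = sorted(word_probs.items(), key=lambda kv: kv[1], reverse=True)
    if p_min is not None:
        items = [(w, p) for w, p in items if p >= p_min]
    if top_k is not None:
        items = items[:top_k]
    return items

# Hypothetical probability distribution over candidate contents.
probs = {"intimate": 0.42, "cute": 0.31, "scary": 0.08, "young": 0.19}
```

Either threshold alone, or both together, reduces the spread of the reported estimates.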
  • The association in (2) above, using the semantic space representation of the stimulus and the brain activity training data, may be performed for each subject: all or part of the training data is used per subject to obtain a projection function, and the correspondence with positions in the semantic space may be shifted uniformly for each subject according to this projection function.
  • For a given word, its coordinates in the semantic space are found, and a likelihood is calculated from those coordinates and the probability distribution obtained in (3) above; this likelihood value can be used as an indicator of the probability.
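As one possible reading of this paragraph, the probability distribution in the semantic space could be modeled as an isotropic Gaussian, with the likelihood of a word's coordinates used as the indicator. The 2-D space, the word coordinates, and the Gaussian form are illustrative assumptions, not part of the specification:

```python
from math import exp, pi

def gaussian_likelihood(coord, mean, sigma):
    """Likelihood of a point under an isotropic Gaussian in the semantic space."""
    d = len(coord)
    sq = sum((c - m) ** 2 for c, m in zip(coord, mean))
    norm = (2 * pi * sigma ** 2) ** (d / 2)
    return exp(-sq / (2 * sigma ** 2)) / norm

# Hypothetical 2-D semantic space: a distribution estimated from brain activity,
# and the coordinates of two candidate words.
mean = [0.5, 0.2]
word_coords = {"cute": [0.6, 0.1], "scary": [-0.7, 0.8]}
scores = {w: gaussian_likelihood(c, mean, sigma=0.5) for w, c in word_coords.items()}
```

A word lying near the distribution's mode receives a higher likelihood and is therefore reported as the more probable perceived content.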
  • The present invention makes it possible to estimate arbitrary perceptual semantic content from brain activity under natural perception, such as moving images.
  • FIG. 1 is a conceptual diagram of the brain activity semantic space model and perceptual semantic content estimation. It shows that perceptual semantic content is estimated from brain activity under arbitrary new conditions by learning the correspondence between brain activity and a corpus-derived semantic space as a quantitative model.
  • FIG. 2 is a diagram illustrating an example of perceptual semantic content estimation from brain activity during TV commercial (CM) video viewing. (Left) An example of a CM clip presented to the subject; (right) perceptual semantic content estimated from brain activity while viewing the clip. Each line next to the clip indicates words that the subject perceives with high probability, for the parts of speech noun, verb, and adjective.
  • FIG. 3 is a diagram showing an example of quantitative time-series evaluation of a specific impression from brain activity. The degree to which a specific impression (in this case "cute") was perceived was estimated from the brain activity while the subject watched three 30-second CMs.
  • FIG. 4 is a diagram showing an example of a device configuration for applying the present invention.
  • Training stimuli (images, moving images, etc.) are presented to the subject 2 by the display device 1, and the brain activity signal of the subject 2 is detected by the cranial nerve activity detector 3, which can detect, for example, an EEG (electroencephalogram) or an fMRI (functional magnetic resonance imaging) signal. As the brain activity signal, brain neuron firing patterns and signals of activity change in one or more specific regions are detected.
  • The detected brain activity signal is processed by the data processing device 4.
  • The natural language annotation from the subject 2 is input to the data processing device 4.
  • The semantic space used for data processing is obtained by the analysis device 6 analyzing the corpus data from the storage 5, and is stored in the storage 7.
  • Natural language annotation data from the subject 2 or a third party is analyzed as a vector in the semantic space by the data processing device 4, and the analysis result, together with the brain activity signal of the subject 2, is stored in the storage 8 as a training result.
  • For estimation, a brain activity signal is detected by the cranial nerve activity detector 3, and the data processing device 4 analyzes it on the basis of the semantic space from the storage 7 and the training result from the storage 8.
  • The analysis result is output from the data processing device 4.
  • The storage 5, the storage 7, and the storage 8 may be divided areas of a single storage, and the data processing device 4 and the analysis device 6 may be realized by switching a single computer between the two roles.
  • FIG. 1 is a conceptual diagram of a brain activity semantic space model and perceptual meaning content estimation.
  • (A) For the training stimulus 11 (image, moving image, etc.), a language description (annotation) 13 of the perceptual content that the stimulus evokes in the subject is acquired.
  • That is, a still image or moving image (training data) is presented to the subject 12 as a training stimulus, and a list of the language descriptions recalled by the subject receiving the presentation is created.
  • (B) Using a large-scale database such as the corpus 16, a semantic space describing the semantic relationships of the words appearing in the language descriptions is constructed.
  • For this construction, natural language processing techniques such as latent semantic analysis and the word2vec method can be used.
  • As the corpus, newspaper and magazine articles, encyclopedias, stories, and the like can be used.
  • As is well known, a corpus-derived semantic space is a space that projects elements such as words into a fixed-length vector space based on statistical properties inherent in the corpus. If such a semantic space has already been obtained, it can of course be reused.
  • Latent semantic analysis performs singular value decomposition on a co-occurrence matrix indicating which words are contained in the sentences to be analyzed, and applies dimensionality reduction; it is a well-known principal component analysis technique for grasping the main semantic structure of the target text.
  • The word2vec method is a quantification method that expresses words as vectors; word2vec learns a fixed-length vector space representation of each word by optimizing a model that predicts word occurrence in sentences.
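A minimal sketch of building a corpus-derived semantic space in the spirit of latent semantic analysis: singular value decomposition of a toy term-document co-occurrence matrix, keeping the top components as word coordinates. The vocabulary, counts, and dimensionality are illustrative assumptions only:

```python
import numpy as np

# Toy co-occurrence matrix (rows: words, columns: documents).
words = ["dog", "cat", "car", "truck"]
X = np.array([
    [2.0, 1.0, 0.0, 0.0],   # dog
    [1.0, 2.0, 0.0, 0.0],   # cat
    [0.0, 0.0, 2.0, 1.0],   # car
    [0.0, 0.0, 1.0, 2.0],   # truck
])

# Truncated SVD: keep the top-k singular components as the semantic space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
embeddings = U[:, :k] * s[:k]   # word coordinates in the k-dim semantic space

def sim(i, j):
    """Cosine similarity between two words in the reduced space."""
    a, b = embeddings[i], embeddings[j]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Words that co-occur in the same documents ("dog" and "cat") end up close in the reduced space, while unrelated words ("dog" and "car") end up nearly orthogonal, which is the property the semantic space relies on.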
  • (C) Using the training data, the stimulus 11 is projected (15) onto the semantic space, and the representation in the semantic space is associated with the brain activity output 14.
  • Training data: images, moving images, etc.
  • The brain activity signals of the subject, such as EEG (electroencephalogram) and fMRI signals, are detected.
  • The detected brain activity signal is then associated with a position in the semantic space; this association links EEG and fMRI signal waveforms with representations in the semantic space.
  • This association is preferably performed for each subject. In that case, rather than using all of the training data, part of it may be used to obtain a per-subject projection function in the semantic space, and the association with positions in the semantic space may be shifted uniformly according to that projection function.
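One common way to realize the association between brain activity signals and semantic-space positions is a regularized linear map (ridge regression). The patent does not prescribe a particular estimator, so the following is only an illustrative sketch with synthetic data; the channel count, dimensionality, and per-subject offset are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 50 trials of a 20-channel brain signal (X),
# paired with 3-D semantic-space coordinates of the annotations (Y).
n_trials, n_channels, n_dims = 50, 20, 3
W_true = rng.normal(size=(n_channels, n_dims))
X = rng.normal(size=(n_trials, n_channels))
Y = X @ W_true                        # noiseless toy relationship

# Ridge regression: W = (X^T X + lam * I)^-1 X^T Y
lam = 1e-6
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

# Hypothetical per-subject uniform shift ("projection function")
# estimated from a subset of the training data.
offset = np.array([0.1, -0.2, 0.05])

def predict(x_new, shift=offset):
    """Map a new brain signal to semantic-space coordinates, with a per-subject shift."""
    return x_new @ W + shift
```

The uniform `offset` corresponds to the idea above of shifting the association in the semantic space per subject instead of refitting the whole map.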
  • (D) For a new brain activity, a probability distribution in the semantic space representing the perceptual semantic content is obtained from the association obtained in (C) above. New data (an image, video, etc.) is presented to the subject, and the brain activity signal is detected using the same acquisition means as for the training data. The detected brain activity signal is compared with the brain activity signals obtained from the training data to determine which training data it is close to.
  • From the probability distribution corresponding to the brain activity signal, the language description of the perceived content can be obtained with a probabilistic weight.
  • A probable language description is estimated using this probabilistic weight.
  • Since the perceptual content evoked in the stimulated subject is expressed as language descriptions using this list, it is desirable that the list cover the whole of the above corpus-derived semantic space, or a selected predetermined part of it.
  • The present invention provides a technique for estimating arbitrary perceptual semantic content perceived by a subject from brain activity under relatively dynamic and complex video/audio perception, such as television commercials (CMs).
  • With the present invention, it is possible to estimate arbitrary perceptual semantic content from brain activity under natural perception, such as viewing a moving image. For example, it is possible to evaluate quantitatively from brain activity whether a moving image, such as the above-mentioned TV commercial, exhibits its intended expressive effect.
  • A topic model such as LDA (latent Dirichlet allocation) can also be applied. This makes it easy to express, as a sentence, the perceptual semantic content estimated from the brain activity.
  • An example of the procedure for this is shown below.
  • (B) Using a large-scale database such as the corpus 16, a topic model that describes the semantic relationships of the words appearing in the language descriptions is constructed.
  • This topic model can be prepared by techniques well known for the LDA method.
  • The topic model is a statistical model from which the probability of appearance of each word can be obtained.
  • (C) The training data is replaced with the labels of the topics to which the morphemes of the training data belong, and the labels are associated with the brain activity output 14.
  • Training data: images, moving images, etc.
  • The brain activity signals of the subject, such as EEG (electroencephalogram) and fMRI signals, are detected.
  • The detected brain activity signal is associated with the label of the topic to which the morphemes of the training data belong.
  • A linear combination of labels may be made to correspond to the brain activity signal for one item of training data, or conversely, a linear combination of brain activity signals may be made to correspond to one label.
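The replacement of training-data morphemes with topic labels in (C) can be sketched as an argmax lookup over topic-word probabilities. The topics, words, and probability values below are hypothetical, not the output of an actual LDA fit:

```python
# Hypothetical topic-word probabilities, as an LDA-style topic model would give.
topic_word = {
    "topic_animal":  {"dog": 0.5, "cat": 0.4, "cute": 0.1},
    "topic_product": {"logo": 0.5, "font": 0.3, "display": 0.2},
}

def label_of(word):
    """Replace a morpheme with the label of its most probable topic."""
    best, best_p = None, 0.0
    for topic, dist in topic_word.items():
        p = dist.get(word, 0.0)
        if p > best_p:
            best, best_p = topic, p
    return best

# A hypothetical annotation, morpheme by morpheme, mapped to topic labels.
annotation = ["dog", "cute", "logo"]
labels = [label_of(w) for w in annotation]
```

The resulting label sequence, rather than the raw words, is what gets associated with the brain activity output.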
  • This association is preferably performed for each subject; in that case, part of the association work can be omitted by using only part of the training data instead of all of it.
  • (D) For a new brain activity, a probability distribution over the language descriptions represented by topic model labels, representing the perceptual semantic content, is obtained from the association obtained in (C) above.
  • New data (an image, video, etc.) is presented to the subject, and the brain activity signal is detected using the same acquisition means as for the training data.
  • The detected brain activity signal is compared with the brain activity signals obtained from the training data to determine which training data it is close to, or which combination of training data it is closest to.
  • This comparison can be performed using, for example, the peak value of the cross-correlation between the brain activity signal for the new data and the brain activity signals for the training data as an index. This determination yields the probability distribution of the language description corresponding to the brain activity signal detected during presentation of the new data.
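Using the cross-correlation peak as the comparison index, as this paragraph suggests, might look like the following sketch; the signals and clip names are synthetic:

```python
def xcorr_peak(a, b):
    """Peak of the cross-correlation between two equal-length signals."""
    n = len(a)
    best = float("-inf")
    for lag in range(-n + 1, n):
        s = sum(a[i] * b[i - lag] for i in range(n) if 0 <= i - lag < n)
        best = max(best, s)
    return best

# A new brain signal compared against signals recorded for training clips.
new_signal = [0.0, 1.0, 2.0, 1.0, 0.0]
training_signals = {
    "clip_A": [0.0, 1.0, 2.0, 1.0, 0.0],   # same shape as the new signal
    "clip_B": [2.0, 0.0, -1.0, 0.0, 2.0],  # different shape
}
closest = max(training_signals, key=lambda k: xcorr_peak(new_signal, training_signals[k]))
```

Scanning over lags makes the index tolerant of small timing offsets between the new recording and the training recordings.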
  • (E) From the probability distribution obtained in (D) above, a highly probable perceptual semantic content is estimated.
  • A sentence can then be estimated by the same methods used with LDA.
  • Since the perceptual content evoked in the stimulated subject is expressed as language descriptions using this list, it is desirable that the list cover the whole of the corpus-derived semantic space, or a selected predetermined part of it.
  • The present invention provides a technique for estimating arbitrary perceptual semantic content perceived by a subject from brain activity under relatively dynamic and complex video and audio perception, such as television commercials (CMs).
  • CM: television commercial
  • The example shown in FIG. 2 is an example of estimation of perceptual semantic content from brain activity during CM video viewing.
  • Its purpose is to answer rationally the question of how a perception such as "intimacy" can be induced in the viewer.
  • FIG. 2 shows the perceptual semantic content estimated from brain activity by the procedures (a) to (e) above for the presented CM video; the left column shows an example of a CM clip presented to the subject, and the right column shows the perceptual semantic content estimated from brain activity while viewing the clip.
  • Each row next to the clip indicates, in order from the top, words having a high probability of being perceived by the subject, for the parts of speech noun, verb, and adjective.
  • Figure 2 (a): A scene in which a daughter holds a mobile phone toward her mother. (Nouns) male, female, single, neighborhood, parents, older relatives, mother; (verbs) quit, date, get acquainted, join together, visit, die; (adjectives) intimate, poor, young.
  • Figure 2 (b): A dog and its owner sitting on a bench near a radio tower, looking at the scenery. (Nouns) female, male, elder, blonde, friend, her, mother, single; (verbs) date, wear, speak, ask, sit; (adjectives) intimate, friendly, young, cute.
  • Figure 2 (c): A scene where a dog appears explosively, breaking through the center of the scene in (b). (Nouns) face, mouth, glasses, facial expression, personal appearance, seriousness; (verbs) speak, date, get angry, wear, sit, swing around; (adjectives) intimate, cute, kind, young, scary.
  • Figure 2 (d): The dog of (c) campaigns for a product. (Nouns) letter, font, logo, Gothic, alphabet, display; (verbs) replace, write, append.
  • From these results, it becomes possible to objectively judge whether these perceptual semantic contents match the intended expressive effect of the video.
  • The example shown in FIG. 3 is an example of quantitative time-series evaluation of a specific impression from brain activity. Its purpose is to provide a quantitative index, for example of which of two videos A and B more strongly gives a specific impression to the viewer.
  • CM-1: A scene where a high school girl is talking with a relative.
  • CM-2: A scene where a board meeting is held.
  • CM-3: A scene where an idol is practicing dance. It can be seen that a relatively strong reaction is obtained.
  • The present invention can be widely used for prior evaluation of audiovisual materials (videos, music, teaching materials, etc.) and for reading perceptual and behavioral intentions, as a basis for brain-machine interfaces.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Psychiatry (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Psychology (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Social Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Educational Technology (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Physiology (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Dermatology (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Provided is an estimation method for measuring and analyzing brain activity to estimate perceptual semantic content. The method comprises: (A) inputting, to a data processing means, the output produced when a cranial nerve activity detection means detects the brain activity induced in a subject by a training stimulus, together with an annotation of the perceptual content; (B) associating a semantic space representation of the training stimulus with the output of the cranial nerve activity detection means in a stored semantic space, and storing the association in a training result information storage means; (C) inputting, to the data processing means, the output produced when the cranial nerve activity detection means detects brain activity induced by a novel stimulus, and obtaining, on the basis of the association, a probability distribution in the semantic space which represents perceptual semantic contents for that output; and (D) estimating a highly probable perceptual semantic content on the basis of the probability distribution. The association process may be performed for each subject. In the probability estimation process, the likelihood calculated from the coordinates of a given word in the semantic space and the probability distribution is used as an indicator.

Description

Method for estimating perceptual semantic content by analysis of brain activity

The present invention measures the brain activity of a subject under natural perception, such as when watching a video, and analyzes the measured information to estimate the perceptual semantic content perceived by the subject; it relates to such a method of estimating perceptual semantic content by analysis of brain activity.

 被験者の脳活動を解析することで、知覚内容の推定や行動予測を行うための技術(脳情報デコーディング技術)が開発されている。これらの技術は、脳機械間インターフェースの要素技術として、また、映像や各種製品の事前評価や購買予測等を行う手段として期待されている。
 現時点における上記脳活動からの意味知覚推定技術は、単一または少数の知覚意味内容が含まれた簡単な線画や静止画等の限られた知覚対象について、予め決められた知覚意味内容を推定するものに限られている。
 従来技術を用いて脳活動から知覚意味内容をデコーディングするための手順は、以下の通りである。最初に、個人の脳活動を解釈するためのモデル訓練(キャリブレーション)を行う。この段階では、画像等で構成される刺激セットを被験者に提示し、それらの刺激が誘発した脳活動を記録する。この刺激−脳活動ペア(訓練用データサンプル)により、知覚内容と脳活動の対応付けを得る。次に、知覚意味内容推定の対象となる新規の脳活動を記録し、その脳活動が訓練用データサンプルで得られたどの脳活動と類似しているかを判断することで、知覚意味内容推定を行う。
 特許文献1には、主観的な知覚的経験または認識経験の解読および再現することが、開示されている。この開示においては、対象から、最初の知覚刺激に応じて生み出される脳活動データの最初のセットは、脳画像形成装置を使って得られ、それに対応する所定の応答値のセットに変換される。第2の知覚刺激に応じて生み出される脳活動データの第2セットは、対象からデコーディング分布を用いて得られるが、上記所定の応答値に応じた脳の活動データの第2のセットである確率が決定される。脳活動刺激の第2セットは、脳活動データの第2セットと予測された応答値の間の対応の確率に基づいて解読される。
 また、非特許文献1には、fMRI(functional Magnetic Resonance Imaging)による符号化(エンコーディング)と復号化(デコーディング)が記載されている。この文献では、共通の問題である情報が脳においてどのように表現されるかについて探究するために、エンコーディング操作とデコーディング操作の両方が用いられることが示されているが、エンコーディング・モデルに集中することはデコーディングよりも2つの重要な利点がある、としている。その理由は、第1に、デコーディングモデルでは部分的な説明だけが得られるが、原理的にエンコーディング・モデルでは興味のある領域の完全な機能的な説明が得られる。第2に、エンコーディング・モデルから最適デコーディングモデルを得ることは直接的であるが、デコーディングモデルからエンコーディング・モデルを得ることはずっと困難である、ということである。そこで、非特許文献1では、fMRIスキャンによるボクセルのエンコーディング・モデルの推定からシステマティックモデル・アプローチを提案し、得られたエンコーディング・モデルを用いてデコーディングを行っている。
 また、ある場面を見ている間に脳画像を取得し、その脳画像から、その場面に近いものを再構成することができることが、既に報告されている。非特許文献2には、さらに、脳画像で反映される心的内容に対応するテキストを生成することもできることが示されている。これは、あるアイテムの名前の線画を見ながら被験者がその具体的なアイテム(例えば、「アパート」)の名前を読んでいる時に集められた脳画像から始めるものである。具体的な概念の精神的意味表示モデルを、テキストデータから構築し、それに対応する脳画像でそのような表示の様相を活性化のパターンにマッピングするようにするものである。このマッピングから、意味論的に関係する語(例えば、「アパート」に対する「ドア」、「ウインドウ」)の集まりを生成することができた旨、報告されている。
Technologies for estimating perceptual content and predicting behavior by analyzing a subject's brain activity (brain-information decoding technologies) have been developed. These technologies are expected to serve as elemental technologies for brain-machine interfaces and as a means of pre-release evaluation and purchase prediction for videos and other products.
Current techniques for estimating semantic perception from brain activity are limited to estimating predetermined semantic content for restricted perceptual targets, such as simple line drawings or still images that contain only one or a few semantic elements.
The conventional procedure for decoding perceptual semantic content from brain activity is as follows. First, model training (calibration) is performed so that an individual's brain activity can be interpreted. At this stage, a stimulus set composed of images or the like is presented to the subject, and the brain activity evoked by those stimuli is recorded. These stimulus-brain activity pairs (training data samples) provide a correspondence between perceptual content and brain activity. Next, new brain activity that is the target of estimation is recorded, and the semantic content is estimated by judging which brain activity in the training data samples the new activity most resembles.
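As an illustration of this conventional pipeline only (the document itself prescribes no code), the training and matching steps can be sketched with synthetic data; the array shapes, the label set, and the use of correlation as the similarity measure are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training phase: one recorded brain-activity vector per training stimulus,
# paired with a label describing its perceptual content (synthetic data).
n_train, n_voxels = 5, 20
train_responses = rng.normal(size=(n_train, n_voxels))
train_labels = ["dog", "car", "face", "house", "tree"]

def decode(new_response):
    """Return the label of the training response most similar
    (highest Pearson correlation) to the new activity pattern."""
    sims = [np.corrcoef(new_response, r)[0, 1] for r in train_responses]
    return train_labels[int(np.argmax(sims))]

# A new measurement resembling the "face" training response decodes as "face".
new_response = train_responses[2] + 0.1 * rng.normal(size=n_voxels)
decoded = decode(new_response)
```

Note that such a scheme can only ever output labels that already appear in the training set, which is exactly the limitation of the prior art discussed below.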
Patent Document 1 discloses decoding and reproducing subjective perceptual or cognitive experience. In that disclosure, a first set of brain activity data, generated from a subject in response to a first sensory stimulus, is obtained with a brain imaging device and converted into a corresponding set of predetermined response values. A second set of brain activity data, generated in response to a second sensory stimulus, is obtained from the subject, and the probability that the second set corresponds to the predetermined response values is determined using a decoding distribution. The second sensory stimulus is then decoded based on the probability of correspondence between the second set of brain activity data and the predicted response values.
Non-Patent Document 1 describes encoding and decoding with fMRI (functional Magnetic Resonance Imaging). It shows that both encoding and decoding operations can be used to explore the common problem of how information is represented in the brain, but argues that concentrating on encoding models has two important advantages over decoding. First, a decoding model provides only a partial description, whereas an encoding model can in principle provide a complete functional description of a region of interest. Second, deriving the optimal decoding model from an encoding model is straightforward, whereas deriving an encoding model from a decoding model is much more difficult. Non-Patent Document 1 therefore proposes a systematic modeling approach that estimates voxel-wise encoding models from fMRI scans and performs decoding using the obtained encoding models.
It has also been reported that brain images acquired while a subject views a scene can be used to reconstruct an approximation of that scene. Non-Patent Document 2 further shows that text corresponding to the mental content reflected in a brain image can be generated. That work starts from brain images collected while subjects read the name of a concrete item (for example, "apartment") while viewing a line drawing of it. A model of the mental semantic representation of concrete concepts is built from text data, and aspects of that representation are mapped to activation patterns in the corresponding brain images. It is reported that this mapping could generate collections of semantically related words (for example, "door" and "window" for "apartment").

US Patent Application Publication No. 2013-0184558

Thomas Naselaris, Kendrick N. Kay, Shinji Nishimoto, Jack L. Gallant, "Encoding and decoding in fMRI", NeuroImage 2011, 56(2):400-410
Francisco Pereira, Greg Detre, Matthew Botvinick, "Generating text from functional brain images", Frontiers in Human Neuroscience 2012, 5:72

The aim of the present invention is a technique that can estimate arbitrary perceptual semantic content experienced by a subject under natural perception, such as while watching a video. In this regard, the prior art is limited in at least one of the following ways and cannot satisfy this aim. (1) The prior art targets simple line drawings and still images and cannot handle situations, such as natural video, in which many objects and impressions arise dynamically. (2) In the prior art, the semantic content that can be estimated is limited to what is contained in the training data samples; arbitrary content beyond them cannot be estimated.

As described above, the technique of the present invention includes estimating the perceptual semantic content a subject is perceiving by analyzing measured information. Estimation of arbitrary perceptual content is achieved by associating brain activity with perceptual content within an intermediate representational space (semantic space). Details are given below.
The method of the present invention for estimating perceptual semantic content by analyzing brain activity uses a brain activity analysis apparatus comprising: information presentation means for presenting stimulus information to a subject; cranial nerve activity detection means for detecting the subject's neural activity signals evoked by the stimulus; data processing means that receives a language description of the stimulus content and the output of the cranial nerve activity detection means; semantic space information storage means readable by the data processing means; and training result information storage means readable and writable by the data processing means. The subject's brain activity is analyzed with this apparatus to estimate the perceptual semantic content the subject is perceiving.
(1) Training information is presented to the subject to provide a training stimulus. A language description of the perceptual content the training stimulus evokes in the subject, together with the output of the cranial nerve activity detection means that detected the brain activity evoked by the training stimulus, is input to the data processing means.
Here, the training information consists of images, videos, or the like; this information acts as a stimulus to the subject, and the stimulus evokes some perceptual content. A language description (annotation) of this perceptual content is acquired and input to the data processing means. The output of the cranial nerve activity detection means, which detects brain activity as electroencephalogram (EEG) or fMRI signals, is also input to the data processing means.
(2) The semantic space stored in the semantic space information storage means is applied, the semantic-space representation of the training stimulus is associated with the output of the cranial nerve activity detection means within that space, and the result of the association is stored in the training result information storage means.
The semantic space is constructed using a large-scale database such as a corpus and describes the semantic relationships among the words appearing in the language descriptions.
The association is performed on the coordinate axes of the semantic space; here it refers to relating the semantic-space representation evoked by a training stimulus to the brain activity evoked by that stimulus.
(3) New information is presented to the subject to provide a new stimulus, and the output of the cranial nerve activity detection means, which detects the brain activity the new stimulus evokes in the subject, is input to the data processing means. From the association obtained in (2), a probability distribution over the semantic space representing the perceptual semantic content is obtained for this output.
For brain activity evoked by a new stimulus, the output of the cranial nerve activity detection means (e.g., EEG or fMRI signals) is decomposed, for example as a linear combination, into the outputs evoked by the training stimuli or into signals and firing patterns extracted from them. The perceptual semantic content of the new stimulus can then be obtained as the corresponding linear combination of the language descriptions (annotations) of the training information. From the coefficients of this linear combination and the association obtained in (2), a probability distribution over the semantic space representing the perceptual semantic content of the new stimulus can be obtained.
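This decomposition step can be sketched under the assumption that it is an ordinary least-squares decomposition over synthetic training responses (the solver choice, array shapes, and names are illustrative, not prescribed by the document):

```python
import numpy as np

rng = np.random.default_rng(1)

# Training pairs: brain responses (rows) and the semantic-space vectors
# of their annotations (synthetic stand-ins).
n_train, n_voxels, n_sem = 6, 30, 4
train_responses = rng.normal(size=(n_train, n_voxels))
train_sem_vecs = rng.normal(size=(n_train, n_sem))

# A new response, constructed here as a known mixture of training responses.
true_w = np.array([0.7, 0.0, 0.3, 0.0, 0.0, 0.0])
new_response = true_w @ train_responses

# Decompose the new response as a linear combination of training responses...
w, *_ = np.linalg.lstsq(train_responses.T, new_response, rcond=None)
# ...and carry the same coefficients over to the annotation vectors,
# giving the semantic-space representation of the new percept.
est_sem = w @ train_sem_vecs
```

Because the six training responses are linearly independent here, the least-squares fit recovers the mixing weights exactly; with real, noisy recordings a regularized solver would be the more natural choice.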
(4) Highly probable perceptual semantic content is estimated from the probability distribution obtained in (3).
In this estimation, the variance of the estimation results can be suppressed by, for example, setting a threshold on the probability used for estimation or on the number of highly probable semantic items.
When there are multiple subjects, the association in (2) between the semantic-space representation of the stimulus and the brain activity training data is performed per subject. It may be performed on all or part of the training data to obtain a projection function for each subject, and the association with positions in the semantic space may then be shifted uniformly for each subject according to that projection function.
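One way to realize such a per-subject projection function, sketched here under the assumption that it is a linear map fitted by least squares on a subset of the training trials (the simulation of subject-specific responses is entirely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

n_trials, n_voxels, n_sem = 40, 25, 3
sem_vecs = rng.normal(size=(n_trials, n_sem))   # shared semantic-space stimuli

# Simulate one subject's responses as a subject-specific linear map plus noise.
def simulate_subject(seed):
    r = np.random.default_rng(seed)
    w_true = r.normal(size=(n_sem, n_voxels))
    return sem_vecs @ w_true + 0.01 * r.normal(size=(n_trials, n_voxels))

responses_s1 = simulate_subject(10)

# Fit this subject's projection on part of the training data only...
w_fit, *_ = np.linalg.lstsq(sem_vecs[:30], responses_s1[:30], rcond=None)
# ...then reuse it unchanged for the remaining trials.
pred = sem_vecs[30:] @ w_fit
err = float(np.mean((pred - responses_s1[30:]) ** 2))
```

The held-out prediction error stays near the simulated noise floor, illustrating why a projection fitted on only part of the training data can be applied uniformly to the rest.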
In (4), when estimating highly probable semantic content, the semantic-space coordinates of any given word can be found, the likelihood of those coordinates under the probability distribution obtained in (3) can be computed, and that likelihood value can be used as the index of probability.
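As a sketch of this likelihood-based ranking, assuming for illustration that the estimated semantic-space distribution is an isotropic Gaussian and that the word coordinates shown are hypothetical:

```python
import numpy as np

# Estimated distribution over a 2-D semantic space for some new brain
# activity, modelled here as an isotropic Gaussian (an assumption).
est_mean = np.array([0.9, 0.1])
sigma = 0.5

def log_likelihood(word_vec):
    """Log-density of a word's semantic coordinates under the distribution."""
    d = word_vec - est_mean
    k = len(est_mean)
    return float(-0.5 * (d @ d) / sigma**2 - k * np.log(sigma * np.sqrt(2 * np.pi)))

# Hypothetical semantic coordinates for two candidate words.
word_vecs = {"cute": np.array([1.0, 0.2]), "machine": np.array([-0.8, -0.9])}
scores = {w: log_likelihood(v) for w, v in word_vecs.items()}
best = max(scores, key=scores.get)   # the more probable percept
```

Because the candidate word only needs coordinates in the semantic space, not membership in the training set, any word from the corpus can be scored this way.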

The present invention makes it possible to estimate arbitrary perceptual semantic content from brain activity under natural perception, such as while viewing videos.

FIG. 1 is a conceptual diagram of the semantic space model of brain activity and of perceptual semantic content estimation. By learning the correspondence between brain activity and a corpus-derived semantic space as a quantitative model, perceptual semantic content can be estimated from brain activity under arbitrary new conditions.
FIG. 2 shows an example of estimating perceptual semantic content from brain activity while viewing television commercial (CM) videos. (Left) Examples of CM clips presented to the subject; (right) perceptual semantic content estimated from brain activity while viewing each clip. Each row next to a clip lists words with a high estimated probability of being perceived, for nouns, verbs, and adjectives respectively.
FIG. 3 shows an example of quantitative time-series evaluation of a specific impression from brain activity. The degree to which a specific impression (in this case, "cute") was perceived was estimated from brain activity while the subject watched three 30-second CMs.
FIG. 4 is a diagram showing an example of a device configuration for applying the present invention.

Embodiments of the present invention are described in detail below with reference to the drawings.

FIG. 4 shows an example apparatus configuration for applying the present invention. Training stimuli (images, videos, etc.) are presented to the subject 2 on the display device 1, and the subject's brain activity signals are detected by the cranial nerve activity detector 3, which can detect, for example, EEG or fMRI signals. The detected brain activity signals include neuronal firing patterns and signals of activity change in one or more specific regions, and are processed by the data processing device 4. Natural language annotations from the subject 2 are also input to the data processing device 4. The semantic space used for data processing is obtained by analyzing corpus data from storage 5 with the analysis device 6 and is stored in storage 7.
For the training stimuli, natural language annotation data from the subject 2 or a third party is analyzed by the data processing device 4 as vectors in the semantic space, and the analysis results are stored in storage 8 as training results together with the subject's brain activity signals.
When a new stimulus is presented to the subject 2 through the display device 1, a brain activity signal is detected by the cranial nerve activity detector 3 and analyzed by the data processing device 4 on the basis of the semantic space from storage 7 and the training results from storage 8; the analysis result is output from the data processing device 4.
Here, storage 5, storage 7, and storage 8 may be partitions of a single storage device, and the data processing device 4 and the analysis device 6 may be a single computer used for both roles.
In the method of the present invention for estimating perceptual semantic content by analyzing brain activity, brain-information decoding is performed through a corpus-derived semantic space, which allows arbitrary perceptual semantic content to be decoded from brain activity. The specific procedure is as follows and is described with reference to FIG. 1, a conceptual diagram of the semantic space model of brain activity and of perceptual semantic content estimation. It outlines the procedure for estimating semantic content from brain activity under arbitrary new conditions by learning the correspondence between brain activity and a corpus-derived semantic space as a quantitative model.
(a) For a training stimulus 11 (image, video, etc.), a language description (annotation) 13 of the perceptual content the stimulus evokes in the subject is acquired.
More specifically, still images or videos (training data) are presented to the subject 12 as training stimuli, and a list of the language descriptions recalled by the subject is created.
(b) Using a large-scale database such as the corpus 16, a semantic space describing the semantic relationships among words appearing in the language descriptions is constructed in advance. The use of natural language processing techniques such as latent semantic analysis or word2vec as a means of constructing a semantic space from a corpus is already well known.
Newspaper and magazine articles, encyclopedias, stories, and the like can be used as the corpus. As is well known, a corpus-derived semantic space is a space in which elements such as words are projected into a fixed-length vector space based on statistical properties inherent in the corpus. Naturally, if such a semantic space has already been obtained, it can be reused.
Latent semantic analysis is a well-known principal-component-style technique that captures the main semantic structure of target text by applying singular value decomposition to a co-occurrence matrix indicating which words occur in the sentence objects being analyzed, and then reducing the dimensionality.
The word2vec method is a quantification technique that represents words as vectors. Word2vec learns a fixed-length vector-space representation of words by optimizing a model that predicts word occurrence in sentences.
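A toy version of the latent-semantic-analysis construction, with an invented four-document corpus (the counts, the vocabulary, and the choice of two retained dimensions are all assumptions of the sketch):

```python
import numpy as np

# Document-term counts (rows: documents, columns: words).
words = ["dog", "cat", "bark", "meow", "car", "engine"]
X = np.array([
    [2, 0, 1, 0, 0, 0],   # a document about dogs
    [0, 2, 0, 1, 0, 0],   # a document about cats
    [1, 1, 1, 1, 0, 0],   # a document about pets
    [0, 0, 0, 0, 3, 2],   # a document about cars
], dtype=float)

# Latent semantic analysis: SVD of the co-occurrence matrix, keep k dimensions.
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
word_vecs = Vt[:k].T * s[:k]          # each word as a k-dimensional semantic vector

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words that co-occur ("dog"/"bark") end up closer than unrelated ones ("dog"/"car").
sim_related = cos(word_vecs[words.index("dog")], word_vecs[words.index("bark")])
sim_unrelated = cos(word_vecs[words.index("dog")], word_vecs[words.index("car")])
```

The same fixed-length vectors serve as the semantic-space coordinates into which stimuli and annotations are projected in the steps that follow.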
(c) In the semantic space obtained in (b), the training data is used to project (15) the stimulus 11 into the semantic space, and the representation in that space is associated with the brain activity output 14.
Training data (images, videos, etc.) is presented to the subject, and the resulting brain activity signals, such as EEG or fMRI signals, are detected. The detected signals are associated with positions in the semantic space; that is, EEG or fMRI signal waveforms are associated with representations in the semantic space.
This association is preferably performed per subject. In that case, instead of using all of the training data, a portion of it may be used to obtain a semantic-space projection function for each subject, and the association with positions in the semantic space may then be shifted uniformly according to that projection function.
(d) For new brain activity, a probability distribution over the semantic space representing the perceptual semantic content is obtained from the association obtained in (c).
New data (images, videos, etc.) is presented to the subject, and brain activity signals are detected using the same acquisition means as for the training data. The detected signal is compared with the signals obtained from the training data to determine which training item, or which mixture of training items, it most closely resembles. This comparison can be performed using, for example, the peak value of the cross-correlation between the brain activity signal for the new data and the brain activity signals for the training data as an index. This determination yields, in the semantic space, a probability distribution corresponding to the brain activity signal detected during presentation of the new data.
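The cross-correlation comparison can be sketched as follows; the synthetic time courses, the time shift, and the normalization are illustrative assumptions, not taken from the document:

```python
import numpy as np

rng = np.random.default_rng(2)

t = np.arange(200)
# Two training brain-signal time courses (synthetic stand-ins).
train_signals = {
    "scene_A": np.sin(2 * np.pi * t / 25),
    "scene_B": np.sign(np.sin(2 * np.pi * t / 60)),
}

def xcorr_peak(a, b):
    """Peak of the normalized cross-correlation between two time courses."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.correlate(a, b, mode="full")))

# New signal: scene_A's response, time-shifted and corrupted with noise.
new_signal = np.roll(train_signals["scene_A"], 7) + 0.2 * rng.normal(size=len(t))
peaks = {name: xcorr_peak(new_signal, s) for name, s in train_signals.items()}
closest = max(peaks, key=peaks.get)   # the training item the new data resembles
```

Using the peak over all lags makes the index tolerant of timing offsets between the new recording and the training recordings, which is why it is a reasonable similarity measure here.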
(e) From the probability distribution obtained in (d), highly probable perceptual semantic content is estimated.
In (a), language descriptions of the perceptual content corresponding to the training data were obtained, and in (b), each word was expressed as a vector in the semantic space. Therefore, language descriptions of the perceptual content corresponding to a brain activity signal can be obtained with probabilistic weights from the probability distribution corresponding to that signal, and these weights are used to estimate highly probable language descriptions.
Because the perceptual content evoked in stimulated subjects is expressed as language descriptions via the list in (a), it is desirable that the list cover all of the corpus-derived semantic space, or a selected predetermined portion of it.
As described above, the present invention provides a technique for estimating arbitrary perceptual semantic content perceived by a subject from brain activity under relatively dynamic and complex audiovisual perception, such as television commercials (CMs). The invention makes it possible to estimate arbitrary semantic content from brain activity under natural perception of videos and the like; for example, whether a video work such as a TV commercial achieves its intended expressive effect can be evaluated quantitatively from brain activity.

For the handling of annotations in Embodiment 1, a topic model based on LDA (Latent Dirichlet Allocation) can be applied. This makes it easy to express the perceptual semantic content estimated from brain activity as sentences. An example procedure follows.
(A) For a training stimulus 11 (image, video, etc.), a language description (annotation) 13 of the perceptual content the stimulus evokes in the subject is acquired.
More specifically, still images or videos (training data) are presented to the subject 12 as training stimuli, and a list of the language descriptions recalled by the subject is created.
(B) Using a large-scale database such as the corpus 16, a topic model describing the semantic relationships among words appearing in the language descriptions is constructed in advance. This topic model can be prepared with the well-known LDA method.
As is well known, a topic model is a statistical model from which the appearance probability of each word can be obtained.
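A minimal sketch of this property, with an invented two-topic model over a five-word vocabulary (all probabilities are illustrative; a real model would be fitted to a corpus with LDA):

```python
import numpy as np

# Toy topic model: each topic is a probability distribution over words,
# and a percept/document is a mixture of topics.
words = ["dog", "bark", "cute", "car", "engine"]
topic_word = np.array([
    [0.40, 0.30, 0.25, 0.03, 0.02],   # topic 0: animals
    [0.02, 0.03, 0.05, 0.50, 0.40],   # topic 1: vehicles
])

def word_probs(topic_mixture):
    """Appearance probability of each word under a mixture of topics."""
    return np.asarray(topic_mixture) @ topic_word

# A percept judged 80% "animal" / 20% "vehicle":
p = word_probs([0.8, 0.2])
most_likely = words[int(np.argmax(p))]
```

Once brain activity has been associated with topic labels as in step (C) below, these per-word probabilities are what allow the estimate to be verbalized.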
(C) In the topic model obtained in (B), the training data is replaced with the labels of the topics to which its morphemes belong, and those labels are associated with the brain activity output 14.
That is, training data (images, videos, etc.) is presented to the subject, and the resulting brain activity signals, such as EEG or fMRI signals, are detected. Each detected signal is associated with the labels of the topics to which the morphemes of the training data belong. In this association, a linear combination of labels may correspond to the brain activity signal for one training item, or conversely, a linear combination of brain activity signals may correspond to one label.
This association is preferably performed per subject. In that case, it may be performed on only part of the training data rather than all of it, partially omitting the association work.
(D) For new brain activity, a probability distribution over language descriptions, represented by the topic-model labels expressing perceptual semantic content, is obtained from the association obtained in (C).
New data (images, videos, etc.) is presented to the subject, and brain activity signals are detected using the same acquisition means as for the training data. The detected signal is compared with the signals obtained from the training data to determine which training item, or which mixture of training items, it most closely resembles. This comparison can be performed using, for example, the peak value of the cross-correlation between the brain activity signal for the new data and the brain activity signals for the training data as an index. This determination yields the probability distribution over language descriptions corresponding to the brain activity signal detected during presentation of the new data.
(E) From the probability distribution obtained in (D), highly probable perceptual semantic content is estimated.
In this embodiment, since the probability distribution over language descriptions is obtained in (D), sentences can be estimated by the same method as in LDA.
Here, as for the lists in (a) and (A), because the perceptual content evoked in stimulated subjects is expressed as language descriptions via these lists, it is desirable that they cover all of the corpus-derived semantic space, or a selected predetermined portion of it.
The present invention thus provides a technique for estimating arbitrary perceptual semantic content perceived by a subject from brain activity under relatively dynamic and complex audiovisual perception, such as television commercials (CMs). It makes it possible to estimate arbitrary semantic content from brain activity under natural perception of videos and the like; for example, whether a video work such as a TV commercial achieves its intended expressive effect can be evaluated quantitatively from brain activity.

The example shown in FIG. 2 illustrates estimation of perceptual semantic content from brain activity during viewing of a CM video. Its purpose is, for example, to answer rationally the question of how well the commercial induces a perception of "intimacy" in the viewer. For the presented CM video of FIG. 2, it shows the perceptual semantic content estimated from brain activity by procedures (a) to (e) above: the left column shows example CM clips presented to the subject, and the right column shows the perceptual semantic content estimated from the brain activity while each clip was viewed. The rows next to each clip list, from top to bottom for nouns, verbs, and adjectives, the words the subject is most likely perceiving.

Figure 2(a): A scene in which a daughter holds a mobile phone up to her mother and talks
(Nouns) male, female, single, neighborhood, parents' home, relative, elder, mother
(Verbs) visit, quit, date, get acquainted, bring along, meet, call on, lose (someone)
(Adjectives) intimate, kind, poor, very young, young

Figure 2(b): A scene in which a dog and its owner sit on a bench with a radio tower in view, looking at the scenery
(Nouns) female, male, elder, blonde, friend, girlfriend, mother, single
(Verbs) date, wear, talk, adore, ask, chat, meet, sit
(Adjectives) intimate, kind, very young, young, cute

Figure 2(c): A scene in which the dog bursts explosively through the center of the scene in (b)
(Nouns) face, habitual phrase, glasses, facial expression, self, appearance, tone of voice, seriousness
(Verbs) chat, hit, date, get angry, put on, wear, sit, swing around
(Adjectives) intimate, cute, kind, very young, wanted, scary

Figure 2(d): A scene in which the dog of (c) runs a product campaign
(Nouns) letter, font, logo, Gothic, alphabet, display
(Verbs) replace, write, append

It thus becomes possible to judge objectively whether these perceptual semantic contents, which reflect the viewer's brain activity, match the CM creator's intent.
In addition, sentences can be estimated by procedures (A) to (E) above.
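The per-part-of-speech word rankings of FIG. 2 can be sketched as follows. The vocabulary, the embeddings, and the `rank_by_pos` helper are hypothetical illustrations, assuming each word's score is the inner product of its embedding with a decoded semantic vector.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 50

# Hypothetical word embeddings grouped by part of speech; the patent ranks
# nouns, verbs, and adjectives separately, as in Fig. 2.
vocab = {
    "noun": {"mother": rng.standard_normal(dim), "logo": rng.standard_normal(dim)},
    "verb": {"meet": rng.standard_normal(dim), "write": rng.standard_normal(dim)},
    "adjective": {"intimate": rng.standard_normal(dim), "cute": rng.standard_normal(dim)},
}

# Decoded semantic-space representation of the subject's brain activity
# (a single mean vector here, for illustration).
decoded = rng.standard_normal(dim)

def rank_by_pos(vocab, decoded, top_k=8):
    """Return the top-k words per part of speech, ordered by inner product
    of each word's embedding with the decoded semantic representation."""
    out = {}
    for pos, words in vocab.items():
        ranked = sorted(words, key=lambda w: words[w] @ decoded, reverse=True)
        out[pos] = ranked[:top_k]
    return out

print(rank_by_pos(vocab, decoded))
```

With a realistic vocabulary of thousands of words per part of speech, the top-k lists would correspond to the word tables shown next to each clip.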

The example shown in FIG. 3 illustrates quantitative evaluation of a specific-impression time series from brain activity. Its purpose is to provide a quantitative index, for example of which of two videos, A or B, gives the viewer a stronger specific impression. From the time series of brain activity recorded while three 30-second CMs were viewed, the degree to which the viewer perceives a specific impression (in this case "cute") is estimated by checking whether that impression is a highly probable language description at each point in time. Across CM-1, a scene in which a high-school girl talks with a relative; CM-2, a scene of a board meeting; and CM-3, a scene of a pop idol practicing a dance, a comparatively strong response is obtained for CM-1.
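A time-series comparison like FIG. 3 can be sketched as below. The decoded trajectories, the embedding of the impression word, and the 1 Hz frame rate are all assumptions for illustration; the patent only requires scoring the impression word against the decoded semantic content over time.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n_frames = 50, 30  # a 30-second CM decoded at 1 Hz (illustrative)

# Hypothetical embedding of the target impression word, e.g. "cute".
cute = rng.standard_normal(dim)
cute /= np.linalg.norm(cute)

# Decoded semantic trajectories for three CMs: one semantic vector per time point.
cms = {name: rng.standard_normal((n_frames, dim)) for name in ("CM-1", "CM-2", "CM-3")}

def impression_timeseries(trajectory, word_vec):
    """Per-time-point probability index: inner product of each decoded
    semantic vector with the impression word's coordinates."""
    return trajectory @ word_vec

# A scalar summary (mean over time) lets the CMs be compared quantitatively.
summary = {name: float(impression_timeseries(t, cute).mean()) for name, t in cms.items()}
strongest = max(summary, key=summary.get)
print(strongest)
```

On real decoded data, the per-frame curve itself (rather than only its mean) would show when during the commercial the impression peaks.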

The present invention can be used widely for the prior evaluation of audiovisual materials (video, music, teaching materials, etc.) and, through the reading of perceptual and behavioral intentions, as a foundation for brain-machine interfaces.

DESCRIPTION OF SYMBOLS
1 Display device
2 Subject
3 Cranial nerve activity detector
4 Data processing device
5 Storage
6 Corpus data analysis device
7, 8 Storage
11 Stimulus
12 Subject
13 Language description
14 Brain activity output
15 Projection onto semantic space
16 Corpus

Claims (3)

1. A method for estimating the perceptual semantic content perceived by a subject by analyzing the subject's brain activity using a brain activity analysis apparatus comprising: information presenting means for presenting information serving as a stimulus to the subject; cranial nerve activity detecting means for detecting a cranial nerve activity signal of the subject induced by the stimulus; data processing means receiving as input a language description of the stimulus content and the output of the cranial nerve activity detecting means; semantic space information storage means readable by the data processing means; and training result information storage means readable and writable by the data processing means, the method comprising:
(1) presenting training information to the subject so as to give the subject a training stimulus, and inputting to the data processing means a language description of the perceptual content that the training stimulus induces in the subject, together with the output of the cranial nerve activity detecting means that detected the brain activity induced in the subject by the training stimulus;
(2) applying the semantic space stored in the semantic space information storage means, associating, in that semantic space, the semantic space representation of the training stimulus with the output of the cranial nerve activity detecting means, and storing the result of the association in the training result information storage means;
(3) presenting new information to the subject so as to give the subject a new stimulus, inputting to the data processing means the output of the cranial nerve activity detecting means that detected the brain activity induced in the subject by the new stimulus, and obtaining, for that output, a probability distribution over the semantic space representing perceptual semantic content from the association obtained in (2); and
(4) estimating highly probable perceptual semantic content from the probability distribution obtained in (3).
2. The method for estimating perceptual semantic content by analysis of brain activity according to claim 1, wherein, for a plurality of subjects, the association in (2) between the semantic space representation of the stimulus and the brain activity using the training information is performed on all or part of the training information for each subject to obtain a projection function onto the semantic space for each subject, and the association with positions in the semantic space is shifted uniformly for each subject in accordance with the projection function.
3. The method for estimating perceptual semantic content by analysis of brain activity according to claim 1 or 2, wherein, in estimating highly probable perceptual semantic content in (4), coordinates in the semantic space are found for a given arbitrary word, the inner product of those coordinates and the probability distribution obtained in (d) is taken, and the value of the inner product is used as the index of the probability.
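The per-subject projection functions of claim 2 can be sketched as follows. Fitting each subject's own regularized linear map onto a shared semantic space is one way to realize such a projection; the dimensions, the ridge estimator, and the subject labels are assumptions for this illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_train, n_vox, n_sem = 100, 300, 20

# Shared semantic representations of the training stimuli (same for all subjects).
sem_train = rng.standard_normal((n_train, n_sem))

def fit_projection(brain_train, sem_train, lam=10.0):
    """Per-subject projection function: a regularized linear map from that
    subject's brain activity into the shared semantic space."""
    n_vox = brain_train.shape[1]
    return np.linalg.solve(
        brain_train.T @ brain_train + lam * np.eye(n_vox),
        brain_train.T @ sem_train,
    )

# Each subject has different voxel responses to the same training stimuli.
subjects = {s: rng.standard_normal((n_train, n_vox)) for s in ("S1", "S2")}
projections = {s: fit_projection(b, sem_train) for s, b in subjects.items()}

# New brain activity from each subject is shifted uniformly into the common
# semantic space via that subject's own projection function.
new_activity = {s: rng.standard_normal(n_vox) for s in subjects}
common = {s: new_activity[s] @ projections[s] for s in subjects}
print({s: v.shape for s, v in common.items()})
```

Because every subject is mapped into the same semantic space, word scoring and probability estimation can then proceed identically for all subjects.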
PCT/JP2016/061645 2015-04-06 2016-04-05 Method for estimating perceptual semantic content by analysis of brain activity Ceased WO2016163556A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201680019204.7A CN107427250B (en) 2015-04-06 2016-04-05 Method for presuming perception semantic content through brain activity analysis and presumption
EP16776722.7A EP3281582A4 (en) 2015-04-06 2016-04-05 Method for estimating perceptual semantic content by analysis of brain activity
US15/564,071 US20180092567A1 (en) 2015-04-06 2016-04-05 Method for estimating perceptual semantic content by analysis of brain activity

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015-077694 2015-04-06
JP2015077694A JP6618702B2 (en) 2015-04-06 2015-04-06 Perceptual meaning content estimation device and perceptual meaning content estimation method by analyzing brain activity

Publications (1)

Publication Number Publication Date
WO2016163556A1 true WO2016163556A1 (en) 2016-10-13

Family

ID=57072256

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/061645 Ceased WO2016163556A1 (en) 2015-04-06 2016-04-05 Method for estimating perceptual semantic content by analysis of brain activity

Country Status (5)

Country Link
US (1) US20180092567A1 (en)
EP (1) EP3281582A4 (en)
JP (1) JP6618702B2 (en)
CN (1) CN107427250B (en)
WO (1) WO2016163556A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10856815B2 (en) * 2015-10-23 2020-12-08 Siemens Medical Solutions Usa, Inc. Generating natural language representations of mental content from functional brain images
US20180177619A1 (en) * 2016-12-22 2018-06-28 California Institute Of Technology Mixed variable decoding for neural prosthetics
JP7069716B2 (en) * 2017-12-28 2022-05-18 株式会社リコー Biological function measurement and analysis system, biological function measurement and analysis program, and biological function measurement and analysis method
JP7075045B2 (en) * 2018-03-30 2022-05-25 国立研究開発法人情報通信研究機構 Estimating system and estimation method
JP6872515B2 (en) * 2018-06-27 2021-05-19 株式会社人総研 Visual approach aptitude test system
JP6902505B2 (en) * 2018-06-27 2021-07-14 株式会社人総研 Visual approach aptitude test system
CN110687999B (en) * 2018-07-04 2024-11-05 刘彬 A method and device for semantic processing of electroencephalogram signals
WO2021035067A1 (en) * 2019-08-20 2021-02-25 The Trustees Of Columbia University In The City Of New York Measuring language proficiency from electroencephelography data
CN111012342B (en) * 2019-11-01 2022-08-02 天津大学 Audio-visual dual-channel competition mechanism brain-computer interface method based on P300
CN113143293B (en) * 2021-04-12 2023-04-07 天津大学 Continuous speech envelope nerve entrainment extraction method based on electroencephalogram source imaging
CN113974658B (en) * 2021-10-28 2024-01-26 天津大学 Semantic visual image classification method and device based on EEG time-sharing frequency spectrum Riemann

Citations (3)

Publication number Priority date Publication date Assignee Title
WO2007016149A2 (en) * 2005-08-02 2007-02-08 Brainscope Company, Inc. Automatic brain function assessment apparatus and method
US20080208072A1 (en) * 2004-08-30 2008-08-28 Fadem Kalford C Biopotential Waveform Data Fusion Analysis and Classification Method
US20130184558A1 (en) * 2009-03-04 2013-07-18 The Regents Of The University Of California Apparatus and method for decoding sensory and cognitive information from brain activity

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN103077205A (en) * 2012-12-27 2013-05-01 浙江大学 Method for carrying out semantic voice search by sound stimulation induced ERP (event related potential)
US9646248B1 (en) * 2014-07-23 2017-05-09 Hrl Laboratories, Llc Mapping across domains to extract conceptual knowledge representation from neural systems
JP6662644B2 (en) * 2016-01-18 2020-03-11 国立研究開発法人情報通信研究機構 Viewing material evaluation method, viewing material evaluation system, and program
US11002814B2 (en) * 2017-10-25 2021-05-11 Siemens Medical Solutions Usa, Inc. Decoding from brain imaging data of individual subjects by using additional imaging data from other subjects

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
US20080208072A1 (en) * 2004-08-30 2008-08-28 Fadem Kalford C Biopotential Waveform Data Fusion Analysis and Classification Method
WO2007016149A2 (en) * 2005-08-02 2007-02-08 Brainscope Company, Inc. Automatic brain function assessment apparatus and method
US20130184558A1 (en) * 2009-03-04 2013-07-18 The Regents Of The University Of California Apparatus and method for decoding sensory and cognitive information from brain activity

Non-Patent Citations (1)

Title
See also references of EP3281582A4 *

Also Published As

Publication number Publication date
EP3281582A4 (en) 2019-01-02
EP3281582A1 (en) 2018-02-14
JP6618702B2 (en) 2019-12-11
US20180092567A1 (en) 2018-04-05
CN107427250B (en) 2021-01-05
JP2016195716A (en) 2016-11-24
CN107427250A (en) 2017-12-01

Similar Documents

Publication Publication Date Title
JP6618702B2 (en) Perceptual meaning content estimation device and perceptual meaning content estimation method by analyzing brain activity
JP2016195716A5 (en)
Wingenbach et al. Validation of the Amsterdam Dynamic Facial Expression Set–Bath Intensity Variations (ADFES-BIV): A set of videos expressing low, intermediate, and high intensity emotions
Soleymani et al. Multimodal emotion recognition in response to videos
Woźniak et al. Prioritization of arbitrary faces associated to self: An EEG study
Soleymani et al. Analysis of EEG signals and facial expressions for continuous emotion detection
Valstar et al. Induced disgust, happiness and surprise: an addition to the mmi facial expression database
Siddiqui et al. A survey on databases for multimodal emotion recognition and an introduction to the VIRI (visible and InfraRed image) database
Klimovich-Gray et al. Balancing prediction and sensory input in speech comprehension: the spatiotemporal dynamics of word recognition in context
Sarma et al. Review on stimuli presentation for affect analysis based on EEG
US12154313B2 (en) Generative neural network for synthesis of faces and behaviors
Gupta et al. A quality adaptive multimodal affect recognition system for user-centric multimedia indexing
Kwon et al. Detection of nonverbal synchronization through phase difference in human communication
Saadon et al. Real-time emotion detection by quantitative facial motion analysis
Delaherche et al. Multimodal coordination: exploring relevant features and measures
Kamaruddin et al. Pornography addiction detection based on neurophysiological computational approach
Verhoef et al. Towards affective computing that works for everyone
Steinert et al. Evaluation of an engagement-aware recommender system for people with dementia
Sinha Affective computing and emotion-sensing technology for emotion recognition in mood disorders
Saunders et al. Measuring information acquisition from sensory input using automated scoring of natural-language descriptions
Buck et al. Measuring the dynamic stream of display: Spontaneous and intentional facial expression and communication.
Chen et al. The gender-based facing bias in 3-D biological motion perception
Siniukov et al. SEMPI: A Database for Understanding Social Engagement in Video-Mediated Multiparty Interaction
Holler Experimental methods in co-speech gesture research
Gunes et al. Automatic measurement of affect in dimensional and continuous spaces: Why, what, and how?

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16776722

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15564071

Country of ref document: US

REEP Request for entry into the european phase

Ref document number: 2016776722

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE