CN114466618A - Pressure response mode determination system and method, learning device and method, program, and learning completion model - Google Patents
- Publication number
- CN114466618A (application number CN202080068209.5A)
- Authority
- CN
- China
- Prior art keywords
- learning
- response
- subject
- feature amount
- face image
- Prior art date
- Legal status
- Granted
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/01—Measuring temperature of body parts ; Diagnostic temperature sensing, e.g. for malignant or inflamed tissue
- A61B5/015—By temperature mapping of body part
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording for evaluating the cardiovascular system, e.g. pulse, heart rate, blood pressure or blood flow
- A61B5/02028—Determining haemodynamic parameters not otherwise provided for, e.g. cardiac contractility or left ventricular ejection fraction
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/02—Detecting, measuring or recording for evaluating the cardiovascular system, e.g. pulse, heart rate, blood pressure or blood flow
- A61B5/0205—Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
- A61B5/02055—Simultaneously evaluating both cardiovascular condition and temperature
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Surgery (AREA)
- Molecular Biology (AREA)
- Cardiology (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Physiology (AREA)
- Artificial Intelligence (AREA)
- Psychiatry (AREA)
- Fuzzy Systems (AREA)
- Hospice & Palliative Care (AREA)
- Signal Processing (AREA)
- Evolutionary Computation (AREA)
- Child & Adolescent Psychology (AREA)
- Developmental Disabilities (AREA)
- Educational Technology (AREA)
- Mathematical Physics (AREA)
- Psychology (AREA)
- Social Psychology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Pulmonology (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
- Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Image Analysis (AREA)
Abstract
The present invention provides a system capable of determining the stress coping mode of a subject in a non-contact state. The system comprises: a biological information acquisition unit that acquires biological information of the subject in a non-contact state; and a determination unit that determines the stress coping mode of the subject based on the biological information and a predetermined response pattern. The response pattern is determined by hemodynamic parameters.
Description
Technical Field
The present invention relates to a technique for determining a stress response mode of a subject in a non-contact state.
Background
Techniques for grasping the stress state of a subject are known. For example, Patent Document 1 describes a technique for estimating psychological changes in a subject by measuring the amount of heat radiated from the face.
Patent Document 2 describes a technique for measuring the level of a subject's psychological state based on face image information. With these techniques it is possible to detect, in a non-contact state, that some psychological change has occurred in a subject, or to analyze the subject's psychological state quantitatively in a non-contact state.
However, the techniques of Patent Documents 1 and 2 cannot grasp the type of stress felt by the subject, and therefore cannot analyze the nature of that stress.
Documents of the prior art
Patent document
Patent Document 1: Japanese Laid-Open Patent Publication No. 6-54836
Patent Document 2: Japanese Laid-Open Patent Publication No. 2007-68620
Disclosure of Invention
Technical problem to be solved by the invention
It is known that, when faced with a stress stimulus, the human cardiovascular system exhibits characteristic response patterns that serve to satisfy the metabolic demands of the body's various tissues. Specifically, there is a pattern indicating active coping with the stress, a pattern indicating passive coping, and a pattern indicating no specific coping. These patterns are referred to as "stress coping modes". When the active coping pattern occurs, the subject can be estimated to be in a good stress state; conversely, when the passive coping pattern occurs, the subject can be estimated to be in a poor stress state.
That is, by determining the subject's stress coping mode, the type of stress felt by the subject can be grasped.
The present invention provides a stress coping mode determination system, determination method, learning device, learning method, a program for realizing these on a computer, and a learned model, each capable of determining the stress coping mode of a subject in a non-contact state.
Solution for solving the above technical problem
In order to solve the above-described problems, a pressure coping method determination system according to claim 1 is characterized by comprising: a biological information acquisition unit that acquires biological information of a subject in a non-contact state; and a determination unit configured to determine a stress response mode of the subject based on the biological information and a predetermined response pattern, the response pattern being determined by the hemodynamic parameter.
The pressure response method determination system acquires biological information of a subject in a non-contact state, and determines a pressure response method of the subject in the non-contact state based on the biological information and a response pattern specified by a hemodynamic parameter.
The determination system according to claim 2 is characterized in that, in the system according to claim 1, the hemodynamic parameters include a plurality of parameters selected from among mean blood pressure, heart rate, cardiac output, stroke volume, and total peripheral vascular resistance.
"Hemodynamics" refers to the branch of circulatory physiology that takes blood circulation as its subject, applying mechanics, elasticity theory, and fluid dynamics to the study of biological systems.
Specifically, it examines intracardiac pressure, pulsation, cardiac work, stroke volume, the elasticity of blood vessels and heart muscle, pulse rate, blood flow velocity, blood viscosity, and the like. The term "hemodynamic parameters" as used here therefore refers to such parameters.
This determination system acquires the subject's biological information in a non-contact state and determines the subject's stress coping mode from that information and a response pattern determined by a plurality of parameters selected from among mean blood pressure, heart rate, cardiac output, stroke volume, and total peripheral vascular resistance. Conventionally, such hemodynamic parameters are typically measured with a continuous sphygmomanometer.
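As one illustration of how a response pattern might be read off from hemodynamic parameters, the sketch below classifies the three coping patterns from baseline-relative changes in cardiac output and total peripheral vascular resistance. The thresholds and the decision rule are hypothetical, not taken from the patent:

```python
# Illustrative sketch only: classify a stress coping pattern from
# baseline-relative (fractional) changes in two hemodynamic parameters.
# The 0.1 thresholds and the rule itself are hypothetical.
def classify_response_pattern(delta_cardiac_output: float,
                              delta_total_peripheral_resistance: float) -> str:
    """Return 'active coping', 'passive coping', or 'no coping'."""
    if delta_cardiac_output > 0.1 and delta_total_peripheral_resistance < 0:
        return "active coping"   # cardiac-output-driven response
    if delta_total_peripheral_resistance > 0.1:
        return "passive coping"  # vascular-resistance-driven response
    return "no coping"           # no marked cardiovascular change
```

Under this rule, for instance, a rise in cardiac output accompanied by a fall in peripheral resistance would be classified as active coping.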
The pressure response method determination system according to claim 3 is characterized in that, in the pressure response method determination system according to claim 2, the biological information is a face image.
The pressure response mode determination system acquires a face image of a subject in a non-contact state, and determines a pressure response mode of the subject based on the face image and the response pattern.
The pressure response type determination system according to claim 4 is characterized in that, in the pressure response type determination system according to claim 3, the face image is a face thermal image or a face visual image.
The pressure coping method determining system acquires a facial thermal image or a facial visual image of a subject in a non-contact state, and determines the pressure coping method of the subject based on the facial thermal image or the facial visual image and the response pattern.
Here, the "facial visible image" is an image of the subject's face captured with an ordinary, widely available camera, that is, a device that forms and records an image through an optical system; a color image is preferable. The "facial thermal image" is an image captured with an infrared thermal imager, obtained by analyzing the infrared radiation emitted from the subject's face and mapping its heat distribution.
In the pressure coping method determination system according to claim 5, in the pressure coping method determination system according to claim 3 or 4, the determination unit determines the pressure coping method of the subject by observing a pressure response of a specific portion of the face included in the face image.
The pressure response method determination system determines the pressure response method of the subject based on the pressure response of the specific part of the face of the subject and the response pattern.
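A minimal sketch of observing the stress response of a specific facial region follows. The region coordinates and the use of a mean-temperature change versus a resting baseline are assumptions; the patent states only that a specific part of the face included in the face image is observed:

```python
import numpy as np

# Hedged sketch: quantify the stress response of one facial region as
# the change in its mean value (e.g. temperature in a thermal image)
# relative to a baseline image. Region placement is hypothetical.
def region_response(thermal_img: np.ndarray, baseline_img: np.ndarray,
                    top: int, left: int, size: int) -> float:
    """Mean change of a square facial region versus the baseline image."""
    roi = thermal_img[top:top + size, left:left + size]
    base = baseline_img[top:top + size, left:left + size]
    return float(roi.mean() - base.mean())
```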
The pressure countermeasure determination system according to claim 6 is characterized in that the response mode includes three modes including "active countermeasure", "passive countermeasure", and "non-countermeasure".
The pressure response method determination system acquires biological information of a subject in a non-contact state, and determines whether the pressure response method of the subject indicates a response mode of "active response", "passive response", and "no response" based on the biological information.
The pressure coping method determination system according to claim 7 is characterized in that in the pressure coping method determination system according to claim 6, the determination unit has a determination feature amount storage unit that stores a spatial feature amount corresponding to "active handling", a spatial feature amount corresponding to "passive handling", and a spatial feature amount corresponding to "no handling", and determines which of the "active handling", "passive handling", and "no handling" response mode is indicated in the pressure coping method based on the biological information and each of the spatial feature amounts stored in the determination feature amount storage unit.
The pressure coping method determination system stores a spatial feature amount corresponding to "active coping", a spatial feature amount corresponding to "passive coping", and a spatial feature amount corresponding to "no coping", and determines whether the pressure coping method of the subject is a method showing a response pattern among "active coping", "passive coping", and "no coping" based on the biological information of the subject and each spatial feature amount.
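One plausible (hypothetical) realization of this comparison against the three stored spatial feature amounts is nearest-centroid matching; the stored vectors below are invented placeholders, not values from the patent:

```python
import numpy as np

# Hypothetical determination step: match an extracted spatial feature
# vector to the closest stored feature amount. Stored values are
# placeholders for illustration only.
STORED_FEATURES = {
    "active coping":  np.array([2.35, 0.80]),
    "passive coping": np.array([2.10, 0.55]),
    "no coping":      np.array([1.90, 0.30]),
}

def determine_coping_mode(feature: np.ndarray) -> str:
    """Return the coping mode whose stored feature amount is closest."""
    return min(STORED_FEATURES,
               key=lambda mode: np.linalg.norm(feature - STORED_FEATURES[mode]))
```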
A pressure coping method determination system according to claim 8 is the pressure coping method determination system according to claim 7, wherein the spatial feature quantity stored in the feature quantity storage unit for determination is a spatial feature quantity extracted by a machine learning unit, and the machine learning unit includes: a learning data storage unit for storing a plurality of learning face images labeled with labels corresponding to "active response", "passive response", and "no response", respectively; a feature value extraction unit that extracts a spatial feature value of the face image from the face image for learning using a learned model; and a feature amount learning unit configured to change a network parameter of the learned model so that the spatial feature amount obtained by the feature amount extraction unit is extracted with high accuracy, based on a relationship between an extraction result obtained by the feature amount extraction unit and a label attached to the face image for learning as an extraction target.
In this pressure coping method determination system, the spatial feature amount corresponding to "active coping", the spatial feature amount corresponding to "passive coping", and the spatial feature amount corresponding to "no coping" are extracted by the machine learning unit.
The machine learning unit stores a plurality of learning face images labeled with labels corresponding to "active response", "passive response", and "no response", extracts a spatial feature amount of a face image of a subject from the learning face images using a learned model, and changes network parameters of the learned model so that the accuracy of extracting the spatial feature amount of the face image of the subject is improved based on a relationship between an extraction result and the label labeled to the learning face image to be extracted.
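The parameter-update step described above can be sketched as a single gradient step on a labeled learning image. A tiny softmax classifier stands in here for the unspecified learned model, so everything below is an assumption rather than the patented implementation:

```python
import numpy as np

# Minimal sketch of the feature amount learning step: network
# parameters W are nudged so the output agrees better with the label
# attached to one learning face image (cross-entropy gradient step).
LABELS = ["active coping", "passive coping", "no coping"]

def training_step(W: np.ndarray, feature: np.ndarray, label: str,
                  lr: float = 0.1) -> np.ndarray:
    """One gradient step; W has shape (3, feature_dim)."""
    logits = W @ feature
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    target = np.zeros(len(LABELS))
    target[LABELS.index(label)] = 1.0
    grad = np.outer(probs - target, feature)  # d(loss)/dW
    return W - lr * grad
```

After one step on an "active coping" example, the model's score for that label rises relative to the others, which is the sense in which extraction accuracy "is improved" by the relationship between result and label.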
Further, the stress countermeasure determination system of claim 9 is characterized in that, in the stress countermeasure determination system of claim 7 or 8, the spatial feature amount is a fractal dimension calculated based on a face image of the subject.
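The fractal dimension of a face image can be estimated, for example, by box counting; the patent does not specify the estimator, so the following is only a sketch (it assumes a binarized image with at least one foreground pixel):

```python
import numpy as np

# Box-counting estimate of the fractal dimension of a binarized image:
# count occupied boxes at several box sizes, then fit the slope of
# log(count) against log(1/box size).
def fractal_dimension(binary_img: np.ndarray) -> float:
    sizes = [s for s in (2, 4, 8, 16, 32) if s <= min(binary_img.shape) // 2]
    counts = []
    for s in sizes:
        h = (binary_img.shape[0] // s) * s
        w = (binary_img.shape[1] // s) * s
        blocks = binary_img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())  # occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)
```

A completely filled region yields a dimension of 2, and sparser, more irregular patterns yield values between 1 and 2, which is what makes the quantity usable as a spatial feature amount.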
The program according to claim 10 is a program for causing a computer to function as a means for determining a subject's stress coping mode, and includes: a determination feature amount storage step of storing a spatial feature amount corresponding to "active coping", a spatial feature amount corresponding to "passive coping", and a spatial feature amount corresponding to "no coping"; and a determination step of determining, based on the subject's face image and each spatial feature amount stored in the determination feature amount storage step, which of the response patterns "active coping", "passive coping", and "no coping" the subject's stress coping mode shows, the response patterns being determined by hemodynamic parameters.
When this program is installed and executed on one computer, or on a plurality of computers operating in cooperation, the resulting system functions to determine, from the spatial feature amounts corresponding to "active coping", "passive coping", and "no coping" and the subject's face image, which of these three response patterns the subject's stress coping mode shows.
The program according to claim 11 is characterized in that the program according to claim 10 includes: a learning data storage step of storing a plurality of learning face images labeled with labels corresponding to "active response", "passive response", and "no response", respectively; a feature amount extraction step of extracting a spatial feature amount of the face image for learning using the learned model; and a learning step of changing a network parameter of the learned model so that the extraction accuracy of the feature amount obtained in the feature amount extraction step is improved, based on a relationship between the extraction result obtained in the feature amount extraction step and a label attached to the face image for learning as an extraction target, and the determination feature amount storage step is a step of storing the spatial feature amount extracted in the feature amount extraction step.
When this program is installed and executed on one computer, or on a plurality of computers operating in cooperation, the resulting system functions as follows: it stores a plurality of learning face images labeled to correspond to "active coping", "passive coping", and "no coping" respectively, extracts a spatial feature amount from each learning face image using a learned model, changes the network parameters of the learned model so as to raise the extraction accuracy of the spatial feature amount based on the relationship between the extraction result and the label attached to the learning face image being processed, and stores the extracted spatial feature amounts.
Further, the program of claim 12 is characterized in that, in the program of claim 10 or 11, the spatial feature quantity is a fractal dimension calculated based on a face image of the subject.
The pressure coping method according to claim 13 is characterized by comprising: a biological information acquisition step of acquiring biological information of a subject in a non-contact state; a determination step of determining a stress response mode of the subject based on the biological information and a predetermined response pattern specified by a hemodynamic parameter.
The method acquires biological information of a subject in a non-contact state, and determines a stress response mode of the subject based on a response pattern specified by the biological information and a hemodynamic parameter.
The learning device according to claim 14 is characterized by comprising: a learning data storage unit that stores a plurality of learning face images labeled with labels corresponding to response patterns specified by hemodynamic parameters; a feature amount extraction unit that extracts a spatial feature amount of a face image of a subject from the face image for learning by using a learned model; and a feature amount learning unit configured to change a network parameter of the learned model so that the spatial feature amount obtained by the feature amount extraction unit is extracted with high accuracy, based on a relationship between an extraction result obtained by the feature amount extraction unit and a label attached to the face image for learning as an extraction target.
The learning device stores a plurality of learning face images labeled with labels corresponding to response patterns specified by hemodynamic parameters, extracts spatial feature quantities of a face image of a subject from the learning face images using a learned model, and changes network parameters of the learned model so as to increase the accuracy of extracting the spatial feature quantities of the face image of the subject based on the relationship between the extraction result and the label labeled to the learning face image to be extracted.
Further, the learning apparatus of claim 15 is characterized in that, in the learning apparatus of claim 14, the spatial feature amount is a fractal dimension calculated based on a face image of the subject.
The learning method of claim 16 is characterized by including: a learning data storage step of storing a plurality of learning face images labeled with labels corresponding to response patterns determined by hemodynamic parameters; a feature amount extraction step of extracting a spatial feature amount of a face image of a subject from the face image for learning by using a learned model; and a feature amount learning step of changing a network parameter of the learned model so that the spatial feature amount obtained by the feature amount extraction unit is extracted with high accuracy, based on a relationship between an extraction result obtained by the feature amount extraction unit and a label attached to the face image for learning as an extraction target.
The learning method stores a plurality of learning face images labeled with labels corresponding to response patterns specified by hemodynamic parameters, extracts spatial feature quantities of a face image of a subject from the learning face images using a learned model, and changes network parameters of the learned model so as to increase the accuracy of extracting the spatial feature quantities of the face image of the subject based on the relationship between the extraction result and the label labeled with the learning face image as the extraction target.
Further, the learning method of claim 17 is characterized in that, in the learning method of claim 16, the spatial feature amount is a fractal dimension calculated based on a face image of the subject.
Further, the program according to claim 18 is a program for causing a computer to function as means for learning a spatial feature amount of a face image, the program including: a learning data storage step of storing a plurality of learning face images labeled with labels corresponding to response patterns determined by hemodynamic parameters; a feature amount extraction step of extracting a spatial feature amount of a face image of a subject from the face image for learning by using a learned model; and a feature amount learning step of changing a network parameter of the learned model so that the spatial feature amount obtained in the feature amount extraction step is extracted with high accuracy, based on a relationship between the extraction result obtained in the feature amount extraction step and a label attached to the face image for learning as an extraction target.
The program is installed in one or a plurality of computers that work in cooperation with each other and executed, so that a system constituted by the one or more computers functions as a learning device as follows: a plurality of learning face images to which labels are attached in correspondence with response patterns specified by hemodynamic parameters are stored, a spatial feature amount of a face image of a subject is extracted from these learning face images using a learning model, and network parameters of the learning model are changed so that the accuracy of extracting the spatial feature amount of the face image of the subject is improved based on the relationship between the extraction result and the label attached to the learning face image to be extracted.
Further, the program of claim 19 is characterized in that, in the program of claim 18, the spatial feature amount is a fractal dimension calculated based on a face image of the subject.
The learned model according to claim 20 is generated by using a plurality of face images for learning labeled with labels corresponding to response patterns specified by hemodynamic parameters as training data and performing machine learning on spatial feature values of the face image of the subject.
The learned model takes a face image of a subject as an input, and outputs a spatial feature amount of the face image of the subject.
The learned model according to claim 21 is characterized in that, in the learned model according to claim 20, the spatial feature amount is a fractal dimension calculated based on the face image of the subject.
Effects of the invention
According to the pressure coping method determination system of claim 1, since the biological information of the subject is acquired in a non-contact state and the pressure coping method of the subject is determined in a non-contact state based on the biological information and the response pattern specified by the hemodynamic parameter, the type of pressure felt by the subject can be grasped without imposing restrictions on the subject.
In general, the term "stress" is widely used; in psychology, "stress" is a general term for the "nonspecific response of a living body to an external stimulus (stressor) acting on it" (Dr. Hans Selye).
Many stressors accompany human life, and in recent years social-environmental stressors in particular have become numerous; these exert various influences on the living body, and it is medically known that they can in some cases cause disease.
However, it is also known that not all stress is bad for human existence: even when stress is applied, the living body is activated in trying to respond to it, which can have a beneficial effect. From this viewpoint, Selye articulated the idea that stress can be good stress (eustress) or bad stress (distress), depending on differences in the condition of the recipient organism, the degree of the stress, and so on.
Therefore, when considering stress from this psychological viewpoint, it is necessary not to treat all stress as bad but to take into account the classification of stress types in the living body described above; from the standpoint of sociality and productivity, stress management in modern social activities, for example how to improve production efficiency and work efficiency, can then be considered positively.
However, as noted above, although techniques for grasping the stress state of a subject are known, conventional methods can grasp the stress felt by the subject only roughly, and cannot analyze its nature according to the type of stress felt.
According to the present invention, by classifying and grasping the type of stress for each subject, the relationship between stressors and the subject's stress response can be grasped in more detail. As a result, the relationship between humans and the social environments and other factors that act as stressors can be analyzed and studied more accurately; this can be applied to problem solving in various social settings, and to improving production efficiency and promoting productive activity in various industrial fields.
According to the determination system of claim 2, the subject's stress coping mode can be determined in detail and accurately from the subject's biological information and a response pattern determined from a plurality of parameters selected from among mean blood pressure, heart rate, cardiac output, stroke volume, and total peripheral vascular resistance.
According to the determination system of claim 3, the subject's face image is acquired in a non-contact state and, unlike the related art, no continuous sphygmomanometer is required; the subject's stress coping mode can therefore be determined quickly from the face image and the response pattern determined by the hemodynamic parameters, without restraining the subject or imposing a physical burden.
According to the stress countermeasure determination system of claim 4, the facial thermal image or the facial visual image of the subject can be acquired in a non-contact state, and the stress countermeasure of the subject can be determined from psychophysiology on the basis of the facial thermal image or the facial visual image and the response pattern determined from the hemodynamic parameters.
According to the stress countermeasure determination system of claim 5, the stress countermeasure of the subject can be easily and accurately determined based on the response pattern specified by the state of the specific part of the face of the subject and the hemodynamic parameters.
According to the pressure response method determination system of claim 6, it is possible to acquire the biological information of the subject in a non-contact state, and determine whether the pressure response method of the subject is a method showing any one of the "active response", the "passive response", and the "no response" based on the biological information.
Therefore, the pressure response method can be classified into three response patterns of "active response", "passive response", and "no response", and analysis can be performed based on the response patterns, and the analysis result can be applied to personnel management business, quality management business, and the like in various production fields, which contributes to improvement of quality of various businesses.
According to the pressure coping method determination system of claim 7, the spatial feature amount corresponding to "active handling", the spatial feature amount corresponding to "passive handling", and the spatial feature amount corresponding to "no handling" can be stored, and it is determined whether the pressure coping method of the subject is a method showing any one of "active handling", "passive handling", and "no handling" based on the biological information of the subject and each spatial feature amount.
According to the stress coping method determination system of claim 8, the network parameters of the learned model can be changed so that the accuracy of extracting the spatial feature amount of the face image of the subject can be improved.
According to the pressure response method determination system of claim 9, since the spatial feature value can be digitized with higher accuracy using the fractal dimension, the pressure response method of the subject can be determined with higher accuracy in a non-contact state.
According to the program of claim 10, a system can be realized using one computer or a plurality of computers operating in cooperation that determines whether the stress coping method of the subject is "active coping", "passive coping", or "no coping" based on the spatial feature amounts corresponding to each of these coping methods and the face image of the subject.
According to the program of claim 11, a system can be realized using one computer or a plurality of computers operating in cooperation that stores a plurality of learning face images labeled "active coping", "passive coping", or "no coping", extracts the spatial feature amounts of the learning face images using a learning model, changes the network parameters of the learning model so as to improve the extraction accuracy based on the relationship between each extraction result and the label of the learning face image concerned, and stores the extracted spatial feature amounts.
According to the program of claim 12, the spatial feature amount can be quantified more accurately by using the fractal dimension, so a system that determines the stress coping method of the subject more accurately in a non-contact state can be realized using one computer or a plurality of computers operating in cooperation.
According to the stress coping method determination method of claim 13, the biological information of the subject is acquired in a non-contact state, and the stress coping method of the subject is determined, also in a non-contact state, based on that biological information and the response patterns specified by the hemodynamic parameters, so the type of stress felt by the subject can be grasped without imposing restraints on the subject.
According to the learning device of claim 14, the network parameters of the learned model can be changed so as to improve the accuracy of extracting the spatial feature amount of the face image of the subject.
According to the learning device of claim 15, the spatial feature amount can be quantified more accurately by using the fractal dimension, so the network parameters of the learned model can be changed so as to further improve the accuracy of extracting the spatial feature amount of the face image of the subject.
According to the learning method of claim 16, the network parameters of the learned model can be changed so as to improve the accuracy of extracting the spatial feature amount of the face image of the subject.
According to the learning method of claim 17, the spatial feature amount can be quantified more accurately by using the fractal dimension, so the network parameters of the learned model can be changed so as to further improve the accuracy of extracting the spatial feature amount of the face image of the subject.
According to the program of claim 18, by installing and executing the program in one computer or a plurality of computers operating in cooperation, a learning device can be realized that changes the network parameters of the learned model so as to improve the accuracy of extracting the spatial feature amount of the face image of the subject.
According to the program of claim 19, the spatial feature amount can be quantified more accurately by using the fractal dimension, so by installing and executing the program in one computer or a plurality of computers operating in cooperation, the network parameters of the learned model can be changed so as to further improve the accuracy of extracting the spatial feature amount of the face image of the subject.
According to the learned model of claim 20, the spatial feature amount of the face image of the subject can be extracted by inputting the face image of the subject to the learned model.
According to the learned model of claim 21, the spatial feature amount can be quantified more accurately by using the fractal dimension, so the spatial feature amount of the face image of the subject can be extracted more accurately by inputting the face image of the subject to the learned model.
Drawings
Fig. 1 is a block diagram of an embodiment of a pressure coping method determination system according to the present invention.
Fig. 2(A) is an explanatory diagram conceptually illustrating the group of learning face images labeled "active coping". Fig. 2(B) is an explanatory diagram conceptually illustrating the group of learning face images labeled "passive coping". Fig. 2(C) is an explanatory diagram conceptually illustrating the group of learning face images labeled "no coping".
Fig. 3 is a flowchart showing the processing contents of a determination device constituting the pressure coping system determination system of fig. 1.
Fig. 4 is a flowchart showing the processing contents of the learning device constituting the pressure coping system determination system of fig. 1.
Fig. 5 is a table showing the hemodynamic pattern responses reported in the prior study referenced in Experimental Example 1.
Fig. 6 is a conceptual diagram illustrating a measurement system.
Fig. 7 is a conceptual diagram illustrating the mirror-drawing task.
Fig. 8 is a graph showing time-series changes in MBP (mean blood pressure).
Fig. 9 is a schematic diagram showing the structure of CNN (convolutional neural network).
Fig. 10 is a table showing the filter size, stride, and number of filters of the convolutional layers.
Fig. 11 is a table showing the filter size and stride of the pooling layers.
Fig. 12 is a diagram showing the facial feature maps of each subject for active coping, passive coping, and no coping, shown in comparison with the corresponding thermal images.
Fig. 13 is a conceptual diagram showing an experimental protocol of experimental example 2.
Fig. 14 is a graph showing the relationship between the degree of preference and the degree of concentration.
Fig. 15 is a diagram showing a state of arrangement of electroencephalogram electrodes to a subject and a measurement system.
Fig. 16 is a table showing the change of the multi-faceted emotion scale for the "positive" content.
Fig. 17 is a table showing the change of the multi-faceted emotion scale for the "negative" content.
Fig. 18 is a table showing the change of the multi-faceted emotion scale for the "horror (concentration)" content.
Fig. 19 is a table showing the change of the multi-faceted emotion scale for the "horror (non-concentration)" content.
Fig. 20 is a table showing the variation of the subjective psychological index for the "positive" content.
Fig. 21 is a graph showing the variation of the subjective psychological index for the "negative" content.
Fig. 22 is a table showing the change of the subjective psychological index for the "horror (concentration)" content.
Fig. 23 is a graph showing the change of the subjective psychological index for the "horror (non-concentration)" content.
Fig. 24 is a table showing time series variations of physiological indices viewed for "positive" and "negative" content.
Fig. 25 is a table showing the evaluation of the physiological index for each content.
Fig. 26 is a graph showing time-series changes in physiological indices viewed for "horror (concentration)" and "horror (non-concentration)" contents.
Fig. 27 is a graph showing a relationship between preference for TV content and viewing manner.
Fig. 28 is a conceptual diagram showing an experimental protocol of experimental example 3.
Fig. 29 is a conceptual diagram showing an experimental protocol.
Fig. 30 is a table showing the structure of the neural networks for estimating the "excitation-sedation" state, the "stress coping pattern", and "preference".
Fig. 31 is a diagram illustrating a method of extracting a feature vector.
Fig. 32 is a table showing the evaluation of the subjective psychological index for each content.
Fig. 33 is a table showing the evaluation of the physiological index for each content.
Fig. 34 is a table showing positive discrimination rates of preference and viewing manner for TV contents.
Fig. 35 is a graph showing positive discrimination rates of preference and viewing style for TV content.
Fig. 36 is a graph showing temporal changes of the measured value and the estimated value of the "excitation-sedation" state.
Fig. 37 is a graph showing the estimation error of the "excitation-sedation" state.
Fig. 38 is a graph showing the relationship between the measured value and the estimated value of the "excitation-sedation" state.
Fig. 39 is a flowchart illustrating a method of finding a fractal dimension as a spatial feature quantity.
Fig. 40 is an explanatory diagram showing an embodiment of the clustering process in fig. 39.
Fig. 41 is an explanatory diagram showing an example of the image extraction processing and the edge extraction processing in fig. 39.
Fig. 42 is an explanatory diagram showing an embodiment of the fractal resolution process in fig. 39.
Detailed Description
Hereinafter, an embodiment of the present invention will be described with reference to the drawings.
[ constitution ]
The stress coping method determination system 100 according to the embodiment shown in fig. 1 includes a biological information acquisition device (biological information acquisition unit) 110, a determination device (determination unit) 120, and a learning device (machine learning unit) 130.
The biological information acquisition device 110 is a device for acquiring biological information of the subject P in a non-contact state.
A face image IF can be cited as the most suitable example of such biological information. In the following description, the case of using the face image IF as the biological information will be described.
The face image IF may be a face thermal image or a face visual image. In the case where the face image IF is a facial thermal image, an infrared thermal imager is used as the biological information acquisition device 110. When the face image IF is a visible face image, a so-called camera as a visible image capturing device is used as the biological information acquisition device 110.
Here, a "visible face image" is a color image of the face of the subject captured with an ordinary camera, i.e., a widely used imaging device that forms an image through an optical system. A "facial thermal image" is an image captured with an infrared thermal imager, which analyzes the infrared radiation emitted from the face of the subject and renders the heat distribution as a map.
A camera forms its image from visible light (wavelengths of 380 nm to 800 nm), while an infrared thermal imager forms its heat-distribution image from infrared light (wavelengths of 800 nm and above); since the two differ only in wavelength, either an infrared thermal imager or an ordinary camera can serve as the biological information acquisition device 110.
The determination means 120 can be realized by installing and executing the program of the present invention in a general-purpose computer.
The determination device 120 is a device that determines the stress coping method of the subject based on the face image IF acquired by the biological information acquisition device 110 and predetermined response patterns. The response patterns comprise three patterns: "active coping" (pattern I), "passive coping" (pattern II), and "no coping" (pattern III). Each response pattern is specified by hemodynamic parameters, which include mean blood pressure (MBP), heart rate (HR), cardiac output (CO), stroke volume (SV), and total peripheral vascular resistance (TPR).
The determination device 120 includes a feature storage unit 121 for determination, a specific site reaction detection unit 122, and a response mode determination unit 123.
The determination feature amount storage unit 121 is a functional block that stores the spatial feature amounts corresponding to "active coping", "passive coping", and "no coping". The spatial feature amounts stored in the determination feature amount storage unit 121 are those extracted by the learning device 130. When the face image IF is a facial thermal image, the facial skin temperature distribution is an example of the spatial feature amount.
The specific part reaction detection unit 122 is a functional block that detects the stress response of an anatomically specific part of the face of the subject P included in the face image IF. The anatomically specific part consists of one or more regions chosen from an anatomical viewpoint because they show small individual differences; the nose tip is one example.
The response pattern determination unit 123 is a functional block that determines whether the stress coping method of the subject P is "active coping" (pattern I), "passive coping" (pattern II), or "no coping" (pattern III) based on the stress response detected by the specific part reaction detection unit 122 and the spatial feature amounts stored in the determination feature amount storage unit 121.
The learning device 130 includes a learning data storage unit 131, a feature extraction unit 132, and a feature learning unit 133. The learning apparatus 130 is realized by installing and executing the program of the present invention in a general-purpose computer.
The learning data storage unit 131 is a functional block that stores a plurality of learning face images LG labeled with labels corresponding to "active coping", "passive coping", and "no coping", respectively. Learning face images LG are conceptually illustrated in fig. 2. LGA1, LGA2, LGA3, … shown in fig. 2(A) are the group of learning face images labeled "active coping". LGB1, LGB2, LGB3, … shown in fig. 2(B) are the group labeled "passive coping". LGC1, LGC2, LGC3, … shown in fig. 2(C) are the group labeled "no coping".
The feature value extraction unit 132 is a functional block that extracts a spatial feature value of the face image from the face image LG for learning using the learned model 134. The plurality of learning face images LG labeled with labels corresponding to the response patterns specified by the hemodynamic parameters are used as training data, and the learned model 134 is generated by machine learning the spatial feature values of the face image of the subject P included in the learning face images LG.
The feature amount learning unit 133 is a functional block that changes the network parameters of the learned model 134, based on the relationship between the extraction result of the feature amount extraction unit 132 and the label attached to the learning face image LG concerned, so that the feature amount extraction unit 132 can extract spatial feature amounts with high accuracy.
[ actions ]
Next, the flow of processing in the determination device 120 and the learning device 130 of the stress coping method determination system 100 configured as described above will be described with reference to the flowcharts of figs. 3 and 4.
As shown in fig. 3, the determination device 120 executes a determination feature amount storage process S1, a specific part reaction detection process S2, and a response pattern determination process S3.
The determination feature amount storage process S1 is a process of storing, in the determination feature amount storage unit 121, the spatial feature amounts extracted by the learning device 130, i.e., the spatial feature amounts corresponding to "active coping", "passive coping", and "no coping".
The specific part reaction detection process S2 is a process of detecting the stress response of the anatomically specific part of the face of the subject P included in the face image IF captured by the biological information acquisition device 110.
The response pattern determination process S3 is a process of determining whether the stress coping method of the subject P is "active coping", "passive coping", or "no coping" based on the stress response detected in the specific part reaction detection process S2 and the spatial feature amounts stored in the determination feature amount storage unit 121.
As shown in fig. 4, the learning device 130 executes the data storage processing for learning S11, the feature amount extraction processing S12, and the feature amount learning processing S13.
The learning data storage processing S11 is processing for storing a plurality of learning face images LG labeled with labels corresponding to "active response", "passive response", and "no response", respectively, in the learning data storage unit 131.
The feature amount extraction process S12 is a process of extracting the spatial feature amount of the face image of the subject included in the learning face image LG from the learning face image LG using the learned model 134.
The feature amount learning process S13 is a process of changing the network parameters of the learned model 134 so as to improve the accuracy of feature amount extraction in the feature amount extraction process S12, based on the relationship between the extraction result in the feature amount extraction process S12 and the label attached to the face image LG for learning as the extraction target.
[ spatial feature quantity ]
In the present embodiment, the fractal dimension calculated based on the face image IF can be used as the spatial feature amount. By using the fractal dimension, the spatial feature quantity can be digitized easily and with high accuracy.
The method of determining the fractal dimension as the spatial feature quantity is arbitrary. In the processing flow illustrated in fig. 39, the fractal dimension is obtained by executing a series of processing including clustering processing S21, image extraction processing S22, edge extraction processing S23, and fractal analysis processing S24.
The clustering process S21 is a process of clustering the face image IF by temperature. Any clustering method may be used; the fuzzy C-means method is well suited to this embodiment. Unlike hard clustering methods such as K-means, in which each datum belongs to exactly one cluster, fuzzy C-means assumes that each datum belongs to every cluster to some degree and expresses this belonging as a degree of membership. Fig. 40 shows an example in which the number of clusters is set to 12 and the input face image IF is clustered; cluster 1 is assigned to the lowest temperature range and cluster 12 to the highest.
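To make the clustering step concrete, the following is a minimal NumPy sketch of the fuzzy C-means update loop. The fuzzifier m, iteration count, and the one-dimensional "temperature" sample data are illustrative assumptions, not values from the embodiment:

```python
import numpy as np

def fuzzy_c_means(data, n_clusters, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy C-means: returns cluster centres and the degree of
    membership of every sample to every cluster (rows sum to 1)."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    # random initial membership matrix, normalized so each row sums to 1
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m                                    # fuzzified memberships
        centres = (um.T @ data) / um.sum(axis=0)[:, None]
        # distance of every sample to every centre
        d = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)                          # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)       # membership update
    return centres, u

# Example: 1-D "temperatures" forming two well-separated groups
temps = np.array([[29.9], [30.0], [30.1], [35.9], [36.0], [36.1]])
centres, u = fuzzy_c_means(temps, n_clusters=2)
```

The `argmax` of each membership row gives the hard cluster assignment used to split the image into the attribution-degree images of fig. 40, while the membership values themselves retain the fuzzy information.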
The image extraction process S22 is a process of extracting clustered images having a temperature distribution in a temperature region of a predetermined temperature or higher from the plurality of clustered images obtained by the clustering process S21. In the example of fig. 41, the images of clusters 8 to 12, which are attribution degree images including the temperature of the face area, among the images of clusters 1 to 12 illustrated in fig. 40, are extracted.
The edge extraction process S23 is a process of detecting the edges of the images extracted in the image extraction process S22 and generating an edge graphic composed of the line segments formed by those edges. Any edge-detection method may be used; the Canny method is well suited to this embodiment. The Canny method detects edges through four steps: noise removal with a Gaussian filter, luminance-gradient (edge) extraction with Sobel filters, non-maximum suppression that removes all pixels except those where the edge strength is locally maximal, and hysteresis thresholding, which uses two thresholds to decide whether candidate edges are genuine. In the example of fig. 41, the edges of the images of clusters 8 to 12 are detected by the Canny method, and edge graphics composed of the resulting line segments are generated.
The fractal analysis process S24 is a process of obtaining the fractal dimension of the edge graphic generated in the edge extraction process S23. The fractal dimension is an index that quantifies the self-similarity and complexity of a figure or phenomenon, and generally takes a non-integer value. Any method of fractal analysis may be used; the box-counting method is well suited to this embodiment. In the box-counting method, the figure to be analyzed is divided into square boxes (a grid) of side length r, the number N(r) of boxes containing part of the figure is counted for several values of r, the relation between r and N(r) is fitted with a straight line on a log-log plot, and the fractal dimension is obtained as the absolute value of the slope of that line.
The fractal dimension D is calculated as follows, where r is the box size and N(r) is the number of boxes containing the figure:
[ number 1]
\( D = -\lim_{r \to 0} \dfrac{\log N(r)}{\log r} \)
Fig. 42 shows an example of calculating the fractal dimension of the edge graphic of cluster 12 in fig. 41. In this example, r is varied from 2 to 128, r and N(r) are plotted on a log-log plot, and a fractal dimension of 1.349 is obtained.
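The box-counting procedure described above can be sketched as follows. The set of box sizes mirrors the r = 2 to 128 range of the example; the function name is illustrative:

```python
import numpy as np

def box_count_dimension(edge_img, sizes=(2, 4, 8, 16, 32, 64, 128)):
    """Estimate the fractal dimension of a binary edge image by box
    counting: fit log N(r) against log r and return |slope|."""
    counts = []
    for r in sizes:
        h, w = edge_img.shape
        # pad so the image divides evenly into r x r boxes
        img = np.pad(edge_img, ((0, (-h) % r), (0, (-w) % r)))
        hb, wb = img.shape[0] // r, img.shape[1] // r
        blocks = img.reshape(hb, r, wb, r)
        counts.append(blocks.any(axis=(1, 3)).sum())  # boxes touching the figure
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope  # D = |slope| of the log-log fit

# Sanity check input: a straight diagonal line, whose dimension is 1
line = np.zeros((256, 256), dtype=bool)
np.fill_diagonal(line, True)
```

For the diagonal line, N(r) = 256/r exactly, so the log-log fit recovers a dimension of 1; an edge graphic of a face thermal image yields a non-integer value such as the 1.349 of the example.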
[ Effect and Effect ]
In the stress coping method determination system 100 configured as described above, the biological information acquisition device 110 captures the face image IF of the subject P, and the captured face image IF is input to the determination device 120. The determination device 120 determines whether the stress coping method of the subject P corresponds to pattern I, II, or III based on the input face image IF and the response patterns (patterns I, II, and III) specified by the hemodynamic parameters, i.e., mean blood pressure, heart rate, cardiac output, stroke volume, and total peripheral vascular resistance.
Therefore, according to the stress coping method determination system 100, the stress coping method of the subject P can be determined in a non-contact state based on the face image IF of the subject P, so the type of stress felt by the subject P can be grasped without imposing restraints on the subject P.
The stress response method determination system 100 determines the stress response method of the subject P based on the response and response patterns (patterns I, II and III) of the anatomical specific region of the face included in the face image IF of the subject P. The anatomical specific portion is one or more portions determined from an anatomical viewpoint depending on a portion having a small individual difference.
Therefore, according to the pressure response method determination system 100, a general system for determining the pressure response method can be constructed.
In the stress coping method determination system 100, the learning device 130 extracts the feature amounts corresponding to "active coping", "passive coping", and "no coping". The learning device 130 stores a plurality of learning face images LG labeled with labels corresponding to these coping methods, extracts the spatial feature amounts of the face images from the learning face images LG using the learned model 134, and changes the network parameters of the learned model 134, based on the relationship between each extraction result and the label of the learning face image LG concerned, so as to improve the accuracy of extracting the spatial feature amounts. As the learning of the learned model 134 progresses, the accuracy of extracting the spatial feature amounts of the face images improves, and so does the accuracy of the spatial feature amounts stored in the determination feature amount storage unit 121.
Therefore, according to the stress coping method determination system 100, the accuracy of determining the stress coping method improves as the learning of the learned model 134 in the learning device 130 advances.
In addition, according to the stress coping method determination system, the accuracy of determining the stress coping method can be further improved by quantifying the spatial feature amounts using the fractal dimension.
The present invention is not limited to the above embodiments. For example, although the pressure coping manner determination system 100 includes the learning device 130 in the above embodiment, the learning device 130 may be omitted. In the case where the learning device 130 is omitted, the spatial feature extracted or generated by a mechanism other than the learning device 130 is stored in the feature storage unit 121 for determination of the determination device 120.
< Experimental example 1>
An experimental example for collecting data of the feature amounts stored in the feature amount storage unit 121 for determination according to the present embodiment is shown below.
1. Experimental method
The experiment was carried out in a shielded room kept at 25.0 ± 1.5 °C, with eight healthy adult males aged 18 to 21 as subjects. Fig. 6 shows the measurement system. Each subject sat in a chair and wore the measuring finger cuff of a continuous blood pressure and hemodynamics monitor (precision model 2, manufactured by Finapres Medical Systems B.V.).
As hemodynamic parameters, mean blood pressure (MBP), heart rate (HR), cardiac output (CO), and total peripheral vascular resistance (TPR) were measured. To capture facial thermal images, an infrared thermal imager (TVS-200EX, Avionics Co.) was placed 1.2 m in front of the subject at an angle of view covering the entire face, with the capture rate set to 1 fps. During the facial skin temperature measurement, the subject sat in a chair with a backrest, and the computer screen was projected by a projector onto a wall 1.55 m away.
The experiment consisted of five 60 s periods: quiet rest with eyes closed before the tasks (Rest1), an active-coping task (Task1), a no-coping task (Task2), a passive-coping task (Task3), and quiet rest with eyes closed after the tasks (Rest2). A mental arithmetic task was used as the active-coping task, a mirror-drawing task as the passive-coping task, and quiet eye closure, a task that cannot be coped with, as the no-coping task. The mental arithmetic task presented a two-digit addition problem on the computer screen every 4 s; subjects were not told whether each answer was right or wrong. In the mirror-drawing task, the subject was instructed to trace, with a mouse held in the right hand, between the lines of a star-shaped figure displayed on the screen (see fig. 7) so as to reproduce the figure on the computer. The motion of the cursor on the screen was reversed both vertically and horizontally with respect to the motion of the mouse.
The cursor and its trajectory were displayed over the star-shaped figure; whenever the cursor crossed a boundary, it was returned to the starting position and the entire trajectory was erased. Tracing started from the topmost point of the star. Each hemodynamic parameter was normalized by subtracting its Rest1 mean as the baseline.
2. Analysis method
2-1 discrimination of hemodynamic parameters
Fig. 8 shows the time-series change of the normalized MBP of subject A. For each hemodynamic parameter, values at or above baseline + 2σ (σ: the standard deviation of that parameter during Rest1) were labeled "+", and values at or below baseline − 2σ were labeled "−". Based on the pattern responses in Table 1, the hemodynamic parameters were identified as "pattern I" (active coping) or "pattern II" (passive coping); if neither pattern matched, the segment was identified as "no coping". The thermal images captured during Tasks 1 to 3 were then labeled according to this determination of the hemodynamic parameters.
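The ±2σ labeling rule can be sketched as below. The pattern templates here are placeholders following the classic pattern I / pattern II description; the actual correspondence table is the one the text refers to and is not reproduced, so the template contents are an assumption of this sketch:

```python
import numpy as np

def sign_label(task_values, rest_values):
    """'+' if the task mean is at least baseline + 2*sigma, '-' if at most
    baseline - 2*sigma, otherwise '0' (baseline and sigma from Rest1)."""
    base, sigma = np.mean(rest_values), np.std(rest_values)
    diff = np.mean(task_values) - base  # baseline-subtracted, as in the text
    if diff >= 2 * sigma:
        return '+'
    if diff <= -2 * sigma:
        return '-'
    return '0'

# Placeholder pattern templates (assumptions for this sketch only)
PATTERNS = {
    'pattern I (active coping)':   {'MBP': '+', 'HR': '+', 'CO': '+', 'TPR': '-'},
    'pattern II (passive coping)': {'MBP': '+', 'HR': '-', 'CO': '-', 'TPR': '+'},
}

def classify(rest, task):
    """Label each parameter, then match the labels against the templates."""
    labels = {k: sign_label(task[k], rest[k]) for k in rest}
    for name, template in PATTERNS.items():
        if all(labels[k] == template[k] for k in template):
            return name
    return 'no coping'  # neither pattern matched
```

With real data, `rest` and `task` would each map the parameter names to the 60 s time series of that period.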
2-2 creation of input data
To use the labeled thermal images in machine learning, the face region was cropped to 151 × 171 pixels and converted to grayscale to create the facial thermal images. To balance the number of labeled facial thermal images (input data) across the coping classes, the data were augmented by random cropping, addition of white Gaussian noise, and contrast adjustment.
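A minimal sketch of the three augmentation operations follows. The crop ratio, noise level, and contrast range are illustrative assumptions, as the experiment does not specify them:

```python
import numpy as np

def augment(img, rng):
    """One augmented sample: random crop (re-padded to the original size),
    additive white Gaussian noise, and contrast scaling about the mean."""
    h, w = img.shape
    ch, cw = int(h * 0.9), int(w * 0.9)          # crop to ~90% of each side
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    out = np.zeros_like(img, dtype=float)
    out[:ch, :cw] = img[y:y + ch, x:x + cw]
    out = out + rng.normal(0.0, 0.01, size=out.shape)   # white Gaussian noise
    gain = rng.uniform(0.9, 1.1)                        # contrast adjustment
    out = (out - out.mean()) * gain + out.mean()
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
thermal = rng.random((151, 171))   # stands in for one grayscale thermal image
sample = augment(thermal, rng)
```

Calling `augment` repeatedly on the underrepresented classes evens out the per-class sample counts while preserving the 151 × 171 input shape.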
2-3 machine learning Using CNN
In this experiment, a per-subject discrimination model of the stress coping method based on the facial skin temperature distribution was constructed using a convolutional neural network (CNN), and feature amounts related to the stress coping method were extracted.
The CNN is composed of three convolutional layers and three pooling layers for feature extraction, and one fully connected layer for discrimination. The structure of the CNN is shown in fig. 9, and the filter parameters of the convolutional and pooling layers in figs. 10 and 11. Feature analysis related to the stress coping method was performed on the feature maps produced by the convolutional layers of the CNN.
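Since the concrete filter sizes and strides are given only in figs. 10 and 11, the sketch below propagates the spatial size of a 151 × 171 input through three conv + pool blocks using placeholder parameters (5 × 5 convolution with stride 1, 2 × 2 pooling with stride 2), following the standard valid-output formula (W − F) / S + 1:

```python
def conv_out(size, f, s):
    """Output size of a valid convolution or pooling: (size - f) // s + 1."""
    return (size - f) // s + 1

def cnn_output_size(h, w, conv=(5, 1), pool=(2, 2), n_blocks=3):
    """Spatial size after n_blocks of conv + pool. The (filter, stride)
    pairs are placeholders; the actual values are those of figs. 10 and 11."""
    for _ in range(n_blocks):
        h, w = conv_out(h, *conv), conv_out(w, *conv)   # convolutional layer
        h, w = conv_out(h, *pool), conv_out(w, *pool)   # pooling layer
    return h, w

# With the placeholder parameters, a 151 x 171 input shrinks to 15 x 17
final = cnn_output_size(151, 171)
```

The fully connected discrimination layer would then operate on the flattened 15 × 17 × (number of filters) activations under these assumed parameters.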
3. Results and study
Fig. 12 shows, for each subject, the feature maps of the second convolutional layer obtained when the facial thermal images for each stress coping method are input to the CNN.
Comparing the feature maps across coping methods within a subject shows that the regions expressing features differ between coping methods; in subject B in particular, features appear on the left cheek for active coping and no coping, and on the right cheek for passive coping. Comparing the feature maps between subjects shows that, although features appear around the nose in most subjects, the feature-expressing regions differ from one individual to another.
These individual differences in feature-expressing regions are thought to be caused mainly by individual differences in the structure of blood vessels and fat. Nevertheless, it is considered that a general model for discriminating the stress coping method can be constructed by identifying anatomically meaningful feature regions.
4. Summary of the invention
In this experiment, discrimination of the stress coping method and extraction of features related to the stress coping method from the facial skin temperature distribution were attempted using a CNN. As a result, feature distributions related to the stress coping method were found to appear in the facial skin temperature distribution, and it was confirmed that these feature distributions differ between individuals.
In addition, two experimental studies conducted by the present inventors, which form the basis of the stress response pattern determination system of the present invention, are shown below as application examples (embodiments) of the present invention.
In these experiments, biological information was not acquired in a non-contact state as in the present invention; instead, the subject wore the finger cuff of a continuous blood pressure monitor. However, like the present invention, they analyze, classify, and evaluate the physiological and psychological state of a viewer watching television video content on the basis of hemodynamic parameters. The present invention can therefore be applied to, for example, stress pattern determination of a subject in such situations.
< Experimental example 2>
1. Subject matter of the experiment
Nowadays, people are surrounded by a large number of information devices. Among them, television has been recognized, ever since its invention, as the core of the news and entertainment media placed in the living room of the home. However, with the recent spread of smaller, higher-performance information communication devices brought about by the development of IT and the growth of information network environments, the position of information media has changed greatly, and so, in recent years, has the way television is viewed.
Hirata et al. report that viewing has clearly moved away from "focused watching": the television is often left on while the viewer does something else or operates a mobile phone(1). However, if these changed viewing styles can be grasped, new added value can be given to television, such as guiding mood and behavior in a desired direction through TV content design.
Public-opinion and questionnaire surveys list the reasons for watching television as "to pass the time", "to enjoy interesting programs", "to acquire knowledge and cultural refinement", "out of habit", "to escape from troublesome reality", "to change one's mood", "as background music", and "as an occasion for the whole family to gather"(2)(3).
In addition, Takahashi et al. classify television viewing time into "breakfast time", "commuting time", "housework time", "family time on weekdays", and so on(4). Further work classifies television viewing modes into "exclusive viewing", in which attention is physically and mentally focused on the television; "simultaneous viewing", in which the television is watched while doing something else even though the viewer wants to watch the program; and "watching for relaxing", in which the viewer faces the television without particular interest in the content.
Thus, it is clear that viewing modes are diverse and vary with factors such as time, place, and mood. Preference for TV video is also a cause of changes in viewing style, yet there has been no study classifying viewing modes by preference. Moreover, most studies that discriminate viewing style use psychological responses such as opinion surveys or retrospective introspective evaluation, and none classify viewing style using physiological responses.
Since television video and audio are visual and auditory stimuli to the living body, they can be regarded as stressors. Stressors induce stress responses in the organism, producing physiological and psychological changes. If the various contents of television are regarded as stressors, the physiological stress response can be expected to differ depending on how the content is dealt with, that is, on the viewing mode. Nomura et al. classified the states of engagement with e-learning content using cardiovascular indices(6).
Therefore, in this study, central nervous system activity and autonomic nervous system activity were evaluated together with cardiovascular indices, for the purpose of analyzing how television content is viewed: cardiovascular measurement with a continuous blood pressure monitor, non-contact facial skin temperature measurement with an infrared thermography apparatus, and electrocardiographic and electroencephalographic measurement with a digital biological amplifier. In particular, the viewing mode during TV video viewing is analyzed in terms of the stress response pattern obtained from hemodynamic parameters and the preference for the TV video.
2. Factor extraction experiment
Considering that the viewing style is influenced by the video content and by individual preference, the factors were classified in a preliminary experiment.
< 2.1 > Experimental conditions
The subjects were 20 healthy university students (age: 19-22, mean age: 21.1; 11 males, 9 females). The contents, purpose, and tasks of the experiment were fully explained to each subject in advance, orally and in writing, and consent to participate was confirmed by signature. Measurements were performed in a convection-free shielded room at a room temperature of 26.0 ± 1.6 °C, with no thermal input from outside.
Each subject, seated, viewed 10 kinds of video content (news, sports, documentary, music, horror, comedy, drama, animation, cooking, shopping) for 5 minutes each on a 55-inch liquid crystal television (HX850, BRAVIA, SONY) installed 2 m in front. The video content was played by a DVD player (SCPH-3900, manufactured by SONY) and was not disclosed to the subject before viewing. To eliminate order effects, the contents were presented in random order.
< 2.2 > Experimental procedures
As shown in fig. 13, the experiment consisted of 5-minute video viewing preceded and followed by 1-minute quiet eyes-closed rest intervals R1 and R2. In addition, preference and immersion for each video content were evaluated by a Visual Analog Scale (hereinafter abbreviated as VAS) and an immersion questionnaire.
< 2.3 > evaluation method
At the end of the experiment, the subjective sensation of "liking" was measured using a VAS. In the VAS, a pair of adjectives is placed at the two ends of a line segment, and the subject's psychophysical quantity is measured from the position the subject marks on the segment. The terms placed at the two ends of the scale were "very unpleasant" and "very pleasant". The sense of immersion in the video was measured on a 5-point scale (1: not immersed at all - 5: quite immersed).
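The VAS readout described above can be sketched in code. This is a minimal illustration, assuming a 100 mm line segment; the helper name `vas_score` and the normalization to [0, 1] are assumptions for illustration, not details given in the text:

```python
def vas_score(mark_mm: float, line_mm: float = 100.0) -> float:
    """Normalize the position marked on the VAS line segment to [0, 1].

    `mark_mm` is the distance of the subject's mark from the negative
    ("very unpleasant") end; `line_mm` is the assumed segment length.
    """
    if not 0.0 <= mark_mm <= line_mm:
        raise ValueError("mark must lie on the line segment")
    return mark_mm / line_mm
```

For example, a mark 60 mm from the negative end maps to a preference score of 0.6, the kind of value used for the 0.6/0.4 thresholds later in the text.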
< 2.4 > Results and discussion of the factor extraction experiment
The left side of fig. 14 shows the averages over all subjects of preference and immersion for the 10 kinds of content; error bars indicate standard errors. The right side of fig. 14 shows the preference and immersion of the 20 subjects for news and horror, the two contents with particularly large individual differences. Looking at the left side of fig. 14, the average preference and immersion of each content are roughly proportional. In fig. 14, N = 20.
Horror shows the specific result of low preference but high immersion. This is considered the so-called "the more frightening, the more one wants to watch" case, in which viewing is driven by curiosity even though the content is unpleasant. As is clear from the right side of fig. 14, individual differences are large, since each subject's preference and immersion differ. Therefore, in the physiological measurement experiment, based on this factor classification, video content with high immersion and positive preference ("positive" preference), video content with low immersion and negative preference ("negative" preference), and horror video, which shows the specific result of negative preference with either high or low immersion, were selected and presented for each subject.
3. Physiological measurement experiment
Based on the factors extracted in the factor extraction experiment, physiological measurements were performed on the same subjects using other videos.
< 3.1 > Experimental conditions
The subjects were 14 healthy university students (age: 19-22, mean age: 21.1; 7 males, 7 females). The experiment was conducted in the same shielded room as in 2.1, and one experimenter stayed in the room to switch the video content and monitor the physiological measurements. So that the body surface temperature could adapt to room temperature, the experiment was started at least 20 minutes after the subject entered the room. Based on the result of the factor extraction experiment, video contents of positive and negative preference were set for each subject: the content with the highest preference among those with immersion of 4 or more and preference of 0.6 or more was labeled positive preference (hereinafter "positive"), and the content with the lowest preference among those with immersion of 2 or less and preference of 0.4 or less was labeled negative preference (hereinafter "negative"). The video contents viewed were thus of three types: "positive", "negative", and "horror".
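The per-subject selection rule above (immersion and preference thresholds) can be expressed as a small decision function. The function name and the "unclassified" label for content meeting neither criterion are illustrative assumptions, not part of the study:

```python
def classify_content(immersion: int, preference: float) -> str:
    """Label one video content for one subject per the stated thresholds.

    immersion >= 4 and preference >= 0.6 -> candidate "positive" content
    immersion <= 2 and preference <= 0.4 -> candidate "negative" content
    Among the candidates, the study then picks the highest- and
    lowest-preference items respectively.
    """
    if immersion >= 4 and preference >= 0.6:
        return "positive"
    if immersion <= 2 and preference <= 0.4:
        return "negative"
    return "unclassified"
```

For example, a content rated immersion 5 / preference 0.8 is a "positive" candidate, while immersion 3 / preference 0.5 falls into neither class.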
< 3.2 > Experimental procedures
The procedure has the same structure as the factor extraction experiment, shown in fig. 13. In addition, the mood change, preference, and immersion for each video content were evaluated by VAS, the Multiple Mood Scale (hereinafter abbreviated as MMS), and an immersion questionnaire.
< 3.3 > measurement system
The configuration of the measurement system and the EEG electrode placement is shown in fig. 15. The seated subject wore a continuous blood pressure monitor, EEG derivation electrodes, and ECG derivation electrodes. The continuous blood pressure monitor (Finometer model 2, Finapres Medical Systems B.V.) was worn at the second joint of the middle finger of the left hand, and its output was recorded on a PC at a sampling frequency of 200 Hz. An infrared thermography camera (TVS-200X) was placed 0.7 m in front of the subject at an angle allowing the face to be measured; the skin emissivity was set to 0.98, the temperature resolution was 0.1 °C or less, and data were recorded on a PC at a sampling frequency of 1 Hz. EEG was measured by the reference electrode method with the left earlobe (A1) as reference. The EEG derivation electrode was placed at one site (Pz) according to the International 10-20 system, and the electrode contact impedance was kept at 10 kΩ or less.
Alpha waves are generally recorded at O1 and O2, but this study focuses on alpha-wave power as an index of the stabilization of brain activity and calculates its attenuation ratio; ease of electrode attachment and reduction of the measurement burden on the subject were therefore given priority over left-right differences or locality, and the nearby Pz was used. To minimize contamination by myoelectric potentials, the ECG electrodes were attached at the upper sternal border (+) and the cardiac apex (-) according to the NASA lead configuration. The vertex (Cz) served as the common ground electrode for EEG and ECG. The EEG and ECG signals were amplified by a digital biological amplifier (5102, NF Corporation) and recorded on a PC at a sampling frequency of 200 Hz via a processor box (5101, NF Corporation).
< 3.4 > evaluation method
In this study, the correlation between physiological and psychological indices was analyzed. The physiological indices were mean blood pressure (hereinafter MP), heart rate (HR), stroke volume (SV), cardiac output (CO), total peripheral vascular resistance (TPR), nasal skin temperature (NST), the electrocardiogram (ECG), and the electroencephalogram (EEG). The 8-13 Hz frequency component of the EEG, known as the alpha wave, appears in the quiet, eyes-closed, awake state and attenuates when any of these conditions is broken(8). In this study, for the EEG time series sampled at 200 Hz, an FFT was applied to 1024 samples every 10 seconds to obtain the alpha-wave power spectrum every 10 seconds. The average alpha-wave power was then computed in each of the R1 and R2 intervals of fig. 13, and the ratio of the R2 average to the R1 baseline was used as an index of the change in arousal level before and after television viewing.
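The alpha-power index described above (a 1024-sample FFT every 10 s on a 200 Hz EEG, then the R2/R1 power ratio) might be computed roughly as follows. This is a sketch under assumptions: window placement at the start of each 10 s step and the use of an unwindowed FFT are choices the text does not specify:

```python
import numpy as np

FS = 200        # EEG sampling frequency [Hz], per the text
N_FFT = 1024    # samples per FFT window, per the text
STEP = 10 * FS  # one window every 10 s

def alpha_power_series(eeg: np.ndarray) -> np.ndarray:
    """Alpha-band (8-13 Hz) power computed every 10 s from a 200 Hz EEG trace."""
    freqs = np.fft.rfftfreq(N_FFT, d=1.0 / FS)
    band = (freqs >= 8.0) & (freqs <= 13.0)
    powers = []
    for start in range(0, len(eeg) - N_FFT + 1, STEP):
        spectrum = np.abs(np.fft.rfft(eeg[start:start + N_FFT])) ** 2
        powers.append(spectrum[band].sum())
    return np.asarray(powers)

def arousal_index(eeg_r1: np.ndarray, eeg_r2: np.ndarray) -> float:
    """Mean alpha power in R2 relative to the R1 baseline."""
    return alpha_power_series(eeg_r2).mean() / alpha_power_series(eeg_r1).mean()
```

Since power scales with the square of amplitude, halving the alpha amplitude between R1 and R2 yields an index of 0.25, i.e. attenuation of the alpha wave.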
NST is known as an index of the sympathetic nervous system activity that governs peripheral blood flow. Since enhancement and suppression of sympathetic activity are closely related to the temporal change of NST, the change of NST per 10 seconds is used in this study as a quantitative index of the dynamics of sympathetic activity during video viewing; positive values indicate suppression of sympathetic activity, and negative values indicate its enhancement. For each frame of the measured nasal thermal image time series, the spatial average temperature over a 10 × 10 pixel region of the nose was computed and taken as the NST time series. HF is the 0.15-0.4 Hz high-frequency component of the heart rate variability spectrum, known as the respiratory sinus arrhythmia component(7).
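A minimal sketch of the NST computation just described (10 × 10 pixel spatial mean per 1 Hz thermal frame, then the change over each 10 s interval). The nose-region coordinates `(row, col)` are hypothetical; locating the region in the image is outside this sketch:

```python
import numpy as np

def nst_series(frames: np.ndarray, row: int, col: int) -> np.ndarray:
    """Spatial mean over a 10x10 pixel nose region for each 1 Hz thermal frame.

    `frames` has shape (time, height, width); (row, col) is the assumed
    top-left corner of the nose region.
    """
    return frames[:, row:row + 10, col:col + 10].mean(axis=(1, 2))

def nst_change_per_10s(nst: np.ndarray) -> np.ndarray:
    """Change of NST over each successive 10 s (10-frame) interval.

    Per the text, positive values suggest sympathetic suppression and
    negative values sympathetic enhancement.
    """
    return nst[10::10] - nst[:-10:10]
```

For a thermal sequence warming uniformly by 0.01 °C per frame, each 10 s change is +0.1 °C.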
HF is an index of the parasympathetic nervous system and increases or decreases with the enhancement or suppression of parasympathetic activity. The time series of R-peak intervals was obtained from the ECG by thresholding and, after cubic spline interpolation, resampled at 20 Hz. An FFT was applied to the resampled data every second to obtain the time series of the heart rate variability power spectrum; the number of FFT points was 512. The integral of the spectrum over 0.15-0.4 Hz was taken as the HF time series. The hemodynamic parameters (MP, HR, SV, CO, TPR) are known to show characteristic response patterns (pattern I, pattern II) depending on the stressor, an important concept in understanding the physiological response of the cardiovascular system to stress.
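The HF pipeline described here (R peaks by thresholding, cubic-spline resampling of the R-R series at 20 Hz, a 512-point FFT every second, integration over 0.15-0.4 Hz) could be sketched as below. The amplitude threshold, the local-maximum peak criterion, and the window placement are assumptions the text does not fix; a record of at least ~26 s of resampled R-R data is needed before the first 512-point window fits:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def hf_series(ecg: np.ndarray, fs: int = 200, thresh: float = 0.6) -> np.ndarray:
    """HF (0.15-0.4 Hz) time series from a raw ECG trace."""
    # R-peak detection: local maxima above an assumed amplitude threshold
    above = ecg[1:-1] > thresh
    peaks = np.where(above & (ecg[1:-1] > ecg[:-2]) & (ecg[1:-1] >= ecg[2:]))[0] + 1
    t_peaks = peaks / fs
    rr = np.diff(t_peaks)                       # R-R intervals [s]
    # Cubic-spline interpolation, resampled at 20 Hz
    spline = CubicSpline(t_peaks[1:], rr)
    t_new = np.arange(t_peaks[1], t_peaks[-1], 1.0 / 20.0)
    rr_20 = spline(t_new)
    # 512-point FFT windows advanced by 1 s (20 samples)
    freqs = np.fft.rfftfreq(512, d=1.0 / 20.0)
    band = (freqs >= 0.15) & (freqs <= 0.4)
    hf = []
    for start in range(0, len(rr_20) - 512 + 1, 20):
        spectrum = np.abs(np.fft.rfft(rr_20[start:start + 512])) ** 2
        hf.append(spectrum[band].sum())
    return np.asarray(hf)
```

A perfectly regular heartbeat has a constant R-R series, so its HF power is essentially zero; respiratory modulation of the R-R intervals at 0.15-0.4 Hz is what raises HF.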
Specifically, pattern I is characterized by increased myocardial contractile activity and increased blood volume in skeletal muscle due to vasodilation, and is regarded as an energy-consuming (active) response. Pattern II, in contrast, is characterized by constriction of peripheral blood vessels with a largely reduced heart rate, and is regarded as an energy-conserving (passive) response.
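The two patterns can be expressed as a simple decision rule over the signs of the parameter changes. This is an illustrative simplification of the descriptions above, not the authors' discrimination procedure:

```python
def stress_response_pattern(d_mp: float, d_hr: float, d_sv: float,
                            d_co: float, d_tpr: float) -> str:
    """Classify the direction of change of the hemodynamic parameters.

    Simplified reading of the text: pattern I couples a rise in cardiac
    activity (MP, HR, SV, CO) with a fall in TPR; pattern II couples rises
    in TPR and MP with a non-increasing HR; anything else is treated as
    "no response" here.
    """
    if d_mp > 0 and d_hr > 0 and d_sv > 0 and d_co > 0 and d_tpr < 0:
        return "pattern I (active)"
    if d_tpr > 0 and d_mp > 0 and d_hr <= 0:
        return "pattern II (passive)"
    return "no response"
```

This same sign structure reappears later when "positive"/"horror (C)" (pattern I), "horror (N)" (pattern II), and "negative" (no response) are compared.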
The psychological indices were the Multiple Mood Scale (MMS) and VAS. The MMS indexes temporary moods and emotional states on eight scales that change with the subject's condition: depression-anxiety (hereinafter D-A), hostility (H), fatigue (F), active pleasure (A), inactive pleasure (I), affinity (AF), concentration (D), and startle (S)(9). Before the start and at the end of the experiment, four subjective sensations - "arousal", "pleasure-displeasure", "energy", and "liking" - were measured using VAS.
"Pleasure-displeasure" and "arousal" were chosen as psychological evaluation items in this study because they are the essential components of Russell's two-dimensional theory of emotion(10). In the VAS, a pair of adjectives is placed at the two ends of a line segment, and the subject's psychophysical quantity is measured from the position marked on the segment. The terms at the ends of the scales were: arousal, "very sleepy" - "very awake"; pleasure-displeasure, "very unpleasant" - "very pleasant"; energy, "very tired" - "very energetic"; liking, "very disliked" - "very liked". Each VAS was prepared on a separate sheet, and the subject was instructed to fill them in one by one without referring back. The sense of immersion in the video was measured after the end of the experiment on a 5-point scale (1: not immersed at all - 5: quite immersed). For statistical analysis, a paired t test was used for the difference of each psychological index before and after viewing, and the Wilcoxon signed-rank test was used for the variation of each physiological index over the whole TEST interval.
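The two statistical tests named above are available in standard scientific libraries. A sketch with synthetic stand-in data (not the experiment's), assuming SciPy; the sample size of 14 mirrors the number of subjects, and the built-in positive shift plays the role of a real before/after effect:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic data: one psychological index for 14 subjects before and
# after viewing, with a deliberate positive shift between the two.
before = rng.normal(0.5, 0.1, 14)
after = before + rng.normal(0.15, 0.05, 14)

t_stat, p_paired = stats.ttest_rel(before, after)   # paired t test
w_stat, p_wilcox = stats.wilcoxon(after - before)   # Wilcoxon signed-rank test
```

With a shift this large relative to its spread, both tests report the change as significant; on real data the choice between them follows the parametric (t test for the psychological indices) versus non-parametric (Wilcoxon for the physiological indices) split described above.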
< 3.5 > Results and discussion
Statistical analysis was performed on the psychological indices, and the psychological responses during and before/after TV video viewing are discussed. Since the preference of all subjects for the horror stimulus was 0.4 or less, only immersion is considered for it: in the 5-point immersion evaluation after viewing the horror video, scores of 4 or 5 were classified as horror (concentrated) (hereinafter "horror (C)") and scores of 1 or 2 as horror (non-concentrated) (hereinafter "horror (N)"). The results are hereafter labeled "positive", "negative", "horror (C)", and "horror (N)"; there were 10 "horror (C)" and 4 "horror (N)" subjects. The averages over all subjects of each MMS mood scale are shown in figs. 16-19.
N = 14 in fig. 16, 14 in fig. 17, 10 in fig. 18, and 4 in fig. 19. Figs. 20-23 show the averages over all subjects of "pleasure-displeasure", "arousal", "energy", "liking", and "immersion". N = 14 in fig. 20, 14 in fig. 21, 10 in fig. 22, and 4 in fig. 23.
According to fig. 16, "positive" increased significantly in A and AF and decreased significantly in F, while according to fig. 17, "negative" increased significantly in H and F. This is consistent with viewing positively and negatively preferred video content, respectively. "Horror (C)" in fig. 18 shows significant decreases in A and I, from which it can be presumed to evoke generally unpleasant emotion. Moreover, significantly more scales worsen in fig. 18 than in fig. 19.
From this, it can be inferred that for horror the emotional change is larger when immersed than when not. According to fig. 20, "positive" increased significantly in pleasure-displeasure, arousal, and energy; positively preferred video content is considered to give the subject positive comfort accompanied by a sense of refreshment and pleasantness.
In contrast, according to fig. 21, "negative" decreased significantly in pleasure-displeasure, arousal, and energy, so negatively preferred video content is considered to give the subject negative discomfort accompanied by a sense of oppressiveness and unpleasantness. Furthermore, according to fig. 22, "horror (C)" increased significantly in arousal and decreased significantly in pleasure-displeasure and energy, and is therefore considered to produce aroused discomfort accompanied by wakefulness and unpleasantness.
Next, the physiological responses during and before/after TV video viewing are discussed. Fig. 24 shows the average time course over subjects of each physiological index (N = 14); from the top: NST, alpha wave, HF, MP, HR, SV, CO, TPR. The start of the TEST interval is set to 0, and error bars indicate the standard error every 10 s. The R1-interval baseline was set to 0 for NST, MP, HR, SV, CO, and TPR, and to 1 for the alpha wave and HF. The significance probability p of the Wilcoxon signed-rank test for the shift of the whole TEST interval from the baseline is shown in fig. 25 (+: p<0.1, *: p<0.05, **: p<0.01); in the table, P denotes a positive and N a negative response for each index. In fig. 25, N = 14.
Looking at the time course of each index in fig. 24, NST decreases for both "positive" and "negative" from the start of the TEST interval, suggesting enhanced sympathetic activity. No significant change is seen in the alpha wave. According to fig. 25, HF shows no significant change for "positive" and decreases significantly for "negative". Since HF is an index of parasympathetic activity, increasing and decreasing with its enhancement and suppression, parasympathetic suppression can be inferred for "negative". This is consistent with the NST interpretation above.
Fig. 26 shows the average time course over subjects of each physiological index for the horror videos (N = 14). From fig. 26, sympathetic enhancement is inferred for "horror (N)" because its NST decreases. No significant result was obtained for the alpha wave. HF decreases significantly for "horror (C)" and increases significantly for "horror (N)". For "horror (N)", the HF interpretation is inconsistent with the NST interpretation above; this is considered to reflect a difference in mechanism. As shown below, "horror (N)" exhibits a passive response characterized by an increase in TPR; as a result, blood flow in the peripheral vessels decreases, which is considered to lower NST.
Next, "positive", "negative", "horror (C)", and "horror (N)" are compared in terms of immersion and preference, and their hemodynamic responses are examined. The result can be summarized on the stress response pattern and preference axes as shown in fig. 27 (N = 14).
Both "positive" and "horror (C)" show high immersion, but "positive" has positive preference and "horror (C)" negative preference. Nevertheless, both show increases in MP, HR, SV, and CO and a decrease in TPR. This is the typical pattern I (active) response, dominated by increased myocardial contractile activity.
That is, it is clear that immersion in TV video content elicits an active response regardless of preference. By analogy with the factor extraction experiment, a subject with high preference for the horror video in the physiological measurement experiment would be expected to show high immersion and fall in the upper right region of fig. 27, but no such subject was present. On the other hand, "negative" and "horror (N)" both show low immersion and low preference, yet their physiological responses differ: "negative" shows decreases in TPR and HR with no change in MP, which can be regarded as neither an active nor a passive response, but no response. In fig. 27, N = 14.
In contrast, "horror (N)" shows no significant change in HR but characteristic significant increases in TPR and MP. This is the typical pattern II (passive) response, in which MP rises mainly through increased peripheral vasoconstriction. Thus, content that does not produce immersion is generally of low preference and shows no stress response, whereas horror video elicits a passive response. That is, the viewing state cannot be identified from preference for TV video alone, but can be classified by the stress coping pattern.
4. Summary
In this study, psychophysiological measurements were made while subjects watched video content of differing preference, and an attempt was made to classify viewing styles on a physiological basis according to the stress coping pattern. The hemodynamic parameters (MP, HR, SV, CO, TPR) were measured as cardiovascular indices, the alpha-wave power spectrum of the EEG as a central nervous system index, and the nasal skin temperature and the HF component of heart rate variability as autonomic nervous system indices; preference for the videos and psychological questionnaires were collected at the same time, statistical analysis of the physiological and psychological states was performed, and the physiological and psychological effects of viewing the video content were evaluated quantitatively.
The classification by psychological indices related to preference had not been evaluated in previous studies, so combining the stress response with preference yields an entirely new classification of television viewing: the response to TV video is active for "positive" and "horror (C)", passive for "horror (N)", and absent for "negative". That is, the viewing style cannot be discriminated from preference for TV video alone, but can be discriminated from the stress response of the hemodynamic parameters.
< References >
(1) Akihiro Hirata, Emi Morofuji, Hiroshi Aramaki: The present state of television viewing and media use - from the survey "The Japanese and Television 2010", Hoso Kenkyu to Chosa (The NHK Monthly Report on Broadcast Research), pp.2-27 (2010)
(2) Hiroshi Aramaki et al.: How does television face people in their twenties?, Hoso Kenkyu to Chosa (The NHK Monthly Report on Broadcast Research), pp.2-21 (2008)
(3) Y. Siki, Y. Murakami and Y. Huzita: "Television viewing and simultaneous use of new media: an ethnological study of young people in Japan", Institute for Media and Communication Research, Keio University, Vol.59, No.1, pp.131-140 (2009) (in Japanese)
(4) Nomura, Y. Kurosawa, N. Ogawa, C.M. Althaff Irfan, K. Yajima, S. Handri, T. Yamagishi, K.T. Nakahira and Y. Fukumura: "Physiological Evaluation of a Student in E-learning Sessions by Hemodynamic Response", IEEJ Trans. EIS, Vol.131, No.1, pp.146-151 (2011) (in Japanese)
(5) T. Watanuki and A. Nozawa: "Visualization of the feeling of immersion in TV contents", Papers of the IEEJ Technical Meeting on Instrumentation and Measurement, Vol.IM-12, No.63, pp.19-25 (2012) (in Japanese)
(6) A. Mori, C. Ono, Y. Motomura, H. Asoh and A. Sakurai: "Empirical Analysis of User Preference Models for Movie Recommendation", IEICE Technical Report, NC (Neurocomputing), Vol.104, No.759, pp.77-82 (2005) (in Japanese)
(7) Yukihiro Sawada: Hemodynamic responses, in: New Physiological Psychology, Vol.1 (K. Fujisawa, S. Kakigi, K. Yamazaki, eds.), Chapter 10, Kitaoji Shobo, p.187 (1998)
(8) J.A. Russell: "A circumplex model of affect", Journal of Personality and Social Psychology, Vol.39, pp.1161-1178 (1980)
< Experimental example 3>
1. Subject matter of the experiment
In recent years, the digitalization and multi-channelization of television broadcasting have advanced. At the same time, with the spread of smaller, higher-performance information communication devices and the growth of information network environments brought about by the development of IT, the drift away from television as entertainment has accelerated, and the reasons for watching television have diversified. Hirata et al. cite reasons such as "to understand what is happening in society", "to relieve fatigue or relax", and "to deepen or broaden interpersonal relationships"(1). Regarding reasons and modes of viewing, there is also a shift toward weaker engagement, such as an unconscious viewing attitude and weakening viewing habits. Examples are "exclusive viewing", in which the television is watched attentively; "simultaneous viewing", in which it is watched in parallel with other life activities such as housework, meals, or study; and "non-attentive viewing", in which channels are switched without attending to any particular program. Viewing modes are thus diverse and change with factors such as time, place, and mood.
However, most studies classifying viewing modes use psychological responses such as opinion surveys and retrospective introspective evaluation, and none classify them using physiological responses. Since television video and audio are visual and auditory stimuli to the living body, they can be regarded as stressors. Stressors induce stress responses in the organism, producing physiological and psychological changes.
When the various contents of television are regarded as stressors, the physiological stress response can be expected to differ depending on how the content is dealt with, that is, on the viewing mode. Nomura et al. classified the states of engagement with e-learning content using cardiovascular indices(4).
Against this background, classification of viewing styles and preferences for TV video content based on cardiovascular indices has previously been attempted(5). Beyond classification, the construction of estimation models has also been studied.
Heihuan et al. constructed user preference models for video content using naive Bayes, decision trees, Bayesian networks, and neural networks, and derived the prediction accuracy and the important variables for content evaluation(6). However, that study also used only psychological responses; models that estimate preference and viewing style from physiological responses have not been studied.
The studies to date suggest that psychological states of television viewers, such as content preference and immersion, may be classifiable from the stress-coping mode and the heart rate derived from hemodynamic parameters. The present study therefore conducts an experiment to estimate the physiological and psychological states of a viewer watching television from cardiovascular indices. Feature vectors are extracted from the cardiovascular indices, and estimation models of preference, viewing style, and excitement-sedation for TV video content are created and evaluated.
2. Experiment
While subjects view TV video content, cardiovascular measurements are taken with a continuous sphygmomanometer. Pattern recognition with a hierarchical neural network applied to the hemodynamic parameters is then used to estimate preference, viewing style, and excitement-sedation during viewing.
< 2.1 > Experimental procedures
As shown in fig. 28, the experiment consisted of 5 minutes of video viewing preceded and followed by 1-minute rest periods R1 and R2. Preference and immersion for each video content were evaluated with a Visual Analogue Scale (hereinafter VAS) before and after viewing, together with an immersion questionnaire. In addition, the change in subjective excitement-sedation felt by the subject was recorded in real time during viewing.
< 2.2 > Experimental conditions
The subjects were 14 healthy university students in Japan (age 19-22, mean 21.4; 7 male, 7 female). The content, purpose, and procedure of the experiment were fully explained to the subjects in advance, orally and in writing, and consent to participate was confirmed by signature. Measurements were carried out in a draft-free shielded room at a room temperature of 26.0 ± 1.6 °C, and an experimenter stayed in the same room to switch the video content and monitor the physiological measurements. To allow the body surface temperature to adapt to room temperature, the experiment was started at least 20 minutes after the subject entered the room.
Using a liquid crystal television (55 inch, HX850, BRAVIA, SONY) installed 2 m in front of the seat, the subject viewed three types of video content: positively preferred, high-immersion content (hereinafter "positive"); negatively preferred, low-immersion content (hereinafter "negative"); and horror video, which shows the peculiar property of large individual differences in immersion despite low preference. The video content was played on a DVD player (SCPH-3900, manufactured by SONY).
< 2.3 > measurement system
Fig. 29 shows the measurement system. The seated subject wore a continuous sphygmomanometer (Finometer model 2, Finapres Medical Systems B.V.) on the second joint of the middle finger of the left hand, and the signal was recorded on a PC at a sampling frequency of 200 Hz. A keyboard (K270, manufactured by Logicool) was placed in front of the subject, and software was used to record, continuously and relatively, the subjective change in excitement-sedation felt by the subject, entered in real time with the up and down keys of the keyboard.
< 2.4 > evaluation method
The physiological indices are mean blood pressure (hereinafter MP), heart rate (hereinafter HR), stroke volume (hereinafter SV), cardiac output (hereinafter CO), and total peripheral vascular resistance (hereinafter TPR). The hemodynamic parameters (MP, HR, SV, CO, TPR) are known to show characteristic reaction patterns (pattern I, pattern II) depending on the stressor, an important concept for understanding the physiological response of the cardiovascular system to stress. Specifically, pattern I is characterized by increased myocardial contractile activity and increased blood volume in skeletal muscle due to vasodilation, and is said to be an energy-expending response (active coping). Pattern II, in contrast, is characterized by constriction of peripheral blood vessels and a substantial reduction in heart rate, and can be called an energy-conserving response (passive coping)(7). MP, HR, SV, CO, and TPR were normalized by setting the baseline of the R1 interval to 0.
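The baseline normalization described above can be sketched as follows (a minimal illustration; the function name and the toy blood-pressure values are assumptions, not taken from the patent):

```python
import numpy as np

def normalize_to_baseline(signal, baseline_window):
    """Shift a physiological time series so that its mean over the
    R1 rest interval becomes 0, as done for MP, HR, SV, CO and TPR."""
    baseline = np.mean(signal[baseline_window])
    return signal - baseline

# Toy mean-blood-pressure trace: 3 rest samples (R1) followed by viewing.
mp = np.array([90.0, 91.0, 89.0, 95.0, 97.0, 99.0])
mp_norm = normalize_to_baseline(mp, slice(0, 3))
# The rest samples now average 0; viewing samples show the shift from rest.
```

The same function applies unchanged to each of the five indices, since the normalization is a simple per-channel offset.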
Before the start and at the end of the experiment, VAS measurements were taken for four subjective scales used as psychological indices: "wakefulness", "pleasantness-unpleasantness", "vitality", and "liking". "Pleasantness-unpleasantness" and "wakefulness" were selected as psychological evaluation items in this study because they are the essential components of Russell's two-dimensional theory of emotion(8). In the VAS, a pair of adjectives is placed at the two ends of a line segment, and the subject marks an arbitrary position on the segment, allowing the subject's psychophysical quantities to be measured. The terms placed at the ends of each scale were: wakefulness, "extremely sleepy" - "extremely alert"; pleasantness-unpleasantness, "very unpleasant" - "very pleasant"; vitality, "extremely tired" - "extremely energetic"; and liking, "dislike very much" - "like very much".
Each VAS was prepared on a separate sheet of paper, and the subject was instructed to fill the sheets in one at a time without referring back to previous answers. Immersion in each video was rated after the experiment on a 5-point scale (1: not at all immersed - 5: considerably immersed). Differences in each psychological index before and after viewing were tested with a paired t-test, and the change of each physiological index over the entire test interval was tested with the Wilcoxon signed-rank test. The excitement-sedation record was normalized by its maximum value.
Pattern recognition with a hierarchical neural network was used to create the preference classification model and derive its discrimination rate, and to create an excitement-sedation estimation model for each subject and derive its estimates.
Fig. 30 shows the structure of the hierarchical neural network. The learning rule is the error back-propagation algorithm, and the output function is a sigmoid function. The network has 3 layers: an input layer, an intermediate layer, and an output layer. A cross-validation method is used to estimate the accuracy of each model. Further, as shown in fig. 31, a determination time t_d is defined for the time variation of each index; t_d is the 300 s of video viewing. For each model, windows of width Δt_i are taken every 10 s from each feature time series within the t_d interval, and the feature vector S is defined as the series of slopes of those windows. The number of elements of S is 30. Accordingly, the input layer has 30 units and the intermediate layer has 16 units.
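For the Δt_i = 10 s case, the construction of the 30-element slope feature vector S from a 300 s record can be sketched as below (a sketch under stated assumptions: a series resampled to 1 Hz and non-overlapping windows; the patent text does not fix either detail):

```python
import numpy as np

def slope_features(series, fs=1.0, window_s=10.0):
    """Cut a t_d-long series into windows taken every 10 s and return the
    least-squares slope of each window as the feature vector S."""
    n = int(window_s * fs)
    slopes = []
    for start in range(0, len(series) - n + 1, n):
        seg = series[start:start + n]
        t = np.arange(n) / fs
        slopes.append(np.polyfit(t, seg, 1)[0])  # fitted line's slope
    return np.array(slopes)

# A 300 s heart-rate trace at 1 Hz yields 30 slopes,
# matching the 30 units of the network's input layer.
hr = np.linspace(60.0, 75.0, 300)  # steadily rising HR
S = slope_features(hr)
```

For larger Δt_i (20-60 s), the windows taken every 10 s would overlap; the loop step would stay at 10 samples while `n` grows.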
< 2.4.1 > Preference classification model
Heart rate is used as the feature quantity, and Δt_i values of 10 s, 20 s, 30 s, 40 s, 50 s, and 60 s are examined. The total learning data are 28 patterns, one for each preference ("positive", "negative") of the 14 subjects; the accuracy of the preference classification model is evaluated by using 27 patterns as learning data and the remaining 1 pattern as unknown data. After learning the feature vectors S labeled "positive" or "negative", the unknown data is input to evaluate the accuracy of the preference classification model. The output layer has 2 units, "positive" and "negative".
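The leave-one-out protocol (27 learning patterns, 1 unknown pattern, repeated over all 28) can be sketched as follows. The data are synthetic and a nearest-centroid rule stands in for the hierarchical neural network, so only the evaluation loop, not the classifier, reflects the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 28 slope vectors S (14 subjects x 2 contents),
# label 1 = "positive", label 0 = "negative".
X = np.vstack([rng.normal(+0.5, 0.3, (14, 30)),
               rng.normal(-0.5, 0.3, (14, 30))])
y = np.array([1] * 14 + [0] * 14)

def predict(X_train, y_train, x):
    """Toy nearest-centroid rule standing in for the neural network."""
    c1 = X_train[y_train == 1].mean(axis=0)
    c0 = X_train[y_train == 0].mean(axis=0)
    return 1 if np.linalg.norm(x - c1) < np.linalg.norm(x - c0) else 0

# Leave-one-out: learn on 27 patterns, test the held-out one, 28 times.
hits = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    hits += predict(X[mask], y[mask], X[i]) == y[i]
rate = hits / len(y)  # correct discrimination rate
```

The same loop, with 42 patterns and 3 labels, gives the evaluation for the viewing-style model described next.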
< 2.4.2 > Viewing-style classification model
The feature quantities used are the hemodynamic parameters, and Δt_i values of 10 s, 20 s, 30 s, 40 s, 50 s, and 60 s are examined. The total learning data are 42 patterns in total of the stress-coping modes (active coping, passive coping, and no coping) of the 14 subjects; 41 patterns are used as learning data, and the remaining 1 pattern is used as unknown data to evaluate the accuracy of the viewing-style classification model. The output layer has 3 units: active coping, passive coping, and no coping.
< 2.4.3 > Excitement-sedation estimation model
The hemodynamic parameters are used as the feature quantities, and Δt_i is set to 30 s. The total learning data are 28 patterns of excitement-sedation values, one for each preference ("positive", "negative") of the 14 subjects; the accuracy of the excitement-sedation estimation model is evaluated by using 27 patterns as learning data and the remaining 1 pattern as unknown data. The output layer is 1 unit, the excitement-sedation value.
< 2.5 > Results and discussion
Fig. 32 shows the difference between before and after video viewing in each emotion scale of the VAS, averaged over all subjects (+: p<0.1, *: p<0.05, **: p<0.01). In fig. 32, N = 14. According to fig. 32, all subjects had low preference for the horror video stimulus. Therefore, based on the 5-point immersion rating after viewing the horror video, ratings of 4 or 5 were classified as horror (concentrating) (hereinafter "horror (C)") and ratings of 1 or 2 as horror (not concentrating) (hereinafter "horror (N)"). The results were then classified as "positive", "negative", "horror (C)", and "horror (N)". There were 10 subjects in "horror (C)" and 4 in "horror (N)".
Fig. 33 shows the significance probability p of the Wilcoxon signed-rank test for the shift from baseline over the entire test interval of MP, HR, SV, CO, and TPR (+: p<0.1, *: p<0.05, **: p<0.01). In the table, P denotes a positive (increasing) response of each index and N a negative (decreasing) response. In fig. 33, N = 14.
Referring to fig. 33, the viewing styles during TV viewing are classified. Both "positive" and "horror (C)" show high immersion, but "positive" has positive preference while "horror (C)" has negative preference.
Nevertheless, both show increases in MP, HR, SV, and CO and a decrease in TPR. This is the typical pattern I response (active coping), dominated by increased myocardial contractile activity. That is, viewers immersed in TV video content evidently show active coping regardless of preference.
On the other hand, immersion and preference are low for both "negative" and "horror (N)", yet their physiological responses differ. In "negative", decreases in TPR and HR are observed but no change in MP; the response is considered to be neither active nor passive coping, i.e., no coping. In "horror (N)", by contrast, HR shows no significant change, but significant increases in TPR and MP are characteristic. This is the typical pattern II response (passive coping), in which the rise in MP is driven mainly by increased peripheral vasoconstriction.
Thus, when no stress response is shown, preference is generally low and the viewer is not immersed in the TV content, whereas horror video elicits a passive-coping response. That is, the viewing state clearly cannot be specified from preference for the TV video alone, but it can be classified by the stress-coping mode. Furthermore, in fig. 33, HR increased significantly in "positive" and decreased significantly in "negative".
Next, figs. 34 and 35 show the discrimination rates obtained when preference and viewing style are discriminated with the estimation models constructed using the neural network. In figs. 34 and 35, when Δt_i is 50 s, the correct discrimination rate for preference is 83.3% and that for viewing style is 75%, higher than for the other values of Δt_i.
In addition, fig. 36 shows the measured and predicted excitement-sedation obtained from the excitement-sedation estimation.
Fig. 36 shows the results for subject A, for whom the error between measured and predicted excitement-sedation was smallest. In "positive" in fig. 36, as the measured excitement-sedation increased, the predicted excitement-sedation also increased.
Likewise, in "negative" in fig. 36, as the measured excitement-sedation decreased, the predicted excitement-sedation also decreased. Next, fig. 37 shows the mean error for each subject, taking the absolute value of the difference between measured and predicted excitement-sedation as the error. In fig. 37, the mean error is 0.10-0.37 for "positive" and 0.11-0.29 for "negative", averaging 0.17 over all subjects. That is, excitement-sedation was estimated with an error of about 17% on average. Further, fig. 38 plots measured against predicted excitement-sedation sampled every 10 s for all subjects. In fig. 38 the correlation coefficient is 0.89, a high correlation. As fig. 38 shows, measured and predicted excitement-sedation agree closely regardless of the magnitude of the values.
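The two summary statistics used here, the mean absolute error between measured and predicted excitement-sedation and their correlation coefficient, can be computed as follows (the series values below are made up for illustration):

```python
import numpy as np

def mae_and_corr(measured, predicted):
    """Mean absolute error and Pearson correlation coefficient between a
    measured and a predicted excitement-sedation series (both in [0, 1])."""
    err = float(np.mean(np.abs(measured - predicted)))
    r = float(np.corrcoef(measured, predicted)[0, 1])
    return err, r

measured = np.array([0.10, 0.40, 0.50, 0.80, 0.90])
predicted = np.array([0.20, 0.35, 0.55, 0.70, 0.95])
err, r = mae_and_corr(measured, predicted)
```

Because the series are normalized to their maxima, the mean absolute error reads directly as a fraction, which is how the "about 17% on average" figure is expressed.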
3. Summary
In this study, with the aim of estimating the viewing style, preference, and excitement-sedation for TV video content with a neural network, feature vectors were extracted from cardiovascular indices, and estimation models of preference, viewing style, and excitement-sedation were created and evaluated.
As a result, "positive" (positive preference, high immersion) and "horror (C)" (low preference, high immersion) were classified as active coping, "horror (N)" (low preference, low immersion) as passive coping, and "negative" (negative preference, low immersion) as no coping. Heart rate also differed significantly between "positive" and "negative", consistent with the general characteristics of the heart-rate response. The best correct discrimination rates obtained with the estimation models were 83.3% for preference and 75% for viewing style.
For the excitement-sedation estimated per subject, the mean error between measured and predicted excitement-sedation was 10-37% in "positive" and 11-29% in "negative", averaging 17% over all subjects. The correlation coefficient between measured and predicted excitement-sedation was 0.89, a strong positive correlation. These results suggest the possibility of estimating preference, viewing style, and excitement-sedation during TV viewing from the hemodynamic parameters and heart rate. In the future, we plan to improve accuracy further by comparison with other physiological indices and other discriminators.
< reference >
(1) All-grass of Western ramus: do TV through network recovery, release during the month (2001)
(2) The medicinal composition is prepared from: evaluation and expectation of TV in public opinion survey report-survey report from "role of TV" -, research and survey on NHK presentation No.6 month, pp.2-15(1989)
(3) Wild wood is Yuming: chapter 9 television viewing attitude of female university students and factors 2 and 3: from preliminary investigations (study of program analysis and viewing learning behavior: development of classification systems for displaying educational programs) relating to the personality traits of the viewer, interest in program content, objectives of television viewing, research reports, Vol.18, pp.153-172(1990)
(4) Y. Takahashi and S. John: "Recommendation Models of Television Program Genre, Based on Survey and Analysis of Behavior in Watching Television: Toward Human Content Interface Design (2)", Bulletin of Japanese Society for the Science of Design, Vol.46, No.133, pp.71-80 (1999) (in Japanese)
(5) Youzong Youzou and Yuanzuo are as follows: "relationship between television viewing attitude and overall program synthesis" as a relaxation time device, "research and investigation in exhibition, Vol.51, pp.2-17(2001)
(6) S. Nomura, Y. Kurosawa, N. Ogawa, C. M. Althaff Irfan, K. Yajima, S. Handri, T. Yamagishi, K. T. Nakahira, and Y. Fukumura: "Physiological Evaluation of a Student in E-learning Sessions by Hemodynamic Response", IEEJ Trans. EIS, Vol.131, No.1, pp.146-151 (2011)
(7) And (4) the field is happy: a hemodynamic reaction; new physiological psychology I book (Tengze Qing, Shizuogong, shan Zai Sheng Zhi, shan Zai Sheng Man Shu), northern Dai Booth, Chapter 10, pp.187(1998)
(8) A. Michimori: Relationship between the alpha attenuation test, subjective sleepiness and performance test, 10th Symposium on Human Interface, Vol.10, No.1413, pp.233-236 (1992)
(9) M. Terasaki, Y. Kishimoto, and A. Koga: "Construction of a multiple mood scale", The Japanese Journal of Psychology, Vol.62, No.6, pp.350-356 (1992)
(10) J. A. Russell: "A circumplex model of affect", J. Personality and Social Psychology, Vol.39, pp.1161-1178 (1980)
Industrial applicability
The stress-coping determination system according to the present invention can analyze the stress of a subject in a non-contact state, and can therefore be used widely in this technical field, for example as a means of grasping the stress state of a worker in a factory or the like, of a driver driving an automobile, or of a student listening to a lecture.
Description of the reference numerals
100 pressure response mode determination system
110 biological information acquiring apparatus (biological information acquiring unit)
120 judging device (judging part)
121 feature value storage unit for determination
122 specific site reaction detecting part
123 response mode determination unit
130 learning device (machine learning part)
131 data memory for learning
132 feature value extracting unit
133 feature amount learning unit
134 learning finish model
P person to be examined
IF face image
S1 feature storage process for determination (feature storage step for determination)
S2 specific site reaction detecting Process (specific site reaction detecting step)
S3 response mode determination processing (response mode determination step)
S11 learning data storage processing (learning data storage step)
S12 feature quantity extraction processing (feature quantity extraction step)
S13 feature learning process (feature learning step)
S21 clustering process (clustering step)
S22 image extraction processing (image extraction step)
S23 edge extraction processing (edge extraction step)
S24 fractal analysis processing (fractal analysis step).
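The fractal analysis step S24 above turns an edge image into a spatial feature amount, a fractal dimension. A common way to compute one is box counting; the sketch below is an assumption, since the specification excerpt does not name the counting algorithm:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary (edge) image by counting
    occupied s x s boxes for several s and fitting log N(s) ~ -D log s."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        trimmed = mask[:h - h % s, :w - w % s]  # tile exactly into boxes
        boxes = trimmed.reshape(trimmed.shape[0] // s, s,
                                trimmed.shape[1] // s, s)
        counts.append(int(boxes.any(axis=(1, 3)).sum()))
    # Slope of log N(s) against log s gives -D.
    D = -np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    return D

# Sanity check: a filled square is 2-dimensional, so D should be close to 2.
square = np.ones((64, 64), dtype=bool)
D = box_counting_dimension(square)
```

Applied to an edge image extracted from a face image (step S23), the resulting D would serve as the spatial feature amount stored for determination.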
Claims (21)
1. A pressure response mode determination system is characterized by comprising:
a biological information acquisition unit that acquires biological information of a subject in a non-contact state;
a determination unit that determines a stress response mode of the subject based on the biological information and a predetermined response pattern,
the response mode is determined based on hemodynamic parameters.
2. The pressure responding mode determining system according to claim 1,
the hemodynamic parameters include a plurality of parameters of mean blood pressure, heart rate, cardiac output, stroke volume, and total peripheral vascular resistance.
3. The pressure coping manner determining system according to claim 1 or 2,
the biological information is a face image.
4. The pressure responding mode determining system according to claim 3,
the facial image is a facial thermal image or a facial visual image.
5. The pressure coping manner determining system according to claim 3 or 4,
the determination unit determines the stress-coping mode of the subject by observing a stress response at a specific part of the face included in the face image.
6. The pressure responding mode determining system according to claim 5,
the coping modes include three modes of "active coping", "passive coping", and "no coping".
7. The pressure responding mode determining system according to claim 6,
the determination unit includes a determination feature amount storage unit that stores a spatial feature amount corresponding to the "active coping", a spatial feature amount corresponding to the "passive coping", and a spatial feature amount corresponding to the "no coping", and
the stress-coping mode is determined to be a mode showing one of "active coping", "passive coping", and "no coping" based on the biological information and each spatial feature amount stored in the determination feature amount storage unit.
8. The pressure responding mode determining system according to claim 7,
the feature amount stored in the feature amount storage unit for determination is a feature amount extracted by a machine learning unit,
the machine learning unit includes:
a learning data storage unit for storing a plurality of learning face images labeled with labels corresponding to "active coping", "passive coping", and "no coping", respectively;
a feature value extraction unit that extracts a spatial feature value of the face image from the face image for learning using a learned model;
and a feature amount learning unit configured to change a network parameter of the learned model so that the spatial feature amount obtained by the feature amount extraction unit is extracted with high accuracy, based on a relationship between an extraction result obtained by the feature amount extraction unit and a label attached to the face image for learning as an extraction target.
9. The pressure coping manner determining system according to claim 7 or 8,
the spatial feature quantity is a fractal dimension calculated based on a face image of a subject.
10. A program for causing a computer to function as a means for determining a stress-coping mode of a subject, the program comprising:
a determination feature amount storage step of storing a spatial feature amount corresponding to the "active coping", a spatial feature amount corresponding to the "passive coping", and a spatial feature amount corresponding to the "no coping";
a determination step of determining whether the stress-coping mode of the subject is a mode showing any one of "active coping", "passive coping", and "no coping" based on the face image of the subject and each spatial feature amount stored in the determination feature amount storage step,
the coping mode being determined by hemodynamic parameters.
11. The program of claim 10, having:
a learning data storage step of storing a plurality of learning face images labeled with labels corresponding to "active coping", "passive coping", and "no coping", respectively;
a feature amount extraction step of extracting a spatial feature amount of the face image for learning using the learned model;
a learning step of changing a network parameter of the learned model so that the extraction accuracy of the feature amount obtained by the feature amount extraction step is high, based on a relationship between the extraction result obtained by the feature amount extraction step and a label attached to the face image for learning as an extraction target,
the determination feature storage step is a step of storing the spatial feature extracted in the feature extraction step.
12. The program according to claim 10 or 11,
the spatial feature quantity is a fractal dimension calculated based on a face image of a subject.
13. A method for determining a pressure response method is characterized by comprising:
a biological information acquisition step of acquiring biological information of a subject in a non-contact state;
a determination step of determining a stress countermeasure method of the subject based on the biological information and a predetermined response pattern,
the response mode is determined by hemodynamic parameters.
14. A learning device is characterized by comprising:
a learning data storage unit that stores a plurality of learning face images labeled with labels corresponding to response patterns specified by hemodynamic parameters;
a feature value extraction unit that extracts a spatial feature value of the face image of the subject from the face image for learning using a learned model;
and a feature amount learning unit configured to change a network parameter of the learned model so that the spatial feature amount obtained by the feature amount extraction unit is extracted with high accuracy, based on a relationship between an extraction result obtained by the feature amount extraction unit and a label attached to the face image for learning as an extraction target.
15. The learning apparatus of claim 14,
the spatial feature quantity is a fractal dimension calculated based on a face image of a subject.
16. A learning method is characterized by comprising:
a learning data storage step of storing a plurality of learning face images labeled with labels corresponding to response patterns determined by hemodynamic parameters;
a feature amount extraction step of extracting a spatial feature amount of a face image of a subject from the face image for learning by using a learned model;
and a feature amount learning step of changing a network parameter of the learned model so that the spatial feature amount obtained in the feature amount extraction step is extracted with high accuracy, based on a relationship between the extraction result obtained in the feature amount extraction step and a label attached to the face image for learning as an extraction target.
17. The learning method according to claim 16,
the spatial feature quantity is a fractal dimension calculated based on a face image of a subject.
18. A program for causing a computer to function as a mechanism for learning a spatial feature amount of a face image of a subject, comprising:
a learning data storage step of storing a plurality of learning face images labeled with labels corresponding to response patterns determined by hemodynamic parameters;
a feature amount extraction step of extracting a spatial feature amount of a face image of a subject from the face image for learning by using a learned model;
and a feature amount learning step of changing a network parameter of the learned model so that the spatial feature amount obtained in the feature amount extraction step is extracted with high accuracy, based on a relationship between the extraction result obtained in the feature amount extraction step and a label attached to the face image for learning as an extraction target.
19. The program according to claim 18,
the spatial feature quantity is a fractal dimension calculated based on a face image of a subject.
20. A learning completion model is characterized in that,
the learned model is generated by using a plurality of learning face images labeled with labels corresponding to response patterns determined by hemodynamic parameters as training data and performing machine learning on spatial feature values of the face image of the subject.
21. The learned model of claim 20,
the spatial feature quantity is a fractal dimension calculated based on a face image of a subject.
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2019-158134 | 2019-08-30 | ||
| JP2019158134 | 2019-08-30 | ||
| PCT/JP2020/032763 WO2021040025A1 (en) | 2019-08-30 | 2020-08-28 | Stress management mode determination system, stress management mode determination method, learning device, learning method, program, and trained model |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114466618A true CN114466618A (en) | 2022-05-10 |
| CN114466618B CN114466618B (en) | 2024-11-22 |
Family
ID=74847828
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202080068209.5A Active CN114466618B (en) | 2019-08-30 | 2020-08-28 | Stress coping style determination system and method, learning device and method, program and learning completion model |
Country Status (2)
| Country | Link |
|---|---|
| JP (1) | JP6896925B2 (en) |
| CN (1) | CN114466618B (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114668396B (en) * | 2022-02-18 | 2024-08-13 | 刘梅颜 | Method and automated device for applying, intervening and recording human psychological stress |
| KR20250149793A (en) * | 2023-03-29 | 2025-10-16 | 닛토덴코 가부시키가이샤 | Psychological state information acquisition device and robot |
| CN119361102B (en) * | 2024-09-29 | 2025-07-04 | 广州医科大学附属妇女儿童医疗中心 | Simulation teaching and management system based on radiology department real medical records medical image files |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH0654836A (en) * | 1991-07-30 | 1994-03-01 | Muneaki Mizote | Method and device for inferring psychorogical variation of person |
| JP2007068620A (en) * | 2005-09-05 | 2007-03-22 | Konica Minolta Holdings Inc | Psychological condition measuring apparatus |
-
2020
- 2020-08-28 JP JP2020144666A patent/JP6896925B2/en active Active
- 2020-08-28 CN CN202080068209.5A patent/CN114466618B/en active Active
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH0654836A (en) * | 1991-07-30 | 1994-03-01 | Muneaki Mizote | Method and device for inferring psychorogical variation of person |
| JP2007068620A (en) * | 2005-09-05 | 2007-03-22 | Konica Minolta Holdings Inc | Psychological condition measuring apparatus |
Non-Patent Citations (3)
| Title |
|---|
| AKIO NOZAWA 等: "Estimation Method of Information Understanding in Communication by Nasal Skin Thermogram", 《IEEJ TRANS.FM》, vol. 126, no. 9, 31 December 2006 (2006-12-31), pages 909 - 915, XP055796194 * |
| HIROTOSHI ASANO 等: "Stress Evaluation while Prolonged Driving Operation Using the Face Skin Temperature", 《SICE: SOCIETY OF INSTRUMENT AND CONTROL ENGINEERS》, vol. 47, no. 1, 31 January 2011 (2011-01-31), pages 2 - 7, XP055796192, DOI: 10.9746/sicetr.47.2 * |
| TAKUYA MENNUKI等,: "Estimation of Mode of Viewing TV and Preference of TV Contents by Autonomic Nervous System Index", 《INSTRUMENT OF ELECTRICAL ENGINEERS OF JAPAN》, vol. 134, no. 10, 29 October 2013 (2013-10-29), pages 1551 - 1556 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN114466618B (en) | 2024-11-22 |
| JP6896925B2 (en) | 2021-06-30 |
| JP2021037287A (en) | 2021-03-11 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Bota et al. | A review, current challenges, and future possibilities on emotion recognition using machine learning and physiological signals | |
| Schmidt et al. | Wearable-based affect recognition—A review | |
| Greco et al. | Advances in electrodermal activity processing with applications for mental health | |
| Alarcao et al. | Emotions recognition using EEG signals: A survey | |
| Petrantonakis et al. | Emotion recognition from brain signals using hybrid adaptive filtering and higher order crossings analysis | |
| López-Gil et al. | Method for improving EEG based emotion recognition by combining it with synchronized biometric and eye tracking technologies in a non-invasive and low cost way | |
| Colomer Granero et al. | A comparison of physiological signal analysis techniques and classifiers for automatic emotional evaluation of audiovisual contents | |
| Soleymani et al. | Continuous emotion detection in response to music videos | |
| Moon et al. | Implicit analysis of perceptual multimedia experience based on physiological response: A review | |
| Grandchamp et al. | Stability of ICA decomposition across within-subject EEG datasets | |
| Gaskin et al. | Using wearable devices for non-invasive, inexpensive physiological data collection | |
| CN114466618B (en) | Stress coping style determination system and method, learning device and method, program and learning completion model | |
| Hamzah et al. | EEG‐Based Emotion Recognition Datasets for Virtual Environments: A Survey | |
| Kusano et al. | Stress prediction from head motion | |
| Cheng et al. | Enhancing positive emotions through interactive virtual reality experiences: An eeg-based investigation | |
| Jin et al. | Brain-metaverse interaction for anxiety regulation | |
| Nia et al. | FEAD: Introduction to the fNIRS-EEG affective database-video stimuli | |
| Nagasawa et al. | Continuous estimation of emotional change using multimodal responses from remotely measured biological information | |
| JP7767763B2 (en) | Biological information processing device and biological information processing system | |
| Ivonin et al. | Automatic recognition of the unconscious reactions from physiological signals | |
| CN112613364A (en) | Target object determination method, target object determination system, storage medium, and electronic device | |
| US12201446B2 (en) | Stress coping style determination system, stress coping style determination method, learning device, learning method, program, and learned model | |
| US20240382125A1 (en) | Information processing system, information processing method and computer program product | |
| Koelstra | Affective and Implicit Tagging using Facial Expressions and Electroencephalography. | |
| Arpaia et al. | A wearable brain-computer interface to play an endless runner game by self-paced motor imagery |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||