WO2025186299A1 - System and method - Google Patents
System and method
- Publication number
- WO2025186299A1 (PCT/EP2025/055940)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- subject
- sensory media
- data
- emotional response
- input
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/369—Electroencephalography [EEG]
- A61B5/377—Electroencephalography [EEG] using evoked responses
- A61B5/378—Visual stimuli
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/20—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/30—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/40—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for data related to laboratory analysis, e.g. patient specimen analysis
Definitions
- the present disclosure relates to computer-implemented methods and systems for encoding a predicted emotional reaction of a user to input sensory media sensorially experienced by the user, and for predicting an emotional reaction of a user to input sensory media sensorially experienced by the user.
- Sensory media may refer to media content that engages senses beyond just sight and sound.
- Traditional media primarily rely on sight and sound, but sensory media may incorporate touch, smell, and sometimes even taste into the experience.
- This can include technologies like virtual reality (VR) and augmented reality (AR), which aim to immerse users in a multi-sensory environment.
- VR headsets can provide visual and auditory experiences while also simulating touch through haptic feedback devices.
- the term sensory media is used to refer to media content that engages any one of the human senses, including, but not limited to, sight and sound.
- the present disclosure aims to at least partially solve the above problem.
- a computer system for encoding a predicted emotional reaction of a subject to sensory media to be sensorially experienced by the subject comprising: an emotion prediction unit configured to receive input data corresponding to sensory media to be sensorially experienced by a subject, to execute a first computer programme configured to determine a predicted emotional response of a subject to the input sensory media based on the input data and output emotional response data corresponding to the predicted emotional response; an anchor determining unit configured to determine at least one anchor within the sensory media to which the emotional response data is attributable, and output the at least one anchor; and an emotion encoding unit configured to generate an emotionally encoded representation of the input sensory media, whereby the emotionally encoded representation of the sensory media is encoded with data relating to the emotional response of the subject to the input sensory media, based on the input sensory media, the emotional response data and the at least one anchor.
- a computer-implemented method of encoding a predicted emotional reaction of a subject to sensory media to be sensorially experienced by the subject comprising: receiving input data corresponding to sensory media to be sensorially experienced by a subject; determining a predicted emotional response of a subject to the input sensory media based on the input data and outputting emotional response data corresponding to the predicted emotional response; determining at least one anchor within the sensory media to which the emotional response data is attributable, and outputting the at least one anchor; and generating an emotionally encoded representation of the input sensory media, whereby the emotionally encoded representation of the sensory media is encoded with data relating to the emotional response of the subject to the input sensory media, based on the input sensory media, the emotional response data and the at least one anchor.
- a computer system for predicting an emotional response of a subject to sensory media experienced by the subject comprising: an emotion prediction unit configured to receive input data corresponding to sensory media to be sensorially experienced by a subject, to execute a computer programme configured to determine a predicted emotional response of a subject to the input sensory media based on the input data and output emotional response data corresponding to the predicted emotional response.
- a computer-implemented method of predicting an emotional response of a subject to sensory media experienced by the subject comprising: receiving an input dataset comprising data corresponding to sensory media sensorially experienced by a subject; determining a predicted emotional response of a subject to the input sensory media based on the input data, and at least one anchor within the sensory media to which the emotional response data is attributable; and producing an output dataset comprising emotional response data corresponding to predicted emotional response together with the at least one anchor.
- a machine-learning model for predicting an emotional response of a subject to sensory media experienced by the subject comprising: an artificial neural network trained to determine the predicted emotional response of a subject to an input sensory media based on the input sensory media, having been trained based on training data comprising sensory media data relating to at least one training sensory media experienced by a real subject labelled with corresponding emotional response data relating to the real subject’s emotional reaction to the training sensory media.
- a computer-implemented method of training a machine-learning model for predicting an emotional response of a subject to sensory media experienced by the subject comprising: receiving an input training dataset comprising sensory media data relating to at least one training sensory media experienced by a real subject labelled with corresponding emotional response data relating to the real subject’s emotional reaction to the training sensory media, and training the machine learning algorithm to associate features of an input sensory media with emotional reactions.
- a computer-implemented method of generating a training dataset for a machine-learning model comprising: obtaining physiological response data relating to a physiological response of a real subject to sensory media experienced by the subject, and sensory media data relating to the sensory media experienced by the subject.
- a training dataset for use in the method of training a machine-learning model of the sixth aspect, comprising: physiological response data relating to a physiological response of a real subject to sensory media experienced by the subject, and sensory media data relating to the sensory media experienced by the subject.
- a use of the method of the fourth aspect for at least one of: generating an emotionally encoded representation of input sensory media encoded with the predicted emotional response of a subject to the sensory media; and modifying sensory media based on the predicted emotional response of a subject to the sensory media.
- a computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of any one of the preceding aspects.
- a data processing device comprising means for carrying out the method of any one of the preceding aspects.
- a computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the method of any one of the preceding aspects.
- the subject is a real or virtual subject.
- the first computer programme comprises a first machine learning model trained to determine the predicted emotional response of a subject to the input sensory media based on the input data.
- training data for training the first machine learning model further comprises sensory media data relating to training sensory media experienced by a real subject, labelled with corresponding emotional response data.
- the training emotional response data is based on at least one emotional response of a real subject to training sensory media.
- the emotional response is determined based on physiological data relating to a physiological response of the subject to the training sensory media.
- the emotional response data for training the first machine learning model is determined by a second computer programme configured to determine an emotional response based on input physiological data.
- the second computer programme comprises a second machine learning model trained to determine a predicted emotional response based on input physiological data.
- the second machine learning model is trained based on training data comprising physiological data relating to a physiological response of the subject to the training sensory media labelled with a corresponding emotional response.
- the second computer programme is configured to label the sensory media data with emotional response data by determining the emotional response data based on physiological data corresponding to the sensory media data.
- the physiological response data relates to at least brain activity.
- the physiological response data comprises EEG data.
- the physiological response data relates to at least one of eye movement, pupil dilation, heart rate, and sweating.
- the training data is obtained by a training data obtaining system comprising: a physiological data collection unit configured to collect physiological response data relating to a physiological response of a training subject to an input sensory media.
- the training data obtaining system further comprises a user interface unit configured to enable a user to sensorially experience the input sensory media.
- the training data on which the first machine learning model is trained additionally comprises subject data relating to characteristics of the training subject, such that the first machine learning model is configured to make the determination additionally based on input subject data relating to the subject.
- the emotionally encoded representation of the input sensory media is visually encoded with data relating to the emotional response of the subject to the input sensory media.
- the encoded data is configured to be visually decodable by a second user.
- the encoded data represents the emotional response of the subject to the input sensory media using variation in colour.
- the encoded data represents the emotional response of the subject to the input sensory media using a heat map.
- the emotionally encoded representation of the input sensory media comprises emotionally encoded sensory media that can be sensorially experienced by the subject.
- a user interface unit is configured to enable the user to sensorially experience the emotionally encoded sensory media.
- the emotionally encoded representation of the input sensory media comprises an image of the emotionally encoded sensory media.
- the emotional response data relates to at least one of a level of stress, a level of attentiveness, and a level of relaxation experienced by the subject.
- the sensory media comprises elements that are sensorially experienced at least one of visually, audibly, haptically, thermally, equilibrioceptively, nociceptively, olfactorially, and gustatorially.
- the anchor determining unit is configured to determine each anchor based on an interaction between the subject and the input sensory media.
- the interaction comprises at least one of an experienced location of the subject within the input sensory media, an experienced orientation of the subject within the input sensory media, a region of the subject’s sensory attention within the sensory media, and an experienced event within the sensory media.
- Fig. 1 shows an example operational system.
- Fig. 2 shows example emotionally encoded representations of a virtual environment.
- Fig. 3 shows further example emotionally encoded representations of a virtual environment.
- Fig. 4 shows an example training system for a machine learning model.
- Fig. 1 shows a first example system 100 according to the disclosure.
- the example system 100 may comprise a computer system 101 and a display system 102.
- the computer system 101 comprises an emotion prediction unit 103.
- the emotion prediction unit 103 is configured to receive input data corresponding to sensory media to be sensorially experienced by a subject and output emotional response data corresponding to a predicted emotional response.
- the display system 102 is configured to display an emotionally encoded representation of the input sensory media, whereby the emotionally encoded representation of the sensory media is encoded with the emotional response data.
- the sensory media may comprise elements that are sensorially experienced at least one of visually, audibly, haptically, thermally, equilibrioceptively, nociceptively, olfactorially, and gustatorially, for example.
- the sensory media may comprise visual and audible elements experienced by a subject through a combination of a visual display and a speaker.
- the visual display and/or the speaker may form part of a headset worn by a subject, e.g. a VR headset such as those from Oculus VR™.
- the sensory media may be, or form part of, a virtual environment in some examples.
- the emotional response data may relate to at least one of a level of stress, a level of attentiveness, and a level of relaxation experienced by the subject. Alternatively, or additionally, the emotional response data may relate to one or more of: happiness, sadness, anger and disgust. Alternatively, or additionally, the emotional response data may relate to one or more of: cognitive load, mental strain, cognitive fatigue, and cognitive relaxation. Preferably, the emotional response data relates to a numerical value or score relating to the intensity of one or more emotions, e.g. any of those identified above.
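- As a concrete illustration of such score-based emotional response data, the following minimal Python sketch shows one possible timestamped record. The field names and the 0.0-1.0 intensity range are assumptions for illustration only; the disclosure requires only a numerical value or score per emotion.

```python
from dataclasses import dataclass, field

@dataclass
class EmotionalResponseSample:
    """One timestamped sample of predicted emotional response data.

    The field names and the 0.0-1.0 score range are illustrative
    assumptions; the disclosure only calls for numerical values or
    scores for the intensity of one or more emotions.
    """
    timestamp: float  # seconds into the subject's media experience
    scores: dict = field(default_factory=dict)

sample = EmotionalResponseSample(
    timestamp=12.5,
    scores={"stress": 0.7, "attentiveness": 0.4, "relaxation": 0.1},
)
```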
- Fig. 2 shows different emotionally encoded representations of an input virtual environment, as the sensory media, when a subject moved through a path in the virtual environment from A to B, as shown in the left-hand part of the Figure.
- the numbers 1 to 4 in the Figure denote points along the path.
- the left-hand parts of the Figure show a heat map 200 of the subject’s emotional reaction at different locations in the virtual environment 300. In this example, the emotional response data is attributable to the subject’s location in the virtual environment.
- the top right part of the Figure is an example of a three-dimensional emotionally encoded representation, whereas the bottom right part of the Figure is an example of a two-dimensional emotionally encoded representation.
- Fig. 3 shows different emotionally encoded representations of an input virtual environment when a subject interacts with an object 400 in the virtual environment, as shown in the left-hand part of the Figure.
- the emotional response data is attributable to an object in the virtual environment with which the subject interacted.
- the object itself may be emotionally encoded 500, e.g. with a colour corresponding with the subject’s emotional reaction.
- a visual representation of the interaction may be provided together with emotional encoding 500 of the object 400.
- the input data 104 corresponding to the sensory media may comprise data relating to a subject’s experience of the sensory media.
- This data may comprise visual and/or audio data, for example.
- This data may be pre-recorded or obtained live, as the subject is experiencing the sensory media. In an example where pre-recorded input data is provided, this may be provided by means of any suitable data storage medium.
- the sensory media may be generated by a sensory media generating unit (not shown).
- the sensory media generating unit may be in data communication with the computer system 101 to provide the input data.
- the sensory media generating unit may provide data and/or instructions to a user interface unit for enabling a subject to sensorially experience an input sensory media.
- input data comprising data relating to a subject’s experience of sensory media may be generated based on an input sensory media.
- a subject’s experience of the sensory media may be simulated.
- movement of a virtual subject through an (e.g. virtual) environment, and/or interaction between a virtual subject and the environment may be simulated. Simulation may be performed with or without user input.
- the computer system 101 may additionally comprise an anchor determining unit (not shown) configured to determine at least one anchor within the input sensory media to which the emotional reaction data is attributable.
- the anchor determining unit may be configured to determine each anchor based on an interaction between the subject and the input sensory media.
- the interaction may comprise at least one of an experienced location of the subject within an (e.g. virtual) environment corresponding to the input sensory media, an experienced orientation of the subject within an (e.g. virtual) environment corresponding to the input sensory media, a region of the subject’s sensory attention within the sensory media, an experienced event within the sensory media, or a time in the sensory media.
- An experienced location may be a location in the sensory media, at which the subject experiences the sensory media. This may be represented by coordinates within a coordinate system of the sensory media, for example.
- An experienced orientation may be the orientation in the sensory media at which the subject experiences the sensory media. This may be represented by a direction within the coordinate system of the sensory media, for example.
- a region of the subject’s sensory attention may be a region of the sensory media that receives sensory attention from the subject. This may be visual sensory attention, for example based on the experienced location and/or orientation of the subject and/or eye tracking data to determine a region of the sensory media that the subject is looking at.
- sensory attention is not limited to visual attention. For example, if the subject interacts with the sensory media haptically, a region of the sensory media, such as a virtual object, that they experience touching may be a region of haptic sensory attention.
- An experienced event may be an event within the sensory media that is experienced by the subject.
- An experienced event may be any substantial change in the sensory media experienced by the subject.
- the experienced event may relate to any of the senses that the subject is able to experience the sensory media through, for example a visual event or an audio event.
- the event may be a planned narrative event within the sensory media, e.g. part of an unfolding story line.
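- To make the anchor types above concrete, the sketch below captures an anchor as a small record combining a time with optional spatial, attention, and event references. This is one assumed representation; none of the field names come from the disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Anchor:
    """One anchor within the sensory media (an illustrative, assumed
    representation). Any of the interaction types described above may
    populate it: an experienced location or orientation, a region of
    sensory attention, an experienced event, or a time in the media."""
    time: float                                               # seconds into the experience
    location: Optional[Tuple[float, float, float]] = None     # media coordinate system
    orientation: Optional[Tuple[float, float, float]] = None  # view direction vector
    attention_target: Optional[str] = None                    # e.g. ID of an object under gaze
    event: Optional[str] = None                               # e.g. ID of a narrative event
```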
- Anchors may be determined based on the input sensory media data.
- the input sensory media data may provide data relating to an experienced location and/or an experienced orientation.
- the sensory media data may include interaction data relating to subject interaction with the sensory media, for example if the subject is able to interact via a control means, data relating to the manner of control exercised by the subject may be used to determine an anchor.
- Control means may include one or more of movement sensors (e.g. cameras and associated movement recognition software), mechanical controllers (e.g. buttons, joysticks, haptic gloves), means for voice control (e.g. microphone and associated voice recognition software).
- Interaction data may also comprise gaze-tracking data relating to where the subject is looking in the sensory media at a given moment.
- Sensory media data may also include data relating to the sensory media itself, for example, events within the sensory media or objects within the sensory media.
- Such sensory media data may be provided by a processing unit configured to generate the sensory media or a simulation.
- the specific data used to determine the anchors may depend on the level of subject interaction permitted with the sensory media.
- the data for determining the anchors may be processed by a processing unit of the system to determine the anchors. This processing may be performed by a different processing unit to that which generates the sensory media. However, in some examples, these processing units may be the same processing units, or different units within the same processing device.
- the insights and inferences available from the sensory media data may differ depending on the data utilised.
- sensory media data relating to the subject’s movement through an (e.g. virtual) environment corresponding to the sensory media may be used to determine which regions of the sensory media have elicited an emotional response.
- the sensory media data relating to specific objects within an (e.g. virtual) environment corresponding to the sensory media, such as the subject’s experienced proximity to an object or sensory interaction with an object may be used to determine which objects have elicited an emotional response.
- the anchors may be parts of the sensory media with which emotional response data may be attributed.
- the anchors may be one or more voxels within a three-dimensional sensory media, or pixels of a two-dimensional representation of the sensory media. These voxels (or pixels) may be associated with a specific location within the sensory media and/or may be associated with a specific object within an (e.g. virtual) environment corresponding to the sensory media, for example.
- the anchor may be a fixed position within the sensory media, or it may move with an associated object. In the latter case, the anchor may not be in a fixed position within the sensory media.
- Anchors may also be associated with a specific time-frame within the subject’s experience of the sensory media.
- Data for determining anchors may be sampled at the same rate as data for determining emotional reactions. However, they may alternatively be sampled at different rates. If sampled at different rates, data sampled at the higher rate may be averaged over the interval of the lower sampling rate to provide correspondence. The sampling may be performed continuously for the period the subject experiences the sensory media.
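- As a sketch of that alignment step, data sampled at the higher rate can be block-averaged down to the lower rate so each averaged value corresponds to one lower-rate interval. The example below assumes uniform sampling and an integer rate ratio, both simplifications.

```python
import numpy as np

def align_to_lower_rate(high_rate_signal: np.ndarray, ratio: int) -> np.ndarray:
    """Average a signal sampled at `ratio` times the target rate down to
    the target rate (assumes uniform sampling and an integer ratio)."""
    n = (len(high_rate_signal) // ratio) * ratio  # drop any trailing partial block
    return high_rate_signal[:n].reshape(-1, ratio).mean(axis=1)

# e.g. emotion scores sampled at 256 Hz averaged down to anchors logged at 32 Hz
aligned = align_to_lower_rate(np.random.rand(2560), ratio=8)
assert aligned.shape == (320,)
```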
- the computer system 101 of the disclosure may further comprise an emotion encoding unit configured to generate an emotionally encoded representation of the input sensory media, whereby the emotionally encoded representation of the sensory media is encoded with the emotional response data, based on the input sensory media, the emotional response data and the at least one anchor.
- the emotionally encoded representation of the input sensory media may be visually encoded with data relating to the emotional response of the subject to the input sensory media.
- the encoded data may be configured to be visually decodable by a user of the system. The user may be different from the subject, or may be the subject.
- the encoded data may represent the emotional response of the user to the input sensory media using variation in colour, shade, tone or transparency. For example, the encoded data may represent the emotional response of the user to the input sensory media using a heat map, as shown in Fig. 2.
- the emotionally encoded representation of the input sensory media may comprise an image of an emotionally encoded sensory media.
- the emotionally encoded representation of the input sensory media may comprise a two-dimensional image of a three-dimensional (e.g. virtual) environment corresponding to the input sensory media. This image may be a top-down (plan) view of an (e.g. virtual) environment corresponding to the sensory media, for example, as shown in Fig. 2.
- the emotion prediction unit 103 is configured to execute a computer programme configured to determine a predicted emotional response of a subject to the input sensory media based on the input data, and output emotional response data corresponding to the predicted emotional response.
- the computer programme comprises a first machine learning model trained to determine the predicted emotional response of a subject to the input sensory media based on the input data.
- Fig. 4 shows an example training system 200 for training the machine learning model 201.
- the machine learning model 201 may be trained on training data comprising sensory media data 202 relating to at least one training sensory media experienced by a real subject, labelled with emotional response data 203. Accordingly, the machine learning model 201 learns (in a supervised manner) to associate particular sensory media experiences with particular emotional responses.
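- In outline this is ordinary supervised learning: feature vectors derived from the sensory media are paired with emotion labels or scores. The sketch below uses scikit-learn's MLPRegressor purely as an illustrative stand-in; the disclosure does not prescribe a particular library, architecture, or feature extraction, and the shapes here are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Placeholder data: each row of X is a feature vector extracted from a
# moment of sensory media; each row of y holds the emotion intensity
# scores provided by the labelling step.
X = np.random.rand(1000, 64)   # assumed media feature dimension
y = np.random.rand(1000, 3)    # e.g. [stress, attentiveness, relaxation]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500)
model.fit(X_train, y_train)    # learns media features -> emotion scores
print("held-out R^2:", model.score(X_test, y_test))
```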
- the emotional response data 203 may be based on at least one response of the real subject to the training sensory media.
- the emotional response may be determined based on physiological data 204 relating to a physiological response of the subject to the training sensory media.
- the emotional response data 203 may be determined by a second computer programme 205 configured to determine an emotional response based on input physiological data.
- the second computer programme may comprise a second machine learning model 205 trained to determine an emotional response based on input physiological data.
- Fig. 4 shows a subsystem 206 for training the second machine learning model 205.
- the second machine learning model may be trained based on training data comprising physiological data relating to a physiological response of a subject to training sensory media, labelled with a corresponding emotional response.
- the training data for the second machine learning algorithm may be obtained by measuring test subjects’ physiological responses to tests that elicit particular emotions. Accordingly, the machine learning algorithm 205 learns (in a supervised manner) to associate particular physiological responses to particular emotional responses.
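- A minimal sketch of that supervised association, again with scikit-learn as an assumed stand-in, might train a classifier from physiological feature vectors to elicited emotion labels:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder physiological features, e.g. per-window EEG band powers,
# heart rate, and galvanic skin response; labels come from the
# emotion-eliciting tests described above. All shapes are assumptions.
X_phys = np.random.rand(500, 12)
y_emotion = np.random.randint(0, 4, size=500)  # e.g. four elicited emotions

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
clf.fit(X_phys, y_emotion)     # learns physiology -> emotional response
predicted = clf.predict(X_phys[:5])
```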
- the tests may comprise real experiences of sensory media.
- emotional responses may be obtained.
- the emotional responses may be self-recorded by the subjects, e.g. through a questionnaire or similar, or obtained by observation by a human tester.
- the emotional responses may be determined based on alternative physiological responses, such as body language and/or facial expressions, by the human tester and/or by a computer programme.
- the physiological response data and the emotional response data may be consolidated together to form the training dataset.
- the training data may additionally comprise subject data relating to characteristics of the training subject, such that the first machine learning-based model is configured to make the determination additionally based on input subject data relating to the subject.
- the subject characteristics include one or more of: age, gender, sex, sexual orientation, culture, nationality, neurodivergence (such as Autism, ADHD, Dyslexia). These characteristics may influence the emotional and physiological responses of different subjects.
- the second machine learning model 205 may be an artificial neural network (ANN), such as a deep neural network (DNN) or multilayer perceptron (MLP).
- alternative methods for determining the emotional response based on physiological response data may be used.
- a rules-based model may be used instead of a machine learning model.
- Fig. 4 shows a subsystem 207 for obtaining training data for training the first machine learning model 201, including the sensory media data 202 and the physiological response data 204.
- the sensory media data 202 relates to sensory media experienced by a test subject and the physiological response data 204 relates to the physiological response of the test subject to that sensory media. Both sets of data may be time-series data matched in time.
- the sensory media data 202 and the physiological response data 204 may be consolidated together to form a precursor to the training dataset.
- the physiological response data 204 may then be converted to emotional response data 203 to form the training dataset.
- the consolidated data may be divided into different data samples for training the first machine learning model.
- time-series data may be segmented to have a predefined length in time, or segmented to have varying lengths (e.g. including single video frames).
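- As a sketch of that segmentation, time-matched streams can be cut into windows of a chosen length; the window and hop sizes below are assumed tuning parameters, not values from the disclosure.

```python
import numpy as np

def segment(samples: np.ndarray, window: int, hop: int) -> list:
    """Cut a time-series (first axis = time) into fixed-length training
    samples; hop < window yields overlapping segments."""
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, hop)]

# e.g. 10,000 time steps of 64-dimensional media features in 1-second
# windows at 250 Hz, with 50% overlap
media_segments = segment(np.random.rand(10000, 64), window=250, hop=125)
```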
- the training data may additionally comprise subject data relating to characteristics of the training subject, such that the first machine learning-based model is configured to make the determination additionally based on input subject data relating to the subject.
- the subject characteristics include one or more of: age, gender, sex, sexual orientation, culture, nationality. These characteristics may influence the emotional and physiological responses of different subjects.
- the first machine learning model 201 may be an artificial neural network (ANN) model, such as a deep neural network (DNN) or multilayer perceptron (MLP).
- the physiological data may relate to at least one of brain activity, eye movement, pupil dilation, heart rate, and sweating.
- the training system 200 may comprise a physiological data collection unit configured to collect physiological response data relating to a physiological response of a subject to sensory media.
- the physiological data collection unit may comprise corresponding sub-units to collect the respective data.
- the physiological data collection unit may comprise EEG sensors configured to sense brain activity.
- the physiological data collection unit may comprise a camera (e.g. visible or infrared light), and associated software, to track eye movement and/or pupil dilation.
- the physiological data collection unit may comprise a heart rate monitor (e.g. a Holter monitor).
- the physiological data collection unit may comprise galvanic skin response electrodes to collect data relating to sweating.
- An EEG sensor system may comprise the following, for example:
- Electrical activity measuring electrodes configured to be placed on or in proximity to the head with the purpose of receiving and transmitting electrical activity travelling through the scalp having originated from the brain.
- An amplifier for amplifying and/or converting analogue electrical signals from the sensor into a digital signal that can be processed by a processor.
- a signal transmitter that will send the data from the amplifier to the processor.
- An eye tracking system may comprise the following, for example:
- a visual or infrared light camera directed towards the eyes with the purpose of measuring the eye movement and pupil dilation changes of the subject.
- a receiver unit for the input of visual data, which can be translated into digital data.
- a transmission unit for the purpose of transmission of digital data to a processor.
- a decrease in Alpha pattern brainwaves may indicate that the sensory media has elicited a higher than normal level of attention from the subject, for example.
- Increase in pupil dilation may indicate that the subject is attracted towards an object within the sensory media.
- Galvanic skin response and heart rate may indicate emotional arousal strength.
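- By way of illustration, alpha-band power can be estimated from a single EEG channel with a standard power spectral density routine, and a sustained drop relative to a resting baseline read as heightened attention. The 8-12 Hz band, Welch's method, the sampling rate, and the threshold below are conventional, assumed choices rather than requirements of the disclosure.

```python
import numpy as np
from scipy.signal import welch

def alpha_band_power(eeg: np.ndarray, fs: float = 256.0) -> float:
    """Mean power in the 8-12 Hz alpha band of one EEG channel,
    estimated with Welch's method (fs and band edges assumed)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))
    band = (freqs >= 8.0) & (freqs <= 12.0)
    return float(np.mean(psd[band]))

baseline = alpha_band_power(np.random.randn(2560))   # resting recording
current = alpha_band_power(np.random.randn(2560))    # during the media
if current < 0.8 * baseline:  # 20% suppression threshold is illustrative
    print("alpha suppression: elevated attention likely")
```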
- the physiological response data may not be converted to emotion response data prior to training the first machine learning model 201.
- the machine learning model may be trained to predict a physiological response to sensory media and a second computer programme may determine emotional response data based on the output predicted physiological response data.
- the trained machine learning model 201 forms part of the emotion prediction unit 103 of the operational system 100. Predictions may be output to a data dashboard 105 to be viewed or otherwise used by a user. This may include the emotionally encoded representation of the sensory media, and may include additional forms of the data. These additional forms may include charts or graphs, for example, showing the changing level of emotion of a subject. The emotionally encoded representation and/or additional forms of data may be output in real-time (or near real-time), as a subject experiences the sensory media.
- the emotionally encoded representation of the input sensory media comprises emotionally encoded sensory media that can be sensorially experienced by the user, e.g. via a user interface unit.
- the emotionally encoded representation of the input sensory media may be a modified version of the input sensory media.
- the emotionally encoded representation of the input sensory media may be fed back to the subject or a user of the system 100 as it is generated.
- the emotionally encoded representation may be stored in a memory for viewing later.
- the emotionally encoded representation of the input sensory media may comprise a representation of the input sensory media integrated with the encoded emotional reaction data.
- the data relating to the encoding may be indistinguishable from the data relating to the representation of the input sensory media.
- the emotionally encoded representation of the input sensory media may comprise a representation of the input sensory media overlaid (or augmented) with the encoded emotional response data.
- the data relating to the encoding may be separate from the data relating to the representation of the input sensory media.
- the data relating to the encoding may be a two-dimensional or three-dimensional heatmap.
- Generating an emotionally encoded representation of the input sensory media may be preceded by combining emotional response data with the anchors. This may comprise mapping emotional response data to anchors, or vice versa.
- emotional response data for a specific time may be mapped to an anchor for the same time, i.e. relating to the subject’s interaction with the sensory media at the specific time.
- the emotional response data may then be determined and encoded as a heat map within encoded sensory media based on the anchors.
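- A minimal sketch of that mapping-and-encoding step follows: emotion samples are joined to anchors by nearest timestamp and accumulated onto a 2D grid that can be rendered as a heat map. The grid resolution, coordinate extent, and nearest-timestamp join are all illustrative assumptions.

```python
import numpy as np

def build_heat_map(anchors, emotion_samples, grid_size=(64, 64), extent=100.0):
    """Accumulate emotion intensities onto a 2D grid over the media's
    x/y plane. `anchors` are (time, x, y) tuples; `emotion_samples`
    are (time, intensity) tuples; averaging per cell gives the map."""
    grid = np.zeros(grid_size)
    counts = np.zeros(grid_size)
    times = np.array([t for t, _ in emotion_samples])
    values = np.array([v for _, v in emotion_samples])
    for t, x, y in anchors:
        v = values[np.argmin(np.abs(times - t))]  # nearest emotion sample in time
        i = min(int(x / extent * grid_size[0]), grid_size[0] - 1)
        j = min(int(y / extent * grid_size[1]), grid_size[1] - 1)
        grid[i, j] += v
        counts[i, j] += 1
    return np.divide(grid, counts, out=np.zeros_like(grid), where=counts > 0)

heat = build_heat_map([(0.0, 10.0, 20.0), (1.0, 15.0, 25.0)],
                      [(0.0, 0.4), (1.0, 0.9)])
```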
- the emotional response data may indicate a level of one or more emotional states including but not limited to those identified above, namely stress, attention, relaxation, happiness, sadness, anger, disgust, cognitive load, mental strain, cognitive fatigue, and cognitive relaxation. Colours, changes in colours, changes in colour tones and strengths of colours or changes in opaqueness may be used to visually represent these emotional states, e.g. in a heat map.
- the emotional states displayed may be configurable by a user of the system. User configurations may be provided as input via a configuration unit 106, shown in Fig. 1.
- a colour range or colour strength may be assigned to correspond with each emotional state.
- the specific colour may be configurable by the user. These colours may represent the strength, decrease, increase or other change; for example, a light blue may represent a lower attention measurement, whilst a dark blue may represent a measurement of high attention. If the user moved from the co-ordinate of 0,0,0 to 0,10,10 in a virtual environment and their attention levels were measured to have increased from low attention to high attention at an even rate between the two points, the areas around which they began movement, the area in which they moved, and the destination area may be coloured, starting with a light blue and demonstrating a gradual change to a dark blue across this path.
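- The worked example above (light blue at low attention grading evenly to dark blue at high attention along the path) could be produced by linear interpolation between two RGB endpoints, as in the sketch below; the endpoint colours stand in for the user-configurable choices just described.

```python
import numpy as np

LIGHT_BLUE = np.array([173, 216, 230])  # low attention (illustrative endpoint)
DARK_BLUE = np.array([0, 0, 139])       # high attention (illustrative endpoint)

def path_colours(attention_levels):
    """Map attention scores in [0, 1] to colours graded between the two
    endpoints, one colour per sampled point along the subject's path."""
    levels = np.clip(np.asarray(attention_levels, dtype=float), 0.0, 1.0)
    return (LIGHT_BLUE + np.outer(levels, DARK_BLUE - LIGHT_BLUE)).astype(int)

# even increase from low to high attention between (0,0,0) and (0,10,10)
print(path_colours(np.linspace(0.0, 1.0, 5)))
```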
- a colour overlay may be placed over the image of the environment. This overlay is generated by analysing the data for its inference of each metric strength, and then applying the appropriate colour range to the correct spatial co-ordinates within the visual representation of the environment.
- copies of the original files containing the representations of the 3D objects in the environment may be created and altered. These files may be located in or linked to the software generating the 3D environment. These files will follow the same 3D layout as the original file, however with the colours altered in the appropriate manner to display the aforementioned metrics.
- the files relating to or linked to these specific 3D objects may be copied, and altered colours will be applied in the same manner.
- the predicted emotional response output by the emotion prediction unit 103 may be used to alter the sensory media experienced by the user. This may occur in real-time as the subject is experiencing the sensory media in some cases. In other cases, the sensory media may be modified for future subjects to experience. In the example of a computer game, if a level of stress of the subject is determined to be too low or too high compared to a desired level, then the sensory media may be modified to include stress inducing features and/or remove stress reducing features, or remove stress inducing features and/or include stress reducing features, respectively. The same principle applies for other emotional reactions experienced by the subject and any or all senses that the sensory media relates to. Examples may include modifying media content, visual image colour, hue or brightness, audio pitch, volume or tempo, or haptic feedback strength.
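- As a sketch of this feedback loop, a predicted stress score can be compared against a target band and used to toggle stress-inducing or stress-reducing features. The band limits and the add/remove hooks on the media object are assumptions made for illustration.

```python
def adapt_media(predicted_stress: float, media,
                low: float = 0.3, high: float = 0.7) -> None:
    """Nudge the sensory media toward a target stress band. `media` is
    assumed to expose hooks for features tagged as stress-inducing or
    stress-reducing; the band limits are illustrative tuning values."""
    if predicted_stress < low:
        media.add_stress_inducing_features()
        media.remove_stress_reducing_features()
    elif predicted_stress > high:
        media.remove_stress_inducing_features()
        media.add_stress_reducing_features()
    # within the band: leave the media unchanged
```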
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Public Health (AREA)
- Medical Informatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Pathology (AREA)
- Primary Health Care (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Epidemiology (AREA)
- Psychiatry (AREA)
- Heart & Thoracic Surgery (AREA)
- Biophysics (AREA)
- Databases & Information Systems (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Artificial Intelligence (AREA)
- Psychology (AREA)
- Hospice & Palliative Care (AREA)
- Educational Technology (AREA)
- Developmental Disabilities (AREA)
- Social Psychology (AREA)
- Child & Adolescent Psychology (AREA)
- Evolutionary Computation (AREA)
- Fuzzy Systems (AREA)
- Mathematical Physics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physiology (AREA)
- Signal Processing (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
According to a first aspect, the invention relates to a computer system for encoding a predicted emotional reaction of a subject to sensory media to be sensorially experienced by the subject, comprising: an emotion prediction unit configured to receive input data corresponding to sensory media to be sensorially experienced by a subject, to execute a first computer programme configured to determine a predicted emotional response of a subject to the input sensory media based on the input data and to output emotional response data corresponding to the predicted emotional response; an anchor determining unit configured to determine at least one anchor within the sensory media to which the emotional response data is attributable, and to output the at least one anchor; and an emotion encoding unit configured to generate an emotionally encoded representation of the input sensory media, whereby the emotionally encoded representation of the sensory media is encoded with data relating to the emotional response of the subject to the input sensory media, based on the input sensory media, the emotional response data and the at least one anchor.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GBGB2403203.9A GB202403203D0 (en) | 2024-03-05 | 2024-03-05 | System and method |
| GB2403203.9 | 2024-03-05 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2025186299A1 (fr) | 2025-09-12 |
| WO2025186299A8 (fr) | 2025-10-02 |
Family
ID=90625085
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2025/055940 (WO2025186299A1, pending) | System and method | 2024-03-05 | 2025-03-05 |
Country Status (2)
| Country | Link |
|---|---|
| GB (1) | GB202403203D0 (fr) |
| WO (1) | WO2025186299A1 (fr) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR101197978B1 (ko) * | 2008-01-31 | 2012-11-05 | Sony Computer Entertainment America LLC | Laugh detector and system and method for tracking an emotional response to a media presentation |
| US20120290514A1 (en) * | 2011-05-11 | 2012-11-15 | Affectivon Ltd. | Methods for predicting affective response from stimuli |
| US9833184B2 (en) * | 2006-10-27 | 2017-12-05 | Adidas Ag | Identification of emotional states using physiological responses |
| US20190012599A1 (en) * | 2010-06-07 | 2019-01-10 | Affectiva, Inc. | Multimodal machine learning for emotion metrics |
| US20230334511A1 (en) * | 2022-04-19 | 2023-10-19 | Dell Products L.P. | Visual attention likelihood estimations for objects of a visual stimulus |
- 2024-03-05: GB application GBGB2403203.9A (GB202403203D0), status: ceased (not active)
- 2025-03-05: WO application PCT/EP2025/055940 (WO2025186299A1), status: pending (active)
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9833184B2 (en) * | 2006-10-27 | 2017-12-05 | Adidas Ag | Identification of emotional states using physiological responses |
| KR101197978B1 (ko) * | 2008-01-31 | 2012-11-05 | Sony Computer Entertainment America LLC | Laugh detector and system and method for tracking an emotional response to a media presentation |
| US20190012599A1 (en) * | 2010-06-07 | 2019-01-10 | Affectiva, Inc. | Multimodal machine learning for emotion metrics |
| US20120290514A1 (en) * | 2011-05-11 | 2012-11-15 | Affectivon Ltd. | Methods for predicting affective response from stimuli |
| US20230334511A1 (en) * | 2022-04-19 | 2023-10-19 | Dell Products L.P. | Visual attention likelihood estimations for objects of a visual stimulus |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2025186299A8 (fr) | 2025-10-02 |
| GB202403203D0 (en) | 2024-04-17 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Aranha et al. | Adapting software with affective computing: a systematic review | |
| Benssassi et al. | Wearable assistive technologies for autism: opportunities and challenges | |
| Nasoz et al. | Emotion recognition from physiological signals using wireless sensors for presence technologies | |
| Lisetti et al. | MAUI: a multimodal affective user interface | |
| US7128577B2 (en) | Method for providing data to be used by a therapist for analyzing a patient behavior in a virtual environment | |
| Saravanan et al. | Convolutional Neural Networks-based Real-time Gaze Analysis with IoT Integration in User Experience Design | |
| KR102233099B1 (ko) | 기계학습에 기반한 가상 현실 콘텐츠의 사이버 멀미도 예측 모델 생성 및 정량화 조절 장치 및 방법 | |
| US7972140B2 (en) | Method and apparatus for performing a behaviour analysis using a virtual environment | |
| JP2005237561A (ja) | 情報処理装置及び方法 | |
| Jyotsna et al. | PredictEYE: Personalized time series model for mental state prediction using eye tracking | |
| Bevilacqua et al. | Automated analysis of facial cues from videos as a potential method for differentiating stress and boredom of players in games | |
| WO2019086856A1 (fr) | Systèmes et procédés permettant de combiner et d'analyser des états humains | |
| Lamti et al. | When mental fatigue maybe characterized by Event Related Potential (P300) during virtual wheelchair navigation | |
| Makhataeva et al. | Augmented Reality for Cognitive Impairments | |
| Rincon et al. | Detecting emotions through non-invasive wearables | |
| US20210303070A1 (en) | Devices and headsets | |
| Fukuoka et al. | Sensory attenuation with a virtual robotic arm controlled using facial movements | |
| Burleson | Affective learning companions and the adoption of metacognitive strategies | |
| Somarathna et al. | Towards understanding player experience in virtual reality games through physiological computing | |
| WO2025186299A1 (fr) | Système et procédé | |
| Mavridou | Affective state recognition in Virtual Reality from electromyography and photoplethysmography using head-mounted wearable sensors. | |
| US20250383712A1 (en) | Emotion-based experience | |
| EP4581453A1 (fr) | Expérience basée sur l'émotion | |
| Küster et al. | What could a body tell a social robot that it does not know? | |
| Li et al. | Adapting to the user: A systematic review of personalized interaction in vr |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25710455; Country of ref document: EP; Kind code of ref document: A1 |