WO2025149652A1 - Methods and systems for determining a physiological response of the eye of a user
Methods and systems for determining a physiological response of the eye of a user
- Publication number
- WO2025149652A1 (PCT/EP2025/050598)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- pupil
- stimuli
- sequence
- sensory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/11—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for measuring interpupillary distance or diameter of pupils
- A61B3/112—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for measuring interpupillary distance or diameter of pupils for measuring diameter of pupils
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/0016—Operational features thereof
- A61B3/0025—Operational features thereof characterised by electronic signal processing, e.g. eye models
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/0016—Operational features thereof
- A61B3/0041—Operational features thereof characterised by display arrangements
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/02—Subjective types, i.e. testing apparatus requiring the active assistance of the patient
- A61B3/028—Subjective types, i.e. testing apparatus requiring the active assistance of the patient for testing visual acuity; for determination of refraction, e.g. phoropters
- A61B3/032—Devices for presenting test symbols or characters, e.g. test chart projectors
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/14—Arrangements specially adapted for eye photography
- A61B3/145—Arrangements specially adapted for eye photography by video means
Definitions
- the present disclosure relates to methods and systems for determining a physiological response of the eye of a user, the user being subject to defined sensory stimuli.
- LC-NE: locus coeruleus-noradrenergic
- the defined sensory stimuli include at least one incongruent sensory stimulus comprising an image of a person's face featuring a particular defined emotion, wherein the sensory stimulus further comprises an indication of an emotion, in particular a written indicator of an emotion, which does not match the emotion featured on the image of the face.
- the stimuli are, for example, optical or visual stimuli such as images (still or moving), acoustic stimuli, or olfactory stimuli.
- the stimuli which are presented or provided to the user during recording of the video may include a primarily sensory stimulus along with a cognitive input.
- the cognitive input may be presented as a text description accompanying the stimulus.
- the cognitive input may also be provided as an acoustic recording of a word, or an olfactory recording of a scent.
- the defined sequence of sensory stimuli includes stimulus pairs of congruent and/or incongruent images, in particular incongruent-incongruent pairs and congruent-incongruent pairs.
- the physiological response measure is generated using the difference between the pupil response in a subsequence of incongruent-incongruent vs. congruent-incongruent stimuli pairs. In particular, the pupil response during the second stimulus in each pair is used to generate the physiological response measure. This difference is directly linked to the excitability of the stress system of the brain and has been shown to be a good predictor of a positive vs. negative stressor response and of increased anxiety and/or depressive levels in the future.
- the physiological response measure is generated using the difference between an average pupil response for a plurality of first sub-sequences and an average pupil response for a plurality of second sub-sequences.
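As a minimal sketch of this comparison, assuming each pupil response has already been reduced to one scalar per stimulus pair (e.g., a baseline-corrected peak dilation during the second stimulus of the pair), the measure could be computed as follows. The function name and the example values are illustrative, not taken from the disclosure:

```python
import numpy as np

def response_measure(ii_responses, ci_responses):
    # Difference between the average pupil response to the second stimulus
    # of incongruent-incongruent (II) pairs and of congruent-incongruent
    # (CI) pairs, as sketched above. Inputs: one scalar response per pair.
    return float(np.mean(ii_responses) - np.mean(ci_responses))

# Hypothetical per-pair responses (illustrative values only):
ii = [0.42, 0.51, 0.47, 0.55]   # incongruent-incongruent pairs
ci = [0.31, 0.38, 0.35, 0.33]   # congruent-incongruent pairs
print(response_measure(ii, ci))  # larger value -> stronger II vs. CI contrast
```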
- determining the dilation level of the detected pupil(s) comprises generating a dilation level time-series using a plurality of frames of each video.
- the dilation level time-series includes a dilation value for every frame in the video, specifically every frame in which the pupil is detected, as some frames may include blinking.
- the method further comprises determining, by the processor, a stress resilience score for the user using the physiological response measure.
- the physiological response measure may be determined using one or more first stimulus pupil responses and one or more second stimulus pupil responses, in particular by comparing the one or more first stimulus pupil responses to the one or more second stimulus pupil responses.
- the stress resilience score is determined using the physiological response measure and a predictive model.
- the predictive model is configured to determine a stress resilience score using a physiological response measure.
- the predictive model is trained using a training dataset comprising information related to a plurality of study participants.
- the training dataset includes, for each study participant, a physiological response measure and an indicator of whether the study participant had, or developed, symptoms of anxiety or depression, e.g., during a prolonged period of professional stress (6 months).
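The disclosure does not specify a model class; as one possibility, a logistic regression over the physiological response measure could serve as the predictive model, with the stress resilience score taken as the predicted probability of not developing symptoms. Everything in this sketch, including the data and the labeling convention, is an assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training set: one physiological response measure per study
# participant, and a binary label indicating whether the participant
# developed anxiety/depression symptoms during the stressful period.
X = np.array([[0.12], [0.35], [0.08], [0.41], [0.27], [0.05]])
y = np.array([0, 1, 0, 1, 1, 0])

model = LogisticRegression().fit(X, y)

# Resilience score as the predicted probability of class 0 (no symptoms):
measure = np.array([[0.22]])
resilience_score = model.predict_proba(measure)[0, 0]
print(f"stress resilience score: {resilience_score:.2f}")
```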
- determining the dilation level of the detected pupil using the size of the pupil includes determining the size of the pupil using a number of pixels in the frame associated with the pupil.
- the size is determined by identifying a number of pixels in a collection of pixels having a prediction level above a defined threshold.
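A sketch of this pixel-counting approach, assuming the pupil detector outputs a per-pixel prediction (probability) map for each frame; the map, the function name, and the 0.5 threshold are assumptions:

```python
import numpy as np

def pupil_size_pixels(prediction_map, threshold=0.5):
    # Count the pixels whose pupil-prediction level exceeds the threshold;
    # the count approximates the pupil area in pixels.
    return int(np.count_nonzero(prediction_map > threshold))

area = pupil_size_pixels(np.random.rand(128, 128))  # stand-in prediction map
diameter = 2.0 * np.sqrt(area / np.pi)              # equivalent diameter from area = pi*(d/2)^2
```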
- the present disclosure also relates to a computer program product comprising computer program code configured to control a processor such that the processor performs at least one of the methods described herein, in particular the method of determining a physiological response of an eye of a user.
- Fig. 1 shows a diagram illustrating a user subject to sensory stimuli during an assessment, in particular a sequence of images, while having his or her face recorded;
- Fig. 2 shows a highly schematic diagram of a server computer for performing the method as described herein;
- Fig. 3 shows a highly schematic drawing of a user device in front of which the user may be during an assessment as described herein and which may perform one or more of the methods described herein;
- Fig. 4 shows a highly schematic system diagram of a user device connected to a server computer via the Internet;
- Fig. 5 shows a diagram illustrating a sequence of stimuli with an associated or corresponding sequence of videos;
- Fig. 6 shows a flow diagram illustrating a method for determining a physiological response of the eye of a user;
- Fig. 7 shows a flow diagram illustrating a method for determining a physiological response of the eye of a user, including additional steps which may be performed on the server computer and the user device;
- Fig. 8 shows a flow diagram illustrating a method for detecting the pupils in the eye and determining the dilation level in the eyes;
- Fig. 9 shows a schematic diagram illustrating graphically some of the steps for detecting the pupils in the eye and determining the dilation level in the eyes;
- Fig. 10 shows an example of a sub-sequence including two stimuli, specifically a congruent-incongruent pairing; and
- Fig. 11 shows two charts: the first chart shows an average pupil dilation for incongruent-incongruent trials and congruent-incongruent trials with respect to the time from stimulus onset, and the second chart shows the difference (delta) in pupil dilation between the congruent-incongruent trials and the incongruent-incongruent trials.
- Figure 1 shows a user 6 sitting in front of a user device 2 during an assessment.
- the assessment may be a psychological assessment, for example involving an emotional Stroop test.
- the user 6 is subject to defined sensory stimuli, which are provided and thus presented to the user 6 by the display 21 of the user device 2.
- the user device 2 may have a sensory stimulation module (which may include the display 21) configured to provide a sequence of defined sensory stimuli to the user 6.
- the physiological response of an eye of the user is determined, in particular by recording, using a camera 22 of the user device 2, the face of the user 6. Thereby, as disclosed herein, a physiological response measure of the user 6 may be determined.
- the user 6 may be provided with a human machine interface with which the user may provide input to the user device 2.
- the input may reflect the user’s evaluation of the sensory stimulus and/or may cause the sequence of defined sensory stimuli to advance.
- FIG. 2 shows a block diagram of a server computer 1 which includes several structural components.
- the server computer 1 comprises at least one processor 11 configured to perform one or more of the methods, steps, and/or functions as described herein.
- the server computer 1 further includes various components, such as a memory 12, a communication interface, and/or a human machine interface (HMI).
- the components of the server computer 1 are connected to each other via a data communication system, such that they can transmit and/or receive data.
- the term data communication system relates to a communication system that facilitates data communication between two components, devices, systems, or other entities, in particular of the server computer 1.
- the data communication system is wired and includes a wired connection, such as a cable and/or a system bus, and/or includes a wireless connection.
- the sensory stimulation module is configured to provide, to the user, for a given sensory modality (e.g., optical stimuli), a plurality of different stimuli.
- the sensory stimulation module may be directed to provide the plurality of different stimuli, by the processor, according to a defined sequence. Each item in the sequence may be provided to the user for a defined period of time, and/or the user may provide input, using the HMI, to direct the processor to advance to the next item in the sequence.
- a sequence of videos 4 is recorded of the user’s face during the assessment.
- Each video 41 in the sequence of videos 4 corresponds to a particular stimulus 31.
- the beginning of a particular video 41 may correspond in time to the onset of an associated stimulus 31 .
- the videos may have differing durations.
- the (individual) videos described herein as being in a sequence may therefore be considered equivalent to (individual) segments of a single video demarcated by the timestamps.
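For illustration, segmenting one continuous recording into such per-stimulus segments using onset timestamps might look like the following sketch (all names are hypothetical):

```python
def segment_by_onsets(frame_times, onsets):
    # Assign each frame to the stimulus whose onset precedes it, up to the
    # next onset; the last segment runs to the end of the recording.
    bounds = list(onsets) + [float("inf")]
    return [
        [i for i, t in enumerate(frame_times) if start <= t < end]
        for start, end in zip(bounds, bounds[1:])
    ]

frames = [i / 30.0 for i in range(360)]  # e.g., 12 s recorded at 30 FPS
print([len(s) for s in segment_by_onsets(frames, [0.0, 4.0, 8.0])])  # [120, 120, 120]
```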
- the method 100 is performed by a processor as described herein, for example a processor of a server computer or a processor of a user device. Two or more separate processors may perform this method in conjunction, for example by a first processor performing some steps and a second processor performing other steps.
- the method 100 may be triggered upon initiation or completion of an assessment.
- the sequence of recorded videos may also be received as a data stream, for example a live data-stream, such that the received sequence of videos may be analyzed substantially in real-time.
- the videos show the face of the user and are preferably recorded in a moderately lit environment, such that the face of the user is illuminated and the eyes are visible. Specifically, it is advantageous if the light levels in the environment are such that the pupils are moderately dilated, i.e. neither fully dilated nor fully constricted.
- in step S101, the processor detects at least one of the pupils in each of the videos in the sequence of videos.
- one or both of the pupils are detected in at least one frame in a particular video in the sequence.
- the pupil(s) detection provides, as a result, coordinates in the frame indicative of a location of the pupil(s) and/or coordinates in the frame corresponding to one or more other defined locations on the face near the pupil(s), for example, for a given eye, the eye itself or particular features of the eye, such as the iris, the sclera, corners of the eye, and/or the eyelid.
- the processor may provide, as an output of step S101, coordinates indicative of or related to a location of the pupil(s), for a plurality of frames of each video in the sequence.
- the processor further determines if the user has one or both eyes closed in a given frame of the video and, if so, does not detect the corresponding pupil (i.e., provides a null result for that pupil).
- the processor determines a dilation level of at least one pupil detected in each frame of each video, using the size of the pupil.
- the size of the pupil may be defined using one or more of the following size measures: an area of the pupil, a radius or diameter of the pupil, or a circumference of the pupil.
- the size measures may be extracted directly from each frame in which the pupil was detected, for example by detecting a substantially circular pupil/iris edge and using the pupil/iris edge to determine a number of pixels between tangents to the edge on opposing sides of the pupil, by counting a number of pixels inside the edge, and/or by determining a length of the edge.
- the dilation level is a measure or indicator of how dilated the pupil is, i.e. how large or open it is.
- the dilation level may be determined as a relative or absolute dilation level, using the size of the pupil.
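As one sketch of a relative dilation level, the per-frame pupil size could be normalized by its average over an initial baseline window; this baseline normalization is an assumption, since the disclosure leaves the exact definition open:

```python
import numpy as np

def relative_dilation(sizes, baseline_frames=30):
    # sizes: per-frame pupil size (any size measure, e.g. equivalent
    # diameter in pixels), with NaN where no pupil was detected (blinks).
    baseline = np.nanmean(sizes[:baseline_frames])  # nanmean skips blink frames
    return sizes / baseline

diameters = np.array([3.1, 3.2, np.nan, 3.6, 3.9])
print(relative_dilation(diameters, baseline_frames=2))
```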
- the first stimulus pupil response is related to a congruent-incongruent (stimulus) pairing and further stimulus pupil responses to pairings of the same type are used to determine an aggregate or average first stimulus pupil response.
- the second stimulus pupil response is related to an incongruent-incongruent (stimulus) pairing and further stimulus pupil responses to pairings of the same type are used to determine an aggregate or average second stimulus pupil response.
- 3 to 100 pupil responses to congruent-incongruent pairings may be used to determine an average first stimulus pupil response
- 3 to 100 pupil responses to incongruent-incongruent pairings may be used to determine an average second stimulus pupil response.
- the physiological response measure is then determined by comparing the average first stimulus pupil response and the average second stimulus pupil response.
- in a step S112, the sequence of sensory stimuli is provided to the user, using the user device 2.
- the processor of the user device 2 is configured to, using the sensory stimulation module, provide the sensory stimuli to the user.
- the sensory stimuli are provided in the defined sequence.
- the user device 2 and/or the sequence of sensory stimuli may be designed such that the sequence is automatically advanced, i.e. that the user device 2 goes from a particular sensory stimulus in the sequence to the next sensory stimulus without user input.
- the user device 2 and/or the sequence of sensory stimuli may be designed such that the sequence advances according to user input received in the user device 2 via the HMI.
- the HMI may include a keyboard or touchscreen, and the user may press a button on the keyboard or touchscreen to advance from a particular sensory stimulus to the next.
- the user may have to provide feedback on the provided sensory stimulus. For example, the user may be asked to identify or classify the provided sensory stimulus into one or more categories, and to provide user input accordingly, for example by pressing a particular button, etc.
- the user input may be recorded by the user device 2.
- a time-point of each user input may be stored, such that, in the embodiment where the sequence advances upon receiving user input, the time-points at which the sequence advanced from one sensory stimulus to the next can be reconstructed, in particular for segmenting the recorded video.
- the sequence advances automatically according to a pre-defined rhythm or timing-plan.
- the sequence may advance regularly, or there may be some variability which may be included in the timing-plan or added, for example, using a random delay period, such that the sequence advances unpredictably.
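A timing plan with such a random delay period might be generated as in this sketch; the base interval and jitter range are illustrative, not taken from the disclosure:

```python
import random

def timing_plan(n_stimuli, base_s=3.0, jitter_s=1.0):
    # Onset times with a fixed base interval plus a random delay, so that
    # the sequence advances unpredictably.
    onsets, t = [], 0.0
    for _ in range(n_stimuli):
        onsets.append(t)
        t += base_s + random.uniform(0.0, jitter_s)
    return onsets

print(timing_plan(5))
```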
- the provided sequence of sensory stimuli is a sequence of images, for example in the form of a slideshow.
- the images include faces of people expressing an emotion, such as happiness or fear.
- the emotions preferably comprise primary emotions, e.g. anger, sadness, fear, joy, interest, surprise, disgust, and/or shame.
- cognitive input in the form of a written description is provided.
- the written description for example in the form of a word, may agree with the emotion expressed by the person, in which case the stimulus would be considered a congruent stimulus (e.g., the face expresses happiness, and the cognitive input is the word “HAPPY”).
- the written description may disagree with the emotion expressed by the person, in which case the stimulus would be considered an incongruent stimulus (e.g., the face expresses happiness, and the cognitive input is the word “FEAR”).
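A simple data structure capturing this congruent/incongruent distinction could look as follows; the class and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class EmotionalStroopStimulus:
    # One item of the stimulus sequence: a face image expressing an emotion,
    # presented with a written emotion word as the cognitive input.
    face_image_path: str   # hypothetical asset path
    face_emotion: str      # emotion expressed by the face, e.g. "happiness"
    word: str              # written indicator shown to the user, e.g. "FEAR"
    word_emotion: str      # emotion named by the word

    @property
    def congruent(self):
        return self.face_emotion == self.word_emotion

s = EmotionalStroopStimulus("faces/p01_happy.png", "happiness", "FEAR", "fear")
print(s.congruent)  # False -> an incongruent stimulus
```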
- in step S113, which is performed simultaneously with step S112 (i.e., in an overlapping time-frame), the user device 2 records a video of the user's face.
- the video may be stored, at least temporarily, on the user device 2.
- the video may also be streamed to the server computer.
- the video is recorded using the camera 22 of the user device 2, for example a webcam directed at the user’s face.
- the environment in which the assessment takes place is preferably lit such that the user's face is evenly illuminated and the pupils of the user are neither fully constricted nor fully dilated.
- the video is preferably recorded at a high resolution of at least 720p, preferably 1080p, and at a frame rate between 15 FPS and 100 FPS, for example 30 FPS.
- one video is recorded for the entire duration of the assessment, or individual videos are recorded, one for each provided sensory stimulus or for a group of stimuli (e.g., 6 to 20 stimuli).
- in step S114, the user device 2 transmits the recorded video(s), either as one or more data files or as a data stream, in one or more transmissions T2, to the server computer 1 for evaluation.
- in step S115, the server computer 1 receives the recorded video(s) and stores them in the memory for processing.
- in step S116, the server computer 1 processes the videos, according to the steps of method 100 described above with reference to Fig. 6, to determine the physiological response measure.
- in step S117, which is optional, the server computer 1 transmits, in a transmission T3, the physiological response measure determined in step S116 to the user device 2.
- in step S118, the user device 2 receives the physiological response measure.
- the user device 2 may present the physiological response measure to the user, for example on a display of the user device 2.
- Figure 8 shows a flow diagram illustrating a method 120 for detecting one or both pupils in the eyes of the user in at least one frame of each video, and determining the dilation level of the pupil in a frame of a video.
- the method 120 may be performed by a processor, depending on the embodiment, for example the processor of the server computer or the processor of the user device.
- the method 120 comprises a number of steps S120 to S124 which provide for an exemplary implementation of steps S101 and S102 of method 100.
- steps S120 to S123 provide an exemplary implementation of step S101 of method 100
- steps S121 to S124 provide an exemplary implementation of step S102 of method 100.
- the method 120 may be performed for a frame of at least one video in the sequence of videos.
- the method 120 may be performed for all frames of a subset of videos in the sequence of videos, in particular the subset of the videos associated with congruent-incongruent stimulus pairings and incongruent-incongruent stimulus pairings.
- the method 120 may additionally or alternatively be performed for all frames of all videos.
- the frame of the video is transformed into a grayscale image.
- the frame of the video may also be resized to a lower resolution, for example 640x360 pixels.
- the grayscale transformation and lowering of the resolution reduces the computational requirements for subsequent processing steps, increasing throughput speed and efficiency.
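A sketch of this preprocessing using OpenCV (the library choice is an assumption; the 640x360 target matches the example resolution above):

```python
import cv2

def preprocess_frame(frame):
    # Grayscale conversion followed by downscaling to reduce the
    # computational load of the subsequent detection steps.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, (640, 360), interpolation=cv2.INTER_AREA)
```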
- in step S120, the coordinates of the eye corners (i.e., the left and right corner of each eye) are determined in the frame of the video.
- the coordinates of the eye corners may be determined using an image processing module configured to determine eye corners in an image of the face of a person.
- the image processing module may be stored in the memory and comprises computer program code and data such that the processor performs the image processing functions described herein.
- the activation map may have different dimensions than the frame of the video.
- the image processing module may be configured to provide, as an output, a down sampled activation map, in particular having a resolution of 320x180 (i.e., down sampled by a factor of two in each dimension relative to the 640x360 frame). This has the benefit of increasing throughput speed and efficiency.
- the down sampled activation map may be used to identify the corresponding eye coordinates in the frame, for example by upscaling the activation maps again to the original size and identifying the pixels in the frame with corresponding coordinates.
- the image processing module may apply Gaussian filtering on each of the activation maps.
- the step S120 may be performed for a sequence of frames, in particular all the frames in the video(s).
- the determination of the coordinates of the corners of the eye can then be further improved by application of a filter over the sequence of frames, thereby smoothing over any sudden changes in the coordinates of the eye corners.
- for example, a Savitzky-Golay first-order polynomial filter may be applied over the sequence of frames.
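With SciPy, such smoothing of the per-frame eye-corner coordinates might look like this sketch; the window length is an assumption and the coordinate series here is synthetic:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic per-frame x/y coordinates of one eye corner, shape (n_frames, 2):
coords = np.cumsum(np.random.randn(300, 2) * 0.2, axis=0) + 100.0

# First-order (linear) Savitzky-Golay filter along the time axis smooths
# sudden jumps in the detected coordinates.
smoothed = savgol_filter(coords, window_length=15, polyorder=1, axis=0)
```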
- a cropped frame of each eye is generated using the original frame of the video and the coordinates of the corners of the eyes.
- two cropped frames are generated, one for each eye, the eye being preferably completely encompassed by the cropped frame.
- the edges of the cropped frames are defined using the coordinates of the eyes, such that each cropped frame includes a particular eye.
- the cropped frame may be square, such that the distance between the eye corners defines the dimensions of the square.
- the cropped frames may be resized, in particular up or down sampled.
- the crops are all resized to a standard size, for example 128x128 pixels, for ease of further processing.
- the crops may be transformed to grayscale.
- such cropped frames are generated for a sequence of frames, preferably all the frames in the video(s).
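A sketch of such a crop, sized by the horizontal distance between the detected corners of one eye and resized to the standard 128x128 resolution; the margin factor and the boundary handling are simplifications:

```python
import cv2

def crop_eye(frame, left_corner, right_corner, out_size=128):
    # Square crop centered between the eye corners; the corner distance
    # (plus a small assumed margin) defines the side length of the square.
    (x1, y1), (x2, y2) = left_corner, right_corner
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    half = max(abs(x2 - x1), 2) / 2.0 * 1.2
    y0, y3 = int(cy - half), int(cy + half)
    x0, x3 = int(cx - half), int(cx + half)
    crop = frame[max(y0, 0):y3, max(x0, 0):x3]
    return cv2.resize(crop, (out_size, out_size), interpolation=cv2.INTER_AREA)
```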
- This step has the benefit, over at least some prior art methods, that the location of the eyes does not need to be defined manually. Additionally, this step is more stable over time, such that a movement of the user's face does not lead to losing track of the eye.
- the right chart shows the difference (delta) in the dilation level between the congruent-incongruent stimuli pairs and the incongruent-incongruent stimuli pairs.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Medical Informatics (AREA)
- Surgery (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Physics & Mathematics (AREA)
- Molecular Biology (AREA)
- Ophthalmology & Optometry (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The present disclosure relates to methods and systems for determining a physiological response of an eye of a user (6), the user (6) being subject to defined sensory stimuli. The method comprises receiving a sequence of recorded videos of a face of a user (6), the user (6) being subject to an associated defined sequence of sensory stimuli designed to trigger specific physiological reactions. The method comprises detecting one or both pupils in the eyes of the user (6), determining a dilation level of the detected pupil(s) using a size of the pupil(s), determining a pupil response using one or more dilation levels, and generating, by the processor, a physiological response measure for the user (6) using a first stimulus pupil response, indicating the pupil response to a first sensory stimulus, and a second stimulus pupil response, indicating the pupil response to a second sensory stimulus.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CH252024 | 2024-01-10 | ||
| CHCH000025/2024 | 2024-01-10 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025149652A1 (fr) | 2025-07-17 |
Family
ID=89663481
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2025/050598 (published as WO2025149652A1, pending) | Methods and systems for determining a physiological response of the eye of a user | 2024-01-10 | 2025-01-10 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025149652A1 (fr) |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2014179558A1 (fr) * | 2013-05-01 | 2014-11-06 | Musc Foundation For Research Development | Surveillance d'un état fonctionnel neurologique |
| WO2015131067A1 (fr) * | 2014-02-28 | 2015-09-03 | Board Of Regents, The University Of Texas System | Système de détection d'une lésion cérébrale traumatique à l'aide d'analyses de mouvements oculomoteurs |
| US20160262611A1 (en) * | 2013-10-30 | 2016-09-15 | Tel HaShomer Medical Research Infrastructure and S ervices Ltd. | Pupillometers and systems and methods for using a pupillometer |
| US20180180891A1 (en) * | 2016-12-23 | 2018-06-28 | Samsung Electronics Co., Ltd. | Electronic device and method of controlling the same |
| US20190167095A1 (en) * | 2013-01-25 | 2019-06-06 | Wesley W.O. Krueger | Ocular-performance-based head impact measurement applied to rotationally-centered impact mitigation systems and methods |
| US20190290118A1 (en) * | 2018-03-26 | 2019-09-26 | Samsung Electronics Co., Ltd. | Electronic device for monitoring health of eyes of user and method for operating the same |
| US20230052100A1 (en) * | 2020-01-13 | 2023-02-16 | Biotrillion, Inc. | Systems And Methods For Optical Evaluation Of Pupillary Psychosensory Responses |
Non-Patent Citations (5)
| Title |
|---|
| GRUESCHOW, MARCUS ET AL.: "Real-world stress resilience is associated with the responsivity of the locus coeruleus", NATURE COMMUNICATIONS, vol. 12, no. 1, 2021, pages 2275 |
| HADI ABDI KHOJASTEH ET AL: "An Intelligent Safety System for Human-Centered Semi-Autonomous Vehicles", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 10 December 2018 (2018-12-10), XP081022847 * |
| NAHVI ROXANNA J ET AL: "Transcriptome profiles associated with resilience and susceptibility to single prolonged stress in the locus coeruleus and nucleus accumbens in male sprague-dawley rats", BEHAVIOURAL BRAIN RESEARCH, ELSEVIER, AMSTERDAM, NL, vol. 439, 17 October 2022 (2022-10-17), XP087236732, ISSN: 0166-4328, [retrieved on 20221017], DOI: 10.1016/J.BBR.2022.114162 * |
| SYLVIE NOËL ET AL: "Interpreting Human and Avatar Facial Expressions", 24 August 2009, Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, pages 98-110, ISBN: 978-3-540-74549-5, XP019126142 * |
| WARDHANI I K ET AL: "Investigating the relationship between background luminance and self-reported valence of auditory stimuli", ACTA PSYCHOLOGICA, NORTH-HOLLAND, AMSTERDAM, NL, vol. 224, 10 February 2022 (2022-02-10), XP086967431, ISSN: 0001-6918, [retrieved on 20220210], DOI: 10.1016/J.ACTPSY.2022.103532 * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP7165207B2 (ja) | Machine-learning-based diagnostic classifier | |
| Giakoumis et al. | Using activity-related behavioural features towards more effective automatic stress detection | |
| JP7111711B2 (ja) | Data processing method for predicting media content performance | |
| US20250228453A1 (en) | Method and system for measuring pupillary light reflex with a mobile phone | |
| KR102155309B1 (ko) | 인지 장애 예측 방법 및 이를 구현한 서버, 사용자 단말 및 어플리케이션 | |
| Ertugrul et al. | Crossing domains for au coding: Perspectives, approaches, and measures | |
| US20140315168A1 (en) | Facial expression measurement for assessment, monitoring, and treatment evaluation of affective and neurological disorders | |
| US20220067519A1 (en) | Neural network synthesis architecture using encoder-decoder models | |
| CN115334957 (zh) | System and method for optical evaluation of pupillary psychosensory responses | |
| US20130102854A1 (en) | Mental state evaluation learning for advertising | |
| JP2022535799 (ja) | Systems and methods for cognitive training and monitoring | |
| CN109830280 (zh) | Psychological auxiliary analysis method and apparatus, computer device, and storage medium | |
| US12204958B2 (en) | File system manipulation using machine learning | |
| Hossain et al. | Using temporal features of observers’ physiological measures to distinguish between genuine and fake smiles | |
| Geiger et al. | Computerized facial emotion expression recognition | |
| US20200074240A1 (en) | Method and Apparatus for Improving Limited Sensor Estimates Using Rich Sensors | |
| Chiarugi et al. | Facial Signs and Psycho-physical Status Estimation for Well-being Assessment. | |
| Ahmad et al. | CNN depression severity level estimation from upper body vs. face-only images | |
| KR20220158957 (ko) | System and method for predicting personal disposition using gaze tracking and real-time facial expression analysis | |
| WO2025149652A1 (fr) | Methods and systems for determining a physiological response of the eye of a user | |
| JP2025148923 (ja) | Cognitive function estimation device, cognitive function estimation method, and program | |
| US20240382125A1 (en) | Information processing system, information processing method and computer program product | |
| JP2024172693 (ja) | Happiness level estimation method, happiness level estimation program, happiness level estimation device, and model generation method | |
| CN109697413 (zh) | Head-pose-based personality analysis method, system, and storage medium | |
| KR20220100206 (ko) | Artificial-intelligence-based emotion recognition system and method for emotion prediction from contactless measurement data | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 25700143; Country of ref document: EP; Kind code of ref document: A1 |