WO2025030243A1 - System for generating a session for training executive function of a user and method of use thereof - Google Patents
- Publication number
- WO2025030243A1 (PCT/CA2024/051040)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- signal
- user
- primary
- generated
- rules
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B19/00—Teaching not covered by other main groups of this subclass
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
Definitions
- the present disclosure relates to cognitive function, and more particularly to improving executive function of a user.
- Executive function involves a set of cognitive processes that are necessary for cognitive control of behavior by selecting and monitoring behaviours for reaching one or more chosen goals.
- Executive function may be invoked to override prepotent responses that might otherwise be performed automatically following a stimulus in an external environment.
- a prepotent response is a response which takes priority over other potential responses following a given stimulus, i.e. an automated response to that stimulus.
- executive function is often solicited in one or more of the following situations: a situation that involves planning or decision-making; a situation that involves error correction or troubleshooting; a situation where responses are not well- rehearsed or that include new actions; dangerous or technically challenging situations; situations that require the overcoming of a strong habitual response or that requires resisting temptation.
- Executive function is believed to involve the prefrontal cortex.
- Executive function relies on three types of brain function: working memory, mental flexibility and self-control.
- Working memory involves a user’s ability to retain and use information over short periods.
- Mental flexibility enables a user to shift action in response to different demands or to apply different rules depending on the setting.
- Self-control enables a user to prioritize actions and resist impulsive actions and/or responses.
- AI provides the basic operation of the system to reduce the human workload, but human intervention is still required to quickly assess and correct the situation resulting from unexpected problems, or for tasks where human intervention is prioritized to achieve the human-AI pair’s performance.
- the artificial intelligence may control the vehicle during normal routes, but the user may be required to intervene if the artificial intelligence model malfunctions.
- the present disclosure relates to systems and methods for generating training sessions for improving cognitive executive control (also referred to herein as executive function) of a user.
- the system provides the user with a sequence of stimuli (referred to herein as signals) and requires the user to respond to the sequence of signals by providing an appropriate action in accordance with a set of rules associating the signals types with respective actions.
- the system tests memory by analyzing the user’s ability to remember the set of rules; mental flexibility by testing the user’s ability to correctly adjust their action in accordance with the sequence of signals; and self-control by repressing an action associated with a first signal that is changed upon receipt of a second signal following or accompanying the first signal, in accordance with the set of rules.
- the set of rules defines different actions to be performed as a function of a primary signal type selected from a plurality of primary signal types.
- the set of rules further defines a modulation of the action types associated with the first primary signal types in accordance with modulation signal types that are generated following or accompanying the primary signal type.
- the generation of a modulation signal trains the user’s impulse control and mental flexibility to inhibit the initial action corresponding to the primary signal and instead produce the action corresponding to the combination of the primary signal and the modulation signal, in accordance with the set of rules, the modulation signal generated after a time following, or with, the primary signal.
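The rule structure described above can be sketched as a small lookup: each primary signal type maps to a default action, and (primary, modulation) pairs override that default. This is an illustrative sketch only; the signal names and action names below are hypothetical and are not taken from the disclosure.

```python
# Hypothetical rule set: a primary signal type maps to a default action,
# and a (primary, modulation) pair overrides the default, forcing the
# user to inhibit the prepotent response.
RULES = {
    "green_circle": "press_left",
    "red_square": "press_right",
}

MODULATED_RULES = {
    ("green_circle", "cross_out"): "withhold",   # inhibit the initial action
    ("red_square", "arrow_flip"): "press_left",  # switch to the opposite action
}

def expected_action(primary, modulation=None):
    """Return the action the user is expected to perform per the rules."""
    if modulation is not None and (primary, modulation) in MODULATED_RULES:
        return MODULATED_RULES[(primary, modulation)]
    return RULES[primary]
```

When no modulation signal follows the primary signal, the default mapping applies, matching the case where the user moves forward with the action corresponding to the primary signal type alone.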
- the system may further generate a training program to train the user’s executive function by adding further levels of complexity to the set of rules and to the information provided to the user during the course of a training session.
- the user may be presented with mnemonics during the course of a training session.
- Mnemonics includes complementary information (such as words, symbols, drawings, smells, touches, sounds, etc.) that is presented to the user.
- Retention mnemonics are mnemonics that the user is to remember, while distractor mnemonics include information that the user is to cast aside or ignore during the course of the session.
- the user is asked to recall the retention mnemonics.
- the set of rules may include trigger signal types.
- When presented to the user, the trigger signal types cause the user to adapt the action performed as a function of the primary signal, optionally the modulation signal, and the trigger signal, in accordance with the set of rules.
- Trigger signal types may be defined in the set of rules to cause the user to accelerate their response actions, slow down their response actions, halt an action, etc.
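Trigger handling of the kind just described can be sketched as a modifier applied to an already-resolved action: the trigger changes how the action is executed (its tempo) or halts it, rather than replacing the rule mapping. The trigger names and modes below are hypothetical.

```python
# Hypothetical trigger types: each adjusts the execution of the resolved
# action (accelerate, slow down, or halt it), per the set of rules.
TRIGGERS = {
    "double_beep": "accelerate",
    "low_tone": "slow_down",
    "flash": "halt",
}

def apply_trigger(action, trigger=None):
    """Return (action, tempo) after applying an optional trigger signal."""
    if trigger is None:
        return (action, "normal")
    mode = TRIGGERS[trigger]
    if mode == "halt":
        return ("no_action", "halt")  # the trigger cancels the action outright
    return (action, mode)
```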
- a broad aspect is a method for generating a training session to train cognitive executive function of a user, comprising defining a set of rules comprising: a plurality of action types; a plurality of primary signal types, wherein each primary signal type of the plurality of primary signal types is associated with an action type of the plurality of action types; one or more modulator signal types, wherein each of the one or more modulator signal types is for causing a modulation in the action type of the plurality of action types that is associated with a signal type of the plurality of primary signal types; during a time period: periodically causing a generation of a primary signal with a primary signal type selected from the plurality of primary signal types, the generation of the primary signal for causing the user to prepare to initiate an action corresponding to an action type of the plurality of action types corresponding to the generated primary signal in accordance with the set of rules, resulting in the generation of a plurality of signals spread over the time period; for at least some of the generated primary signals, causing a generation of a modulator signal […]
- the set of rules further may include information to generate a response in accordance with one or more mnemonics
- the method may include, during the time period, causing periodically a generation of a mnemonic to a user for causing the user to act in accordance with the set of rules and retain information relating to the mnemonic; receiving one or more responses provided by the user corresponding to the generated positive mnemonics and the set of rules; wherein the measuring of the performance of the user is further based on comparing the received one or more responses with expected one or more responses based on the set of rules.
- the one or more mnemonics may be one or more of: a sound, an image; a vibration; an odor; and a taste.
- the one or more mnemonics may be words.
- the set of rules may include information to ignore a subset of one or more distractor mnemonics from the one or more mnemonics, wherein, during the time period, the causing periodically a generation of a mnemonic may include generating at least one of the one or more distractor mnemonics; and wherein the measuring of the performance of the user may be further based on comparing the received one or more responses with expected one or more responses based on the set of rules, including the generated at least one of the one or more distractor mnemonics that are to be ignored by the user.
- the primary signals and the modulator signals may be generated via a virtual reality headset.
- the primary signals and the modulator signals may be generated via an extended reality headset.
- the method may include adjusting a difficulty associated with the generated primary signals and the generated modulator signals in accordance with a performance determined from the comparing of the actions performed by the user to expected actions based on the generated primary signals, the generated modulator signals and the set of rules.
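A minimal sketch of the difficulty-adjustment step above: raise the level when measured performance exceeds a target, lower it when performance falls well below, and clamp to a valid range. The thresholds, step size, and bounds are illustrative assumptions, not values from the disclosure.

```python
def adjust_difficulty(level, accuracy, step=1, target=0.8, floor=1, ceiling=10):
    """Adjust the session difficulty from the user's measured accuracy.

    Accuracy at or above `target` raises the level by `step`; accuracy
    more than 0.2 below `target` lowers it; the result is clamped to
    [floor, ceiling]. All thresholds are illustrative.
    """
    if accuracy >= target:
        level += step
    elif accuracy < target - 0.2:
        level -= step
    return max(floor, min(ceiling, level))
```

Performance in the middle band leaves the level unchanged, avoiding oscillation between consecutive sessions.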
- one or more of the primary signal types of the plurality of signal types may include more than one sensory stimulus selected from an image, a word, a sound, a vibration, an odor and a taste.
- one or more of the modulator signal types of the one or more modulator signal types may include one or more sensory stimuli selected from an image, a sound, a vibration, an odor and a taste.
- the set of rules further may include one or more trigger signal types for adjusting the action type resulting from the combination of a primary signal with a primary signal type and a modulator signal with a modulator signal type, the method including, during the period of time, for at least some of the generated modulator signals, causing a generation of a trigger signal with a trigger signal type selected from the one or more trigger signal types, the generation of the trigger signal for indicating to the user to adapt the action type corresponding to the generated primary signal and the generated modulator signal in accordance with the set of rules, wherein the comparing the actions performed by the user to expected actions may be further based on the generated trigger signals.
- Another broad aspect is a system for generating a training session to train cognitive executive function of a user.
- the system includes a processor; and memory including program code that, when executed by the processor, causes the processor to: define a set of rules comprising: a plurality of action types; a plurality of primary signal types, wherein each primary signal type of the plurality of primary signal types is associated with an action type of the plurality of action types; one or more modulator signal types, wherein each of the one or more modulator signal types is for causing a modulation in the action type of the plurality of action types that is associated with a signal type of the plurality of primary signal types; during a time period: periodically cause a generation of a primary signal with a primary signal type selected from the plurality of primary signal types, the generation of the primary signal for causing the user to prepare to initiate an action corresponding to an action type of the plurality of action types corresponding to the generated primary signal in accordance with the set of rules, resulting in the generation of a plurality of signals spread over the time period […]
- the set of rules may include information to generate a response in accordance with one or more mnemonics
- the program code may further cause the processor to, during the time period, cause periodically a generation of a mnemonic to a user for causing the user to act in accordance with the set of rules and retain information relating to the mnemonic; receive one or more responses provided by the user corresponding to the generated positive mnemonics and the set of rules; wherein the measuring of the performance of the user may be further based on comparing the received one or more responses with expected one or more responses based on the set of rules.
- the one or more mnemonics is one or more of a sound; an image; a vibration; an odor; and a taste.
- the one or more mnemonics may be words.
- the set of rules may include information to ignore a subset of one or more distractor mnemonics from the one or more mnemonics, and wherein, during the time period, the causing periodically a generation of a mnemonic may include generating at least one of the one or more distractor mnemonics; and wherein the measuring of the performance of the user may be further based on comparing the received one or more responses with expected one or more responses based on the set of rules, including the generated at least one of the one or more distractor mnemonics that are to be ignored by the user.
- the primary signals and the modulator signals may be generated via a virtual reality headset.
- the primary signals and the modulator signals may be generated via an extended reality headset.
- the program code may further cause the processor to adjust a difficulty associated with the generated primary signals and the generated modulator signals in accordance with a performance determined from the comparing of the actions performed by the user to expected actions based on the generated primary signals, the generated modulator signals and the set of rules.
- one or more of the primary signal types of the plurality of signal types may include more than one sensory stimulus selected from an image, a word, a sound, a vibration, an odor and a taste.
- one or more of the modulator signal types of the one or more modulator signal types may include one or more sensory stimuli selected from an image, a sound, a vibration, an odor and a taste.
- the set of rules may include one or more trigger signal types for adjusting the action type resulting from the combination of a primary signal with a primary signal type and a modulator signal with a modulator signal type, the program code further causing the processor to: during the period of time, for at least some of the generated modulator signals, cause a generation of a trigger signal with a trigger signal type selected from the one or more trigger signal types, the generation of the trigger signal for indicating to the user to adapt the action type corresponding to the generated primary signal and the generated modulator signal in accordance with the set of rules, wherein the comparing the actions performed by the user to expected actions may be further based on the generated trigger signals.
- Another broad aspect is a non-transitory computer-readable medium having stored thereon program instructions for generating a training session to train cognitive executive function of a user, the program instructions executable by a processing unit for: defining a set of rules comprising: a plurality of action types; a plurality of primary signal types, wherein each primary signal type of the plurality of primary signal types is associated with an action type of the plurality of action types; one or more modulator signal types, wherein each of the one or more modulator signal types is for causing a modulation in the action type of the plurality of action types that is associated with a signal type of the plurality of primary signal types; during a time period: periodically causing a generation of a primary signal with a primary signal type selected from the plurality of primary signal types, the generation of the primary signal for causing the user to prepare to initiate an action corresponding to an action type of the plurality of action types corresponding to the generated primary signal in accordance with the set of rules, resulting in the generation of a plurality of signals spread over the time period […]
- the set of rules may further include information to generate a response in accordance with one or more mnemonics
- the program instructions may be further executable by the processing unit for: during the time period, causing periodically a generation of a mnemonic to a user for causing the user to act in accordance with the set of rules and retain information relating to the mnemonic; receiving one or more responses provided by the user corresponding to the generated positive mnemonics and the set of rules; wherein the measuring of the performance of the user may be further based on comparing the received one or more responses with expected one or more responses based on the set of rules.
- the one or more mnemonics may be one or more of: a sound; an image; a vibration; an odor; and a taste.
- the one or more mnemonics may be words.
- the set of rules further may include information to ignore a subset of one or more distractor mnemonics from the one or more mnemonics, and wherein, during the time period, the causing periodically a generation of a mnemonic may include generating at least one of the one or more distractor mnemonics; and wherein the measuring of the performance of the user may be further based on comparing the received one or more responses with expected one or more responses based on the set of rules, including the generated at least one of the one or more distractor mnemonics that are to be ignored by the user.
- the primary signals and the modulator signals may be generated via a virtual reality headset.
- the primary signals and the modulator signals may be generated via an extended reality headset.
- the program instructions may be further executable by the processing unit for adjusting a difficulty associated with the generated primary signals and the generated modulator signals in accordance with a performance determined from the comparing of the actions performed by the user to expected actions based on the generated primary signals, the generated modulator signals and the set of rules.
- one or more of the primary signal types of the plurality of signal types may include more than one sensory stimulus selected from an image, a word, a sound, a vibration, an odor and a taste.
- one or more of the modulator signal types of the one or more modulator signal types may include one or more sensory stimuli selected from an image, a sound, a vibration, an odor and a taste.
- the set of rules may include one or more trigger signal types for adjusting the action type resulting from the combination of a primary signal with a primary signal type and a modulator signal with a modulator signal type
- the program instructions may be further executable by the processing unit for, during the period of time, for at least some of the generated modulator signals, causing a generation of a trigger signal with a trigger signal type selected from the one or more trigger signal types, the generation of the trigger signal for indicating to the user to adapt the action type corresponding to the generated primary signal and the generated modulator signal in accordance with the set of rules, wherein the comparing the actions performed by the user to expected actions may be further based on the generated trigger signals.
- Figure 1 is a block diagram of an exemplary system for training cognitive executive function of a user
- Figure 2 is a block diagram of an exemplary software architecture for generating a session for training executive function of a user
- Figure 3 is a flowchart diagram of an exemplary method of training executive function of a user
- Figure 4 is a diagram illustrating an exemplary set of rules, where the primary signals, combined with the modulation signals, on the leftmost column may be combined with different trigger signals of a given trigger signal type, resulting in a different expected action than the expected action from the primary signal alone;
- Figure 5A is an illustration of an exemplary primary signal combined with a modulation signal
- Figure 5B is an illustration of a different exemplary primary signal combined with a modulation signal
- Figure 5C is an illustration of a different exemplary primary signal combined with a modulation signal
- Figure 5D is an illustration of a different exemplary primary signal combined with a modulation signal
- Figure 5E is an illustration of a different exemplary primary signal combined with a modulation signal
- Figure 5F is an illustration of a different exemplary primary signal combined with a modulation signal
- Figure 5G is an illustration of a different exemplary primary signal combined with a modulation signal
- Figure 5H is an illustration of a different exemplary primary signal combined with a modulation signal.
- Figure 6 is an illustration of an exemplary primary signal, an exemplary modulation signal, and exemplary mnemonics presented on a display.
- the present disclosure relates to systems and methods for generating a training session for training cognitive executive function of a user.
- the system presents a series of stimuli or signals to the user, and the user is prompted to act by performing the appropriate action in accordance with the signal(s) that have been presented to the user, as established based on a set of rules with the user.
- the generating of the signals may test the three main tenets of executive function, namely working memory, mental flexibility and self-control.
- a primary signal is presented to the user, prompting the user to perform a specific action corresponding to the type of the primary signal.
- a modulation signal is presented to the user.
- the modulator stimulus requires the user to exert self-control and not perform the initial action associated with the primary signal, and instead demonstrate mental flexibility and perform the action corresponding to the combination of a type of the primary signal and a type of the modulation signal in accordance with the set of rules.
- the user also anticipates that in some instances, a modulation signal may not be generated, where the user is to move forward with the performance of the action type corresponding to the primary signal type of the primary signal, based on the set of rules.
- the user is caused to anticipate a possible modulator signal, that can cause the user to carry out different actions, or perform the action type corresponding to the primary signal type if no modulation signal is generated.
- Working memory is also tested by requiring the user to perform the requisite actions of the correct action type in accordance with the set of rules.
- the actions performed by the user are compared to expected action types, based on the primary signal and modulation signal generated, in accordance with the set of rules.
- a performance of the user (and in some instances, a value corresponding to an improvement or worsening of the executive function of the user) is then established.
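The comparison step above reduces to matching each performed action against the expected action type for its trial and computing an accuracy value. A minimal sketch, with hypothetical action labels:

```python
def score_session(performed, expected):
    """Fraction of trials where the performed action matched the expected
    action type derived from the generated signals and the set of rules."""
    if not expected:
        return 0.0
    hits = sum(p == e for p, e in zip(performed, expected))
    return hits / len(expected)
```

The resulting value can feed the difficulty adjustment described earlier, or be tracked across sessions to estimate improvement or worsening of executive function.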
- Reference is made to FIG. 1, illustrating an exemplary system 100 for generating a training session for training a cognitive executive function of a user.
- the system 100 may be in communication with one or more data sources (e.g. for storing a series of stimuli) and/or one or more remote computers (not shown).
- the system 100 has a processor 102, memory 101, a stimulus generator 103 and a user input interface 105.
- the system 100 may have a display 104 and an input / output (I/O) interface 106.
- the processor 102 may be a general-purpose programmable processor.
- the processor 102 is shown as being unitary, but the processor 102 may also be multicore, or distributed (e.g. a multi-processor).
- the computer readable memory 101 stores program instructions and data used by the processor 102.
- the computer readable memory 101 may also store instructions to generate primary signals, instructions to generate modulator signals, instructions to generate trigger signals, instructions to generate mnemonics, etc.
- the memory 101 may be non-transitory.
- the computer readable memory 101 though shown as unitary for simplicity in the present example, may comprise multiple memory modules and/or caching. In particular, it may comprise several layers of memory such as a hard drive, external drive (e.g. SD card storage) or the like and a faster and smaller RAM module.
- the RAM module may store data and/or program code currently being, recently being or soon to be processed by the processor 102 as well as cache data and/or program code from a hard drive.
- a hard drive may store program code and be accessed to retrieve such code for execution by the processor 102 and may be accessed by the processor 102 to store and access data.
- the memory 101 may have a recycling architecture for storing, for instance, user input of actions performed by a user during a training session, a performance value for a user, a difficulty level of a training session generated for a user, where older data files are deleted when the memory 101 is full or near being full, or after the older data files have been stored in memory 101 for a certain time.
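The recycling architecture described above (oldest records discarded once the store is full) can be sketched with a bounded container. This is a simplified illustration of the behavior, not the disclosed implementation; the record names are hypothetical.

```python
from collections import deque

class RecyclingStore:
    """Keeps at most `capacity` session records; when a new record arrives
    at capacity, the oldest record is discarded automatically."""

    def __init__(self, capacity):
        self.records = deque(maxlen=capacity)

    def add(self, record):
        # deque with maxlen drops the oldest entry on overflow
        self.records.append(record)

    def __len__(self):
        return len(self.records)
```

A time-based variant (deleting records older than a retention window) would follow the same pattern with a timestamp check on insert.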
- the I/O interface 106 is in communication with the processor 102.
- the I/O interface 106 may include a network interface and may be a wired or wireless interface for establishing a remote connection with, for example, a remote server, an external data source, a remote computer, etc.
- the I/O interface 106 may be an Ethernet port, a WAN port, a TCP port, etc.
- the processor 102, the memory 101 and the I/O interfaces 106 may be linked via bus connections.
- the stimulus generator 103 is a device to generate certain signals (e.g. primary signals, modulation signals, trigger signals) for presentation to a user.
- the stimulus generator 103 may be one or more of a display (may include display 104), a speaker, one or more motors for producing a vibration, a dispenser for one or more smells, one or more LEDs (light emitting diodes), etc.
- the stimulus generator 103 may include a transducer for generating the signals.
- the user input interface 105 is a device through which the user may provide input to the system 100 (e.g. when performing a training session).
- a user input interface 105 may be, or include, a mouse, a keyboard, a joystick, a controller, a touchscreen (e.g. of display 104), a microphone (for capturing speech or sounds from the user), an eye tracker, a motion detector, etc.
- the display 104 is a screen for sharing information with the user (e.g. during a training session, such as the primary stimuli, the modulator stimuli, the trigger stimuli, etc.). The display 104 may be a screen for a computer, a screen for a virtual-reality headset, a screen for an extended-reality system, a touchscreen (where the display 104 may also act as a user input interface 105), etc.
- the system 100 may be, or may include (composed by processor 102, memory 101, etc.), a computer, such as a desktop computer, a laptop, a tablet computer, a smartphone, a virtual- reality computer system, an extended-reality computer system, etc.
- the system 100 may be connected (e.g. through an Internet connection, through a local network such as a LAN network) to a remote server or database for transmitting and optionally storing usage information on a subject using the system 100 (e.g. a duration of usage of the system, a time of start and finish of a training session using the system, a performance of a user, an identifier for a user, a position of the system e.g. represented by GPS coordinates, etc.)
- Reference is made to FIG. 2, illustrating an exemplary software architecture for executive function training 200.
- the executive function training software architecture 200 may be implemented by system 100.
- the system 100 has program code, stored in memory 101, that includes the primary signal generation module 210, the modulation signal generation module 220, the trigger signal generation module 230, the primary mnemonic generation module 240 and the performance evaluation module 250.
- Each of the primary signal generation module 210, the modulation signal generation module 220, the trigger signal generation module 230 and the primary mnemonic generation module 240 includes program code configured to implement the functionality of the modules as is described herein.
- the primary signal generation module 210 includes program code stored in memory 101 that, when executed by the processor 102, causes the processor 102 to generate the primary signal (e.g. through the stimulus generator 103 and/or on the display 104).
- the primary signal may be presented as an image or part of an image on the display 104.
- the primary signal may trigger other senses of the user, being a sound, a taste, a tactile sensation, a smell, or a combination thereof.
- Exemplary primary signals are presented in Figures 5A-5H.
- the primary signal generation module 210 causes the generation of periodic primary signals over time during the course of a training session.
- the time between a generation of primary signals may be constant, where in other instances, the time between the generation of primary signals may be varied.
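The timing behavior just described (constant intervals in some instances, varied intervals in others) can be sketched as a schedule of signal onsets. The base interval and jitter values are illustrative assumptions.

```python
import random

def signal_onsets(n, base=2.0, jitter=0.0, seed=None):
    """Return onset times (in seconds) for n primary signals.

    With jitter=0 the interval between signals is constant; otherwise
    each interval is varied uniformly within +/- jitter, so the user
    cannot anticipate the exact moment of the next signal.
    """
    rng = random.Random(seed)
    onsets, t = [], 0.0
    for _ in range(n):
        t += base + rng.uniform(-jitter, jitter)
        onsets.append(t)
    return onsets
```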
- the modulation signal generation module 220 includes program code stored in memory 101 that, when executed by the processor 102, causes the processor 102 to generate a modulation signal (e.g. on the display 104 or through the stimulus generator 103).
- the primary signal generation module 210 may call the modulation signal generation module 220 to produce the modulation signal along with, or after, the primary signal.
- the modulation signal may be generated with each generation of a primary signal.
- the modulation signal may be generated only with respect to certain primary signals, where not all primary signals will be accompanied by a modulation signal.
- the modulation signal may be presented as an image or part of an image on the display 104. However, it will be understood that the modulation signal may trigger other senses of the user, being a sound, a taste, a tactile sensation, a smell, or a combination thereof.
- Exemplary modulation signals are presented in the second, third and fourth columns of Figure 4.
- the modulation signal generation module 220 may cause the processor 102 to generate a plurality of modulation signals with respect to a single generated primary signal.
- the user is presented with a plurality of modulation signals, adding to the complexity of the analysis performed by the user to execute the appropriate action, where the determination of the appropriate action requires a discerning of the meaning behind the combination of modulation signals, combined with the primary signals.
- the trigger signal generation module 230 includes program code stored in memory 101 that, when executed by the processor 102, causes the processor 102 to generate a trigger signal (e.g. on the display 104 or through the stimulus generator 103).
- the primary signal generation module 210 may call the trigger signal generation module 230 to produce the trigger signal along with, or after, the primary signal.
- the modulation signal generation module 220 may call the trigger signal generation module 230 to produce the trigger signal along with, or after, the modulation signal.
- the trigger signal may be a modification of the primary signal and/or the modulation signal.
- the trigger signal may be a new symbol presented on a display 104 along with the primary signal, or the primary signal and the modulation signal.
- the trigger signal may be presented along with the primary signal, along with the modulation signal, or along with the primary signal and modulation signal.
- the trigger signal may be presented after the primary signal, after the modulation signal, or after the primary signal and modulation signal when the primary signal and modulation signal are presented together.
- only some of the modulation signals may be presented with a trigger signal.
- only some of the primary signals may be presented with a trigger signal.
- each of the modulation signals may be accompanied by a trigger signal.
- each of the primary signals may be presented with a trigger signal.
- the trigger signal may be presented as an image or part of an image on the display 104. However, it will be understood that the trigger signal may trigger other senses of the user, being a sound, a taste, a tactile sensation, a smell, or a combination thereof.
- the mnemonic generation module 240 includes program code stored in memory 101 that, when executed by the processor 102, causes the processor 102 to generate one or more mnemonics during the course of a training session (e.g. on the display 104, or by the stimulus generator 103).
- the mnemonic may be a word or image presented on a display 104.
- the mnemonic may be a word or sound generated through a speaker by the stimulus generator 103.
- the mnemonic may be a smell produced by the stimulus generator 103.
- a command to generate information for the user is produced by the mnemonic generation module 240, the information indicative of whether the mnemonic should be retained or discarded by the user during the course of a training session, the information determined as a function of a set of rules defined at the executive function training software architecture 200, and conveyed to the user.
- the information may be a colour, where words or images of a certain colour are to be retained by the user, and words or images of another colour are to be discarded by the user.
- the information may be a font, where words or images of a certain font are to be retained by the user, and words or images of another font are to be discarded by the user.
- the information may be a tactile sensation, where the presentation of each mnemonic is accompanied by a tactile sensation, where the information of a type of tactile sensation gives the user information to discard a given mnemonic, and information of a different type of tactile sensation gives the user information to retain a given mnemonic.
- the performance evaluation module 250 includes program code stored in memory 101 that, when executed by the processor 102, causes the processor 102 to evaluate a performance of a user during a specific training session in accordance with a set of rules defined at the executive function training software architecture 200 and the primary signals, modulation signals, trigger signals and mnemonics generated during the course of the training session from commands issued by one or more of the primary signal generation module 210, the modulation signal generation module 220, the trigger signal generation module 230 and the mnemonic generation module 240.
- the performance evaluation module 250 gathers or receives information on the primary signal type of the primary signal which is generated by the primary signal generation module 210.
- the performance evaluation module 250 gathers or receives information on the modulation signal type of the modulation signal which is generated by the modulation signal generation module 220.
- the performance evaluation module 250 gathers or receives information on the trigger signal type of the trigger signal which is generated by the trigger signal generation module 230.
- the performance evaluation module 250 also receives or gathers information on the action performed by the user following a generating of each primary signal by the primary signal generation module 210.
- the information on the action performed by the user may be transmitted or gathered from the user input interface 105, where the user performs an action on the user input interface.
- a camera or microphone may also be used to record information on the action performed by the user following the generating of the primary signal.
- a time between the generating of the primary signal (or of the modulation signal, or of the trigger signal) and the action performed by the user may be recorded by the performance evaluation module 250.
- the performance evaluation module 250 generates a query to retrieve from memory a set of rules defining an expected action in accordance with a generated primary signal, presented alone, or combined with one or more of the modulation signal(s) and the trigger signal(s).
- the performance evaluation module 250 then causes the processor 102 to compare the action performed by the user following the generating of a primary signal (optionally combined with the modulation signal and/or the trigger signal) with the expected action for that primary signal type, presented alone or combined with one or more of the modulation signal(s) and the trigger signal(s), based respectively on the modulation signal type of each modulation signal (or combination of modulation signals) and/or the trigger signal type of each trigger signal (or combination of trigger signals).
- the processor 102 is caused by the performance evaluation module 250 to perform the comparison after each primary signal during the course of the training session.
- a value indicative of the number of matches between the performed actions by the user and the expected actions by the user may be generated by the processor 102.
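This match counting can be sketched minimally as follows (the helper name and action labels are hypothetical):

```python
def count_matches(performed, expected):
    """Count the positions where the user's performed action matches the
    expected action; not performing an action (None) is itself treated
    as an action."""
    return sum(p == e for p, e in zip(performed, expected))

# Two of three responses match the expected actions (labels illustrative).
score = count_matches(["press_left", None, "press_right"],
                      ["press_left", "press_left", "press_right"])
```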
- the performance evaluation module 250 gathers or receives information on the mnemonic that is generated by the mnemonic generation module 240 and the information generated by the mnemonic generation module 240 to indicate to the user if the mnemonic should be retained or discarded (a retention mnemonic or a distractor mnemonic). The performance evaluation module 250 then queries memory 101 to retrieve the set of rules defining whether a mnemonic is to be retained or discarded based on the information presented to the user along with the mnemonic. The performance evaluation module 250 receives information from the user (e.g. via the user input interface 105) on the mnemonics retained by the user.
- the performance evaluation module 250 causes the processor 102 to compare the mnemonics received from the user with the mnemonics that are expected to be retained by the user based on the set of rules.
- the performance evaluation module 250 may cause the processor 102 to output a score or value indicative of a performance of a user with respect to the mnemonics generated by the mnemonic generation module 240.
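One possible scoring sketch for this mnemonic comparison (the penalty for retained distractors is an assumption; the disclosure does not prescribe a scoring formula):

```python
def mnemonic_score(retained_by_user, presented):
    """Compare the mnemonics the user reports retaining against the
    retention mnemonics defined by the identifier information.
    `presented` maps each mnemonic to True (retention mnemonic) or
    False (distractor mnemonic); retained distractors are penalized
    (an assumed scoring rule)."""
    expected = {m for m, retain in presented.items() if retain}
    reported = set(retained_by_user)
    correct = len(expected & reported)       # retention mnemonics recalled
    false_alarms = len(reported - expected)  # distractors wrongly retained
    return correct - false_alarms

# "TWIST" as a retention mnemonic, "DODGE" as a distractor.
s = mnemonic_score(["TWIST"], {"TWIST": True, "DODGE": False})
```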
- the performance evaluation module 250 causes the processor 102 to generate metrics, a value or a score on a performance of the user during a training session based on the comparison of the expected actions following the generation of the primary signals with the actions performed by the user (not performing an action may be considered an action by the software architecture 200), and in some instances, based on a comparison of the mnemonics retained by the user, and provided as input to the system 100, with the retention mnemonics that are expected to be retained by the user based on the set of rules.
- the performance evaluation module 250 may cause the processor 102 to compare the performance of the user between different training sessions, and may cause a modulation of a difficulty of the training session based on the comparison for a future training session (e.g. by adding more modulation signal types, trigger signals, more rules regarding the mnemonics to be retained or discarded, etc.).
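Such difficulty modulation might be sketched as follows (the threshold, step and level semantics are illustrative assumptions, not taken from the disclosure):

```python
def adjust_difficulty(current_level, previous_score, current_score,
                      step=1, threshold=0.05):
    """Hypothetical adaptation rule: raise the difficulty of the next
    session when performance improved by more than `threshold`, lower it
    when performance dropped by more than `threshold`. A higher level
    could, e.g., add modulation signal types or trigger signal types."""
    delta = current_score - previous_score
    if delta > threshold:
        return current_level + step
    if delta < -threshold:
        return max(1, current_level - step)
    return current_level
```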
- a set of rules is defined at step 310 (e.g. through the receipt of user input; adapted by the software architecture to increase or decrease the difficulty of the training session).
- the set of rules establishes an expected action performed by the user in accordance with a primary signal of a primary signal type that is presentable to the user, where the primary signal may be adjusted in accordance with one or more modulation signals, and one or more trigger signals.
- Each primary signal type may be associated to an expected action. For instance, each primary signal may be associated with a different action (in some cases, more than one primary signal may be associated with a same expected action).
- the set of rules further defines how the expected actions associated with each of the primary signal types may be adapted (where a different action is expected) when one or more modulation signals, each of a given modulation signal type, are generated in relation to a primary signal.
- a set of rules may include a plurality of modulation signal types.
- a modulation signal type combined with a primary signal type may generate an expected action.
- the expected action following a combination of a primary signal of a primary signal type and a modulation signal of a modulation signal type may be different from (or, in some embodiments, the same as) the expected action from the primary signal of the same primary signal type alone.
- a set of rules may include three primary signal types, A, B and C, each associated with a different expected action, 1, 2 and 3.
- Primary signal type A is associated with expected action 1
- primary signal type B is associated with expected action 2
- primary signal type C is associated with expected action 3.
- the set of rules may include two modulation signals, a and b.
- the set of rules may define that a combination of primary signal “A” with modulation signal “a” results in action 4 as expected input, a combination of primary signal “A” with modulation signal “b” results in action 5 as expected input, and so forth.
- a combination of a primary signal and a modulation signal may result in the same expected action as when the primary signal is generated alone. For instance, if the primary signal type A is associated with expected action 1, then a combination of the primary signal type A and of the modulation signal “a” may also be associated with expected action 1;
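The example above (primary signal types A, B and C with modulation signals a and b) can be written as a lookup table; in this illustrative sketch, actions 6 through 8 and the C-with-a pairing are assumed values added for completeness:

```python
# Primary signal types A, B and C alone, plus their combinations with
# modulation signal types a and b (actions 6-8 are assumptions).
RULES = {
    ("A", None): 1, ("B", None): 2, ("C", None): 3,
    ("A", "a"): 4, ("A", "b"): 5,
    ("B", "a"): 6, ("B", "b"): 7,
    ("C", "a"): 3,  # a combination may keep the unmodulated action
    ("C", "b"): 8,
}

def expected(primary, modulation=None):
    """Expected action for a primary signal type, optionally combined
    with a modulation signal type."""
    return RULES[(primary, modulation)]
```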
- the trigger signal types may be defined in the set of rules to adjust the action that is expected from the user from the presentation of a primary signal, or a combination of a primary signal and of a modulation signal.
- the trigger signal type may be defined in the set of rules to “accelerate”, “slow down”, “stop”, “delay”, etc. the action associated to the primary signal, or the primary signal combined with the modulation signal.
- the expected action by the user, used to assess the user’s performance during the training session, may be defined in accordance with the trigger signal outputted and conveyed to the user, based on the set of rules, combined with the primary signal and, optionally, the modulation signal.
- the trigger signal may be presented as an alteration of the appearance of the primary signal and/or the modulation signal (e.g. a change in colour, in size, in opacity, in orientation, etc. of the primary signal and/or the modulation signal).
- the set of rules further defines identifier types for mnemonics presented to the user during the course of a training session, where an identifier of a specific identifier type indicates if the user is to retain the mnemonic (a retention mnemonic) or discard the mnemonic (a distractor mnemonic) during the course of a training session.
- Exemplary identifier types may be a colour, where a first colour may indicate to the user that the user should retain the mnemonic (e.g. the word, the symbol), and a second colour may indicate to the user that the user should discard the mnemonic (e.g. the word, the symbol).
- Other exemplary identifier types may be an orientation of the mnemonic, a size of the mnemonic, an opacity of the mnemonic, a position of the mnemonic on a display, etc.
- the set of rules are stored in memory of the system, and may be queried or analyzed to determine a type of primary signal to generate, a type of modulation signal to generate, a type of trigger signal to generate.
- the set of rules is retrieved from memory and analyzed for determining a performance of a user during a training session, to determine if the actions performed by the user during the course of the training session match the expected actions in accordance with the set of rules (where an increased match between the actions performed by the user and the expected actions is correlated with a better performance during the training session).
- the set of rules may be further retrieved and analyzed to determine if the mnemonics inputted by the user as being mnemonics to retain correspond to those mnemonics expected to be retained in accordance with the identifier types presented as information to the user along with the mnemonics.
- the set of rules may be communicated to the user prior to a training session, by, e.g. displaying the set of rules to the user on a display of a computing device.
- a command is generated to initiate a training session at step 320.
- Metadata defining the training session is generated for the training session dataset.
- the metadata may include, for instance, a name of the training session, an identifier of the user (e.g. name) performing the training session, a date, a start time, a length, etc.
- a set of rules is retrieved or analyzed for the initiated training session.
- the command to initiate the training session is followed or accompanied by a command to generate a first primary signal in accordance with the set of rules for the training session.
- a primary signal is generated at step 330.
- the primary signal may be presented on a display.
- the primary signal may be a sound, a taste, a sensation, etc.
- the generated primary signal is provided to the user.
- the primary signal may be accompanied or followed by a modulation signal.
- the modulation signal is generated at step 340.
- the modulation signal may be presented on a display.
- the modulation signal may be a sound, a taste, a sensation, etc.
- the generated modulation signal is provided to the user.
- a trigger signal may be generated at step 350.
- the trigger signal may accompany or follow the modulation signal.
- the trigger signal may accompany or follow the primary signal.
- the trigger signal may modulate the presentation (e.g. the appearance) of the primary signal and/or the modulation signal, providing additional information to the user, the user expected to recognize the difference in the received primary signal and/or modulation signal.
- the trigger signal may be presented on a display.
- the trigger signal may be a sound, a taste, a sensation, etc.
- the generated trigger signal is provided to the user.
- one or more mnemonics may be generated for presentation to the user at step 360.
- the one or more mnemonics may be presented on a display (e.g. a word or an image).
- the one or more mnemonics may surround the primary signal.
- the one or more mnemonics may be a sound, a taste, a sensation, etc. The generated one or more mnemonics is provided to the user.
- Identifier information may be presented to the user along with the mnemonic to provide the user with information to ascertain if the mnemonic is a retention mnemonic or a distractor mnemonic.
- the identifier information may be a colour, where a first colour is indicative of a retention mnemonic and a second colour is indicative of a distractor mnemonic.
- the one or more mnemonics may accompany or follow the generation of the primary signal.
- the one or more mnemonics may accompany or follow the generation of the modulation signal.
- the one or more mnemonics may accompany or follow the generation of the trigger signal.
- a time of generation of the one or more mnemonics may be independent from an instance of generation of the primary signal, the modulation signal and/or the trigger signal.
- the one or more mnemonics may be generated randomly throughout the duration of the training session.
- the mnemonics 611 and 612 may be presented on a display next to the primary signal 500 (combined with the modulation signal).
- the mnemonic “TWIST” is in a first colour, the colour being the identifier information, the first colour identifying “TWIST” as a retention mnemonic based on the set of rules.
- the mnemonic “DODGE” is in a second colour, the colour being the identifier information, the second colour identifying “DODGE” as a distractor mnemonic based on the set of rules.
- the input provided by the user as a resulting action from the primary signal, combined with the modulation signal and/or the trigger signal, is received at step 370.
- the input may be provided by the user using a user input interface.
- a determination of the state of the training session is queried at step 380.
- the duration of the training session may be defined by the system as a time period, where a determination of the lapse of the time period causes an end of the training session.
- the duration of the training session may be defined by a number of primary signals to be generated for a training session, where a generating of a primary signal may cause an increase of an integer counting the number of primary signals generated. When the value representing the number of primary signals generated matches a threshold value for the number of primary signals generated for a given training session, a command may be issued to end the training session.
- steps 330 and 370, and optionally steps 340, 350 and/or 360 are repeated.
- a new primary signal is generated of a given primary signal type, and optionally accompanied or followed by one or more of a modulation signal, a trigger signal and one or more mnemonics.
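The loop of steps 330 through 380 might be sketched as follows (the structure and names are hypothetical; modulation signals, trigger signals and mnemonics are omitted for brevity):

```python
import random

def run_session(rules, num_signals, get_user_input, seed=0):
    """Minimal sketch of the session loop: generate primary signal
    types until the configured count is reached (step 380), and collect
    the user's response after each one (step 370)."""
    rng = random.Random(seed)
    generated, responses = [], []
    while len(generated) < num_signals:        # counter vs. threshold
        primary = rng.choice(sorted(rules))    # step 330: primary signal
        generated.append(primary)
        responses.append(get_user_input(primary))
    return generated, responses
```

At the end of the loop, `generated` and `responses` hold the material needed for the step 390 comparison against the expected actions.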
- If a determination is made at step 380 that the training session has ended, then the resulting actions performed by the user during the course of the training session are compared to expected results for the training session at step 390.
- the expected results are generated from the set of rules and each of the primary signals (optionally combined with the modulation signal and/or the trigger signal) generated during the course of the training session.
- An expected action is determined from the primary signal type of each generated primary signal, and optionally the modulation signal type of the modulation signal related to the primary signal, and/or optionally the trigger signal type of the trigger signal related to the primary signal.
- the user may provide input regarding the mnemonics to retain (e.g. at the end of the training session, during the training session), presented during the course of the training session.
- a comparison of the results provided by the user with the expected results regarding the generated mnemonics is determined. The determination may be performed using the set of rules specific for the mnemonics, based on the identifier information provided to the user with each of the mnemonics, where the identifier information provides the user with the identifier type.
- the identifier type of the generated mnemonic establishes if the mnemonic is to be retained by the user, or discarded by the user. For instance, an identifier type of a first colour is associated with a mnemonic to retain, while an identifier type of a second colour is associated with a mnemonic to discard.
- a performance of the user is measured at step 390, the performance indicative of a state of executive function of the user.
- the measurement of the performance of the user may result in the generating of a score (e.g. a value, a percentage, a ratio) indicative of an overall correctness of the results provided by the user (with respect to the performed actions and, optionally, the mnemonics) when compared to the expected answers.
- a calculation of a change of performance may be performed for the user for a current training session and a past training session.
- a calculation may be performed to compare the user to a cohort of other users, where the results of the user may be compared to the results of the members of the cohort (e.g. when performing a training session of an equivalent difficulty).
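These measurements might be computed as follows (illustrative metrics only; the disclosure does not fix a formula):

```python
from statistics import mean

def session_score(matches, total):
    """One possible overall-correctness metric: the ratio of matching
    (performed vs. expected) responses."""
    return matches / total

def performance_change(current_score, past_score):
    """Change of performance between a current and a past session."""
    return current_score - past_score

def relative_to_cohort(user_score, cohort_scores):
    """Difference between the user's score and the mean score of a
    cohort performing sessions of equivalent difficulty."""
    return user_score - mean(cohort_scores)
```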
- the results of a training session may be weighed to set a difficulty level of a future training session.
- a more difficult training session results in a more complex set of rules (e.g. adding additional primary signal types with corresponding expected actions by the user, adding additional modulation signal types, or adding one or more sets of modulation signals with modulation signal types, further modulating the first set of modulation signals with given modulation signal types, adding further trigger signal types, etc.)
- a selection of primary signals of a given primary signal type, and optionally modulation signals of a given modulation signal type and/or trigger signals of a given trigger signal type may be performed randomly by the system using, e.g., a random number generator, where a number is associated with a given primary signal type, modulation signal type and/or trigger signal type.
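This number-based random selection can be sketched as follows (the mapping of numbers to signal types is illustrative; the type names are assumptions):

```python
import random

# Illustrative number-to-type mappings (the actual types are defined by
# the set of rules; these names are assumptions).
PRIMARY_TYPES = {0: "smiley", 1: "frown"}
MODULATION_TYPES = {0: "black_circle", 1: "white_circle"}

def pick_signals(rng=None):
    """Select a primary signal type and a modulation signal type with a
    random number generator, each generated number being associated
    with a given signal type."""
    rng = rng or random.Random()
    primary = PRIMARY_TYPES[rng.randrange(len(PRIMARY_TYPES))]
    modulation = MODULATION_TYPES[rng.randrange(len(MODULATION_TYPES))]
    return primary, modulation
```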
- the system further determines a location of the primary signals and optionally the modulation signals, the trigger signals and/or the mnemonics in a virtual three-dimensional space of the virtual-reality headset, to give the user a simulation of a 360-degree space, or three-dimensional space.
- the signal(s) are visible when the user wearing the virtual-reality headset performs a movement which aligns the field of view of the user with the position of the signal in the virtual space, the signal appearing in the field of view of the user as represented by the images generated on the display of the virtual-reality headset.
- the system may measure, using one or more positional sensors (e.g. accelerometers, gyroscopes), a change in orientation of the head or of the body of the user to orient the user in virtual space (e.g. mapping the user in virtual space from the change in position and/or orientation of the user in real space), where the system causes a change in the image stream generated on the display in accordance with the change in orientation of the head or of the body of the user in the real world, determined from the positional sensors.
- a translation of the objects appearing in the virtual space corresponds to the change in the position and/or orientation of the user in real-space.
- one or more of the primary signals of a given primary signal type, and optionally modulation signals of a given modulation signal type and/or trigger signals of a given trigger signal type, are sound-based; holophonic sound, or three-dimensional sound, may be used to imitate sound produced in a three-dimensional space, simulating that the sound originates from a given direction.
- the holophonic-based sound may guide the user to move around (rotate) in real-space, causing the system to adjust the projected image on the display of the virtual-reality headset accordingly, the sound, e.g., acting as a cue regarding the direction of another of the primary signals (or modulation signals, trigger signals and/or mnemonics) which may be displayed on the screen.
- the primary signals, modulation signals, trigger signals and/or mnemonics may be generated using a hologram.
- the signals and stimuli may be presented in a virtual reality 3D space (where information is displayed across 360 degrees along one, two or three of axes x, y and z).
- the system implementing VR technology may include one or more position sensors, such as accelerometers, gyroscopes, etc. to detect a movement of the head and/or body of the user, where the image displayed by the system adapts in accordance with the head or body movement of the user, to simulate virtually the user moving in real space.
- the system may require that the user physically move in order to perceive certain of the signals and stimuli.
- Each of the signals / stimuli may be associated with coordinates (x, y and z coordinates) in the 3D virtual space, at which location the signal and/or stimuli is to be displayed when appropriate, in accordance with the program code executed by the processor of the system.
- the stimuli and signals may be displayed as part of a volumetric video or a volumetric-based reality, where a user can experience a recording with six degrees of freedom, namely the X, Y and Z axes, but also based on pitch, yaw and roll.
- the primary signals may be associated with a set of coordinates for generating the primary signal in the virtual 3D space.
- the modulator signals may be associated with a set of coordinates for generating the modulator signal in the virtual 3D space.
- the mnemonics may be associated with a set of coordinates for generating the mnemonic in the virtual 3D space.
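A hedged sketch of this coordinate-based placement, together with the field-of-view visibility test described earlier, might look as follows (the geometry is a simplification, not taken from the disclosure):

```python
import math
from dataclasses import dataclass

@dataclass
class Signal3D:
    """A signal placed at coordinates (x, y, z) in the virtual 3D space."""
    kind: str   # e.g. "primary", "modulation", "trigger" or "mnemonic"
    x: float
    y: float
    z: float

def in_field_of_view(signal, yaw_deg, half_fov_deg=45.0):
    """Rough horizontal visibility test: the signal is visible when the
    angle between the user's gaze direction (yaw) and the direction of
    the signal is within half the field of view."""
    angle = math.degrees(math.atan2(signal.x, signal.z))  # 0 deg = straight ahead
    diff = (angle - yaw_deg + 180.0) % 360.0 - 180.0      # wrap to [-180, 180)
    return abs(diff) <= half_fov_deg
```

A signal placed behind the user would fail this test until a head rotation (measured by the positional sensors) brings it into the field of view.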
- the differences between the primary signals may be subtle, as in the examples provided in Figures 5A-5H, for instance, by modifying an orientation of a line found in zone 500B of the primary signal.
- the modulation signal 520 may also be generated within the primary signal 500 on a display, occupying a space defined by the primary signal 500. However, it will be understood that the modulation signal does not have to be presented within the primary signal, but can be presented next to the primary signal.
- the first column of Figure 4 illustrates exemplary primary signals each combined with a modulation signal.
- the second column, third column and fourth column illustrate exemplary trigger signals, where the trigger signal type of the trigger signal, in combination with the primary signal type and the modulation signal type, determines in the set of rules which action is to be expected from the user when presented with the combination of the primary signal, the modulation signal and the trigger signal.
- the expected action is an execution of action 1.
- the expected action is instead an inhibition of action 1 (where action 1 is not performed).
- the expected action is to perform action 1 in a slowed down manner.
- the trigger signals may be presented next to or on a display, along with the primary signal (and optionally the modulation signal and the mnemonics).
- the set of rules is analyzed by the system to determine if the actions performed by the user during the training session match the expected actions, based on the primary signals (and in some cases, the modulation signal and/or the trigger signal) presented to the user during the course of a training session.
- the present example is to provide a non-limitative example of a session generated by the system for training executive function of a user in accordance with the present teachings. It will be understood that the present example is but for illustrative purposes, and does not limit the scope of the present teachings.
- a set of rules is configured at the system, defining a set of expected actions depending on the primary signal type of the primary signal presented to a user, along with the modulation signal type of the modulation signal equally presented to the user (e.g. using a transducer).
- the set of rules may also be defined to include further changes to the expected actions in accordance with trigger signals of one or more given trigger signal types presented in association with the primary signal and/or the modulation signal.
- the system is configured to present the primary signals, and optionally the modulation signals and/or the trigger signals, to the user through a display of a virtual-reality headset (e.g. where the field of view of a user is determined for the virtual-reality headset, and the primary signals, and optionally the modulation signals and/or the trigger signals, are presented within the determined field of view for the user while wearing the virtual-reality headset).
- a first primary signal type may be a smiley face, for causing the pressing of a left button of a controller as an expected action type.
- a second primary signal type may be a frown face, for causing the pressing of a right button of a controller as an expected action type.
- a first modulation signal type may be a black circle.
- a second modulation type may be a white circle.
- the combination of the first modulation signal type with the first primary signal type causes a double-press of the left button.
- the combination of the second modulation signal type with the first primary signal type causes a long press of the left button.
- the combination of the first modulation signal type with the second primary signal type causes a press of the left button, instead of the right button.
- the combination of the second modulation signal type with the second primary signal type causes a long press of the right button.
- a first trigger signal type may be to display the primary signal in a first colour (e.g. for this example, the first trigger signal type is to display the primary signal in blue).
- a second trigger signal type may be to display the primary signal in a second colour (e.g. for this example, the second trigger signal type is to display the primary signal in green).
- when the primary signal is presented in blue, the expected action is to not perform any action.
- when the primary signal is presented in green, the expected action is to repeat the action that was expected prior to the application of the trigger signal in accordance with the set of rules.
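The worked example can be captured as a single lookup (a sketch; the action labels and the tuple-based encoding of “repeat the action” are assumptions):

```python
def expected_action(primary, modulation=None, trigger=None):
    """Expected action for the worked example: smiley/frown primary
    signals, black/white circle modulation signals, blue/green triggers.
    A tuple encodes the assumed 'repeat the action' rule."""
    if trigger == "blue":    # first trigger signal type: perform no action
        return None
    base = {
        ("smiley", None): "press_left",
        ("frown", None): "press_right",
        ("smiley", "black_circle"): "double_press_left",
        ("smiley", "white_circle"): "long_press_left",
        ("frown", "black_circle"): "press_left",   # left instead of right
        ("frown", "white_circle"): "long_press_right",
    }[(primary, modulation)]
    if trigger == "green":   # second trigger signal type: repeat the action
        return (base, base)
    return base
```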
Abstract
A method for training a cognitive executive function of a user includes defining a set of rules; periodically causing a generation of a primary signal with a primary signal type selected from the plurality of primary signal types, the generation of the primary signal for causing the user to prepare to initiate an action corresponding to an action type of the plurality of action types corresponding to the generated primary signal in accordance with the set of rules; for at least some of the generated primary signals, causing a generation of a modulator signal with a modulator signal type selected from the one or more modulator signal types; receiving information on actions performed by the user during the time period; comparing the actions performed by the user to expected actions; and measuring a performance of the user based on the comparing.
Description
SYSTEM FOR GENERATING A SESSION FOR TRAINING EXECUTIVE FUNCTION OF A USER AND METHOD OF USE THEREOF
[0001] The present application claims priority from U.S. provisional patent application No. 63/518,056 with a filing date of August 7, 2023, incorporated by reference herein.
Technical Field
[0002] The present disclosure relates to cognitive function, and more particularly to improving executive function of a user.
Background
[0003] Executive function involves a set of cognitive processes that are necessary for cognitive control of behavior by selecting and monitoring behaviours for reaching one or more chosen goals. Executive function may be invoked to override prepotent responses that might otherwise be performed automatically following a stimulus in an external environment. A prepotent response is a response which takes priority over other potential responses following a given stimulus (an automated response to that stimulus). In contrast, executive function is often solicited in one or more of the following situations: a situation that involves planning or decision-making; a situation that involves error correction or troubleshooting; a situation where responses are not well-rehearsed or that includes new actions; dangerous or technically challenging situations; and situations that require the overcoming of a strong habitual response or resisting temptation. Executive function is believed to involve the prefrontal cortex.
[0004] Executive function relies on three types of brain function: working memory, mental flexibility and self-control. Working memory involves a user's ability to retain and use information over short periods. Mental flexibility enables a user to shift action in response to different demands or to apply different rules depending on the setting. Self-control enables a user to prioritize actions and resist impulsive actions and/or responses.
[0005] As such, executive function is crucial for humans, who live in an uncertain and unpredictable perceptive-cognitive environment that makes it difficult to predict future situations. This naturally unstable environment has an impact on goal-directed actions, with a permanent need to generate actions, to stop actions, to change decisions, to switch actions and to change an action to a new appropriate behavior based on the latest perceptive-cognitive information processed by the brain, thereby applying executive function. These modulations of our actions are governed by a process called cognitive executive control: the ability to carry out goal-directed behaviour using complex mental processes and cognitive abilities. In tasks or jobs requiring a high level of cognitive processing and performance, cognitive executive control becomes critical for rapid behavior adaptations to unexpected events. Such tasks may be found in certain professional sports, such as soccer, football, tennis, race car driving, etc. Such tasks may also be found in certain jobs, such as those of aircraft pilots, soldiers, air-traffic controllers, etc.
[0006] Moreover, certain jobs require working alongside systems running on artificial intelligence (AI).
[0007] AI provides the basic operation of the system to reduce the human workload, but human intervention is still required to quickly assess and correct the situation resulting from unexpected problems, or for tasks where human intervention is prioritized to achieve the human-AI pair's performance. For instance, in self-driving cars, the artificial intelligence may control the vehicle during normal routes, but the user may be required to intervene if the artificial intelligence model malfunctions.
[0008] As such, a system for generating training sessions to improve cognitive function of a user would be advantageous.
Summary
[0009] The present disclosure relates to systems and methods for generating training sessions for improving cognitive executive control (also referred to herein as executive function) of a user.

[0010] The system provides the user with a sequence of stimuli (referred to herein as signals) and requires the user to respond to the sequence of signals by providing an appropriate action in accordance with a set of rules associating the signal types with respective actions. As such, the system tests working memory by analyzing the user's ability to remember the set of rules; mental flexibility by testing the user's ability to correctly adjust their action in accordance with the sequence of signals; and self-control by requiring the user to repress an action associated with a first signal when that action is changed upon receipt of a second signal following or accompanying the first signal, in accordance with the set of rules.
[0011] The set of rules defines different actions to be performed as a function of a primary signal type selected from a plurality of primary signal types. The set of rules further defines a modulation of the action types associated with the primary signal types in accordance with modulation signal types that are generated following or accompanying the primary signal type.
The generation of a modulation signal trains the user’s impulse control and mental flexibility to inhibit the initial action corresponding to the primary signal and instead produce the action corresponding to the combination of the primary signal and the modulation signal, in accordance with the set of rules, the modulation signal generated after a time following, or with, the primary signal.
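By way of illustration only (this example is not part of the disclosure), such a set of rules can be sketched as a lookup table mapping a primary signal type, together with an optional modulation signal type, to an expected action type. All signal and action names below are hypothetical:

```python
# Hypothetical encoding of a set of rules: the key is the pair
# (primary signal type, modulation signal type or None), and the value
# is the expected action type under the rules.
RULES = {
    ("green_arrow", None): "press_left",         # primary signal alone
    ("green_arrow", "red_cross"): "withhold",    # modulation inhibits the action
    ("blue_circle", None): "press_right",
    ("blue_circle", "red_cross"): "press_left",  # modulation switches the action
}

def expected_action(primary, modulation=None):
    """Return the action type the set of rules associates with the signal(s)."""
    return RULES[(primary, modulation)]
```

Under this sketch, a primary signal presented alone maps directly to its associated action, while a following modulation signal redirects the lookup to a different action, mirroring the inhibition and switching of actions described above.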
[0012] In some instances, the system may further generate a training program to train the user's executive function by adding further levels of complexity to the set of rules and to the information provided to the user during the course of a training session.
[0013] In some instances, the user may be presented with mnemonics during the course of a training session. Mnemonics include complementary information (such as words, symbols, drawings, smells, touches, sounds, etc.) that is presented to the user. Retention mnemonics are mnemonics that the user is to remember, while distractor mnemonics include information that the user is to cast aside or ignore during the course of the session. At the end of a training session, the user is asked to recall the retention mnemonics.
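As a rough illustration (hypothetical, and not taken from the disclosure), the end-of-session recall check could be scored by crediting recalled retention mnemonics and penalizing recalled distractors:

```python
def score_recall(retention, distractors, recalled):
    """Illustrative recall score: hits on retention mnemonics minus recalled
    distractors, normalized by the number of retention items presented."""
    retention_set, distractor_set = set(retention), set(distractors)
    hits = len(retention_set & set(recalled))
    false_alarms = len(distractor_set & set(recalled))
    return (hits - false_alarms) / max(len(retention_set), 1)
```

A user who recalls every retention mnemonic and no distractors scores 1.0; recalling a distractor lowers the score, reflecting the instruction to ignore distractor information.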
[0014] In some instances, the set of rules may include trigger signal types. When presented to the user, the trigger signal types cause the user to adapt the action performed as a function of the primary signal, optionally the modulation signal, and the trigger signal, in accordance with the set of rules. Trigger signal types may be defined in the set of rules to cause the user to accelerate their response actions, slow down their response actions, halt an action, etc.
[0015] A broad aspect is a method for generating a training session to train cognitive executive function of a user, comprising defining a set of rules comprising: a plurality of action types; a plurality of primary signal types, wherein each primary signal type of the plurality of primary signal types is associated with an action type of the plurality of action types; one or more modulator signal types, wherein each of the one or more modulator signal types is for causing a modulation in the action type of the plurality of action types that is associated with a signal type of the plurality of primary signal types; during a time period: periodically causing a generation of a primary signal with a primary signal type selected from the plurality of primary signal types, the generation of the primary signal for causing the user to prepare to initiate an action corresponding to an action type of the plurality of action types corresponding to the generated primary signal in accordance with the set of rules, resulting in the generation of a plurality of signals spread over the time period; for at least some of the generated primary signals, causing a generation of a modulator signal with a modulator signal type selected from the one or more modulator signal types, the generation of the modulator signal for indicating to the user to modulate the action type corresponding to the generated primary signal in accordance with the set of rules; receiving information on actions performed by the user during the time period; comparing the actions performed by the user to expected actions based on the generated primary signals, the generated modulator signals and the set of rules; and measuring a performance of the user based on the comparing.
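The flow of this aspect can be sketched in a few lines of Python (an illustrative simulation only; the rule table, signal names and probabilities below are invented for the example and are not part of the claimed method):

```python
import random

# Hypothetical rules: (primary type, modulation type or None) -> expected action.
EXAMPLE_RULES = {
    ("arrow_left", None): "step_left",
    ("arrow_left", "stop_cross"): "hold",
    ("arrow_right", None): "step_right",
    ("arrow_right", "stop_cross"): "hold",
}

def run_session(rules, get_user_action, n_trials=20, modulation_prob=0.5, seed=0):
    """Emit primary signals over a session, sometimes follow each with a
    modulator signal, collect the user's actions and score them."""
    rng = random.Random(seed)
    primaries = sorted({p for p, _ in rules})
    correct = 0
    for _ in range(n_trials):
        primary = rng.choice(primaries)       # periodic primary signal
        modulation = None
        if rng.random() < modulation_prob:    # only some primaries are modulated
            mods = [m for p, m in rules if p == primary and m is not None]
            if mods:
                modulation = rng.choice(mods)
        performed = get_user_action(primary, modulation)  # via user input interface
        if performed == rules[(primary, modulation)]:     # compare to expected action
            correct += 1
    return correct / n_trials                 # simple performance measure
```

A user function that always applies the rules scores 1.0, while systematic errors lower the measure; this ratio stands in for the comparing and measuring steps of the method.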
[0016] In some embodiments, the set of rules further may include information to generate a response in accordance with one or more mnemonics, and the method may include, during the time period, causing periodically a generation of a mnemonic to a user for causing the user to act in accordance with the set of rules and retain information relating to the mnemonic; receiving one or more responses provided by the user corresponding to the generated retention mnemonics and the set of rules; wherein the measuring of the performance of the user is further based on comparing the received one or more responses with expected one or more responses based on the set of rules.
[0017] In some embodiments, the one or more mnemonics may be one or more of: a sound, an image; a vibration; an odor; and a taste.
[0018] In some embodiments, the one or more mnemonics may be words.
[0019] In some embodiments, the set of rules may include information to ignore a subset of one or more distractor mnemonics from the one or more mnemonics, wherein, during the time period, the causing periodically a generation of a mnemonic may include generating at least one of the one or more distractor mnemonics; and wherein the measuring of the performance of the user may be further based on comparing the received one or more responses with expected one or more responses based on the set of rules, including the generated at least one of the one or more distractor mnemonics that are to be ignored by the user.
[0020] In some embodiments, the primary signals and the modulator signals may be generated via a virtual reality headset.
[0021] In some embodiments, the primary signals and the modulator signals may be generated via an extended reality headset.
[0022] In some embodiments, during the period of time, the method may include adjusting a difficulty associated with the generated primary signals and the generated modulator signals in accordance with a performance determined from the comparing of the actions performed by the
user to expected actions based on the generated primary signals, the generated modulator signals and the set of rules.
[0023] In some embodiments, one or more of the primary signal types of the plurality of signal types may include more than one sensory stimuli selected from an image, a word, a sound, a vibration, an odor and a taste.
[0024] In some embodiments, one or more of the modulator signal types of the one or more modulator signal types may include one or more sensory stimuli selected from an image, a sound, a vibration, an odor and a taste.
[0025] In some embodiments, the set of rules further may include one or more trigger signal types for adjusting the action type resulting from the combination of a primary signal with a primary signal type and a modulator signal with a modulator signal type, the method including, during the period of time, for at least some of the generated modulator signals, causing a generation of a trigger signal with a trigger signal type selected from the one or more trigger signal types, the generation of the trigger signal for indicating to the user to adapt the action type corresponding to the generated primary signal and the generated modulator signal in accordance with the set of rules, wherein the comparing the actions performed by the user to expected actions may be further based on the generated trigger signals.
[0026] Another broad aspect is a system for generating a training session to train cognitive executive function of a user. The system includes a processor; and memory including program code that, when executed by the processor, causes the processor to: define a set of rules comprising: a plurality of action types; a plurality of primary signal types, wherein each primary signal type of the plurality of primary signal types is associated with an action type of the plurality of action types; one or more modulator signal types, wherein each of the one or more modulator signal types is for causing a modulation in the action type of the plurality of action types that is associated with a signal type of the plurality of primary signal types; during a time period: periodically cause a generation of a primary signal with a primary signal type selected from the plurality of primary signal types, the generation of the primary signal for causing the user to prepare to initiate an action corresponding to an action type of the plurality of action types corresponding to the generated primary signal in accordance with the set of rules, resulting in the generation of a plurality of signals spread over the time period; for at least some of the generated primary signals, cause a generation of a modulator signal with a modulator signal type selected from the one or more modulator signal types, the generation of the modulator signal for indicating to the user to modulate the action type corresponding to the generated primary signal in accordance with the set of rules; receive information on actions performed by the user during the time period; compare the actions performed by the user to expected actions based on the generated primary signals, the generated modulator signals and the set of rules; and measure a performance of the user based on the comparing.
[0027] In some embodiments, the set of rules may include information to generate a response in accordance with one or more mnemonics, wherein the program code may further cause the processor to, during the time period, cause periodically a generation of a mnemonic to a user for causing the user to act in accordance with the set of rules and retain information relating to the mnemonic; receive one or more responses provided by the user corresponding to the generated retention mnemonics and the set of rules; wherein the measuring of the performance of the user may be further based on comparing the received one or more responses with expected one or more responses based on the set of rules.
[0028] In some embodiments, the one or more mnemonics may be one or more of: a sound; an image; a vibration; an odor; and a taste.
[0029] In some embodiments, the one or more mnemonics may be words.
[0030] In some embodiments, the set of rules may include information to ignore a subset of one or more distractor mnemonics from the one or more mnemonics, and wherein, during the time period, the causing periodically a generation of a mnemonic may include generating at least one of the one or more distractor mnemonics; and wherein the measuring of the performance of the user may be further based on comparing the received one or more responses with expected one or more responses based on the set of rules, including the generated at least one of the one or more distractor mnemonics that are to be ignored by the user.
[0031] In some embodiments, the primary signals and the modulator signals may be generated via a virtual reality headset.
[0032] In some embodiments, the primary signals and the modulator signals may be generated via an extended reality headset.
[0033] In some embodiments, during the period of time, the program code may further cause the processor to adjust a difficulty associated with the generated primary signals and the generated modulator signals in accordance with a performance determined from the comparing of the actions
performed by the user to expected actions based on the generated primary signals, the generated modulator signals and the set of rules.
[0034] In some embodiments, one or more of the primary signal types of the plurality of signal types may include more than one sensory stimuli selected from an image, a word, a sound, a vibration, an odor and a taste.
[0035] In some embodiments, one or more of the modulator signal types of the one or more modulator signal types may include one or more sensory stimuli selected from an image, a sound, a vibration, an odor and a taste.
[0036] In some embodiments, the set of rules may include one or more trigger signal types for adjusting the action type resulting from the combination of a primary signal with a primary signal type and a modulator signal with a modulator signal type, the program code further causing the processor to: during the period of time, for at least some of the generated modulator signals, cause a generation of a trigger signal with a trigger signal type selected from the one or more trigger signal types, the generation of the trigger signal for indicating to the user to adapt the action type corresponding to the generated primary signal and the generated modulator signal in accordance with the set of rules, wherein the comparing the actions performed by the user to expected actions may be further based on the generated trigger signals.
[0037] Another broad aspect is a non-transitory computer-readable medium having stored thereon program instructions for generating a training session to train cognitive executive function of a user, the program instructions executable by a processing unit for: defining a set of rules comprising: a plurality of action types; a plurality of primary signal types, wherein each primary signal type of the plurality of primary signal types is associated with an action type of the plurality of action types; one or more modulator signal types, wherein each of the one or more modulator signal types is for causing a modulation in the action type of the plurality of action types that is associated with a signal type of the plurality of primary signal types; during a time period: periodically causing a generation of a primary signal with a primary signal type selected from the plurality of primary signal types, the generation of the primary signal for causing the user to prepare to initiate an action corresponding to an action type of the plurality of action types corresponding to the generated primary signal in accordance with the set of rules, resulting in the generation of a plurality of signals spread over the time period; for at least some of the generated primary signals, causing a generation of a modulator signal with a modulator signal type selected from the one or more modulator signal types, the generation of the modulator signal for indicating to the user to modulate the action type corresponding to the generated primary signal in accordance with the set of rules; receiving information on actions performed by the user during the time period; comparing the actions performed by the user to expected actions based on the generated primary signals, the generated modulator signals and the set of rules; and measuring a performance of the user based on the comparing.
[0038] In some embodiments, the set of rules may further include information to generate a response in accordance with one or more mnemonics, wherein the program instructions may be further executable by the processing unit for: during the time period, causing periodically a generation of a mnemonic to a user for causing the user to act in accordance with the set of rules and retain information relating to the mnemonic; receiving one or more responses provided by the user corresponding to the generated retention mnemonics and the set of rules; wherein the measuring of the performance of the user may be further based on comparing the received one or more responses with expected one or more responses based on the set of rules.
[0039] In some embodiments, the one or more mnemonics may be one or more of: a sound; an image; a vibration; an odor; and a taste.
[0040] In some embodiments, the one or more mnemonics may be words.
[0041] In some embodiments, the set of rules further may include information to ignore a subset of one or more distractor mnemonics from the one or more mnemonics, and wherein, during the time period, the causing periodically a generation of a mnemonic may include generating at least one of the one or more distractor mnemonics; and wherein the measuring of the performance of the user may be further based on comparing the received one or more responses with expected one or more responses based on the set of rules, including the generated at least one of the one or more distractor mnemonics that are to be ignored by the user.
[0042] In some embodiments, the primary signals and the modulator signals may be generated via a virtual reality headset.
[0043] In some embodiments, the primary signals and the modulator signals may be generated via an extended reality headset.
[0044] In some embodiments, the program instructions may be further executable by the processing unit for adjusting a difficulty associated with the generated primary signals and the generated modulator signals in accordance with a performance determined from the comparing of
the actions performed by the user to expected actions based on the generated primary signals, the generated modulator signals and the set of rules.
[0045] In some embodiments, one or more of the primary signal types of the plurality of signal types may include more than one sensory stimuli selected from an image, a word, a sound, a vibration, an odor and a taste.
[0046] In some embodiments, one or more of the modulator signal types of the one or more modulator signal types may include one or more sensory stimuli selected from an image, a sound, a vibration, an odor and a taste.
[0047] In some embodiments, the set of rules may include one or more trigger signal types for adjusting the action type resulting from the combination of a primary signal with a primary signal type and a modulator signal with a modulator signal type, wherein the program instructions may be further executable by the processing unit for, during the period of time, for at least some of the generated modulator signals, causing a generation of a trigger signal with a trigger signal type selected from the one or more trigger signal types, the generation of the trigger signal for indicating to the user to adapt the action type corresponding to the generated primary signal and the generated modulator signal in accordance with the set of rules, wherein the comparing the actions performed by the user to expected actions may be further based on the generated trigger signals.
Brief Description of the Drawings
[0048] The invention will be better understood by way of the following detailed description of embodiments of the invention with reference to the appended drawings, in which:
[0049] Figure 1 is a block diagram of an exemplary system for training cognitive executive function of a user;
[0050] Figure 2 is a block diagram of an exemplary software architecture for generating a training session for training executive function of a user;
[0051] Figure 3 is a flowchart diagram of an exemplary method of training executive function of a user;
[0052] Figure 4 is a diagram illustrating an exemplary set of rules, where the primary signals, combined with the modulation signals, in the leftmost column may be combined with different trigger signals of a given trigger signal type, resulting in a different expected action than the expected action from the primary signal alone;
[0053] Figure 5A is an illustration of an exemplary primary signal combined with a
modulation signal;
[0054] Figure 5B is an illustration of a different exemplary primary signal combined with a modulation signal;
[0055] Figure 5C is an illustration of a different exemplary primary signal combined with a modulation signal;
[0056] Figure 5D is an illustration of a different exemplary primary signal combined with a modulation signal;
[0057] Figure 5E is an illustration of a different exemplary primary signal combined with a modulation signal;
[0058] Figure 5F is an illustration of a different exemplary primary signal combined with a modulation signal;
[0059] Figure 5G is an illustration of a different exemplary primary signal combined with a modulation signal;
[0060] Figure 5H is an illustration of a different exemplary primary signal combined with a modulation signal; and
[0061] Figure 6 is an illustration of an exemplary primary signal, an exemplary modulation signal, and exemplary mnemonics presented on a display.
Detailed Description
[0062] The present disclosure relates to systems and methods for generating a training session for training cognitive executive function of a user. The system presents a series of stimuli or signals to the user, and the user is prompted to act by performing the appropriate action in accordance with the signal(s) that have been presented to the user, as established by a set of rules shared with the user.
[0063] The generating of the signals may test the three main tenets of executive function, namely working memory, mental flexibility and self-control.
[0064] Namely, a primary signal is presented to the user, prompting the user to perform a specific action corresponding to the type of the primary signal. However, in some instances, after the presenting of the primary signal, a modulation signal is presented to the user. The modulator stimulus requires the user to exert self-control and not perform the initial action associated with the primary signal, and instead demonstrate mental flexibility and perform the action corresponding to the combination of a type of the primary signal and a type of the modulation
signal in accordance with the set of rules. The user also anticipates that in some instances a modulation signal may not be generated, in which case the user is to move forward with the performance of the action type corresponding to the primary signal type of the primary signal, based on the set of rules. As such, the user is caused to anticipate a possible modulator signal, which can cause the user to carry out different actions, or to perform the action type corresponding to the primary signal type if no modulation signal is generated. Working memory is also tested by requiring the user to perform the requisite actions of the correct action type in accordance with the set of rules.
[0065] The actions performed by the user are compared to expected action types, based on the primary signal and modulation signal generated, in accordance with the set of rules. A performance of the user (and in some instances, a value corresponding to an improvement or worsening of the executive function of the user) is then established.
[0066] Unless the context requires otherwise, throughout the specification and claims which follow, the word “comprise” and variations thereof, such as, “comprises” and “comprising” are to be construed in an open, inclusive sense, that is as “including, but not limited to.”
[0067] Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0068] As used in this specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the content clearly dictates otherwise. It should also be noted that the term “or” is generally employed in its sense including “and/or” unless the content clearly dictates otherwise.
[0069] From the foregoing it will be appreciated that, although specific embodiments have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the teachings. Accordingly, the claims are not limited by the disclosed embodiments.
[0070] EXEMPLARY SYSTEM FOR TRAINING COGNITIVE EXECUTIVE FUNCTION OF A USER:
[0071] Reference is now made to Figure 1, illustrating an exemplary system 100 for generating
a training session for training a cognitive executive function of a user. It will be understood that the system 100 may be in communication with one or more data sources (e.g. for storing a series of stimuli) and/or one or more remote computers (not shown).
[0072] The system 100 has a processor 102, memory 101, a stimulus generator 103 and a user input interface 105. The system 100 may have a display 104 and an input / output (I/O) interface 106.
[0073] The processor 102 may be a general-purpose programmable processor. In this example, the processor 102 is shown as being unitary, but the processor 102 may also be multicore, or distributed (e.g. a multi-processor).
[0074] The computer readable memory 101 stores program instructions and data used by the processor 102. The computer readable memory 101 may also store instructions to generate primary signals, instructions to generate modulator signals, instructions to generate trigger signals, instructions to generate mnemonics, etc. The memory 101 may be non-transitory. The computer readable memory 101, though shown as unitary for simplicity in the present example, may comprise multiple memory modules and/or caching. In particular, it may comprise several layers of memory such as a hard drive, external drive (e.g. SD card storage) or the like and a faster and smaller RAM module. The RAM module may store data and/or program code currently being, recently being or soon to be processed by the processor 102 as well as cache data and/or program code from a hard drive. A hard drive may store program code and be accessed to retrieve such code for execution by the processor 102 and may be accessed by the processor 102 to store and access data. The memory 101 may have a recycling architecture for storing, for instance, user input of actions performed by a user during a training session, a performance value for a user, a difficulty level of a training session generated for a user, where older data files are deleted when the memory 101 is full or near being full, or after the older data files have been stored in memory 101 for a certain time.
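The "recycling" behaviour described here, where the oldest records are evicted once capacity is reached, resembles a bounded ring buffer. A minimal sketch (illustrative only, not taken from the disclosure) using Python's standard library:

```python
from collections import deque

class RecyclingStore:
    """Bounded store: when full, appending a new record silently discards
    the oldest one, as in the recycling architecture described above."""

    def __init__(self, capacity):
        self._items = deque(maxlen=capacity)  # deque evicts from the left at maxlen

    def append(self, record):
        self._items.append(record)

    def oldest(self):
        return self._items[0]

    def __len__(self):
        return len(self._items)
```

A time-based eviction policy (deleting records older than a threshold) could be layered on top by storing a timestamp with each record; that variant is left out for brevity.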
[0075] The I/O interface 106 is in communication with the processor 102. The I/O interface 106 may include a network interface and may be a wired or wireless interface for establishing a remote connection with, for example, a remote server, an external data source, a remote computer, etc. For instance, the I/O interface 106 may be an Ethernet port, a WAN port, a TCP port, etc.
[0076] The processor 102, the memory 101 and the I/O interfaces 106 may be linked via bus connections.
[0077] The stimulus generator 103 is a device to generate certain signals (e.g. primary signals, modulation signals, trigger signals) for presentation to a user. For instance, the stimulus generator
103 may be one or more of a display (may include display 104), a speaker, one or more motors for producing a vibration, a dispenser for one or more smells, one or more LEDs (light emitting diodes), etc. The stimulus generator 103 may include a transducer for generating the signals.
[0078] The user input interface 105 is a device through which the user may provide input to the system 100 (e.g. when performing a training session). A user input interface 105 may be, or include, a mouse, a keyboard, a joystick, a controller, a touchscreen (e.g. of display 104), a microphone (for capturing speech or sounds from the user), an eye tracker, a motion detector, etc.

[0079] The display 104 is a screen for presenting information to the user (e.g. during a training session, such as the primary stimuli, the modulator stimuli, the trigger stimuli, etc.). The display
104 may be a screen for a computer, a screen for a virtual-reality headset, a screen for an extended-reality system, a touchscreen (where the display 104 may also act as a user input interface 105), etc.
[0080] The system 100 may be, or may include (composed of the processor 102, the memory 101, etc.), a computer, such as a desktop computer, a laptop, a tablet computer, a smartphone, a virtual-reality computer system, an extended-reality computer system, etc.
[0081] In some instances, the system 100 may be connected (e.g. through an Internet connection, or through a local network such as a LAN) to a remote server or database for transmitting and optionally storing usage information on a subject using the system 100 (e.g. a duration of usage of the system, a time of start and finish of a training session using the system, a performance of a user, an identifier for a user, a position of the system (e.g. represented by GPS coordinates), etc.).
[0082] EXEMPLARY SOFTWARE ARCHITECTURE FOR GENERATING A TRAINING SESSION FOR TRAINING EXECUTIVE FUNCTION OF A SUBJECT:
[0083] Reference is now made to Figure 2, illustrating an exemplary software architecture for executive function training 200. The executive function training software architecture 200 may be implemented by system 100.
[0084] The system 100 has program code, stored in memory 101, that includes the primary signal generation module 210, the modulation signal generation module 220, the trigger signal generation module 230, the primary mnemonic generation module 240 and the performance evaluation module 250. Each of the primary signal generation module 210, the modulation signal generation module 220, the trigger signal generation module 230 and the primary mnemonic generation module 240 includes program code configured to implement the functionality of the modules as described herein.
[0085] The primary signal generation module 210 includes program code stored in memory 101 that, when executed by the processor 102, causes the processor 102 to generate the primary signal (e.g. through the stimulus generator 103 and/or on the display 104). For exemplary purposes, the primary signal may be presented as an image or part of an image on the display 104. However, it will be understood that the primary signal may trigger other senses of the user, being a sound, a taste, a tactile sensation, a smell, or a combination thereof. Exemplary primary signals are presented in Figures 5A-5H.
[0086] The primary signal generation module 210 causes the generation of periodic primary signals over time during the course of a training session. In some instances, the time between generations of primary signals may be constant, while in other instances, the time between generations of primary signals may be varied.
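By way of non-limiting illustration, the constant or varied inter-signal timing described above may be sketched as follows (the function name `next_interval` and its parameters are illustrative assumptions, not part of the disclosure):

```python
import random

def next_interval(base_s: float, jitter_s: float = 0.0) -> float:
    """Return the delay, in seconds, before the next primary signal.

    With jitter_s == 0 the interval is constant; otherwise it is drawn
    uniformly from [base_s - jitter_s, base_s + jitter_s], varying the
    time between generations of primary signals.
    """
    if jitter_s <= 0:
        return base_s
    return random.uniform(base_s - jitter_s, base_s + jitter_s)
```

A session scheduler would wait `next_interval(...)` seconds between successive primary signals.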
[0087] The modulation signal generation module 220 includes program code stored in memory 101 that, when executed by the processor 102, causes the processor 102 to generate a modulation signal (e.g. on the display 104 or through the stimulus generator 103). The primary signal generation module 210 may call the modulation signal generation module 220 to produce the modulation signal along with, or after, the primary signal. In some instances, the modulation signal may be generated with each generation of a primary signal. In other instances, the modulation signal may be generated only with respect to certain primary signals, where not all primary signals will be accompanied by a modulation signal. For exemplary purposes, the modulation signal may be presented as an image or part of an image on the display 104. However, it will be understood that the modulation signal may trigger other senses of the user, being a sound, a taste, a tactile sensation, a smell, or a combination thereof. Exemplary modulation signals are presented in the second, third and fourth columns of Figure 4.
[0088] In some embodiments, the modulation signal generation module 220 may cause the
processor 102 to generate a plurality of modulation signals with respect to a single generated primary signal. The user is presented with a plurality of modulation signals, adding to the complexity of the analysis performed by the user to execute the appropriate action, where determining the appropriate action requires discerning the meaning behind the combination of modulation signals, combined with the primary signal.
[0089] The trigger signal generation module 230 includes program code stored in memory 101 that, when executed by the processor 102, causes the processor 102 to generate a trigger signal (e.g. on the display 104 or through the stimulus generator 103). The primary signal generation module 210 may call the trigger signal generation module 230 to produce the trigger signal along with, or after, the primary signal. The modulation signal generation module 220 may call the trigger signal generation module 230 to produce the trigger signal along with, or after, the modulation signal. The trigger signal may be a modification of the primary signal and/or the modulation signal. The trigger signal may be a new symbol presented on a display 104 along with the primary signal, or the primary signal and the modulation signal. In some implementations, the trigger signal may be presented along with the primary signal, along with the modulation signal, or along with the primary signal and modulation signal. In some implementations, the trigger signal may be presented after the primary signal, after the modulation signal, or after the primary signal and modulation signal when the primary signal and modulation signal are presented together. In some instances, only some of the modulation signals may be presented with a trigger signal. In some instances, only some of the primary signals may be presented with a trigger signal. In some instances, each of the modulation signals may be accompanied by a trigger signal. In some instances, each of the primary signals may be presented with a trigger signal. For exemplary purposes, the trigger signal may be presented as an image or part of an image on the display 104. However, it will be understood that the trigger signal may trigger other senses of the user, being a sound, a taste, a tactile sensation, a smell, or a combination thereof.
[0090] The mnemonic generation module 240 includes program code stored in memory 101 that, when executed by the processor 102, causes the processor 102 to generate one or more mnemonics during the course of a training session (e.g. on the display 104, or by the stimulus generator 103). The mnemonic may be a word or image presented on a display 104. The mnemonic may be a word or sound generated through a speaker by the stimulus generator 103. The mnemonic may be a smell produced by the stimulus generator 103. A command to generate information for the user is produced by the mnemonic generation module 240, the information indicative of if the mnemonic should be retained or discarded by the user during the course of a training session, the information determined as a function of a set of rules defined at the executive function training software architecture 200, and conveyed to the user. For instance, the information may be a colour, where words or images of a certain colour are to be retained by the user, and words or images of another colour are to be discarded by the user. In other instances, the information may be a font, where words or images of a certain font are to be retained by the user, and words or images of another font are to be discarded by the user. In some instances, the information may be a tactile sensation, where the presentation of each mnemonic is accompanied by a tactile sensation, where a first type of tactile sensation gives the user information to discard a given mnemonic, and a different type of tactile sensation gives the user information to retain a given mnemonic.
[0091] The performance evaluation module 250 includes program code stored in memory 101 that, when executed by the processor 102, causes the processor 102 to evaluate a performance of a user during a specific training session in accordance with a set of rules defined at the executive function training software architecture 200 and the primary signals, modulation signals, trigger signals and mnemonics generated during the course of the training session from commands issued by one or more of the primary signal generation module 210, the modulation signal generation module 220, the trigger signal generation module 230 and the mnemonic generation module 240. [0092] When the primary signal generation module 210 causes the generating of a primary signal during the course of the training session, the performance evaluation module 250 gathers or receives information on the primary signal type of the primary signal which is generated by the primary signal generation module 210.
[0093] When the modulation signal generation module 220 causes the generating of a modulation signal during the course of the training session, the performance evaluation module 250 gathers or receives information on the modulation signal type of the modulation signal which is generated by the modulation signal generation module 220.
[0094] When the trigger signal generation module 230 causes the generating of a trigger signal during the course of the training session, the performance evaluation module 250 gathers or receives information on the trigger signal type of the trigger signal which is generated by the trigger signal generation module 230.
[0095] The performance evaluation module 250 also receives or gathers information on the
action performed by the user following a generating of each primary signal by the primary signal generation module 210. The information on the action performed by the user may be transmitted or gathered from the user input interface 105, where the user performs an action on the user input interface. In some instances, a camera or microphone may also be used to record information on the action performed by the user following the generating of the primary signal. A time between the generating of the primary signal (or of the modulation signal, or of the trigger signal) and the performing of the action by the user may be recorded by the performance evaluation module 250.
[0096] The performance evaluation module 250 generates a query to retrieve from memory a set of rules defining an expected action in accordance with a generated primary signal, presented alone, or combined with one or more of the modulation signal(s) and the trigger signal(s). The performance evaluation module 250 then causes the processor 102 to compare the received action performed by the user following the generating of a primary signal (optionally combined with the modulation signal and/or the trigger signal) with the expected action. The expected action is determined based on the primary signal type when the primary signal is presented alone, and, when the primary signal is combined with one or more of the modulation signal(s) and the trigger signal(s), further based on, respectively, the modulation signal type of each modulation signal (or combination of modulation signals) and/or the trigger signal type of each trigger signal (or combination of trigger signals).
[0097] The processor 102 is caused by the performance evaluation module 250 to perform the comparison after each primary signal during the course of the training session. A value indicative of the number of matches between the performed actions by the user and the expected actions by the user may be generated by the processor 102.
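A minimal sketch of the per-trial comparison and match count described above (the function name and the list-based data representation are assumptions for illustration):

```python
def count_matches(performed, expected):
    """Count trials where the user's action equals the expected action.

    A None entry represents "no action", which may itself be the expected
    response (e.g. when the rules call for inhibiting an action).
    """
    return sum(1 for p, e in zip(performed, expected) if p == e)
```

The returned value is indicative of the number of matches between the actions performed by the user and the expected actions over the session.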
[0098] When the mnemonic generation module 240 causes the generating of a mnemonic during the course of the training session, the performance evaluation module 250 gathers or receives information on the mnemonic that is generated by the mnemonic generation module 240 and the information generated by the mnemonic generation module 240 to indicate to the user if the mnemonic should be retained (a retention mnemonic) or discarded (a distractor mnemonic). The performance evaluation module 250 then queries memory 101 to retrieve the set of rules defining if a mnemonic is to be retained or discarded based on the information presented to the user along with the mnemonic. The performance evaluation module 250 receives input from the user (e.g. through the user input interface 105) identifying which of the mnemonics presented during the course of the training session the user believes should have been retained (retention mnemonics). The performance evaluation module 250 causes the processor 102 to compare the mnemonics identified by the user with the mnemonics that are expected to be retained by the user based on the set of rules. The performance evaluation module 250 may cause the processor 102 to output a score or value indicative of a performance of the user with respect to the mnemonics generated by the mnemonic generation module 240.
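The comparison of user-reported mnemonics against the expected retention mnemonics could, for instance, be scored as follows (a sketch; the +1/-1 weighting is an assumption, as the disclosure does not fix a scoring formula):

```python
def mnemonic_score(reported, retention, distractors):
    """Score mnemonic recall: one point per retention mnemonic correctly
    reported, minus one point per distractor mnemonic wrongly reported."""
    reported = set(reported)
    hits = len(reported & set(retention))
    false_alarms = len(reported & set(distractors))
    return hits - false_alarms
```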
[0099] The performance evaluation module 250 causes the processor 102 to generate metrics, a value or a score on a performance of the user during a training session based on the comparison of the expected actions following the generation of the primary signals with the actions performed by the user (not performing an action may be considered an action by the software architecture 200), and, in some instances, based on a comparison of the mnemonics retained by the user and provided as input to the system 100 with the retention mnemonics that are expected to be retained by the user based on the set of rules.
[0100] The performance evaluation module 250 may cause the processor 102 to compare the performance of the user between different training sessions, and may cause a modulation of a difficulty of a future training session based on the comparison (e.g. by adding more modulation signal types, trigger signals, more rules regarding the mnemonics to be retained or discarded, etc.)
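One possible realisation of this inter-session difficulty modulation (the thresholds and step logic are illustrative assumptions only, not values taken from the disclosure):

```python
def adjust_difficulty(level, current_score, previous_score,
                      raise_at=0.85, lower_at=0.50):
    """Raise the difficulty level when performance is high and not
    declining relative to the past session; lower it when performance
    falls below a floor; otherwise keep the level unchanged."""
    if current_score >= raise_at and current_score >= previous_score:
        return level + 1
    if current_score < lower_at:
        return max(1, level - 1)
    return level
```

A higher level would then map onto a more complex set of rules (more signal types, more trigger signals, etc.).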
[0101] EXEMPLARY METHOD OF GENERATING AN EXECUTIVE FUNCTION TRAINING SESSION FOR A USER:
[0102] Reference is now made to Figure 3, illustrating an exemplary method 300 of generating an executive function training session for a user.
[0103] A set of rules is defined at step 310 (e.g. through the receipt of user input; adapted by the software architecture to increase or decrease the difficulty of the training session). The set of rules establishes an expected action performed by the user in accordance with a primary signal of a primary signal type that is presentable to the user, where the primary signal may be adjusted in accordance with one or more modulation signals, and one or more trigger signals. Each primary signal type may be associated with an expected action. For instance, each primary signal type may be associated with a different action (in some cases, more than one primary signal type may be associated with a same expected action).
[0104] The set of rules further defines how the expected actions associated with each of the primary signal types may be:
- adapted (where a different action is expected) when one or more modulation signals, each modulation signal of a given modulation signal type, is generated in relation to a primary signal. A set of rules may include a plurality of modulation signal types. A modulation signal type combined with a primary signal type may generate an expected action. The expected action following a combination of a primary signal of a primary signal type and a modulation signal of a modulation signal type may be different from (or, in some embodiments, the same as) the expected action from the primary signal of the same primary signal type alone. For instance, a set of rules may include three primary signal types, A, B and C, each associated with a different expected action, 1, 2 and 3. Primary signal type A is associated with expected action 1, primary signal type B is associated with expected action 2 and primary signal type C is associated with expected action 3. The set of rules may include two modulation signals, a and b. The set of rules may define that a combination of primary signal “A” with modulation signal “a” results in action 4 as expected input, a combination of primary signal “A” with modulation signal “b” results in action 5 as expected input, and so forth. In some instances, a combination of a primary signal and a modulation signal may result in the same expected action as when the primary signal is generated alone. For instance, if the primary signal type A is associated with expected action 1, then a combination of the primary signal type A and of the modulation signal “a” may also be associated with expected action 1;
- adapted to modulate the expected action in accordance with a trigger signal of a trigger signal type which is generated and conveyed to the user. The trigger signal types may be defined in the set of rules to adjust the action that is expected from the user from the presentation of a primary signal, or a combination of a primary signal and of a modulation signal. For instance, the trigger signal type may be defined in the set of rules to “accelerate”, “slow down”, “stop”, “delay”, etc. the action associated with the primary signal, or the primary signal combined with the modulator signal. As such, the expected action by the user, to assess the user’s performance during the training session, may be defined in accordance with the trigger signal outputted and conveyed to the user, based on the set of rules, combined with the primary signal, and optionally the modulation signal. The trigger signal may be presented as an alteration of the appearance of the primary signal and/or the modulation signal (e.g. a change in colour, in size, in opacity, in orientation, etc. of the primary signal and/or the modulation signal).
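The rule structure described in this list may be sketched as a lookup table (the signal and action names follow the A/B/C example above; the table layout itself is an illustrative assumption):

```python
# Expected action for a primary signal type, alone or combined with a
# modulation signal type (None means no modulation signal accompanies it).
BASE_RULES = {
    ("A", None): "action_1", ("B", None): "action_2", ("C", None): "action_3",
    ("A", "a"): "action_4", ("A", "b"): "action_5",
}

# A trigger signal type adjusts *how* the action is to be performed.
TRIGGER_EFFECTS = {"accelerate": "accelerated", "slow down": "slowed",
                   "stop": "inhibited", "delay": "delayed"}

def expected_action(primary, modulation=None, trigger=None):
    """Return (mode, action) expected from the user for this combination."""
    action = BASE_RULES[(primary, modulation)]
    mode = TRIGGER_EFFECTS.get(trigger, "executed")
    return (mode, action)
```

A performance evaluator could then compare the user's input against the `(mode, action)` pair.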
[0105] The set of rules further defines identifier types for mnemonics presented to the user during the course of a training session, where an identifier of a specific identifier type indicates if the user is to retain the mnemonic (a retention mnemonic) or discard the mnemonic (a distractor mnemonic) during the course of a training session. Exemplary identifier types may be a colour, where a first colour may indicate to the user that the user should retain the mnemonic (e.g. the word, the symbol), and a second colour may indicate to the user that the user should discard the mnemonic (e.g. the word, the symbol). Other exemplary identifier types may be an orientation of the mnemonic, a size of the mnemonic, an opacity of the mnemonic, a position of the mnemonic on a display, etc.
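The identifier-type mapping above may be sketched as follows (the colour names are placeholders; any identifier type, e.g. an orientation or an opacity, could key the same table):

```python
# Hypothetical mapping from identifier information to mnemonic handling.
IDENTIFIER_RULES = {"first_colour": "retain", "second_colour": "discard"}

def classify_mnemonic(identifier):
    """Return whether a mnemonic bearing this identifier is a retention
    mnemonic ("retain") or a distractor mnemonic ("discard")."""
    return IDENTIFIER_RULES[identifier]
```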
[0106] The set of rules is stored in memory of the system, and may be queried or analyzed to determine a type of primary signal to generate, a type of modulation signal to generate, and a type of trigger signal to generate. The set of rules is retrieved from memory and analyzed for determining a performance of a user during a training session, to determine if the actions performed by the user during the course of the training session match the expected actions in accordance with the set of rules (where an increased match between the actions performed by the user and the expected actions is correlated with a better performance during the training session). The set of rules may be further retrieved and analyzed to determine if the mnemonics inputted by the user as being mnemonics to retain correspond to those mnemonics expected to be retained in accordance with the identifier types presented as information to the user along with the mnemonics.
[0107] The set of rules may be communicated to the user prior to a training session, by, e.g. displaying the set of rules to the user on a display of a computing device.
[0108] A command is generated to initiate a training session at step 320. Metadata defining the training session is generated for the training session dataset. The metadata may include, for instance, a name of the training session, an identifier of the user (e.g. name) performing the training session, a date, a start time, a length, etc. A set of rules is retrieved or analyzed for the initiated training session. The command to initiate the training session is followed or accompanied by a command to generate a first primary signal in accordance with the set of rules for the training session.
[0109] A primary signal is generated at step 330. In one embodiment, the primary signal may be presented on a display. In other embodiments, the primary signal may be a sound, a taste, a
sensation, etc. The generated primary signal is provided to the user.
[0110] In some embodiments, the primary signal may be accompanied or followed by a modulation signal. In these embodiments, the modulation signal is generated at step 340. In one embodiment, the modulation signal may be presented on a display. In other embodiments, the modulation signal may be a sound, a taste, a sensation, etc. The generated modulation signal is provided to the user.
[0111] In some instances, a trigger signal may be generated at step 350. In some examples, the trigger signal may accompany or follow the modulation signal. In some examples, the trigger signal may accompany or follow the primary signal. The trigger signal may modulate the presentation (e.g. the appearance) of the primary signal and/or the modulation signal, providing additional information to the user, the user expected to recognize the difference in the received primary signal and/or modulation signal.
[0112] In one embodiment, the trigger signal may be presented on a display. In other embodiments, the trigger signal may be a sound, a taste, a sensation, etc. The generated trigger signal is provided to the user.
[0113] In some instances, one or more mnemonics may be generated for presentation to the user at step 360. In one embodiment, the one or more mnemonics may be presented on a display (e.g. a word or an image). In some instances, the one or more mnemonics may surround the primary signal. In other embodiments, the one or more mnemonics may be a sound, a taste, a sensation, etc. The generated one or more mnemonics is provided to the user.
[0114] Identifier information may be presented to the user along with the mnemonic to provide the user with information to ascertain if the mnemonic is a retention mnemonic or a distractor mnemonic. For instance, the identifier information may be a colour, where a first colour is indicative of a retention mnemonic and a second colour is indicative of a distractor mnemonic.
[0115] The one or more mnemonics may accompany or follow the generation of the primary signal. The one or more mnemonics may accompany or follow the generation of the modulation signal. The one or more mnemonics may accompany or follow the generation of the trigger signal. In some examples, a time of generation of the one or more mnemonics may be independent from an instance of generation of the primary signal, the modulation signal and/or the trigger signal. The one or more mnemonics may be generated randomly throughout the duration of the training session.
[0116] As shown in Figure 6, the mnemonics 611 and 612 may be presented on a display next to the primary signal 500 (combined with the modulation signal). The mnemonic “TWIST” is in a first colour, the colour being the identifier information, the first colour identifying “TWIST” as a retention mnemonic based on the set of rules. The mnemonic “DODGE” is in a second colour, the colour being the identifier information, the second colour identifying “DODGE” as a distractor mnemonic based on the set of rules.
[0117] The input provided by the user as a resulting action from the primary signal, combined with the modulation signal and/or the trigger signal, is received at step 370. The input may be provided by the user using a user input interface.
[0118] A determination of the state of the training session is queried at step 380. The duration of the training session may be defined by the system as a time period, where a determination of the lapse of the time period causes an end of the training session. The duration of the training session may alternatively be defined by a number of primary signals to be generated for a training session, where each generating of a primary signal increments a counter of the number of primary signals generated. When the value of the counter matches a threshold value for the number of primary signals to be generated for a given training session, a command may be issued to end the training session.
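The count-based termination at step 380 may be sketched as a loop (a minimal sketch; the signal log is a stand-in for steps 330 to 370):

```python
def run_session(max_signals):
    """Generate primary signals until the counter reaches the threshold
    defined for the training session (count-based termination)."""
    generated = 0
    log = []
    while generated < max_signals:   # step 380: session has not yet ended
        generated += 1               # step 330: a primary signal is generated
        log.append(f"signal_{generated}")
    return log                       # threshold reached: end the session
```

A time-based variant would instead compare an elapsed time against the defined time period.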
[0119] If a determination is made at step 380 that the training session has not ended, then steps 330 and 370, and optionally steps 340, 350 and/or 360, are repeated. For each sequence involving steps 330 and 370, and optionally steps 340, 350 and/or 360, a new primary signal is generated of a given primary signal type, and optionally accompanied or followed by one or more of a modulation signal, a trigger signal and one or more mnemonics.
[0120] If a determination is made at step 380 that the training session has ended, then the resulting actions performed by the user during the course of the training session are compared to expected results for the training session at step 390. The expected results are generated from the set of rules and each of the primary signals (optionally combined with the modulation signal and/or the trigger signal) generated during the course of the training session. An expected action is determined from the primary signal type of each generated primary signal, and optionally the modulation signal type of the modulation signal related to the primary signal, and/or optionally the trigger signal type of the trigger signal related to the primary signal.
[0121] In some instances, the user may provide input (e.g. at the end of the training session, or during the training session) regarding which of the mnemonics presented during the course of the training session are to be retained. A comparison of the results provided by the user with the expected results regarding the generated mnemonics is determined. The determination may be performed using the set of rules specific to the mnemonics, based on the identifier information provided to the user with each of the mnemonics, where the identifier information provides the user with the identifier type. The identifier type of the generated mnemonic establishes if the mnemonic is to be retained by the user, or discarded by the user. For instance, an identifier type of a first colour is associated with a mnemonic to retain, while an identifier type of a second colour is associated with a mnemonic to discard.
[0122] A performance of the user is measured at step 390, the performance being indicative of a state of executive function performance of the user. The measurement of the performance of the user may result in the generating of a score (e.g. a value, a percentage, a ratio) indicative of an overall correctness of the results provided by the user (with respect to the performed actions and, optionally, the mnemonics) when compared to the expected answers. A calculation of a change of performance may be performed for the user between a current training session and a past training session. A calculation may be performed to compare the user to a cohort of other users, where the results of the user may be compared to the results of the members of the cohort (e.g. when performing a training session of an equivalent difficulty).
[0123] The results of a training session may be weighted to set a difficulty level of a future training session. A more difficult training session results in a more complex set of rules (e.g. adding additional primary signal types with corresponding expected actions by the user, adding additional modulation signal types, or adding one or more sets of modulation signals with modulation signal types, further modulating the first set of modulation signals with given modulation signal types, adding further trigger signal types, etc.)
[0124] In some examples, a selection of primary signals of a given primary signal type, and optionally modulation signals of a given modulation signal type and/or trigger signals of a given trigger signal type, may be performed randomly by the system using, e.g., a random number generator, where a number is associated with a given primary signal type, modulation signal type and/or trigger signal type.
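The random selection of signal types may be sketched as follows (a minimal sketch using Python's standard random number generator; the function name is an assumption):

```python
import random

def pick_signal_type(signal_types, rng=random):
    """Select a signal type by drawing a random index, where each number
    produced by the RNG is associated with a given signal type."""
    index = rng.randrange(len(signal_types))
    return signal_types[index]
```

Passing a seeded `random.Random` instance makes the selection reproducible, e.g. for replaying a session.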
[0125] In some examples where the training session is generated using a virtual reality headset, the system further determines a location of the primary signals and optionally the modulation
signals, the trigger signals and/or the mnemonics in a virtual three-dimensional space of the virtual-reality headset, to give the user a simulation of a 360-degree space, or three-dimensional space. The signal(s) are visible when the user wearing the virtual-reality headset performs a movement which aligns the field of view of the user with the position of the signal in the virtual space, the signal appearing in the field of view of the user as represented by the images generated on the display of the virtual-reality headset. As such, the system may measure, using one or more positional sensors (e.g. gyroscopes, accelerometers, etc.) a change in orientation of the head or of the body of the user, to orient the user in virtual space (e.g. mapping the user in virtual space from the change in position and/or orientation of the user in real space), where the system causes a change in the image stream generated on the display in accordance with the change in orientation of the head or of the body of the user in the real world, determined from the positional sensors. A translation of the objects appearing in the virtual space corresponds to the change in the position and/or orientation of the user in real space. This simulation enables the training session to be deployed in the virtual space in a manner that imitates the user navigating in the real space. Similarly, in some instances, when one or more of the primary signals of a given primary signal type, and optionally modulation signals of a given modulation signal type and/or trigger signals of a given trigger signal type, are sound-based, holophonic sound, or three-dimensional sound, may be used to imitate the sound being produced in a three-dimensional space, simulating that the sound is originating from a given direction. As such, the holophonic-based sound may guide the user to move around (rotate) in real space, causing the system to adjust the projected image on the display of the virtual-reality headset accordingly, the sound, e.g., acting as a cue regarding the direction of another of the primary signals (or modulation signals, trigger signals and/or mnemonics) which may be displayed on the screen.
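The alignment of the user's field of view with a signal's position in the virtual space may be sketched as a yaw-angle test (a simplified, horizontal-only illustration; the function and the 90-degree default are assumptions):

```python
def in_field_of_view(user_yaw_deg, signal_yaw_deg, fov_deg=90.0):
    """True when a signal placed at signal_yaw_deg around the user falls
    within a horizontal field of view centred on the user's heading."""
    # Wrap the angular difference into [-180, 180) before comparing.
    delta = (signal_yaw_deg - user_yaw_deg + 180.0) % 360.0 - 180.0
    return abs(delta) <= fov_deg / 2.0
```

A renderer would draw the signal only when this test passes for the current headset orientation reported by the positional sensors.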
[0126] It will be further understood that in some examples, the primary signals (or modulation signals, trigger signals and/or mnemonics) may be generated using a hologram.
[0127] In some instances, the signals and stimuli may be presented in a virtual reality 3D space (where information is displayed across 360 degrees along one, two or three of axes x, y and z). The system implementing VR technology may include one or more position sensors, such as accelerometers, gyroscopes, etc. to detect a movement of the head and/or body of the user, where the image displayed by the system adapts in accordance with the head or body movement of the user, to simulate virtually the user moving in real space. As such, the system may require that the
user physically move in order to perceive certain of the signals and stimuli. Each of the signals/stimuli may be associated with coordinates (x, y and z coordinates) in the 3D virtual space, at which location the signal and/or stimuli is to be displayed when appropriate, in accordance with the program code executed by the processor of the system. As such, the stimuli and signals may be displayed as part of a volumetric video or a volumetric-based reality, where a user can experience a recording with six degrees of freedom, namely translation along the X, Y and Z axes, but also pitch, yaw and roll.
[0128] As such, in some instances, the primary signals may be associated with a set of coordinates for generating the primary signal in the virtual 3D space. In some instances, the modulator signals may be associated with a set of coordinates for generating the modulator in the virtual 3D space. In some instances, the mnemonics may be associated with a set of coordinates for generating the mnemonic in the virtual 3D space.
[0129] EXEMPLARY PRIMARY SIGNALS AND MODULATION SIGNALS:
[0130] Reference is now made to Figures 5A-5H, illustrating exemplary primary signals 500 each combined with a modulation signal 510.
[0131] The differences between the primary signals may be subtle, as in the examples provided in Figures 5A-5H, for instance, by modifying an orientation of a line found in zone 500B of the primary signal.
[0132] The modulation signal 520 may also be generated within the primary signal 500 on a display, occupying a space defined by the primary signal 500. However, it will be understood that the modulation signal does not have to be presented within the primary signal, but can be presented next to the primary signal.
[0133] It will be understood that the differences between the appearances of primary signals of the different primary signal types may be more significant than those displayed in Figures 5A-5H (e.g. the framework of the primary signals may vary for different primary signal types).
[0134] It will be further understood that the primary signals and modulation signals of Figures 5A-5H are provided for illustrative purposes only, and that the appearance of the primary signals and/or modulation signals may be different from those of Figures 5A-5H without departing from the present teachings. Moreover, the primary signals and/or modulation signals may, in some instances, not be an image, but could be, for instance, a sound, a smell, a feeling, a taste, or a combination thereof.
[0135] EXEMPLARY SET OF RULES FOR GENERATING A TRAINING SESSION:
[0136] Reference is now made to Figure 4, illustrating an exemplary set of rules determined for generating a training session. The example provided in Figure 4 defines primary signal types, modulation signal types and trigger signal types.
[0137] The first column of Figure 4 illustrates exemplary primary signals each combined with a modulation signal. The second column, third column and fourth column illustrate exemplary trigger signals, where the trigger signal type of the trigger signal, in combination with the primary signal type and the modulation signal type, determines, in the set of rules, which action is to be expected from the user when presented with the combination of the primary signal, the modulation signal and the trigger signal.
[0138] The combinations of primary signals and modulation signals illustrated in the leftmost column of Figure 4 correspond to those of Figures 5A-5H.
[0139] For instance, when the first variation of the primary signal and modulation signal is combined with the first trigger signal, illustrated as a white diamond, the expected action is an execution of action 1. However, when the same first variation of the primary signal and modulation signal is combined with a trigger signal displayed as a black diamond, the expected action is instead an inhibition of action 1 (where action 1 is not performed). When the same first variation of the primary signal and modulation signal is combined with a trigger signal displayed as a white inverted triangle, the expected action is to perform action 1 in a slowed-down manner.
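The mapping just described can be sketched as a rule table keyed on the primary/modulation variation and the trigger symbol. The key and value names below are illustrative placeholders, not identifiers from Figure 4:

```python
# Hypothetical encoding of Figure 4's rule table: the trigger symbol
# adjusts how the action associated with a primary/modulation pair
# is to be performed (executed, inhibited, or slowed down).
RULES = {
    ("variation_1", "white_diamond"): ("action_1", "execute"),
    ("variation_1", "black_diamond"): ("action_1", "inhibit"),
    ("variation_1", "white_inverted_triangle"): ("action_1", "slow"),
}

def expected_action(variation: str, trigger: str):
    """Look up the expected action and how it must be modulated.

    Returns None when no rule covers the combination."""
    return RULES.get((variation, trigger))

print(expected_action("variation_1", "black_diamond"))  # ('action_1', 'inhibit')
```

In a full system the table would cover every variation shown in the figure; a dictionary keyed on tuples keeps the lookup constant-time regardless of how many combinations the set of rules defines.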
[0140] The trigger signals may be presented on a display along with the primary signal (and optionally the modulation signal and the mnemonics), for instance next to or within the primary signal.
[0141] The set of rules is analyzed by the system to determine if the actions performed by the user during the training session match the expected actions, based on the primary signals (and in some cases, the modulation signal and/or the trigger signal) presented to the user during the course of a training session.
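A simple version of this comparison step, matching the actions performed by the user against the expected actions and reducing the result to a performance score, could look like the following (illustrative only; the specification does not prescribe a scoring formula):

```python
def score_session(performed: list, expected: list) -> float:
    """Return the fraction of trials where the user's action matched
    the expected action derived from the presented signals and rules."""
    if not expected:
        return 0.0
    hits = sum(1 for p, e in zip(performed, expected) if p == e)
    return hits / len(expected)

performed = ["press_left", "inhibit", "press_right"]
expected = ["press_left", "inhibit", "press_left"]
print(score_session(performed, expected))  # 0.666... (2 of 3 matched)
```

A richer implementation might also weight trials by difficulty or penalize reaction-time deviations, which would feed naturally into the difficulty adjustment recited in claims 8 and 18.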
[0142] EXAMPLE 1 OF A GENERATED TRAINING SESSION:
[0143] The present example provides a non-limitative example of a session generated by the system for training executive function of a user in accordance with the present teachings. It will be understood that the present example is for illustrative purposes only, and does not limit the scope of the present teachings.
[0144] A set of rules is configured at the system, defining a set of expected actions depending on the primary signal type of the primary signal presented to a user, along with the modulation type of the modulation signal equally presented to the user (e.g. using a transducer). The set of rules may also be defined to include further changes to the expected actions in accordance with trigger signals of one or more given trigger signal types presented in association with the primary signal and/or the modulation signal. The system is configured to present the primary signals, and optionally the modulation signals and/or the trigger signals, to the user through a display of a virtual-reality headset (e.g. where the field of view of the user is determined for the virtual-reality headset, and the primary signals, and optionally the modulation signals and/or the trigger signals, are presented within the determined field of view while the user wears the virtual-reality headset).
[0145] A first primary signal type may be a smiley face, for causing the pressing of a left button of a controller as an expected action type. A second primary signal type may be a frown face, for causing the pressing of a right button of a controller as an expected action type.
[0146] A first modulation signal type may be a black circle. A second modulation type may be a white circle. The combination of the first modulation signal type with the first primary signal type causes a double-press of the left button. The combination of the second modulation signal type with the first primary signal type causes a long press of the left button. The combination of the first modulation signal type with the second primary signal type causes a press of the left button, instead of the right button. The combination of the second modulation signal type with the second primary signal type causes a long press of the right button.
[0147] A first trigger signal type may be to display the primary signal in a first colour (e.g. for this example, the first trigger signal type is to display the primary signal in blue). A second trigger signal type may be to display the primary signal in a second colour (e.g. for this example, the second trigger signal type is to display the primary signal in green). When the primary signal is presented in blue, the expected action is not to perform any action. When the primary signal is presented in green, the expected action is to repeat the action that was expected prior to the application of the trigger signal in accordance with the set of rules.
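The rules of this example can be written out as a lookup in which the trigger colour overrides the base primary/modulation combination. The string names are illustrative placeholders for the signals and button actions described above:

```python
# Expected action for each (primary, modulation) pair in Example 1.
BASE_RULES = {
    ("smiley", None): "press_left",
    ("frown", None): "press_right",
    ("smiley", "black_circle"): "double_press_left",
    ("smiley", "white_circle"): "long_press_left",
    ("frown", "black_circle"): "press_left",   # modulation flips the side
    ("frown", "white_circle"): "long_press_right",
}

def expected(primary, modulation=None, trigger=None, previous=None):
    """Resolve the expected action: blue inhibits any action, green
    repeats the previously expected action, otherwise use the base rule."""
    if trigger == "blue":
        return "no_action"
    if trigger == "green":
        return previous
    return BASE_RULES[(primary, modulation)]

print(expected("smiley", "black_circle"))  # double_press_left
print(expected("frown", trigger="blue"))   # no_action
print(expected("smiley", trigger="green", previous="press_left"))  # press_left
```

Resolving the trigger before the base rule mirrors the example's precedence: the trigger colour changes the expected action regardless of which primary/modulation pair is on screen.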
[0148] Although the invention has been described with reference to preferred embodiments, it is to be understood that modifications may be resorted to as will be apparent to those skilled in the art. Such modifications and variations are to be considered within the purview and scope of the
present invention.
[0149] Representative, non-limiting examples of the present invention were described above in detail with reference to the attached drawing. This detailed description is merely intended to teach a person of skill in the art further details for practicing preferred aspects of the present teachings and is not intended to limit the scope of the invention. Furthermore, each of the additional features and teachings disclosed above and below may be utilized separately or in conjunction with other features and teachings.
[0150] Moreover, combinations of features and steps disclosed in the above detailed description, as well as in the experimental examples, may not be necessary to practice the invention in the broadest sense, and are instead taught merely to particularly describe representative examples of the invention. Furthermore, various features of the above-described representative examples, as well as the various independent and dependent claims below, may be combined in ways that are not specifically and explicitly enumerated in order to provide additional useful embodiments of the present teachings.
Claims
1. A method for generating a training session to train cognitive executive function of a user, comprising: defining a set of rules comprising: a plurality of action types; a plurality of primary signal types, wherein each primary signal type of the plurality of primary signal types is associated with an action type of the plurality of action types, one or more modulator signal types, wherein each of the one or more modulator signal types is for causing a modulation in the action type of the plurality of action types that is associated with a signal type of the plurality of primary signal types; during a time period: periodically causing a generation of a primary signal with a primary signal type selected from the plurality of primary signal types, the generation of the primary signal for causing the user to prepare to initiate an action corresponding to an action type of the plurality of action types corresponding to the generated primary signal in accordance with the set of rules, resulting in the generation of a plurality of primary signals spread over the time period; for at least some of the generated primary signals, causing a generation of a modulator signal with a modulator signal type selected from the one or more modulator signal types, the generation of the modulator signal for indicating to the user to modulate the action type corresponding to the generated primary signal in accordance with the set of rules; receiving information on actions performed by the user during the time period; comparing the actions performed by the user to expected actions based on the generated primary signals, the generated modulator signals and the set of rules; and measuring a performance of the user based on the comparing.
2. The method as defined in claim 1, wherein the set of rules further includes information to generate a response in accordance with one or more mnemonics, further comprising: during the time period, causing periodically a generation of a mnemonic to a user for causing the user to act in accordance with the set of rules and retain information relating to the mnemonic; receiving one or more responses provided by the user corresponding to the generated
positive mnemonics and the set of rules; wherein the measuring of the performance of the user is further based on comparing the received one or more responses with expected one or more responses based on the set of rules.
3. The method as defined in claim 2, wherein the one or more mnemonics is one or more of: a sound; an image; a vibration; an odor; and a taste.
4. The method as defined in claim 2, wherein the one or more mnemonics are words.
5. The method as defined in any one of claims 2 to 4, wherein the set of rules further includes information to ignore a subset of one or more distractor mnemonics from the one or more mnemonics, wherein, during the time period, the causing periodically a generation of a mnemonic includes generating at least one of the one or more distractor mnemonics; and wherein the measuring of the performance of the user is further based on comparing the received one or more responses with expected one or more responses based on the set of rules, including the generated at least one of the one or more distractor mnemonics that are to be ignored by the user.
6. The method as defined in any one of claims 1 to 5, wherein the primary signals and the modulator signals are generated via a virtual reality headset.
7. The method as defined in any one of claims 1 to 5, wherein the primary signals and the modulator signals are generated via an extended reality headset.
8. The method as defined in any one of claims 1 to 7, during the period of time, further comprising adjusting a difficulty associated with the generated primary signals and the generated modulator signals in accordance with a performance determined from the comparing of the actions performed by the user to expected actions based on the generated primary signals, the generated modulator signals and the set of rules.
9. The method as defined in any one of claims 1 to 8, wherein one or more of the primary signal types of the plurality of primary signal types includes more than one sensory stimulus selected from an image, a word, a sound, a vibration, an odor and a taste.
10. The method as defined in any one of claims 1 to 9, wherein one or more of the modulator signal types of the one or more modulator signal types includes one or more sensory stimuli selected from an image, a sound, a vibration, an odor and a taste.
11. The method as defined in any one of claims 1 to 10, where the set of rules further comprises one or more trigger signal types for adjusting the action type resulting from the combination of a primary signal with a primary signal type and a modulator signal with a modulator signal type, the method further comprising: during the period of time, for at least some of the generated modulator signals, causing a generation of a trigger signal with a trigger signal type selected from the one or more trigger signal types, the generation of the trigger signal for indicating to the user to adapt the action type corresponding to the generated primary signal and the generated modulator signal in accordance with the set of rules, wherein the comparing the actions performed by the user to expected actions is further based on the generated trigger signals.
12. A system for generating a training session to train cognitive executive function of a user, comprising: a processor; and memory including program code that, when executed by the processor, causes the processor to: define a set of rules comprising: a plurality of action types; a plurality of primary signal types, wherein each primary signal type of the plurality of primary signal types is associated with an action type of the plurality of action types, one or more modulator signal types, wherein each of the one or more modulator signal types is for causing a modulation in the action type of the plurality of action types that is associated with a signal type of the plurality of primary signal types; during a time period: periodically cause a generation of a primary signal with a primary signal type selected from the plurality of primary signal types, the generation of the
primary signal for causing the user to prepare to initiate an action corresponding to an action type of the plurality of action types corresponding to the generated primary signal in accordance with the set of rules, resulting in the generation of a plurality of primary signals spread over the time period; for at least some of the generated primary signals, cause a generation of a modulator signal with a modulator signal type selected from the one or more modulator signal types, the generation of the modulator signal for indicating to the user to modulate the action type corresponding to the generated primary signal in accordance with the set of rules; receive information on actions performed by the user during the time period; compare the actions performed by the user to expected actions based on the generated primary signals, the generated modulator signals and the set of rules; and measure a performance of the user based on the comparing.
13. The system as defined in claim 12, wherein the set of rules further includes information to generate a response in accordance with one or more mnemonics, wherein the program code further causes the processor to: during the time period, cause periodically a generation of a mnemonic to a user for causing the user to act in accordance with the set of rules and retain information relating to the mnemonic; receive one or more responses provided by the user corresponding to the generated positive mnemonics and the set of rules; wherein the measuring of the performance of the user is further based on comparing the received one or more responses with expected one or more responses based on the set of rules.
14. The system as defined in claim 13, wherein the one or more mnemonics is one or more of: a sound; an image; a vibration; an odor; and a taste.
15. The system as defined in claim 13, wherein the one or more mnemonics are words.
16. The system as defined in any one of claims 12 to 15, wherein the set of rules further includes information to ignore a subset of one or more distractor mnemonics from the one or more
mnemonics, and wherein, during the time period, the causing periodically a generation of a mnemonic includes generating at least one of the one or more distractor mnemonics; and wherein the measuring of the performance of the user is further based on comparing the received one or more responses with expected one or more responses based on the set of rules, including the generated at least one of the one or more distractor mnemonics that are to be ignored by the user.
17. The system as defined in any one of claims 12 to 16, wherein the primary signals and the modulator signals are generated via a virtual reality headset.
18. The system as defined in any one of claims 12 to 17, during the period of time, wherein the program code further causes the processor to adjust a difficulty associated with the generated primary signals and the generated modulator signals in accordance with a performance determined from the comparing of the actions performed by the user to expected actions based on the generated primary signals, the generated modulator signals and the set of rules.
19. The system as defined in any one of claims 12 to 18, where the set of rules further comprises one or more trigger signal types for adjusting the action type resulting from the combination of a primary signal with a primary signal type and a modulator signal with a modulator signal type, the program code further causing the processor to: during the period of time, for at least some of the generated modulator signals, cause a generation of a trigger signal with a trigger signal type selected from the one or more trigger signal types, the generation of the trigger signal for indicating to the user to adapt the action type corresponding to the generated primary signal and the generated modulator signal in accordance with the set of rules, wherein the comparing the actions performed by the user to expected actions is further based on the generated trigger signals.
20. A non-transitory computer-readable medium having stored thereon program instructions for generating a training session to train cognitive executive function of a user, the program instructions executable by a processing unit for: defining a set of rules comprising: a plurality of action types; a plurality of primary signal types, wherein each primary signal type of the plurality
of primary signal types is associated with an action type of the plurality of action types, one or more modulator signal types, wherein each of the one or more modulator signal types is for causing a modulation in the action type of the plurality of action types that is associated with a signal type of the plurality of primary signal types; during a time period: periodically causing a generation of a primary signal with a primary signal type selected from the plurality of primary signal types, the generation of the primary signal for causing the user to prepare to initiate an action corresponding to an action type of the plurality of action types corresponding to the generated primary signal in accordance with the set of rules, resulting in the generation of a plurality of primary signals spread over the time period; for at least some of the generated primary signals, causing a generation of a modulator signal with a modulator signal type selected from the one or more modulator signal types, the generation of the modulator signal for indicating to the user to modulate the action type corresponding to the generated primary signal in accordance with the set of rules; receiving information on actions performed by the user during the time period; comparing the actions performed by the user to expected actions based on the generated primary signals, the generated modulator signals and the set of rules; and measuring a performance of the user based on the comparing.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363518056P | 2023-08-07 | 2023-08-07 | |
| US63/518,056 | 2023-08-07 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025030243A1 true WO2025030243A1 (en) | 2025-02-13 |
Family
ID=94533153
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CA2024/051040 Pending WO2025030243A1 (en) | 2023-08-07 | 2024-08-07 | System for generating a session for training executive function of a user and method of use thereof |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025030243A1 (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2836277A1 (en) * | 2012-04-10 | 2015-02-18 | Apexk Inc. | Interactive cognitive-multisensory interface apparatus and methods for assessing, profiling, training, and/or improving performance of athletes and other populations |
| EP3863000A1 (en) * | 2010-11-11 | 2021-08-11 | The Regents Of The University Of California | Enhancing cognition in the presence of distraction and/or interruption |
| CN114201053A (en) * | 2022-02-17 | 2022-03-18 | 北京智精灵科技有限公司 | Cognition enhancement training method and system based on neural regulation |
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP3863000A1 (en) * | 2010-11-11 | 2021-08-11 | The Regents Of The University Of California | Enhancing cognition in the presence of distraction and/or interruption |
| EP2836277A1 (en) * | 2012-04-10 | 2015-02-18 | Apexk Inc. | Interactive cognitive-multisensory interface apparatus and methods for assessing, profiling, training, and/or improving performance of athletes and other populations |
| CN114201053A (en) * | 2022-02-17 | 2022-03-18 | 北京智精灵科技有限公司 | Cognition enhancement training method and system based on neural regulation |
Non-Patent Citations (1)
| Title |
|---|
| HARDY JOSEPH L., NELSON ROLF A., THOMASON MORIAH E., STERNBERG DANIEL A., KATOVICH KIEFER, FARZIN FARAZ, SCANLON MICHAEL: "Enhancing Cognitive Abilities with Comprehensive Training: A Large, Online, Randomized, Active-Controlled Trial", PLOS ONE, PUBLIC LIBRARY OF SCIENCE, US, vol. 10, no. 9, US , pages e0134467, XP093280127, ISSN: 1932-6203, DOI: 10.1371/journal.pone.0134467 * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 24850419 Country of ref document: EP Kind code of ref document: A1 |