SYSTEMS AND METHODS TO DETECT BREATHING PARAMETERS
AND PROVIDE BIOFEEDBACK
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This Application claims priority to U.S. Provisional Patent Application No. 62/408,677, filed 14 Oct 2016, and U.S. Provisional Patent Application No. 62/456,105, filed 7 Feb 2017, and U.S. Provisional Patent Application No. 62/480,496, filed 2 Apr 2017.
TECHNICAL FIELD
[0002] This application relates to head-mounted systems to measure facial temperature.
ACKNOWLEDGMENTS
[0003] Gil Thieberger would like to thank his dear and beloved teacher, Lama Dvora-hla, for her extraordinary teachings and manifestation of wisdom, love, compassion and morality, and for her endless efforts, support, and skill in guiding him and others on their paths to freedom and ultimate happiness. Gil would also like to thank his beloved parents for raising him exactly as they did.
BACKGROUND
[0004] The manifestation of many physiological responses involves temperature changes at various regions of the human face (or near it). For example, breathing can cause changes to temperatures on regions of the face (e.g., the upper lip) and/or in front of it (due to the exhale stream). In another example, various mental states may cause distinct thermal patterns on the forehead. Thus, monitoring and analyzing such temperatures can be useful for many health-related and life-logging related applications. However, collecting such data over time while people go through their daily activities can be very difficult. Typically, collection of such data involves utilizing thermal cameras that are bulky, expensive, and need to be continually pointed at a person's face. Additionally, due to people's movements in their day-to-day activities, collecting the required measurements often involves performing various complex image analysis procedures, such as procedures involving image registration and face tracking. Therefore, there is a need to be able to collect thermal measurements at various regions of a person's face. Preferably, the measurements are to be collected over a long period of time, while the person performs various day-to-day activities.
SUMMARY
[0005] Some aspects of this disclosure involve various embodiments of wearable systems configured to collect thermal measurements related to respiration and/or brain activity. Optionally, the systems include a frame configured to be worn on a user's head, and one or more non-contact thermal cameras (e.g., thermopile- or microbolometer-based sensors). Each thermal camera is small and lightweight, located close to the user's face, and may be physically coupled to the frame. In one embodiment, each thermal camera does not occlude any of the user's mouth and nostrils and is configured to take thermal measurements of one or more of the following: a portion of the right side of the user's upper lip, a portion of the left side of the user's upper lip, and/or a portion of the user's mouth. The thermal measurements are forwarded to a computer that calculates breathing-related parameters, such as the breathing rate, an extent to which the breathing was done through the mouth, an extent to which the breathing was done through the nostrils, and a ratio between exhaling and inhaling durations. In another embodiment, a thermal camera takes measurements of the forehead, which are indicative of brain activity of the user. Optionally, the thermal measurements of the forehead are used to detect a state of the user, such as whether the user is exhibiting symptoms of anger, Attention Deficit Hyperactivity Disorder (ADHD), and/or a headache. In some embodiments, the thermal measurements of the systems described above may be used to provide biofeedback sessions, such as neurofeedback and/or breathing biofeedback.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] The embodiments are herein described by way of example only, with reference to the following drawings:
[0007] FIG. 1a and FIG. 1b illustrate various inward-facing head-mounted cameras coupled to an eyeglasses frame;
[0008] FIG. 2 illustrates inward-facing head-mounted cameras coupled to an augmented reality device;
[0009] FIG. 3 illustrates head-mounted cameras coupled to a virtual reality device;
[0010] FIG. 4 illustrates a side view of head-mounted cameras coupled to an augmented reality device;
[0011] FIG. 5 illustrates a side view of head-mounted cameras coupled to a sunglasses frame;
[0012] FIG. 6 to FIG. 9 illustrate HMSs configured to measure various ROIs relevant to some of the embodiments described herein;
[0013] FIG. 10 to FIG. 13 illustrate various embodiments of systems that include inward-facing head-mounted cameras having multi-pixel sensors (FPA sensors);
[0014] FIG. 14a, FIG. 14b, and FIG. 14c illustrate embodiments of two right and left clip-on devices that are configured to be attached to/detached from an eyeglasses frame;
[0015] FIG. 15a and FIG. 15b illustrate an embodiment of a clip-on device that includes inward-facing head-mounted cameras pointed at the lower part of the face and the forehead;
[0016] FIG. 16a and FIG. 16b illustrate embodiments of right and left clip-on devices that are configured to be attached behind an eyeglasses frame;
[0017] FIG. 17a and FIG. 17b illustrate an embodiment of a single-unit clip-on device that is configured to be attached behind an eyeglasses frame;
[0018] FIG. 18 illustrates embodiments of right and left clip-on devices, which are configured to be attached to/detached from an eyeglasses frame, and have protruding arms to hold inward-facing head-mounted cameras;
[0019] FIG. 19 illustrates a scenario in which an alert regarding a possible stroke is issued;
[0020] FIG. 20 illustrates an embodiment of a system that collects thermal measurements related to respiration, in which four inward-facing head-mounted thermal cameras (CAMs) are coupled to a football helmet;
[0021] FIG. 21 illustrates a situation in which an alert is issued to a user when it is detected that the ratio between the durations of exhaling and inhaling is too low;
[0022] FIG. 22 illustrates an embodiment of a system that collects thermal measurements related to respiration, in which four CAMs are coupled to the bottom of an eyeglasses frame;
[0023] FIG. 23a to FIG. 24c illustrate how embodiments described herein may help train an elderly user to exhale during effort;
[0024] FIG. 25a and FIG. 25b illustrate a fitness app running on a smartphone, which instructs the user to exhale while bending down, and to inhale while straightening up;
[0025] FIG. 26 illustrates a fitness app running on a smartphone, which instructs the user to stay in a triangle pose for 8 breath cycles;
[0026] FIG. 27 illustrates notifying a user about mouth breathing and suggesting that the user breathe through the nose;
[0027] FIG. 28 illustrates an exemplary UI that shows statistics about the dominant nostril and mouth breathing during the day;
[0028] FIG. 29 illustrates a virtual robot that the user sees via augmented reality (AR), which urges the user to increase the ratio between the durations of the user's exhales and inhales;
[0029] FIG. 30 illustrates an asthmatic patient who receives an alert that his breathing rate increased to an extent that often precedes an asthma attack;
[0030] FIG. 31a is a schematic illustration of a left dominant nostril;
[0031] FIG. 31b is a schematic illustration of a right dominant nostril;
[0032] FIG. 31c is a schematic illustration of balanced breathing;
[0033] FIG. 32 is a schematic illustration of an embodiment of a system that identifies the dominant nostril;
[0034] FIG. 33 illustrates an embodiment of a system for calculating a respiratory parameter;
[0035] FIG. 34 illustrates an embodiment of a system configured to provide neurofeedback and/or breathing biofeedback;
[0036] FIG. 35, FIG. 36, and FIG. 37 illustrate an embodiment of eyeglasses with head-mounted thermal cameras, which are able to differentiate between different states of the user based on thermal
patterns of the forehead;
[0037] FIG. 38 illustrates an embodiment of a clip-on device configured to be attached and detached from a frame of eyeglasses multiple times;
[0038] FIG. 39 illustrates a scenario in which a user has a neurofeedback session during a day-to-day activity; and
[0039] FIG. 40a and FIG. 40b are schematic illustrations of possible embodiments for computers.
DETAILED DESCRIPTION
[0040] A "thermal camera" refers herein to a non-contact device that measures electromagnetic radiation having wavelengths longer than 2500 nanometer (nm) and does not touch its region of interest (ROI). A thermal camera may include one sensing element (pixel), or multiple sensing elements that are also referred to herein as "sensing pixels", "pixels", and/or focal-plane array (FPA). A thermal camera may be based on an uncooled thermal sensor, such as a thermopile sensor, a microbolometer sensor (where microbolometer refers to any type of a bolometer sensor and its equivalents), a pyroelectric sensor, or a ferroelectric sensor.
[0041] Sentences in the form of "thermal measurements of an ROI" (usually denoted THROI or some variant thereof) refer to at least one of: (i) temperature measurements of the ROI (TROI), such as when using thermopile or microbolometer sensors, and (ii) temperature change measurements of the ROI (ΔTROI), such as when using a pyroelectric sensor or when deriving the temperature changes from temperature measurements taken at different times by a thermopile sensor or a microbolometer sensor.
[0042] In some embodiments, a device, such as a thermal camera, may be positioned such that it occludes an ROI on the user's face, while in other embodiments, the device may be positioned such that it does not occlude the ROI. Sentences in the form of "the system/camera does not occlude the ROI" indicate that the ROI can be observed by a third person located in front of the user and looking at the ROI, such as illustrated by all the ROIs in FIG. 7, FIG. 11 and FIG. 19. Sentences in the form of "the system/camera occludes the ROI" indicate that some of the ROIs cannot be observed directly by that third person, such as ROIs 19 and 37 that are occluded by the lenses in FIG. 1a, and ROIs 97 and 102 that are occluded by cameras 91 and 96, respectively, in FIG. 9.
[0043] Although many of the disclosed embodiments can use occluding thermal cameras successfully, in certain scenarios, such as when using an HMS on a daily basis and/or in a normal day-to-day setting, using thermal cameras that do not occlude their ROIs on the face may provide one or more advantages to the user, to the HMS, and/or to the thermal cameras, which may relate to one or more of the following: esthetics, better ventilation of the face, reduced weight, simplicity to wear, and reduced likelihood of being tarnished.
[0044] A "Visible-light camera" refers to a non-contact device designed to detect at least some of the
visible spectrum, such as cameras with optical lenses and CMOS or CCD sensors.
[0045] The term "inward -facing head-mounted camera" refers to a camera configured to be worn on a user's head and to remain pointed at its ROI, which is on the user's face, also when the user's head makes angular and lateral movements (such as movements with an angular velocity above 0.1 rad/sec, above 0.5 rad/sec, and/or above 1 rad/sec). A head-mounted camera (which may be inward -facing and/or outward- facing) may be physically coupled to a frame worn on the user's head, may be attached to eyeglass using a clip-on mechanism (configured to be attached to and detached from the eyeglasses), or may be mounted to the user's head using any other known device that keeps the camera in a fixed position relative to the user's head also when the head moves. Sentences in the form of "camera physically coupled to the frame" mean that the camera moves with the frame, such as when the camera is fixed to (or integrated into) the frame, or when the camera is fixed to (or integrated into) an element that is physically coupled to the frame. The abbreviation "CAM" denotes "inward-facing head-mounted thermal camera", the abbreviation "CAMout" denotes "outward-facing head-mounted thermal camera", the abbreviation "VCAM" denotes "inward-facing head-mounted visible-light camera", and the abbreviation "VCAMom" denotes "outward- facing head-mounted visible-light camera".
[0046] Sentences in the form of "a frame configured to be worn on a user's head" or "a frame worn on a user's head" refer to a mechanical structure that loads more than 50% of its weight on the user's head. For example, an eyeglasses frame may include two temples connected to two rims connected by a bridge; the frame in Oculus Rift™ includes the foam placed on the user's face and the straps; and the frames in Google Glass™ and Spectacles by Snap Inc. are similar to eyeglasses frames. Additionally or alternatively, the frame may connect to, be affixed within, and/or be integrated with, a helmet (e.g., sports, motorcycle, bicycle, and/or combat helmets) and/or a brainwave-measuring headset.
[0047] When a thermal camera is inward-facing and head-mounted, challenges faced by systems known in the art that are used to acquire thermal measurements, which include non-head-mounted thermal cameras, may be simplified and even eliminated with some of the embodiments described herein. Some of these challenges may involve dealing with complications caused by movements of the user, image registration, ROI alignment, tracking based on hot spots or markers, and motion compensation in the IR domain.
[0048] In various embodiments, cameras are located close to a user's face, such as at most 2 cm, 5 cm, 10 cm, 15 cm, or 20 cm from the face (herein "cm" denotes centimeters). The distance from the face/head in sentences such as "a camera located less than 15 cm from the face/head" refers to the shortest possible distance between the camera and the face/head. The head-mounted cameras used in various embodiments may be lightweight, such that each camera weighs below 10 g, 5 g, 1 g, and/or 0.5 g (herein "g" denotes grams).
[0049] The following figures show various examples of HMSs equipped with head-mounted cameras. FIG. 1a illustrates various inward-facing head-mounted cameras coupled to an eyeglasses frame 15. Cameras 10 and 12 measure regions 11 and 13 on the forehead, respectively. Cameras 18 and 36 measure regions on the periorbital areas 19 and 37, respectively. The HMS further includes an optional computer 16, which may include a processor, memory, a battery and/or a communication module. FIG. 1b illustrates a similar HMS in which inward-facing head-mounted cameras 48 and 49 measure regions 41 and 41, respectively. Cameras 22 and 24 measure regions 23 and 25, respectively. Camera 28 measures region 29. And cameras 26 and 43 measure regions 38 and 39, respectively.
[0050] FIG. 2 illustrates inward-facing head-mounted cameras coupled to an augmented reality device such as Microsoft HoloLens™. FIG. 3 illustrates head-mounted cameras coupled to a virtual reality device such as Facebook's Oculus Rift™. FIG. 4 is a side view illustration of head-mounted cameras coupled to an augmented reality device such as Google Glass™. FIG. 5 is another side view illustration of head-mounted cameras coupled to a sunglasses frame.
[0051] FIG. 6 to FIG. 9 illustrate HMSs configured to measure various ROIs relevant to some of the embodiments described herein. FIG. 6 illustrates a frame 35 that mounts inward-facing head-mounted cameras 30 and 31 that measure regions 32 and 33 on the forehead, respectively. FIG. 7 illustrates a frame 75 that mounts inward-facing head-mounted cameras 70 and 71 that measure regions 72 and 73 on the forehead, respectively, and inward-facing head-mounted cameras 76 and 77 that measure regions 78 and 79 on the upper lip, respectively. FIG. 8 illustrates a frame 84 that mounts inward-facing head-mounted cameras 80 and 81 that measure regions 82 and 83 on the sides of the nose, respectively. And FIG. 9 illustrates a frame 90 that includes (i) inward-facing head-mounted cameras 91 and 92 that are mounted to protruding arms and measure regions 97 and 98 on the forehead, respectively, (ii) inward-facing head-mounted cameras 95 and 96, which are also mounted to protruding arms, which measure regions 101 and 102 on the lower part of the face, respectively, and (iii) head-mounted cameras 93 and 94 that measure regions on the periorbital areas 99 and 100, respectively.
[0052] FIG. 10 to FIG. 13 illustrate various inward-facing head-mounted cameras having multi-pixel sensors (FPA sensors), configured to measure various ROIs relevant to some of the embodiments described herein. FIG. 10 illustrates head-mounted cameras 120 and 122 that measure regions 121 and 123 on the forehead, respectively, and head-mounted camera 124 that measures region 125 on the nose. FIG. 11 illustrates head-mounted cameras 126 and 128 that measure regions 127 and 129 on the upper lip, respectively, in addition to the head-mounted cameras already described in FIG. 10. FIG. 12 illustrates head-mounted cameras 130 and 132 that measure larger regions 131 and 133 on the upper lip and the sides of the nose, respectively. And FIG. 13 illustrates head-mounted cameras 134 and 137 that measure regions 135 and 138 on the right and left cheeks and right and left sides of the mouth, respectively, in addition to the head-mounted cameras already described in FIG. 12.
[0053] In some embodiments, the head-mounted cameras may be physically coupled to the frame using a clip-on device configured to be attached to and detached from a pair of eyeglasses multiple times, in order to secure/release the device to/from the eyeglasses. The clip-on device holds at least an inward-facing camera, a processor, a battery, and a wireless communication module. Most of the clip-on device may be located in front of the frame (as illustrated in FIG. 14b, FIG. 15b, and FIG. 18), or alternatively, most of the clip-on device may be located behind the frame, as illustrated in FIG. 16b and FIG. 17b.
[0054] FIG. 14a, FIG. 14b, and FIG. 14c illustrate two right and left clip-on devices 141 and 142, respectively, configured to be attached to/detached from an eyeglasses frame 140. The clip-on device 142 includes an inward-facing head-mounted camera 143 pointed at a region on the lower part of the face (such as the upper lip, mouth, nose, and/or cheek), an inward-facing head-mounted camera 144 pointed at the forehead, and other electronics 145 (such as a processor, a battery, and/or a wireless communication module). The clip-on devices 141 and 142 may include additional cameras illustrated in the drawings as black circles.
[0055] FIG. 15a and FIG. 15b illustrate a clip-on device 147 that includes an inward-facing head-mounted camera 148 pointed at a region on the lower part of the face (such as the nose), and an inward-facing head-mounted camera 149 pointed at the forehead. The other electronics (such as a processor, a battery, and/or a wireless communication module) are located inside the box 150, which also holds the cameras 148 and 149.
[0056] FIG. 16a and FIG. 16b illustrate two right and left clip-on devices 160 and 161, respectively, configured to be attached behind an eyeglasses frame 165. The clip-on device 160 includes an inward-facing head-mounted camera 162 pointed at a region on the lower part of the face (such as the upper lip, mouth, nose, and/or cheek), an inward-facing head-mounted camera 163 pointed at the forehead, and other electronics 164 (such as a processor, a battery, and/or a wireless communication module). The clip-on devices 160 and 161 may include additional cameras illustrated in the drawings as black circles.
[0057] FIG. 17a and FIG. 17b illustrate a single-unit clip-on device 170, configured to be attached behind an eyeglasses frame 176. The single-unit clip-on device 170 includes inward-facing head-mounted cameras 171 and 172 pointed at regions on the lower part of the face (such as the upper lip, mouth, nose, and/or cheek), inward-facing head-mounted cameras 173 and 174 pointed at the forehead, a spring 175 configured to apply force that holds the clip-on device 170 to the frame 176, and other electronics 177 (such as a processor, a battery, and/or a wireless communication module). The clip-on device 170 may include additional cameras illustrated in the drawings as black circles.
[0058] FIG. 18 illustrates two right and left clip-on devices 153 and 154, respectively, configured to be attached to/detached from an eyeglasses frame, and having protruding arms to hold the inward-facing head-mounted cameras. Head-mounted camera 155 measures a region on the lower part of the face, head-mounted camera 156 measures regions on the forehead, and the left clip-on device 154 further includes other electronics 157 (such as a processor, a battery, and/or a wireless communication module). The clip-on devices 153 and 154 may include additional cameras illustrated in the drawings as black circles.
[0059] It is noted that the elliptic and other shapes of the ROIs in some of the drawings are just for illustration purposes, and the actual shapes of the ROIs are usually not as illustrated. It is possible to calculate the accurate shape of an ROI using various methods, such as a computerized simulation using a 3D model of the face and a model of a head-mounted system (HMS) to which a thermal camera is physically coupled, or by placing an LED instead of the sensor (while maintaining the same field of view)
and observing the illumination pattern on the face. Furthermore, illustrations and discussions of a camera represent one or more cameras, where each camera may have the same FOV and/or different FOVs. Unless indicated to the contrary, the cameras may include one or more sensing elements (pixels), even when multiple sensing elements do not explicitly appear in the figures; when a camera includes multiple sensing elements then the illustrated ROI usually refers to the total ROI captured by the camera, which is made of multiple regions that are respectively captured by the different sensing elements. The positions of the cameras in the figures are just for illustration, and the cameras may be placed at other positions on the HMS.
[0060] Sentences in the form of an "ROI on an area", such as ROI on the forehead or an ROI on the nose, refer to at least a portion of the area. Depending on the context, and especially when using a CAM having just one pixel or a small number of pixels, the ROI may cover another area (in addition to the area). For example, a sentence in the form of "an ROI on the nose" may refer to either: 100% of the ROI is on the nose, or some of the ROI is on the nose and some of the ROI is on the upper lip.
[0061] Various embodiments described herein involve detections of physiological responses based on user measurements. Some examples of physiological responses include stress, an allergic reaction, an asthma attack, a stroke, dehydration, intoxication, or a headache (which includes a migraine). Other examples of physiological responses include manifestations of fear, startle, sexual arousal, anxiety, joy, pain or guilt. Still other examples of physiological responses include physiological signals such as a heart rate or a value of a respiratory parameter of the user. Optionally, detecting a physiological response may involve one or more of the following: determining whether the user has/had the physiological response, identifying an imminent attack associated with the physiological response, and/or calculating the extent of the physiological response.
[0062] In some embodiments, detection of the physiological response is done by processing thermal measurements that fall within a certain window of time that characterizes the physiological response. For example, depending on the physiological response, the window may be five seconds long, thirty seconds long, two minutes long, five minutes long, fifteen minutes long, or one hour long. Detecting the physiological response may involve analysis of thermal measurements taken during multiple of the above-described windows, such as measurements taken during different days. In some embodiments, a computer may receive a stream of thermal measurements, taken while the user wears an HMS with coupled thermal cameras during the day, and periodically evaluate measurements that fall within a sliding window of a certain size.
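By way of non-limiting example, the following Python sketch illustrates the sliding-window evaluation described above; the window length, sampling rate, and the detect callback are assumptions made for the example and are not mandated by the embodiments:

    from collections import deque

    WINDOW_SECONDS = 120   # assumed window length; embodiments may use 5 seconds to 1 hour
    SAMPLE_HZ = 8          # assumed sampling rate of the thermal camera
    WINDOW_SIZE = WINDOW_SECONDS * SAMPLE_HZ

    window = deque(maxlen=WINDOW_SIZE)  # sliding window over the measurement stream

    def on_new_measurement(th_roi_sample, detect):
        """Append a new thermal sample and evaluate the current window when full."""
        window.append(th_roi_sample)
        if len(window) == WINDOW_SIZE:
            return detect(list(window))  # e.g., a threshold test or a trained model
        return None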
[0063] In some embodiments, models are generated based on measurements taken over long periods. Sentences of the form of "measurements taken during different days" or "measurements taken over more than a week" are not limited to continuous measurements spanning the different days or over the week, respectively. For example, "measurements taken over more than a week" may be taken by eyeglasses equipped with thermal cameras, which are worn for more than a week, 8 hours a day. In this example, the user is not required to wear the eyeglasses while sleeping in order to take measurements over more than a
week. Similarly, sentences of the form of "measurements taken over more than 5 days, at least 2 hours a day" refer to a set comprising at least 10 measurements taken over 5 different days, where at least two measurements are taken each day at times separated by at least two hours.
[0064] Utilizing measurements taken over a long period (e.g., measurements taken on "different days") may have an advantage, in some embodiments, of contributing to the generalizability of a trained model. Measurements taken over the long period likely include measurements taken in different environments and/or measurements taken while the measured user was in various physiological and/or mental states (e.g., before/after meals and/or while the measured user was sleepy/energetic/happy/depressed, etc.). Training a model on such data can improve the performance of systems that utilize the model in the diverse settings often encountered in real-world use (as opposed to controlled laboratory-like settings). Additionally, taking the measurements over the long period may have the advantage of enabling collection of a large amount of training data that is required for some machine learning approaches (e.g., "deep learning").
[0065] Detecting the physiological response may involve performing various types of calculations by a computer. Optionally, detecting the physiological response may involve performing one or more of the following operations: comparing thermal measurements to a threshold (when the threshold is reached that may be indicative of an occurrence of the physiological response), comparing thermal measurements to a reference time series, and/or performing calculations that involve a model trained using machine learning methods. Optionally, the thermal measurements upon which the one or more operations are performed are taken during a window of time of a certain length, which may optionally depend on the type of physiological response being detected. In one example, the window may be shorter than one or more of the following durations: five seconds, fifteen seconds, one minute, five minutes, thirty minutes, one hour, four hours, one day, or one week. In another example, the window may be longer than one or more of the aforementioned durations. Thus, when measurements are taken over a long period, such as measurements taken over a period of more than a week, detection of the physiological response at a certain time may be done based on a subset of the measurements that falls within a certain window near the certain time; the detection at the certain time does not necessarily involve utilizing all values collected throughout the long period.
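As a minimal sketch of the first two operations listed above (comparing to a threshold and comparing to a reference time series), assuming Python with NumPy; the threshold value and the distance metric are illustrative assumptions:

    import numpy as np

    def reaches_threshold(th_roi, threshold=0.8):
        """True if the temperature change within the window reaches the threshold (deg C, assumed)."""
        x = np.asarray(th_roi, dtype=float)
        return float(x.max() - x.min()) >= threshold

    def distance_to_reference(th_roi, reference):
        """Smaller values indicate the window is more similar to the reference time series."""
        a = np.asarray(th_roi, dtype=float)
        b = np.asarray(reference, dtype=float)
        return float(np.linalg.norm(a - b)) / len(a)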
[0066] In some embodiments, detecting the physiological response of a user may involve utilizing baseline thermal measurement values, most of which were taken when the user was not experiencing the physiological response. Optionally, detecting the physiological response may rely on observing a change to typical temperatures at one or more ROIs (the baseline), where different users might have different typical temperatures at the ROIs (i.e., different baselines). Optionally, detecting the physiological response may rely on observing a change to a baseline level, which is determined based on previous measurements taken during the preceding minutes and/or hours.
[0067] In some embodiments, detecting a physiological response involves determining the extent of the physiological response, which may be expressed in various ways that are indicative of the extent of the
physiological response, such as: (i) a binary value indicative of whether the user experienced, and/or is experiencing, the physiological response, (ii) a numerical value indicative of the magnitude of the physiological response, (iii) a categorical value indicative of the severity/extent of the physiological response, (iv) an expected change in thermal measurements of an ROI (denoted THROI or some variation thereof), and/or (v) a rate of change in THROI. Optionally, when the physiological response corresponds to a physiological signal (e.g., a heart rate, a breathing rate, and an extent of frontal lobe brain activity), the extent of the physiological response may be interpreted as the value of the physiological signal.
[0068] Herein, "machine learning" methods refers to learning from examples using one or more approaches. Optionally, the approaches may be considered supervised, semi-supervised, and/or unsupervised methods. Examples of machine learning approaches include: decision tree learning, association rule learning, regression models, nearest neighbors classifiers, artificial neural networks, deep learning, inductive logic programming, support vector machines, clustering, Bayesian networks, reinforcement learning, representation learning, similarity and metric learning, sparse dictionary learning, genetic algorithms, rule -based machine learning, and/or learning classifier systems.
[0069] Herein, a "machine learning-based model" is a model trained using machine learning methods. For brevity's sake, at times, a "machine learning -based model" may simply be called a "model". Referring to a model as being "machine learning-based" is intended to indicate that the model is trained using machine learning methods (otherwise, "model" may also refer to a model generated by methods other than machine learning).
[0070] In some embodiments, which involve utilizing a machine learning-based model, a computer is configured to detect the physiological response by generating feature values based on the thermal measurements (and possibly other values), and/or based on values derived therefrom (e.g., statistics of the measurements). The computer then utilizes the machine learning-based model to calculate, based on the feature values, a value that is indicative of whether, and/or to what extent, the user is experiencing (and/or is about to experience) the physiological response. Optionally, calculating said value is considered "detecting the physiological response". Optionally, the value calculated by the computer is indicative of the probability that the user has/had the physiological response.
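A minimal sketch of this two-step calculation, assuming a scikit-learn-style classifier whose positive class corresponds to experiencing the physiological response (the classifier type and the feature-generation step are assumptions, not a required implementation); here, feature_values is the vector generated from the thermal measurements, for example as in the sketch following paragraph [0073] below:

    def detect_response(feature_values, model):
        """Returns a value indicative of the probability that the user has/had the response."""
        return model.predict_proba([feature_values])[0, 1]  # assumes a scikit-learn-style model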
[0071] Herein, feature values may be considered input to a computer that utilizes a model to perform the calculation of a value, such as the value indicative of the extent of the physiological response mentioned above. It is to be noted that the terms "feature" and "feature value" may be used interchangeably when the context of their use is clear. However, a "feature" typically refers to a certain type of value, and represents a property, while a "feature value" is the value of the property for a certain instance (sample). For example, a feature may be the temperature at a certain ROI, while the feature value corresponding to that feature may be 36.9°C in one instance and 37.3°C in another instance.
[0072] In some embodiments, a machine learning-based model used to detect a physiological response is trained based on data that includes samples. Each sample includes feature values and a label. The feature values may include various types of values. At least some of the feature values of a sample are
generated based on measurements of a user taken during a certain period of time (e.g., thermal measurements taken during the certain period of time). Optionally, some of the feature values may be based on various other sources of information described herein. The label is indicative of a physiological response of the user corresponding to the certain period of time. Optionally, the label may be indicative of whether the physiological response occurred during the certain period and/or the extent of the physiological response during the certain period. Additionally or alternatively, the label may be indicative of how long the physiological response lasted. Labels of samples may be generated using various approaches, such as self-report by users, annotation by experts that analyze the training data, automatic annotation by a computer that analyzes the training data and/or analyzes additional data related to the training data, and/or utilizing additional sensors that provide data useful for generating the labels. It is to be noted that herein when it is stated that a model is trained based on certain measurements (e.g., "a model trained based on THROI taken on different days"), it means that the model was trained on samples comprising feature values generated based on the certain measurements and labels corresponding to the certain measurements. Optionally, a label corresponding to a measurement is indicative of the physiological response at the time the measurement was taken.
[0073] Various types of feature values may be generated based on thermal measurements. In one example, some feature values are indicative of temperatures at certain ROIs. In another example, other feature values may represent a temperature change at certain ROIs. The temperature changes may be with respect to a certain time and/or with respect to a different ROI. In order to better detect physiological responses that take some time to manifest, in some embodiments, some feature values may describe temperatures (or temperature changes) at a certain ROI at different points of time. Optionally, these feature values may include various functions and/or statistics of the thermal measurements such as minimum/maximum measurement values and/or average values during certain windows of time.
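The following sketch illustrates, under assumed inputs, the kinds of feature values described above: a current temperature, a temperature change with respect to a certain time, a change with respect to a different ROI, and statistics over a window of time:

    import numpy as np

    def generate_feature_values(th_roi_window, th_other_roi_window):
        """Example feature values generated from windows of thermal measurements of two ROIs."""
        x = np.asarray(th_roi_window, dtype=float)
        y = np.asarray(th_other_roi_window, dtype=float)
        return np.array([
            x[-1],                       # temperature at the ROI (most recent sample)
            x[-1] - x[0],                # temperature change with respect to a certain time
            x[-1] - y[-1],               # temperature change with respect to a different ROI
            x.min(), x.max(), x.mean(),  # statistics during the window of time
        ])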
[0074] It is to be noted that when it is stated that feature values are generated based on data comprising multiple sources, it means that for each source, there is at least one feature value that is generated based on that source (and possibly other data). For example, stating that feature values are generated from thermal measurements of first and second ROIs (THROI1 and THROI2, respectively) means that the feature values may include a first feature value generated based on THROI1 and a second feature value generated based on THROI2. Optionally, a sample is considered generated based on measurements of a user (e.g., measurements comprising THROI1 and THROI2) when it includes feature values generated based on the measurements of the user.
[0075] In addition to feature values that are generated based on thermal measurements, in some embodiments, at least some feature values utilized by a computer (e.g., to detect a physiological response or train a model) may be generated based on additional sources of data that may affect temperatures measured at various facial ROIs. Some examples of the additional sources include: (i) measurements of the environment such as temperature, humidity level, noise level, elevation, air quality, a wind speed, precipitation, and infrared radiation; (ii) contextual information such as the time of day (e.g., to account
for effects of the circadian rhythm), day of month (e.g., to account for effects of the lunar rhythm), day in the year (e.g., to account for seasonal effects), and/or stage in a menstrual cycle; (iii) information about the user being measured such as sex, age, weight, height, and/or body build. Alternatively or additionally, at least some feature values may be generated based on physiological signals of the user obtained by sensors that are not thermal cameras, such as a visible-light camera, a photoplethysmogram (PPG) sensor, an electrocardiogram (ECG) sensor, an electroencephalography (EEG) sensor, a galvanic skin response (GSR) sensor, or a thermistor.
[0076] The machine learning-based model used to detect a physiological response may be trained, in some embodiments, based on data collected in day-to-day, real world scenarios. As such, the data may be collected at different times of the day, while users perform various activities, and in various environmental conditions. Utilizing such diverse training data may enable a trained model to be more resilient to the various effects different conditions can have on the values of thermal measurements, and consequently, be able to achieve better detection of the physiological response in real world day-to-day scenarios.
[0077] Since real world day-to-day conditions are not the same all the time, sometimes detection of the physiological response may be hampered by what is referred to herein as "confounding factors". A confounding factor can be a cause of warming and/or cooling of certain regions of the face, which is unrelated to a physiological response being detected, and as such, may reduce the accuracy of the detection of the physiological response. Some examples of confounding factors include: (i) environmental phenomena such as direct sunlight, air conditioning, and/or wind; (ii) things that are on the user's face, which are not typically there and/or do not characterize the faces of most users (e.g., cosmetics, ointments, sweat, hair, facial hair, skin blemishes, acne, inflammation, piercings, body paint, and food leftovers); (iii) physical activity that may affect the user's heart rate, blood circulation, and/or blood distribution (e.g., walking, running, jumping, and/or bending over); (iv) consumption of substances to which the body has a physiological response that may involve changes to temperatures at various facial ROIs, such as various medications, alcohol, caffeine, tobacco, and/or certain types of food; and/or (v) disruptive facial movements (e.g., frowning, talking, eating, drinking, sneezing, and coughing).
[0078] Occurrences of confounding factors may not always be easily identified in thermal measurements. Thus, in some embodiments, systems may incorporate measures designed to accommodate for the confounding factors. In some embodiments, these measures may involve generating feature values that are based on additional sensors, other than the thermal cameras. In some embodiments, these measures may involve refraining from detecting the physiological response, which should be interpreted as refraining from providing an indication that the user has the physiological response. For example, if an occurrence of a certain confounding factor is identified, such as strong directional sunlight that heats one side of the face, the system may refrain from detecting that the user had a stroke. In this example, the user may not be alerted even though a temperature difference between symmetric ROIs on both sides of the face reaches a threshold that, under other circumstances, would warrant alerting the user.
[0079] Training data used to train a model for detecting a physiological response may include, in some embodiments, a diverse set of samples corresponding to various conditions, some of which involve occurrence of confounding factors (when there is no physiological response and/or when there is a physiological response). Having samples in which a confounding factor occurs (e.g., the user is in direct sunlight or touches the face) can lead to a model that is less susceptible to wrongfully detect the physiological response (which may be considered an occurrence of a false positive) in real world situations.
[0080] After a model is trained, the model may be provided for use by a system that detects the physiological response. Providing the model may involve performing different operations, such as forwarding the model to the system via a computer network and/or a shared computer storage medium, storing the model in a location from which the system can retrieve the model (such as a database and/or cloud-based storage), and/or notifying the system regarding the existence of the model and/or regarding an update to the model.
[0081] A model for detecting a physiological response may include different types of parameters. Following are some examples of various possibilities for the model and the type of calculations that may be accordingly performed by a computer in order to detect the physiological response: (a) the model comprises parameters of a decision tree. Optionally, the computer simulates a traversal along a path in the decision tree, determining which branches to take based on the feature values. A value indicative of the physiological response may be obtained at the leaf node and/or based on calculations involving values on nodes and/or edges along the path; (b) the model comprises parameters of a regression model (e.g., regression coefficients in a linear regression model or a logistic regression model). Optionally, the computer multiplies the feature values (which may be considered a regressor) with the parameters of the regression model in order to obtain the value indicative of the physiological response; and/or (c) the model comprises parameters of a neural network. For example, the parameters may include values defining at least the following: (i) an interconnection pattern between different layers of neurons, (ii) weights of the interconnections, and (iii) activation functions that convert each neuron's weighted input to its output activation. Optionally, the computer provides the feature values as inputs to the neural network, computes the values of the various activation functions and propagates values between layers, and obtains an output from the network, which is the value indicative of the physiological response.
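As a concrete, non-limiting instance of option (b) above, the following sketch multiplies the feature values (the regressor) by the parameters of a logistic regression model; the coefficient values would come from training and are placeholders here:

    import math

    def logistic_regression_value(feature_values, coefficients, intercept):
        """Multiplies the regressor by the model parameters to obtain a value
        indicative of the physiological response (here, a probability)."""
        z = intercept + sum(w * x for w, x in zip(coefficients, feature_values))
        return 1.0 / (1.0 + math.exp(-z))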
[0082] A user interface (UI) may be utilized, in some embodiments, to notify the user and/or some other entity, such as a caregiver, about the physiological response and/or present an alert responsive to an indication that the extent of the physiological response reaches a threshold. The UI may include a screen to display the notification and/or alert, a speaker to play an audio notification, a tactile UI, and/or a vibrating UI. In some embodiments, "alerting" about a physiological response of a user refers to informing about one or more of the following: the occurrence of a physiological response that the user does not usually have (e.g., a stroke, intoxication, and/or dehydration), an imminent physiological response (e.g., an allergic reaction, an epilepsy attack, and/or a migraine), and an extent of the
physiological response reaching a threshold (e.g., stress and/or anger reaching a predetermined level). FIG. 19 illustrates a scenario in which an alert regarding a possible stroke is issued. The figure illustrates a user wearing a frame with at least two CAMs (562 and 563) for measuring ROIs on the right and left cheeks (ROIs 560 and 561, respectively). The measurements indicate that the left side of the face is colder than the right side of the face. Based on these measurements, and possibly additional data, the system detects the stroke and issues an alert.
[0083] The CAMs can take respiratory-related thermal measurements when their ROIs are on the user's upper lip, the user's mouth, the space where the exhale stream from the user's nose flows, and/or the space where the exhale stream from the user's mouth flows. In some embodiments, one or more of the following respiratory parameters may be calculated based on the respiratory-related thermal measurements taken during a certain period of time:
[0084] "Breathing rate" represents the number of breaths per minute the user took during the certain period. The breathing rate may also be formulated as the average time between successive inhales and/or the average between successive exhales.
[0085] "Respiration volume" represents the volume of air breathed over a certain duration (usually per minute), the volume of air breathed during a certain breath, tidal volume, and/or the ratio between two or more breaths. For example, the respiration volume may indicate that a first breath was deeper than a second breath, or that breaths during a first minute were shallower than breaths during a second minute.
[0086] "Mouth breathing vs nasal breathing" indicates whether during the certain period the user breathed mainly through the mouth (a state characterized as "mouth breathing") or mainly through the nose (a state characterized as "nose breathing" or "nasal breathing"). Optionally, this parameter may represent the ratio between nasal and mouth breathing, such as a proportion of the certain period during which the breathing was more mouth breathing, and/or the relative volume of air exhaled through the nose vs the mouth. In one example, breathing mainly through the mouth refers to inhaling more than 50% of the air through the mouth (and less than 50% of the air through the nose).
[0087] "Exhale duration / Inhale duration" represents the exhale(s) duration during the certain period, the inhale(s) duration during the certain period, and/or a ratio of the two aforementioned durations. Optionally, this respiratory parameter may represent one or more of the following: (i) the average duration of the exhales and/or inhales, (ii) a maximum and/or minimum duration of the exhales and/or inhales during the certain period, and (iii) a proportion of times in which the duration of exhaling and/or inhaling reached a certain threshold.
[0088] "Post-exhale breathing pause" represents the time that elapses between when the user finishes exhaling and starts inhaling again. "Post-inhale breathing pause" represents the time that elapses between when the user finishes inhaling and when the user starts exhaling after that. The post exhale/inhale breathing pauses may be formulated utilizing various statistics, such as an average post exhale/inhale breathing pause during a certain period, a maximum or minimum duration of post exhale/inhale breathing pause during the certain period, and/or a proportion of times in which the duration
of post exhale/inhale breathing pause reached a certain threshold.
[0089] "Dominant nostril" is the nostril through which most of the air is exhaled (when exhaling through the nose). Normally the dominant nostril changes during the day, and the exhale is considered balanced when the amount of air exhaled through each nostril is similar. Optionally, the breathing may be considered balanced when the difference between the volumes of air exhaled through the right and left nostrils is below a predetermined threshold, such as 20% or 10%. Additionally or alternatively, the breathing may be considered balanced during a certain duration around the middle of the switching from right to left or left to right nostril dominance. For example, the certain duration of balanced breathing may be about 4 minutes at the middle of the switching between dominant nostrils.
[0090] "Temperature of the exhale stream" may be measured based on thermal measurements of the stream that flows from one or both nostrils, and/or the heat pattern generated on the upper lip by the exhale stream from the nose. In one example, it is not necessary to measure the exact temperature of the exhale stream as long as the system is able to differentiate between different temperatures of the exhale stream based on the differences between series of thermal measurements taken at different times. Optionally, the series of thermal measurements that are compared are temperature measurements received from the same pixel(s) of a head-mounted thermal camera.
[0091] "Shape of the exhale stream" (also referred to as "SHAPE") represents the three-dimensional (3D) shape of the exhale stream from at least one of the nostrils. The SHAPE changes during the day and may reflect the mental, physiological, and/or energetic state of a user. Usually the temperature of the exhale stream is different from the temperature of the air in the environment; this enables a thermal camera, which captures a portion of the volume through which the exhale stream flows, to take a measurement indicative of the SHAPE, and/or to differentiate between different shapes of the exhale stream (SHAPEs). Additionally, the temperature of the exhale stream is usually different from the temperature of the upper lip, and thus exhale streams having different shapes may generate different thermal patterns on the upper lip. Measuring these different thermal patterns on the upper lip may enable a computer to differentiate between different SHAPEs. In one embodiment, differences between values measured by adjacent thermal pixels of CAM, which measure the exhale stream and/or the upper lip over different time intervals, may correspond to different SHAPEs. In one example, it is not necessary to measure the exact SHAPE as long as it is possible to differentiate between different SHAPEs based on the differences between the values of the adjacent thermal pixels. In another embodiment, differences between average values, measured by the same thermal pixel over different time intervals, may correspond to different SHAPEs. In still another embodiment, the air that is within certain boundaries of a 3D shape that protrudes from the user's nose, which is warmer than the environment air, as measured by CAM, is considered to belong to the exhale stream.
[0092] In one embodiment, the SHAPE may be represented by one or more thermal images taken by one or more CAMs. In this embodiment, the shape may correspond to a certain pattern in the one or more images and/or a time series describing a changing pattern in multiple images. In another embodiment, the
SHAPE may be represented by at least one of the following parameters: the angle from which the exhale stream blows from a nostril, the width of the exhale stream, the length of the exhale stream, and other parameters that are indicative of the 3D SHAPE. Optionally, the SHAPE may be defined by the shape of a geometric body that confines it, such as a cone or a cylinder, protruding from the user's nose. For example, the SHAPE may be represented by parameters such as the cone's height, the radius of the cone's base, and/or the angle between the cone's altitude axis and the nostril.
[0093] "Smoothness of the exhale stream" represents a level of smoothness of the exhale stream from the nose and/or the mouth. In one embodiment, the smoothness of the exhale stream is a value that can be determined based on observing the smoothness of a graph of the respiratory-related thermal measurements. Optionally, it is unnecessary for the system to measure the exact smoothness of the exhale stream as long as it is able to differentiate between smoothness levels of respiratory -related thermal measurements taken at different times. Optionally, the compared thermal measurements taken at different times may be measured by the same pixels and/or by different pixels. As a rule of thumb, the smoother the exhale stream, the lower the stress and the better the physical condition. For example, the exhale stream of a healthy young person is often smoother than the exhale stream of an elderly person, who may even experience short pauses in the act of exhaling.
[0094] There are well known mathematical methods to calculate the smoothness of a graph, such as Fourier transform analysis, polynomial fit, differentiability classes, multivariate differentiability classes, parametric continuity, and/or geometric continuity. In one example, the smoothness of THROI indicative of the exhale stream is calculated based on a Fourier transform of a series of THROI. In the case of a Fourier transform, the smaller the power of the high-frequencies portion, the smoother the exhale is, and vice versa. Optionally, one or more predetermined thresholds differentiate between the high-frequency and low-frequency portions in the frequency domain. In another example, the smoothness of THROI indicative of the exhale stream is calculated using a polynomial fit (with a bounded degree) of a series of THROI. Optionally, the degree of the polynomial used for the fit is proportional (e.g., linear) to the number of exhales in the time series. In the case of a polynomial fit, the smoothness may be a measure of the goodness of fit between the series of THROI and the polynomial. For example, the lower the squared error, the smoother the graph is considered. In still another example, the smoothness of THROI indicative of the exhale stream may be calculated using a machine learning-based model trained with training data comprising reference time series of THROI for which the extent of smoothness is known.
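A sketch of the Fourier-based approach described above; the sampling rate and the cutoff separating the high-frequency portion are assumed values standing in for the predetermined thresholds:

    import numpy as np

    def exhale_smoothness(th_roi, sample_hz=8.0, cutoff_hz=2.0):
        """Smoothness score of THROI; the smaller the high-frequency power, the smoother the exhale."""
        x = np.asarray(th_roi, dtype=float)
        power = np.abs(np.fft.rfft(x - x.mean())) ** 2
        freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_hz)
        total = power.sum()
        high = power[freqs >= cutoff_hz].sum()
        return 1.0 - high / total if total else 1.0  # closer to 1 means a smoother exhale stream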
[0095] In an alternative embodiment, a microphone is used to measure the exhale sounds. The smoothness of the exhale stream may be a value that is proportional to the smoothness of the audio measurement time series taken by the microphone (e.g., as determined based on the power of the high- frequency portion obtained in a Fourier transform of the time series of the audio).
[0096] There are various approaches that may be employed in order to calculate values of one or more of the respiratory parameters mentioned above based on respiratory-related thermal measurements. Optionally, calculating the values of one or more of the respiratory parameters may be based on
additional inputs, such as statistics about the user (e.g., age, gender, weight, height, and the like), indications about the user's activity level (e.g., input from a pedometer), and/or physiological signals of the user (e.g., heart rate and respiratory rate). Roughly speaking, some approaches may be considered analytical approaches, while other approaches may involve utilization of a machine learning-based model.
[0097] In some embodiments, one or more of the respiratory parameters mentioned above may be calculated based on the respiratory-related thermal measurements by observing differences in thermal measurements. In one embodiment, certain pixels that have alternating temperature changes may be identified as corresponding to exhale streams. In this embodiment, the breathing rate may be a calculated frequency of the alternating temperature changes at the certain pixels. In another embodiment, the relative difference in magnitude of temperature changes at different ROIs, such as the alternating temperature changes that correspond to breathing activity, may be used to characterize different types of breathing. For example, if temperature changes at an ROI near the nostrils reach a first threshold, while temperature changes at an ROI related to the mouth do not reach a second threshold, then the breathing may be considered nasal breathing; while if the opposite occurs, the breathing may be considered mouth breathing. In another example, if temperature changes at an ROI near the left nostril and/or on the left side of the upper lip are higher than temperature changes at an ROI near the right nostril and/or on the right side of the upper lip, then the left nostril may be considered the dominant nostril at the time the measurements were taken. In still another example, the value of a respiratory parameter may be calculated as a function of one or more input values from among the respiratory-related thermal measurements.
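The following sketch illustrates two of the calculations in this paragraph, assuming Python with NumPy; the sampling rate, the breathing band, and the thresholds are assumptions chosen for the example:

    import numpy as np

    def breathing_rate_from_pixels(pixel_series, sample_hz=8.0):
        """Breathing rate (breaths/minute) as the dominant frequency of the alternating changes."""
        x = np.asarray(pixel_series, dtype=float)
        spectrum = np.abs(np.fft.rfft(x - x.mean()))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_hz)
        band = (freqs >= 0.1) & (freqs <= 1.0)  # plausible breathing band: 6-60 breaths/minute
        return 60.0 * freqs[band][np.argmax(spectrum[band])]

    def breathing_route(delta_nose, delta_mouth, t_nose=0.3, t_mouth=0.3):
        """Nasal vs. mouth breathing from temperature-change magnitudes at the two ROIs."""
        if delta_nose >= t_nose and delta_mouth < t_mouth:
            return "nasal breathing"
        if delta_mouth >= t_mouth and delta_nose < t_nose:
            return "mouth breathing"
        return "mixed/undetermined"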
[0098] In other embodiments, one or more of the respiratory parameters may be calculated by generating feature values based on the respiratory-related thermal measurements and utilizing a model to calculate, based on the feature values, the value of a certain respiratory parameter from among the parameters mentioned above. The model for the certain respiratory parameter is trained based on samples. Each sample comprises feature values generated based on respiratory-related thermal measurements, taken during a certain period of time, and a label indicative of the value of the certain respiratory parameter during the certain period of time. For example, the feature values generated for a sample may include the values of pixels measured by the one or more cameras, statistics of the values of the pixels, and/or functions of differences of values of pixels at different times. Additionally or alternatively, some of the feature values may include various low-level image analysis features, such as features derived using Gabor filters, local binary patterns and their derivatives, features derived using algorithms such as SIFT, SURF, and/or ORB, image keypoints, HOG descriptors, and features derived using PCA or LDA. The labels of the samples may be obtained through various ways. Some examples of approaches for generating the labels include manual reporting (e.g., a user notes the type of his/her breathing), manual analysis of thermal images (e.g., an expert determines a shape of an exhale stream), and/or utilizing sensors (e.g., a chest strap that measures the breathing rate and volume).
[0099] Training the model for the certain respiratory parameter based on the samples may involve utilizing one or more machine learning-based training algorithms, such as a training algorithm for a
decision tree, a regression model, or a neural network. Once the model is trained, it may be utilized to calculate the value of the certain respiratory parameter based on feature values generated based on respiratory-related thermal measurements taken during a certain period, for which the label (i.e., the value of the certain respiratory parameter) may not be known.
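By way of a non-limiting illustration, the following sketch shows one way the machine learning-based approach of paragraphs [0098]-[0099] could be realized, here with a decision-tree regressor from scikit-learn. The feature choices and the names window_features, windows, and labels are assumptions for the example only.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def window_features(window: np.ndarray) -> np.ndarray:
    """Simple statistical features of one (n_frames, n_pixels) window."""
    return np.concatenate([
        window.mean(axis=0),                   # per-pixel mean temperature
        window.std(axis=0),                    # per-pixel variability
        np.diff(window, axis=0).mean(axis=0),  # average temporal change
    ])

def train_breathing_model(windows, labels) -> DecisionTreeRegressor:
    """windows: thermal measurement windows; labels: reference values of the
    parameter (e.g., breathing rate from a chest strap) for the same periods."""
    X = np.stack([window_features(w) for w in windows])
    return DecisionTreeRegressor(max_depth=6).fit(X, np.asarray(labels))

# At run time the label is unknown and the model predicts it:
# model = train_breathing_model(train_windows, train_labels)
# rate = model.predict(window_features(new_window)[None, :])[0]
```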
[0100] Various respiratory parameters can be important indicators for both emotional state and physical condition in general. Monitoring a person's respiratory parameters in day-to-day settings can be difficult due to the person's movements and activities. Additionally, when used for long periods (e.g., several hours each day), some current approaches for measuring some respiratory parameters may be impractical (e.g., a spirometer) and/or uncomfortable (e.g., systems involving chest straps). Thus, there is a need to be able to monitor a user's respiratory parameters in a comfortable way that is practical for long term monitoring.
[0101] Collecting thermal measurements of various regions of a user's face can have many health- related (and other) applications. In particular, thermal measurements of regions below the nostrils can enable monitoring of the user's respiration. However, movements of the user and/or of the user's head can make acquiring this data difficult with many of the known approaches. Some embodiments described herein utilize various combinations of head-mounted thermal cameras, which may be physically coupled to a HMS, in order to collect thermal measurements.
[0102] In one embodiment, a system configured to calculate a respiratory parameter includes an inward-facing head-mounted thermal camera (CAM) and a computer. CAM is worn on a user's head and takes thermal measurements of a region below the nostrils (THROI), where THROI are indicative of the exhale stream. The "region below the nostrils", which is indicative of the exhale stream, refers to one or more regions on the upper lip, the mouth, and/or air volume(s) through which the exhale streams from the nose and/or mouth flow. The flowing of the typically warm air of the exhale stream can change the temperature at the one or more regions, so thermal measurements of those one or more regions can provide information about properties of the exhale stream. The computer (i) generates feature values based on THROI, and (ii) utilizes a model to calculate a respiratory parameter based on the feature values. The respiratory parameter may be indicative of the user's breathing rate, and the model was trained based on previous THROI of the user taken during different days. FIG. 33 illustrates one embodiment of a system for calculating a respiratory parameter. The system includes a computer 445 and CAM that is coupled to the eyeglasses frame worn by the user 420 and provides THROI 443.
[0103] The computer 445 generates feature values based on THROI 443, and possibly other sources of data. Then the computer utilizes a model 442 to calculate, based on the feature values, a value 447 of the respiratory parameter. The value 447 may be indicative of at least one of the following: breathing rate, respiration volume, whether the user is breathing mainly through the mouth or through the nose, exhale (inhale) duration, post-exhale (post-inhale) breathing pause, a dominant nostril, a shape of the exhale stream, smoothness of the exhale stream, and/or temperature of the exhale stream. Optionally, the respiratory parameters calculated by the computer 445 may be indicative of the respiration volume.
Optionally, the value 447 is stored (e.g., for life-logging purposes) and/or forwarded to a software agent operating on behalf of the user (e.g., in order for the software agent to make a decision regarding the user).
[0104] The feature values generated by the computer 445 may include any of the feature values described in this disclosure that are utilized to detect a physiological response. Optionally, the thermal measurements may undergo various forms of filtering and/or normalization. For example, the feature values generated based on THROI may include: time series data comprising values measured by CAM, average values of certain pixels of CAM, and/or values measured at certain times by the certain pixels. Additionally, the feature values may include values generated based on additional measurements of the user taken by one or more additional sensors (e.g., measurements of heart rate, heart rate variability, brainwave activity, galvanic skin response, muscle activity, and/or an extent of movement). Additionally or alternatively, at least some of the feature values may include measurements of the environment the user is in, and/or confounding factors that may interfere with the detection.
[0105] A user interface (UI) 448 may be utilized to present the value 447 of the respiratory parameter and/or present an alert (e.g., to the user 420 and/or to a caregiver). In one example, UI 448 may be used to alert responsive to an indication that the value 447 reaches a threshold (e.g., when the breathing rate exceeds a certain value and/or after the user 420 spent a certain duration mouth breathing instead of nasal breathing). In another example, UI 448 may be used to alert responsive to detecting that the probability of a respiratory-related attack reaches a threshold.
[0106] In one embodiment, the value 447 may be indicative of the smoothness of the exhale stream. Optionally, the value 447 may be presented to the user 420 to increase the user's awareness of the smoothness of his/her exhale stream. Optionally, responsive to detecting that the smoothness is below a predetermined threshold, the computer 445 may issue an alert for the user 420 (e.g., via the UI 448) in order to increase the user's awareness of his/her breathing.
[0107] The model 442 is trained on data that includes previous THROI of the user 420 and possibly other users. Optionally, the previous measurements were taken on different days and/or over a period longer than a week. Training the model 442 typically involves generating samples based on the previous THROI and corresponding labels indicative of values of the respiratory parameter. The labels may come from different sources. In one embodiment, one or more of the labels may be generated using a sensor that is not a thermal camera, which may or may not be physically coupled to a frame worn by the user. The sensor's measurements may be analyzed by a human expert and/or a software program in order to generate the labels. In one example, the sensor is part of a smart shirt and/or chest strap that measures various respiratory (and other) parameters, such as Hexoskin™ smart shirt. In another embodiment, one or more of the labels may come from an external source such as an entity that observes the user, which may be a human observer or a software program. In yet another embodiment, one or more of the labels may be provided by the user, for example by indicating whether he/she is breathing through the mouth or nose and/or which nostril is dominant.
[0108] The samples used to train the model 442 usually include samples corresponding to different values of the respiratory parameter. In some embodiments, the samples used to train the model 442 include samples generated based on THROI taken at different times of the day, while being at different locations, and/or while conducting different activities. In one example, the samples are generated based on THROI taken in the morning and THROI taken in the evening. In another example, the samples are generated based on THROI of a user taken while being indoors, and THROI of the user taken while being outdoors. In yet another example, the samples are generated based on THROI taken while a user was sitting down, and THROI taken while the user was walking, running, and/or engaging in physical exercise (e.g., dancing, biking, etc.).
[0109] Additionally or alternatively, the samples used to train the model 442 may be generated based on THROI taken while various environmental conditions persisted. For example, the samples include first and second samples generated based on THROI taken while the environment had first and second temperatures, with the first temperature being at least 10°C warmer than the second temperature. In another example, the samples include samples generated based on measurements taken while there were different extents of direct sunlight and/or different extents of wind blowing.
[0110] Various computational approaches may be utilized to train the model 442 based on the samples described above. In one example, training the model 442 may involve selecting a threshold based on the samples. Optionally, if a certain feature value reaches the threshold then a certain respiratory condition is detected (e.g., unsmooth breathing). Optionally, the model 442 includes a value describing the threshold. In another example, a machine learning-based training algorithm may be utilized to train the model 442 based on the samples. Optionally, the model 442 includes parameters of at least one of the following types of models: a regression model, a neural network, a nearest neighbor model, a support vector machine, a support vector machine for regression, a naive Bayes model, a Bayes network, and a decision tree.
[0111] In some embodiments, a deep learning algorithm may be used to train the model 442. In one example, the model 442 may include parameters describing multiple hidden layers of a neural network. In one embodiment, when THROI include measurements of multiple pixels, the model 442 may include a convolutional neural network (CNN). In one example, the CNN may be utilized to identify certain patterns in the thermal images, such as patterns of temperatures in the region of the exhale stream that may be indicative of a respiratory parameter, which involve aspects such as the location, direction, size, and/or shape of an exhale stream from the nose and/or mouth. In another example, calculating a value of a respiratory parameter, such as the breathing rate, may be done based on multiple, possibly successive, thermal measurements. Optionally, calculating values of the respiratory parameter based on thermal measurements may involve retaining state information that is based on previous measurements. Optionally, the model 442 may include parameters that describe an architecture that supports such a capability. In one example, the model 442 may include parameters of a recurrent neural network (RNN), which is a connectionist model that captures the dynamics of sequences of samples via cycles in the
network's nodes. This enables RNNs to retain a state that can represent information from an arbitrarily long context window. In one example, the RNN may be implemented using a long short-term memory (LSTM) architecture. In another example, the RNN may be implemented using bidirectional recurrent neural network architecture (BRNN).
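By way of a non-limiting illustration, the following sketch shows an LSTM-based architecture of the kind described above, implemented in PyTorch. The input dimensions, hidden size, and the single regression output (e.g., breathing rate) are assumptions for the example, not the trained model 442 itself.

```python
import torch
import torch.nn as nn

class BreathingLSTM(nn.Module):
    """Maps a sequence of thermal pixel vectors to one respiratory value."""
    def __init__(self, n_pixels: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_pixels, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)  # e.g., breathing rate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_pixels). The LSTM's retained state is what
        # lets the model use an arbitrarily long context of previous
        # measurements, as described above.
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1]).squeeze(-1)

# model = BreathingLSTM(n_pixels=16)
# rate = model(torch.randn(1, 300, 16))  # 300 frames of 16 thermal pixels
```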
[0112] The computer 445 may detect a respiratory-related attack (such as an asthma attack, an epileptic attack, an anxiety attack, a panic attack, and a tantrum) based on feature values generated based on THROI 443. The computer 445 may further receive additional inputs (such as indications of consuming a substance, a situation of the user, and/or thermal measurements of the forehead), and detect the respiratory-related attack based on the additional inputs. For example, the computer 445 may generate one or more of the feature values used to calculate the value 447 based on the additional inputs.
[0113] In a first embodiment, the computer 445 utilizes an indication of consumption of a substance to detect a respiratory-related attack. Optionally, the model 442 is trained based on: a first set of THROI taken while the user experienced a respiratory-related attack after consuming the substance, and a second set of THROI taken while the user did not experience a respiratory-related attack after consuming the substance. The duration to which "after consuming" refers depends on the substance and may last from minutes to hours. Optionally, the consuming of the substance involves consuming a certain drug and/or consuming a certain food item, and the indication is indicative of the time and/or the amount consumed.
[0114] In a second embodiment, the computer 445 utilizes an indication of a situation of the user to detect a respiratory-related attack. Optionally, the model 442 is trained based on: a first set of THROI taken while the user was in the situation and experienced a respiratory-related attack, and a second set of THROI taken while the user was in the situation and did not experience a respiratory-related attack. Optionally, the situation involves (i) interacting with a certain person, (ii) a type of activity the user is conducting, selected from at least two different types of activities associated with different levels of stress, and/or (iii) a type of activity the user is about to conduct (e.g., within thirty minutes), selected from at least two different types of activities associated with different levels of stress.
[0115] In a third embodiment, the system includes another CAM that takes thermal measurements of a region on the forehead (THF) of the user, and the computer 445 detects a respiratory-related attack based on THROI and THF. For example, THROI and THF may be utilized to generate one or more of the feature values used to calculate the value indicative of the probability that the user is experiencing, or is about to experience, the respiratory-related attack. Optionally, the model 442 was trained based on a first set of THROI and THF taken while the user experienced a respiratory-related attack, and a second set of THROI and THF taken while the user did not experience a respiratory-related attack.
[0116] The system may optionally include a sensor 435 that takes measurements mmove 450 that are indicative of movements of the user 420; the system further detects the physiological response based on mmove 450. The sensor 435 may include one or more of the following sensors: a gyroscope and/or an accelerometer, an outward-facing visible-light camera (that feeds an image processing algorithm to detect movement from a series of images), a miniature radar (such as low-power radar operating in the range
between 30 GHz and 3,000 GHz), a miniature active electro-optics distance measurement device (such as a miniature Lidar), and/or a triangulation wireless device (such as a GPS receiver). Optionally, the sensor 435 is physically coupled to the frame or belongs to a device carried by the user (e.g., a smartphone or a smartwatch).
[0117] In a first embodiment, the computer 445 may detect the respiratory-related attack if the value 447 of the respiratory parameter reaches a first threshold, while mmove 450 do not reach a second threshold. In one example, reaching the first threshold indicates a high breathing rate, which may be considered too high for the user. Additionally, in this example, reaching the second threshold may mean that the user is conducting arduous physical activity. Thus, if the user is breathing too fast and this is not because of physical activity, then the computer 445 detects this as an occurrence of a respiratory-related attack (e.g., an asthma attack or a panic attack).
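By way of a non-limiting illustration, the two-threshold rule of this embodiment might be expressed as follows; the threshold values and parameter names are assumptions for the example.

```python
def detect_attack(breathing_rate: float, movement_level: float,
                  rate_threshold: float = 25.0,
                  movement_threshold: float = 1.5) -> bool:
    """Flag a respiratory-related attack when breathing is fast while the
    movement measurements do not indicate arduous physical activity."""
    return (breathing_rate >= rate_threshold
            and movement_level < movement_threshold)
```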
[0118] In a second embodiment, the computer 445 may generate feature values based on mmove 450 in addition to THROI 443, and utilize an extended model to calculate, based on these feature values, a value indicative of the probability that the user is experiencing, or is about to experience, the respiratory-related attack. In one example, the feature values used along with the extended model (which may be the model 442 or another model) include one or more of the following: (i) values comprised in THROI 443, (ii) values of a respiratory parameter of the user 420, which are generated based on THROI 443, (iii) values generated based on additional measurements of the user 420 (e.g., measurements of heart rate, heart rate variability, brainwave activity, galvanic skin response, muscle activity, and an extent of movement), (iv) measurements of the environment the user 420 was in while THROI 443 were taken, (v) indications of various occurrences which may be considered confounding factors (e.g., touching the face, thermal radiation directed at the face, or airflow directed at the face), and/or (vi) values indicative of movements of the user (which are based on mmove 450).
[0119] The extended model is trained on samples generated from prior mmove and THROI, and corresponding labels indicating times of having the respiratory-related attack. The labels may come from various sources, such as measurements of the user (e.g., to detect respiratory distress), observations by a human and/or software, and/or self-reports by the user. The samples used to train the extended model may be generated based on measurements taken over different days, and encompass measurements taken when the user was in different situations.
[0120] Usually the exhaled air warms the skin below the nostrils, while during an inhale the skin below the nostrils cools. This enables the system to identify an exhale based on measuring an increase in the temperature of the skin below the nostrils, and to identify an inhale based on measuring a decrease in the temperature of the skin below the nostrils.
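By way of a non-limiting illustration, the following sketch labels each sample as belonging to an exhale or an inhale by the sign of the smoothed temperature slope, per the observation above. The smoothing window and input names are assumptions for the example.

```python
import numpy as np

def label_breathing_phases(temps: np.ndarray, fs: float,
                           smooth_sec: float = 0.5) -> np.ndarray:
    """Return +1 where the skin below the nostrils warms (exhale) and -1
    where it cools (inhale); temps is sampled at fs Hz."""
    window = max(1, int(fs * smooth_sec))
    kernel = np.ones(window) / window
    smoothed = np.convolve(temps, kernel, mode="same")  # suppress noise
    slope = np.gradient(smoothed)
    return np.where(slope >= 0, 1, -1)
```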
[0121] Synchronizing a physical effort with the breathing is highly recommended by therapists and sport instructors. For example, some elderly and/or unfit people can find it difficult to stand up and/or make other physical efforts because many of them do not exhale while making the effort, and/or do not synchronize the physical effort with their breathing. These people can benefit from a system that reminds
them to exhale while making the effort, and/or helps them synchronize the physical effort with their breathing. As another example, in many kinds of physical activities it is highly recommended to exhale while making a physical effort and/or during certain movements (such as exhaling while bending down in Uttanasana).
[0122] In one embodiment, the computer 445 determines based on mmove 450 and THROI 443 whether the user exhaled while making a physical effort above a predetermined threshold. Optionally, the computer receives a first indication that the user is making or is about to make the physical effort, commands a user interface (UI) to suggest to the user to exhale while making the physical effort, and commands the UI to play a positive feedback in response to determining that the user managed to exhale while making the physical effort. Additionally, the computer may further command the UI to play an explanation of why the user should try next time to exhale while making the physical effort, in response to determining that the user did not exhale while making the physical effort.
[0123] FIG. 23a to FIG. 24c illustrate how the system described above may help train an elderly user to exhale during effort. In FIG. 23a the system identifies that the user inhaled rather than exhaled while getting up from a sitting position in a chair; the system alerts the user about this finding and suggests that next time the user should exhale while getting up. In FIG. 23b, the system identifies that the user exhaled at the correct time and commends the user on doing so. Examples of physical efforts include standing up, sitting down, manipulating with the hands an item that requires applying a significant force, defecating, dressing, leaning over, and/or lifting an item.
[0124] In FIG. 24a the system identifies that the user inhaled rather than exhaled while bending down to the dishwasher, and presents a thumbs-down signal (e.g., on the user's smartphone). In FIG. 24b the system identifies that the user exhaled while bending down to the dishwasher, and presents a thumbs-up signal. FIG. 24c illustrates a smartphone app for counting the thumbs-up and thumbs-down signals identified during a day. The app may show various statistics, such as thumbs-up / thumbs-down during the past week, since starting to train with the app, according to the locations the user is in, while being with certain people, and/or organized according to types of exercises (such as a first counter for yoga, a second counter for housework, and a third counter for breathing during work time).
[0125] In one embodiment, the computer 445: (i) receives from a fitness app (also known as a personal trainer app) an indication that the user should exhale while making a movement, (ii) determines, based on mmove, when the user is making the movement, and (iii) determines, based on THROI, whether the user exhaled while making the movement. Optionally, the computer commands the UI to (i) play a positive feedback in response to determining that the user managed to exhale while making the physical effort, and/or (ii) play an alert and/or an explanation of why the user should try next time to exhale while making the physical effort in response to determining that the user did not exhale while making the physical effort. FIG. 25a illustrates a fitness app running on smartphone 196, which instructs the user to exhale while bending down. CAM coupled to eyeglasses frame 181 measures the user's breathing and is utilized by the fitness app that helps the user to exhale correctly. FIG. 25b illustrates instructing the user to inhale
while straightening up.
[0126] In another embodiment, the computer 445: (i) receives from a fitness app a certain number of breath cycles during which the user should perform a physical exercise, such as keeping a static yoga pose for a certain number of breath cycles, or riding a spin bike at a certain speed for a certain number of breath cycles, (ii) determines, based on mmove, when the user performs the physical exercise, and (iii) counts, based on THROI, the number of breath cycles the user had while performing the physical exercise. Optionally, the computer commands the UI to play an instruction to switch to another physical exercise responsive to detecting that the user performed the physical exercise for the certain number of breath cycles. Additionally or alternatively, the computer commands the UI to play a feedback that refers to the number of counted breath cycles responsive to detecting that the user performed the physical exercise for a number of breath cycles that is lower than the certain number of breath cycles. FIG. 26 illustrates a fitness app running on smartphone 197, which instructs the user to stay in a triangle pose for 8 breath cycles. CAM coupled to eyeglasses frame 181 measures the breathing and is utilized by the fitness app that calculates the breath cycles and counts the time to stay in the triangle pose according to the measured breath cycles.
[0127] The durations of exhaling and inhaling (denoted herein texhale and tinhale, respectively) can have various physiological effects. For example, for some users, breathing with prolonged inhales (relative to the exhales) can increase the possibility of suffering an asthma attack. In particular, keeping the duration of exhaling longer than the duration of inhaling (i.e., texhale/tinhale > 1, and preferably texhale/tinhale ≥ 2) may provide many benefits, such as having a calming effect and relieving asthma symptoms. In one embodiment, a computer is further configured to calculate, based on THROI, the ratio between exhale and inhale durations (texhale/tinhale).
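By way of a non-limiting illustration, the ratio texhale/tinhale could be computed from the phase labels produced by a segmentation such as the one sketched after paragraph [0120] (+1 for exhale, -1 for inhale); the names are assumptions for the example.

```python
import numpy as np

def exhale_inhale_ratio(phases: np.ndarray) -> float:
    """Ratio of total exhale time to total inhale time in a window of
    phase labels (+1 exhale, -1 inhale)."""
    exhale_samples = int(np.sum(phases == 1))
    inhale_samples = int(np.sum(phases == -1))
    if inhale_samples == 0:
        raise ValueError("window contains no inhale samples")
    return exhale_samples / inhale_samples

# e.g., suggest prolonging the exhale when the ratio drops below 1.5:
# if exhale_inhale_ratio(phases) < 1.5:
#     ui.suggest("prolong your exhale")  # ui is a hypothetical interface
```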
[0128] Many people are not aware of their breathing most of the time. These people can benefit from a system that is able to calculate texhale/tinhale and provide them with feedback when it is beneficial to increase the ratio. In one embodiment, a computer suggests to the user, via the UI, to increase texhale/tinhale when it falls below a threshold. Optionally, the computer occasionally updates the calculation of texhale/tinhale, and suggests to progressively increase texhale/tinhale at least until reaching a ratio of 1.5. Optionally, the computer stops suggesting to the user to increase texhale/tinhale responsive to identifying that texhale/tinhale ≥ 1.5. In another embodiment, the computer is configured to: (i) receive a first indication that the user's stress level reaches a first threshold, (ii) identify, based on THROI, that the ratio between exhaling and inhaling durations (texhale/tinhale) is below a second threshold that is below 1.5, and (iii) command the UI to suggest to the user to prolong the exhale until texhale/tinhale reaches a third threshold that is at least 1.5.
[0129] FIG. 21 illustrates a situation in which an alert is issued to a user when it is detected that the ratio texhale/tinhale is too low. Another scenario in which such an alert may be issued to a user is illustrated in FIG. 29, which shows a virtual robot that the user sees via augmented reality (AR). The robot urges the user to increase the ratio between the duration of the user's exhales and inhales in order to alleviate the stress that builds up. Monitoring of respiratory parameters, and in particular the ratio texhale/tinhale, can help
a user address a variety of respiratory-related symptoms, as described in the following examples.
[0130] Asthma attacks are related to a person's breathing. Identifying certain changes in respiratory parameters, such as breathing rate above a predetermined threshold, can help a computer to detect an asthma attack based on the thermal measurements. Optionally, the computer utilizes a model, which was trained on previous measurements of the user taken while the user had an asthma attack, to detect the asthma attack based on the thermal measurements. FIG. 30 illustrates an asthmatic patient who receives an alert (e.g., via an augmented reality display) that his breathing rate increased to an extent that often precedes an asthma attack. In addition to the breathing rate, the computer may base its determination that an asthma attack is imminent on additional factors, such as sounds and/or movement analysis as described below.
[0131] In a first embodiment, the computer may receive recordings of the user obtained with a microphone. Such recordings may include sounds that can indicate that an asthma attack is imminent; these sounds may include: asthmatic breathing sounds, asthma wheezing, and/or coughing. Optionally, the computer analyzes the recordings to identify occurrences of one or more of the above sounds. Optionally, taking into account the recordings of the user can affect how the computer issues alerts regarding an imminent asthma attack. For example, a first alert provided to the user in response to identifying the increase in the user's breathing rate above the predetermined threshold without identifying at least one of the body sounds may be less intense than a second alert provided to the user in response to identifying both the increase in the user's breathing rate above the predetermined threshold and at least one of the body sounds. Optionally, in the example above, the first alert may not be issued to the user at all.
[0132] In a second embodiment, the computer may receive measurements obtained from a movement sensor worn by the user and configured to measure user movements. Some movements that may be measured and may be related to an asthma attack include: spasms, shivering, and/or sagittal plane movements indicative of one or more of asthma wheezing, coughing, and/or chest tightness. Optionally, the computer analyzes the measurements of the movement sensor to identify occurrences of one or more of the above movements. Optionally, considering the measured movements can affect how the computer issues alerts regarding an imminent asthma attack. For example, a first alert provided to the user in response to identifying an increase in the user's breathing rate above a predetermined threshold, without measuring a movement related to an asthma attack, is less intense than a second alert provided to the user in response to identifying the increase in the user's breathing rate above the predetermined threshold while measuring a movement related to an asthma attack.
[0133] In some embodiments, a first alert may be considered less intense than a second alert if it is less likely to draw the user's attention. For example, the first alert may not involve a sound effect or involve a low-volume effect, while the second alert may involve a sound effect (which may be louder than the first's). In another example, the first alert may involve a weaker visual cue than the second alert (or no visual cue at all). Examples of visual cues include flashing lights on a device or images brought to the
foreground on a display. In still another example, the first alert is not provided to the user and therefore does not draw the user's attention (while the second alert is provided to the user).
[0134] In one embodiment, responsive to a determination that an asthma attack is imminent, the UI suggests to the user to take a precaution, such as increasing texhale/tinhale, performing various breathing exercises (e.g., exercises that involve holding the breath), and/or taking medication (e.g., medication administered using an inhaler), in order to decrease the severity of the imminent asthma attack or prevent it altogether. Optionally, detecting the signs of an imminent asthma attack includes identifying an increase in the breathing rate above a predetermined threshold.
[0135] Stress is also related to a person's breathing. In one embodiment, a computer receives a first indication that the user's stress level reaches a threshold and receives a second indication (i) that the ratio between exhaling and inhaling durations is below 1.5 (texhale/tinhale < 1.5), and/or (ii) that the user's breathing rate reached a predetermined threshold. Then the computer may command a UI to suggest to the user to increase texhale/tinhale to at least 1.5. Optionally, the computer receives the first indication from a wearable device, calculates texhale/tinhale based on THROI (which is indicative of the exhale stream), and commands the UI to provide the user with an auditory and/or visual feedback indicative of the change in texhale/tinhale in response to the suggestion to increase the ratio. Optionally, the computer may command the UI to update the user about changes in the stress level in response to increasing texhale/tinhale, and may provide positive reinforcement to help the user to maintain the required ratio at least until a certain improvement in the stress level is achieved.
[0136] FIG. 22 illustrates one embodiment of a system configured to collect thermal measurements related to respiration, in which four inward-facing head-mounted thermal cameras (CAMs) are coupled to the bottom of an eyeglasses frame 181. CAMs 182 and 185 are used to take thermal measurements of regions on the right and left sides of the upper lip (186 and 187, respectively), and CAMs 183 and 184 are used to take thermal measurements of a region on the user's mouth 188 and/or a volume protruding out of the user's mouth. At least some of the ROIs may overlap, which is illustrated as vertical lines in the overlapping areas. Optionally, one or more of the CAMs includes a microbolometer focal-plane array (FPA) sensor or a thermopile FPA sensor.
[0137] In one embodiment, a computer detects whether the user is breathing mainly through the mouth or through the nose based on measurements taken by CAMs 182, 183, 184 and 185. Optionally, the system helps the user to prefer breathing through the nose instead of breathing through the mouth by notifying the user when he/she is breathing through the mouth, and/or by notifying the user that the ratio between mouth breathing and nose breathing reaches a predetermined threshold. In one embodiment, the computer detects whether the user is breathing mainly through the right nostril or through the left nostril based on measurements taken by CAMs 182 and 185.
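By way of a non-limiting illustration, the following sketch compares breathing-related amplitudes at the nostril ROIs (measured by CAMs 182 and 185) with the amplitude at the mouth ROI (measured by CAMs 183 and 184) to classify the breathing route. The amplitude inputs and the margin are assumptions for the example.

```python
def classify_breathing_route(nose_right: float, nose_left: float,
                             mouth: float, margin: float = 1.2) -> str:
    """Compare breathing-related temperature-swing amplitudes at the
    nostril ROIs with the amplitude at the mouth ROI."""
    nasal = nose_right + nose_left
    if nasal > margin * mouth:
        return "nose"
    if mouth > margin * nasal:
        return "mouth"
    return "mixed"
```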
[0138] The system may further include a VCAM 189 to take images (IM) of a region on the nose and/or mouth, which are used to calculate a respiratory parameter (e.g., detect whether the user is breathing mainly through the mouth or through the nose, detect the inhale duration, and/or detect the post-
inhale pause duration). In one embodiment, one or more feature values may be generated based on IM. The feature values may be generated using various image processing techniques and represent various low-level image properties. Some examples of such features may include features generated using Gabor filters, local binary patterns and their derivatives, features generated using algorithms such as SIFT, SURF, and/or ORB, and features generated using PCA or LDA. The one or more feature values may be utilized in the calculation of the respiratory parameter in addition to feature values generated based on the thermal measurements.
[0139] In one embodiment, the inward-facing head-mounted visible-light camera 189 takes images of a region on the user's mouth, and IM are indicative of whether the mouth is open or closed. A computer utilizes a model to detect, based on IM and THROI (such as the thermal measurements taken by at least one of CAMs 182-185), whether the user is breathing mainly through the mouth or through the nose. Optionally, the model was trained based on: a first set of THROI taken while IM was indicative that the mouth is open, and a second set of THROI taken while IM was indicative that the mouth is closed. Optionally, the system may help the user to prefer breathing through the nose instead of breathing through the mouth by notifying the user when he/she is breathing through the mouth, and/or by notifying the user that the ratio between mouth breathing and nose breathing reaches a predetermined threshold. FIG. 27 illustrates notifying the user that she breathes mainly through the mouth and should switch to breathing through the nose, while performing a physical exercise such as spinning. FIG. 28 illustrates an exemplary UI that shows statistics about the dominant nostril and mouth breathing during the day.
[0140] In one embodiment, the inward-facing head-mounted visible-light camera 189 takes images of a region on the nose, and the computer identifies an inhale (and/or differentiates between an inhale and a breathing pause that follows the inhale) based on image processing of IM to detect movements of the nose, especially at the edges of the nostrils, which are indicative of inhaling.
[0141] FIG. 20 illustrates another embodiment of a system configured to collect thermal measurements related to respiration, in which four CAMs are coupled to a football helmet. CAMs 190 and 191 are used to take thermal measurements of regions on the right and left sides of the upper lip (which appear as shaded regions on the user's face), and CAMs 192 and 193 are used to take thermal measurements of a region on the user's mouth and/or a volume protruding out of the user's mouth. The illustrated CAMs are located outside of the exhale streams of the mouth and nostrils in order to maintain good measurement accuracy even when using thermal sensors such as thermopiles.
[0142] In some embodiments, the system further includes at least one in-the-ear earbud comprising a microphone to measure sounds inside the ear canal. A computer may identify an inhale based on audio signal analysis of the recordings from the earbud. Optionally, the inhale sounds measured by the earbud are stronger when the dominant nostril is the nostril closer to the ear in which the earbud is plugged in, compared to the inhale sounds measured by the earbud when the other nostril is the dominant nostril. Optionally, the computer detects whether the user is breathing mainly through the mouth or through the nose based on the thermal measurements and the sounds measured by the earbud. The system then
can help the user to prefer nasal breathing over mouth breathing by alerting the user when he/she breathes mainly through the mouth.
[0143] When a person breathes primarily through the nose (i.e., nasal breathing), one of the nostrils may be more dominant than the other nostril, with most of the exhaled air flowing through it. The right and the left nostrils may switch roles as the dominant nostril several times a day, and sometimes the breathing is essentially the same through both nostrils. Research has shown that brain activity, and in particular, which hemisphere is relatively more effective, is correlated with the dominant nostril. When the left nostril is dominant, the right hemisphere ("right-brain") is typically more effective at performing certain activities associated with it, and when the right nostril is dominant, the left hemisphere ("left-brain") is more effective at performing certain activities associated with it.
[0144] Since each side of the brain plays a different role in different types of activities, it has long been believed by yoga practitioners, and to some extent confirmed in research, that it is better to conduct certain activities when a certain nostril is dominant. For example, exercising, eating, digesting, and taking care of the body (e.g., defecating) are better done when the right nostril is dominant. Various forms of mental activity, such as planning, memorizing, writing, thinking, etc. are better done when the left nostril is dominant.
[0145] Scheduling activities according to the dominant nostril provides benefits; however, keeping track of which nostril is dominant is not an easy task for most people. Thus, there is a need for a way to automatically keep track of the dominant nostril and utilize this information in order to organize activities to coincide with periods in which the favorable nostril is dominant.
[0146] In some embodiments, the dominant nostril at a given time is the nostril through which most of the air is exhaled (with a closed mouth). Optionally, the dominant nostril is the nostril through which at least 70% of the air is exhaled. The different types of nostril dominance are illustrated in FIG. 31a to FIG. 31c. FIG. 31a is a schematic illustration of a left dominant nostril (note the significantly larger exhale stream from the left nostril). FIG. 31b is a schematic illustration of a right dominant nostril. And FIG. 31c is a schematic illustration of a balanced nasal breathing.
[0147] FIG. 32 is a schematic illustration of one embodiment of a system configured to identify the dominant nostril. The system includes at least one CAM 750, a computer 752, and an optional UI 754. CAM 750 may be similar to the CAMs in FIG. 22. CAM 750 takes thermal measurements of first and second ROIs below the right and left nostrils (THROI1 and THROI2, respectively) of the user. Optionally, each CAM does not occlude any of the user's mouth and nostrils. Optionally, each CAM is located less than 15 cm from the user's face and above the user's upper lip. Optionally, each CAM weighs below 10 g or below 2 g, and utilizes microbolometer or thermopile sensors. Optionally, each CAM includes multiple sensing elements that are configured to take THROI1 and/or THROI2. In one example, each CAM includes at least 6 sensing elements, and each of THROI1 and THROI2 is based on measurements of at least 3 sensing elements. Optionally, the system includes a frame to which CAM is physically coupled.
[0148] In one embodiment, the at least one CAM includes at least first and second thermal cameras
(CAM1 and CAM2, respectively) that take THROI1 and THROI2, respectively, located less than 15 cm from the user's face. CAM1 is physically coupled to the right half of the frame and captures the exhale stream from the right nostril better than it captures the exhale stream from the left nostril, and CAM2 is physically coupled to the left half of the frame and captures the exhale stream from the left nostril better than it captures the exhale stream from the right nostril.
[0149] The computer identifies the dominant nostril based on THROI1 and THROI2 (and possibly other data, such as THROI3), which were taken during a certain duration. Optionally, the certain duration is longer than at least one of the following durations: a duration of one exhale, a duration of one or more breathing cycles, half a minute, a minute, and five minutes.
[0150] In one embodiment, the computer utilizes a model to identify the dominant nostril. Optionally, the model was trained based on previous THROI1, THROI2, and indications indicative of which of the nostrils was dominant while the previous THROI1 and THROI2 were taken. In one example, the computer generates feature values based on THROI1 and THROI2 (and optionally THROI3), and utilizes the model to calculate, based on the feature values, a value indicative of which of the nostrils is dominant.
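By way of a non-limiting illustration, the following sketch identifies the dominant nostril by using the temperature swing at each ROI as a proxy for the strength of the corresponding exhale stream; the 40%-60% balanced band follows paragraph [0151] below, while the proxy itself is an assumption for the example.

```python
import numpy as np

def dominant_nostril(th_roi1: np.ndarray, th_roi2: np.ndarray) -> str:
    """th_roi1 measures below the right nostril, th_roi2 below the left;
    each covers several breathing cycles."""
    # Use the temperature swing as a proxy for the exhale stream's strength.
    right, left = th_roi1.std(), th_roi2.std()
    if right + left == 0:
        raise ValueError("no breathing-related signal in either ROI")
    right_share = right / (right + left)
    if right_share >= 0.6:
        return "right"
    if right_share <= 0.4:
        return "left"
    return "balanced"  # streams essentially equal (40%-60% split)
```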
[0151] In one embodiment, the computer identifies whether the user's breathing may be considered balanced breathing. Optionally, breathing is considered balanced breathing when the streams through the right and the left nostrils are essentially equal, such as when the extent of air exhaled through the left nostril is 40% to 60% of the total of the air exhaled through the nose. Balanced breathing of a normal healthy human usually lasts 1-4 minutes during the time of switching between the dominant nostrils. Optionally, the computer notifies the user when the user's breathing is balanced. Optionally, the computer suggests to the user, via a UI, to meditate during the balanced breathing.
[0152] The total time the different nostrils remain dominant may be indicative of various medical conditions. In one embodiment, when there is a significant imbalance of the daily total time of left nostril dominance compared to total time of right nostril dominance, and especially if this condition continues for two or more days (and is significantly different from the user's average statistics), it may be an indication of an approaching health problem. For example, when the total time of left nostril dominance is greater than the total time of right nostril dominance, the approaching problem may be more mentally related than physically related; and when the total time of right nostril dominance is greater than the total time of left nostril dominance, the approaching problem may be more physically related than mentally related. In another embodiment, a greater extent of left nostril dominance is related to digestion problems, inner gas, diarrhea, and male impotence; and a greater extent of right nostril dominance may be related to high blood pressure, acid reflux, and ulcers.
[0153] In one embodiment, the computer monitors nostril dominance over a certain period, and issues an alert when at least one of the following occurs: (i) a ratio between the total times of the right and left nostril dominance during the certain period reaches a threshold (e.g., the threshold may be below 0.3 or above 0.7), (ii) an average time to switch from right to left nostril dominance reaches a threshold (e.g., a threshold longer than 3 hours), and (iii) an average time to switch from left to right nostril dominance
reaches a threshold.
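By way of a non-limiting illustration, the monitoring logic of paragraph [0153] might be sketched as follows, assuming a chronological log of timestamped dominance records; the thresholds follow the examples above, and the log format is an assumption for the example.

```python
def dominance_alerts(log, ratio_low=0.3, ratio_high=0.7,
                     max_switch_gap_sec=3 * 3600):
    """log: chronological (timestamp_seconds, 'right'/'left') records."""
    alerts = []
    # Attribute the time between consecutive records to the earlier state.
    right = sum(t1 - t0 for (t0, s), (t1, _) in zip(log, log[1:])
                if s == "right")
    left = sum(t1 - t0 for (t0, s), (t1, _) in zip(log, log[1:])
               if s == "left")
    if right + left > 0:
        ratio = right / (right + left)
        if ratio < ratio_low or ratio > ratio_high:
            alerts.append(f"right/left dominance ratio is {ratio:.2f}")
    # Average time between switches of the dominant nostril.
    switches = [t1 for (t0, s0), (t1, s1) in zip(log, log[1:]) if s0 != s1]
    gaps = [b - a for a, b in zip(switches, switches[1:])]
    if gaps and sum(gaps) / len(gaps) > max_switch_gap_sec:
        alerts.append("dominant nostril is switching too slowly")
    return alerts
```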
[0154] The following are some examples of various applications in which the computer may utilize information about the dominant nostril, which is identified based on THROI1 and THROI2, in order to assist the user in various ways.
[0155] For some people, a certain dominant nostril may be associated with a higher frequency of having certain health problems, such as an asthma attack or a headache. Making a person aware of which nostril is more associated with the health problem can help the user to alleviate the health problem by switching the dominant nostril. Two examples of ways to switch the dominant nostril include: (i) to plug the current dominant nostril and breathe through the other nostril; and (ii) to lay on the side of the current dominant nostril (i.e., lying on the left side to switch from left to right dominant nostril, and vice versa). In one embodiment, the computer detects that the user is having an asthma attack, notifies the user about the current dominant nostril (which is associated with a higher frequency of asthma attacks), and suggests to switch the dominant nostril (to alleviate the asthma attack). In another embodiment, the computer detects the user has a headache, notifies the user about the current dominant nostril (which is associated with a higher frequency of headaches), and suggests to switch the dominant nostril.
[0156] In one embodiment, the length of the exhale stream is considered the distance from the nose at which the exhale stream can still be detected. For each person, there is a threshold that may change during the day and responsive to different situations. When the length of the exhale stream is below the threshold, it may indicate that the person is calm; and when the length of the exhale stream is longer than the threshold, it may indicate excitement. In general, the shorter the length of the exhale stream, the less energy is invested in the breathing process and the less stress the person experiences. An exception may be arduous physical activity (which can increase the length of the exhale stream due to larger volumes of air that are breathed). In one embodiment, THROI1 and THROI2 are indicative of the length of the exhale stream, and the computer calculates a level of excitement of the user based on the length of the exhale stream. Optionally, the longer the length, the higher the excitement/stress, and vice versa. In one example, the at least one CAM uses multiple sensing elements to take thermal measurements of regions located at different lengths below the nostrils. In this example, the larger the number of the sensing elements that detect the exhale stream, the longer the length of the exhale stream. Optionally, the amplitude of the temperature changes measured by the sensing elements is also used to estimate the length, shape, and/or uniformity of the exhale stream.
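By way of a non-limiting illustration, the following sketch estimates the length of the exhale stream from a column of sensing elements at increasing distances below the nostrils, per the example above; the element spacing and amplitude threshold are assumptions.

```python
import numpy as np

def exhale_stream_length(amplitudes: np.ndarray, spacing_mm: float = 5.0,
                         min_amplitude: float = 0.05) -> float:
    """amplitudes[i] is the temperature swing at the i-th sensing element,
    ordered by increasing distance below the nostrils; returns mm."""
    detected = amplitudes > min_amplitude
    if not detected.any():
        return 0.0
    # The farthest element that still detects the exhale bounds the length.
    farthest = int(np.max(np.nonzero(detected)))
    return (farthest + 1) * spacing_mm
```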
[0157] Ancient yoga texts teach that learning to extend the duration of the time gaps between inhaling and exhaling, and/or between exhaling and inhaling, increases life span. In one embodiment, the computer assists the user to extend the duration of the time gap between inhaling and exhaling by performing at least one of the following: (i) calculating the average time gap between inhaling and exhaling over a predetermined duration, and providing the calculation to the user via a user interface (UI), (ii) calculating the average time gap between inhaling and exhaling over a first predetermined duration, and reminding the user via the UI to practice extending the duration when the average time gap is shorter
than a first predetermined threshold, and (iii) calculating the average time gap between inhaling and exhaling over a second predetermined duration, and encouraging the user via the UI when the average time gap reaches a second predetermined threshold. It is to be noted that stopping the breath after exhaling is considered more beneficial but also more dangerous; therefore, the system may enable the user to select different required durations for stopping the breathing after inhaling and for stopping the breathing after exhaling.
[0158] Typically, the dominant nostril switches sides throughout the day, with the duration between each switch varying, depending on the individual and other factors. Disruption of the typical nasal switching cycle may be indicative of physiological imbalance, emotional imbalance, and/or sickness. For example, slower switching of the dominant nostril may be, in some cases, a precursor of some diseases. In one embodiment, the computer learns the typical sequence of switching between dominant nostrils based on previous measurements of the user taken over more than a week, and issues an alert upon detecting an irregularity in the sequence of changes between the dominant nostrils. In one example, the irregularity involves a switching of the dominant nostril within a period of time that is shorter than a certain period typical for the user, such as shorter than forty minutes. In another example, the irregularity involves a lack of switching of the dominant nostril for a period that is greater than a certain period typical for the user, such as longer than three hours. In yet another example, the cycles of the dominant nostril may be described as a time series (e.g., stating for each minute a value indicative of the dominant nostril). In this example, the computer may have a record of previous time series of the user, acquired when the user was healthy, and the computer may compare the time series to one or more of the previous time series in order to determine whether a sufficiently similar match is found. A lack of such a similar match may be indicative of the irregularity.
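By way of a non-limiting illustration, the time-series comparison in the last example might be sketched as follows, assuming each series records the dominant nostril per minute; the similarity measure and agreement threshold are assumptions for the example.

```python
import numpy as np

def is_irregular(today: np.ndarray, healthy_series: list,
                 min_agreement: float = 0.7) -> bool:
    """Flag an irregularity when today's per-minute dominance series
    (+1 right, -1 left) does not sufficiently match any series recorded
    while the user was healthy."""
    for past in healthy_series:
        m = min(len(today), len(past))
        if m == 0:
            continue
        agreement = float(np.mean(today[:m] == np.asarray(past)[:m]))
        if agreement >= min_agreement:
            return False  # a sufficiently similar match was found
    return True
```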
[0159] As discussed above, the shape of the exhale stream (SHAPE) from the nostrils changes over time. With the at least one CAM it is possible, in some embodiments, to obtain measurements indicative of at least some of the different typical SHAPEs. A non-limiting reason for the system's ability to measure the different SHAPEs is that the exhale stream has a higher temperature than both the typical temperature of the environment and the typical temperature of the upper lip. As a result, the particles of the exhale stream emit at a higher power than both the environment and the upper lip, which enables CAM to measure the SHAPE over time.
[0160] As discussed above, different SHAPEs may be characterized by different 3D shape parameters (e.g., the angle from which the exhale stream blows from a nostril, the width of the exhale stream, the length of the exhale stream, and other parameters that are indicative of the 3D SHAPE). Additionally, different SHAPEs may be associated with different states of the user, such as different physiological and/or mental conditions the user may be in. In some embodiments, the computer calculates the SHAPE based on THROI1 and THROI2. Optionally, calculating the shape involves calculating values of one or more parameters that characterize the exhale stream's shape (e.g., parameters related to the 3D SHAPE). Optionally, calculating the SHAPE involves generating a reference pattern for the SHAPE. For example, the reference pattern may be a consensus image and/or heat map that is based on THROI1 and THROI2 taken over multiple breaths.
[0161] In other embodiments, the computer identifies a SHAPE based on THROI1 and THROI2. Optionally, the identified SHAPE belongs to a set that includes at least first and second SHAPEs, between which the computer differentiates. Optionally, the first and second SHAPEs are indicative of at least one of the following: two of the five great elements according to the Vedas, two different emotional states of the user, two different moods of the user, two different energetic levels of the user, and a healthy state of the user versus an unhealthy state of the user. In one example, the first SHAPE is indicative of a powerful alert energetic level, while the second SHAPE is indicative of a tired energetic level, and the computer uses this information to improve computerized interactions with the user.
[0162] The SHAPE may be related to the dominant nostril at the time. In one embodiment, the first SHAPE occurs more frequently when the right nostril is dominant, and the second SHAPE occurs more frequently when the left nostril is dominant. In another embodiment, both the first and the second SHAPEs occur more frequently when the right nostril is dominant.
[0163] In one example, differentiating between the first and second SHAPEs means that there are certain first THROI1 and THROI2 that the computer identifies as corresponding to the first SHAPE and not as corresponding to the second SHAPE, and there are certain second THROI1 and THROI2 that the computer identifies as corresponding to the second SHAPE and as not corresponding to the first SHAPE. In another example, differentiating between first and second SHAPEs means that there are certain third THROI1 and THROI2 that the computer identifies as having a higher affinity to the first SHAPE compared to their affinity to the second SHAPE, and there are certain fourth THROI1 and THROI2 that the computer identifies as having a higher affinity to the second SHAPE compared to their affinity to the first SHAPE.
[0164] In some embodiments, the SHAPE is identified by the computer based on THROI1, THROI2, and optionally other sources of data. Since the SHAPE does not typically change between consecutive breaths, detecting the shape of the exhale may be done based on multiple measurements of multiple exhales. Using such multiple measurements can increase the accuracy of the identification of the shape. In one example, the first and second SHAPEs are identified based on first and second sets of THROI1 and THROI2 taken during multiple exhales over first and second non-overlapping respective durations, each longer than a minute.
[0165] The computer may utilize different approaches to identify the SHAPE. In one embodiment, the computer may compare THROI1 and THROI2 to one or more reference patterns to determine whether THROI1 and THROI2 are similar to a reference pattern from among the one or more reference patterns. For example, if the similarity to a reference pattern reaches a threshold, the exhale stream measured with THROI1 and THROI2 may be identified as having the shape corresponding to the shape of the reference pattern. Determining whether THROI1 and THROI2 are similar to a reference pattern may be done using various image similarity functions, such as determining the distance between each pixel in the reference pattern and its counterpart in THROI1 and THROI2. One way this can be done is by converting THROI1 and THROI2 into a vector of pixel temperatures, and comparing it to a vector of the reference pattern (using some form of vector similarity metric, such as a dot product or the L2 norm).
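By way of a non-limiting illustration, the following sketch implements the two similarity functions mentioned above (L2 distance to a reference pattern, and a normalized dot product); the similarity threshold is an assumption for the example.

```python
import numpy as np

def matches_reference(th_vec: np.ndarray, reference: np.ndarray,
                      max_l2: float = 2.0) -> bool:
    """Identify the SHAPE when the L2 distance between the flattened pixel
    temperatures and a reference pattern is small enough."""
    return float(np.linalg.norm(th_vec - reference)) <= max_l2

def cosine_similarity(th_vec: np.ndarray, reference: np.ndarray) -> float:
    """Alternative similarity: normalized dot product of the two vectors."""
    return float(th_vec @ reference /
                 (np.linalg.norm(th_vec) * np.linalg.norm(reference)))
```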
[0166] The one or more reference patterns may be generated in different ways. In one embodiment, the one or more reference patterns are generated based on previous THROI1 and THROI2 of the user taken on different days. Optionally, the SHAPEs were known while the previous THROI1 and THROI2 of the user were taken. In one example, the SHAPE is associated with a state of the user at the time (e.g., relaxed vs. anxious). In another example, the SHAPE may be determined using an external thermal camera (which is not head-mounted). In yet another example, the SHAPE is determined by manual annotation. In one embodiment, the one or more reference patterns are generated based on previous THROI1 and THROI2 of one or more other users.
[0167] In some embodiments, the SHAPE may be discovered through clustering. Optionally, the computer may cluster sets of previous THROI1 and THROI2 of the user into clusters, where sets of THROI1 and THROI2 in the same cluster are similar to each other, and the exhale streams they measured are assumed to have the same shape. Thus, each of the clusters may be associated with a certain SHAPE to which it corresponds. In one example, the clusters include at least first and second clusters that correspond to the aforementioned first and second SHAPEs.
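By way of a non-limiting illustration, clustering the previous measurements could be sketched with k-means as follows; the number of clusters and input layout are assumptions for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

def discover_shapes(X: np.ndarray, n_shapes: int = 2) -> KMeans:
    """Cluster past measurements, each row one flattened set of THROI1 and
    THROI2 taken over one duration; each cluster stands for one SHAPE."""
    return KMeans(n_clusters=n_shapes, n_init=10, random_state=0).fit(X)

# shape_id = discover_shapes(past_vectors).predict(new_vector[None, :])[0]
```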
[0168] The computer may utilize a machine learning-based model to identify the SHAPE. In one embodiment, the computer generates feature values based on THROI1 and THROI2, and utilizes a model to classify THROI1 and THROI2 to a class corresponding to the SHAPE. Optionally, the class corresponds to the aforementioned first or second SHAPEs. Optionally, the model is trained based on previous THROI1 and THROI2 of the user taken during different days.
[0169] In one embodiment, the computer receives an indication of the user's breathing rate, and uses this information along with the SHAPE at that time in order to suggest to the user to perform various activities and/or alert the user. Optionally, the indication of the user's breathing rate is calculated based on THROI1 and THROI2. In one example, the SHAPE is correlative with the state of the user, and different states combined with different breathing rates may have different meanings, which cause the computer to suggest different activities. The different activities may vary from different work/learning related activities to different physical activities to different treatments. In one example, the computer suggests to the user, via the UI, to perform a first activity in response to detecting that the breathing rate reached a threshold while identifying the first SHAPE. However, the computer suggests to the user to perform a second activity, which is different from the first activity, in response to detecting that the breathing rate reached the threshold while identifying the second SHAPE. In another example, the computer alerts the user, via the UI, in response to detecting that the breathing rate reached a threshold while identifying the first SHAPE, and the computer does not alert the user in response to detecting that the breathing rate reached the threshold while identifying the second SHAPE. In this example, the SHAPE may be correlated with the state of the user, and different states may be associated with different normal breathing rates. When the difference between the current breathing rate and the normal breathing rate (associated
with the current SHAPE) reaches a threshold, the user may be in an abnormal state that warrants an alert.
[0170] In another embodiment, the computer configures a software agent that prioritizes activities for the user based on the identified SHAPE, such that a first activity is prioritized over a second activity responsive to identifying the first SHAPE, and the second activity is prioritized over the first activity responsive to identifying the second SHAPE. It is noted that the system may prioritize different activities for different SHAPEs also when the measured breathing rate and respiration volume are the same.
[0171] In still another embodiment, the computer learns a flow of typical changes between different SHAPEs based on previous measurements of the user, and issues an alert upon detecting an irregularity related to a flow of changes between the SHAPEs. For example, the irregularity may involve a new SHAPE, more frequent changes between SHAPEs, having certain SHAPEs for more or less time than usual, etc.
[0172] In yet another embodiment, the computer receives data about types of foods consumed by the user, stores the data in a memory, and finds correlations between the SHAPEs and the types of foods. These correlations may be used to make suggestions to the user. For example, the computer may suggest that the user eat a first type of food responsive to identifying the first SHAPE, and suggest that the user eat a second type of food responsive to identifying the second SHAPE. According to Ayurveda medicine, it is preferred to eat according to the three doshas and the five great elements. In times when the SHAPE is indicative of the dominant element (out of the five great elements), the computer may guide the user regarding which types of food suit the identified dominant element, and/or may help the user avoid inappropriate types of foods by identifying the types of food the user eats (or is about to eat) and alerting the user when the identified food is inappropriate for the current dominant element (that was identified based on the SHAPE).
[0173] The way people breathe is known to affect their emotional and/or physiological state. For example, breathing more slowly, gently and deeply can help to calm and relax, can reduce tension and anxiety, and can improve concentration and memory. Shallow and fast breathing can contribute to anxiety, muscular tension, panic attacks, headaches, and fatigue. Yogic techniques that teach control of various aspects of the breathing offer many advantages in both treating undesired conditions and reaching new desired mental and physical conditions. However, it is usually difficult for people to become aware of their breathing, and gaining such awareness often requires years of practice.
[0174] Biofeedback is a technique that teaches self-regulation of various physiological processes through a feedback provided to the user. Biofeedback involves measuring a physiological activity of the user and providing the user a feedback indicative of the measured activity. The feedback enables the user to improve awareness and control of the activity. Current methods for monitoring breathing typically require an uncomfortable setup (e.g., chest straps), which makes them impractical to use in real-world environments on a daily basis and often unavailable on demand when a person needs them. Additionally, current breathing monitoring techniques monitor only certain breathing parameters (e.g., breathing rate and volume), and do not monitor many other parameters that may affect the emotional and/or physiological state. Thus,
there is a need for a comfortable wearable breathing biofeedback device.
[0175] FIG. 34 illustrates one embodiment of a system configured to provide neurofeedback (based on measurements of thermal camera 720) and/or breathing biofeedback (based on measurements of at least one of thermal cameras 723, 725, 727 and 729). Thermal camera 720 takes thermal measurements of a region on the forehead 721, thermal cameras 723 and 725 take thermal measurements of regions on the right and left sides of the upper lip, respectively, and thermal cameras 727 and 729 take thermal measurements of regions on the user's mouth and/or volumes protruding out of the user's mouth. The thermal cameras are physically coupled to a frame 731 that may be part of an augmented-reality system in which the visual feedback of the breathing biofeedback and/or neurofeedback is presented to the user via UI 732. The system may control the breathing biofeedback and/or neurofeedback session based on measurements taken by additional sensors, such as (i) sensor 722, which may be an outward-facing thermal camera that measures the intensity of infrared radiation directed at the face, and (ii) thermal cameras 724 and 726 that measure regions on the right and left periorbital areas, respectively.
[0176] In one embodiment, a system configured to provide a breathing biofeedback session for a user includes at least one inward-facing head-mounted thermal camera (CAM) and a user interface (UI). The at least one CAM takes thermal measurements of a region below the nostrils (THROI), and THROI are indicative of the exhale stream. The UI provides feedback, calculated based on THROI, as part of a breathing biofeedback session for the user. Optionally, the breathing biofeedback system may include additional elements such as a frame, a computer, additional sensors, and/or thermal cameras as described below.
[0177] The at least one CAM may have various configurations. In a first example, each of the at least one CAM is located less than 15 cm from the user's face and above the user's upper lip, and does not occlude any of the user's mouth and nostrils. Optionally, THROI include thermal measurements of at least first and second regions below right and left nostrils of the user. Optionally, the at least one CAM consists of a single CAM.
[0178] In a second example, the system further includes a frame worn on the user's head. THROI include thermal measurements of first and second regions below right and left nostrils (THROI1 and THROI2, respectively) of the user. The at least one CAM includes first and second thermal cameras for taking THROI1 and THROI2, respectively, which are located less than 15 cm from the user's face and above the nostrils. The first thermal camera is physically coupled to the right half of the frame and captures the exhale stream from the right nostril better than it captures the exhale stream from the left nostril, and the second thermal camera is physically coupled to the left half of the frame and captures the exhale stream from the left nostril better than it captures the exhale stream from the right nostril.
[0179] In a third example, THROI include thermal measurements of first, second and third regions on the user's face, which are indicative of exhale streams from the right nostril, the left nostril, and the mouth, respectively. The first and second regions are below the right and left nostrils, respectively, and the third region includes the mouth and/or a volume protruding out of the mouth.
[0180] The UI provides the feedback for the user during the breathing biofeedback session. The UI may also receive instructions from the user (e.g., verbal commands and/or menu selections) to control the session parameters, such as session duration, goal, and type of game to be played. The UI may include different types of hardware in different embodiments. Optionally, the UI includes a display that presents the user with video and/or 3D images, and/or a speaker that plays audio. Optionally, the UI is part of a device carried by the user. Optionally, the UI is part of a HMS to which the at least one CAM is coupled. Some examples of displays that may be used in some embodiments include a screen of a handheld device (e.g., a screen of a smartphone or a smartwatch), a screen of a head-mounted device (e.g., a screen of an augmented reality system or a virtual reality system), and a retinal display. In one embodiment, the UI may provide tactile feedback to the user (e.g., vibrations).
[0181] In some embodiments, at least some of the feedback presented to the user via the UI is intended to indicate to the user whether, and optionally to what extent, the user's breathing (as determined based on THROI) is progressing towards a target pattern. The feedback may be further designed to guide the user to breathe at his/her resonant frequency, which maximizes the amplitude of respiratory sinus arrhythmia and is in the range of 4.5 to 7.0 breaths/min.
[0182] The feedback may indicate the user's progress towards the target in different ways, which may involve visual indications, audio indications, and/or tactile indications. In one embodiment, the user is provided with a visual cue indicating the extent of the user's progress. For example, an object may change states and/or locations based on how close the user is to the target, such as an image of a car that moves forward as the user advances towards the target, and backwards if the user regresses. In one example, the feedback may include an audio-visual video of a fish that swims to the left when the exhale becomes smoother and stops swimming or even swims to the right when the exhale becomes less smooth. In another embodiment, the user is provided with an audio cue indicating the extent of the user's progress. For example, music played to the user may change its volume, tune, tone, and/or tempo based on whether the user is advancing towards the target or regressing from it, and/or different music pieces may be played when the user is at different rates of progression. In still another embodiment, the user is provided with a tactile cue indicating the extent of the user's progress. For example, a device worn and/or carried by the user may vibrate at different frequencies and/or at different strengths based on how far the user is from a goal of the session.
[0183] Breathing biofeedback requires closing the feedback loop on a signal that changes fast enough. Smoothness of the exhale stream, the SHAPE, and/or the BRV have components that change at frequencies above 2 Hz, which may be fast enough to act as the parameter on which the breathing biofeedback loop is closed. The feedback may be calculated and presented to the user at frequencies higher than 1 Hz, 2 Hz, 5 Hz, 10 Hz, 20 Hz and/or 40 Hz (which are all higher than the user's breathing rate).
[0184] The computer calculates, based on THROI, a characteristic of the user's breathing, and generates the feedback based on the characteristic. Some breathing characteristics may be difficult to control, and often people are not even aware of them. However, breathing biofeedback can help the user achieve
awareness and/or gain control over his/her breathing, and as a result improve the user's state.
[0185] One characteristic of the breathing, which the computer may take into account when controlling the breathing biofeedback session, is the smoothness of the exhale stream. Optionally, the smoothness of the exhale stream refers to a mathematical property of sets of values that include values of THROI taken over a period of time (e.g., values in a window that includes a portion of a breath, or even one or more breaths). The smoothness may be considered a property of graphs of the sets of values, and may represent how much variance there is in these values when compared to an average trend line that corresponds to the breathing. As discussed above, the smoothness may be calculated in various ways, such as using a Fourier transform and/or measuring the fit to a low-order polynomial.
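As a non-limiting illustration of the low-order-polynomial approach, the following sketch scores a window of THROI samples by how little they deviate from a fitted trend line; the polynomial order and the mapping to (0, 1] are illustrative assumptions.

```python
import numpy as np

def smoothness(th_window: np.ndarray, order: int = 3) -> float:
    """Scores a window of THROI samples by how little they deviate from a
    fitted low-order trend line; returns a value in (0, 1], 1 = perfectly smooth."""
    t = np.arange(len(th_window), dtype=float)
    trend = np.polyval(np.polyfit(t, th_window, order), t)  # low-order trend line
    residual_var = float(np.var(th_window - trend))         # variance around the trend
    return 1.0 / (1.0 + residual_var)
```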
[0186] In one embodiment, the feedback is indicative of similarity between current smoothness of the exhale stream and target smoothness of the exhale stream. The current smoothness is calculated in real-time based on THROI, and the target smoothness is calculated based on previous THROI of the user taken while the user was in a state considered better than the user's state while starting the breathing biofeedback session. Optionally, the similarity may be formulated as the distance between the current smoothness and the target smoothness.
[0187] In one embodiment, the feedback is indicative of at least one of the following: whether the smoothness is above or below a predetermined threshold, and whether the smoothness has increased or decreased since a previous feedback that was indicative of the smoothness. Optionally, the smoothness is calculated at frequency >4Hz, and the delay from detecting a change in the smoothness to updating the feedback provided to the user is <0.5 second. As another option, the feedback may be indicative of whether the smoothness is above or below the predetermined threshold, and the user interface may update the feedback provided to the user at a rate >2Hz.
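A minimal sketch of a feedback loop satisfying such timing constraints follows; read_th_window, compute_smoothness, and update_ui are hypothetical placeholders for the camera interface, the smoothness calculation (e.g., the sketch above), and the UI, and the 5 Hz rate and 0.7 threshold are illustrative assumptions.

```python
import time

SMOOTHNESS_THRESHOLD = 0.7   # illustrative value

def biofeedback_loop(read_th_window, compute_smoothness, update_ui, rate_hz=5.0):
    """Recomputes smoothness at rate_hz (>4 Hz) and updates the feedback when
    the above/below-threshold indication changes, so the delay from detecting a
    change to updating the feedback is bounded by one period (0.2 s at 5 Hz)."""
    period = 1.0 / rate_hz
    last_above = None
    while True:
        s = compute_smoothness(read_th_window())
        above = s >= SMOOTHNESS_THRESHOLD
        if above != last_above:          # update only when the indication changes
            update_ui(above_threshold=above)
            last_above = above
        time.sleep(period)
```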
[0188] Another characteristic of the breathing, which the computer may take into account when controlling the breathing biofeedback session, is the shape of the exhale stream (SHAPE). Optionally, the SHAPE is described by one or more parameters that represent a 3D shape that bounds the exhale stream that flows from one or both of the nostrils. Optionally, the feedback is indicative of whether the SHAPE matches a predetermined shape, and/or whether the SHAPE has become more similar or less similar to that predetermined shape since a previous feedback that was indicative of the SHAPE. In one embodiment, the feedback is indicative of similarity between the current shape of the exhale stream (SHAPE) and a target SHAPE, wherein the current SHAPE is calculated in real-time based on THROI, and the target SHAPE is calculated based on at least one of the following: (i) previous THROI of the user taken while the user was in a state considered better than the user's state while starting the breathing biofeedback session, and (ii) THROI of other users taken while the other users were in a state considered better than the user's state while starting the breathing biofeedback session.
[0189] Another characteristic of the breathing, which the computer may take into account when controlling the breathing biofeedback session, is the breathing rate variability (BRV), which is indicative of the variations between consecutive breaths. Optionally, the feedback may be indicative of similarity
between current breathing rate variability (BRV) and a target BRV, wherein the current BRV is calculated in real-time based on THROI, and the target BRV is calculated based on previous THROI of the user taken while the user was in a state considered better than the user's state while starting the breathing biofeedback session. Additionally or alternatively, the feedback may be indicative of whether the BRV is above or below a predetermined threshold, and/or whether a predetermined component of the BRV has increased or decreased since a previous feedback that was indicative of the BRV.
[0190] Similarly to how heart rate variability (HRV) is calculated, there are various computational approaches known in the art that may be used to calculate the BRV based on THROI. In one embodiment, calculating the BRV involves identifying matching events in consecutive breaths (such as start of exhale, exhale peak, and/or inhale peak), and analyzing the variability between these matching events. In another embodiment, the user's breathing is represented as time series data from which low frequency and high frequency components of the integrated power spectrum within the time series signal are extracted using a Fast Fourier Transform (FFT). A ratio of the low and high frequency components of the integrated power spectrum is computed, and analysis of the dynamics of this ratio over time is used to estimate the BRV. In still another embodiment, the BRV may be determined using a machine learning-based model. The model may be trained on samples, each including feature values generated based on THROI taken during a certain period and a label indicative of the BRV during the certain period.
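The FFT-based approach described above may, for example, be sketched as follows; the LF/HF band edges (chosen by loose analogy with HRV analysis) and the sliding-window parameters are illustrative assumptions.

```python
import numpy as np

def lf_hf_ratio(breathing_signal: np.ndarray, fs: float) -> float:
    """Ratio of low- to high-frequency power in a breathing time series."""
    spectrum = np.abs(np.fft.rfft(breathing_signal - breathing_signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(breathing_signal), d=1.0 / fs)
    lf = spectrum[(freqs >= 0.05) & (freqs < 0.15)].sum()  # assumed LF band (Hz)
    hf = spectrum[(freqs >= 0.15) & (freqs < 0.50)].sum()  # assumed HF band (Hz)
    return float(lf / (hf + 1e-9))

def estimate_brv(breathing_signal: np.ndarray, fs: float,
                 window_s: float = 60.0, step_s: float = 10.0) -> float:
    """BRV estimated from how the LF/HF ratio varies across sliding windows."""
    w, s = int(window_s * fs), int(step_s * fs)
    ratios = [lf_hf_ratio(breathing_signal[i:i + w], fs)
              for i in range(0, len(breathing_signal) - w + 1, s)]
    return float(np.std(ratios)) if ratios else 0.0
```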
[0191] In some embodiments, the computer calculates a value indicative of similarity between a current THROI pattern and a previous THROI pattern of the user taken while the user was in a target state, and generates the feedback based on the similarity. Examples of THROI patterns include at least one of: a spatial pattern (e.g., a pattern in a thermal image received from a FPA sensor), a pattern in the time domain (e.g., a pattern detected in a time series of the thermal measurements), and a pattern in the frequency domain (e.g., a pattern detected in a Fourier transform of the thermal measurements).
[0192] Biofeedback sessions may have different target states in different embodiments. Generally, the purpose of a session is to bring the user's state during the biofeedback session (the "present state") to become more similar to a target state. In one embodiment, while the user was in the target state, one or more of the following were true: the user was healthier compared to the present state, the user was more relaxed compared to the present state, a stress level of the user was below a threshold, and the user was more concentrated compared to the present state. Additionally, the computer may receive an indication of a period during which the user was in the target state based on a report made by the user (the previous THROI pattern comprises THROI taken during the period), measurements of the user with a sensor other than CAM, semantic analysis of text written by the user, and/or analysis of the user's speech.
[0193] In another embodiment, the computer calculates a value indicative of similarity between current THROI and previous THROI of the user taken while the user was in a target state, and generates the feedback based on the similarity. The similarity may be calculated by comparing (i) a current value of a characteristic of the user's breathing, calculated based on THROI, to (ii) a target value of the characteristic of the user's breathing, calculated based on the previous THROI. Here, the feedback may be indicative of
whether the current value of the characteristic of the user's breathing has become more similar or less similar to the target value of the characteristic of the user's breathing since a previous (related) feedback.
[0194] In still another embodiment, the computer compares a current set comprising feature values generated based on THROI to a target set comprising feature values generated based on previous THROI of the user, where the feature values are indicative of values of respiratory parameter(s).
[0195] In some embodiments, the system configured to provide a breathing biofeedback session receives indications of when the user is in the target state. Given such indications, the system may collect THROI taken during these times and utilize them in biofeedback sessions to steer the user towards the desired target (these collected THROI may be considered the previous THROI mentioned above). There are various sources for the indications of when the user is in the certain target state. In one example, the user may report when he/she is in such a state (e.g., through an "app" or a comment made to a software agent). In another example, measurements of the user with one or more sensors other than CAM may provide indications that the user is in a certain physiological and/or emotional state that corresponds to the certain target state. In still another example, an indication of a period of time in which the user was in a certain target state may be derived from analysis of communications of the user, such as using semantic analysis of text written by the user, and/or analysis of the user's speech.
[0196] In some embodiments, the computer may utilize a machine learning-based model to determine whether the session is successful (or is expected to be) and/or to determine the user's progress in the breathing biofeedback session at a given time (e.g., the rate of improvement the user is displaying at that time and/or how close the user is to the session's target). Optionally, the computer generates feature values based on THROI (e.g., values of THROI and/or statistics of THROI taken over different periods during the session), and utilizes the model to calculate a value indicative of the progress and/or session success. Optionally, the model is trained on samples comprising feature values based on previously taken THROI and labels indicative of the success of the session and/or progress at the time those THROI were taken. Optionally, the samples may be generated based on previously taken THROI of the user. Additionally or alternatively, the samples may be generated based on previously taken THROI of other users. Optionally, the samples include samples generated based on THROI taken on different days, and/or while the measured user was in different situations.
[0197] It has been long believed in various Asian philosophies that the way a person breathes affects the person's emotional and/or physiological state, and conversely, that breathing in a certain way can change the person's state. It is usually difficult for people to become aware of the various characteristics of their breathing, and gaining such awareness often requires years of practice. Thus, there is a need to enable monitoring of a person's breathing characteristics in an easy way, which does not require special expertise on the person's part.
[0198] In one embodiment, a system configured to select a state of a user includes at least one CAM and a computer. Each of the at least one CAM is worn on the user's head and takes thermal measurements of at least three regions below the nostrils (THs) of the user; wherein THs are indicative of the shape of the exhale stream (SHAPE). The computer (i) generates feature values based on THs, where the feature values are indicative of the SHAPE, and (ii) utilizes a model to select the state of the user, from among potential states of the user, based on the feature values. Optionally, the model is utilized to calculate a value based on the feature values. In one example, the calculated value is indicative of which state the user is in, and the computer may calculate probabilities that the user is in each of the potential states, and select the state for which the probability is highest. In another example, the calculated value is an output of a classifier (e.g., a neural network-based classifier), which is indicative of the state the user is in.
[0199] In order for THs to be indicative of the SHAPE, the at least one CAM needs to capture at least three regions from which the shape can be inferred. In a first example, the sensing elements of the at least one CAM include: (i) at least three vertical sensing elements pointed at different vertical positions below the nostrils where the exhale stream is expected to flow, and/or (ii) at least three horizontal sensing elements pointed at different horizontal positions below the nostrils where the exhale stream is expected to flow. Optionally, the larger the number of the vertical sensing elements that detect the exhale stream, the longer the length of the exhale stream, and the larger the number of the horizontal sensing elements that detect the exhale stream, the wider the exhale stream. Additionally, the amplitude of the temperature changes measured by the sensing elements may also be used to estimate the shape and/or uniformity of the exhale stream. It is noted that when a CAM, from among the at least one CAM, is located above the upper lip and pointed downwards, the vertical sensing elements (mentioned above) also provide data about the width of the exhale stream, and the horizontal sensing elements also provide data about the length of the exhale stream.
[0200] In a second example, the at least three regions from which the shape can be inferred are located on (i) at least two vertical positions below the nostrils having a distance above 5 mm between their centers, and (ii) at least two horizontal positions below the nostrils having a distance above 5 mm between their centers. Optionally, the at least three regions represent: (i) parameters of a 3D shape that confines the exhale stream, and THs are the parameters' values, (ii) locations indicative of different lengths of the exhale stream (such as 8 cm, 16 cm, 24 cm, and 32 cm), and/or (iii) locations indicative of different angles characteristic of directions of some of the different SHAPEs of the exhale stream (such as locations indicative of a difference of at least 5°, 10°, or 25° between the directions of the different SHAPEs).
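As a non-limiting illustration of inferring the exhale stream's extent from which sensing elements detect it (per the two examples above), consider the following sketch; the 2D element grid, the 0.3 °C detection threshold, and the 5 mm element spacing are illustrative assumptions.

```python
import numpy as np

def stream_extent(th_frame: np.ndarray, baseline: np.ndarray,
                  delta_c: float = 0.3, spacing_mm: float = 5.0):
    """th_frame/baseline: 2D grids of pixel temperatures (rows = vertical
    positions below the nostrils, columns = horizontal positions)."""
    detected = (th_frame - baseline) > delta_c      # elements warmed by the exhale
    rows = np.flatnonzero(detected.any(axis=1))     # vertical elements hit -> length
    cols = np.flatnonzero(detected.any(axis=0))     # horizontal elements hit -> width
    length_mm = (rows.max() - rows.min() + 1) * spacing_mm if rows.size else 0.0
    width_mm = (cols.max() - cols.min() + 1) * spacing_mm if cols.size else 0.0
    return length_mm, width_mm
```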
[0201] The potential states corresponding to the different SHAPEs may include various physiological and/or emotional states, and usually have to be learned and classified for each user because they depend on the user's physiological and emotional composition. Additionally, the potential states may include general states corresponding to either being healthy or being unhealthy. In some embodiments, at least some of the potential states may correspond to being in a state in which a certain physiological response is likely to occur in the near future (e.g., within the next thirty minutes). Thus, identifying that the user is in such a state can be used to alert regarding the certain physiological response which the user is expected to have in order for the user and/or some other party to take action to address it.
[0202] The feature values generated by the computer in order to calculate the SHAPE may include some of the various feature values described in this disclosure that are used to detect a physiological response. In particular, one or more of the feature values are generated based on THs, and may include raw and/or processed values collected by one or more sensing elements of the at least one CAM. Additionally or alternatively, these feature values may include feature values derived from analysis of THs in order to determine various characteristics of the user's breathing. The feature values include at least one feature value indicative of the SHAPE. For example, the at least one feature value may describe properties of the thermal patterns of THs. Optionally, the feature values include additional feature values indicative of the breathing rate, breathing rate variability, and/or smoothness of the exhale stream.
[0203] The model used to select the user's state based on THs (and optionally other sources of data) may be, in some embodiments, a machine learning-based model. Optionally, the model is trained based on samples comprising feature values generated based on previous THs taken while the user being measured was in a known state. Optionally, the previous THs include thermal measurements of one or more other users (who are not the user whose state is selected based on THs); in this case, the model may be considered a general model. Optionally, the previous THs include thermal measurements of the user whose state is selected based on THs; in this case, the model may be considered personalized for this user. Optionally, the previous THs include thermal measurements taken during different days. Optionally, for each state from among the potential states, the samples include one or more samples that are generated based on THs taken while the user being measured was in the state. Optionally, the model was trained based on: previous THs taken while the user was in a first potential state from among the potential states, and other previous THs taken while the user was in a second potential state from among the potential states. Optionally, the model was trained based on: previous THs taken from users while the users were in a first potential state from among the potential states, and other previous THs taken while the users were in a second potential state from among the potential states. Optionally, for the same breathing rate, respiration volume, and dominant nostril, the computer is configured to select different states when THs are indicative of different SHAPEs that correspond to different potential states.
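One non-limiting sketch of training and applying such a model, including selecting the state with the highest calculated probability, follows; the use of a random forest and the feature layout are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_state_model(features: np.ndarray, state_labels: np.ndarray):
    """features: rows of feature values generated based on THs (e.g., SHAPE
    parameters plus breathing rate); state_labels: the known states."""
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(
        features, state_labels)

def select_state(model, current_features: np.ndarray):
    """Returns the potential state with the highest calculated probability."""
    probs = model.predict_proba(current_features.reshape(1, -1))[0]
    best = int(np.argmax(probs))
    return model.classes_[best], float(probs[best])
```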
[0204] For each state from among the potential states, the samples include one or more samples that have a label corresponding to the state. The labels for the samples may be generated based on indications that may come from various sources. In one embodiment, a user whose THs are used to generate a sample may provide indications about his/her state, such as by entering values via an app when having a headache or an anger attack. Additionally or alternatively, an observer of that user, which may be another person or a software agent, may provide indications about the user's state. For example, a parent may determine that certain behavior patterns of a child correspond to displaying symptomatic behavior of a certain state. In another embodiment, indications of the state of a user whose THs are used to generate a sample may be determined based on measurements of physiological signals of the user, such as measurements of the heart rate, heart rate variability, galvanic skin response, and/or brain activity (e.g., using EEG).
[0205] In some embodiments, characteristics of the user's breathing may be indicative of a future state of the user (e.g., a state to which the user may be transitioning). Thus, certain changes in the characteristics of the user's breathing can be used to predict the future state. In these cases, some samples that include feature values generated based on THs taken during a certain period may be assigned a label based on an indication corresponding to a future time (e.g., a label corresponding to the state of the user 15 or 30 minutes after the certain period). A model trained on such data may be used to predict the user's state at the future time and/or calculate a value indicative of the probability that the user will be in a certain state a certain amount of time into the future.
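A minimal sketch of constructing such future-labeled samples follows, assuming timestamped feature windows and state indications are available; the function name and the 15-minute horizon are illustrative.

```python
import numpy as np

def future_state_samples(features: np.ndarray, states: np.ndarray,
                         timestamps: np.ndarray, horizon_s: float = 15 * 60):
    """features[i] and states[i] are aligned with sorted timestamps[i] (seconds).
    Pairs each feature window with the state recorded horizon_s later."""
    X, y = [], []
    for i, t in enumerate(timestamps):
        j = int(np.searchsorted(timestamps, t + horizon_s))  # first index at/after t+horizon
        if j < len(states):
            X.append(features[i])
            y.append(states[j])   # label = the user's state in the future
    return np.array(X), np.array(y)
```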
[0206] Given a set of samples that includes feature values generated based on THs (and optionally the other sources of data) and labels indicative of the state, the model can be trained using various machine learning-based training algorithms. Optionally, the model may include various types of parameters, depending on the type of training algorithm utilized to generate the model. For example, the model may include parameters of one or more of the following: a regression model, a support vector machine, a neural network, a graphical model, a decision tree, a random forest, and other models of other types of machine learning classification and/or prediction approaches.
[0207] In some embodiments, a deep learning algorithm may be used to train the model. In one example, the model may include parameters describing multiple hidden layers of a neural network. In one embodiment, when THs include measurements of multiple pixels, such as when the at least one CAM includes a FPA, the model may include a convolutional neural network (CNN). In one example, a CNN may be utilized to identify certain patterns in the thermal images, such as patterns of temperatures in the region of the exhale stream that may be indicative of a respiratory parameter, which involve aspects such as the location, direction, size, and/or shape of an exhale stream from the nose and/or mouth. In another example, determining a state of the user based on one or more characteristics of the user's breathing (e.g., various respiratory parameters) may be done based on multiple, possibly successive, thermal measurements. Optionally, estimating the state of the user may involve retaining state information about the one or more characteristics that is based on previous measurements. Optionally, the model may include parameters that describe an architecture that supports such a capability. In one example, the model may include parameters of a recurrent neural network (RNN), which is a connectionist model that captures the dynamics of sequences of samples via cycles in the network's nodes. This enables RNNs to retain a state that can represent information from an arbitrarily long context window. In one example, the RNN may be implemented using a long short-term memory (LSTM) architecture. In another example, the RNN may be implemented using a bidirectional recurrent neural network architecture (BRNN).
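By way of a non-limiting illustration, a recurrent model of the kind mentioned above might be sketched as follows (using PyTorch as one possible framework); the input size, hidden size, and number of states are illustrative assumptions, and each input step is a flattened thermal frame from the FPA.

```python
import torch
import torch.nn as nn

class BreathingStateLSTM(nn.Module):
    """Classifies a sequence of flattened thermal frames into potential states."""
    def __init__(self, n_pixels: int = 64, hidden: int = 32, n_states: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_pixels, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_states)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, n_pixels); the LSTM retains state across the window
        _, (h_n, _) = self.lstm(frames)
        return self.head(h_n[-1])        # logits over the potential states

# Usage: logits = BreathingStateLSTM()(torch.randn(8, 100, 64))
```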
[0208] In order to generate a model suitable for identifying the state of the user in real-world day-to-day situations, in some embodiments, the samples used to train the model are based on thermal measurements (and optionally the other sources of data) taken while the user was in different situations, locations, and/or conducting different activities. For example, the model may be trained based on some samples based on previous thermal measurements taken while the user was indoors and other samples based on other previous thermal measurements taken while the user was outdoors. In another example, the model may be trained based on some samples based on previous thermal measurements taken while the user was sitting and other samples based on other previous thermal measurements taken while the user was walking.
[0209] In one embodiment, the computer detects the SHAPE based on THs. Optionally, the detected SHAPE corresponds to a certain state of the user, and the computer bases the selection of the state on the detected SHAPE. Optionally, the computer generates one or more of the feature values used to select the state based on the detected SHAPE. For example, the one or more feature values may be indicative of various parameters of the SHAPE (e.g., parameters of a 3D geometrical body to which the SHAPE corresponds).
[0210] To detect the SHAPE the computer may utilize a model that was trained based on previous THs of the user. Optionally, the previous THs of the user were taken during different days. In one embodiment, the model includes one or more reference patterns generated based on the previous THs. Optionally, each reference pattern corresponds to a certain SHAPE, and is based on a subset of the previous THs for which the certain SHAPE was identified. For example, identifying the certain SHAPE may be done using analysis of thermal images of the exhale stream obtained using an external thermal camera that is not head-mounted and/or by a human expert. In this embodiment, detecting the SHAPE may be done by comparing THs to the one or more reference thermal patterns and determining whether there is a sufficiently high similarity between the thermal pattern of THs and at least one of the one or more reference thermal patterns.
[0211] In another embodiment, the model may be a machine learning-based model that was trained on samples, with each sample comprising feature values generated based on a subset of the previous THs (e.g., the subset includes previous THs taken during a certain period), and a label representing the SHAPE corresponding to the subset of the previous THs. In one example, the feature values include values of temperatures of various sensing elements of the at least one CAM. In another example, the feature values may include low-level image properties obtained by applying various image processing techniques to the subset of the previous THs. In this embodiment, detecting the SHAPE may be done by generating feature values based on THs and utilizing the model to calculate, based on the feature values, a value indicative of the SHAPE corresponding to THs.
[0212] The SHAPE is a property that may be independent, at least to a certain extent, of other respiratory parameters. Thus, THs taken at different times may have different SHAPEs detected, even if some other aspects of the breathing at those times are the same (as determined based on values of certain respiratory parameters). In one example, for the same breathing rate of the user, the computer detects a first SHAPE based on first THs, and detects a second SHAPE based on second THs. In this example, the first and second THs have different thermal patterns, e.g., as determined using a similarity function between vector representations of the first and second THs (which gives a similarity below a threshold). In another example, for the same breathing rate, respiration volume, and dominant nostril, the computer detects a first SHAPE based on first THs, and detects a second SHAPE based on second THs (where the first and second THs have different thermal patterns).
[0213] In one embodiment, the system includes a frame worn on the user's head. Each of the at least one CAM is located less than 15 cm from the user's face and does not occlude any of the user's mouth and nostrils. The at least one CAM includes at least first and second inward-facing head-mounted thermal cameras (CAM1 and CAM2, respectively) that take THROI1 and THROI2, respectively. CAM1 is physically coupled to the right half of the frame and captures the exhale stream from the right nostril better than it captures the exhale stream from the left nostril, and CAM2 is physically coupled to the left half of the frame and captures the exhale stream from the left nostril better than it captures the exhale stream from the right nostril. In another embodiment, the at least three regions below the nostrils include a first region on the right side of the user's upper lip, a second region on the left side of the user's upper lip, and a third region on the mouth of the user, where thermal measurements of the third region are indicative of the exhale stream from the user's mouth. In still another embodiment, the at least three regions below the nostrils include a first region comprising a portion of the volume of the air below the right nostril where the exhale stream from the right nostril flows, a second region comprising a portion of the volume of the air below the left nostril where the exhale stream from the left nostril flows, and a third region comprising a portion of a volume protruding out of the mouth where the exhale stream from the user's mouth flows.
[0214] In one embodiment, a system configured to present a user's state based on the SHAPE includes at least one CAM and a UI. The at least one CAM takes thermal measurements of at least three regions below the nostrils (THs) of the user, where THs are indicative of the SHAPE. The UI presents the user's state based on THs. Optionally, for the same breathing rate, the UI presents different states for the user when THs are indicative of different SHAPEs that correspond to different potential states. Optionally, each of the at least one CAM does not occlude any of the user's mouth and nostrils. Optionally, the system further includes a computer that generates feature values based on THs, and utilizes a model to select the state, from among potential states, based on the feature values.
[0215] The physiological and emotional state of a person can often be associated with certain cortical activity. Various phenomena, which may be considered abnormal states, such as anger or displaying symptomatic behavior of Attention Deficit Disorder (ADD) or Attention Deficit Hyperactivity Disorder (ADHD), are often associated with certain atypical cortical activity. This atypical cortical activity can generate certain thermal patterns on the forehead. Thus, there is a need for a way to take thermal measurements of the forehead in real world day-to-day situations. Preferably, in order to be comfortable and more aesthetically acceptable, these measurements should be taken without involving direct physical contact with the forehead or occluding it.
[0216] Some types of normal and abnormal cortical activities generate different thermal patterns on the forehead, which may be used for health-related and other applications. However, movements of the user and/or of the user's head can make acquiring this data difficult with many of the known approaches. Some embodiments described herein utilize one or more head-mounted thermal cameras that remain
pointed at the forehead also when the user's head makes angular movements. The head-mounted thermal cameras are able to take enough measurements in order to enable a computer to differentiate between a first thermal pattern of the forehead that indicates a normal state and a second thermal pattern of the forehead that indicates an abnormal state.
[0217] FIG. 35, FIG. 36, and FIG. 37 illustrate one embodiment of eyeglasses 700 with head-mounted thermal cameras, which are able to differentiate between different states of the user based on thermal patterns of the forehead. The illustrated system includes first and second CAMs (701, 702) mounted to the upper right and left portions of the eyeglasses frame, respectively, to take thermal measurements of the forehead. The system further includes a sensor 703 mounted to the bridge, which may be utilized to take measurements (mconf) indicative of an occurrence of one or more of the various confounding factors described herein. The CAMs forward the thermal measurements to a computer that may differentiate, based on the thermal measurements of the forehead, between normal and abnormal states of a user (which are illustrated as normal vs migraine vs angry in FIG. 35, and not angry vs angry in FIG. 36). The computer may further differentiate between extents of a condition, which is illustrated as severe OCD vs less severe OCD after a treatment in FIG. 37.
[0218] In one embodiment, a system configured to differentiate between normal and abnormal states includes at least one CAM and a computer. The at least one CAM is worn on a user's head and takes thermal measurements of at least first and second regions on the right side of the forehead (THR1 and THR2, respectively) of the user. The at least one CAM further takes thermal measurements of at least third and fourth regions on the left side of the forehead (THL1 and THL2, respectively). The middles of the first and third regions are at least 1 cm above the middles of the second and fourth regions, respectively. Each of the at least one CAM is located below the first and third regions, and does not occlude any portion of the first and third regions. Optionally, CAM also does not occlude the second and fourth regions. The computer determines, based on THR1, THR2, THL1, and THL2, whether the user is in a normal state or an abnormal state. Preferably, this embodiment assumes that the user's hair does not occlude the first, second, third and fourth regions on the forehead. Optionally, the at least one CAM includes a CAM that includes a sensor and a lens, and the sensor plane is tilted by more than 2° relative to the lens plane according to the Scheimpflug principle in order to capture sharper images when the CAM is worn by the user. Here, the lens plane refers to a plane that is perpendicular to the optical axis of the lens, which may include one or more lenses.
[0219] In one embodiment, the at least one CAM includes at least first and second inward-facing head-mounted thermal cameras (CAM1 and CAM2, respectively) located to the right and to the left of the vertical symmetry axis that divides the user's face, respectively (i.e., the axis that goes down the center of the user's forehead and nose). CAM1 is configured to take THR1 and THR2, and CAM2 is configured to take THL1 and THL2. Optionally, CAM1 and CAM2 are located at least 1 cm from each other. In one example, CAM1 and CAM2 are cameras 701 and 702 that are illustrated in FIG. 36. Being able to detect a pattern on the forehead may involve utilization of multiple sensing elements (pixels) by each of CAM1 and CAM2. Optionally, each of CAM1 and CAM2 weighs below 10 g, is located less than 10 cm from the user's face, and includes a microbolometer or thermopile sensor with at least 6 sensing elements. Optionally, CAM1 includes at least two multi-pixel thermal cameras, one for taking measurements of the first region, and another one for taking measurements of the second region; CAM2 also includes at least two multi-pixel thermal cameras, one for taking measurements of the third region, and another one for taking measurements of the fourth region.
[0220] The computer determines, based on THR1, THR2, THL1, and THL2, whether the user is in a normal state or an abnormal state. In one embodiment, the state of the user is determined by comparing THR1, THR2, THL1, and THL2 to reference thermal patterns of the forehead that include at least one reference thermal pattern that corresponds to the normal state and at least one reference thermal pattern that corresponds to the abnormal state. Optionally, a reference thermal pattern is determined from previous THR1, THR2, THL1, and THL2 of the user, taken while the user was in a certain state corresponding to the reference thermal pattern (e.g., the normal or abnormal states). Determining whether THR1, THR2, THL1, and THL2 are similar to a reference thermal pattern may be done using various image similarity functions, such as determining the distance between each pixel in the reference thermal pattern and its counterpart in THR1, THR2, THL1, or THL2. One way this can be done is by converting THR1, THR2, THL1, or THL2 into a vector of pixel temperatures, and comparing it to a vector of the reference thermal pattern (using some form of vector similarity metric like a dot product or the L2 norm). Optionally, if the similarity reaches a threshold, the user is considered to be in the state to which the reference thermal pattern corresponds.
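A non-limiting sketch of this reference-pattern comparison for the forehead follows; the dictionary of reference patterns and the 0.8 threshold are illustrative assumptions, and returning None denotes that no reference pattern matched with sufficient similarity.

```python
import numpy as np

def determine_state(th_forehead: np.ndarray, references: dict, threshold: float = 0.8):
    """references: e.g., {"normal": pattern, "migraine": pattern, "angry": pattern},
    where each pattern is a 2D array of forehead pixel temperatures."""
    def sim(a: np.ndarray, b: np.ndarray) -> float:
        a = a.ravel() - a.mean()                    # compare patterns, not offsets
        b = b.ravel() - b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    scores = {state: sim(th_forehead, ref) for state, ref in references.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None  # None: no confident match
```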
[0221] In another embodiment, the computer determines that the user is in a certain state (e.g., normal or abnormal) by utilizing a model to calculate, based on feature values generated from THR1, THR2, THL1, and THL2, a value indicative of the extent to which the user is in the certain state. Optionally, the model is trained based on samples, each comprising feature values generated based on previous THR1, THR2, THL1, and THL2 of the user, taken while the user was in the certain state. In some embodiments, determining whether the user is in a certain state involves determining that THR1, THR2, THL1, and THL2 taken during at least a certain period of time (e.g., at least ten seconds, at least one minute, or at least ten minutes) are similar to a reference thermal pattern that corresponds to the certain state.
[0222] Being in a normal/abnormal state may correspond to different behavioral and/or physiological responses. In one embodiment, the abnormal state involves the user displaying symptoms of one or more of the following: an anger attack, Attention Deficit Disorder (ADD), and Attention Deficit Hyperactivity Disorder (ADHD). In this embodiment, being in the normal state refers to usual behavior of the user that does not involve displaying said symptoms. In another embodiment, when the user is in the abnormal state, the user will display within a predetermined duration (e.g., shorter than an hour), with a probability above a predetermined threshold, symptoms of one or more of the following: anger, ADD, and ADHD. In this embodiment, when the user is in the normal state, the user will display the symptoms within the predetermined duration with a probability below the predetermined threshold. In yet another embodiment,
when the user is in the abnormal state the user suffers from a headache, and when the user is in the normal state, the user does not suffer from a headache. In still another embodiment, the abnormal state refers to times in which the user has a higher level of concentration compared to the normal state, which refers to times in which the user has a usual level of concentration. Although the thermal patterns of the forehead are usually specific to the user, they are usually repetitive, and thus the system may be able to learn some thermal patterns of the user that correspond to various states.
[0223] Touching the forehead can change the forehead's thermal pattern, even though the user's state did not actually change. Optionally, the system further includes a sensor configured to provide an indication indicative of whether the user touches the forehead. Although the touch is expected to influence thermal readings from the touched area, the computer may continue to operate, for a predetermined duration, according to a state identified shortly (e.g., 1-20 sec) before receiving the indication, even if it identifies a different state shortly after receiving the indication. In one example, the sensor is a visible-light camera, and the computer uses image processing to determine whether the user touched the forehead and/or for how long.
[0224] The computer may alert the user responsive to identifying an irregularity in THR1, THR2, THL1, and THL2, which does not result from interference, such as touching the forehead. For example, the irregularity may involve a previously unobserved thermal pattern of the forehead. Optionally, the user may be questioned in order to determine if there is a medical reason for the irregularity, such as a stroke or dehydration, in which case medical assistance may be offered, e.g., by summoning medical personnel to the user's location. Optionally, the computer alerts the user when identifying that the user is in an abnormal state associated with antisocial behavior (e.g., an anger attack).
[0225] Additional thermal cameras may be utilized to take thermal measurements that may be used to detect the user's state. For example, the system may include at least one additional CAM for taking thermal measurements of regions on the nose and below the nostrils (THROD and THROM, respectively) of the user. Optionally, the additional CAM weighs below 10 g, is physically coupled to a frame worn on the user's head, and is located less than 15 cm from the face. Optionally, the computer determines the user's state also based on THROD and THROM. Optionally, the computer (i) generates feature values based on THR1, THR2, THL1, THL2, THROD, and THROM, and (ii) utilizes a model to determine the user's state based on the feature values. Optionally, the model was trained based on a first set of previous THR1, THR2, THL1, THL2, THROD, and THROM taken while the user was in the normal state and a second set of previous THR1, THR2, THL1, THL2, THROD, and THROM taken while the user was in the abnormal state.
[0226] In another example, the system may include another CAM for taking thermal measurements of a region on the periorbital area (THROD) of the user. Optionally, the computer determines the state of the user also based on THROD. Optionally, the computer is further configured to: (i) generate feature values based on THR1, THR2, THL1, THL2, and THROD, and (ii) utilize a model to determine the user's state based on the feature values. Optionally, the model was trained based on a first set of previous THR1, THR2, THL1, THL2, and THROD taken while the user was in the normal state and a second set of previous THR1, THR2, THL1, THL2, and THROD taken while the user was in the abnormal state.
[0227] Determining the user's state based on THR1, THR2, THL1, and THL2 (and optionally other sources of data) may be done using a machine learning-based model. Optionally, the model is trained based on samples comprising feature values generated based on previous THR1, THR2, THL1, and THL2 taken when the user was in a known state (e.g., for different times it was known whether the user was in the normal or abnormal state). Optionally, the user may provide indications about his/her state, such as by entering values via an app when having a headache or an anger attack. Additionally or alternatively, an observer of the user, which may be another person or a software agent, may provide the indications about the user's state. For example, a parent may determine that certain behavior patterns of a child correspond to displaying symptomatic behavior of ADHD. In another example, indications of the state of the user may be determined based on measurements of physiological signals of the user, such as measurements of the heart rate, heart rate variability, breathing rate, galvanic skin response, and/or brain activity (e.g., using EEG).
[0228] In some embodiments, one or more of the feature values in the samples may be based on other sources of data (different from THR1, THR2, THL1, and THL2). These may include additional thermal cameras, additional physiological measurements of the user, and/or measurements of the environment in which the user was while the measurements were taken. In one example, at least some of the feature values used in samples include additional physiological measurements indicative of one or more of the following signals of the user: heart rate, heart rate variability, brainwave activity, galvanic skin response, muscle activity, and extent of movement. In another example, at least some of the feature values used in samples include measurements of the environment that are indicative of one or more of the following values of the environment the user was in: temperature, humidity level, noise level, air quality, wind speed, and infrared radiation level.
[0229] Given a set of samples comprising feature values generated based on THR1, THR2, THL1, and THL2 (and optionally the other sources of data) and labels generated based on the indications, the model can be trained using various machine learning-based training algorithms. Optionally, the model is utilized by a classifier that classifies the user's state (e.g., normal/abnormal) based on feature values generated based on THR1, THR2, THL1, and THL2 (and optionally the other sources). Optionally, the model may include various types of parameters, depending on the type of training algorithm utilized to generate the model. For example, the model may include parameters of one or more of the following: a regression model, a support vector machine, a neural network, a graphical model, a decision tree, a random forest, and other models of other types of machine learning classification and/or prediction approaches.
[0230] In some embodiments, the model is trained utilizing deep learning algorithms. Optionally, the model includes parameters describing multiple hidden layers of a neural network. Optionally, the model includes a convolutional neural network (CNN), which is useful for identifying certain patterns in the thermal images, such as patterns of temperatures on the forehead. Optionally, the model may be utilized to identify a progression of a state of the user (e.g., a gradual forming of a certain thermal pattern on the forehead). In such cases, the model may include parameters that describe an architecture that supports a capability of retaining state information. In one example, the model may include parameters of a recurrent neural network (RNN), which is a connectionist model that captures the dynamics of sequences of samples via cycles in the network's nodes. This enables RNNs to retain a state that can represent information from an arbitrarily long context window. In one example, the RNN may be implemented using a long short-term memory (LSTM) architecture. In another example, the RNN may be implemented using a bidirectional recurrent neural network architecture (BRNN).
[0231] In order to generate a model suitable for identifying the state of the user in real-world day-to-day situations, in some embodiments, the samples used to train the model are based on thermal measurements (and optionally the other sources of data) taken while the user was in different situations, locations, and/or conducting different activities. In a first example, the model may be trained based on a first set of previous thermal measurements taken while the user was indoors and in the normal state, a second set of previous thermal measurements taken while the user was indoors and in the abnormal state, a third set of previous thermal measurements taken while the user was outdoors and in the normal state, and a fourth set of previous thermal measurements taken while the user was outdoors and in the abnormal state. In a second example, the model may be trained based on a first set of previous thermal measurements taken while the user was sitting and in the normal state, a second set of previous thermal measurements taken while the user was sitting and in the abnormal state, a third set of previous thermal measurements taken while the user was standing and/or moving around and in the normal state, and a fourth set of previous thermal measurements taken while the user was standing and/or moving around and in the abnormal state. Usually the movements while standing and/or moving around, and especially when walking or running, are greater compared to the movements while sitting; therefore, a model trained on samples taken during both sitting and standing and/or moving around is expected to perform better compared to a model trained on samples taken only while sitting.
[0232] Having the ability to determine the state of the user can be advantageous when it comes to scheduling tasks for the user and/or making recommendations that suit the user's state. In one embodiment, responsive to determining that the user is in the normal state, the computer prioritizes a first activity over a second activity, and responsive to determining that the user is in the abnormal state, the computer prioritizes the second activity over the first activity. Optionally, accomplishing each of the first and second activities requires at least a minute of the user's attention, and the second activity is more suitable for the abnormal state than the first activity. Optionally, the first activity is more suitable for the normal state than the second activity. Optionally, prioritizing the first and second activities is performed by a calendar management program, a project management program, and/or a "to do" list program. Optionally, prioritizing a certain activity over another means one or more of the following: suggesting the certain activity before suggesting the other activity, suggesting the certain activity more frequently than the other activity (in the context of the specific state), allotting more time for the certain activity than for the other activity, and giving a more prominent reminder for the certain activity than for
the other activity (e.g., an auditory indication vs. a mention in a calendar program that is visible only if the calendar program is opened).
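As a simple illustration of such prioritization (not part of the disclosure; the activity names and two-state encoding are assumptions), a scheduler could reorder a "to do" list so that activities suited to the detected state come first:

```python
# Hypothetical sketch: order activities so those suited to the detected state come first.
def prioritize(activities, state):
    """activities: list of (name, suited_state); state: "normal" or "abnormal"."""
    return sorted(activities, key=lambda a: a[1] != state)  # suited items sort first

todo = [("math class", "normal"), ("sports lesson", "abnormal")]
print(prioritize(todo, "abnormal"))  # the sports lesson is suggested first
```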
[0233] Such state-dependent prioritization may be implemented in various scenarios. In one example, the normal state refers to a normal concentration level, the abnormal state refers to a lower than normal concentration level, and the first activity requires a high attention level from the user compared to the second activity. For instance, the first and second activities may relate to different topics of a self-learning program for school; when identifying that the user is in the normal concentration state, a math class is prioritized higher than a sports lesson; and when identifying that the user is in the lower concentration state, the math class is prioritized lower than the sports lesson. In another example, the normal state refers to a normal anger level, the abnormal state refers to a higher than normal anger level, and the first activity involves more interactions of the user with other humans compared to the second activity. In still another example, the normal state refers to a normal fear level, the abnormal state refers to a panic attack, and the second activity is expected to have a more relaxing effect on the user compared to the first activity.
[0234] In one embodiment, a system configured to alert about an abnormal state includes at least one CAM and a user interface (UI). The at least one CAM takes thermal measurements of at least first and second regions on the right side of the forehead (THR1 and THR2, respectively) of the user, and takes thermal measurements of at least third and fourth regions on the left side of the forehead (THL1 and THL2, respectively). The middles of the first and third regions are at least 1 cm above the middles of the second and fourth regions, respectively. Each of the at least one CAM is located below the first and third regions, and does not occlude any portion of the first and third regions. The UI provides an alert about an abnormal state of the user, where the abnormal state is determined based on THR1, THR2, THL1, and THL2. Optionally, the system includes a transmitter that may be used to transmit THR1, THR2, THL1, and THL2 to a computer that determines, based on THR1, THR2, THL1, and THL2, whether the user is in the normal state or the abnormal state. The computer may be a wearable computer, a computer belonging to a smartphone or a smartwatch carried by the user, and/or a cloud-based server. Optionally, responsive to determining that the user is in an abnormal state, the computer commands the UI to provide the alert. For example, the computer may send a signal to a smartphone app, and/or to a software agent that has control of the UI, to provide the alert. In another example, the computer may send an instruction to the UI to provide the alert. Optionally, the alert is provided as text, image, sound, and/or haptic feedback.
[0235] Neurofeedback is a technique that teaches self-regulation of various brain functions through feedback provided to the user. Neurofeedback involves measuring the user's brain activity and providing the user with feedback indicative of the measured activity. The feedback enables the user to improve awareness and control of the brain activity. While neurofeedback has been researched and sometimes used to treat many brain-related conditions (e.g., ADHD, pain, addiction, depression, headache, and more), it is generally a cumbersome procedure that involves conducting an electroencephalogram (EEG) or hemoencephalography (pIR HEG or nIR HEG). Consequently, neurofeedback is often a treatment that is provided at specialized clinics, administered by trained personnel in a controlled environment, and/or
requires specialized equipment that is bulky, uncomfortable to use, and expensive. Thus, most currently available neurofeedback treatments involve a setup that typically prohibits daily use in real-world environments (e.g., at home or at work), and are often not available on demand, when a person needs them. There is a need for a way to provide neurofeedback sessions whenever people need them.
[0236] Collecting thermal measurements of various regions of a user's face can have many health-related (and other) applications. In particular, thermal measurements of the forehead can be indicative of brain activity, and therefore may be useful to detect various brain-related conditions and for brain-related treatments, such as neurofeedback. However, movements of the user and/or of the user's head can make acquiring this data difficult with many of the known approaches. Furthermore, various factors such as touching the forehead, thermal radiation directed at the forehead, and/or air blowing at the forehead may alter the forehead temperature measurements. These factors, which are unrelated to the user's brain activity, may be considered confounding factors that can hinder the accuracy of detections of brain function-related conditions that are based on thermal measurements, and may decrease the effectiveness of treatments such as neurofeedback.
[0237] Neurofeedback sessions can assist in treating various brain function-related conditions and/or disorders. In order to maximize their effectiveness, it may be advantageous to have neurofeedback treatments while a person suffers from and/or exhibits the symptoms of a brain function-related condition and/or disorder. The following are descriptions of embodiments of a wearable system that may be utilized for this purpose. Some embodiments of a neurofeedback system described below involve a wearable, lightweight device that is aesthetically acceptable, and may be utilized as needed in day-to-day situations.
[0238] Some examples of disorders that may be treated with some embodiments of the neurofeedback system described herein include disorders related to (i) frontal lobe dysfunction, such as ADHD, headaches, anger, anxiety, and depression, (ii) paroxysmal disorders, such as headaches, seizures, rage reactions, and panic attacks, (iii) chronic pain, and (iv) stress. It is noted that the term "neurofeedback" also covers biofeedback and other similar feedback-based treatments.
[0239] FIG. 34 (already discussed above) illustrates one embodiment of a system configured to provide neurofeedback (based on measurements of CAM 720) and/or breathing biofeedback (based on measurements of at least some of thermal cameras 723, 725, 727 and 729). The system illustrated in FIG. 37, which uses two inward-facing head-mounted thermal cameras (701 and 702) to measure the forehead, may be used for neurofeedback together with a UI (not illustrated). Other embodiments of neurofeedback HMSs may involve more than two inward-facing head-mounted thermal cameras to measure the forehead. Some embodiments of neurofeedback HMSs may include one or more sensors, such as the sensor 722, which are used to take mconf, as discussed below.
[0240] FIG. 39 illustrates a scenario in which a user has a neurofeedback session during a day-to-day activity, such as during school time. For example, the session may be initiated because the user felt that he was losing concentration and/or the system might have determined that the user was exhibiting symptoms of ADHD and/or was in an undesirable state. The user wears an Augmented Reality device
(AR), which includes user interface 710 that includes a display to present the augmented images. The AR includes one or more inward-facing head-mounted thermal cameras, which may be similar to the system illustrated in FIG. 34. The neurofeedback session involves attempting to control brain activity by causing the augmented reality video of the car 711 to drive forwards. For example, the car 711 drives forwards when the temperature at certain regions of the forehead 712 increases, and drives backwards when the temperature at the certain regions of the forehead 712 decreases.
[0241] In one embodiment, a neurofeedback system includes at least an inward-facing head-mounted thermal camera (CAM) and a user interface (UI). Optionally, the neurofeedback system may include additional elements such as a frame, a computer, and/or additional sensors and/or thermal cameras, as described below.
[0242] CAM is worn on a user's head and takes thermal measurements of a region on the forehead (THF) of the user. CAM is positioned such that when the user is upright, CAM is located below the middle of the region on the user's forehead. Optionally, CAM does not occlude the center of the forehead, and as such, may be more aesthetically pleasing than systems that have elements that occlude the center of the forehead. Optionally, CAM is located close to the forehead, at a distance below 15 cm, 10 cm, or 5 cm from the user's face. Optionally, CAM may use a single pixel sensor (e.g., a discrete thermopile sensor) or a multiple pixel sensor (e.g., a microbolometer FPA).
[0243] In one embodiment, THF measured by CAM includes the area known in the field of electroencephalography as the "Fpz point", which is typically located between 5% and 15% of the distance from the nasion to the inion (e.g., at approximately 10% of the distance). Optionally, in this embodiment, THF may be indicative of temperature changes at the Fpz point. Additionally or alternatively, the region on the forehead measured by CAM may include the center of the forehead, and THF may optionally be indicative of temperature changes at the center of the forehead.
[0244] In another embodiment, CAM may measure at least four areas on the user's forehead covering regions on the upper right side of the forehead, lower right side of the forehead, upper left side of the forehead, and lower left side of the forehead, respectively. Optionally, in this embodiment, THF may be indicative of a thermal pattern of the user's forehead. Optionally, in this embodiment, "CAM" refers to multiple inward-facing thermal cameras, which include at least first and second inward-facing head-mounted thermal cameras (CAM1 and CAM2, respectively). CAM1 takes the measurements of the upper right side of the forehead and the lower right side of the forehead, and CAM2 takes the measurements of the upper left side of the forehead and the lower left side of the forehead. Optionally, THF may include measurements of at least six areas on the user's forehead. Optionally, the at least four areas and the at least six areas each include at least one area that covers the Fpz point.
[0245] Due to the proximity of CAM to the face, in some embodiments, there may be an acute angle between the optical axis of CAM and the forehead. In order to improve the sharpness of thermal images of the forehead, in some embodiments, CAM may include a sensor and a lens, which are configured such that the sensor plane is tilted by more than 2° relative to the lens plane according to the Scheimpflug
principle, which may enable the capture of sharper images of the forehead when CAM is close to the face.
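For reference, a standard statement of this geometry (general optics background, not taken from this disclosure) is that the sensor plane, the lens plane, and the plane of sharp focus intersect in a common line. With a thin lens of focal length f tilted by an angle θ relative to the sensor plane, the accompanying hinge rule places the plane of sharp focus through a line at distance J from the lens center:

```latex
J = \frac{f}{\sin\theta}
```

This is why a small sensor-to-lens tilt can keep an obliquely viewed forehead in sharp focus at close range.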
[0246] The UI provides feedback to the user during the neurofeedback session, which is determined based on THF and optionally mconf (which are indicative of confounding factors). Optionally, providing the session for the user involves receiving instructions from the user (e.g., verbal commands and/or menu selections), which may affect the type of feedback the user receives (e.g., what type of session or "game" will be played in the session, how long the session should last, etc.).
[0247] In some embodiments, at least some of the feedback presented to the user via the UI is intended to indicate to the user whether, and optionally to what extent, the user's brain activity (as determined based on THF) is progressing towards a target. Optionally, the target may correspond to a state of brain activity that causes THF to have a certain value. Optionally, the target may correspond to a typical THF pattern of the user. Optionally, the typical THF pattern of the user is a pattern of temperatures at different points on the forehead, which is determined based on previous THF that were measured when the user was in a typical, normal state, and not exhibiting symptoms of anger, ADHD, a headache, etc. In one example, the user may be considered to make progress in the neurofeedback session if the temperature of the forehead (or a certain region on the forehead) becomes closer to a target temperature. In another example, the user may be considered to make progress in the neurofeedback session if the variability of temperatures across certain regions of the forehead decreases. In yet another example, the user may be considered to make progress in the neurofeedback session if asymmetry of temperatures of the forehead decreases. And in still another example, the user may be considered to make progress in the neurofeedback session if the THF pattern measured during the session becomes more similar to a certain target thermal pattern. Optionally, the user may receive feedback indicative of decreasing positive progress (or negative progress) when the THF pattern measured during the session becomes less similar to the typical THF pattern.
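One simple way to quantify such similarity (an assumed approach for illustration; the region count and the use of Pearson correlation are choices not mandated by the text) is to correlate the current THF pattern with the stored typical pattern:

```python
# Hypothetical sketch: score progress as the correlation between the current
# THF pattern and a stored typical-state pattern; 1.0 means identical shape.
import numpy as np

def progress_score(thf_current, thf_target):
    a = np.asarray(thf_current, dtype=float)
    b = np.asarray(thf_target, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

target  = [34.1, 34.0, 33.8, 33.9]   # placeholder typical-state temperatures
current = [34.6, 33.7, 33.9, 34.2]   # placeholder current measurements
print(progress_score(current, target))
```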
[0248] In one embodiment, video played as part of the feedback is played according to a protocol suitable for a passive infrared hemoencephalography (pIR HEG) session, which is a form of biofeedback for the brain that measures and displays information on the thermal output of the frontal lobe. In one configuration, pIR HEG involves increasing the forehead temperature by watching a movie that provides the feedback. The movie plays when the measured forehead temperature rises and stops when the temperature drops. The system may increase the threshold as the user learns how to raise the forehead temperature, and the user is instructed to calmly concentrate on making the movie continue to play.
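The play/stop logic described above can be summarized in a few lines; this is a minimal sketch in which the function name, step size, and temperature values are placeholders:

```python
# Hypothetical sketch of a pIR HEG-style loop: the movie plays while the
# forehead temperature meets a threshold that ramps up as the user succeeds.
def heg_step(temp, threshold, step=0.02):
    """Return (play, new_threshold) for one feedback update."""
    if temp >= threshold:
        return True, threshold + step   # reward success, raise the bar
    return False, threshold             # pause until the temperature recovers

threshold = 34.0
for temp in [34.0, 34.1, 33.9, 34.2]:   # simulated forehead readings (deg C)
    play, threshold = heg_step(temp, threshold)
    print(play, round(threshold, 2))
```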
[0249] The computer controls the neurofeedback session based on THF and optionally mconf. In one embodiment, the computer compares THF to a target temperature. Optionally, different pixels of CAM may be compared to different target temperatures, or the target temperature may refer to an average temperature of the forehead. In another embodiment, the computer may calculate changes to the temperature of the forehead (ΔTF) based on THF, and utilize ΔTF to control the neurofeedback session. In yet another embodiment, the computer may compare THF to a target thermal pattern of the forehead, and the progress of the user in the neurofeedback session is evaluated based on a similarity between THF and the target
thermal pattern, and/or a change in extent of similarity between THF and the target thermal pattern.
[0250] In one embodiment, THF includes measurements of at least four non-collinear regions on the forehead (i.e., the four regions do not all lie on the same straight line), and the computer controls the neurofeedback session by providing the user a feedback via the user interface. The computer calculates a value indicative of similarity between a current THF pattern and a previous THF pattern of the user taken while the user was in a target state, and generates, based on the similarity, the feedback provided to the user as part of the neurofeedback session. The THF pattern may refer to a spatial pattern of the at least four non-collinear regions on the forehead (e.g., a pattern in a thermal image received from an FPA sensor), and/or to a pattern in the time domain of the at least four non-collinear regions on the forehead (e.g., a pattern detected in a time series of the thermal measurements).
[0251] Neurofeedback sessions may have different target states in different embodiments. Generally, the purpose of a session is to bring the user's state during the neurofeedback session (the "present state") to become more similar to a target state. In one embodiment, while the user was in the target state, one or more of the following were true: the user was healthier compared to the present state, the user was more relaxed compared to the present state, a stress level of the user was below a threshold, the user's pain level was below a threshold, the user had no headache, the user did not suffer from depression, and the user was more concentrated compared to the present state. Additionally, the computer may receive an indication of a period during which the user was in the target state based on a report made by the user (the previous THF pattern comprises THF taken during the period), measurements of the user with a sensor other than CAM, semantic analysis of text written by the user, and/or analysis of the user's speech.
[0252] In some embodiments, the computer may utilize a machine learning-based model to determine whether the session is successful (or is expected to be) and/or to determine the user's progress in the neurofeedback session at a given time. Optionally, the computer generates feature values based on THF, and utilizes the model to calculate a value indicative of the progress and/or session success. Optionally, the model is trained on samples comprising feature values based on previously taken THF and labels indicative of the success of the session and/or progress at the time those THF were taken. Optionally, the samples may be generated based on previously taken THF of the user and/or of other users. Optionally, the samples include samples generated based on THF of the user taken on different days, and/or while the user was in different situations.
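A minimal sketch of such a pipeline follows; it assumes scikit-learn, and the specific features (mean, variance, left-right asymmetry) and synthetic labels are illustrative stand-ins for the previously taken THF and their success labels:

```python
# Hypothetical sketch: train a classifier on feature values derived from THF
# windows, labeled by session success, then estimate success probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(thf_window):
    t = np.asarray(thf_window, dtype=float)   # shape: (time, regions)
    left, right = t[:, :2], t[:, 2:]
    return [t.mean(), t.var(), float(abs(left.mean() - right.mean()))]

rng = np.random.default_rng(0)
X = np.array([features(rng.normal(34, 0.3, (30, 4))) for _ in range(40)])
y = rng.integers(0, 2, 40)                     # placeholder success labels
model = LogisticRegression().fit(X, y)
print(model.predict_proba(X[:1])[0, 1])        # estimated success probability
```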
[0253] At a given time, temperatures measured at different areas of the forehead may be different. A value, which is a function of the temperatures at the different areas and is indicative of their variability, may be referred to herein as the "temperature variability" of the measurements. In one example, the function of the temperatures is the statistical variance of the temperatures. Having high temperature variability can be a sign that the user is suffering from various conditions, such as anger, a headache, depression, and/or anxiety. Optionally, a target of the neurofeedback session may be to lower the temperature variability of THF. Optionally, progress of the neurofeedback session may be evaluated based on a value of the temperature variability of THF, an extent that the temperature variability of THF has
decreased, and/or a rate at which the temperature variability of THF has decreased.
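Using the variance example given above, the temperature variability can be computed directly (a minimal sketch; the temperature values are placeholders):

```python
# Hypothetical sketch: "temperature variability" as the statistical variance
# of simultaneous readings from different forehead areas.
import numpy as np

def temperature_variability(area_temps):
    return float(np.var(np.asarray(area_temps, dtype=float)))

print(temperature_variability([34.2, 33.6, 34.5, 33.4]))  # higher variability
print(temperature_variability([34.0, 34.1, 34.0, 33.9]))  # lower variability
```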
[0254] Various brain function-related conditions may be manifested via asymmetrical thermal patterns on the forehead. Optionally, a target of a neurofeedback session in such cases may be to decrease the asymmetry of the thermal patterns. In one embodiment, CAM is located to the right of the vertical symmetry axis that divides the user's face (e.g., 701), and the region is on the right side of the forehead. The neurofeedback system may include a second inward-facing head-mounted thermal camera (e.g., 702), located to the left of the vertical symmetry axis, which takes thermal measurements of a second region on the left side of the forehead (THF2). Optionally, the computer provides to the user a feedback that becomes more positive as the temperature asymmetry between THF and THF2 decreases.
[0255] Different regions on the forehead may be associated with different importance, with respect to various physiological responses and/or conditions that may be treated with neurofeedback sessions. In one embodiment, regions that are more important are associated with higher weights compared to weights associated with regions that are less important. Optionally, these weights may be utilized by the computer to calculate various values, such as an average temperature of the forehead, which, when calculated with the weights, may be considered a "weighted average temperature". Similarly, a temperature variability of THF that is calculated while taking into account the weights associated with the various areas may be a "weighted temperature variability", and a temperature asymmetry between THF and THF2, which is calculated while taking into account the weights associated with the various areas, may be a "weighted temperature asymmetry". In some embodiments, providing feedback to the user based on one or more of the above "weighted" values may increase the efficacy of the neurofeedback session.
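The weighted quantities can be computed as follows (a minimal sketch; the weights and temperatures are assumed inputs reflecting region importance, not values taken from the disclosure):

```python
# Hypothetical sketch of the weighted average temperature and weighted
# temperature variability over forehead regions with importance weights.
import numpy as np

def weighted_average_temperature(temps, weights):
    return float(np.average(np.asarray(temps, float), weights=weights))

def weighted_temperature_variability(temps, weights):
    t = np.asarray(temps, float)
    mu = np.average(t, weights=weights)
    return float(np.average((t - mu) ** 2, weights=weights))

temps   = [34.2, 33.6, 34.5, 33.4]
weights = [2.0, 1.0, 2.0, 1.0]   # e.g., more important regions weighted higher
print(weighted_average_temperature(temps, weights))
print(weighted_temperature_variability(temps, weights))
```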
[0256] The temperature variability may be an indicator of the success or failure of the neurofeedback session. A session that causes a decrease of the temperature variability below a certain first threshold may be considered a successful session that can be terminated, while a session that causes an increase of the temperature variability above a certain second threshold may be considered a failed session that should be terminated in order to prevent worsening the symptoms. In one embodiment, the computer terminates the neurofeedback session when THF are indicative of the temperature variability decreasing below the certain first threshold. Additionally or alternatively, the computer may terminate the neurofeedback session when THF are indicative of the temperature variability increasing above the certain second threshold.
[0257] In a similar fashion, the temperature asymmetry may be an indicator for the success or failure of the neurofeedback session for certain disorders. In one embodiment, the computer terminates the neurofeedback session when THF are indicative of the temperature asymmetry decreasing below a certain first threshold. Additionally or alternatively, the computer may terminate the neurofeedback session when THF are indicative of the temperature asymmetry increasing above a certain second threshold.
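Both termination rules reduce to simple threshold checks; in this sketch the threshold values are placeholders, and the variability is assumed to be computed as shown earlier (the same structure applies to the temperature asymmetry):

```python
# Hypothetical sketch of the success/failure termination logic.
def session_action(variability, success_thresh=0.02, failure_thresh=0.30):
    if variability < success_thresh:
        return "terminate: success"
    if variability > failure_thresh:
        return "terminate: failure, stop to avoid worsening symptoms"
    return "continue"

print(session_action(0.01))   # terminate: success
print(session_action(0.35))   # terminate: failure
print(session_action(0.10))   # continue
```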
[0258] Having neurofeedback sessions in real-world, day-to-day situations can involve conditions that are less sterile and controlled than the conditions typically encountered when conducting such sessions at a clinic or a laboratory. In particular, thermal measurements of the forehead may be
affected by various factors that are unrelated to the type of brain activity the user is conducting as part of the session; these factors may often be absent and/or less extreme in controlled settings, and/or may be noticed and accounted for by a practitioner (who, for example, may tell the user not to touch the forehead). Such factors may be referred to herein as confounding factors. Some examples of confounding factors include touching the forehead (e.g., with one's fingers), thermal radiation directed at the forehead (e.g., direct sunlight), and direct airflow on the forehead (e.g., from an air conditioner). Each of these factors can cause changes in THF that are not due to brain activity. In order to account for one or more of these confounding factors, in some embodiments, the neurofeedback system includes a wearable sensor that takes measurements (denoted mconf) indicative of at least one of the following confounding factors: touching the forehead, thermal radiation directed at the forehead, and direct airflow on the forehead. Optionally, the wearable sensor is coupled to a frame worn on the user's head. The following are some examples of types of sensors that the wearable sensor may involve in some embodiments of the neurofeedback system.
[0259] In one embodiment, the wearable sensor is an outward-facing head-mounted thermal camera (CAMout) that takes thermal measurements of the environment (THENV). Optionally, the angle between the optical axes of CAM and CAMout is at least one or more of the following angles: 45°, 90°, 130°, 170°, and 180°. In another embodiment, the wearable sensor provides measurements indicative of times at which the user touches the forehead. Optionally, the wearable sensor includes a visible-light camera, a miniature radar (such as a low-power radar operating in the range between 30 GHz and 3,000 GHz), an active electro-optics distance measurement device (such as a miniature Lidar), and/or an ultrasound sensor. In yet another embodiment, the sensor may be an anemometer that is physically coupled to a frame worn on the user's head, is located less than 15 cm from the face, and provides a value indicative of a speed of air directed at the face.
[0260] There are various ways in which the computer may utilize mconf to account for occurrences of a confounding factor during the neurofeedback session. In one embodiment, an occurrence of the confounding factor may prompt the computer to alert the user about the occurrence. In one example, the computer may identify, based on mconf, that the extent of a confounding factor reached a threshold, and command the user interface to alert the user that the neurofeedback session is less accurate due to the confounding factor. In another example, upon identifying that the extent of a confounding factor reached the threshold, the computer may refrain from updating the feedback provided to the user as part of the neurofeedback session for at least a certain duration. The certain duration may be a fixed period (e.g., 0.2 seconds from reaching the threshold), and/or may last until mconf indicate that the extent of the confounding factor is below the threshold.
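The two responses described above (alerting, and holding feedback updates) can be sketched as follows; the threshold value and normalized mconf units are assumptions, and the 0.2-second hold mirrors the fixed-period example in the text:

```python
# Hypothetical sketch: alert when a confounding factor crosses a threshold,
# and hold feedback updates for a fixed period while it persists.
import time

THRESHOLD = 0.5                       # assumed normalized confounder extent

def handle_confounder(m_conf, ui_alert, hold_seconds=0.2):
    if m_conf >= THRESHOLD:
        ui_alert("Neurofeedback is less accurate due to a confounding factor")
        time.sleep(hold_seconds)      # refrain from updating the feedback
        return True                   # caller should skip this feedback update
    return False

handle_confounder(0.7, print)         # alerts and holds for 0.2 s
```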
[0261] In one embodiment, the computer may adjust the values of THF based on the values of mconf according to a certain function and/or transformation. For example, THF may be normalized with respect to the intensity of thermal radiation directed at the face and/or the speed of wind directed at the face. In another embodiment, in which the computer utilizes the machine learning-based model to calculate a value indicative of the progress and/or success of the session, the computer may utilize mconf to generate
at least some of the feature values that are utilized to calculate the value indicative of the progress and/or success. Optionally, the model is trained based on samples that include at least some samples that are based on THF and mconf that were taken while a confounding factor affected THF.
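A minimal sketch of such an adjustment follows; the linear correction and its coefficients are hypothetical, standing in for whatever function or transformation is used in practice:

```python
# Hypothetical sketch: remove an estimated confounder contribution from THF,
# given m_conf = (radiation level, airflow level) in assumed normalized units.
import numpy as np

def adjust_thf(thf, m_conf, radiation_gain=0.8, airflow_gain=-0.5):
    radiation, airflow = m_conf
    correction = radiation_gain * radiation + airflow_gain * airflow
    return np.asarray(thf, dtype=float) - correction

print(adjust_thf([34.3, 34.1], m_conf=(0.2, 0.1)))
```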
[0262] Another approach that may be utilized by the computer, in some embodiments, is to learn to differentiate between changes to THF due to brain activity and changes to THF due to various confounding factors (which may have different characteristics). In one embodiment, the computer may generate feature values based on sets of THF and mconf, and utilize a second machine learning-based model to detect, based on the feature values, whether a change in THF occurred responsive to brain activity or a confounding factor. Optionally, the second model may be trained on samples generated based on measurements taken at times that a confounding factor affected THF and on other samples based on measurements taken at times that the confounding factor did not affect THF.
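As an illustration of the second model (an assumed implementation; the toy data, feature layout, and random-forest choice are not mandated by the text), a classifier can be trained to label a change in THF as brain-driven or confounder-driven:

```python
# Hypothetical sketch: classify the cause of a THF change from joint
# THF/m_conf features: [THF change, radiation level, airflow level].
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))                  # placeholder feature vectors
y = (X[:, 1] + X[:, 2] > 0.5).astype(int)     # 1 = confounder-driven (toy rule)
clf = RandomForestClassifier(n_estimators=50).fit(X, y)
print(clf.predict([[0.4, 0.9, 0.6]]))         # likely confounder-driven
```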
[0263] It is to be noted that since in real-world scenarios confounding factors can affect THF, utilizing one or more of the various measures described above may assist the computer to provide better neurofeedback sessions. Thus, in some embodiments, on average, neurofeedback sessions based on THF and mconf provide better results than neurofeedback sessions based on THF without mconf.
[0264] In addition to the confounding factors of which mconf may be indicative, in some embodiments, the computer may take into account, in a similar way, other confounding factors. In one embodiment, the neurofeedback system may include an additional wearable and/or head-mounted sensor used to detect a movement of the frame relative to the head while the frame is still worn, a change in the user's position, and/or a change in the user's body temperature. In another embodiment, the neurofeedback system may include a humidity sensor and/or an environmental temperature sensor, which may be coupled to the user.
[0265] Consumption of various substances may also be considered a confounding factor. In one embodiment, the computer may receive an indication of whether the user took medication before the neurofeedback session (e.g., the type of medication and dosage), whether the user smoked, consumed alcohol, etc. Each of these factors may affect THF in certain ways that are not necessarily because of the user's brain activity. In a similar way to how the computer handles confounding factors in the description above, the computer may warn about the session being ineffective (e.g., after consuming alcohol or drugs) and/or perform various normalizations and/or computations to address these confounding factors (e.g., by generating feature values indicating the consumption of the substances).
[0266] Another way in which some confounding factors may be addressed involves providing better isolation of the forehead region from the environment while the neurofeedback session is being conducted. To this end, one embodiment involves utilization of a clip-on structure designed to be attached and detached from the frame multiple times (e.g., it may be attached before a neurofeedback session starts and detached after the session terminates). Optionally, the clip-on includes a cover that occludes (when attached to the frame) the forehead region measured by CAM, which drives the neurofeedback. The clip-on may protect the region against environmental radiation, wind, and touching. FIG. 38 illustrates a clip-on 716 configured to be attached and detached from the frame 700 multiple times. The
clip-on 716 includes a cover configured to occlude the region on the user's forehead (when the clip-on is attached to the frame) and a mechanism that holds the clip-on to the frame.
[0267] This selective use of the clip-on 716 can enable CAM 718 to provide different types of measurements. For example, THF taken while the clip-on is attached may be less noisy than measurements taken when the clip-on is not attached. In some embodiments, measurements obtained without the clip-on may be too noisy for an effective neurofeedback session due to environmental confounding factors. Thus, in one embodiment, CAM may be used to detect that the user needs a neurofeedback session while the clip-on does not cover the region on the forehead (e.g., based on a thermal pattern of the forehead that indicates that the user is in an abnormal state). Optionally, the user is prompted to attach the clip-on and commence with the neurofeedback session. After the clip-on is attached, CAM takes THF that are used effectively for the neurofeedback session (and may be of better quality than THF taken when the clip-on is not attached).
[0268] The neurofeedback system may include, in some embodiments, one or more additional CAMs to measure physiological signals indicative of respiration, stress, and other relevant parameters. Optionally, a target of the neurofeedback session may include bringing these physiological signals to a certain value in addition to a target that is related to THF.
[0269] In one example, the neurofeedback system may include a second inward-facing head-mounted thermal camera (CAM2) that takes thermal measurements of a region below the nostrils (THN), which is indicative of the user's breathing. Optionally, the computer may control the neurofeedback session also based on THN. Optionally, THN is utilized to calculate values of one or more respiratory parameters, such as breathing rate, exhale duration, and/or smoothness of the exhale stream. Optionally, a target state for the neurofeedback session involves having certain values of the one or more respiratory parameters fall in certain ranges. In one example, CAM2 may be the thermal camera 727 or the thermal camera 729, which are illustrated in FIG. 34. Optionally, the computer calculates the user's breathing rate based on THN, and guides the user to breathe at his/her resonant frequency, which maximizes the amplitude of respiratory sinus arrhythmia and is in the range of 4.5 to 7.0 breaths/min.
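One plausible way to obtain the breathing rate from THN (an assumed method for illustration: spectral-peak estimation over a band of physiologically plausible rates) is sketched below, together with a check against the resonant-frequency range cited above:

```python
# Hypothetical sketch: estimate breathing rate as the dominant spectral peak
# of the THN time series, then test it against the 4.5-7.0 breaths/min range.
import numpy as np

def breathing_rate_bpm(thn, fs):
    x = np.asarray(thn, dtype=float) - np.mean(thn)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.05) & (freqs <= 0.7)    # ~3 to 42 breaths/min
    return 60.0 * freqs[band][np.argmax(power[band])]

fs = 8.0                                        # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
thn = 34 + 0.2 * np.sin(2 * np.pi * 0.1 * t)    # simulated 6 breaths/min signal
rate = breathing_rate_bpm(thn, fs)
print(rate, 4.5 <= rate <= 7.0)                 # within the resonant range?
```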
[0270] In another example, the neurofeedback system may include second and third inward-facing head-mounted thermal cameras (CAM2 and CAM3, respectively), which take thermal measurements of regions on the periorbital area and the nose (THROI2 and THROI3, respectively). Optionally, the computer may control the neurofeedback session also based on THROI2 and THROI3. For example, the computer may calculate a stress level of the user based on THROI2 and/or THROI3, and a target state of the neurofeedback session may correspond to a certain stress level the user is supposed to have. For example, CAM2 may be the thermal camera 724 or 726, and CAM3 may be the thermal camera 733, which are illustrated in FIG. 34.
[0271] Various embodiments described herein involve an HMS that may be connected, using wires and/or wirelessly, with a device carried by the user and/or a non-wearable device. The HMS may include a battery, a computer, sensors, and a transceiver.
[0272] FIG. 40a and FIG. 40b are schematic illustrations of possible embodiments for computers (400, 410) that are able to realize one or more of the embodiments discussed herein that include a "computer". The computer (400, 410) may be implemented in various ways, such as, but not limited to, a server, a client, a personal computer, a network device, a handheld device (e.g., a smartphone), an HMS (such as smart glasses, an augmented reality system, and/or a virtual reality system), a computing device embedded in a wearable device (e.g., a smartwatch or a computer embedded in clothing), a computing device implanted in the human body, and/or any other computer form capable of executing a set of computer instructions. Herein, an augmented reality system refers also to a mixed reality system. Further, references to a computer or processor include any collection of one or more computers and/or processors (which may be at different locations) that individually or jointly execute one or more sets of computer instructions. For example, a first computer may be embedded in the HMS that communicates with a second computer embedded in the user's smartphone that communicates over the Internet with a cloud computer.
[0273] The computer 400 includes one or more of the following components: processor 401, memory 402, computer-readable medium 403, user interface 404, communication interface 405, and bus 406. The computer 410 includes one or more of the following components: processor 411, memory 412, and communication interface 413.
[0274] Thermal measurements that are forwarded to a processor/computer may include "raw" values that are essentially the same as the values measured by thermal cameras, and/or processed values that are the result of applying some form of preprocessing and/or analysis to the raw values. Examples of methods that may be used to process the raw values include analog signal processing, digital signal processing, and various forms of normalization, noise cancellation, and/or feature extraction.
[0275] At least some of the methods described herein are "computer-implemented methods" that are implemented on a computer, such as the computer (400, 410), by executing instructions on the processor (401, 411). Optionally, the instructions may be stored on a computer-readable medium, which may optionally be a non-transitory computer-readable medium. In response to execution by a system including a processor and memory, the instructions cause the system to perform the method steps.
[0276] Herein, a direction of the optical axis of a VCAM or a CAM that has focusing optics is determined by the focusing optics, while the direction of the optical axis of a CAM without focusing optics (such as a single pixel thermopile) is determined by the angle of maximum responsivity of its sensor. When optics are utilized to take measurements with a CAM, then the term CAM includes the optics (e.g., one or more lenses). In some embodiments, the optics of a CAM may include one or more lenses made of a material suitable for the required wavelength, such as one or more of the following materials: Calcium Fluoride, Gallium Arsenide, Germanium, Potassium Bromide, Sapphire, Silicon, Sodium Chloride, and Zinc Sulfide. In other embodiments, the CAM optics may include one or more diffractive optical elements, and/or a combination of one or more diffractive optical elements and one or more refractive optical elements.
[0277] When CAM includes an optical limiter/field limiter/FOV limiter (such as a thermopile sensor inside a standard TO-39 package with a window, or a thermopile sensor with a polished metal field limiter), then the term CAM may also refer to the optical limiter. Depending on the context, the term CAM may also refer to a readout circuit adjacent to CAM, and/or to the housing that holds CAM.
[0278] Herein, references to thermal measurements in the context of calculating values based on thermal measurements, generating feature values based on thermal measurements, or comparison of thermal measurements, relate to the values of the thermal measurements (which are values of temperature or values of temperature changes). Thus, a sentence in the form of "calculating based on THROI" may be interpreted as "calculating based on the values of THROI", and a sentence in the form of "comparing THROI1 and THROI2" may be interpreted as "comparing values of THROI1 and values of THROI2".
[0279] Depending on the embodiment, thermal measurements of an ROI (usually denoted THROI or using a similar notation) may have various forms, such as time series, measurements taken according to a varying sampling frequency, and/or measurements taken at irregular intervals. In some embodiments, thermal measurements may include various statistics of the temperature measurements (T) and/or the changes to temperature measurements (ΔT), such as minimum, maximum, and/or average values. Thermal measurements may be raw and/or processed values. When a thermal camera has multiple sensing elements (pixels), the thermal measurements may include values corresponding to each of the pixels, and/or include values representing processing of the values of the pixels. The thermal measurements may be normalized, such as normalized with respect to a baseline (which is based on earlier thermal measurements), time of day, day in the month, type of activity being conducted by the user, and/or various environmental parameters (e.g., the environment's temperature, humidity, radiation level, etc.).
[0280] As used herein, references to "one embodiment" (and its variations) mean that the feature being referred to may be included in at least one embodiment of the invention. Moreover, separate references to "one embodiment", "some embodiments", "another embodiment", "still another embodiment", etc., may refer to the same embodiment, may illustrate different aspects of an embodiment, and/or may refer to different embodiments.
[0281] Some embodiments may be described using the verb "indicating", the adjective "indicative", and/or using variations thereof. Herein, sentences in the form of "X is indicative of Y" mean that X includes information correlated with Y, up to the case where X equals Y. For example, sentences in the form of "thermal measurements indicative of a physiological response" mean that the thermal measurements include information from which it is possible to infer the physiological response. Stating that "X indicates Y" or "X indicating Y" may be interpreted as "X being indicative of Y". Additionally, sentences in the form of "provide/receive an indication indicating whether X happened" may refer herein to any indication method, including but not limited to: sending/receiving a signal when X happened and not sending/receiving a signal when X did not happen, not sending/receiving a signal when X happened and sending/receiving a signal when X did not happen, and/or sending/receiving a first signal when X happened and sending/receiving a second signal when X did not happen.
[0282] Herein, "most" of something is defined as above 51% of the something (including 100% of the something). Both a "portion" of something and a "region" of something refer herein to a value between a fraction of the something and 100% of the something. For example, sentences in the form of a "portion of an area" may cover between 0.1% and 100% of the area. As another example, sentences in the form of a "region on the user's forehead" may cover between the smallest area captured by a single pixel (such as 0.1% or 5% of the forehead) and 100% of the forehead. The word "region" is open-ended claim language, and a camera said to capture a specific region on the face may capture just a small part of the specific region, the entire specific region, and/or a portion of the specific region together with additional region(s).
[0283] Sentences in the form of "angle greater than 20°" refer to absolute values (which may be +20° or -20° in this example), unless specifically indicated otherwise, such as in a phrase having the form of "the optical axis of CAM is 20° above/below the Frankfort horizontal plane", where it is clearly indicated that the CAM is pointed upwards/downwards. The Frankfort horizontal plane is created by two lines from the superior aspects of the right/left external auditory canal to the most inferior point of the right/left orbital rims.
[0284] The terms "comprises," "comprising," "includes," "including," "has," "having," or any other variation thereof, indicate open-ended claim language that does not exclude additional limitations. The terms "a" and "an" are employed to describe one or more, and the singular also includes the plural unless it is obvious that it is meant otherwise; for example, sentences in the form of "a CAM configured to take thermal measurements of a region (THROI)" refer to one or more CAMs that take thermal measurements of one or more regions, including one CAM that takes thermal measurements of multiple regions; as another example, "a computer" refers to one or more computers, such as a combination of a wearable computer that operates together with a cloud computer.
[0285] The phrase "based on" is intended to mean "based, at least in part, on". Additionally, stating that a value is calculated "based on X" and following that, in a certain embodiment, that the value is calculated "also based on Y", means that in the certain embodiment, the value is calculated based on X and Y.
[0286] The terms "first", "second" and so forth are to be interpreted merely as ordinal designations, and shall not be limiting in themselves. A predetermined value is a fixed value and/or a value determined any time before performing a calculation that compares a certain value with the predetermined value. A value is also considered to be a predetermined value when the logic, used to determine whether a threshold that utilizes the value is reached, is known before starting to perform computations to determine whether the threshold is reached.
[0287] The embodiments of the invention may include any variety of combinations and/or integrations of the features of the embodiments described herein. Although some embodiments may depict serial operations, the embodiments may perform certain operations in parallel and/or in different orders from those depicted. Moreover, the use of repeated reference numerals and/or letters in the text and/or
drawings is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. The embodiments are not limited in their application to the order of the steps of the methods, or to details of implementation of the devices, set forth in the description, drawings, or examples. Moreover, individual blocks illustrated in the figures may be functional in nature and therefore may not necessarily correspond to discrete hardware elements.
[0288] Certain features of the embodiments, which may have been, for clarity, described in the context of separate embodiments, may also be provided in various combinations in a single embodiment. Conversely, various features of the embodiments, which may have been, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination. Embodiments described in conjunction with specific examples are presented by way of example, and not limitation. Moreover, it is evident that many alternatives, modifications, and variations will be apparent to those skilled in the art. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the embodiments. Accordingly, this disclosure is intended to embrace all such alternatives, modifications, and variations that fall within the spirit and scope of the appended claims and their equivalents.
[0289] The following paragraphs disclose claim text that the Applicant intends to file in Divisional patent applications. Following each embodiment, which describes an independent claim, there are multiple dependent claims starting with "optionally". In the Divisional patent applications, the optional dependent claims may be arranged according to any order and multiple dependencies. It is specifically noted that the order of the optional dependent claims below is not limiting and any order thereof may be claimed.
[0290] In one embodiment, a system configured to calculate a respiratory parameter, comprising:
[0291] an inward-facing head-mounted thermal camera (CAM) configured to take thermal measurements of a region below the nostrils (THROI) of a user; wherein THROI are indicative of the exhale stream; and
[0292] a computer configured to:
[0293] generate feature values based on THROI; and
[0294] utilize a model to calculate a respiratory parameter based on the feature values; wherein the model was trained based on previous THROI of the user taken during different days.
[0295] Optionally, the respiratory parameter is indicative of the user's breathing rate; CAM is located above the user's upper lip and less than 15 cm from the user's face, and does not occlude any of the user's mouth and nostrils; THROI comprise thermal measurements of at least first and second regions below the right and left nostrils (THROI1 and THROI2, respectively) of the user, which are indicative of exhale streams from the right and left nostrils, respectively; and THROI further comprise thermal measurements of at least one of a region on the mouth and a volume protruding out of the mouth (THROI3) of the user, indicative of the exhale stream from the mouth.
[0296] Optionally, the computer is further configured to detect, based on THROI1, THROI2, and THROI3,
whether the user is breathing mainly through the mouth or through the nose.
[0297] Optionally, the computer is further configured to detect, based on THROI1 and THROI2, whether the user is breathing mainly through the right nostril or through the left nostril.
[0298] Optionally, further comprising at least one inward-facing head-mounted visible-light camera configured to take images of a region on the mouth (IMM), wherein IMM are indicative of whether the mouth is open or closed; and the computer is further configured to utilize the model to detect, based on THROI and IMM, whether the user is breathing mainly through the mouth or through the nose; wherein the model was trained based on: a first set of THROI taken while IMM was indicative that the mouth is open, and a second set of THROI taken while IMM was indicative that the mouth is closed.
[0299] Optionally, the computer is further configured to calculate smoothness of the exhale stream based on THROI.
[0300] Optionally, responsive to detecting that the smoothness is below a predetermined threshold, the computer is further configured to alert the user in order to increase the user's awareness of the user's breathing.
[0301] Optionally, the computer is further configured to utilize an indication of consuming a substance to detect a respiratory-related attack by training the model based on: a first set of THROI taken while the user experienced a respiratory-related attack after consuming the substance, and a second set of THROI taken while the user did not experience a respiratory-related attack after consuming the substance; and wherein the consuming of the substance involves consuming a certain drug and/or consuming a certain food item.
[0302] Optionally, the computer is further configured to utilize an indication of a situation of the user to detect a respiratory-related attack by training the model based on: a first set of THROI taken while the user was in the situation and experienced a respiratory-related attack, and a second set of THROI taken while the user was in the situation and did not experience a respiratory-related attack; and wherein the situation involves one or more of the following: (i) interacting with a certain person, (ii) a type of activity the user is conducting, selected from at least two different types of activities associated with different levels of stress, and (iii) a type of activity the user is about to conduct within thirty minutes, selected from at least two different types of activities associated with different levels of stress.
[0303] Optionally, further comprising a sensor configured to take measurements indicative of the user's movements (mmove), and the computer is further configured to detect a respiratory-related attack based on mmove and THROI.
[0304] Optionally, the respiratory-related attack involves the user experiencing at least one of the following: an asthma attack, an epileptic attack, an anxiety attack, a panic attack, and a tantrum; and further comprising a user interface configured to alert the user responsive to detecting that a probability of the respiratory-related attack reaches a threshold.
[0305] Optionally, further comprising a sensor configured to take measurements indicative of the user's movements (mmove), and the computer is further configured to determine, based on mmove and THROI,
whether the user exhaled while making a physical effort above a predetermined threshold.
[0306] Optionally, the computer is further configured to: receive a first indication that the user is making or is about to make the physical effort, command a user interface (UI) to suggest the user to exhale while making the physical effort, and command the UI to play a positive feedback in response to determining that the user managed to exhale while making the physical effort.
[0307] Optionally, further comprising a sensor configured to take measurements indicative of the user's movements (mmove), and the computer is further configured to: (i) receive from a fitness app an indication that the user should exhale while making a movement, (ii) determine, based on mmove, when the user makes the movement, and (iii) determine, based on THROI, whether the user exhaled while making the movement.
[0308] Optionally, further comprising a sensor configured to take measurements indicative of the user's movements (mmove), and the computer is further configured to: (i) receive from a fitness app a certain number of breath cycles during which the user should perform a physical exercise, (ii) determine, based on mmove, when the user performs the physical exercise, and (iii) count, based on THROI, the number of breath cycles the user had while performing the physical exercise.
[0309] Optionally, the model was further trained based on previous THROI of the user taken while the user had an asthma attack; and the computer is further configured to calculate the user's breathing rate based on THROI, and to command a user interface to alert the user about an imminent asthma attack responsive to identifying an increase in the breathing rate that is associated with an asthma attack according to the model.
[0310] Optionally, further comprising a microphone configured to record the user; wherein the computer is further configured to analyze the recording in order to identify at least one of the following body sounds: asthmatic breathing sounds, asthma wheezing, and coughing; wherein a first alert provided to the user in response to identifying an increase in the breathing rate, without identifying at least one of the body sounds, is less intense than a second alert provided to the user in response to identifying both the increase in the breathing rate and at least one of the body sounds.
[0311] Optionally, further comprising a movement sensor worn by the user and configured to measure movements of the user; wherein the computer is further configured to analyze the measurements of the movement sensor in order to identify movements indicative of at least one of the following: spasm, shivering, and sagittal plane movement indicative of one or more of asthma wheezing, coughing, and chest tightness; wherein a first alert provided to the user in response to identifying an increase in the breathing rate, without identifying at least one of the movements, is less intense than a second alert provided to the user in response to identifying both the increase in the breathing rate and at least one of the movements.
[0312] Optionally, the computer is further configured to calculate, based on THROI, a ratio between exhale and inhale durations (texhale/tinhale), and to instruct the user, via a user interface, to prolong the exhale when texhale/tinhale falls below a predetermined threshold.
[0313] Optionally, the computer is further configured to: (i) receive an indication that the user's stress level reaches a first threshold, (ii) identify, based on THROI, that the ratio between exhaling and inhaling durations (texhale/tinhale) is below a second threshold that is below 1.5, and (iii) command a user interface to suggest the user to prolong the exhale until texhale/tinhale reaches a third threshold that is at least 1.5.
[0314] Optionally, further comprising another inward-facing head-mounted thermal camera configured to take thermal measurements of a region on the forehead (THF); wherein the computer is further configured to utilize THROI and THF to detect a respiratory-related attack; wherein the model was trained based on: a first set of THROI and THF taken while the user experienced a respiratory-related attack, and a second set of THROI and THF taken while the user did not experience a respiratory-related attack.
[0315] In one embodiment, a method for calculating a respiratory parameter, comprising:
[0316] taking, using an inward-facing head-mounted thermal camera (CAM), thermal measurements of a region below the nostrils (THROI) of a user; wherein THROI are indicative of the exhale stream;
[0317] generating feature values based on THROI; and
[0318] utilizing a model to calculate a respiratory parameter based on the feature values; wherein the model was trained based on previous THROI of the user taken during different days.
[0319] Optionally, THROI comprise thermal measurements of regions below the right and left nostrils and thermal measurement of a region on the mouth, and further comprising detecting based on THROI whether the user is breathing mainly through the mouth or through the nose.
[0320] Optionally, further comprising taking, using a sensor, measurements indicative of the user's movements (mmove), and determining, based on mmove and THROI, whether the user exhaled while making a physical effort above a predetermined threshold.
[0321] Optionally, further comprising: training the model based on previous THROI of the user taken while the user had an asthma attack, calculating the user's breathing rate based on THROI, and alerting the user about an imminent asthma attack responsive to identifying an increase in the breathing rate that is associated with an asthma attack according to the model.
[0322] In one embodiment, a system configured to identify the dominant nostril, comprising:
[0323] at least one inward-facing head-mounted thermal camera (CAM) configured to take thermal measurements of first and second regions below the right and left nostrils (THROI1 and THROI2, respectively); wherein the at least one CAM does not occlude any of the user's mouth and nostrils; and
[0324] a computer configured to identify the dominant nostril based on THROI1 and THROI2.
[0325] Optionally, each CAM from among the at least one CAM is physically coupled to a frame configured to be worn on the user's head, weighs below 10 g, is located less than 15 cm from the user's face and above the user's upper lip, and comprises multiple sensing elements.
[0326] Optionally, further comprising a frame configured to be worn on the user's head; wherein the at least one CAM comprises at least first and second inward-facing head-mounted thermal cameras (CAM1 and CAM2, respectively) configured to take THROI1 and THROI2, respectively, and located less than 15 cm from the user's face; CAM1 is physically coupled to the right half of the frame and captures the exhale
stream from the right nostril better than it captures the exhale stream from the left nostril, and CAM2 is physically coupled to the left half of the frame and captures the exhale stream from the left nostril better than it captures the exhale stream from the right nostril.
[0327] Optionally, the computer is further configured to learn the typical sequence of switching between the dominant nostrils based on previous measurements of the user taken over more than a week, and to issue an alert upon detecting an irregularity in the sequence of changes between the dominant nostrils.
[0328] Optionally, the computer is further configured to identify balanced breathing when breaths through the right and the left nostrils are equal, and to notify the user accordingly.
[0329] Optionally, the computer is further configured to monitor nostril dominance over a certain period, and issue an alert when at least one of the following occurs: (i) a ratio between the total times of the right and left nostril dominance during the certain period reaches a threshold, (ii) an average time to switch from right to left nostril dominance reaches a threshold, and (iii) an average time to switch from left to right nostril dominance reaches a threshold.
[0330] Optionally, the computer is further configured to detect that the user is experiencing an asthma attack, and to command a user interface to update the user about the current dominant nostril and to suggest that the user switch the current dominant nostril.
[0331] Optionally, the computer is further configured to detect that the user has a headache, and to command a user interface to update the user about the current dominant nostril and suggest that the user switch the current dominant nostril.
[0332] Optionally, THROI1 and THROI2 are indicative of the length of the exhale stream, and the computer is further configured to calculate a level of excitement of the user based on the length of the exhale stream; whereby the longer the exhale stream, the higher the user's excitement.
[0333] Optionally, the computer is further configured to assist the user to extend the duration of the time gap between inhaling and exhaling by performing at least one of the following: (i) calculating, based on THROI1 and THROI2, the average time gap between inhaling and exhaling over a predetermined duration, and providing the calculation to the user via a user interface (UI), (ii) calculating, based on THROI1 and THROI2, the average time gap between inhaling and exhaling over a first predetermined duration, and reminding the user via the UI to practice extending the duration when the average time gap is shorter than a first predetermined threshold, and (iii) calculating, based on THROI1 and THROI2, the average time gap between inhaling and exhaling over a second predetermined duration, and encouraging the user via the UI when the average time gap reaches a second predetermined threshold.
[0334] Optionally, the computer is further configured to identify a shape of the exhale stream (SHAPE) based on THROI1 and THROI2, and to differentiate between at least first and second shapes of the exhale stream (SHAPEs).
[0335] Optionally, the first SHAPE is identified based on a first set of THROI1 and THROI2 taken during multiple exhales over a first duration longer than a minute, the second SHAPE is identified based on a second set of THROI1 and THROI2 taken during multiple exhales over a second duration longer than a minute, and the first duration precedes the second duration.
[0336] Optionally, the computer is further configured to learn a sequence of typical changes between different SHAPEs based on previous measurements of the user taken over more than a week, and to issue an alert upon detecting an irregularity in a sequence of changes between the different SHAPEs.
[0337] Optionally, the computer is further configured to suggest that the user eat a first type of food responsive to identifying the first SHAPE, and suggest that the user eat a second type of food responsive to identifying the second SHAPE.
[0338] Optionally, the computer is further configured to receive data about types of foods consumed by the user, store the data in a memory, and find correlations between the SHAPEs and the types of foods.
[0339] Optionally, the computer is further configured to prioritize activities for the user based on the identified SHAPE, such that a first activity is prioritized over a second activity responsive to identifying the first SHAPE, and the second activity is prioritized over the first activity responsive to identifying the second SHAPE.
[0340] Optionally, the computer is further configured to receive an indication of the user's breathing rate, and to: (i) suggest to the user, via a user interface, to perform a first activity in response to detecting that the breathing rate reached a threshold while identifying the first SHAPE, and (ii) suggest to the user to perform a second activity, which is different from the first activity, in response to detecting that the breathing rate reached the threshold while identifying the second SHAPE.
[0341] Optionally, the computer is further configured to receive an indication of the user's breathing rate, and to: (i) alert the user, via a user interface, in response to detecting that the breathing rate reached a threshold while identifying the first SHAPE, and (ii) not alert the user in response to detecting that the breathing rate reached the threshold while identifying the second SHAPE.
[0342] Optionally, the computer is configured to identify the SHAPE by comparing THROI1 and THROI2 to one or more reference patterns that were generated based on previous THROI1 and THROI2 of the user taken on different days.
[0343] Optionally, identifying the SHAPE comprises: generating feature values based on THROI1 and THROI2, and utilizing a model to classify THROI1 and THROI2 to a class corresponding to the first shape or the second shape, based on the feature values; and wherein the model is trained based on previous THROI1 and THROI2 of the user taken during different days.
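To make the classification step concrete, the sketch below trains a toy nearest-centroid model on feature values derived from previous THROI1 and THROI2 and classifies a new sample into one of two SHAPE classes; the feature choices and the nearest-centroid model are stand-ins for illustration, not the model the embodiment requires.

```python
import numpy as np

# Illustrative sketch: SHAPE classification with a nearest-centroid model
# trained on feature values from previous THROI1/THROI2 of the user.

def features(throi1: np.ndarray, throi2: np.ndarray) -> np.ndarray:
    return np.array([throi1.std(), throi2.std(), throi1.mean() - throi2.mean()])

class NearestCentroidShape:
    def fit(self, samples, labels):
        self.classes_ = sorted(set(labels))
        self.centroids_ = {c: np.mean([s for s, l in zip(samples, labels) if l == c],
                                      axis=0) for c in self.classes_}
        return self

    def predict(self, sample):
        return min(self.classes_,
                   key=lambda c: np.linalg.norm(sample - self.centroids_[c]))

# Toy training data: two SHAPE classes with different feature statistics.
rng = np.random.default_rng(0)
train = [features(rng.normal(0, 1.0, 100), rng.normal(0, 0.3, 100)) for _ in range(5)]
train += [features(rng.normal(0, 0.3, 100), rng.normal(0, 1.0, 100)) for _ in range(5)]
labels = ["shape1"] * 5 + ["shape2"] * 5
model = NearestCentroidShape().fit(train, labels)
print(model.predict(features(rng.normal(0, 1.0, 100), rng.normal(0, 0.3, 100))))
```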
[0344] Optionally, the first and second SHAPEs are indicative of at least one of the following: two of the five great elements according to the Vedas, two different emotional states of the user, two different moods of the user, two different energetic levels of the user, and a healthy state of the user versus an illness state of the user.
[0345] In one embodiment, a system configured to provide a breathing biofeedback session, comprising:
[0346] at least one inward-facing head-mounted thermal camera (CAM) configured to take thermal
measurements of a region below the nostrils (THROI) of a user; wherein THROI are indicative of the exhale stream; and
[0347] a user interface configured to provide feedback, calculated based on THROI, as part of a breathing biofeedback session for the user.
[0348] Optionally, each CAM is located less than 15 cm from the user's face and above the user's upper lip, and does not occlude any of the user's mouth and nostrils; and wherein THROI comprises thermal measurements of at least first and second regions below right and left nostrils of the user.
[0349] Optionally, further comprising a frame configured to be worn on the user's head; wherein THROI comprises thermal measurements of first and second regions below right and left nostrils (THROI1 and THROI2, respectively) of the user, and the at least one CAM comprises first and second thermal cameras (CAM1 and CAM2, respectively) configured to take THROI1 and THROI2, respectively, and located less than 15 cm from the user's face and above the nostrils; CAM1 is physically coupled to the right half of the frame and captures the exhale stream from the right nostril better than it captures the exhale stream from the left nostril, and CAM2 is physically coupled to the left half of the frame and captures the exhale stream from the left nostril better than it captures the exhale stream from the right nostril.
[0350] Optionally, THROI comprises thermal measurements of first, second and third regions, which are indicative of exhale streams from the right nostril, the left nostril, and the mouth, respectively; the first and second regions are below the right and left nostrils, respectively; and the third region comprises at least one of the following: a region on the mouth, and a volume protruding out of the mouth.
[0351] Optionally, further comprising a computer configured to calculate the feedback; wherein the feedback is indicative of similarity between current smoothness of the exhale stream and target smoothness of the exhale stream; wherein the current smoothness is calculated in real-time based on THROI, and the target smoothness is calculated based on previous THROI of the user taken while the user was in a state considered better than the user's state while starting the breathing biofeedback session.
[0352] Optionally, further comprising a computer configured to calculate in real-time smoothness of the exhale stream based on THROI, and the feedback is indicative of at least one of the following: whether the smoothness is above or below a predetermined threshold, and whether the smoothness has increased or decreased since a previous feedback that was indicative of the smoothness.
[0353] Optionally, the smoothness is calculated at frequency >4Hz, and the delay from detecting a change in the smoothness to updating the feedback provided to the user is <0.5 second.
[0354] Optionally, the feedback is indicative of whether the smoothness is above or below the predetermined threshold, and the user interface is configured to update the feedback provided to the user at a rate >2Hz.
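A minimal streaming sketch of such real-time smoothness feedback follows, under the assumption that smoothness is taken as the inverse short-window variability of consecutive THROI differences (one plausible metric among many). Updating per sample keeps the feedback delay well under 0.5 s at sampling frequencies above 4 Hz.

```python
from collections import deque

# Illustrative sketch: a streaming smoothness estimate of the exhale
# stream, updated at the sample rate. The window size and feedback
# threshold are assumed values.

class SmoothnessMonitor:
    def __init__(self, window: int = 16, threshold: float = 5.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold  # assumed feedback threshold

    def update(self, throi_sample: float) -> str:
        self.samples.append(throi_sample)
        if len(self.samples) < 2:
            return "collecting..."
        diffs = [b - a for a, b in zip(self.samples, list(self.samples)[1:])]
        var = sum(d * d for d in diffs) / len(diffs)
        smoothness = 1.0 / (var + 1e-9)
        return "above threshold" if smoothness > self.threshold else "below threshold"

monitor = SmoothnessMonitor()
for sample in [30.0, 30.1, 30.2, 30.1, 30.0, 30.1]:   # a smooth exhale
    feedback = monitor.update(sample)
print(feedback)
```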
[0355] Optionally, further comprising a computer configured to calculate the feedback; wherein the feedback is indicative of similarity between current shape of the exhale stream (SHAPE) and target SHAPE; wherein the current SHAPE is calculated in real-time based on THROI, and the target SHAPE is calculated based on at least one of the following: (i) previous THROI of the user taken while the user was
in a state considered better than the user's state while starting the breathing biofeedback session, and (ii) THROI of other users taken while the other users were in a state considered better than the user's state while starting the breathing biofeedback session.
[0356] Optionally, further comprising a computer configured to calculate the feedback; wherein the feedback is indicative of similarity between current breathing rate variability (BRV) and target BRV; wherein BRV is indicative of variations between consecutive breaths, the current BRV is calculated in real-time based on THROI, and the target BRV is calculated based on previous THROI of the user taken while the user was in a state considered better than the user's state while starting the breathing biofeedback session.
[0357] Optionally, further comprising a computer configured to calculate the feedback; wherein breathing rate variability (BRV) is indicative of variations between consecutive breaths, and is calculated in real-time based on THROI; and wherein the feedback is indicative of at least one of the following: whether the BRV is above or below a predetermined threshold, and whether a predetermined component of the BRV has increased or decreased since a previous feedback that was indicative of the BRV.
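As one concrete reading of BRV, the sketch below detects exhale onsets in THROI, forms breath-to-breath intervals, and summarizes their variation with an RMSSD-style statistic; the onset detector and the statistic are assumptions for illustration, as the embodiment only requires a value indicative of breath-to-breath variation.

```python
import numpy as np

# Illustrative sketch: BRV as the variability of intervals between
# consecutive exhale onsets detected in THROI.

FS = 8.0  # assumed sampling rate, Hz

def exhale_onsets(throi: np.ndarray) -> np.ndarray:
    """Indices where the signal crosses above its mean (one per breath)."""
    above = throi > throi.mean()
    return np.where(above[1:] & ~above[:-1])[0] + 1

def brv_rmssd(throi: np.ndarray) -> float:
    intervals = np.diff(exhale_onsets(throi)) / FS   # seconds per breath
    return float(np.sqrt(np.mean(np.diff(intervals) ** 2)))

t = np.arange(0, 60, 1.0 / FS)
regular = np.sin(2 * np.pi * 0.25 * t)   # perfectly regular breathing
print(f"BRV of regular breathing: {brv_rmssd(regular):.3f} s")
```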
[0358] Optionally, further comprising a computer configured to: calculate a value indicative of similarity between a current THROI pattern and a previous THROI pattern of the user taken while the user was in a target state, and generate the feedback based on the similarity; wherein a THROI pattern refers to at least one of: a spatial pattern, a pattern in the time domain, and a pattern in the frequency domain.
[0359] Optionally, while the user was in the target state, one or more of the following were true: the user was healthier compared to the state of the user while the current THROI were taken (the present state), the user was more relaxed compared to the present state, a stress level of the user was below a threshold, and the user was more concentrated compared to the present state.
[0360] Optionally, the computer is further configured to receive an indication of a period during which the user was in the target state based on at least one of the following: a report made by the user, measurements of the user with a sensor other than CAM, semantic analysis of text written by the user, and analysis of the user's speech; and wherein the previous THROI pattern is based on THROI taken during the period.
[0361] Optionally, further comprising a computer configured to: calculate a value indicative of similarity between current THROI and previous THROI of the user taken while the user was in a target state, and generate the feedback based on the similarity; wherein the similarity is calculated by comparing (i) a current value of a characteristic of the user's breathing, calculated based on THROI, to (ii) a target value of the characteristic of the user's breathing, calculated based on the previous THROI.
[0362] Optionally, further comprising a computer configured to calculate the feedback, wherein the feedback is further designed to guide the user to breathe at his/her resonant frequency, which maximizes the amplitude of respiratory sinus arrhythmia and is in the range of 4.5 to 7.0 breaths/min.
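A minimal sketch of such resonance-paced guidance, assuming a 6 breaths/min pace within the cited 4.5 to 7.0 breaths/min range and an arbitrary inhale/exhale split:

```python
import time

# Illustrative sketch: pacing cues inside the resonant-frequency range.
# The 6 breaths/min pace and the 40/60 inhale/exhale split are assumptions.

def paced_breathing(breaths_per_min: float = 6.0, n_breaths: int = 2):
    period = 60.0 / breaths_per_min
    inhale, exhale = 0.4 * period, 0.6 * period   # slightly longer exhale
    for _ in range(n_breaths):
        print(f"inhale for {inhale:.1f} s")
        time.sleep(inhale)
        print(f"exhale for {exhale:.1f} s")
        time.sleep(exhale)

paced_breathing()
```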
[0363] In one embodiment, a method for conducting a breathing biofeedback session, comprising:
[0364] taking, using at least one inward-facing head-mounted thermal camera (CAM), thermal measurements of a region below the nostrils (THROI) of a user; wherein THROI are indicative of the exhale stream;
[0365] taking target THROI (TARGET) when the user is in a desired state;
[0366] taking current THROI (CURRENT) of the user; and
[0367] providing the user with real-time feedback indicative of similarity between TARGET and CURRENT.
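One simple way to realize this TARGET/CURRENT feedback is normalized correlation between the two patterns, as in the sketch below; the embodiment does not mandate this particular similarity measure.

```python
import numpy as np

# Illustrative sketch: feedback as the normalized correlation between the
# TARGET exhale pattern (taken in the desired state) and the CURRENT one.

def similarity(target: np.ndarray, current: np.ndarray) -> float:
    a = (target - target.mean()) / (target.std() + 1e-9)
    b = (current - current.mean()) / (current.std() + 1e-9)
    return float(np.mean(a * b))   # 1.0 = identical shape, 0 = unrelated

def feedback(target: np.ndarray, current: np.ndarray) -> str:
    return f"similarity to target breathing: {similarity(target, current):+.2f}"

t = np.linspace(0, 30, 240)
target = np.sin(2 * np.pi * 0.1 * t)     # slow, relaxed breathing
current = np.sin(2 * np.pi * 0.25 * t)   # faster breathing
print(feedback(target, current))
```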
[0368] Optionally, further comprising calculating target smoothness of the exhale stream based on TARGET and calculating current smoothness of the exhale stream based on CURRENT; wherein the feedback is indicative of similarity between the target smoothness and the current smoothness.
[0369] Optionally, further comprising calculating target shape of the exhale stream (SHAPE) based on TARGET and calculating current SHAPE based on CURRENT; wherein the feedback is indicative of similarity between the current SHAPE and the target SHAPE.
[0370] Optionally, further comprising calculating target breathing rate variability (BRV) based on TARGET and calculating current BRV based on CURRENT; wherein BRV is indicative of variations between consecutive breaths, and the feedback is indicative of similarity between the current BRV and the target BRV.
[0371] In one embodiment, a system configured to select a state of a user, comprising:
[0372] at least one inward-facing head-mounted thermal camera (CAM) configured to take thermal measurements of at least three regions below the nostrils (THs); wherein THs are indicative of shape of the exhale stream (SHAPE); and
[0373] a computer configured to:
[0374] generate feature values based on THs, whereby the feature values are indicative of the SHAPE; and
[0375] utilize a model to select the state of the user, from among potential states of the user, based on the feature values.
[0376] Optionally, for the same breathing rate, the computer is configured to select different states when THs are indicative of different shapes of the exhale stream (SHAPEs) that correspond to different potential states.
[0377] Optionally, the different potential states comprise different emotional states.
[0378] Optionally, the different potential states comprise at least one of: (i) different physiological responses, and (ii) healthy and unhealthy states.
[0379] Optionally, the model was trained based on: previous THs taken while the user was in a first potential state from among the potential states, and other previous THs taken while the user was in a second potential state from among the potential states.
[0380] Optionally, the model was trained based on: previous THs taken from users while the users were in a first potential state from among the potential states, and other previous THs taken while the users
were in a second potential state from among the potential states.
[0381] Optionally, for the same breathing rate, respiration volume, and dominant nostril, the computer is configured to select different states when THs are indicative of different SHAPEs that correspond to different potential states.
[0382] Optionally, the at least one CAM comprises (i) at least three vertical sensing elements pointed at different vertical positions below the nostrils where the exhale stream is expected to flow, and (ii) at least three horizontal sensing elements pointed at different horizontal positions below the nostrils where the exhale stream is expected to flow; wherein the larger the number of the vertical sensing elements that detect the exhale stream, the longer the length of the exhale stream, and the larger the number of the horizontal sensing elements that detect the exhale stream, the wider the exhale stream.
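The geometry described above lends itself to a simple counting estimate, sketched below with an assumed element pitch and detection threshold:

```python
# Illustrative sketch: counting how many vertical and horizontal sensing
# elements detect the exhale stream gives coarse estimates of its length
# and width. The element spacing and threshold are assumed values.

ELEMENT_SPACING_MM = 5.0      # assumed pitch between sensing elements
DETECT_THRESHOLD = 0.5        # assumed per-element detection threshold

def stream_extent(element_readings):
    """element_readings: per-element exhale intensities along one axis."""
    detected = sum(1 for r in element_readings if r > DETECT_THRESHOLD)
    return detected * ELEMENT_SPACING_MM

vertical = [0.9, 0.8, 0.6, 0.2]      # stream reaches 3 of 4 vertical elements
horizontal = [0.7, 0.9, 0.3]         # stream spans 2 of 3 horizontal elements
print(f"length ~ {stream_extent(vertical)} mm, width ~ {stream_extent(horizontal)} mm")
```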
[0383] Optionally, the at least three regions are located on (i) at least two vertical positions below the nostrils having a distance above 5 mm between their centers, and (ii) at least two horizontal positions below the nostrils having a distance above 5 mm between their centers.
[0384] Optionally, the at least three regions represent at least one of the following: (i) parameters of a 3D shape that confines the exhale stream, and THs are the parameters' values, (ii) locations corresponding to different lengths of the exhale stream, and (iii) locations corresponding to different angles that characterize directions of some of the different SHAPEs.
[0385] Optionally, for a series of durations, the computer is further configured to select respective shapes for the exhale stream, from among potential SHAPEs, based on the feature values; and wherein the potential states represent different series of shapes corresponding to the series of durations.
[0386] Optionally, the at least one CAM does not occlude any of the user's mouth and nostrils, and is located less than 15 cm from the user's face and above the user's upper lip.
[0387] Optionally, further comprising a frame configured to be worn on the user's head; wherein each CAM of the at least one CAM is located less than 15 cm from the user's face and does not occlude any of the user's mouth and nostrils; and wherein the at least one CAM comprises at least first and second inward-facing head-mounted thermal cameras (CAM1 and CAM2, respectively) configured to take THROI1 and THROI2, respectively; CAM1 is physically coupled to the right half of the frame and captures the exhale stream from the right nostril better than it captures the exhale stream from the left nostril, and CAM2 is physically coupled to the left half of the frame and captures the exhale stream from the left nostril better than it captures the exhale stream from the right nostril.
[0388] Optionally, the at least three regions below the nostrils include a first region on the right side of the user's upper lip, a second region on the left side of the user's upper lip, and a third region on the mouth of the user; and wherein thermal measurements of the third region are indicative of the exhale stream from the user's mouth.
[0389] Optionally, the at least three regions below the nostrils include a first region comprising a portion of the volume of the air below the right nostril where the exhale stream from the right nostril flows, a second region comprising a portion of the volume of the air below the left nostril where the
exhale stream from the left nostril flows, and a third region comprising a portion of a volume protruding out of the mouth where the exhale stream from the user's mouth flows.
[0390] In one embodiment, a system configured to present a user's state based on shape of the exhale stream (SHAPE), comprising:
[0391] at least one inward-facing head-mounted thermal camera (CAM) configured to take thermal measurements of at least three regions below the nostrils (THs) of the user; wherein THs are indicative of the SHAPE; and
[0392] a user interface (UI) configured to present the user's state based on THs; wherein, for the same breathing rate, the UI presents different states for the user when THs are indicative of different SHAPEs that correspond to different potential states.
[0393] Optionally, each of the at least one CAM does not occlude any of the user's mouth and nostrils.
[0394] Optionally, further comprising a computer configured to generate feature values based on THs, and to utilize a model to select the state, from among potential states, based on the feature values.
[0395] In one embodiment, a method for selecting a state of a user, comprising:
[0396] taking, using at least one inward-facing head-mounted thermal camera (CAM), thermal measurements of at least three regions below the nostrils (THs); wherein THs are indicative of shape of the exhale stream (SHAPE);
[0397] generating feature values based on THs; whereby the feature values are indicative of the SHAPE; and
[0398] utilizing a model for selecting the state of the user, from among potential states of the user, based on the feature values.
[0399] Optionally, further comprising selecting different states, for the same breathing rate, when THs are indicative of different SHAPEs that correspond to different potential states.
[0400] Optionally, further comprising training the model based on: previous THs taken while the user was in a first potential state from among the potential states, and other previous THs taken while the user was in a second potential state from among the potential states.
[0401] In one embodiment, a system configured to detect the shape of the exhale stream, comprising:
[0402] at least one inward-facing head-mounted thermal camera (CAM) configured to take thermal measurements of at least three regions below the nostrils (THs) of a user; wherein THs are indicative of the shape of the exhale stream (SHAPE); and
[0403] a computer configured to detect the SHAPE based on THs and a model; wherein the model was trained based on previous THs of the user.
[0404] Optionally, for the same breathing rate, the computer is configured to detect a first SHAPE based on a first set of THs, and to detect a second SHAPE based on a second set of THs; wherein the first and second sets of THs have different thermal patterns.
[0405] Optionally, for the same breathing rate, respiration volume and dominant nostril, the computer is configured to detect a first SHAPE based on a first set of THs, and to detect a second SHAPE based on a second set of THs; wherein the first and second sets of THs have different thermal patterns.
[0406] In one embodiment, a system configured to differentiate between normal and abnormal states, comprising:
[0407] at least one inward-facing head-mounted thermal camera (CAM) configured to take thermal measurements of at least first and second regions on the right side of the forehead (THR1 and THR2, respectively) of a user;
[0408] the at least one CAM is further configured to take thermal measurements of at least third and fourth regions on the left side of the forehead (THL1 and THL2, respectively);
[0409] wherein the middles of the first and third regions are at least 1 cm above the middles of the second and fourth regions, respectively; and wherein each CAM is located below the first and third regions, and does not occlude any of the first and third regions; and
[0410] a computer configured to determine, based on THR1, THR2, THL1, and THL2, whether the user is in a normal state or an abnormal state.
[0411] Optionally, the at least one CAM comprises at least first and second inward-facing head-mounted thermal cameras (CAM1 and CAM2, respectively) located to the right and to the left of the vertical symmetry axis that divides the user's face, respectively; CAM1 is configured to take THR1 and THR2, and CAM2 is configured to take THL1 and THL2.
[0412] Optionally, CAM1 and CAM2 do not occlude the second and fourth regions, respectively; each weighs below 10 g, is located less than 10 cm from the user's face, and comprises a microbolometer or thermopile sensor with at least 6 sensing elements.
[0413] Optionally, CAM1 comprises at least two multi-pixel thermal cameras, one for taking measurements of the first region, and another one for taking measurements of the second region; and wherein CAM2 also comprises at least two multi-pixel thermal cameras, one for taking measurements of the third region, and another one for taking measurements of the fourth region.
[0414] Optionally, the at least one CAM comprises a multi-pixel sensor and a lens; wherein the sensor plane is tilted by more than 2° relative to the lens plane according to the Scheimpflug principle in order to capture sharper images when the at least one CAM is worn by the user.
[0415] Optionally, the computer is configured to utilize a model to determine whether the user is in the normal state or the abnormal state, and the model was trained based on a first set of previous thermal measurements taken while the user was indoors and in the normal state, a second set of previous thermal measurements taken while the user was indoors and in the abnormal state, a third set of previous thermal measurements taken while the user was outdoors and in the normal state, and a fourth set of previous thermal measurements taken while the user was outdoors and in the abnormal state.
[0416] Optionally, the computer is configured to utilize a model to determine whether the user is in the normal state or the abnormal state, and the model was trained based on a first set of previous thermal measurements taken while the user was sitting and in the normal state, a second set of previous thermal measurements taken while the user was sitting and in the abnormal state, a third set of previous thermal measurements taken while the user was in the normal state and was at least one of standing and moving around, and a fourth set of previous thermal measurements taken while the user was in the abnormal state and was at least one of standing and moving around.
[0417] Optionally, the abnormal state involves the user displaying symptoms of one or more of the following: an anger attack, Attention Deficit Disorder (ADD), and Attention Deficit Hyperactivity Disorder (ADHD); and wherein the normal state refers to a usual behavior of the user that does not involve displaying said symptoms.
[0418] Optionally, when the user is in the abnormal state, the user will display within a predetermined duration shorter than an hour, with a probability above a predetermined threshold, symptoms of one or more of the following: anger, Attention Deficit Disorder (ADD), and Attention Deficit Hyperactivity Disorder (ADHD); and wherein when the user is in the normal state, the user will display the symptoms within the predetermined duration with a probability below the predetermined threshold.
[0419] Optionally, when the user is in the abnormal state the user suffers from a headache, and when the user is in the normal state the user does not suffer from a headache.
[0420] Optionally, the abnormal state refers to times during which the user has a higher level of concentration compared to the normal state that refers to times during which the user has a usual level of concentration.
[0421] Optionally, further comprising at least one additional inward-facing head-mounted thermal camera configured to take thermal measurements of regions on the nose and below the nostrils (THROI3 and THROI4, respectively); wherein THROI4 is indicative of the user's breathing, and the computer is configured to determine the state of the user also based on THROI3 and THROI4.
[0422] Optionally, further comprising at least one additional inward-facing head-mounted thermal camera configured to take thermal measurements of regions on the nose and below the nostrils (THROI3 and THROI4, respectively); wherein the computer is further configured to: (i) generate feature values based on THR1, THR2, THL1, THL2, THROI3, and THROI4, and (ii) utilize a model to determine the user's state based on the feature values; wherein the model was trained based on a first set of previous THR1, THR2, THL1, THL2, THROI3, and THROI4 taken while the user was in the normal state and a second set of previous THR1, THR2, THL1, THL2, THROI3, and THROI4 taken while the user was in the abnormal state.
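To illustrate the feature-and-model step, the sketch below trains a toy logistic-regression classifier on assumed features (per-signal means, right-left asymmetry, and breathing variability); both the classifier and the feature choices are stand-ins for whatever model the embodiment actually trains, and the synthetic data is for demonstration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch: binary normal/abnormal classification over feature
# values from the four forehead signals and the two respiratory signals.

def feature_values(thr1, thr2, thl1, thl2, throi3, throi4):
    signals = [thr1, thr2, thl1, thl2, throi3, throi4]
    means = [float(np.mean(s)) for s in signals]
    asymmetry = (np.mean(thr1) + np.mean(thr2)) - (np.mean(thl1) + np.mean(thl2))
    return means + [float(asymmetry), float(np.std(throi4))]

# Synthetic training sets standing in for previous measurements.
rng = np.random.default_rng(1)
normal = [feature_values(*rng.normal(33, 0.2, (6, 100))) for _ in range(20)]
abnormal = [feature_values(*rng.normal(34, 0.5, (6, 100))) for _ in range(20)]
X = np.array(normal + abnormal)
y = np.array([0] * 20 + [1] * 20)
model = LogisticRegression(max_iter=1000).fit(X, y)
print("abnormal" if model.predict(X[-1:])[0] else "normal")
```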
[0423] Optionally, further comprising an additional inward-facing head-mounted thermal camera configured to take thermal measurements of a region on the periorbital area (THROI5); wherein the computer is configured to determine the state of the user also based on THROI5.
[0424] Optionally, further comprising an additional inward-facing head-mounted thermal camera configured to take thermal measurements of a region on the periorbital area (THROI5); wherein the computer is further configured to: (i) generate feature values based on THR1, THR2, THL1, THL2, and THROI5, and (ii) utilize a model to determine the user's state based on the feature values; wherein the model was trained based on a first set of previous THR1, THR2, THL1, THL2, and THROI5 taken while the user was in the normal state and a second set of previous THR1, THR2, THL1, THL2, and THROI5 taken while the user was in the abnormal state.
[0425] Optionally, responsive to determining that the user is in the normal state, the computer is further configured to prioritize a first activity over a second activity, and responsive to determining that the user is in the abnormal state, the computer is further configured to prioritize the second activity over the first activity; wherein accomplishing each of the first and second activities requires at least a minute of the user's attention, and the second activity is more suitable for the abnormal state than the first activity.
[0426] Optionally, prioritizing the first and second activities is performed by at least one of the following programs: a calendar management program, a project management program, and a to-do list program.
[0427] Optionally, the normal state refers to a normal concentration level, the abnormal state refers to a lower than normal concentration level, and the first activity requires a higher attention level from the user compared to the second activity.
[0428] Optionally, the normal state refers to a normal anger level, the abnormal state refers to a higher than normal anger level, and the first activity involves more interactions of the user with other humans compared to the second activity.
[0429] Optionally, the normal state refers to a normal fear level, the abnormal state refers to a panic attack, and the second activity is expected to have a more relaxing effect on the user compared to the first activity.
[0430] Optionally, further comprising a sensor configured to provide an indication indicative of whether the user touches the forehead, whereby the touch is expected to influence thermal readings from the touched area; wherein the computer is configured to continue to operate, for a predetermined duration, according to a state identified shortly before receiving the indication, even if it identifies a different state shortly after receiving the indication.
[0431] In one embodiment, a system configured to alert about an abnormal state, comprising:
[0432] at least one inward-facing head-mounted thermal camera (CAM) configured to take thermal measurements of at least first and second regions on the right side of the forehead (THR1 and THR2, respectively) of a user;
[0433] the at least one CAM is further configured to take thermal measurements of at least third and fourth regions on the left side of the forehead (THL1 and THL2, respectively);
[0434] wherein the middles of the first and third regions are at least 1 cm above the middles of the second and fourth regions, respectively; and wherein each CAM is located below the first and third regions, and does not occlude any of the first and third regions; and
[0435] a user interface configured to provide an alert about an abnormal state of the user; wherein the abnormal state is determined based on THR1, THR2, THL1, and THL2.
[0436] Optionally, the at least one CAM comprises at least first and second inward-facing head-mounted thermal cameras (CAM1 and CAM2, respectively) located to the right and to the left of the vertical symmetry axis that divides the user's face, respectively, and less than 10 cm from the user's face; CAM1 is configured to take THR1 and THR2, and CAM2 is configured to take THL1 and THL2.
[0437] Optionally, further comprising a computer configured to utilize a model to determine whether the user is in the normal state or the abnormal state, and the model was trained based on a first set of previous thermal measurements taken while the user was sitting and in the normal state, a second set of previous thermal measurements taken while the user was sitting and in the abnormal state, a third set of previous thermal measurements taken while the user was in the normal state and was at least one of standing and moving around, and a fourth set of previous thermal measurements taken while the user was in the abnormal state and was at least one of standing and moving around.
[0438] Optionally, the abnormal state involves the user displaying symptoms of one or more of the following: an anger attack, Attention Deficit Disorder (ADD), and Attention Deficit Hyperactivity Disorder (ADHD); and wherein the normal state refers to a usual behavior of the user that does not involve displaying said symptoms.
[0439] In one embodiment, a method for alerting about an abnormal state, comprising:
[0440] taking thermal measurements of at least first and second regions on the right side of the forehead (THR1 and THR2, respectively) of a user, and thermal measurements of at least third and fourth regions on the left side of the forehead (THL1 and THL2, respectively) of the user; wherein the middles of the first and third regions are at least
1 cm above the middles of the second and fourth regions, respectively;
[0441] generating feature values based on THR1, THR2, THL1, and THL2;
[0442] utilizing a model to detect a state of the user based on the feature values; wherein the model was trained based on (i) previous feature values taken while the user was in a normal state, and (ii) other previous feature values taken while the user was in the abnormal state; and
[0443] responsive to detecting the abnormal state, alerting thereof.
[0444] In one embodiment, a system configured to provide a neurofeedback session, comprising:
[0445] an inward-facing head-mounted thermal camera (CAM) configured to take thermal measurements of a region on the forehead (THF) of a user; wherein CAM is located below the middle of the region; and
[0446] a user interface configured to provide, based on THF, a neurofeedback session for the user.
[0447] Optionally, CAM comprises a multi-pixel sensor and a lens, and the sensor plane is tilted by more than 2° relative to the lens plane according to the Scheimpflug principle in order to capture sharper images when CAM is worn by the user.
[0448] Optionally, THF comprises measurements of at least four non-collinear regions on the forehead, and further comprising a computer configured to control the neurofeedback session by providing the user a feedback via the user interface; wherein the computer is further configured to: calculate a value indicative of similarity between a current THF pattern and a previous THF pattern of the user taken while the user was in a target state, and generate the feedback based on the similarity.
[0449] Optionally, while the user was in the target state, one or more of the following were true: the user was healthier compared to the state of the user while THF were taken (the present state), the user was
more relaxed compared to the present state, a stress level of the user was below a threshold, the user's pain level was below a threshold, the user had no headache, the user did not suffer from depression, and the user was more concentrated compared to the present state.
[0450] Optionally, the computer is further configured to receive an indication of a period during which the user was in the target state based on at least one of the following: a report made by the user, measurements of the user with a sensor other than CAM, semantic analysis of text written by the user, and analysis of the user's speech; and wherein the previous THF pattern is based on THF taken during the period.
[0451] Optionally, the system does not occlude the middle of the region on the user's forehead, CAM is located less than 15 cm from the user's face, and further comprising a wearable sensor and a computer; the wearable sensor is configured to take measurements (mconf) indicative of at least one of the following confounding factors: touching the forehead, thermal radiation directed at the forehead, and direct airflow on the forehead; whereby a confounding factor causes a change in THF that is not due to brain activity; and the computer is configured to control the neurofeedback session based on THF and mconf.
[0452] Optionally, on average, neurofeedback sessions controlled based on THF and mconf provide better results than neurofeedback sessions controlled based on THF without mconf.
[0453] Optionally, the computer is further configured to generate feature values based on sets of THF and mconf, and to utilize a machine learning-based model to detect, based on the feature values, whether a change in THF occurred responsive to brain activity or a confounding factor.
[0454] Optionally, the computer is further configured to identify that at least one of the confounding factors reached a threshold, and command the user interface to alert the user that the neurofeedback session is less accurate due to the at least one of the confounding factors reaching the threshold.
[0455] Optionally, the computer is further configured to identify that at least one of the confounding factors reached a threshold, and refrain from updating the feedback provided to the user as part of the neurofeedback session for at least one of: 0.2 sec from reaching the threshold, and until the at least one of the confounding factors gets below the threshold.
[0456] Optionally, further comprising a frame configured to be worn on the user's head; wherein CAM is physically coupled to the frame; and further comprising an additional sensor configured to detect at least one of the following additional confounding factors: a movement of the frame relative to the head while the frame is still worn, a change in the user's position, a change in the user's body temperature, a change in environmental temperature, and a change in environmental humidity; wherein the computer is further configured to control the neurofeedback session also based on the at least one of the additional confounding factors.
[0457] Optionally, THF comprises measurements of at least six areas on the user's forehead, and the computer is further configured to: utilize THF to calculate a weighted temperature variability across the region on the forehead, and provide to the user feedback indicative of increasing positive progress as the weighted temperature variability decreases.
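A minimal sketch of one possible weighted temperature variability score over the forehead areas, assuming uniform weights and a combination of per-area and across-area variability (the embodiment allows any weighting):

```python
import numpy as np

# Illustrative sketch: a weighted variability score across N forehead
# areas; feedback would indicate positive progress as it decreases.

def weighted_temperature_variability(area_temps, weights=None):
    """area_temps: array of shape (n_areas, n_samples) of recent THF readings."""
    area_temps = np.asarray(area_temps)
    if weights is None:
        weights = np.ones(area_temps.shape[0]) / area_temps.shape[0]
    per_area_var = area_temps.var(axis=1)       # variability within each area
    spatial_var = area_temps.mean(axis=1).var() # variability across areas
    return float(weights @ per_area_var + spatial_var)

calm = np.random.default_rng(2).normal(33.0, 0.05, (6, 50))
print(f"variability score: {weighted_temperature_variability(calm):.4f}")
```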
[0458] Optionally, at least one of the at least six areas covers the Fpz point, which is located at around 10% of the distance from the nasion to the inion; and the neurofeedback session is intended to treat at least one of: anger, headache, depression, and anxiety.
[0459] Optionally, the computer is further configured to terminate the neurofeedback session when the temperature variability decreases below a first threshold or when the temperature variability increases above a second threshold.
[0460] Optionally, THF comprises measurements of at least six areas on the user's forehead, and the computer is further configured to: utilize previous THF to learn a typical THF pattern for the user, provide feedback indicative of increasing positive progress when a THF pattern measured during the session becomes similar to the typical THF pattern, and provide feedback indicative of decreasing positive progress when the THF pattern measured during the session becomes less similar to the typical THF pattern.
[0461] Optionally, further comprising: a second inward-facing head-mounted thermal camera configured to take thermal measurements of a region below the nostrils (THN), which is indicative of the user's breathing; and a computer configured to control the neurofeedback session based on THF and THN.
[0462] Optionally, further comprising: a second inward-facing head-mounted thermal camera configured to take thermal measurements of a region below the nostrils (THN), which is indicative of the user's breathing; and a computer configured to calculate the user's breathing rate based on THN, and to guide the user to breathe at the user's resonant frequency, which maximizes the amplitude of respiratory sinus arrhythmia and is in the range of 4.5 to 7.0 breaths/min.
[0463] Optionally, further comprising: second and third inward-facing head-mounted thermal cameras, configured to take thermal measurements of regions on a periorbital area and the nose (THROI2 and THROI3, respectively); and a computer configured to control the neurofeedback session based on THF, THROI2, and THROI3.
[0464] Optionally, CAM is located to the right of the vertical symmetry axis that divides the user's face and the region is on the right side of the forehead; and further comprising: a second inward-facing head-mounted thermal camera, located to the left of the vertical symmetry axis, and configured to take thermal measurements of a second region on the left side of the forehead (THF2); and a computer configured to provide to the user feedback indicative of increasing positive progress as the temperature asymmetry between THF and THF2 decreases.
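The asymmetry-based feedback might be realized as in the sketch below, where asymmetry is taken, as an assumption for the example, to be the absolute difference of mean temperatures on the two sides:

```python
import numpy as np

# Illustrative sketch: neurofeedback signal that rewards decreasing
# right-left forehead temperature asymmetry.

def asymmetry(thf_right: np.ndarray, thf_left: np.ndarray) -> float:
    return abs(float(np.mean(thf_right) - np.mean(thf_left)))

def progress_feedback(prev_asym: float, thf_right, thf_left):
    cur = asymmetry(thf_right, thf_left)
    trend = "positive progress" if cur < prev_asym else "less progress"
    return cur, trend

right = np.array([33.4, 33.5, 33.5])
left = np.array([33.1, 33.2, 33.1])
cur, trend = progress_feedback(prev_asym=0.5, thf_right=right, thf_left=left)
print(f"asymmetry={cur:.2f} C: {trend}")
```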
[0465] Optionally, the computer is further configured to terminate the neurofeedback session when the temperature asymmetry decreases below a first threshold or when the temperature asymmetry increases above a second threshold.
[0466] Optionally, CAM is physically coupled to a frame configured to be worn on the user's head; and further comprising a clip-on configured to be attached to and detached from the frame multiple times; wherein the clip-on comprises a cover configured to occlude the region on the user's forehead when the clip-on is attached to the frame.
[0467] Optionally, CAM is used for both (i) detecting a thermal pattern of the forehead that indicates a necessity of a neurofeedback session, while the clip-on does not cover the region on the forehead, and (ii) taking THF required for the neurofeedback session while the clip-on covers the region on the forehead.
[0468] In one embodiment, a method for conducting a neurofeedback session, comprising:
[0469] taking thermal measurements of a region on the forehead (THF) of a user using an inward-facing head-mounted thermal camera that is located below the middle of the region and less than 10 cm from the user's head; and
[0470] taking measurements (mconf) indicative of at least one of the following confounding factors: touching the forehead, thermal radiation directed at the forehead, and direct airflow on the forehead; whereby a confounding factor causes a change in THF that is not due to brain activity; and
[0471] conducting a neurofeedback session for the user based on THF and mconf.
[0472] Optionally, further comprising generating feature values based on THF and mconf, and utilizing a model to control the neurofeedback session based on the feature values; wherein the model was trained on samples generated based on (i) previous THF and mconf of the user, and (ii) previous THF and mconf of other users.