WO2020121425A1 - State determination device, state determination method, and state determination program - Google Patents
State determination device, state determination method, and state determination program
- Publication number
- WO2020121425A1 (PCT/JP2018/045595)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- user
- feature amount
- area
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/1126—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb using a particular sensing technique
- A61B5/1128—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb using a particular sensing technique using image analysis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/1104—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb induced by stimuli or drugs
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/1123—Discriminating type of movement, e.g. walking or running
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/163—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/18—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state for vehicle drivers or machine operators
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4806—Sleep evaluation
- A61B5/4809—Sleep detection, i.e. determining whether a subject is asleep or not
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4845—Toxicology, e.g. by detection of alcohol, drug or toxic products
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Definitions
- the present invention relates to a state determination device, a state determination method, and a state determination program.
- a person may be non-wakeful.
- the non-wakeful state is a state of dozing, a state of falling asleep by drinking alcohol, and the like.
- development of a technique for determining a non-awakening state of a person in a vehicle or a factory is in progress. For example, a feature amount that changes in a non-awakening state is extracted from heartbeat, brain waves, blinks, and the like. Next, the feature amount and the threshold value are compared. The non-awakening state is determined from the comparison result. In this way, the non-awakening state can be determined from biological signals such as heartbeat.
- in this determination method, however, a sensor is attached to the person, which the person finds burdensome. Further, the method cannot be used when the person is not wearing the sensor. Furthermore, the method is costly because it uses a sensor.
- for example, a technology for detecting a dozing state has been proposed (see Patent Document 1).
- the doze driving detection device of Patent Document 1 detects a driver's dozing state using the blink frequency.
- a technique for determining a driver's drowsiness state has been proposed (see Patent Document 2).
- the drowsiness determination device of Patent Document 2 determines the drowsiness state using the blink frequency.
- Patent Documents 1 and 2 use blink frequency.
- the blink frequency varies greatly among individuals. For example, in the case of a person who has a very high blink frequency during awakening, it is difficult to determine whether or not the person is in a non-wakeful state using the techniques of Patent Documents 1 and 2. Therefore, how to determine the non-awakening state with high accuracy is a problem.
- the purpose of the present invention is to determine a non-awakening state with high accuracy.
- a state determination device includes: an extraction unit that extracts, from each of a plurality of frames sequentially acquired by photographing the face of a user, a face area indicating the area of the face, extracts face feature points indicating parts of the face from the face area, calculates, based on the face feature points, a face feature amount extraction region that is a region in the face area in which a change occurs when the user is in a non-wakeful state, and extracts a face feature amount, which is a feature amount, from the face feature amount extraction region; a state determination unit that determines whether or not the user is in the non-wakeful state based on the face feature amount in each of the plurality of frames and determination information created in advance; and an output unit that outputs a determination result.
- the non-wakefulness can be determined with high accuracy.
- FIG. 1 is a diagram showing a state determination device according to the first embodiment.
- FIG. 2 is a diagram showing a hardware configuration of the state determination device according to the first embodiment.
- FIG. 3 is a diagram showing an example of a facial feature point table according to the first embodiment.
- FIGS. 4(A) and 4(B) are diagrams showing examples of calculation of a facial feature amount extraction area.
- FIG. 5 is a diagram showing an example of a facial feature amount table according to the first embodiment.
- FIG. 6 is a diagram showing an example of a state determination model table according to the first embodiment.
- FIG. 7 is a diagram for explaining a method of calculating the number of times wrinkles are drawn between the eyebrows in the first embodiment.
- FIG. 8 is a diagram showing a specific example of the non-awakening level of the first embodiment.
- FIG. 9 is a diagram showing an example of a determination result table according to the first embodiment.
- FIG. 10 is a flowchart showing a process of calculating a facial feature amount extraction area according to the first embodiment.
- FIG. 11 is a flowchart showing a facial feature amount extraction process according to the first embodiment.
- FIG. 12 is a flowchart showing a count process of the first embodiment.
- FIG. 13 is a flowchart showing a non-wakefulness determination process according to the first embodiment.
- FIG. 14 is a functional block diagram showing the configuration of the state determination device of the second embodiment.
- FIG. 15 is a diagram showing an example of an average facial feature point model table according to the second embodiment.
- FIGS. 16(A) to 16(C) are diagrams showing an example of a face condition table.
- FIG. 17 is a diagram showing an example of an extraction region determination model table according to the second embodiment.
- FIG. 18 is a flowchart showing a face condition determination process according to the second embodiment.
- FIG. 19 is a flowchart showing a process of determining a facial feature amount extraction area according to the second embodiment.
- FIG. 1 is a diagram showing a state determination device according to the first embodiment.
- the state determination device 100 is a device that executes a state determination method.
- the state determination device 100 determines a non-wakeful state.
- the non-wakeful state is a state of dozing, a state of falling asleep by drinking alcohol, and the like.
- the non-wakeful state includes a state in which the consciousness is dull.
- a state in which the user is dimly conscious is a state in which the user is drowsy, a state in which the user is drunk by drinking alcohol, and the like.
- the non-wakeful state includes a case where the user temporarily changes from the dozing state to the awakening state and then becomes the dozing state again.
- FIG. 2 is a diagram showing a hardware configuration of the state determination device according to the first embodiment.
- the state determination device 100 includes a processor 101, a volatile storage device 102, a non-volatile storage device 103, a camera 104, and a display 105.
- the processor 101 controls the entire state determination device 100.
- the processor 101 is a CPU (Central Processing Unit), an FPGA (Field Programmable Gate Array), or the like.
- the processor 101 may be a multiprocessor.
- the state determination device 100 may be realized by a processing circuit, or may be realized by software, firmware, or a combination thereof.
- the processing circuit may be a single circuit or a composite circuit.
- the volatile storage device 102 is a main storage device of the state determination device 100.
- the volatile storage device 102 is a RAM (Random Access Memory).
- the non-volatile storage device 103 is an auxiliary storage device of the state determination device 100.
- the non-volatile storage device 103 is an SSD (Solid State Drive).
- the camera 104 is a device that images a face.
- the camera 104 is also referred to as an image pickup device.
- the display 105 is a device that displays information.
- the display 105 is also referred to as a display device.
- a configuration in which the state determination device 100 does not include the camera 104 and the display 105 may be regarded as an information processing device.
- the state determination device 100 includes an acquisition unit 110, an extraction unit 10, a state determination unit 160, and an output unit 170.
- the extraction unit 10 includes a face area extraction unit 120, a face feature point extraction unit 130, a face feature amount extraction area calculation unit 140, and a face feature amount extraction unit 150.
- the state determination device 100 also includes a face feature point storage unit 180, a face feature amount storage unit 181, a state determination model storage unit 182, and a determination result storage unit 183.
- part or all of the extraction unit 10, the acquisition unit 110, the face area extraction unit 120, the face feature point extraction unit 130, the face feature amount extraction area calculation unit 140, the face feature amount extraction unit 150, the state determination unit 160, and the output unit 170 may be implemented by the processor 101.
- the program executed by the processor 101 is also called a state determination program.
- the state determination program is stored in a recording medium such as the volatile storage device 102 and the non-volatile storage device 103.
- the face feature point storage unit 180, the face feature amount storage unit 181, the state determination model storage unit 182, and the determination result storage unit 183 may be realized as storage areas secured in the volatile storage device 102 or the non-volatile storage device 103.
- the acquisition unit 110 acquires, from the camera 104, a plurality of frames that are sequentially acquired by capturing the user's face.
- the plurality of frames may be expressed as a moving image.
- the frame is an image.
- the plurality of frames may be expressed as a plurality of frames in which the face of the user is captured at different times.
- the extraction unit 10 extracts a face area from each of the plurality of frames, extracts face feature points from the face area, calculates a face feature amount extraction area based on the face feature points, and extracts a face feature amount from the face feature amount extraction area.
- the face area indicates the area of the face.
- the facial feature point indicates a face part.
- the face feature amount extraction region is a region in which a change occurs in the face region when the user is in the non-wakeful state.
- the facial feature amount is a feature amount extracted from the face feature amount extraction region.
- the face feature amount extraction area may also be expressed as an area in the face area in which a change occurs when the user is in the non-awakening state and in which there is no individual difference in the movement of the user's face when the user is in the non-awakening state. The processing executed by the extraction unit 10 will be described in detail with reference to the face area extraction unit 120, the face feature point extraction unit 130, the face feature amount extraction area calculation unit 140, and the face feature amount extraction unit 150.
- the face area extraction unit 120 extracts a face area from the moving image.
- the face area extraction unit 120 may be realized by using a classifier that uses Haar-like features by Adaboost learning.
- a method of extracting a face area is described in Non-Patent Document 1.
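- as a concrete illustration (not part of the patent text), the following minimal Python sketch shows how such a face area could be extracted with OpenCV's Haar-cascade detector, which is an AdaBoost-trained classifier over Haar-like features; the library choice and the detection parameters are assumptions.

```python
import cv2

# OpenCV ships a pre-trained frontal-face Haar cascade (Haar-like features,
# AdaBoost-trained); the detection parameters below are illustrative.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face_area(frame_bgr):
    """Return the bounding box (x, y, w, h) of the largest detected face, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda box: box[2] * box[3])  # assume the largest is the user
```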
- the face feature point extraction unit 130 extracts face feature points such as contours, eyebrows, eyes, nose, and mouth based on the face area.
- a method for extracting facial feature points is described in Non-Patent Document 2.
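- as an illustrative sketch only, the following Python code extracts facial feature points (contour, eyebrows, eyes, nose, mouth) with dlib's 68-point shape predictor; dlib and the model file name are assumptions, not the method of Non-Patent Document 2.

```python
import dlib

detector = dlib.get_frontal_face_detector()
# The model file name is an assumption; dlib's pre-trained 68-point predictor is
# commonly distributed under this name.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_face_feature_points(gray_image):
    """Return a list of (x, y) landmark coordinates for the first detected face."""
    faces = detector(gray_image)
    if len(faces) == 0:
        return []
    shape = predictor(gray_image, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```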
- the facial feature point extraction unit 130 stores the facial feature points in the facial feature point storage unit 180.
- the face feature point storage unit 180 will be described.
- the facial feature point storage unit 180 stores a facial feature point table.
- FIG. 3 is a diagram showing an example of the facial feature point table according to the first embodiment.
- the face feature point table 180a is stored in the face feature point storage unit 180.
- the face feature point table 180a has items of feature points and face orientations. Further, the face feature point table 180a has items of coordinates and angles. For example, in the face feature point table 180a, the coordinates at which the face feature point extracting unit 130 extracts the inner edge of the left eye are registered. The inner end of the left eye is also called the inner corner of the left eye.
- the facial feature point extraction unit 130 calculates the orientation of the face from the facial feature points.
- the face direction is represented by Yaw, Pitch, and Roll.
- the face feature point extraction unit 130 registers the face orientation in the face feature point table 180a.
- the facial feature amount extraction area calculation unit 140 calculates the facial feature value extraction area used for the determination of the non-awakening state.
- the user performs an act of resisting drowsiness as an action performed when the user is in the non-wakeful state or as an action that is a sign that the user is in the non-wakeful state.
- for example, the act of resisting drowsiness is an act in which the user intentionally closes the eyes.
- the act of resisting drowsiness may also be expressed as a strong blink in which the user intentionally closes the eyes tightly. When the user intentionally closes the eyes tightly, wrinkles occur between the eyebrows. Therefore, the face feature amount extraction area calculation unit 140 calculates the eyebrow region in the face area as a face feature amount extraction area.
- the user also licks the lips, for example to quench thirst, as an action performed when the user is in the non-wakeful state or as a sign of that state; therefore, the face feature amount extraction area calculation unit 140 calculates the mouth area in the face area as a face feature amount extraction area. Further, the user yawns as such an action. The mouth opens when the user yawns, and wrinkles occur on the cheeks. Therefore, the face feature amount extraction area calculation unit 140 calculates the mouth area and the cheek area in the face area as face feature amount extraction areas.
- FIGS. 4A and 4B are diagrams showing examples of calculation of the facial feature amount extraction area.
- FIG. 4A is an example of calculation of the area between the eyebrows.
- the facial feature amount extraction area calculation unit 140 identifies the left and right inner canthus from the facial feature points extracted by the facial feature point extraction unit 130.
- the facial feature amount extraction area calculation unit 140 calculates an intermediate point 200 between the left inner corner and the right inner corner of the eye.
- the facial feature amount extraction area calculation unit 140 calculates a rectangular area 201 (that is, an area of a[pixel] ⁇ a[pixel]) centered on the midpoint 200.
- the rectangular area 201 is an area between the eyebrows. In this way, the facial feature amount extraction area calculation unit 140 calculates the eyebrow area.
- the face feature amount extraction area calculation unit 140 multiplies each side of the rectangular area 201 by k based on the rectangular area of the face area extracted by the face area extraction unit 120. As a result, the facial feature amount extraction area calculation unit 140 can calculate a rectangular area of (a ⁇ k)[pixel] ⁇ (a ⁇ k)[pixel].
- the face feature amount extraction area calculation unit 140 may calculate the eyebrow area based on the face orientation and the intermediate point calculated by the face feature point extraction unit 130. This will be specifically described. For example, assume that the face is facing left. When the face is directed to the left, the center of the region where wrinkles between the eyebrows occur is at a position left of the midpoint between the left and right inner canthus. Therefore, the facial feature amount extraction area calculation unit 140 calculates, as the center coordinates of the eyebrow area, the coordinates obtained by parallel translation of the intermediate point to the left by l[pixel]. The facial feature amount extraction area calculation unit 140 calculates a rectangular area centered on the center coordinates.
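- the region calculation described above can be sketched as the following hypothetical helper; the base size a, the scale factor k, and the yaw shift are illustrative parameters whose values the text does not specify.

```python
def calc_square_region(left_point, right_point, a=20, k=1.0, yaw_shift=0):
    """Return (x0, y0, x1, y1) of a square face feature amount extraction region.

    left_point / right_point: the left and right inner canthi (for the eyebrow
    region) or the left and right mouth corners (for the mouth region).
    a: base side length in pixels (hypothetical value), k: scale factor derived
    from the size of the extracted face area, yaw_shift: horizontal correction in
    pixels for the face orientation (e.g. a shift to the left when the face is
    turned to the left)."""
    cx = (left_point[0] + right_point[0]) / 2 - yaw_shift
    cy = (left_point[1] + right_point[1]) / 2
    half = (a * k) / 2
    return (int(cx - half), int(cy - half), int(cx + half), int(cy + half))
```

- the mouth region of FIG. 4(B) and the cheek region described next can be derived with the same pattern, using the mouth-corner feature points instead of the inner canthi.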
- FIG. 4B is an example of calculating the mouth area.
- the facial feature amount extraction area calculation unit 140 identifies the right and left corners of the mouth from the facial feature points extracted by the facial feature point extraction unit 130.
- the facial feature amount extraction area calculation unit 140 calculates an intermediate point 210 between the left corner of the mouth and the right corner of the mouth.
- the facial feature amount extraction area calculation unit 140 calculates a rectangular area 211 (that is, an area of b[pixel] ⁇ b[pixel]) centered on the midpoint 210.
- the rectangular area 211 is a mouth area. In this way, the facial feature amount extraction area calculation unit 140 calculates the mouth area.
- the face feature amount extraction area calculation unit 140 multiplies each side of the rectangular area 211 by k based on the rectangular area of the face area extracted by the face area extraction unit 120. Thereby, the facial feature amount extraction area calculation unit 140 can calculate the rectangular area of (b ⁇ k)[pixel] ⁇ (b ⁇ k)[pixel]. Further, the facial feature amount extraction area calculation unit 140 may calculate the mouth area based on the face orientation and the facial feature points calculated by the facial feature point extraction unit 130. The calculation method is as described above. Similarly, the facial feature amount extraction area calculation unit 140 can calculate the cheek area.
- the facial feature amount extraction unit 150 extracts the facial feature amount based on the eyebrow region, the mouth region, and the cheek region.
- the facial feature amount is a HOG (Histograms of Oriented Gradients) feature amount.
- HOG feature amount is described in Non-Patent Document 3.
- the face feature amount may be other than the HOG feature amount.
- the facial feature amount may be a SIFT (Scale-Invariant Feature Transform) feature amount, a SURF (Speeded-Up Robust Features) feature amount, a Haar-like feature amount, or the like.
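- a minimal sketch of HOG feature extraction from one face feature amount extraction region is shown below, assuming scikit-image and OpenCV; the resize size and the HOG cell/block parameters are illustrative assumptions.

```python
import cv2
from skimage.feature import hog

def hog_feature(region_gray, size=(64, 64)):
    """Resize the extracted region and return its HOG feature vector.
    The resize size and the HOG parameters are illustrative, not from the patent."""
    resized = cv2.resize(region_gray, size)
    return hog(resized, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)
```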
- the face feature quantity extraction unit 150 stores the face feature quantity in the face feature quantity storage unit 181.
- the face feature amount storage unit 181 will be described.
- the face feature amount storage unit 181 stores a face feature amount table.
- FIG. 5 is a diagram showing an example of the facial feature amount table according to the first embodiment.
- the face feature amount table 181a is stored in the face feature amount storage unit 181.
- the face feature amount table 181a has items of feature amount and value.
- the face feature quantity extraction unit 150 registers the face feature quantity in the face feature quantity table 181a. That is, the face feature amount extraction unit 150 registers information indicating the face feature amount in the face feature amount item of the face feature amount table 181a. Then, the face feature quantity extraction unit 150 registers the value corresponding to the face feature quantity in the value item of the face feature quantity table 181a. Further, the facial feature amount table 181a registers the HOG feature amount corresponding to each of the n (n is an integer of 2 or more) frames acquired by the acquisition unit 110 at a predetermined time. Note that, for example, the predetermined time is 5 minutes.
- the state determination unit 160 determines whether or not the user is in a non-wakeful state based on the facial feature amount in each of the plurality of frames and determination information created in advance. In other words, the state determination unit 160 may be expressed as determining whether or not the user is in a non-wakeful state based on the facial feature amount corresponding to each of the plurality of frames and determination information stored in advance.
- the state determination unit 160 determines a non-wakeful state based on the HOG feature amount. Specifically, the state determination unit 160 determines the non-awakening state based on the number of times the user has performed the above three actions within a predetermined time. Note that the three actions are an action in which the user puts wrinkles between the eyebrows, an action in which the user licks his lips to quench his thirst, and an action in which the user yawns. Further, for example, the predetermined time is 5 minutes.
- the state determination model storage unit 182 stores information for determining a non-awakening state.
- the information is stored in advance in the state determination model storage unit 182 before the state determination device 100 executes the non-wakeful state determination. This information is called a state determination model table.
- the state determination model table will be described.
- FIG. 6 is a diagram showing an example of the state determination model table of the first embodiment.
- the state determination model table 182a is stored in the state determination model storage unit 182.
- the state determination model table 182a has items of the non-awakening level, the number of times wrinkles are drawn between the eyebrows in 5 minutes, the number of times the lips are licked in 5 minutes, and the number of yawns in 5 minutes.
- the state determination model table 182a is also referred to as determination information.
- the state determination model table 182a includes information for determining the non-awakening level according to the number of times the user has wrinkled between the eyebrows.
- the state determination model table 182a includes information for determining the non-awakening level according to the number of times the user has licked the lips.
- the state determination model table 182a includes information for determining a non-awakening level according to the number of times the user yawns.
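- as an illustration of how such determination information could be represented, the following hypothetical helper maps an action count observed within 5 minutes to a non-awakening level; the cut-off values and the five-level scale are invented placeholders, since the concrete values of the state determination model table 182a are not reproduced in this text.

```python
def level_from_count(count, cutoffs=(1, 3, 5, 8)):
    """Map an action count observed in 5 minutes (e.g. the number of times wrinkles
    were drawn between the eyebrows) to a non-awakening level. The cut-offs and the
    five-level scale are invented placeholders, not the values of table 182a."""
    level = 1
    for threshold in cutoffs:
        if count >= threshold:
            level += 1
    return level  # 1 (awake) .. 5 (strongly non-wakeful)
```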
- the state determination unit 160 calculates the cosine similarity Sn using equation (1).
- the average value Hm is an average value of the HOG feature amount extracted by the face feature amount extraction unit 150 for a plurality of frames in the high awakening state (that is, the normal state).
- the average value Hm is calculated in advance before the state determination device 100 executes the determination of the non-awakening state. Further, for example, the average value Hm is stored in the facial feature amount storage unit 181.
- the HOG feature amount Hn is a HOG feature amount corresponding to n frames acquired by the acquisition unit 110 in a predetermined time.
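- equation (1) is not reproduced in this text; assuming it is the standard cosine similarity Sn = (Hm · Hn) / (‖Hm‖ ‖Hn‖) between the average value Hm and the per-frame HOG feature amount Hn, a minimal NumPy sketch is:

```python
import numpy as np

def cosine_similarity(hm, hn):
    """Sn = (Hm . Hn) / (||Hm|| * ||Hn||); Hm is the average normal-state HOG
    feature amount, Hn the HOG feature amount of the current frame."""
    hm = np.asarray(hm, dtype=float)
    hn = np.asarray(hn, dtype=float)
    denom = np.linalg.norm(hm) * np.linalg.norm(hn)
    return float(np.dot(hm, hn) / denom) if denom > 0 else 0.0
```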
- FIG. 7 is a diagram for explaining a method of calculating the number of times wrinkles are drawn between the eyebrows in the first embodiment.
- the vertical axis of the graph in FIG. 7 indicates the cosine similarity Sn.
- the horizontal axis of the graph in FIG. 7 indicates time.
- when the cosine similarity Sn falls below a threshold value S, the state determination unit 160 determines that wrinkles have been drawn between the eyebrows. If the state determination unit 160 determines that wrinkles have been drawn between the eyebrows, it increments the number of times wrinkles have been drawn between the eyebrows.
- the method for determining that wrinkles have been drawn between the eyebrows and the method for determining that the lips have been licked are the same methods.
- the method of determining that wrinkles have been drawn between the eyebrows and the method of determining that a yawn has occurred are the same methods. Therefore, description of the method of determining that the lips have been licked and the method of determining that the yawns have been performed is omitted.
- the state determination unit 160 calculates the number of times the user has wrinkled between the eyebrows based on the facial feature amount extracted from the eyebrow area in each of the plurality of frames.
- the state determination unit 160 calculates the number of times the user has licked the lips, based on the facial feature amount extracted from the mouth region in each of the plurality of frames.
- the state determination unit 160 calculates the number of times the user yawns, based on the facial feature amount extracted from the mouth region and the cheek region in each of the plurality of frames.
- the state determination unit 160 determines the non-awakening level based on the number of wrinkles between the eyebrows and the state determination model table 182a. The state determination unit 160 determines the non-awakening level based on the number of times the lips have been licked and the state determination model table 182a. The state determination unit 160 determines the non-awakening level based on the number of yawns and the state determination model table 182a.
- FIG. 8 is a diagram showing a specific example of the non-wake level in the first embodiment.
- FIG. 8 shows that the non-awakening level determined based on the number of times wrinkles are drawn between the eyebrows is level 2.
- FIG. 8 shows that the non-wake level determined based on the number of times the lips have been licked is level 4.
- FIG. 8 shows that the non-wake level determined based on the number of yawns is level 3.
- the state determination unit 160 stores the determination result in the determination result storage unit 183.
- the determination result is an average value.
- the determination result storage unit 183 will be described.
- the determination result storage unit 183 stores a determination result table.
- FIG. 9 is a diagram showing an example of the determination result table according to the first embodiment.
- the determination result table 183a is stored in the determination result storage unit 183.
- the determination result table 183a has an item of non-awakening level.
- the state determination unit 160 registers the calculated average value in the determination result table 183a.
- the state determination unit 160 may determine the non-wakeful state based on the facial feature amount and information obtained by machine learning using Random Forest, SVM (Support Vector Machine), AdaBoost, CNN (Convolutional Neural Network), or the like.
- this information is also referred to as determination information. That is, the determination information is information obtained by machine learning and is information for determining whether or not the user is in the non-wakeful state.
- the output unit 170 outputs the determination result.
- the output unit 170 will be described in detail.
- the output unit 170 outputs the non-wakefulness level registered in the determination result table 183a.
- the output unit 170 outputs the non-wakefulness level to the display 105.
- the output unit 170 may output the non-awakening level by voice.
- the non-wakefulness level registered in the determination result table 183a is also referred to as information indicating an average value.
- the output unit 170 may output that the user is in the non-wakeful state when the non-wakefulness level registered in the determination result table 183a is level 3 or higher.
- the output unit 170 may output that the user is not in the non-wakeful state when the non-wakefulness level registered in the determination result table 183a is level 2 or lower.
- FIG. 10 is a flowchart showing the calculation processing of the facial feature amount extraction area according to the first embodiment. It should be noted that FIG. 10 describes the calculation process of the region between the eyebrows. Further, FIG. 10 is an example of a process executed by the facial feature amount extraction area calculation unit 140.
- (Step S11) The facial feature amount extraction area calculation unit 140 acquires the coordinates of the left and right inner canthi and the face orientation from the facial feature point storage unit 180. (Step S12) The facial feature amount extraction area calculation unit 140 calculates an intermediate point between the left inner canthus and the right inner canthus.
- (Step S13) The face feature amount extraction area calculation unit 140 calculates center coordinates based on the midpoint and the face orientation. Specifically, the facial feature amount extraction area calculation unit 140 calculates the center coordinates, which are the coordinates obtained by translating the intermediate point. (Step S14) The facial feature amount extraction area calculation unit 140 calculates a rectangular area centered on the center coordinates. (Step S15) The face feature amount extraction area calculation unit 140 acquires the rectangular area of the face area extracted by the face area extraction unit 120.
- (Step S16) The facial feature amount extraction area calculation unit 140 changes the size of the rectangular area calculated in step S14 based on the rectangular area of the face area. For example, the face feature amount extraction area calculation unit 140 multiplies each side of the rectangular area by k based on the rectangular area of the face area. As a result, an eyebrow area corresponding to the size of the face is calculated.
- the facial feature amount extraction area calculation unit 140 can calculate the mouth area and the cheek area by the same processing as the above processing.
- FIG. 11 is a flowchart showing the facial feature amount extraction processing according to the first embodiment.
- the face feature amount extraction unit 150 acquires the three face feature amount extraction regions calculated by the face feature amount extraction region calculation unit 140. That is, the three facial feature amount extraction areas are the eyebrow area, the mouth area, and the cheek area.
- the face feature amount extraction unit 150 extracts the HOG feature amount based on the eyebrow area. Further, the face feature amount extraction unit 150 extracts the HOG feature amount based on the mouth region. Further, the face feature amount extraction unit 150 extracts the HOG feature amount based on the cheek region.
- the face feature amount extraction unit 150 stores the HOG feature amount extracted based on the eyebrow area in the face feature amount storage unit 181. In addition, the face feature amount extraction unit 150 stores the HOG feature amount extracted based on the mouth region in the face feature amount storage unit 181. Further, the face feature amount extraction unit 150 stores the HOG feature amount extracted based on the cheek region in the face feature amount storage unit 181. As a result, each of the HOG feature amounts extracted based on each of the three face feature amount extraction regions is registered in the face feature amount table 181a.
- FIG. 12 is a flowchart showing the counting process of the first embodiment.
- (Step S31) The state determination unit 160 acquires, from the face feature amount storage unit 181, the HOG feature amount extracted based on the eyebrow region extracted from one frame (for example, the first frame).
- (Step S32) The state determination unit 160 calculates the cosine similarity Sn using equation (1).
- Step S33 The state determination unit 160 determines whether the cosine similarity Sn calculated in step S32 is smaller than the threshold value S. For example, the case where the cosine similarity Sn calculated in step S32 is smaller than the threshold value S is a case where the wrinkle edge between the eyebrows appears strongly. That is, the case where the cosine similarity Sn calculated in step S32 is smaller than the threshold value S is the case where the user has wrinkles between the eyebrows.
- the state determination unit 160 advances the process to step S34. If the determination condition is not satisfied, the state determination unit 160 advances the process to step S35.
- Step S34 The state determination unit 160 increments the number of wrinkles between the eyebrows.
- (Step S35) The state determination unit 160 determines whether 5 minutes have elapsed since the start of the counting process. The state determination unit 160 ends the process when 5 minutes have passed. If 5 minutes have not elapsed, the state determination unit 160 advances the process to step S31. Note that, in this case, in step S31 the state determination unit 160 acquires, from the face feature amount storage unit 181, the HOG feature amount extracted based on the eyebrow region extracted from the second frame, which the acquisition unit 110 acquired after the first frame.
- the state determination unit 160 acquires each of the extracted HOG feature amounts based on the eyebrow region in the n frames acquired by the acquisition unit 110 in 5 minutes.
- the state determination unit 160 determines whether or not wrinkles have been drawn between the eyebrows based on each of the HOG feature amounts. Thereby, the state determination unit 160 can acquire the number of times the user has wrinkled between the eyebrows in 5 minutes.
- the number of times the lips have been licked is acquired by the same method as the counting process shown in FIG. 12. For example, the eyebrow region described in step S31 is replaced with the mouth region. Thereby, the state determination unit 160 can acquire the number of times the user has licked the lips in 5 minutes.
- the number of yawns is acquired by the same method as the counting process shown in FIG. 12. For example, the eyebrow region described in step S31 is replaced with the mouth region and the cheek region. Further, for example, in step S33, when both the cosine similarity Sn calculated based on the HOG feature amount corresponding to the mouth region and the cosine similarity Sn calculated based on the HOG feature amount corresponding to the cheek region are smaller than preset threshold values, the state determination unit 160 advances the process to step S34. Thereby, the state determination unit 160 can acquire the number of times the user yawned in 5 minutes. The above 5 minutes is an arbitrary time; a time other than 5 minutes may be used.
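- a minimal Python sketch of the counting process of FIG. 12 is shown below, under the assumptions of the earlier sketches; the threshold S is not specified in this text, and consecutive frames below the threshold are counted individually, following a literal reading of the flowchart.

```python
import numpy as np

def count_eyebrow_wrinkles(hog_eyebrow_per_frame, hm_eyebrow, threshold_s=0.9):
    """Count the frames, within the 5-minute window, whose eyebrow-region HOG
    feature deviates from the normal-state average Hm (steps S31 to S35).
    threshold_s stands for the threshold S; its actual value is not given here."""
    hm = np.asarray(hm_eyebrow, dtype=float)
    count = 0
    for hn in hog_eyebrow_per_frame:                                     # step S31
        hn = np.asarray(hn, dtype=float)
        sn = np.dot(hm, hn) / (np.linalg.norm(hm) * np.linalg.norm(hn))  # step S32
        if sn < threshold_s:            # step S33: wrinkle edges appear strongly
            count += 1                  # step S34
    return count
```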
- FIG. 13 is a flowchart showing the non-wakefulness determination process according to the first embodiment.
- (Step S41) The state determination unit 160 determines the non-wake level based on the state determination model table 182a and the number of wrinkles between the eyebrows.
- the state determination unit 160 determines the non-awakening level based on the state determination model table 182a and the number of times the lips have been licked.
- the state determination unit 160 determines the non-wakefulness level based on the state determination model table 182a and the number of yawns.
- Step S42 The state determination unit 160 calculates an average value based on the three non-awakening levels.
- the state determination unit 160 may round off the digits of the average value below the decimal point.
- (Step S43) The state determination unit 160 determines the non-wakeful state based on the average value. For example, the state determination unit 160 determines that the user is in the non-wakeful state when the average value is level 3 or higher.
- Step S44 The state determination unit 160 stores the determination result in the determination result storage unit 183.
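- the determination process of FIG. 13 can be sketched as the following hypothetical helper; it assumes the three non-awakening levels have already been determined from the counts and the state determination model table.

```python
def determine_non_wakefulness(wrinkle_level, lick_level, yawn_level, threshold_level=3):
    """Average the three non-awakening levels (step S42), round the average, and
    judge the non-wakeful state when it reaches the threshold level (step S43)."""
    average_level = round((wrinkle_level + lick_level + yawn_level) / 3)
    return average_level, average_level >= threshold_level
```

- for the example of FIG. 8 (levels 2, 4, and 3), determine_non_wakefulness(2, 4, 3) returns an average level of 3, which reaches the threshold of level 3 and is therefore judged as the non-wakeful state.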
- as described above, the state determination device 100 determines whether or not the user is in the non-wakeful state based on the action of the user wrinkling between the eyebrows, the action of the user licking the lips, and the action of the user yawning.
- these three actions in the non-wakeful state have little or no individual difference. Since the state determination device 100 determines the non-awakening state based on user actions with little or no individual difference, the non-wakeful state can be determined with high accuracy.
- the state determination unit 160 may determine the non-awakening level based on the number of times the user has wrinkled between the eyebrows and the state determination model table 182a, and may determine that the user is in the non-wakeful state when the determined non-wakefulness level is equal to or higher than a preset threshold level.
- the preset threshold level is level 3.
- the output unit 170 may output the determined non-wake level.
- similarly, the state determination unit 160 may determine the non-awakening level based on the number of times the user has licked the lips and the state determination model table 182a, and may determine that the user is in the non-wakeful state when the determined non-wakefulness level is equal to or higher than a preset threshold level.
- the preset threshold level is level 3.
- the output unit 170 may output the determined non-wake level.
- similarly, the state determination unit 160 may determine the non-awakening level based on the number of times the user yawns and the state determination model table 182a, and may determine that the user is in the non-wakeful state when the determined non-wakefulness level is equal to or higher than a preset threshold level. For example, the preset threshold level is level 3.
- the output unit 170 may output the determined non-wake level.
- the case where the state determination unit 160 determines whether or not the user is in the non-wakeful state based on the average value of the three non-wakefulness levels has been described above.
- note that the state determination unit 160 may determine whether or not the user is in a non-wakeful state based on the average value of two non-wakefulness levels. For example, the state determination unit 160 may determine whether or not the user is in the non-wakeful state based on the average value of the non-wakefulness level based on the number of times the user wrinkles between the eyebrows and the non-wakefulness level based on the number of times the user licks the lips.
- further, for example, the state determination unit 160 may determine whether or not the user is in the non-wakeful state based on the average value of the non-wakefulness level based on the number of times the user wrinkles between the eyebrows and the non-wakefulness level based on the number of times the user yawns. Further, for example, the state determination unit 160 may determine whether or not the user is in the non-wakeful state based on the average value of the non-wakefulness level based on the number of times the user licks the lips and the non-wakefulness level based on the number of times the user yawns.
- Embodiment 2. Next, a second embodiment will be described. The second embodiment mainly describes matters different from the first embodiment, and explanation of matters common to the first embodiment is omitted. In the second embodiment, FIGS. 1 to 13 are also referred to.
- FIG. 14 is a functional block diagram showing the configuration of the state determination device of the second embodiment.
- the state determination device 100a has an extraction unit 10a.
- the extraction unit 10a includes a face condition determination unit 191 and a face feature amount extraction area determination unit 192. Further, the state determination device 100a includes an average facial feature point model storage unit 184, a face condition storage unit 185, and an extraction area determination model storage unit 186.
- a part or all of the extraction unit 10a, the face condition determination unit 191, and the face feature amount extraction area determination unit 192 may be realized by the processor 101. Further, some or all of the extraction unit 10a, the face condition determination unit 191, and the face feature amount extraction area determination unit 192 may be realized as a module of a program executed by the processor 101. For example, the program executed by the processor 101 is also called a state determination program.
- the average facial feature point model storage unit 184, the face condition storage unit 185, and the extraction region determination model storage unit 186 may be realized as storage areas secured in the volatile storage device 102 or the non-volatile storage device 103. Components in FIG. 14 that are the same as those shown in FIG. 1 are given the same reference numerals as in FIG. 1.
- the face condition determination unit 191 determines whether or not the user wears the wearing object based on the plurality of frames. For example, the face condition determination unit 191 determines that the user wears sunglasses when eye feature points are not extracted as face feature points. Further, for example, the face condition determination unit 191 determines that the user wears the mask when the feature point of the mouth is not extracted as the face feature point.
- the face condition determination unit 191 may also determine whether or not the user wears the wearable object, as described below. First, the average facial feature point model storage unit 184 will be described.
- the average facial feature point model storage unit 184 stores an average facial feature point model table.
- FIG. 15 is a diagram showing an example of an average facial feature point model table according to the second embodiment.
- the average face feature point model table 184a is stored in the average face feature point model storage unit 184.
- the average facial feature point model table 184a is stored in the average facial feature point model storage unit 184 before the state determination device 100a executes the non-wakefulness determination.
- the average face feature point model table 184a is also referred to as average face feature point model information.
- the average face feature point model table 184a has items of feature points and average coordinates.
- the average facial feature point model table 184a indicates the positions of average facial parts. For example, in the average facial feature point model table 184a, it is registered that the average position of the outer edge of the left eyebrow on the faces of many people is (100, 100).
- the face condition determination unit 191 uses the average facial feature point model table 184a and the facial feature points to determine whether or not the user wears a wearing object. For example, when the distance between the position of a feature point of the left eye (that is, the outer corner or the inner corner of the left eye) and the corresponding average coordinates registered in the average face feature point model table 184a is equal to or greater than a threshold value, the face condition determination unit 191 determines that the reliability of the position of the feature point of the left eye is low. In that case, the face condition determination unit 191 determines that the user wears sunglasses. Similarly, when the distance between the position of a feature point of the mouth and the average coordinates of the mouth registered in the average face feature point model table 184a is equal to or greater than a threshold value, the face condition determination unit 191 determines that the user wears a mask.
- the face condition determination unit 191 uses the Euclidean distance or the Mahalanobis distance when comparing the information registered in the average facial feature point model table 184a with the facial feature points. Note that, when the Euclidean distance or the Mahalanobis distance is used, the face condition determination unit 191 normalizes the facial feature points so that the distance between the outer corner of the left eye and the outer corner of the right eye extracted by the facial feature point extraction unit 130 becomes equal to the distance between the outer corner of the left eye and the outer corner of the right eye registered in the average facial feature point model table 184a. In this example, the face condition determination unit 191 uses the distance between the outer corners of the left and right eyes, but a different distance may be used; for example, the distance between the outer edge of the left eyebrow and the outer edge of the right eyebrow.
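- a minimal sketch of this wearing-object check follows, assuming the feature points could be extracted at all; the point names, the mid-point alignment step, and the threshold value are assumptions added for illustration (the text only specifies equalizing the outer-eye-corner distance and comparing against a Euclidean or Mahalanobis distance threshold).

```python
import numpy as np

def wears_sunglasses(feature_points, average_points, threshold=15.0):
    """feature_points / average_points: dicts mapping part names to (x, y) coordinates.
    The part names and the threshold value are hypothetical; the mask check would use
    the mouth feature points in the same way."""
    fp = {k: np.asarray(v, dtype=float) for k, v in feature_points.items()}
    ap = {k: np.asarray(v, dtype=float) for k, v in average_points.items()}
    # Normalise: scale the extracted points about the mid-point of the outer eye
    # corners so that their outer-eye-corner distance equals the model's, then move
    # that mid-point onto the model's mid-point (the alignment step is an assumption).
    fp_mid = (fp["left_eye_outer"] + fp["right_eye_outer"]) / 2
    ap_mid = (ap["left_eye_outer"] + ap["right_eye_outer"]) / 2
    scale = (np.linalg.norm(ap["left_eye_outer"] - ap["right_eye_outer"])
             / np.linalg.norm(fp["left_eye_outer"] - fp["right_eye_outer"]))
    aligned = {k: (v - fp_mid) * scale + ap_mid for k, v in fp.items()}
    # A large Euclidean deviation of the eye feature points from the average model
    # means their reliability is low, so the user is judged to be wearing sunglasses.
    deviation = max(np.linalg.norm(aligned[k] - ap[k])
                    for k in ("left_eye_outer", "left_eye_inner"))
    return deviation >= threshold
```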
- the face condition determination unit 191 also determines whether or not any of the n frames includes a shadow. Furthermore, the face condition determination unit 191 determines whether or not color skipping exists in any of the plurality of frames. Color skipping is, for example, whiteout, that is, the appearance of blown-out white parts in the frame. For example, when light is emitted from the left side by illumination, the left part of the face in the frame is color-skipped.
- the face condition determination unit 191 performs control so that a shadowed or color-skipped area is not set as a feature amount extraction area. Thereby, the facial feature amount extraction area calculation unit 140 can calculate an appropriate area as the facial feature amount extraction area. Further, the face condition determination unit 191 determines that the condition is normal when the user does not wear a wearing object and the frame does not include shadowed or color-skipped areas. The face condition determination unit 191 stores the determination result in the face condition storage unit 185. Here, the face condition storage unit 185 will be described. The face condition storage unit 185 stores a face condition table.
- FIG. 16A to 16C are diagrams showing an example of the face condition table.
- the face condition table 185a has a face condition item.
- FIG. 16A shows a state in which the face condition determining unit 191 has registered “normal” in the face condition table 185a.
- FIG. 16B shows a state in which the face condition determination unit 191 registers “wearing a mask” in the face condition table 185a.
- FIG. 16C shows a state in which the face condition determination unit 191 registers “irradiation from the left” in the face condition table 185a.
- the “diagonal face” may be registered in the face condition table 185a.
- the extraction area determination model storage unit 186 stores the extraction area determination model table.
- FIG. 17 is a diagram showing an example of the extraction area determination model table according to the second embodiment.
- the extraction area determination model table 186a is stored in the extraction area determination model storage unit 186.
- the extraction region determination model table 186a is stored in the extraction region determination model storage unit 186 before the state determination device 100a executes the non-wakefulness determination.
- the extraction region determination model table 186a has items of face condition and face feature amount extraction region.
- the extraction area determination model table 186a is also referred to as extraction area determination model information.
- the extraction region determination model table 186a is information indicating which region of the face region is to be the face feature amount extraction region, according to the position where the wearing object is attached.
- the extraction region determination model table 186a is information indicating which region of the face region is to be the face feature amount extraction region according to the position where the shadow or the color skip occurs.
- the face feature amount extraction area determination unit 192 determines the face feature amount extraction areas based on the extraction area determination model table 186a. Specifically, the face feature amount extraction area determination unit 192 determines the face feature amount extraction areas based on the extraction area determination model table 186a and the face condition determined by the face condition determination unit 191.
- for example, when the face condition indicates that the user wears a mask, the face feature amount extraction area determination unit 192 determines the face feature amount extraction area to be the eyebrow region.
- in other words, the face feature amount extraction area determination unit 192 performs control so that the mouth region and the cheek region are not used as face feature amount extraction regions, and determines the eyebrow region as the face feature amount extraction area.
- the face feature amount extraction area calculation unit 140 calculates the face feature amount extraction area determined by the face feature amount extraction area determination unit 192 based on the face feature points extracted by the face feature point extraction unit 130.
- FIG. 18 is a flowchart showing face condition determination processing according to the second embodiment.
- FIG. 18 is an example of processing executed by the face condition determination unit 191.
- (Step S51) The face condition determination unit 191 acquires the face feature points from the face feature point storage unit 180.
- (Step S52) The face condition determination unit 191 determines whether or not there is a shadow or color skip in the frame. For example, the face condition determination unit 191 determines that a shadow exists in the frame when the color of a certain area of the face in the frame is black. In addition, for example, the face condition determination unit 191 determines that color skip exists in the frame when the color of a certain area of the face in the frame is white, or when the facial feature points of that area cannot be acquired. Note that in FIG. 18 it is assumed that the shadow or color skip occurs because light is emitted from the left side by illumination.
- the face condition determination unit 191 may determine whether the shadow or the color skip exists in the frame based on the position information. If there is a shadow or color skip in the frame, the face condition determination unit 191 advances the process to step S53. If there is no shadow or color skip in the frame, the face condition determination unit 191 advances the process to step S54.
- Step S53 The face condition determination unit 191 registers “irradiation from left” in the face condition table 185a. Then, the face condition determination unit 191 ends the process.
- Step S54 The face condition determination unit 191 determines whether or not the user wears a wearing object. For example, the face condition determination unit 191 determines that the user wears the wearing object when the eye feature points are not extracted as the face feature points. When the user wears the wearing object, the face condition determining unit 191 advances the process to step S56. If the user does not wear the wearable object, the face condition determination unit 191 advances the process to step S55.
- Step S55 The face condition determination unit 191 registers “normal time” in the face condition table 185a. Then, the face condition determination unit 191 ends the process.
- Step S56 The face condition determination unit 191 determines whether or not the user wears a mask. For example, the face condition determination unit 191 determines that the user wears the mask when the feature point of the mouth is not extracted as the face feature point. When the user wears the mask, the face condition determination unit 191 advances the process to step S57. If the user does not wear the mask, the face condition determination unit 191 advances the process to step S58.
- (Step S57) The face condition determination unit 191 registers “wearing a mask” in the face condition table 185a. Then, the face condition determination unit 191 ends the process.
- (Step S58) The face condition determination unit 191 determines whether or not the user wears sunglasses. For example, the face condition determination unit 191 determines that the user wears sunglasses when eye feature points are not extracted as face feature points. If the user wears sunglasses, the face condition determination unit 191 advances the process to step S59. If the user does not wear sunglasses, the face condition determination unit 191 ends the process.
- Step S59 The face condition determination unit 191 registers “wearing sunglasses” in the face condition table 185a. Then, the face condition determination unit 191 ends the process.
- FIG. 19 is a flowchart showing the processing for determining the facial feature amount extraction area according to the second embodiment.
- the face feature amount extraction area determination unit 192 acquires the face condition from the face condition table 185a.
- the face feature amount extraction area determination unit 192 determines the face feature amount extraction area based on the extraction area determination model table 186a and the face condition. For example, the face feature amount extraction area determination unit 192 determines the face feature amount extraction area as the eyebrow region when the face condition is “wearing a mask”. Further, for example, when the face condition is “wearing sunglasses”, the face feature amount extraction area determination unit 192 determines the face feature amount extraction area as the mouth region and the cheek region. Further, for example, when the face condition is “illumination from the left”, the face feature amount extraction area determination unit 192 determines the face feature amount extraction area to be the eyebrow region, the mouth region, and the right cheek region. That is, the face feature amount extraction area determination unit 192 does not set the left cheek region as the face feature amount extraction area.
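- the extraction region determination model table 186a could be represented as a simple lookup like the following sketch, using only the face conditions and region choices named in this description; the string labels are informal renderings, not the table's actual format.

```python
# Face conditions and region choices as named in this description; the string labels
# are informal renderings, not the actual format of table 186a.
EXTRACTION_REGION_MODEL = {
    "normal":                  ["eyebrow", "mouth", "cheek"],
    "wearing a mask":          ["eyebrow"],
    "wearing sunglasses":      ["mouth", "cheek"],
    "illumination from left":  ["eyebrow", "mouth", "right cheek"],
}

def determine_extraction_regions(face_condition):
    """Return the face feature amount extraction regions for a given face condition."""
    return EXTRACTION_REGION_MODEL.get(face_condition, ["eyebrow", "mouth", "cheek"])
```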
- The state determination device 100a can thus determine the non-awakening state by using face feature amount extraction areas that can still be extracted even when the user is wearing a worn item. Likewise, the state determination device 100a can determine the non-awakening state by using the extractable face feature amount extraction areas even when light is shining on the user.
- 10a extraction unit; 100, 100a state determination device; 101 processor; 102 volatile storage device; 103 non-volatile storage device; 104 camera; 105 display; 110 acquisition unit; 120 face area extraction unit; 130 face feature point extraction unit; 140 face feature amount extraction area calculation unit; 150 face feature amount extraction unit; 160 state determination unit; 170 output unit; 180 face feature point storage unit; 180a face feature point table; 181 face feature amount storage unit; 181a face feature amount table; 182 state determination model storage unit; 182a state determination model table; 183 determination result storage unit; 183a determination result table; 184 average face feature point model storage unit; 184a average face feature point model table; 185 face condition storage unit; 185a face condition table; 186 extraction area determination model storage unit; 186a extraction area determination model table; 191 face condition determination unit; 192 face feature amount extraction area determination unit; 200 midpoint; 201 rectangular region; 210 midpoint; 211 rectangular region.
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Public Health (AREA)
- Medical Informatics (AREA)
- Heart & Thoracic Surgery (AREA)
- Veterinary Medicine (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physiology (AREA)
- Dentistry (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Social Psychology (AREA)
- Psychiatry (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Anesthesiology (AREA)
- Child & Adolescent Psychology (AREA)
- Developmental Disabilities (AREA)
- Educational Technology (AREA)
- Hospice & Palliative Care (AREA)
- Psychology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Toxicology (AREA)
- Pharmacology & Pharmacy (AREA)
- Chemical & Material Sciences (AREA)
- Medicinal Chemistry (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
Meanwhile, to prevent accidents, technologies for determining the non-awakening state of a person in a vehicle or a factory are being developed. For example, a feature amount that changes in the non-awakening state is extracted from the heart rate, brain waves, blinking, and the like. Next, the feature amount is compared with a threshold, and the non-awakening state is determined from the comparison result. In this way, the non-awakening state can be determined from biological signals such as the heart rate. However, in this determination method a sensor is attached to the person, so the person finds it bothersome. The method also cannot be used when the person is not wearing a sensor. Furthermore, because it uses sensors, the method is costly.
Therefore, the problem is how to determine the non-awakening state with high accuracy.
FIG. 1 is a diagram showing the state determination device according to the first embodiment. The state determination device 100 is a device that executes a state determination method.
The state determination device 100 determines the non-awakening state. For example, the non-awakening state is a dozing state, a state of dozing off due to drinking, or the like. The non-awakening state also includes a state in which consciousness is hazy, for example a state in which the user is drowsy or a state in which the user is drunk. Here, the user may temporarily become awake after dozing and then fall into a dozing state again. Returning to a dozing state after such a short time may also be regarded as a dozing state. Therefore, the non-awakening state includes the case where the user temporarily becomes awake from a dozing state and then falls into a dozing state again.
FIG. 2 is a diagram showing the hardware configuration of the state determination device according to the first embodiment. The state determination device 100 includes a processor 101, a volatile storage device 102, a non-volatile storage device 103, a camera 104, and a display 105.
The camera 104 is a device that captures images of the face and is also referred to as an imaging device. The display 105 is a device that displays information and is also referred to as a display device.
Note that the state determination device 100 without the camera 104 and the display 105 may be regarded as an information processing device.
The state determination device 100 includes an acquisition unit 110, an extraction unit 10, a state determination unit 160, and an output unit 170. The extraction unit 10 includes a face area extraction unit 120, a face feature point extraction unit 130, a face feature amount extraction area calculation unit 140, and a face feature amount extraction unit 150.
The state determination device 100 also includes a face feature point storage unit 180, a face feature amount storage unit 181, a state determination model storage unit 182, and a determination result storage unit 183.
The extraction unit 10 extracts a face area from each of a plurality of frames, extracts face feature points from the face area, calculates face feature amount extraction areas based on the face feature points, and extracts face feature amounts from the face feature amount extraction areas. Here, the face area indicates the region of the face. A face feature point indicates a part of the face. A face feature amount extraction area is an area of the face area in which a change occurs when the user is in the non-awakening state. A face feature amount is a feature amount.
The processing executed by the extraction unit 10 is described in detail with reference to the face area extraction unit 120, the face feature point extraction unit 130, the face feature amount extraction area calculation unit 140, and the face feature amount extraction unit 150.
The face feature point extraction unit 130 extracts face feature points such as the outline, eyebrows, eyes, nose, and mouth based on the face area. A method of extracting face feature points is described, for example, in Non-Patent Document 2.
The face feature point extraction unit 130 stores the face feature points in the face feature point storage unit 180. The face feature point storage unit 180 is described here. The face feature point storage unit 180 stores a face feature point table.
For example, the coordinates at which the face feature point extraction unit 130 extracted the inner end of the left eye are registered in the face feature point table 180a. The inner end of the left eye is also referred to as the inner corner of the left eye.
The face feature point extraction unit 130 calculates the face orientation from the face feature points. The face orientation is expressed by yaw, pitch, and roll. The face feature point extraction unit 130 registers the face orientation in the face feature point table 180a.
The face feature amount extraction area calculation unit 140 calculates the extraction areas of the face feature amounts used for determining the non-awakening state.
Here, as an action that the user performs in the non-awakening state, or as an action that foreshadows the non-awakening state, the user resists drowsiness. Resisting drowsiness is an action in which the user deliberately squeezes the eyes shut; it may also be described as a strong blink in which the eyes are deliberately shut tight. When the user deliberately squeezes the eyes shut, wrinkles appear between the eyebrows (on the glabella). Therefore, the face feature amount extraction area calculation unit 140 calculates the glabella region within the face area as a face feature amount extraction area.
Furthermore, as an action performed in the non-awakening state or as a sign of it, the user yawns. When the user yawns, the mouth opens and wrinkles appear on the cheeks. Therefore, the face feature amount extraction area calculation unit 140 calculates the mouth region and the cheek region within the face area as face feature amount extraction areas.
FIGS. 4(A) and 4(B) are diagrams showing examples of calculating the face feature amount extraction areas. FIG. 4(A) is an example of calculating the glabella region. The face feature amount extraction area calculation unit 140 identifies the inner corners of the left and right eyes among the face feature points extracted by the face feature point extraction unit 130, and calculates the midpoint 200 between the inner corner of the left eye and the inner corner of the right eye. It then calculates a rectangular region 201 centered on the midpoint 200 (that is, an a [pixel] × a [pixel] region). The rectangular region 201 is the glabella region. In this way, the face feature amount extraction area calculation unit 140 calculates the glabella region.
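As a rough illustration of the glabella-region calculation just described, the following sketch assumes the inner eye corners are given as (x, y) pixel coordinates and that a is the side length of the square region in pixels; the function and variable names are illustrative and not taken from the embodiment.

```python
def glabella_region(left_eye_inner, right_eye_inner, a):
    """Return (x0, y0, x1, y1) of an a-by-a square centered on the midpoint
    between the inner corners of the left and right eyes."""
    mx = (left_eye_inner[0] + right_eye_inner[0]) / 2.0
    my = (left_eye_inner[1] + right_eye_inner[1]) / 2.0
    half = a / 2.0
    return (int(mx - half), int(my - half), int(mx + half), int(my + half))

# Example: inner corners at (210, 180) and (260, 182), 40-pixel square
print(glabella_region((210, 180), (260, 182), 40))  # (215, 161, 255, 201)
```

The mouth and cheek regions could be derived analogously from the corresponding feature points, with the face orientation used to adjust the region when the face is turned.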
Furthermore, the face feature amount extraction area calculation unit 140 may calculate the mouth region based on the face orientation calculated by the face feature point extraction unit 130 and on the face feature points. The calculation method is as described above.
The face feature amount extraction area calculation unit 140 can calculate the cheek region in the same way.
The face feature amount extraction unit 150 extracts face feature amounts based on the glabella region, the mouth region, and the cheek region. The face feature amount is a HOG (Histograms of Oriented Gradients) feature amount. The HOG feature amount is described, for example, in Non-Patent Document 3.
The face feature amount is not limited to the HOG feature amount. For example, the face feature amount may be a SIFT (Scale-Invariant Feature Transform) feature amount, SURF (Speeded-Up Robust Features), a Haar-like feature amount, or the like.
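For reference, a HOG feature vector for a cropped region can be computed with scikit-image as sketched below. The cell size, block size, and number of orientation bins are placeholder values; the embodiment does not state which parameters are used.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog

def hog_feature(frame: np.ndarray, region: tuple) -> np.ndarray:
    """Crop the given (x0, y0, x1, y1) region from an RGB frame and return its
    HOG feature vector. Parameter values below are illustrative."""
    x0, y0, x1, y1 = region
    crop = rgb2gray(frame[y0:y1, x0:x1])
    return hog(crop, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")
```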
The face feature amount extraction unit 150 stores the face feature amounts in the face feature amount storage unit 181. The face feature amount storage unit 181 is described here. The face feature amount storage unit 181 stores a face feature amount table.
In the face feature amount table 181a, the HOG feature amounts corresponding to each of the n frames (n is an integer of 2 or more) acquired by the acquisition unit 110 within a predetermined time are registered. The predetermined time is, for example, 5 minutes.
The state determination unit 160 determines whether or not the user is in the non-awakening state based on the face feature amount in each of the plurality of frames and determination information created in advance. It may also be said that the state determination unit 160 determines whether or not the user is in the non-awakening state based on the face feature amounts corresponding to each of the plurality of frames and determination information stored in advance.
The state determination unit 160 determines the non-awakening state based on the HOG feature amounts. Specifically, the state determination unit 160 determines the non-awakening state based on the number of times the user performed the following three actions within a predetermined time: furrowing the brow, licking the lips to relieve a dry mouth, and yawning. The predetermined time is, for example, 5 minutes.
This information is referred to as the state determination model table. The state determination model table is described below.
The HOG feature amount Hn is the HOG feature amount corresponding to the n-th frame acquired by the acquisition unit 110 within the predetermined time.
FIG. 7 is a diagram for explaining the method of calculating the number of times the brow has been furrowed according to the first embodiment. The vertical axis of the graph in FIG. 7 indicates the cosine similarity Sn, and the horizontal axis indicates time. When the brow is furrowed, the edges of the wrinkles appear strongly, and the cosine similarity Sn takes a small value.
The state determination unit 160 determines that the user is in the non-awakening state when the average value is equal to or greater than a preset threshold. For example, the threshold is 3. This threshold is also referred to as a threshold level. In other words, the state determination unit 160 determines that the user is in the non-awakening state when the average value is level 3 or higher. Which level is treated as the non-awakening state can be changed depending on the evaluation method.
The output unit 170 outputs the non-awakening level registered in the determination result table 183a. For example, the output unit 170 outputs the non-awakening level to the display 105. The output unit 170 may also output the non-awakening level by voice. The non-awakening level registered in the determination result table 183a can also be regarded as information indicating the average value.
FIG. 10 is a flowchart showing the process of calculating a face feature amount extraction area according to the first embodiment. FIG. 10 describes the calculation of the glabella region and is an example of the processing executed by the face feature amount extraction area calculation unit 140.
(Step S12) The face feature amount extraction area calculation unit 140 calculates the midpoint between the inner corner of the left eye and the inner corner of the right eye.
(Step S14) The face feature amount extraction area calculation unit 140 calculates a rectangular region centered on the center coordinates.
(Step S15) The face feature amount extraction area calculation unit 140 acquires the rectangular region of the face area extracted by the face area extraction unit 120.
The face feature amount extraction area calculation unit 140 can calculate the mouth region and the cheek region by processing similar to the above.
(Step S21) The face feature amount extraction unit 150 acquires the three face feature amount extraction areas calculated by the face feature amount extraction area calculation unit 140, namely the glabella region, the mouth region, and the cheek region.
(Step S22) The face feature amount extraction unit 150 extracts a HOG feature amount based on the glabella region, a HOG feature amount based on the mouth region, and a HOG feature amount based on the cheek region.
As a result, the HOG feature amounts extracted based on each of the three face feature amount extraction areas are registered in the face feature amount table 181a.
(Step S31) The state determination unit 160 acquires, from the face feature amount storage unit 181, the HOG feature amount extracted based on the glabella region of one frame (referred to, for example, as the first frame).
(Step S32) The state determination unit 160 calculates the cosine similarity Sn using equation (1).
For example, the case where the cosine similarity Sn calculated in step S32 is smaller than the threshold S is the case where the edges of the wrinkles on the glabella appear strongly, that is, the case where the user is furrowing the brow.
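Equation (1) is not reproduced in this excerpt. Assuming it is the standard cosine similarity between the HOG feature vector H_n of the n-th frame and a reference HOG feature vector H_ref (for example, one extracted from the glabella region when the brow is not furrowed), it would take the form:

```latex
S_n = \frac{H_n \cdot H_{\mathrm{ref}}}{\lVert H_n \rVert \, \lVert H_{\mathrm{ref}} \rVert}
```

Under this reading, a small S_n means the gradient structure of the current glabella crop differs strongly from the reference, which is consistent with the wrinkle edges appearing when the brow is furrowed.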
If the determination condition is satisfied, the state determination unit 160 advances the process to step S34. If the determination condition is not satisfied, the state determination unit 160 advances the process to step S35.
(Step S35) The state determination unit 160 determines whether or not 5 minutes have elapsed since the counting process started. If 5 minutes have elapsed, the state determination unit 160 ends the process. If 5 minutes have not elapsed, the state determination unit 160 returns the process to step S31. In this case, in step S31, the state determination unit 160 acquires, from the face feature amount storage unit 181, the HOG feature amount extracted based on the glabella region of the second frame, which the acquisition unit 110 acquired after the first frame.
The above 5 minutes is an arbitrary time; a time other than 5 minutes may be used.
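A minimal sketch of the counting loop in steps S31 through S35 follows, assuming equation (1) is a cosine similarity against a reference HOG vector and that frames carry timestamps in seconds; the threshold value, window length, and helper names are illustrative.

```python
import numpy as np

def cosine_similarity(h: np.ndarray, h_ref: np.ndarray) -> float:
    return float(np.dot(h, h_ref) / (np.linalg.norm(h) * np.linalg.norm(h_ref)))

def count_brow_furrows(hog_features, h_ref, threshold_s=0.9, window_sec=300.0):
    """Count frames within a window (default 5 minutes) whose glabella HOG
    feature is dissimilar to the reference, i.e. Sn < threshold S.
    `hog_features` is an iterable of (timestamp_sec, hog_vector) pairs."""
    count = 0
    start = None
    for t, h in hog_features:
        if start is None:
            start = t
        if t - start > window_sec:          # step S35: stop once the window has elapsed
            break
        if cosine_similarity(h, h_ref) < threshold_s:  # Sn below threshold S: wrinkle edges strong
            count += 1                                  # counted (step S34)
        # otherwise continue with the next frame (back to step S31)
    return count
```

A real implementation would likely merge consecutive below-threshold frames into a single furrowing event rather than counting every frame, but that refinement is not described in this excerpt.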
(Step S41) The state determination unit 160 determines a non-awakening level based on the state determination model table 182a and the number of times the brow has been furrowed. Likewise, it determines a non-awakening level based on the state determination model table 182a and the number of times the lips have been licked, and a non-awakening level based on the state determination model table 182a and the number of yawns.
(Step S43) The state determination unit 160 determines the non-awakening state based on the average value. For example, the state determination unit 160 determines that the user is in the non-awakening state when the average value is level 3 or higher.
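The following sketch illustrates steps S41 and S43, assuming the state determination model table can be represented as count-to-level breakpoints; the breakpoint values and helper names are illustrative and are not the contents of table 182a.

```python
def level_from_count(count: int, breakpoints=(1, 3, 5, 8)) -> int:
    """Map an action count to a non-awakening level 1..5 using illustrative breakpoints."""
    level = 1
    for i, b in enumerate(breakpoints, start=2):
        if count >= b:
            level = i
    return level

def determine_state(furrow_count, lick_count, yawn_count, threshold_level=3):
    levels = [level_from_count(furrow_count),     # step S41: one level per action
              level_from_count(lick_count),
              level_from_count(yawn_count)]
    average = sum(levels) / len(levels)           # average of the determined levels
    is_non_awake = average >= threshold_level     # step S43: compare with threshold level
    return average, is_non_awake

print(determine_state(4, 2, 6))  # (3.0, True) with these illustrative breakpoints
```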
(Step S44) The state determination unit 160 stores the determination result in the determination result storage unit 183.
Next, the second embodiment is described. The description of the second embodiment focuses mainly on matters that differ from the first embodiment, and the description of matters common to the first embodiment is omitted. The second embodiment refers to FIGS. 1 to 13.
The state determination device 100a further includes an average face feature point model storage unit 184, a face condition storage unit 185, and an extraction area determination model storage unit 186.
Components in FIG. 14 that are the same as those shown in FIG. 1 are given the same reference signs as in FIG. 1.
The face condition determination unit 191 determines the condition to be normal when the user is not wearing a worn item and the frame contains no shadow or blown-out highlight region.
The face condition determination unit 191 stores the determination result in the face condition storage unit 185. The face condition storage unit 185 is described here. The face condition storage unit 185 stores a face condition table.
FIG. 16(A) shows a state in which the face condition determination unit 191 has registered “normal” in the face condition table 185a. FIG. 16(B) shows a state in which “wearing a mask” has been registered in the face condition table 185a. FIG. 16(C) shows a state in which “illuminated from the left” has been registered in the face condition table 185a.
Next, the extraction area determination model storage unit 186 is described. The extraction area determination model storage unit 186 stores an extraction area determination model table.
The face feature amount extraction area calculation unit 140 calculates the face feature amount extraction area determined by the face feature amount extraction area determination unit 192, based on the face feature points extracted by the face feature point extraction unit 130.
FIG. 18 is a flowchart showing the face condition determination process according to the second embodiment. FIG. 18 is an example of the processing executed by the face condition determination unit 191.
(Step S51) The face condition determination unit 191 acquires the face feature points from the face feature point storage unit 180.
For example, the face condition determination unit 191 determines that a shadow exists in the frame when a region of the face in the frame is black. Also, for example, it determines that a blown-out highlight exists in the frame when a region of the face in the frame is white, or when face feature points cannot be acquired in some region of the frame. In FIG. 18, it is assumed that the shadow or the blown-out highlight occurs because light is cast from the left by the lighting.
If a shadow or a blown-out highlight exists in the frame, the face condition determination unit 191 advances the process to step S53. If neither a shadow nor a blown-out highlight exists in the frame, the face condition determination unit 191 advances the process to step S54.
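As a rough sketch of the brightness-based check described above, the following assumes a grayscale face crop with pixel values in 0–255 and uses illustrative thresholds for “black” (shadow) and “white” (blown-out highlight) regions; the threshold and ratio values are assumptions, not values from the embodiment.

```python
import numpy as np

def detect_shadow_or_highlight(gray_face: np.ndarray,
                               dark_thresh=40, bright_thresh=220,
                               area_ratio=0.2):
    """Return 'shadow', 'highlight', or None for a grayscale face crop.
    A condition is reported when more than `area_ratio` of the pixels are
    darker than dark_thresh or brighter than bright_thresh (illustrative values)."""
    total = gray_face.size
    if (gray_face < dark_thresh).sum() / total > area_ratio:
        return "shadow"
    if (gray_face > bright_thresh).sum() / total > area_ratio:
        return "highlight"
    return None
```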
(Step S54) The face condition determination unit 191 determines whether or not the user is wearing a worn item. For example, the face condition determination unit 191 determines that the user is wearing a worn item when no eye feature points have been extracted as face feature points.
If the user is wearing a worn item, the face condition determination unit 191 advances the process to step S56. If the user is not wearing a worn item, the face condition determination unit 191 advances the process to step S55.
(Step S56) The face condition determination unit 191 determines whether or not the user is wearing a mask. For example, the face condition determination unit 191 determines that the user is wearing a mask when no mouth feature points have been extracted as face feature points.
If the user is wearing a mask, the face condition determination unit 191 advances the process to step S57. If the user is not wearing a mask, the face condition determination unit 191 advances the process to step S58.
(Step S58) The face condition determination unit 191 determines whether or not the user is wearing sunglasses. For example, the face condition determination unit 191 determines that the user is wearing sunglasses when no eye feature points have been extracted as face feature points.
If the user is wearing sunglasses, the face condition determination unit 191 advances the process to step S59. If the user is not wearing sunglasses, the face condition determination unit 191 ends the process.
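A minimal sketch of the worn-item branch in steps S54 to S59 follows, assuming the extracted face feature points are available as a dictionary from part name to coordinates, with parts that could not be extracted simply absent; the part names and labels are illustrative.

```python
def determine_face_condition(feature_points: dict) -> str:
    """Classify the face condition from which feature points could be extracted.
    `feature_points` maps part names (e.g. 'left_eye', 'mouth') to coordinates."""
    eyes_found = "left_eye" in feature_points and "right_eye" in feature_points
    mouth_found = "mouth" in feature_points
    if eyes_found:
        return "normal"              # step S54 -> S55: no worn item detected
    if not mouth_found:
        return "wearing a mask"      # step S56 -> S57: the mouth is hidden as well
    return "wearing sunglasses"      # step S58 -> S59: only the eyes are hidden

print(determine_face_condition({"mouth": (235, 260)}))  # -> 'wearing sunglasses'
```

Note that, in the flow as described, the mask case is reached only when the eye feature points are also missing, because the worn-item check in step S54 keys on the eyes; the sketch follows that flow literally.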
(Step S61) The face feature amount extraction area determination unit 192 acquires the face condition from the face condition table 185a.
Claims (21)
- A state determination device comprising: an extraction unit that extracts, from each of a plurality of frames sequentially acquired by capturing images of a user's face, a face area indicating the region of the face, extracts face feature points indicating parts of the face from the face area, calculates, based on the face feature points, a face feature amount extraction area that is an area of the face area in which a change occurs when the user is in a non-awakening state, and extracts a face feature amount, which is a feature amount, from the face feature amount extraction area; a state determination unit that determines whether or not the user is in the non-awakening state based on the face feature amount in each of the plurality of frames and determination information created in advance; and an output unit that outputs a determination result.
- The state determination device according to claim 1, wherein the face feature amount extraction area is a glabella region within the face area.
- The state determination device according to claim 2, wherein the determination information includes information for determining a non-awakening level according to the number of times the user has furrowed the brow, and the state determination unit calculates the number of times the user has furrowed the brow based on the face feature amount extracted from the glabella region in each of the plurality of frames, determines a non-awakening level based on that number and the determination information, and determines that the user is in the non-awakening state when the determined non-awakening level is equal to or higher than a preset threshold level.
- The state determination device according to claim 3, wherein the output unit outputs the determined non-awakening level.
- The state determination device according to claim 1, wherein the face feature amount extraction area is a mouth region within the face area.
- The state determination device according to claim 5, wherein the determination information includes information for determining a non-awakening level according to the number of times the user has licked the lips, and the state determination unit calculates the number of times the user has licked the lips based on the face feature amount extracted from the mouth region in each of the plurality of frames, determines a non-awakening level based on that number and the determination information, and determines that the user is in the non-awakening state when the determined non-awakening level is equal to or higher than a preset threshold level.
- The state determination device according to claim 6, wherein the output unit outputs the determined non-awakening level.
- The state determination device according to claim 1, wherein the face feature amount extraction areas are a mouth region and a cheek region within the face area.
- The state determination device according to claim 8, wherein the determination information includes information for determining a non-awakening level according to the number of times the user has yawned, and the state determination unit calculates the number of times the user has yawned based on the face feature amounts extracted from the mouth region and the cheek region in each of the plurality of frames, determines a non-awakening level based on that number and the determination information, and determines that the user is in the non-awakening state when the determined non-awakening level is equal to or higher than a preset threshold level.
- The state determination device according to claim 9, wherein the output unit outputs the determined non-awakening level.
- The state determination device according to claim 1, wherein the face feature amount extraction areas are a glabella region and a mouth region within the face area, the determination information includes information for determining a non-awakening level according to the number of times the user has furrowed the brow and information for determining a non-awakening level according to the number of times the user has licked the lips, and the state determination unit calculates the number of times the user has furrowed the brow based on the face feature amount extracted from the glabella region in each of the plurality of frames and determines a non-awakening level based on that number and the determination information, calculates the number of times the user has licked the lips based on the face feature amount extracted from the mouth region in each of the plurality of frames and determines a non-awakening level based on that number and the determination information, calculates an average value of the plurality of determined non-awakening levels, and determines that the user is in the non-awakening state when the average value is equal to or greater than a preset threshold.
- The state determination device according to claim 1, wherein the face feature amount extraction areas are a glabella region, a mouth region, and a cheek region within the face area, the determination information includes information for determining a non-awakening level according to the number of times the user has furrowed the brow and information for determining a non-awakening level according to the number of times the user has yawned, and the state determination unit calculates the number of times the user has furrowed the brow based on the face feature amount extracted from the glabella region in each of the plurality of frames and determines a non-awakening level based on that number and the determination information, calculates the number of times the user has yawned based on the face feature amounts extracted from the mouth region and the cheek region in each of the plurality of frames and determines a non-awakening level based on that number and the determination information, calculates an average value of the plurality of determined non-awakening levels, and determines that the user is in the non-awakening state when the average value is equal to or greater than a preset threshold.
- The state determination device according to claim 1, wherein the face feature amount extraction areas are a mouth region and a cheek region within the face area, the determination information includes information for determining a non-awakening level according to the number of times the user has licked the lips and information for determining a non-awakening level according to the number of times the user has yawned, and the state determination unit calculates the number of times the user has licked the lips based on the face feature amount extracted from the mouth region in each of the plurality of frames and determines a non-awakening level based on that number and the determination information, calculates the number of times the user has yawned based on the face feature amounts extracted from the mouth region and the cheek region in each of the plurality of frames and determines a non-awakening level based on that number and the determination information, calculates an average value of the plurality of determined non-awakening levels, and determines that the user is in the non-awakening state when the average value is equal to or greater than a preset threshold.
- The state determination device according to claim 1, wherein the face feature amount extraction areas are a glabella region, a mouth region, and a cheek region within the face area, the determination information includes information for determining a non-awakening level according to the number of times the user has furrowed the brow, information for determining a non-awakening level according to the number of times the user has licked the lips, and information for determining a non-awakening level according to the number of times the user has yawned, and the state determination unit calculates the number of times the user has furrowed the brow based on the face feature amount extracted from the glabella region in each of the plurality of frames and determines a non-awakening level based on that number and the determination information, calculates the number of times the user has licked the lips based on the face feature amount extracted from the mouth region in each of the plurality of frames and determines a non-awakening level based on that number and the determination information, calculates the number of times the user has yawned based on the face feature amounts extracted from the mouth region and the cheek region in each of the plurality of frames and determines a non-awakening level based on that number and the determination information, calculates an average value of the plurality of determined non-awakening levels, and determines that the user is in the non-awakening state when the average value is equal to or greater than a preset threshold.
- The state determination device according to any one of claims 11 to 14, wherein the output unit outputs information indicating the average value.
- The state determination device according to claim 1, wherein the determination information is information obtained by machine learning and is information for determining whether or not the user is in the non-awakening state.
- The state determination device according to any one of claims 1 to 16, further comprising: a face condition determination unit that determines, based on the plurality of frames, whether or not the user is wearing a worn item; and a face feature amount extraction area determination unit that, when it is determined that the user is wearing the worn item, determines the face feature amount extraction area based on extraction area determination model information indicating which area of the face area is to be used as the face feature amount extraction area according to the position at which the worn item is worn.
- The state determination device according to claim 17, wherein the face condition determination unit determines whether or not the user is wearing the worn item based on average face feature point model information indicating the positions of parts of an average face and on the positions of the face feature points.
- The state determination device according to any one of claims 1 to 16, further comprising: a face condition determination unit that determines whether or not a shadow or a blown-out highlight exists in any of the plurality of frames; and a face feature amount extraction area determination unit that, when it is determined that the shadow or the blown-out highlight exists in any of the plurality of frames, determines the face feature amount extraction area based on extraction area determination model information indicating which area of the face area is to be used as the face feature amount extraction area according to the position at which the shadow or the blown-out highlight exists.
- A state determination method in which a state determination device extracts, from each of a plurality of frames sequentially acquired by capturing images of a user's face, a face area indicating the region of the face, extracts face feature points indicating parts of the face from the face area, calculates, based on the face feature points, a face feature amount extraction area that is an area of the face area in which a change occurs when the user is in a non-awakening state, extracts a face feature amount, which is a feature amount, from the face feature amount extraction area, determines whether or not the user is in the non-awakening state based on the face feature amount in each of the plurality of frames and determination information created in advance, and outputs a determination result.
- A state determination program that causes a state determination device to execute processing of: extracting, from each of a plurality of frames sequentially acquired by capturing images of a user's face, a face area indicating the region of the face; extracting face feature points indicating parts of the face from the face area; calculating, based on the face feature points, a face feature amount extraction area that is an area of the face area in which a change occurs when the user is in a non-awakening state; extracting a face feature amount, which is a feature amount, from the face feature amount extraction area; determining whether or not the user is in the non-awakening state based on the face feature amount in each of the plurality of frames and determination information created in advance; and outputting a determination result.
Priority Applications (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DE112018008131.1T DE112018008131B4 (de) | 2018-12-12 | 2018-12-12 | Zustandsbestimmungseinrichtung, zustandsbestimmungsverfahren und zustandsbestimmungsprogramm |
| CN201880099794.8A CN113168680B (zh) | 2018-12-12 | 2018-12-12 | 状态判定装置、状态判定方法以及记录介质 |
| PCT/JP2018/045595 WO2020121425A1 (ja) | 2018-12-12 | 2018-12-12 | 状態判定装置、状態判定方法、及び状態判定プログラム |
| JP2020558850A JP6906717B2 (ja) | 2018-12-12 | 2018-12-12 | 状態判定装置、状態判定方法、及び状態判定プログラム |
| US17/326,795 US11963759B2 (en) | 2018-12-12 | 2021-05-21 | State determination device, state determination method, and recording medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2018/045595 WO2020121425A1 (ja) | 2018-12-12 | 2018-12-12 | 状態判定装置、状態判定方法、及び状態判定プログラム |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/326,795 Continuation US11963759B2 (en) | 2018-12-12 | 2021-05-21 | State determination device, state determination method, and recording medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2020121425A1 true WO2020121425A1 (ja) | 2020-06-18 |
Family
ID=71075986
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2018/045595 Ceased WO2020121425A1 (ja) | 2018-12-12 | 2018-12-12 | 状態判定装置、状態判定方法、及び状態判定プログラム |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US11963759B2 (ja) |
| JP (1) | JP6906717B2 (ja) |
| CN (1) | CN113168680B (ja) |
| DE (1) | DE112018008131B4 (ja) |
| WO (1) | WO2020121425A1 (ja) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2022152500A (ja) * | 2021-03-29 | 2022-10-12 | 公益財団法人鉄道総合技術研究所 | 覚醒度推定方法、覚醒度推定装置及び覚醒度推定プログラム |
| WO2022250063A1 (ja) * | 2021-05-26 | 2022-12-01 | キヤノン株式会社 | 顔認証を行う画像処理装置および画像処理方法 |
| JP2022182960A (ja) * | 2021-05-26 | 2022-12-08 | キヤノン株式会社 | 画像処理装置、画像処理方法及びプログラム |
| JPWO2023095229A1 (ja) * | 2021-11-25 | 2023-06-01 | ||
| WO2023238365A1 (ja) * | 2022-06-10 | 2023-12-14 | 富士通株式会社 | 顔特徴情報抽出方法、顔特徴情報抽出装置、及び顔特徴情報抽出プログラム |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108615014B (zh) * | 2018-04-27 | 2022-06-21 | 京东方科技集团股份有限公司 | 一种眼睛状态的检测方法、装置、设备和介质 |
| US20230008323A1 (en) * | 2021-07-12 | 2023-01-12 | GE Precision Healthcare LLC | Systems and methods for predicting and preventing patient departures from bed |
| US12198565B2 (en) | 2021-07-12 | 2025-01-14 | GE Precision Healthcare LLC | Systems and methods for predicting and preventing collisions |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2008212298A (ja) * | 2007-03-01 | 2008-09-18 | Toyota Central R&D Labs Inc | 眠気判定装置及びプログラム |
| JP2012221061A (ja) * | 2011-04-05 | 2012-11-12 | Canon Inc | 画像認識装置、画像認識方法、及びプログラム |
| JP2016081212A (ja) * | 2014-10-15 | 2016-05-16 | 日本電気株式会社 | 画像認識装置、画像認識方法、および、画像認識プログラム |
| JP2017162409A (ja) * | 2016-03-11 | 2017-09-14 | ヤンマー株式会社 | 顔の表情と動作の認識装置及び方法 |
Family Cites Families (20)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH11339200A (ja) | 1998-05-28 | 1999-12-10 | Toyota Motor Corp | 居眠り運転検出装置 |
| LU90879B1 (de) * | 2001-12-12 | 2003-06-13 | Hagen Hultzsch | Verfahren und Vorrichtung zur Erkennung des Schlaefrigkeitszustandes von Fuehrern bewegter Objekte |
| KR101386823B1 (ko) * | 2013-10-29 | 2014-04-17 | 김재철 | 동작, 안면, 눈, 입모양 인지를 통한 2단계 졸음운전 방지 장치 |
| KR102368812B1 (ko) * | 2015-06-29 | 2022-02-28 | 엘지전자 주식회사 | 차량 운전 보조 방법 및 차량 |
| JP6707969B2 (ja) * | 2016-04-19 | 2020-06-10 | トヨタ自動車株式会社 | 覚醒度判定装置 |
| US20180012090A1 (en) * | 2016-07-07 | 2018-01-11 | Jungo Connectivity Ltd. | Visual learning system and method for determining a driver's state |
| US20190311014A1 (en) * | 2016-08-22 | 2019-10-10 | Mitsubishi Electric Corporation | Information presentation device, information presentation system, and information presentation method |
| WO2018118958A1 (en) * | 2016-12-22 | 2018-06-28 | Sri International | A driver monitoring and response system |
| WO2018134875A1 (ja) * | 2017-01-17 | 2018-07-26 | 三菱電機株式会社 | 瞼検出装置、居眠り判定装置、および瞼検出方法 |
| KR20180124381A (ko) * | 2017-05-11 | 2018-11-21 | 현대자동차주식회사 | 운전자의 상태 판단 시스템 및 그 방법 |
| US11219395B2 (en) * | 2017-07-19 | 2022-01-11 | Panasonic Intellectual Property Management Co., Ltd. | Sleepiness estimating device and wakefulness inducing device |
| WO2019028798A1 (zh) * | 2017-08-10 | 2019-02-14 | 北京市商汤科技开发有限公司 | 驾驶状态监控方法、装置和电子设备 |
| JP7329755B2 (ja) * | 2017-08-31 | 2023-08-21 | パナソニックIpマネジメント株式会社 | 支援方法およびそれを利用した支援システム、支援装置 |
| JP2019086813A (ja) * | 2017-11-01 | 2019-06-06 | 株式会社デンソー | 漫然運転抑制システム |
| CN111417992A (zh) * | 2017-11-30 | 2020-07-14 | 松下知识产权经营株式会社 | 图像处理装置、图像处理系统、摄像装置、摄像系统及图像处理方法 |
| JP6888542B2 (ja) * | 2017-12-22 | 2021-06-16 | トヨタ自動車株式会社 | 眠気推定装置及び眠気推定方法 |
| US10867195B2 (en) * | 2018-03-12 | 2020-12-15 | Microsoft Technology Licensing, Llc | Systems and methods for monitoring driver state |
| JP7124367B2 (ja) * | 2018-03-20 | 2022-08-24 | トヨタ自動車株式会社 | 作業支援システム、情報処理方法およびプログラム |
| US10970571B2 (en) * | 2018-06-04 | 2021-04-06 | Shanghai Sensetime Intelligent Technology Co., Ltd. | Vehicle control method and system, vehicle-mounted intelligent system, electronic device, and medium |
| JP6822453B2 (ja) * | 2018-09-10 | 2021-01-27 | ダイキン工業株式会社 | 空調制御装置および空気調和装置 |
-
2018
- 2018-12-12 WO PCT/JP2018/045595 patent/WO2020121425A1/ja not_active Ceased
- 2018-12-12 JP JP2020558850A patent/JP6906717B2/ja active Active
- 2018-12-12 CN CN201880099794.8A patent/CN113168680B/zh active Active
- 2018-12-12 DE DE112018008131.1T patent/DE112018008131B4/de active Active
-
2021
- 2021-05-21 US US17/326,795 patent/US11963759B2/en active Active
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2008212298A (ja) * | 2007-03-01 | 2008-09-18 | Toyota Central R&D Labs Inc | 眠気判定装置及びプログラム |
| JP2012221061A (ja) * | 2011-04-05 | 2012-11-12 | Canon Inc | 画像認識装置、画像認識方法、及びプログラム |
| JP2016081212A (ja) * | 2014-10-15 | 2016-05-16 | 日本電気株式会社 | 画像認識装置、画像認識方法、および、画像認識プログラム |
| JP2017162409A (ja) * | 2016-03-11 | 2017-09-14 | ヤンマー株式会社 | 顔の表情と動作の認識装置及び方法 |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2022152500A (ja) * | 2021-03-29 | 2022-10-12 | 公益財団法人鉄道総合技術研究所 | 覚醒度推定方法、覚醒度推定装置及び覚醒度推定プログラム |
| JP7443283B2 (ja) | 2021-03-29 | 2024-03-05 | 公益財団法人鉄道総合技術研究所 | 覚醒度推定方法、覚醒度推定装置及び覚醒度推定プログラム |
| WO2022250063A1 (ja) * | 2021-05-26 | 2022-12-01 | キヤノン株式会社 | 顔認証を行う画像処理装置および画像処理方法 |
| JP2022182960A (ja) * | 2021-05-26 | 2022-12-08 | キヤノン株式会社 | 画像処理装置、画像処理方法及びプログラム |
| JP7346528B2 (ja) | 2021-05-26 | 2023-09-19 | キヤノン株式会社 | 画像処理装置、画像処理方法及びプログラム |
| JPWO2023095229A1 (ja) * | 2021-11-25 | 2023-06-01 | ||
| WO2023095229A1 (ja) * | 2021-11-25 | 2023-06-01 | 三菱電機株式会社 | 覚醒度推定装置および覚醒度推定方法 |
| JP7403729B2 (ja) | 2021-11-25 | 2023-12-22 | 三菱電機株式会社 | 覚醒度推定装置および覚醒度推定方法 |
| WO2023238365A1 (ja) * | 2022-06-10 | 2023-12-14 | 富士通株式会社 | 顔特徴情報抽出方法、顔特徴情報抽出装置、及び顔特徴情報抽出プログラム |
Also Published As
| Publication number | Publication date |
|---|---|
| JPWO2020121425A1 (ja) | 2021-05-20 |
| US20210271865A1 (en) | 2021-09-02 |
| DE112018008131T5 (de) | 2021-07-29 |
| CN113168680B (zh) | 2025-01-07 |
| JP6906717B2 (ja) | 2021-07-21 |
| DE112018008131B4 (de) | 2022-10-27 |
| CN113168680A (zh) | 2021-07-23 |
| US11963759B2 (en) | 2024-04-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP6906717B2 (ja) | 状態判定装置、状態判定方法、及び状態判定プログラム | |
| KR102596897B1 (ko) | 모션벡터 및 특징벡터 기반 위조 얼굴 검출 방법 및 장치 | |
| US8891819B2 (en) | Line-of-sight detection apparatus and method thereof | |
| KR100888554B1 (ko) | 인식 시스템 | |
| CN103914676B (zh) | 一种在人脸识别中使用的方法和装置 | |
| JP6351243B2 (ja) | 画像処理装置、画像処理方法 | |
| US11875603B2 (en) | Facial action unit detection | |
| JP5787845B2 (ja) | 画像認識装置、方法、及びプログラム | |
| CN101271517A (zh) | 面部区域检测装置、方法和计算机可读记录介质 | |
| Komogortsev et al. | Multimodal ocular biometrics approach: A feasibility study | |
| TW201140511A (en) | Drowsiness detection method | |
| CN106682578A (zh) | 基于眨眼检测的人脸识别方法 | |
| Sharmila et al. | Eye blink detection using back ground subtraction and gradient-based corner detection for preventing CVS | |
| Liu et al. | A practical driver fatigue detection algorithm based on eye state | |
| Monwar et al. | Eigenimage based pain expression recognition | |
| CN113673378B (zh) | 基于双目摄像头的人脸识别方法、装置和存储介质 | |
| Tian et al. | Detecting good quality frames in videos captured by a wearable camera for blind navigation | |
| Kim et al. | Segmentation method of eye region based on fuzzy logic system for classifying open and closed eyes | |
| Caplova et al. | Recognition of children on age-different images: Facial morphology and age-stable features | |
| Adireddi et al. | Detection of eye blink using svm classifier | |
| Abboud et al. | Quality based approach for adaptive face recognition | |
| WO2021053806A1 (ja) | 情報処理装置、プログラム及び情報処理方法 | |
| CN110110623A (zh) | 一种基于Android平台的人脸识别系统及设计方法 | |
| Kumar et al. | Multiview 3D detection system for automotive driving | |
| Kulkarni et al. | An eye blink and head movement detection for computer vision syndrome |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18943245; Country of ref document: EP; Kind code of ref document: A1 |
| | ENP | Entry into the national phase | Ref document number: 2020558850; Country of ref document: JP; Kind code of ref document: A |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 18943245; Country of ref document: EP; Kind code of ref document: A1 |
| | WWG | Wipo information: grant in national office | Ref document number: 201880099794.8; Country of ref document: CN |