WO2022150874A1 - System and method for skin detection in images - Google Patents
System and method for skin detection in images
- Publication number
- WO2022150874A1 (PCT/AU2021/051563)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- spatial
- images
- pixel sub-regions
- human skin
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2503/00—Evaluating a particular growth phase or type of persons or animals
- A61B2503/20—Workers
- A61B2503/22—Motor vehicles operators, e.g. drivers, pilots, captains
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0075—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence by spectroscopy, i.e. measuring spectra, e.g. Raman spectroscopy, infrared absorption spectroscopy
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0077—Devices for viewing the surface of the body, e.g. camera, magnifying lens
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/1113—Local tracking of patients, e.g. in a hospital or private home
- A61B5/1114—Tracking parts of the body
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/1126—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb using a particular sensing technique
- A61B5/1128—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb using a particular sensing technique using image analysis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/163—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/168—Evaluating attention deficit, hyperactivity
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/18—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state for vehicle drivers or machine operators
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J3/00—Spectrometry; Spectrophotometry; Monochromators; Measuring colours
- G01J3/02—Details
- G01J3/10—Arrangements of light sources specially adapted for spectrometry or colorimetry
- G01J3/108—Arrangements of light sources specially adapted for spectrometry or colorimetry for measurement in the infrared range
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
Definitions
- the present application relates to subject imaging systems and in particular to liveness detection in images.
- Embodiments of the present invention are particularly adapted for detecting liveness of subjects in images such as in facial recognition systems to identify real persons from fakes.
- the invention is applicable in broader contexts and other applications.
- a second application that is becoming more common in society relates to systems to monitor the interior of vehicles such as cars, trucks, trains, aircraft, for safety, comfort and/or convenience purposes.
- these systems include monitoring regions inside or outside the vehicle for occupancy and checking driver attention for signs of driver impairment.
- These systems are particularly important for safety as they are able to detect whether the driver of a car is able to perform the driving task adequately.
- a current limitation of these infrared monitoring systems is that they may be fooled into operating with fake faces, such as pictures of faces, masks or puppets/mannequins that show realistic face appearances when viewed on a conventional 2D image sensor.
- a semi-autonomous vehicle may drive automatically under limited conditions, including where the driver is in a supervisory role and required to pay sufficient attention to the road and the car’s automatic control system to maximize safety.
- an infrared driver monitoring system is typically employed to monitor the driver’s attention.
- a plausible scenario is a driver attempting to fool such a system into deciding that they are paying adequate attention to the road, when they may wish to take a nap, look at their phone, etc.
- Such a scenario may be detected by “liveness” detection of the driver being monitored.
- There are already existing solutions in the category of liveness detection.
- a common technique to detect liveness is to observe the person for signs of life-like movement, such as facial expressions, eye-movements, blinking. These methods may be referred to as “behaviour-based” liveness detection methods.
- Another class of method employs “deep-learned” neural networks trained on large quantities of real versus fake images of humans to make decisions in shorter time periods. However, this technique requires time and cost to build up the dataset and train the network.
- a subject monitoring system including: a near infrared illumination source configured to illuminate a scene with infrared light having spatial beam characteristics to generate a spatial pattern; an image sensor configured to capture one or more images of the scene when illuminated by the illumination source; and a processor configured to process the captured one or more images by: determining a degree of modification of the spatial pattern by objects in the scene within pixel sub-regions of an image; and classifying one or more pixel sub-regions of the image as including human skin or other material based on the degree of modification of the spatial pattern identified in that pixel sub-region.
- the term “human skin” is intended to refer to live human skin, being a real human surface identified in images. This is in contrast to non-live human skin such as a paper image or photograph of a human that is imaged by a camera.
- determining a degree of modification of the spatial pattern includes determining a spatial power spectral density of the pixel sub-regions.
- Classifying a pixel sub-region as including human skin may include determining a spatial frequency response of a spatial filter matched to the power spectral density of the spatial pattern.
- the classification includes applying a threshold to the spatial frequency response of the spatial filter, wherein a spectral power of the spatial frequency response that is below the threshold is designated as including human skin.
- the threshold is determined by a statistical analysis of the filter response against a database containing images of human skin versus other materials.
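By way of illustration only, the following Python sketch shows one way such a threshold could be derived offline from labelled filter responses. The percentile rule, the target false-accept rate and the synthetic statistics are assumptions made for this example; the patent does not prescribe a specific procedure.

```python
import numpy as np

def choose_threshold(skin_responses, other_responses, target_fpr=0.05):
    """Pick a spectral-power threshold from labelled filter responses.

    skin_responses / other_responses: 1-D arrays of matched-filter output
    power measured over a database of skin and non-skin image regions.
    Skin suppresses the projected pattern, so its responses sit low; a
    region is accepted as skin when its response falls below the threshold.
    """
    # Set the threshold so that only target_fpr of non-skin materials
    # fall below it (i.e. would be wrongly accepted as skin).
    threshold = np.percentile(other_responses, 100.0 * target_fpr)
    # Report how much real skin that threshold still accepts.
    tpr = float(np.mean(skin_responses < threshold))
    return threshold, tpr

# Illustrative synthetic statistics: skin responses cluster low,
# other materials cluster high.
rng = np.random.default_rng(0)
skin = rng.normal(0.2, 0.05, 1000)
other = rng.normal(0.8, 0.15, 1000)
thr, tpr = choose_threshold(skin, other)
print(f"threshold={thr:.3f}, true-positive rate={tpr:.2%}")
```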
- the processor includes a machine learned classifier module trained to detect human skin versus other materials based on the spatial power spectral density of pixel sub-regions under different illumination conditions.
- the illumination conditions include an exposure time of the image sensor.
- the illumination conditions include an intensity of the infrared light emitted from the illumination source.
- the processor is configured to perform the step of determining whether or not a human subject is present in the imaged scene based on a detected level of human skin in the image.
- the processor is configured to perform face detection to detect a facial pixel region in the one or more images and wherein the one or more pixel subregions includes the facial pixel region.
- the processor is configured to estimate a distance to the face from the image sensor and wherein the threshold of spatial filtering is modified based on the estimated distance.
- the infrared light has spectral power in the wavelength range from 650 nm to 1200 nm and wherein the spatial pattern includes a spatial wavelength in the range of 1 mm to 10 mm.
- the illumination source includes a laser.
- the laser is a vertical cavity surface emitting laser (VCSEL).
- the spatial pattern includes a speckle pattern produced from diffuse reflection of the laser source from a surface.
- the processor is configured to control an exposure time of the image sensor and/or an illumination time of the illumination source to enhance or reduce the amount of speckle detected in the images.
- the illumination source includes a light emitting diode (LED) spatially encoded with the spatial pattern.
- the illumination source is configured to selectively adjust an intensity of the spatial pattern.
- the system includes a diffractive element disposed adjacent the output of the illumination source to generate all or part of the spatial pattern.
- the subject monitoring system is a driver monitoring system of a vehicle. In other embodiments, the subject monitoring system is an occupant monitoring system of a vehicle.
- a subject monitoring system including: a near infrared laser source configured to illuminate a scene with infrared light; an image sensor configured to capture one or more images of the scene when illuminated by the laser source; and a processor configured to process the captured one or more images by: determining a degree of modification of a laser speckle pattern by objects in the scene within pixel sub-regions of an image; and classifying one or more pixel sub-regions of the image as including human skin or other material based on the degree of modification of the speckle pattern identified in that pixel sub-region.
- a method of detecting human skin including the steps: illuminating a scene from a light source with infrared light having spatial beam characteristics to generate a spatial pattern; controlling an image sensor to capture one or more images of the scene when illuminated by the illumination source; and processing, by a processor, the captured one or more images by: determining a degree of presence or modification of the spatial pattern by objects in the scene within pixel sub-regions of an image; and classifying one or more pixel sub-regions of the image as including human skin or other material based on the degree of presence or modification of the spatial pattern identified in that pixel sub-region.
- a method of detecting human skin including the steps: illuminating a scene from a near infrared light source with infrared light; controlling an image sensor to capture one or more images of the scene when illuminated by the laser source; and processing, by a processor, the captured one or more images by: determining a degree of modification of a laser speckle pattern by objects in the scene within pixel sub-regions of an image; and classifying one or more pixel sub-regions of the image as including human skin or other material based on the degree of modification of the speckle pattern identified in that pixel sub-region.
- the step of determining a presence or modification of a laser speckle pattern or an encoded spatial pattern in the pixel sub-regions includes applying a matched filter to the spectral or spatial pixel data of the pixel sub-regions. In other embodiments, this step is performed by a machine learned classifier.
- the methods include the step of performing, by the processor, face detection on the one or more images to detect one or more facial regions and designating one or more of the pixel sub-regions as being facial pixel sub-regions which fall wholly or partially within the facial region.
- the methods include the step of transforming image pixel data for the pixel sub-regions to a spatial frequency domain using a Fast Fourier Transform (FFT) or other spectral transform method.
- a subject monitoring system including: a near infrared illumination source configured to illuminate a scene with infrared light having a predefined spatial optical pattern; an image sensor configured to capture one or more images of the scene when illuminated by the illumination source; and a processor configured to process the captured one or more images by: determining one or more properties of the spatial optical pattern within pixel sub-regions of an image based on reflection of the infrared light from surfaces in the scene; and classifying one or more pixel sub-regions of the image as including a surface of human skin based on the one or more properties of the spatial optical pattern identified in the one or more pixel sub-regions.
- a method of detecting human skin in images captured under illumination from an infrared light source with infrared light having spatial beam characteristics to generate a spatial pattern including the steps: receiving one or more of the images; processing, at a processor, the one or more images to determine a degree of presence or modification of the spatial pattern by objects in the images within pixel sub-regions of the one or more images; and classifying one or more pixel sub-regions of the one or more image as including human skin or other material based on the degree of presence or modification of the spatial pattern identified in that pixel sub-region.
- the infrared light source is a laser and wherein the spatial pattern includes a speckle pattern produced from diffuse reflection of the laser source from a surface.
- Figure 1 is a perspective view of the interior of a vehicle having a driver monitoring system including a camera and a light source installed therein;
- Figure 2 is a driver’s perspective view of an automobile dashboard having the driver monitoring system of Figure 1 installed therein;
- Figure 3 is a schematic functional view of a driver monitoring system according to Figures 1 and 2;
- Figure 4 is an image of a person next to a mannequin illustrating a laser speckle effect visible on the mannequin but not the person;
- Figure 5 is an image of a person holding a piece of paper illustrating a laser speckle effect visible on the paper but not the person;
- Figure 6 is a plan view of the driver monitoring system of Figures 1 to 3 showing a camera field of view and a VCSEL illumination field on a subject;
- Figure 7 is a process flow diagram illustrating the primary steps in a method of detecting human skin in images based on detection of a laser speckle pattern;
- Figure 8 is an illustration of an image of a subject divided into an array of pixel sub-regions;
- Figure 9 is an illustration of the image of Figure 8 with pixel sub-regions corresponding to the subject’s face region being shaded;
- Figure 10 is a plan view of a driver monitoring system showing a camera field of view and an LED beam encoded with a spatial pattern using a diffractive optical element to illuminate a subject;
- Figure 11 is a process flow diagram illustrating the primary steps in a method of detecting human skin in images based on detection of an encoded spatial pattern; and
- Figure 12 is a process flow diagram illustrating the primary steps in an image processing algorithm to detect human skin in images.
- the embodiments of the invention described herein relate to detection of skin in a scene imaged by a digital image sensor.
- the embodiments will be described with specific reference to subject monitoring systems such as driver monitoring systems.
- One example is monitoring a driver or passengers of an automobile or, for example, other vehicles such as a bus, train or airplane.
- the described system and method may be applied to other systems that image humans such as biometric security systems.
- Referring to Figures 1 to 3, there is illustrated a driver monitoring system 100 for capturing images of a vehicle driver 102 during operation of a vehicle 104.
- System 100 is further adapted for performing various image processing algorithms on the captured images such as facial detection, facial feature detection, facial recognition, facial feature recognition, facial tracking or facial feature tracking, such as tracking a person’s eyes.
- Example image processing routines are described in US Patent 7,043,056 to Edwards et al. entitled “Facial Image Processing System” and assigned to Seeing Machines Pty Ltd, the contents of which are incorporated herein by way of cross-reference.
- system 100 includes an imaging camera 106 that is positioned on or in the instrument display of the vehicle dash 107 and oriented to capture images of the driver’s face in the infrared wavelength range to identify, locate and track one or more human facial features.
- Camera 106 may be a conventional CCD or CMOS based digital camera having a two dimensional array of photosensitive pixels and optionally the capability to determine range or depth (such as through one or more phase detect elements).
- the photosensitive pixels are capable of sensing electromagnetic radiation in the infrared range.
- Camera 106 may also be a three dimensional camera such as a time-of-flight camera or other scanning or range-based camera capable of imaging a scene in three dimensions.
- camera 106 may be replaced by a pair of like cameras operating in a stereo configuration and calibrated to extract depth.
- camera 106 is preferably configured to image in the infrared wavelength range, it will be appreciated that, in alternative embodiments, camera 106 may image in the visible range.
- system 100 in a first embodiment, also includes an infrared light source 108 such as a Vertical Cavity Surface Emitting Laser (VCSEL), Light Emitting Diode (LED) or other light source.
- multiple VCSELs, LEDs or other light sources may be employed to illuminate driver 102.
- other low powered coherent infrared light sources may be used as light source 108.
- Light source 108 is preferably located proximate to the camera on vehicle dash 107 such as within a distance of 5 mm to 50 mm.
- Light source 108 is adapted to illuminate driver 102 with infrared radiation, during predefined image capture periods when camera 106 is capturing an image, so as to enhance the driver’s face to obtain high quality images of the driver’s face or facial features. Operation of camera 106 and light source 108 in the infrared range reduces visual distraction to the driver. Operation of camera 106 and light source 108 is controlled by an associated controller 112 which comprises a computer processor or microprocessor and memory for storing and buffering the captured images from camera 106.
- camera 106 and light source 108 may be manufactured or built as a single unit 111 having a common housing.
- the unit 111 is shown installed in a vehicle dash 107 and may be fitted during manufacture of the vehicle or installed subsequently as an after-market product.
- the driver monitoring system 100 may include one or more cameras and light sources mounted in any location suitable to capture images of the head or facial features of a driver, subject and/or passenger in a vehicle.
- cameras and light sources may be located on a steering column, rearview mirror, center console or driver's side A-pillar of the vehicle.
- the light source includes a single VCSEL or LED.
- the light source (or each light source in the case of multiple light sources) may each include a plurality of individual VCSELs and/or LEDs.
- a system controller 112 acts as the central processor for system 100 and is configured to perform a number of functions as described below.
- Controller 112 is located within the dash 107 of vehicle 104 and may be connected to or integral with the vehicle onboard computer.
- controller 112 may be located within a housing or module together with camera 106 and light source 108. The housing or module is able to be sold as an after-market product, mounted to a vehicle dash and subsequently calibrated for use in that vehicle.
- controller 112 may be an external computer or unit such as a personal computer.
- Controller 112 may be implemented as any form of computer processing device or portion of a device that processes electronic data, e.g., from registers and/or memory to transform that electronic data into other electronic data that, e.g., may be stored in registers and/or memory.
- controller 112 includes a microprocessor 114 (or multiple microprocessors, integrated circuits or chips operating in conjunction with each other), executing code stored in memory 116, such as random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and other equivalent memory or storage systems as should be readily apparent to those skilled in the art.
- Microprocessor 114 of controller 112 includes a vision processor 118 and a device controller 120.
- Vision processor 118 and device controller 120 represent functional elements which are both performed by microprocessor 114.
- vision processor 118 and device controller 120 may be realized as separate hardware such as microprocessors in conjunction with custom or specialized circuitry.
- Vision processor 118 is configured to process the captured images to perform the driver monitoring; for example to determine a three dimensional head pose and/or eye gaze position of the driver 102 within the monitoring environment. To achieve this, vision processor 118 utilizes one or more eye gaze determination algorithms. This may include, by way of example, the methodology described in US Patent 7,043,056 to Edwards et al. entitled “Facial Image Processing System” and assigned to Seeing Machines Pty Ltd. Vision processor 118 may also perform various other functions including determining attributes of the driver 102 such as eye closure, blink rate and tracking the driver’s head motion to detect driver attention, sleepiness or other issues that may interfere with the driver safely operating the vehicle.
- Device controller 120 is configured to control camera 106 and to selectively actuate light source 108 in a sequenced manner in sync with the exposure time of camera 106. In the case where two VCSELs or LEDs are provided, the light sources may be controlled to activate alternately during even and odd image frames to perform a strobing sequence. Other illumination sequences may be performed by device controller 120, such as L,L,R,R,L,L,R,R... or L,R,0,L,R,0,L,R,0..., as illustrated in the sketch below.
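As a rough illustration only, the following Python sketch cycles a hypothetical two-source controller through the sequences named above. The function and dictionary names are invented for this example; they are not part of the described system.

```python
from itertools import cycle

# Hypothetical per-frame schedules: "L"/"R" fire the left/right source
# for that frame, "0" leaves both sources off.
SEQUENCES = {
    "alternate": ["L", "R"],
    "pairs": ["L", "L", "R", "R"],
    "with_off": ["L", "R", "0"],
}

def illumination_for_frames(name, n_frames):
    """Return which source to fire for each of n_frames captures."""
    pattern = cycle(SEQUENCES[name])
    return [next(pattern) for _ in range(n_frames)]

print(illumination_for_frames("pairs", 8))  # ['L','L','R','R','L','L','R','R']
```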
- light source 108 is preferably electrically connected to device controller 120 but may also be controlled wirelessly by controller 120 through wireless communication such as Bluetooth™ or WiFi™ communication.
- device controller 120 activates camera 106 to capture images of the face of driver 102 in a video sequence
- light source 108 is activated and deactivated in synchronization with consecutive image frames captured by camera 106 to illuminate the driver during image capture.
- device controller 120 and vision processor 118 provide for capturing and processing images of the driver to obtain driver state information such as drowsiness, attention and gaze position during ordinary operation of vehicle 104.
- the functionality of controller 112 may be performed by an onboard vehicle computer system which is connected to camera 106 and light source 108.
- Lasers such as VCSELs are highly spatially and temporally coherent and can produce monochromatic light. When incident onto a diffuse or rough surface, laser beams can produce a “speckle” pattern. A speckle pattern is produced as a result of mutual interference of coherent wavefronts reflected off the diffuse surface. At some points, the coherent wavefronts constructively interfere, producing bright spots, while, at other points, the coherent wavefronts destructively interfere, producing dark spots. The resultant speckle pattern is observed as a random pattern of bright and dark spots across a diffuse surface being imaged under laser light.
- VCSELs typically emit optical power at wavelengths in the range of 650 nm to 1200 nm, which is predominantly in the near infrared (NIR) range.
- Reflection of NIR light from biological tissues such as skin includes penetration and scattering from sub-surface tissues in addition to surface tissues.
- This multilayer reflection means frequencies of spatially encoded information in the light (such as a projected noise pattern) are averaged out and effectively low-pass filtered by the reflection process.
- the process of scattering decoheres (blurs) the information encoded in the light.
- the reflections off multiple layers of surface and sub-surface tissue act to average out any speckle patterns that are produced by each diffuse layer of tissue.
- biological tissues such as human skin have a low pass filtering effect. Spatial frequencies that are similar in dimension to the penetration depth of the light will be more highly attenuated by this physical process.
- the penetration depth of NIR light in human skin is typically in the range of 1-5 mm but may be up to 10 mm.
- Embodiments of the present invention leverage this low-pass filtering effect of NIR wavelengths, as described below.
- Figure 6 illustrates a subject monitoring system 600 configured to classify image regions as containing human skin or other predefined material.
- System 600 includes many features similar to system 100 and corresponding features are designated with like reference numerals for simplicity.
- System 600 is configured to perform a method 700 of detecting human skin as illustrated in Figure 7 and the operation of system 600 will be described herein with reference to the steps of method 700.
- system 600 may be capable of performing a wide variety of other functions such as drowsiness and attention monitoring based on detection of eye gaze vectors, facial features and/or head pose vectors of a subject detected in images.
- the light source is a VCSEL 108a.
- VCSEL 108a represents a near infrared laser source that, at step 701, is configured to illuminate a scene, including subject 102, with highly coherent infrared light such as NIR in the range of 650 nm to 1200 nm.
- the infrared light from VCSEL 108a has spatial beam characteristics that produce a speckle pattern on diffuse surfaces due to the effect described above. Typically these spatial beam characteristics include a high spatial coherence which is inherent to VCSELs.
- the infrared light includes, as spatial beam characteristics a spatial pattern such as a spatially encoded pattern or natural spatial pattern inherent to the VCSEL.
- camera 106 includes an image sensor configured to capture one or more images of the scene including subject 102 when illuminated by VCSEL 108a.
- Vision processor 118 is configured to process the captured images at steps 703 and 704.
- the image processing includes, at step 703, determining a degree of presence or modification of the speckle pattern by objects in the scene within predefined pixel sub-regions of an image.
- the presence of the speckle pattern may refer simply to the positive detection of a speckle pattern within a pixel sub-region or it may refer to an amount (e.g. an intensity) of speckle pattern detected.
- a threshold intensity of speckle pattern may be set before vision processor 118 determines that a speckle pattern is present in that pixel sub-region. This is discussed in more detail below.
- a modification of the speckle pattern may include a change in intensity or spatial distribution of the speckle pattern relative to other pixel sub-regions or relative to one or more predefined or reference speckle pattern parameters.
- the pixel sub-regions include regions of the image divided up by pixel position.
- an image 800 may be divided into a grid of square or rectangular sub-regions 801, and a determination of the degree of presence or modification of the speckle pattern is performed in each sub-region, as sketched below.
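A minimal sketch of this tiling step, assuming a single-channel image held as a 2-D array; the 64-pixel tile size is an illustrative choice, not a value taken from the patent.

```python
import numpy as np

def split_into_subregions(image, tile=64):
    """Divide a 2-D image into square pixel sub-regions.

    Edge tiles smaller than `tile` pixels are dropped so every
    sub-region has identical spatial-frequency resolution.
    Returns a dict mapping (row, col) grid indices to pixel blocks.
    """
    h, w = image.shape
    regions = {}
    for r in range(h // tile):
        for c in range(w // tile):
            regions[(r, c)] = image[r*tile:(r+1)*tile, c*tile:(c+1)*tile]
    return regions

frame = np.zeros((480, 640), dtype=np.uint8)  # stand-in for a camera frame
tiles = split_into_subregions(frame)
print(len(tiles))  # 7 rows x 10 cols = 70 sub-regions
```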
- vision processor 118 is configured to perform face detection to detect a facial pixel region in the one or more images, and one or more of the pixel sub-regions can be characterised as being within or outside the facial pixel region.
- the facial pixel region may be represented as a single pixel sub-region or divided into multiple pixel sub-regions corresponding to the detected facial region.
- Figure 9 illustrates schematically an example of a plurality of pixel sub-regions being characterised as being within a facial pixel region as indicated by the shaded pixel regions.
- vision processor 118 classifies one or more pixel sub-regions of the image as including human skin or other material based on the degree of presence or modification of the speckle pattern identified in that pixel sub-region.
- Processor 118 classifies a pixel sub-region as containing human skin when a sufficient presence of a speckle pattern is detected within the pixel sub-region. This may be determined by a threshold value or a confidence measure, which may involve pattern recognition and/or spectral analysis as described below.
- Determining a degree of presence or modification of the speckle pattern at step 703 may include determining, by vision processor 118, a spatial power spectral density of the pixel sub-regions.
- the power spectral density may be obtained by a number of known image processing techniques such as the Fast Fourier Transform method. This spectral technique involves determining the power distribution (or pixel intensity) as a function of spatial frequency across the pixels corresponding to the or each pixel sub-region.
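By way of illustration, a direction-averaged (radial) power spectral density of one sub-region can be computed with a 2-D FFT as sketched below. Averaging power over direction is an assumption of this example, suited to a direction-less noise pattern such as speckle.

```python
import numpy as np

def radial_power_spectrum(subregion):
    """Spatial power spectral density of one pixel sub-region.

    Returns (spatial_frequency_bins, mean_power), where power at each
    radial frequency is averaged over direction.
    """
    block = subregion.astype(float)
    block -= block.mean()                     # remove the DC component
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(block))) ** 2
    h, w = block.shape
    yy, xx = np.indices((h, w))
    radius = np.hypot(yy - h / 2, xx - w / 2).astype(int)
    # Average power within integer-radius annuli.
    power = np.bincount(radius.ravel(), weights=spectrum.ravel())
    counts = np.bincount(radius.ravel())
    return np.arange(len(power)), power / np.maximum(counts, 1)
```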
- the speckle pattern will be imaged as a spatial noise pattern across the pixels, and the size, intensity and location of the speckle dots will depend on a number of factors.
- the speckle pattern will have power at a range of spatial frequencies or wavelengths and the filtering effect of human skin will have prominent effects on wavelengths in the range of 1 mm to 10 mm.
- the speckle pattern may be characterised as white spatial noise having a substantially flat power spectral density, or as pink spatial noise having a 1/k shaped power spectral density (where k represents a spatial frequency parameter).
- determining a presence of the speckle pattern may include determining a level or shape of the noise component of the power spectral density present in a pixel sub-region across a range of spatial frequencies or wavelengths.
- Determining a modification of a speckle pattern may include capturing multiple images of the scene under different illumination conditions (e.g. illumination intensity, illumination time, camera exposure time or driving VCSEL 108a in different modes) and comparing the power spectral density of the different pixel sub-regions across the different images.
- the classification at step 704 may be performed in a number of different ways.
- classifying a pixel sub-region as including human skin includes developing an image filter that is matched to the speckle induced noise pattern observed in the received image (or at least the one or more pixel sub-regions). In this manner, a frequency response of the spatial filter can be matched to the power spectral density of the spatial noise pattern resulting from the speckle pattern.
- the image filter may be implemented in code as part of the image processing algorithm.
- the filter may be configured to have a reduced output spatial frequency response (versus a majority of alternate materials) when the image region contains human skin.
- a threshold may be applied to the output spatial frequency response of the image filter that determines if the received image is human skin or not.
- a detected spectral power of the spatial frequency response that is below the threshold may be designated as including human skin.
- the threshold may be pre-determined by statistical analysis of the filter’s response against a database containing images of human skin versus other materials under the specific illumination conditions of the system, in order to improve the classification performance.
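A minimal sketch of such a matched filter, assuming the expected speckle power spectral density (`pattern_psd`) has been measured beforehand from a reference non-skin surface and shares the frequency bins of the sub-region PSD. Realising the matched filter as a normalised dot-product projection is an assumption of this example, not a detail taken from the patent.

```python
import numpy as np

def matched_filter_response(psd, pattern_psd):
    """Project a sub-region's PSD onto the expected speckle PSD.

    psd and pattern_psd must share the same frequency bins. A low
    response means the projected pattern has been suppressed, which is
    the signature of skin under this scheme.
    """
    template = pattern_psd / np.linalg.norm(pattern_psd)
    return float(np.dot(psd, template))

def classify_subregion(psd, pattern_psd, threshold):
    """True -> classified as human skin (pattern largely absent)."""
    return matched_filter_response(psd, pattern_psd) < threshold
```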
- vision processor 118 includes a machine learned classifier module trained to detect human skin versus other materials based on the spatial power spectral density of pixel sub-regions under different illumination conditions. This machine learned classifier is used to perform the classification at step 704.
- the classifier may be an artificial neural network, decision tree or probabilistic classifier such as a Bayesian network.
- the classifier may be trained by feeding a dataset of images of actual human faces (including facial hair and sunglasses), images of face masks, photographs and images of other materials commonly used to spoof a human such as paper, plastics and fabrics.
- the images may be captured under differing illumination conditions such as at different wavelengths of light, angles of incidence, ambient light conditions, intensity of emitted light and exposure time of the image sensor.
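As one hedged realisation of such a classifier, the sketch below trains a random forest on radial-PSD features concatenated with the illumination conditions under which each frame was captured. The patent names neural networks, decision trees and Bayesian networks as candidates; the random forest, the feature layout and the scikit-learn usage here are illustrative choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_skin_classifier(psd_features, exposure_s, intensity, labels):
    """Train a skin-versus-other classifier on PSD + illumination features.

    psd_features: (n_samples, n_bins) radial PSDs of pixel sub-regions.
    exposure_s / intensity: per-sample illumination conditions.
    labels: 1 for human skin, 0 for masks, paper, plastics, fabrics, etc.
    """
    X = np.column_stack([
        np.asarray(psd_features),
        np.asarray(exposure_s)[:, None],  # sensor exposure time
        np.asarray(intensity)[:, None],   # emitted NIR intensity
    ])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf   # classify new sub-regions with clf.predict(...)
```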
- the detection and classification may also be performed in the spatial domain.
- the presence of a speckle pattern may be characterised as an intensity modulation added to the image of the actual scene or objects within the scene. Determining a presence or modification of the speckle pattern may include detection of the amplitude of this intensity modulation across the pixel sub-regions.
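One standard spatial-domain measure of such intensity modulation is the speckle contrast K = σ/μ of the pixel intensities within a sub-region; using it here is our assumption of a concrete realisation. Fully developed speckle gives K near 1, while skin's sub-surface averaging drives K down.

```python
import numpy as np

def speckle_contrast(subregion):
    """Speckle contrast K = sigma / mean of pixel intensity.

    High K: a strong speckle (or other noise) modulation is present.
    Low K: the modulation has been averaged out, as expected for skin.
    """
    block = np.asarray(subregion, dtype=float)
    return float(block.std() / max(block.mean(), 1e-9))
```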
- device controller 120 is configured to control an exposure time of the image sensor of camera 106 and/or an illumination time of VCSEL 108a to enhance or reduce the potential amount of speckle detected in the images.
- device controller 120 may also be configured to control an illumination intensity of VCSEL 108a to enhance or reduce the potential amount of speckle detected in images.
- a confidence or likelihood estimate is also calculated. This confidence estimate may be based on the number and/or location of the pixel sub-regions classified as including human skin.
- the output of classification step 704 may also include this confidence measure and, if the confidence measure is below a threshold, classification step 704 may output a negative result on the basis that the confidence is too low to confirm the presence of human skin.
- a threshold number of pixels or pixel sub-regions are required to classify the image as including human skin to achieve a threshold confidence level.
- the location of the pixel sub-regions classified as including human skin are taken into account to determine a likelihood that human skin has been detected. For example, if a cluster of adjacent pixel sub-regions are classified as including human skin, then a higher confidence is output.
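A sketch of such a cluster-weighted confidence, assuming the per-sub-region decisions are arranged in a boolean grid and that SciPy is available; the saturation rule and minimum cluster size are illustrative parameters.

```python
import numpy as np
from scipy import ndimage

def skin_confidence(skin_mask, min_cluster=4):
    """Confidence from the layout of sub-regions classified as skin.

    skin_mask: 2-D boolean grid, one cell per pixel sub-region.
    Connected clusters of skin cells count more than isolated hits,
    reflecting the heuristic described in the text.
    """
    labels, n = ndimage.label(skin_mask)
    if n == 0:
        return 0.0
    sizes = ndimage.sum(skin_mask, labels, index=range(1, n + 1))
    # Scale confidence by the largest contiguous cluster, saturating
    # once it reaches min_cluster cells (an illustrative choice).
    return float(min(sizes.max() / min_cluster, 1.0))
```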
- vision processor 118 may be configured to perform face detection to detect a facial pixel region in the one or more images. Detection of this facial region may be used as an initial filter to only focus the classification on pixels or pixel sub-regions within this facial region (see Figure 9 for an exemplary facial region).
- detection of the facial region may be used as a subsequent step to improve the classification of pixel sub-regions being human skin or not.
- the classification of skin versus non-skin in pixel sub-regions is performed independently of any facial regions and any pixel sub-regions classified as including human skin are then correlated with their position relative to a face detected in the image. If a pixel sub-region that is classified as including human skin is located within a region identified as a facial region, then that pixel sub-region may be designated with a higher likelihood of being human skin. Alternatively, if a pixel sub-region classified as including human skin is located outside a region identified as a facial region, then that pixel sub-region may be designated with a lower likelihood of being human skin.
- vision processor 118 is configured to estimate a distance to a face or other object identified in an image.
- because the speckle pattern is an interference pattern, its appearance varies with the distance to the reflecting surface; information on the distance may therefore be fed to the classifier or image filter for improved classification of the object material based on the presence or modification of a speckle pattern.
- the threshold of spatial filtering may be modified based on the estimated distance.
- the output of method 700 may solely be used to perform a liveness assessment to detect the presence of a real human in the captured images.
- the output of method 700 provides an input to a broader liveness detection algorithm or system that is configured to determine whether or not a human subject is present in the imaged scene.
- the broader liveness detection algorithm may take in other inputs such as eye and head movements of the subject over a sequence of images and collate the inputs to provide an estimate or probability that there is a human present in the images.
- Because the speckle pattern generally adds spatial noise to an image, it is advantageous to reduce speckle when performing subject monitoring so that facial features can be more clearly distinguished. For this reason, it is desirable to be able to actively switch the VCSEL between a high speckle mode for detecting human skin and a low speckle mode for performing subject monitoring. This can be achieved as described below.
- VCSELs are coherent laser light sources. They emit light from the surface of a semiconductor.
- a single VCSEL device typically has many individual sources. In any VCSEL device design, these sources can be intentionally designed to be synchronized and in-phase, or synchronized and out-of-phase (by some degree), or unsynchronized (incoherent).
- it is possible to reduce the speckle by driving the VCSELs into a multi-transverse mode emission regime to reduce the amount of mode overlap, or even into an incoherent regime where transverse modes break down and individual emitters support multiple beamlets. In such a mode, the coherence of the laser output is reduced, which reduces the coherent backscattering that results in the speckle effect.
- the exposure time of camera 106 and/or illumination pulse time of VCSEL 108a may also be controlled to selectively increase or decrease the amount of speckle effect from diffuse surfaces being imaged. Therefore, by selectively controlling these parameters, system 100 may be switched between a high speckle skin detection mode and a low speckle subject monitoring mode.
- the incoherent light (having random phase) produced by LEDs makes them a good ambient illumination source, particularly for subject monitoring systems.
- Broadband light sources such as LEDs have a low coherence and produce individual speckle or noise patterns for each wavelength.
- the different speckle patterns tend to average each other out and, as a result, no distinct speckle pattern can generally be observed in images illuminated by LEDs.
- Because biological tissues have a low-pass filtering effect for the reasons mentioned above, it is possible to design a system in which light from an incoherent LED can be used to distinguish human skin from other materials in images.
- the LED may be driven by device controller 120 to actively encode a spatial and/or temporal pattern into an illumination beam. This avoids the need to use a laser as the light source and LEDs may be used in place of light source 108.
- Such a system 1000 is illustrated in Figure 10 wherein light source 108 is represented as NIR emitting LED 108b. System 1000 is configured to perform method 1100 illustrated in Figure 11.
- a diffractive optical element 1002 such as a pattern generator is positioned in the path of LED beam 1004 to produce a structured beam 1006 having an encoded spatial pattern.
- the encoded pattern may include a grid of dots or collimated beamlets or some other periodic structure.
- the spatial pattern encoded into beam 1006 preferably includes a spatial structure having spatial wavelengths in the range of 1 mm to 10 mm, which is the range of primary absorption of human skin for NIR light.
- diffractive optical element 1002 may be integral with LED 108b.
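The pixel period at which a 1 mm to 10 mm pattern appears in the image depends on the subject distance and the camera optics, which is also why the spatial filtering threshold may be adjusted with estimated distance, as noted earlier. A thin-lens sketch, with assumed (illustrative) focal length and pixel pitch that are not values from the patent:

```python
def pattern_period_in_pixels(spatial_wavelength_mm, subject_distance_mm,
                             focal_length_mm=8.0, pixel_pitch_um=3.0):
    """Image-plane period (in pixels) of a pattern whose wavelength on
    the subject is spatial_wavelength_mm, using the thin-lens
    magnification f / d. Focal length and pixel pitch are illustrative
    camera parameters."""
    magnification = focal_length_mm / subject_distance_mm
    period_mm_on_sensor = spatial_wavelength_mm * magnification
    return period_mm_on_sensor * 1000.0 / pixel_pitch_um

# A 5 mm pattern on a driver 700 mm away:
print(f"{pattern_period_in_pixels(5.0, 700.0):.1f} px")  # ~19 px
```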
- device controller 120 is configured to control LED 108b to temporally modulate LED beam 1004 to encode a temporal pattern to beam 1004.
- the subject 102 is illuminated with NIR light from structured beam 1006 having the encoded spatial pattern.
- device controller 120 controls the image sensor of camera 106 to image the returned light from subject 102 and the surrounding scene.
- vision processor 118 is configured to process the captured images and determine a degree of presence or modification of the encoded spatial and/or temporal pattern by objects in the scene within pixel sub-regions of an image.
- Similar processes as described above for detecting speckle may be used to detect the spatial pattern here.
- a spectral analysis may be performed to determine the spatial power spectral density of pixel sub-regions. Filtering effects of human skin on NIR light will be apparent as a dip in spectral power at spatial wavelengths within the range of 1 mm to 10 mm.
- vision processor 118 is configured to classify one or more pixel sub-regions of the image as including human skin or other material based on the degree of presence or modification of the spatial pattern identified in that pixel sub-region.
- the classification may include developing an image filter that is matched to the spatial noise pattern.
- vision processor 118 may use a machine learned classifier module trained to detect human skin versus other materials based on the spatial power spectral density of pixel sub-regions under different illumination conditions.
- the machine learned classifier may be trained to detect intensity variations in the spatial pattern across different pixel sub-regions in the images.
- Figure 12 illustrates the primary steps of an exemplary algorithm 1200 to detect the presence of human skin in images.
- device controller 120 controls camera 106 to capture images of subject 102 and the surrounding scene under illumination from controlled light source 108 such as VCSEL 108a or LED 108b described above.
- the captured images (or a subset thereof) are processed by vision processor 118 to determine pixel sub-regions within the captured images.
- face detection is performed to detect a facial region of subject 102 and designate one or more of the pixel sub-regions as being facial pixel sub-regions which fall wholly or partially within the facial region.
- optionally, vision processor 118 may transform the image pixel data for the pixel sub-regions to the spatial frequency domain using an FFT or similar spectral transform method.
- step 1204 and subsequent steps may be performed only on the facial pixel sub-regions.
- subsequent steps may be performed based on the spectral data in the spectral domain.
- a matched filter is applied to the spectral or spatial pixel data of the pixel sub-regions to capture a presence or modification of a laser speckle pattern or an encoded spatial pattern.
- a machine learned classifier could be used to perform step 1205.
- threshold analysis is performed on the output of the matched filter or classification of step 1205 to determine an amount of presence or modification of laser speckle or encoded spatial pattern present in the pixel sub-regions.
- the pixel sub-regions are classified as including human skin or not based on the threshold analysis of step 1206. If the output of the matched filter is less than a predetermined threshold level (overall spectral power or spectral power across a predefined spectral range), then the pixel sub-region is classified as including human skin.
- the classification of pixel sub-regions may be performed sequentially on individual pixel sub-regions.
- step 1201 of method 1200 is performed by device controller 120 while steps 1202 to 1208 are performed by vision processor 118, all of which may form part of broader processor 112.
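Tying the steps together, the following sketch composes the helper functions from the earlier examples (`split_into_subregions`, `radial_power_spectrum`, `classify_subregion` and `skin_confidence`) into a single pass of algorithm 1200. The face-box convention and the tile bookkeeping are illustrative assumptions, not details from the patent.

```python
import numpy as np

def detect_skin(image, pattern_psd, threshold, face_box=None, tile=64):
    """One pass of algorithm 1200 using the helpers sketched earlier.

    Steps: tile the frame (1202), optionally keep only facial tiles
    (1203), FFT each tile to a radial PSD (1204), matched-filter it
    (1205), threshold (1206/1207) and aggregate a confidence (1208).
    face_box: optional (row0, col0, row1, col1) from a face detector.
    pattern_psd is assumed no longer than each tile's radial PSD.
    """
    regions = split_into_subregions(image, tile)
    rows = max(r for r, _ in regions) + 1
    cols = max(c for _, c in regions) + 1
    skin_mask = np.zeros((rows, cols), dtype=bool)
    for (r, c), block in regions.items():
        if face_box is not None:
            r0, c0, r1, c1 = face_box
            # Approximate test: keep tiles whose top-left pixel is inside
            # the detected facial region.
            if not (r0 <= r * tile < r1 and c0 <= c * tile < c1):
                continue
        _, psd = radial_power_spectrum(block)
        skin_mask[r, c] = classify_subregion(psd[:len(pattern_psd)],
                                             pattern_psd, threshold)
    return skin_mask, skin_confidence(skin_mask)
```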
- the above described invention provides a new sensing technique which will benefit the real-world problem of liveness detection, through the detection of human skin versus non-skin materials, without additional sensor component costs.
- infrared refers to the general infrared area of the electromagnetic spectrum which includes near infrared, infrared and far infrared frequencies or light waves.
- controller or “processor” may refer to any device or portion of a device that processes electronic data, e.g., from registers and/or memory to transform that electronic data into other electronic data that, e.g., may be stored in registers and/or memory.
- a “computer” or a “computing machine” or a “computing platform” may include one or more processors.
- any one of the terms comprising, comprised of or which comprises is an open term that means including at least the elements/features that follow, but not excluding others.
- the term comprising, when used in the claims should not be interpreted as being limitative to the means or elements or steps listed thereafter.
- the scope of the expression a device comprising A and B should not be limited to devices consisting only of elements A and B.
- Any one of the terms including or which includes or that includes as used herein is also an open term that also means including at least the elements/features that follow the term, but not excluding others. Thus, including is synonymous with and means comprising.
- Coupled when used in the claims, should not be interpreted as being limited to direct connections only.
- the terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other.
- the scope of the expression a device A coupled to a device B should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means.
- Coupled may mean that two or more elements are either in direct physical, electrical or optical contact, or that two or more elements are not in direct contact with each other but yet still co-operate or interact with each other.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
A subject monitoring system (100) is described. The system (100) includes a near infrared illumination source (108) configured to illuminate a scene with infrared light having spatial beam characteristics to generate a spatial pattern, and an image sensor (106) configured to capture one or more images of the scene when illuminated by the illumination source (108). The system (100) also includes a processor (112) configured to process the captured one or more images by determining a degree of presence or modification of the spatial pattern by objects in the scene within pixel sub-regions of an image. The processor (112) also classifies one or more pixel sub-regions of the image as including human skin or other material based on the degree of modification of the spatial pattern identified in that pixel sub-region.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2021900066 | 2021-01-13 | ||
| AU2021900066A0 (en) | 2021-01-13 | | System and method for skin detection in images |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2022150874A1 (fr) | 2022-07-21 |
Family
ID=82446278
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/AU2021/051563 WO2022150874A1 (fr) (Ceased) | System and method for skin detection in images | 2021-01-13 | 2021-12-24 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2022150874A1 (fr) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2024088738A1 (fr) * | 2022-10-25 | 2024-05-02 | Trinamix Gmbh | Image manipulation for detecting a state of a material associated with the object |
| WO2025237936A1 (fr) * | 2024-05-14 | 2025-11-20 | Trinamix Gmbh | Driver monitoring |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130342702A1 (en) * | 2012-06-26 | 2013-12-26 | Qualcomm Incorporated | Systems and method for facial verification |
| WO2019038128A1 (fr) * | 2017-08-22 | 2019-02-28 | Lumileds Holding B.V. | Analyse de granularité laser pour authentification biométrique |
| US20190335098A1 (en) * | 2018-04-28 | 2019-10-31 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method and device, computer-readable storage medium and electronic device |
| US20200394390A1 (en) * | 2019-06-13 | 2020-12-17 | XMotors.ai Inc. | Apparatus and method for vehicle driver recognition and applications of same |
- 2021-12-24: WO application PCT/AU2021/051563, published as WO2022150874A1 (fr), status: Ceased (not active)
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130342702A1 (en) * | 2012-06-26 | 2013-12-26 | Qualcomm Incorporated | Systems and method for facial verification |
| WO2019038128A1 (fr) * | 2017-08-22 | 2019-02-28 | Lumileds Holding B.V. | Analyse de granularité laser pour authentification biométrique |
| US20190335098A1 (en) * | 2018-04-28 | 2019-10-31 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method and device, computer-readable storage medium and electronic device |
| US20200394390A1 (en) * | 2019-06-13 | 2020-12-17 | XMotors.ai Inc. | Apparatus and method for vehicle driver recognition and applications of same |
Non-Patent Citations (5)
| Title |
|---|
| BHOWMIK, M. K. ET AL.: "Thermal Infrared Face Recognition - a Biometric Identification Technique for Robust Security System", INTECH OPEN, 27 July 2011 (2011-07-27), pages 1 - 338, XP055956686, Retrieved from the Internet <URL:https://www.intechopen.com/books/reviews-refinements-and-new-ideasin-face-recognition/thermal-infrared-face-recognition-a-biometricidentification-technique-for-robust-security-system> * |
| NOWARA, E.M.: "Camera-based Vital Signs: Towards Driver Monitoring and Face Liveness Verification", THESIS, 20 August 2018 (2018-08-20), pages 1 - 102, XP055956679, Retrieved from the Internet <URL:https://scholarship.rice.edu/handle/1911/105790> * |
| SONG, L. ET AL.: "Face Liveness Detection Based on Joint Analysis of RGB and Near-Infrared Image of Faces", ELECTRONIC IMAGING, IMAGING AND MULTIMEDIA ANALYTICS IN A WEB AND MOBILE WORLD 2018, 28 January 2018 (2018-01-28), pages 1 - 6, XP055856394, Retrieved from the Internet <URL:https://www.ingentaconnect.com/content/ist/ei/2018/00002018/00000010/art00006> * |
| TANG DI, ZHOU ZHE, ZHANG YINQIAN, ZHANG KEHUAN: "Face Flashing: a Secure Liveness Detection Protocol based on Light Reflections", ARXIV.ORG, 22 August 2018 (2018-08-22), pages 1 - 15, XP081260658, Retrieved from the Internet <URL:https://arxiv.org/abs/1801.01949> * |
| ZHOU PEI, ZHU JIANGPING, YOU ZHISHENG: "3-D face registration solution with speckle encoding based spatial-temporal logical correlation algorithm", OPTICS EXPRESS, vol. 27, 12 July 2019 (2019-07-12), pages 21004 - 21019, XP055956663, Retrieved from the Internet <URL:https://www.osapublishing.org/oe/fulltext.cfm?uri=oe-27-15-21004&id=415396> * |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2024088738A1 (fr) * | 2022-10-25 | 2024-05-02 | Trinamix Gmbh | Manipulation d'image pour détecter un état d'un matériau associé à l'objet |
| WO2025237936A1 (fr) * | 2024-05-14 | 2025-11-20 | Trinamix Gmbh | Surveillance de conducteur |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US7689008B2 | | System and method for detecting an eye |
| JP7438127B2 | | Infrared light source protection system |
| CN110114246B | | 3D time-of-flight active reflection sensing system and method |
| US7940962B2 | | System and method of awareness detection |
| US11455810B2 | | Driver attention state estimation |
| JP7746260B2 | | High-performance bright pupil eye tracking |
| JP4589600B2 | | Application of human facial feature recognition to automobile safety and convenience |
| US9104921B2 | | Spoof detection for biometric authentication |
| JP4810052B2 | | Occupant sensor |
| US10604063B2 | | Control device for vehicle headlight |
| US20030169906A1 | | Method and apparatus for recognizing objects |
| EP2060993B1 | | System and method of awareness detection |
| JP2004504219A | | Application of human facial feature recognition to automobile safety |
| WO2022150874A1 | | System and method for skin detection in images |
| WO2019241834A1 | | System and method for pre-processing high frame rate images |
| CN114821697A | | Material spectroscopy |
| Boverie et al. | | Comparison of structured light and stereovision sensors for new airbag generations |
| US8363957B2 | | Image classification system and method thereof |
| Makrushin et al. | | Visual recognition systems in a car passenger compartment with the focus on facial driver identification |
| La Rota et al. | | Automatically adjustable rear mirror based on computer vision |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21918143; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 21918143; Country of ref document: EP; Kind code of ref document: A1 |