
WO2025214794A1 - Spectroscopic device for vehicles - Google Patents

Spectroscopic device for vehicles

Info

Publication number
WO2025214794A1
Authority
WO
WIPO (PCT)
Prior art keywords
driver
data
concentration
spectroscopic
body substance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/EP2025/058679
Other languages
English (en)
Inventor
Johannes Julius MEDER
Florian Proell
Jakob Carl ARNDT
Celal Mohan OEGUEN
Wilfried HERMES
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TrinamiX GmbH
Original Assignee
TrinamiX GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TrinamiX GmbH filed Critical TrinamiX GmbH
Publication of WO2025214794A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N 21/25 Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N 21/31 Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0075 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence by spectroscopy, i.e. measuring spectra, e.g. Raman spectroscopy, infrared absorption spectroscopy
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/08 Measuring devices for evaluating the respiratory organs
    • A61B 5/082 Evaluation by breath analysis, e.g. determination of the chemical composition of exhaled breath
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/145 Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue
    • A61B 5/14532 Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue for measuring glucose, e.g. by tissue impedance measurement
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/145 Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue
    • A61B 5/14546 Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue for measuring analytes not otherwise provided for, e.g. ions, cytochromes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/145 Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue
    • A61B 5/1455 Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue using optical sensors, e.g. spectral photometrical oximeters
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/18 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state for vehicle drivers or machine operators
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/48 Other medical applications
    • A61B 5/4845 Toxicology, e.g. by detection of alcohol, drug or toxic products
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B 5/6887 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient mounted on external non-worn devices, e.g. non-medical devices
    • A61B 5/6893 Cars
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60K ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K 28/00 Safety devices for propulsion-unit control, specially adapted for, or arranged in, vehicles, e.g. preventing fuel supply or ignition in the event of potentially dangerous conditions
    • B60K 28/02 Safety devices for propulsion-unit control, specially adapted for, or arranged in, vehicles, e.g. preventing fuel supply or ignition in the event of potentially dangerous conditions responsive to conditions relating to the driver
    • B60K 28/06 Safety devices for propulsion-unit control, specially adapted for, or arranged in, vehicles, e.g. preventing fuel supply or ignition in the event of potentially dangerous conditions responsive to conditions relating to the driver responsive to incapacity of driver
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W 40/08 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W 2040/0818 Inactivity or incapacity of driver
    • B60W 2040/0836 Inactivity or incapacity of driver due to alcohol
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W 40/08 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B60W 2040/0818 Inactivity or incapacity of driver
    • B60W 2040/0845 Inactivity or incapacity of driver due to drugs
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 2540/00 Input parameters relating to occupants
    • B60W 2540/043 Identity of occupants
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 2540/00 Input parameters relating to occupants
    • B60W 2540/221 Physiology, e.g. weight, heartbeat, health or special needs
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 2201/00 Features of devices classified in G01N 21/00
    • G01N 2201/02 Mechanical
    • G01N 2201/022 Casings
    • G01N 2201/0221 Portable; cableless; compact; hand-held
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 2201/00 Features of devices classified in G01N 21/00
    • G01N 2201/12 Circuits of general importance; Signal processing
    • G01N 2201/129 Using chemometrical methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 2201/00 Features of devices classified in G01N 21/00
    • G01N 2201/12 Circuits of general importance; Signal processing
    • G01N 2201/129 Using chemometrical methods
    • G01N 2201/1296 Using chemometrical methods using neural networks
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 33/00 Investigating or analysing materials by specific methods not covered by groups G01N 1/00 - G01N 31/00
    • G01N 33/48 Biological material, e.g. blood, urine; Haemocytometers
    • G01N 33/483 Physical analysis of biological material
    • G01N 33/487 Physical analysis of biological material of liquid biological material

Definitions

  • the invention is in the field of spectroscopic devices for vehicles.
  • the invention relates to a spectroscopic device for integration into a vehicle and for determining a concentration of a body substance of a driver of the vehicle, a vehicle comprising the spectroscopic device, a method for determining a concentration of a body substance of a driver of a vehicle, a use of the concentration of the body substance of the driver for controlling a functionality of the vehicle, and a non-transient computer-readable medium including instructions that, when executed by one or more processors, cause the one or more processors to perform a method for determining a concentration of a body substance of a driver of a vehicle.
  • CN 104827900 A and KR 10-2022-0083289 A disclose an infrared detector built into the handle of the transmission of a car.
  • the detector measures the alcohol content of the hand when touching the handle.
  • differences in the spectra due to the presence of blood alcohol are so small that reliable measurement results can hardly be obtained.
  • US 2023/0204507 A1 discloses a spectroscopy system for measuring the alcohol concentration of a vehicle driver. However, accurate blood alcohol measurement that is difficult to trick remains a challenge.
  • the invention relates to a spectroscopic device for integration into a vehicle and for determining a concentration of a body substance of a driver of the vehicle, comprising: a) a spectroscopy module for acquiring spectroscopic data of the driver, b) an input to receive driver data or environmental data, c) a processor for determining the concentration of the body substance of the driver using the spectroscopic data and one or both of the driver data and the environmental data, and d) an output for outputting the concentration of the body substance of the driver.
  • the invention in another aspect relates to a spectroscopic device for integration into a vehicle and for determining a concentration of a body substance of a driver of the vehicle, comprising: a) a spectroscopy module for acquiring spectroscopic data of the driver, b) a processor for determining the concentration of the body substance of the driver using the spectroscopic data, and c) an output for outputting the concentration of the body substance of the driver.
  • the invention in another aspect relates to a vehicle comprising the spectroscopic device according to the invention.
  • the invention in another aspect relates to a method for determining a concentration of a body substance of a driver of a vehicle comprising: a) receiving spectroscopic data of the driver from a spectroscopic device integrated into a vehicle and one or both of driver data associated with a characteristic of the driver and environmental data associated with a characteristic of the surroundings of the driver, b) determining the concentration of the body substance of the driver using the spectroscopic data and one or both of the driver data and the environmental data, and c) outputting the concentration of the body substance of the driver.
  • the invention in another aspect relates to a method for determining a concentration of a body substance of a driver of a vehicle comprising: a) receiving spectroscopic data of the driver from a spectroscopic device integrated into a vehicle, b) determining the concentration of the body substance of the driver using the spectroscopic data, and c) outputting the concentration of the body substance of the driver.
  • the invention relates to a use of the concentration of the body substance of the driver obtained from the method of the invention for controlling a functionality of a vehicle.
  • the invention in another aspect relates to a non-transient computer-readable medium including instructions that, when executed by one or more processors, cause the one or more processors to perform a method comprising: a) receiving spectroscopic data of the driver from a spectroscopic device integrated into a vehicle and one or both of driver data associated with a characteristic of the driver and environmental data associated with a characteristic of the surroundings of the driver, b) determining the concentration of the body substance of the driver using the spectroscopic data and one or both of the driver data and the environmental data, and c) outputting the concentration of the body substance of the driver.
  • the invention in another aspect relates to a non-transient computer-readable medium including instructions that, when executed by one or more processors, cause the one or more processors to perform a method comprising: a) receiving spectroscopic data of the driver from a spectroscopic device integrated into a vehicle, b) determining the concentration of the body substance of the driver using the spectroscopic data, and c) outputting the concentration of the body substance of the driver.
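For illustration, the claimed steps a) to c) can be sketched as a short Python function. This is only a sketch; the function names, the feature layout and the stand-in model are assumptions for illustration, not part of the disclosure.

```python
import numpy as np

def determine_concentration(spectrum, model, driver_data=None, environmental_data=None):
    """Steps a)-c): receive the spectrum plus optional driver/environmental
    data, evaluate a chemometric model, and return the concentration."""
    features = list(np.asarray(spectrum, dtype=float))
    if driver_data is not None:            # e.g. body temperature, skin moisture
        features.extend(driver_data)
    if environmental_data is not None:     # e.g. cabin temperature, humidity
        features.extend(environmental_data)
    return model(np.array(features))       # c) output the concentration

# usage with a stand-in model (a real device would use a trained chemometric model)
toy_model = lambda x: float(x.mean())
concentration = determine_concentration([0.2, 0.4, 0.3], toy_model,
                                        driver_data=[36.5], environmental_data=[22.0])
```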
  • the advantage of the present invention is that the concentration of a body substance of a driver can be determined more accurately within a short measurement time.
  • safety is increased, as compromised fitness to drive can be quickly and easily detected, for example due to intoxicants such as alcohol or drugs, due to a health problem, for example a low blood sugar concentration of a diabetes patient, or due to fitness problems like dehydration.
  • the user experience is enhanced by fewer false measurements and shorter measurement time.
  • a higher user acceptance can be expected, leading to an increased safety level.
  • the required hardware is simple and can be produced at high volumes at low prices. Low-weight hardware can be used which consumes little energy.
  • the determination of the concentration of a body substance is furthermore robust against highly variable conditions in a vehicle, like the temperature, ambient light exposure or vibrations.
  • the spectroscopic device is for integration into various vehicles including cars, motorcycles, buses, trucks, trams, trains or even airplanes.
  • the spectroscopic device is suitable for integration into a vehicle.
  • the spectroscopic device may be attached to the vehicle, or it may be integrated as component or as part of a component of a vehicle, for example as part of a display in the dashboard, an entertainment control system, or loudspeakers.
  • the spectroscopic device can be placed at various places, for example in the steering wheel and its periphery, such as the steering wheel rim, the steering wheel column, or the steering wheel center behind or beside the emblem; in the dashboard, such as in the instrument cluster bezel or its surrounding, the dashboard button panel, in or around frequently used buttons like the infotainment control button or engine start button, or the touchscreen display in the center display of the infotainment system; in the overhead area and A-pillars, such as nestled behind the light sensor housing in the overhead console, or behind the A-pillar trim panel on the driver side; in the center console, such as incorporated within a removable cup holder insert, positioned on top or on the side of the gear shift knob, in the arm rest, or in the parking brake button.
  • the term “spectroscopic device” may refer to an apparatus which is capable of recording spectroscopic data of a driver.
  • the spectroscopic device may be a spectrometer or a device into which a spectrometer is integrated.
  • the spectroscopic device may be portable or stationary, for example a laboratory device.
  • a portable spectroscopic device may be a hand-held spectrometer or a module which is integrated into a portable device like a smartphone, a tablet or a wearable like a smartwatch.
  • a portable spectroscopic device may be communicatively coupled to a computer device, for example a cloud computer or a smartphone.
  • Such computer device may be configured to execute a chemometric model.
  • the computer device may further be configured to receive spectroscopic data from the spectroscopic device.
  • the computer device may store such spectroscopic data, or send it to a system for determining concentration of a body substance.
  • a spectroscopic device may comprise:
  • an optical element configured for separating incident optical radiation coming from the measured driver into a spectrum of constituent wavelength components;
  • a photosensor comprising at least one photosensitive region configured for receiving the optical radiation from the optical element, wherein the photosensor is configured for generating at least one photosensor signal dependent on an illumination of the photosensitive region by the optical radiation;
  • a processor to process the photosensor signals into a spectrum.
  • optical element may refer to an arbitrary element configured for influencing optical radiation.
  • the optical element may be configured for at least one of at least partially dispersing the optical radiation, at least partially filtering the optical radiation, at least partially reflecting the optical radiation, e.g. diffusely or directly, at least partially deflecting the optical radiation, at least partially transmitting the optical radiation and at least partially absorbing the optical radiation.
  • the optical element may comprise at least one of a prism, a grating, a beam splitter, or an interferometer, for example a Michelson interferometer.
  • the optical element may be configured for being used in mobile applications, for example for being used in handheld spectrometer devices and/or in spectrometer devices comprised by electronic communication devices, such as a smartphone or a tablet.
  • the optical element may comprise at least one optical filter element.
  • the optical filter element may be configured for filtering the optical radiation or more specifically at least one selected spectral range of the optical radiation.
  • the optical filter element may specifically be positioned in a light path before the photosensor.
  • the portable spectrometer may comprise a plurality of photosensors, for example 5 to 20, such as 8 to 12.
  • the photosensors may be arranged as pixels in an array or in a matrix.
  • the portable spectrometer may comprise a plurality of optical filter elements.
  • An optical filter element may be positioned in a beam path before a photosensor.
  • the optical filter elements may be transmissive at different wavelengths of different wavelength regions.
  • each photosensor may be positioned behind an optical filter with regard to the beam path, wherein each optical filter is transmissive at a different wavelength or wavelength region than the other optical filters.
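A minimal Python sketch of such a filter-array arrangement, assuming an 8-channel device with illustrative NIR band positions and placeholder calibration constants (none of these values are from the patent):

```python
import numpy as np

# Each photosensor sits behind a bandpass filter, so channel i reports the
# intensity in one wavelength band (8 illustrative NIR bands).
CENTER_WAVELENGTHS_NM = np.linspace(1100, 1800, 8)

def signals_to_spectrum(photosensor_signals, dark_level=0.01, gain=1.0):
    """Turn raw per-channel photocurrents into (wavelength, intensity) pairs."""
    signals = np.asarray(photosensor_signals, dtype=float)
    intensities = (signals - dark_level) * gain   # simple per-channel calibration
    return np.column_stack([CENTER_WAVELENGTHS_NM, intensities])

spectrum = signals_to_spectrum([0.31, 0.42, 0.38, 0.55, 0.47, 0.33, 0.29, 0.26])
```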
  • the spectroscopic device may comprise one or more than one photosensor.
  • the photosensor may comprise at least one photosensitive region.
  • the photosensitive region may be configured for receiving the optical radiation from the optical element.
  • the photosensor may be configured for generating at least one photosensor signal dependent on an illumination of the photosensitive region by the optical radiation.
  • the term “sensor” may refer to a device configured for detecting at least one condition or for measuring at least one measurement variable.
  • the sensor may be capable of generating at least one signal, such as a measurement signal, which is a qualitative or quantitative indication of the measurement variable and/or measurement property, e.g. of an illumination of the sensor or a part of the sensor.
  • the signal may be or comprise an electrical signal, such as a current, specifically a photocurrent.
  • the term “photosensor” may refer to a sensor or a detector configured for detecting or measuring optical radiation, such as for detecting an illumination and/or a light spot generated by at least one light beam, e.g. by using the photoelectric effect.
  • the photodetector may comprise at least one substrate.
  • a single photosensor may be a substrate with at least one single photosensitive region, which generates a physical response, e.g. an electronic response, to the illumination for a given wavelength range.
  • photosensitive region may refer to a unit of the photosensor, specifically to a spatial area or volume being part of the photosensor, configured for being illuminated, or in other words for receiving optical radiation, and for generating at least one signal, such as an electronic signal, in response to the illumination.
  • the photosensitive region may be located on a surface of the photosensor.
  • the photosensitive region may specifically be a single, closed, uniform photosensitive region. However, other options may also be feasible.
  • the spectroscopic device may comprise at least one light emitting element configured for emitting illumination light for illuminating the driver in order to generate detection light from the driver.
  • the light emitting element may be an incandescent lamp, for example a tungsten filament lamp or a tungsten halogen lamp, a light-emitting diode (LED), a laser diode, a gas-discharge lamp, for example a xenon lamp, a mercury vapor lamp, or a deuterium lamp.
  • the term “light” may refer to electromagnetic radiation in one or more of the infrared, the visible and the ultraviolet spectral range.
  • the term “ultraviolet spectral range” may refer to electromagnetic radiation having a wavelength of 1 nm to 380 nm, preferably of 100 nm to 380 nm, for example 280 nm to 315 nm (UV-B) or 315 nm to 380 nm (UV-A).
  • visible spectral range may refer to a spectral range of 380 nm to 760 nm.
  • light used for the typical purposes of the present invention is light in the infrared (IR) spectral range, more preferably in the near infrared (NIR) and/or the mid infrared (MidIR) spectral range, especially light having a wavelength of 750 nm to 3 µm, for example 780 nm to 1.4 µm or 1.4 µm to 2.5 µm.
  • the spectroscopic device may comprise a processor to process the photosensor signals into spectroscopic data, for example an infrared spectrum.
  • the processor may output the spectroscopic data, for example to an interface for further processing or to a user interface.
  • the processor may further be configured to apply a chemometric model and output the concentration of a body substance obtained by the chemometric model.
  • the spectroscopic device may further comprise a memory.
  • the memory may be configured to store the chemometric model.
  • the memory may be configured to store spectroscopic data.
  • the spectroscopic device may contain or be placed behind a transparent display.
  • the term “display” may refer to an arbitrary shaped device configured for displaying an item of information.
  • the item of information may be arbitrary information such as at least one image, at least one diagram, at least one histogram, at least one graphic, text, numbers, at least one sign, or an operating menu.
  • the display may be or may comprise at least one screen.
  • the display may have an arbitrary shape, e.g. a rectangular shape.
  • the display may be a front display of a device.
  • the display may be or may comprise at least one organic light-emitting diode (OLED) display.
  • organic light emitting diode may refer to a light-emitting diode (LED) in which an emissive electroluminescent layer is a film of organic compound configured for emitting light in response to an electric current.
  • the OLED display may be configured for emitting visible light.
  • the display, particularly a display area may be covered by glass.
  • the display may comprise at least one glass cover.
  • the transparent display may be at least partially transparent.
  • the term “at least partially transparent” may refer to a property of the display to allow light, in particular of a certain wavelength range, e.g. in the infrared spectral region, in particular in the near infrared spectral region, to pass at least partially through.
  • the display may be semitransparent in the near infrared region.
  • the display may have a transparency of 20 % to 50 % in the near infrared region.
  • the display may have a different transparency for other wavelength ranges.
  • the display may have a transparency of > 80 % for the visible spectral range, preferably > 90 % for the visible spectral range.
  • the transparent display may be at least partially transparent over the entire display area or only parts thereof. Typically, it is sufficient if only those parts of the display area are at least partially transparent through which light needs to pass from the projector or to the camera.
  • the display may comprise a display area.
  • the term “display area” may refer to an active area of the display, in particular an area which is activatable.
  • the display may have additional areas such as recesses or cutouts.
  • the display may have a first area associated with a first pixel per inch (PPI) value and a second area associated with a second PPI value.
  • the first PPI value may be lower than the second PPI value; preferably, the first PPI value is equal to or below 400 PPI, and more preferably the second PPI value is equal to or higher than 300 PPI.
  • the first PPI value may be associated with the at least one continuous area being at least partially transparent.
  • the spectroscopy module may be positioned such that it can illuminate the driver with light through the transparent display.
  • the spectroscopy module may be positioned such that it can receive light from the driver through the transparent display. Light reflected or refracted from the driver first crosses the transparent display before it impinges on the sensor of the spectroscopy module. From the driver’s view, the spectroscopy module may be placed behind the transparent display.
  • the term “body substance” may refer to any chemical substance which can be found in a human body, in particular in the skin, blood or interstitial fluid of a human body. The body substance may be indicative of the driver's fitness to drive a vehicle; it may, for example, reduce the driver's ability to concentrate, or it may be a metabolite of such a substance.
  • the body substance may be indicative for a health or fitness condition which compromises the driver's fitness to drive a vehicle, for example a low hydration level or an irregular blood glucose concentration.
  • Body substance may comprise proteins, such as enzymes, antibodies, or hormones; carbohydrates, such as glucose, glycogen, or fructose; lipids, such as triglycerides, cholesterol, and phospholipids; water; nucleic acids, such as DNA or RNA; amino acids, such as alanine, glutamic acid, cysteine; neurotransmitters, such as dopamine, serotonin, and acetylcholine; hormones, such as insulin, estrogen, or testosterone; electrolytes, such as sodium, potassium, or calcium ions; vitamins, such as ascorbic acid, calciferol, cobalamin; metabolites, such as lactate, urea, and creatinine.
  • Body substance may be an intoxicant or its metabolite including ethanol, opioids, such as heroin, morphine, fentanyl; stimulants, such as amphetamine, methylphenidate, cocaine; benzodiazepines, such as diazepam, or alprazolam; cannabinoids, such as tetrahydrocannabinol (THC); barbiturates, such as phenobarbital; hallucinogens, such as lysergic acid diethylamide (LSD) or psilocybin; antihistamines, such as diphenhydramine; antipsychotics and antidepressants, such as fluoxetine or amitriptyline; muscle relaxants, such as carisoprodol or cyclobenzaprine; pain killers, such as tramadol, codeine, ibuprofen, naproxen, cyclobenzaprine, or methocarbamol.
  • spectroscopic data may refer to data associated with a spectroscopic measurement of a driver, in particular with an optical spectroscopic measurement of the driver.
  • the spectroscopic data may be received from the spectroscopic device of the present invention.
  • the spectroscopic data may be received directly from a spectroscopic device or indirectly, i.e. from a storage device in which the spectroscopic data have been stored after the measurement.
  • a spectroscopic measurement may be triggered by a predefined event, for example when the vehicle is switched on, before the engine is started, or after a certain time period.
  • a spectroscopic measurement may be triggered when a measurement trigger event occurs.
  • a measurement trigger event may be a situation in which an indicator indicates that a spectroscopic measurement is necessary.
  • a measurement trigger event may occur when an indicator indicates that the driver's fitness to drive is potentially compromised, for example due to intoxicants such as alcohol or drugs, due to a health problem, for example low sugar concentration of a diabetes patient, or due to fitness problems like dehydration.
  • the measurement trigger event may be determined using driver data and/or environmental data.
  • the driver data and/or environmental data may indicate an increased likelihood that the driver's fitness to drive the vehicle is compromised, such as a slow pupil reflex recorded by an optical camera, unusual movement patterns recorded by a pressure sensor, or certain voice characteristics recorded by a microphone. Triggering a spectroscopic measurement in such cases may be particularly useful if the body substance is used for access control of the vehicle, for example to keep drunk drivers from driving without burdening obviously sober drivers with a measurement.
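The trigger logic described above might look as follows in Python; the indicator names and the 500 ms pupil-reflex threshold are illustrative assumptions only:

```python
def measurement_trigger(vehicle_switched_on=False, pupil_reflex_ms=None,
                        movement_anomaly=False, slurred_speech=False):
    """Return True when a spectroscopic measurement should be started."""
    if vehicle_switched_on:          # predefined event, e.g. ignition on
        return True
    if pupil_reflex_ms is not None and pupil_reflex_ms > 500:
        return True                  # slow pupil reflex from an optical camera
    # unusual movement patterns (pressure sensor) or voice characteristics
    return movement_anomaly or slurred_speech

start_measurement = measurement_trigger(pupil_reflex_ms=620)   # True
```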
  • the spectroscopic measurement may be made at various body parts of the driver, for example the face, the arm, the hand.
  • the spectroscopic measurement may be made at parts of the hand, for example the palm, the back of the hand, one or multiple fingers, such as the thumb, the forefinger, the long finger, the ring finger or auricular finger.
  • the spectroscopic measurement may be made in direct contact with the driver or in close proximity, for example with a distance of less than 10 cm or less than 5 cm between driver and spectrometer device.
  • Spectroscopic data may be or may comprise one or more than one spectrum.
  • the term “spectrum” may refer to a data structure in which several intensity values, or values derived therefrom such as the absorbance of radiation, are associated with wavelengths or wavelength ranges of the radiation.
  • the wavelength or wavelength ranges may be those described above.
  • the data structure may be a vector, wherein each element represents an intensity and the position in the vector represents a certain wavelength or wavelength range, so the value at a certain position represents the intensity of that wavelength or wavelength range.
  • the data structure may be a vector or matrix containing value pairs, wherein one value represents the wavelength or wavelength range and the other value the intensity at this wavelength or wavelength range.
  • the spectrum recorded by the spectrometer may be corrected by calibration coefficients to compensate for sensor imperfections or drifts.
  • the spectrum may represent the absorbance or transmittance of radiation after having penetrated the skin of the driver.
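Both data structures described above, and the correction by calibration coefficients, can be written down in a few lines of Python; the band positions and calibration values are placeholders, not values from the patent:

```python
import numpy as np

wavelengths_nm = np.linspace(1000, 1900, 10)       # 10 illustrative NIR bands
intensities = np.random.default_rng(0).random(10)  # stand-in measured intensities

# Representation 1: plain vector; the position encodes the wavelength band.
spectrum_vector = intensities

# Representation 2: explicit (wavelength, intensity) value pairs.
spectrum_pairs = np.column_stack([wavelengths_nm, intensities])

# Correction by calibration coefficients to compensate sensor imperfections
# or drifts (offset and gain are placeholder values).
offset, gain = 0.02, 1.05
calibrated = (spectrum_vector - offset) * gain
```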
  • driver data may refer to data associated with a characteristic of the driver such as a physical or chemical characteristic of the driver.
  • Driver data may refer to any data associated with a characteristic of the driver which has been obtained with a method other than spectroscopy.
  • Driver data may correlate with the alcohol level of the driver.
  • Driver data may be personalized data, i.e. specific for a particular driver, or it may be data associated with a certain group of people, for example female drivers of age 25 to 30.
  • Physical characteristics may comprise thermal characteristics, for example the body temperature, the thermal conductivity or the specific heat capacity of the skin; mechanical characteristics, for example pressure exerted on the spectrometer, compressibility or mechanical elasticity of the skin; optical characteristics, for example the color, refractive index, optical conductivity or absorption coefficients of the skin; electro-magnetic characteristics, for example electrical conductivity, dielectric constant, radio frequency-based permittivity, microwave complex permittivity, millimeter wave complex permittivity, magnetic permittivity or susceptibility of the skin.
  • Chemical characteristics of a driver typically refer to the chemical composition of some body tissue like skin, blood or sweat, for example the type and the concentration of certain chemical compounds such as the water content.
  • the driver data may contain or may be a biomarker.
  • biomarker may refer to a measurable substance, process or characteristic that is indicative of a biological state or condition.
  • a biomarker may refer to a specific molecule, protein, genetic sequence, or other measurable feature that is associated with a particular disease, condition or treatment response.
  • examples of biomarkers are body dimensions such as size, head circumference, chest girth, abdominal girth, crotch length, arm length; body weight or body mass index; body topology such as face topology, iris structure, finger print, palm topology; muscle measures like muscular strength, muscular endurance, muscular agility and speed, balance, coordination; cardio-vascular measures such as heart rate, heart rate variability, electrocardiogram, blood pressure, blood oxygen; skin measures such as skin conductance, skin impedance, skin moisture level, skin sebum level, skin roughness, skin elasticity, skin pH, skin blood flow, skin sweat rate; blood metabolites such as blood glucose, blood cholesterol, blood triglycerides, blood urea, blood creatinine, blood lactate, blood bilirubin, blood pH; urine metabolites such as urine glucose, urine urea, urine creatinine, urine ketones, urine pH, urine protein content; hormone levels such as thyroid hormone level, insulin level, growth hormone level, cortisol level, estrogen level, progesterone level, testosterone level, prolactin level.
  • Driver data may be received from sensors other than a spectrometer, for example a thermometer, a scale, a balance, an optical camera, an optical 3D scanner system, a conductance or impedance gauge such as a corneometer, a sweat rate monitor or sweat patch, a liquid or gas chromatograph, a mass spectrograph, a nuclear magnetic resonance spectrometer or imager, an electrochemical sensor, an immunoassay, a polymerase chain reaction apparatus.
  • the spectroscopic device may be integrated into a portable device which further comprises sensors from which at least parts of the driver data is received.
  • Driver data may also be received from a storage device or it can be obtained from a user interface, for example a graphical user interface, to which a user can enter driver data, for example from observations.
  • Driver data may comprise human characteristics like age, sex, origin, ethnicity; medical history including current and former medications; nutrition such as vegetarian or vegan diet; consumption of stimulants such as caffeine, alcohol, tobacco products, or drugs; physical activity level such as type of profession, i.e. office job or physically demanding job, kind of sports, average duration of sports, average sleeping hours.
  • environmental data may refer to data associated with a characteristic of the surroundings of the driver, for example a physical or chemical characteristic of the surroundings of the driver.
  • the characteristic of the surroundings of the driver may have an influence on the spectroscopic measurement of the driver or on the characteristic of the driver such as the physical or chemical characteristic of the driver.
  • environmental data may not comprise an intrinsic characteristic of the driver.
  • Environmental data may comprise sensor data from sensors other than a spectrometer.
  • Environmental data may comprise the location of the driver, for example the geolocation such as the GPS coordinates, the height above sea level, distance to a reference point such as the spectrometer, acceleration, orientation with regard to gravity; weather conditions such as air temperature, air pressure, air humidity, wind speed, wind direction, ambient light intensity; time or date; air pollutant levels like CO2 concentration, CO concentration, ozone concentration, nitrogen oxide concentration, sulfur dioxide concentration, fine dust concentration, volatile organic compounds level.
  • Sensor data may have been recorded by a sensor capable of determining the sensor data.
  • the sensor may be integrated into the spectrometer.
  • the spectroscopic device may be integrated into a portable device which further comprises sensors from which at least parts of the environmental data is received.
  • the sensor may be communicatively coupled to the spectrometer, for example via a wireless communication or via internet.
  • sensors may be a GPS receiver, an accelerometer, a gyroscope, an altimeter, a goniometer, a distance sensor like a time-of-flight sensor, a radar or a LiDAR, a pressure sensor such as a MEMS sensor, a piezo sensor or a capacitive sensor, a magnetometer, a barometer, a light sensor, a thermometer, a gas sensor.
  • Environmental data may comprise data associated with the spectrometer, for example a spectrometer ID, a version number of the spectrometer, the spectrometer settings, the temperature of the spectrometer, the age of the spectrometer, time since the last calibration was performed, age of the illumination source, number of measurements the spectrometer has already performed in its lifetime or within a certain time such as the last week or the last month.
  • Environmental data may further comprise data associated with the spectroscopic measurement of the driver, for example the sampling time, the illumination strength with which the spectrometer illuminates the driver, or the distance of the driver to the spectrometer.
  • Environmental data may be received from a data storage medium.
  • the data storage medium may be part of the spectroscopic device, or it may be a remote storage device, for example a computer system or a cloud system.
  • Environmental data may be received from a database, for example from a database on a remote storage system, in response to a request containing the time and/or geographic location.
  • a remote storage system may refer to a system which is remote from the driver and the measurement, for example a cloud server or a database server.
  • a request containing the GPS coordinates of the driver and the time of the spectroscopic measurement may be sent to a cloud server having a weather database.
  • the cloud server may in response to the request send weather data corresponding to the time and location of the request.
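A sketch of such a request in Python using the requests library; the endpoint URL, parameter names and response fields are hypothetical, since the text only specifies that time and GPS coordinates are sent:

```python
import requests

def fetch_weather(lat, lon, timestamp_iso, url="https://example.com/weather"):
    """Ask a cloud weather database for conditions at the given time/location."""
    response = requests.get(url, timeout=5,
                            params={"lat": lat, "lon": lon, "time": timestamp_iso})
    response.raise_for_status()
    return response.json()  # hypothetical reply, e.g. {"temperature_c": 21.3, ...}

# weather = fetch_weather(49.48, 8.44, "2025-04-07T09:30:00Z")
```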
  • the concentration of a body substance of the driver is determined using the spectroscopic data.
  • the concentration of a body substance of the driver may be determined using the spectroscopic data and the driver data.
  • the concentration of a body substance of the driver may be determined using the spectroscopic data and the environmental data.
  • the concentration of a body substance of the driver may be determined using the spectroscopic data, the driver data and the environmental data.
  • the concentration may be a numeric value, such as a mass ratio or a volume ratio. The ratio may relate to the whole body or parts thereof, for example the skin or the blood. For example, in the case of alcohol, the blood alcohol concentration may be determined.
  • the concentration may be a categoric value, for example indicating the presence of the body substance or certain value ranges, for example none, low, medium, high.
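As an example of mapping a numeric value onto such categories, the sketch below bins a blood alcohol concentration; the cut-off values are illustrative assumptions, not limits from the patent:

```python
def categorize_bac(bac_per_mille):
    """Map a numeric blood alcohol concentration to a categoric value."""
    if bac_per_mille <= 0.0:
        return "none"
    if bac_per_mille < 0.3:    # assumed cut-off
        return "low"
    if bac_per_mille < 0.8:    # assumed cut-off
        return "medium"
    return "high"

print(categorize_bac(0.5))     # "medium"
```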
  • the concentration of a body substance of the driver may be determined by employing a chemometric model.
  • the term “chemometric model” may refer to a model which is parameterized to receive spectroscopic data as input and output the concentration of a body substance.
  • the chemometric model may be parameterized to receive spectroscopic data and driver data as input and output the concentration of a body substance.
  • the chemometric model may be parameterized to receive spectroscopic data and environmental data as input and output the concentration of a body substance.
  • the chemometric model may be parameterized to receive spectroscopic data, driver data and environmental data as input and output the concentration of a body substance.
  • the chemometric model may be parameterized to receive spectroscopic data as input and output an intermediate concentration of a body substance.
  • the intermediate concentration of a body substance may be adjusted or corrected using the driver data and/or the environmental data, for example by employing a refining model.
  • the refining model may be a data-driven model which may be trained with historic data for adjusting or correcting the intermediate concentration of a body substance.
  • a refining model may be a multivariate linear or polynomial regression model, or it may be an artificial neural network.
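A minimal sketch of such a refining step, using a multivariate linear regression from scikit-learn on synthetic data; the feature choice and coefficients are assumptions for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# columns: intermediate concentration, a driver feature (e.g. body temperature),
# an environmental feature (e.g. cabin temperature) -- all synthetic
X = rng.random((50, 3))
y = 1.1 * X[:, 0] - 0.05 * X[:, 1] + 0.02 * X[:, 2]   # synthetic "true" values

refiner = LinearRegression().fit(X, y)                # train the refining model
corrected = refiner.predict([[0.40, 0.60, 0.30]])     # refined concentration
```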
  • a chemometric model may comprise a pre-processing method and a machine learning model to obtain the concentration of a body substance.
  • a chemometric model may comprise a pre-processing method, a feature selection filter and a machine learning model. If the chemometric model comprises two or more partial chemometric models, each partial chemometric model may comprise a separate pre-processing method, a feature selection filter and a machine learning model. Alternatively, the partial models may use the same pre-processing method or feature selection filter.
  • pre-processing may refer to a method to reduce or eliminate interferences from a spectrum such as stray light, noise or baseline drift to enhance the subsequent machine learning. Hence, the pre-processing method may be applied before the machine learning method. Pre-processing may include one or more of baseline correction, scatter correction, smoothing, scaling, aggregation.
  • the term “machine learning method” may refer to a model which translates spectra into corresponding driver data.
  • the machine learning method hence may use a spectrum as input and derive driver data therefrom.
  • the machine learning method may be considered as an integral part of the chemometric model.
  • Machine learning methods may be supervised, semi-supervised or unsupervised.
  • Machine learning methods may include multivariate calibration, classification, pattern recognition, clustering, ensemble methods, neural nets and deep learning, or multivariate curve resolution.
  • feature selection filter may refer to a method to select those parts of the spectrum with a correlation to the driver data.
  • a feature selection filter may facilitate the machine learning method of the chemometric model and thus avoid overfitting and reduce the number of required training datasets.
  • a feature selection filter may use a spectrum as input, remove all unselected parts and output a spectrum with only the selected parts left.
  • the output of the feature selection filter may be a spectrum in form of a vector of lower dimensionality than the input vector.
  • the output of the feature selection filter can be used as input for the machine learning method.
  • the feature selection filter may be applied before the machine learning method.
  • the input of the feature selection filter may be the received spectrum or it may be the pre-processed spectrum, preferably the pre-processed spectrum.
  • the feature selection filter may be applied after the pre-processing method.
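Putting the three stages together, the following Python sketch runs pre-processing (Savitzky-Golay smoothing plus standard normal variate scatter correction), a feature selection filter, and a machine learning model (partial least squares regression) on synthetic spectra. SciPy and scikit-learn stand in for the unspecified implementations; all data and parameters are illustrative assumptions:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
spectra = rng.random((100, 64))                 # 100 synthetic 64-channel spectra
targets = 2.0 * spectra[:, 20] + rng.normal(0, 0.05, 100)

# Pre-processing: smoothing, then standard normal variate scatter correction.
smoothed = savgol_filter(spectra, window_length=7, polyorder=2, axis=1)
snv = (smoothed - smoothed.mean(axis=1, keepdims=True)) / smoothed.std(axis=1, keepdims=True)

# Feature selection filter: keep the 16 channels most correlated with the target.
selector = SelectKBest(f_regression, k=16).fit(snv, targets)
selected = selector.transform(snv)              # lower-dimensional spectrum

# Machine learning model: PLS regression as the calibration model.
model = PLSRegression(n_components=4).fit(selected, targets)
prediction = model.predict(selected[:1])        # predicted concentration
```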
  • a chemometric model may be or may contain a data-driven model.
  • the chemometric model may be a trained data-driven model.
  • Training may comprise adjusting parameters of the chemometric model such that the output of the chemometric model most closely fits to the provided training data.
  • training comprises minimizing a loss or cost function, for example a least-mean-square deviation of the chemometric model output from the provided training data.
  • the complete set of training data, or parts thereof, may be used for training. Parts of the received training data may be used for training and the remainder may be used for determining the prediction accuracy of the trained chemometric model.
  • cross-validation can be applied, for example K-fold cross-validation, leave-one-out cross-validation, or stratified cross-validation.
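For instance, K-fold cross-validation of such a calibration model could look as follows with scikit-learn (synthetic data; LeaveOneOut from the same module would be used the same way):

```python
import numpy as np
from sklearn.model_selection import cross_val_score, KFold
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
X = rng.random((60, 16))                        # synthetic selected features
y = 1.5 * X[:, 0] + rng.normal(0, 0.05, 60)

scores = cross_val_score(PLSRegression(n_components=3), X, y,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0),
                         scoring="neg_mean_squared_error")
print(scores.mean())                            # average held-out error
```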
  • the spectroscopic device may be operatively coupled to a driver identification system.
  • the driver identification system may provide the identity of the driver.
  • the driver identification system may be a biometric recognition system, for example a fingerprint recognition system, a hand geometry recognition system, an iris recognition system, a retina recognition system, a face recognition system, a vein recognition system, a voice recognition system.
  • the driver identification system may be integrated into the same part of the vehicle as the spectroscopic device or into a different part.
  • the spectroscopic device may be placed behind a display with an integrated fingerprint scanner or a behind-display face recognition system.
  • the driver identification system may be used to make sure the person using the spectroscopic device is in fact the driver and not a different vehicle passenger. This may be efficiently achieved if the spectroscopic device and the driver identification system are in close proximity, for example by integrating both in the same part of the vehicle.
  • the personalized driver data may be obtained using the identity of the driver obtained from a driver identification system.
  • the face recognition system may be a 2D face recognition system, for example a feature extraction analysis from an image, for example from an RGB or an IR camera.
  • the analysis may yield various features like size and position of eyes, nose, mouth, ears and their relative distance and orientation. By comparing such features to a reference database, the identity of the driver may be determined.
  • the face recognition system may be a 3D face recognition system determining a depth map of the driver, for example by a stereo camera system, a structured light system, or a time-of-flight camera system.
  • the depth map may be used to identify the driver by comparing it to a reference database.
  • the face recognition system may comprise a projector to illuminate a driver with light.
  • the term “light” may refer to electromagnetic radiation in one or more of the infrared, the visible and the ultraviolet spectral range.
  • the term “ultraviolet spectral range” generally refers to electromagnetic radiation having a wavelength of 1 nm to 380 nm, preferably of 100 nm to 380 nm.
  • the term “visible spectral range” generally refers to a spectral range of 380 nm to 760 nm.
  • light used for the typical purposes of the present invention is light in the infrared (IR) spectral range, more preferably in the near infrared (NIR) and/or the mid infrared (MidIR) spectral range, especially light having a wavelength of 1 µm to 5 µm, preferably of 1 µm to 3 µm.
  • the term “illuminate” may refer to the process of exposing at least one element to light.
  • the term “projector” may refer to a device configured for generating or providing light in the sense of the above-mentioned definition.
  • the projector may be a pattern projector, a floodlight projector, or both at the same time; alternatively, the projector may repeatedly switch between illuminating patterned light and floodlight.
  • the term “pattern projector” may refer to a device configured for generating or providing at least one light pattern, in particular at least one infrared light pattern.
  • the term “light pattern” may refer to at least one pattern comprising a plurality of light spots.
  • the light spot may be at least partially spatially extended.
  • At least one spot or any spot may have an arbitrary shape. In some cases a circular shape of at least one spot or any spot may be preferred.
  • the spots may be arranged by considering a structure of a display comprised by a device that further comprises the optoelectronic apparatus. Typically, an arrangement of an OLED-pixel-structure of the display may be considered.
  • the term “infrared light pattern” may refer to a light pattern comprising spots in the infrared spectral range.
  • the infrared light pattern may be a near infrared light pattern.
  • the infrared light may be coherent.
  • the infrared light pattern may be a coherent infrared light pattern.
  • the infrared light pattern may comprise at least one regular and/or constant and/or periodic pattern such as a triangular pattern, a rectangular pattern, a hexagonal pattern or a pattern comprising further convex tilings.
  • the infrared light pattern is a hexagonal pattern, preferably a hexagonal infrared light pattern, preferably a 2/5 hexagonal infrared light pattern.
  • Using a periodical 2/5 hexagonal pattern can allow distinguishing between artefacts and usable signal.
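For illustration, a plain hexagonal spot lattice can be generated with a few lines of Python; the specific 2/5 sub-sampling named in the text is not reproduced here, so this sketch only shows the underlying hexagonal geometry:

```python
import numpy as np

def hexagonal_pattern(rows, cols, pitch=1.0):
    """Return (x, y) spot positions on a plain hexagonal grid."""
    points = []
    for r in range(rows):
        x_offset = pitch / 2 if r % 2 else 0.0   # shift every other row
        for c in range(cols):
            points.append((c * pitch + x_offset, r * pitch * np.sqrt(3) / 2))
    return np.array(points)

spots = hexagonal_pattern(20, 25)   # 500 spots, within the claimed beam counts
```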
  • the projector may comprise at least one pattern projector configured for generating the infrared light pattern.
  • the pattern projector may comprise at least one emitter, in particular a plurality of emitters.
  • the term “emitter” may refer to at least one arbitrary device configured for providing at least one light beam. The light beam may generate the infrared light pattern.
  • the emitter may comprise at least one element selected from the group consisting of at least one laser source such as at least one semi-conductor laser, at least one double heterostructure laser, at least one external cavity laser, at least one separate confinement heterostructure laser, at least one quantum cascade laser, at least one distributed Bragg reflector laser, at least one polariton laser, at least one hybrid silicon laser, at least one extended cavity diode laser, at least one quantum dot laser, at least one volume Bragg grating laser, at least one Indium Arsenide laser, at least one Gallium Arsenide laser, at least one transistor laser, at least one diode pumped laser, at least one distributed feedback laser, at least one quantum well laser, at least one interband cascade laser, at least one semiconductor ring laser, at least one vertical cavity surface emitting laser (VCSEL); at least one non-laser light source such as at least one LED or at least one light bulb.
  • the pattern projector comprises at least one VCSEL, preferably a plurality of VCSELs.
  • the plurality of VCSELs may be arranged in at least one array, e.g. comprising a matrix of VCSELs.
  • the VCSELs may be arranged on the same substrate, or on different substrates.
  • the term “vertical-cavity surface-emitting laser” may refer to a semiconductor laser diode configured for laser beam emission perpendicular with respect to a top surface. Examples for VCSELs can be found e.g. in en.wikipedia.org/wiki/Vertical-cavity_surface-emitting_laser.
  • VCSELs are generally known to the skilled person such as from WO 2017/222618 A.
  • Each of the VCSELs is configured for generating at least one light beam.
  • the plurality of generated spots may be associated with the infrared light pattern.
  • the VCSELs may be configured for emitting light beams at a wavelength range from 800 to 1000 nm.
  • the VCSELs may be configured for emitting light beams at 808 nm, 850 nm, 940 nm, and/or 980 nm.
  • the VCSELs emit light at 940 nm, since terrestrial sun radiation has a local minimum in irradiance at this wavelength, e.g. as described in CIE 085-1989 “Solar Spectral Irradiance”.
  • the pattern projector may comprise at least one optical element configured for increasing, e.g. duplicating, the number of spots generated by the pattern projector.
  • the pattern projector, particularly the optical element, may comprise at least one diffractive optical element (DOE) and/or at least one metasurface element.
  • the DOE and/or the metasurface element may be configured for generating multiple light beams from a single incoming light beam. Further arrangements, particularly comprising a different number of projecting VCSEL and/or at least one different optical element configured for increasing the number of spots may be possible. Other multiplication factors are possible. For example, a VCSEL or a plurality of VCSELs may be used and the generated laser spots may be duplicated by using at least one DOE.
  • the pattern projector may be configured to project patterned light comprising fewer than 4000 light beams, preferably fewer than 3000 light beams, more preferably fewer than 2000 light beams, most preferably fewer than 1000 light beams.
  • the patterned light may comprise 100 to 4000 light beams or 200 to 3000 light beams or 300 to 2000 light beams or 500 to 1000 light beams.
  • the pattern projector may comprise at least one transfer device.
  • transfer device also denoted as “transfer system” may refer to one or more optical elements which are adapted to modify the light beam, particularly the light beam used for generating at least a portion of the infrared light pattern, such as by modifying one or more of a beam parameter of the light beam, a width of the light beam or a direction of the light beam.
  • the transfer device may comprise at least one imaging optical device.
  • the transfer device specifically may comprise one or more of: at least one lens, for example at least one lens selected from the group consisting of at least one focus-tunable lens, at least one aspheric lens, at least one spherical lens, at least one Fresnel lens; at least one diffractive optical element; at least one concave mirror; at least one beam deflection element, preferably at least one mirror; at least one beam splitting element, preferably at least one of a beam splitting cube or a beam splitting mirror; at least one multilens system; at least one holographic optical element; at least one meta optical element.
  • the transfer device comprises at least one refractive optical lens stack.
  • the transfer device may comprise a multi-lens system having refractive properties.
  • the term “flood projector” may refer to at least one device configured for providing substantially continuous spatial illumination.
  • the flood projector may illuminate a measurement area, such as a user, a portion of the user and/or a face of the user, with a spatially constant or essentially constant illumination intensity.
  • the term “flood light” may refer to substantially continuous spatial illumination, in particular diffuse and/or uniform illumination.
  • the flood light has a wavelength in the infrared range, in particular in the near infrared range.
  • the flood projector may comprise at least one VCSEL, preferably a plurality of VCSELs.
  • substantially continuous spatial illumination may refer to uniform spatial illumination, wherein areas of non-uniformity are possible.
  • a relative distance between the flood projector and the pattern projector may be below 3.0 mm.
  • the relative distance between the flood projector and the pattern projector may be below 2.5 mm, preferably below 2.0 mm.
  • the pattern projector and the flood projector may be combined into one module.
  • the pattern projector and the flood projector may be arranged on the same substrate, in particular having a minimum relative distance.
  • the minimum relative distance may be defined by a physical extension of the flood projector and the pattern projector.
  • Arranging the pattern projector and the flood projector having a relative distance below 3.0 mm can result in decreased space requirement of the two projectors.
  • said projectors can even be combined into one module. Such a reduced space requirement can allow reducing the transparent area(s) in a display necessary for operation of the projector(s) behind the display.
  • the pattern projector and the flood projector may comprise at least one VCSEL, preferably a plurality of VCSELs.
  • the pattern projector may comprise a plurality of first VCSELs mounted on a first platform.
  • the flood projector may comprise a plurality of second VCSELs mounted on a second platform.
  • the second platform may be beside the first platform.
  • the optoelectronic apparatus may comprise a heat sink. Above the heat sink a first increment comprising the first platform may be attached. Above the heat sink a second increment comprising the second platform may be attached. The second increment may be different from the first increment.
  • the first platform may be more distant from the optical element configured for increasing, e.g. duplicating, the number of spots.
  • the second platform may be closer to the optical element.
  • the beams emitted from the second VCSELs may be defocused and thus form overlapping spots. This leads to a substantially continuous illumination and, thus, to flood illumination.
  • the projector may be positioned such that it can emit light through a transparent display as described above. Hence, light emitted by the projector crosses the transparent display before it impinges on the driver. From the driver's view, the projector is placed behind the transparent display.
  • the face recognition system may further comprise a camera.
  • the term “camera” may refer to at least one unit of the optoelectronic apparatus configured for generating at least one image.
  • the image may be generated via a hardware and/or a software interface, which may be considered as the camera.
  • image generation may refer to capturing and/or generating and/or determining and/or recording at least one image by using the camera.
  • the image generation may comprise imaging and/or recording the image.
  • the image generation may comprise capturing a single image and/or a plurality of images such as a sequence of images.
  • the capturing and/or generating and/or determining and/or recording of the image may be caused and/or initiated by the hardware and/or the software interface.
  • the image generation may comprise recording continuously a sequence of images such as a video or a movie.
  • the image generation may be initiated by a user action or may automatically be initiated, e.g. once the presence of at least one object or user within a field of view and/or within a predetermined sector of the field of view of the camera is automatically detected.
  • the camera may comprise at least one optical sensor, in particular at least one pixelated optical sensor.
  • the camera may comprise at least one CMOS sensor or at least one CCD chip.
  • the camera may comprise at least one CMOS sensor, which may be sensitive in the infrared spectral range.
  • image may refer to data recorded by using the optical sensor, such as a plurality of electronic readings from the CMOS or CCD chip.
  • the image may comprise raw image data or may be a pre-processed image.
  • the pre-processing may comprise applying at least one filter to the raw image data and/or at least one background correction and/or at least one background subtraction.
  • the camera may comprise a color camera, e.g. comprising at least color pixels.
  • the camera may comprise a color CMOS camera.
  • the camera may comprise black and white pixels and color pixels.
  • the color pixels and the black and white pixels may be combined internally in the camera.
  • the camera may comprise at least one color camera (e.g. RGB) and/or at least one black and white camera, such as a black and white CMOS.
  • the camera may comprise at least one black and white CMOS chip.
  • the camera generally may comprise a one-dimensional or two-dimensional array of image sensors, such as pixels.
  • the color camera may be an internal and/or external camera of a device comprising the optoelectronic apparatus.
  • the internal and/or external camera of the device may be accessed via a hardware and/or a software interface comprised by the optoelectronic apparatus, which is used as the camera.
  • the device is or comprises a smartphone
  • the image generating unit may be a front camera, such as a selfie camera, and/or back camera of the smartphone.
  • the camera may have a field of view between 10°x10° and 75°x75°, preferably 55°x65°.
  • the camera may have a resolution below 2 MP, preferably between 0.3 MP and 1.5 MP.
  • the camera may comprise further elements, such as one or more optical elements, e.g. one or more lenses.
  • the optical sensor may be a fixed-focus camera, having at least one lens which is fixedly adjusted with respect to the camera.
  • the camera may also comprise one or more variable lenses which may be adjusted, automatically or manually.
  • Other cameras are feasible.
  • pattern image may refer to an image generated by the camera while illuminating the infrared light pattern, e.g. on an object and/or a user.
  • the pattern image may comprise an image showing a user, in particular at least parts of the face of the user, while the user is being illuminated with the infrared light pattern, particularly on a respective area of interest comprised by the image.
  • the pattern image may be generated by imaging and/or recording light reflected by an object and/or user which is illuminated by the infrared light pattern.
  • the pattern image showing the user may comprise at least a portion of the illuminated infrared light pattern on at least a portion of the user.
  • the illumination by the pattern illumination source and the imaging by using the optical sensor may be synchronized, e.g. by using at least one control unit of the optoelectronic apparatus.
  • the term “flood image” may refer to an image generated by the camera while the illumination source is illuminating with infrared flood light, e.g. onto an object and/or a user.
  • the flood image may comprise an image showing a user, in particular the face of the user, while the user is being illuminated with the flood light.
  • the flood image may be generated by imaging and/or recording light reflected by an object and/or user which is illuminated by the flood light.
  • the flood image showing the user may comprise at least a portion of the flood light on at least a portion of the user.
  • the illumination by the flood illumination source and the imaging by using the optical sensor may be synchronized, e.g. by using at least one control unit of the optoelectronic apparatus.
  • the camera may be configured for imaging and/or recording the pattern image and the flood image at the same time or at different times.
  • the camera may be configured for imaging and/or recording the pattern image and the flood image at at least partially overlapping measurement areas or equivalents of the measurement areas.
  • the face recognition system may further comprise a processor.
  • the processor may be configured, such as by software programming, for performing one or more evaluation operations. At least one or any component of a computer program configured for performing the authentication process may be executed by the processing device. Alternatively or in addition, the processor may be or may comprise a connection interface. The connection interface may be configured to transfer data from the device to a remote device; or vice versa. At least one or any component of a computer program configured for performing the authentication process may be executed by the remote device.
  • the processor may be configured for identifying the driver based on the flood image. Particularly therefore, the processor may forward data to a remote device. Alternatively or in addition, the processor may perform the identification of the driver based on the flood image, particularly by running an appropriate computer program having a respective functionality.
  • the term “identifying” may refer to identity check and/or verifying an identity of the driver.
  • the identifying of the driver may comprise analyzing the flood image.
  • the analyzing of the flood image may comprise performing a face verification of the imaged face to be the driver's face.
  • the analyzing may comprise one or more of the following: a filtering; a selection of at least one region of interest; a formation of a difference image between the flood image and at least one offset; an inversion of the flood image; a background correction; a decomposition into color channels; a decomposition into hue, saturation, and brightness channels; a frequency decomposition; a singular value decomposition; applying a Canny edge detector; applying a Laplacian of Gaussian filter; applying a Difference of Gaussian filter; applying a Sobel operator; applying a Laplace operator; applying a Scharr operator; applying a Prewitt operator; applying a Roberts operator; applying a Kirsch operator; applying a high-pass filter; applying a low-pass filter; applying a Fourier transformation; applying a Radon transformation; applying a Hough transformation; applying a wavelet transformation; a thresholding; creating a binary image.
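  • As an illustration only: a minimal sketch of a few of the operations listed above, using the OpenCV and NumPy libraries. The function name, the region-of-interest crop and all parameter values are hypothetical and not taken from this disclosure.
```python
import cv2
import numpy as np

def analyze_flood_image(flood_image: np.ndarray) -> np.ndarray:
    """Apply a selection of the listed analysis steps to a grayscale
    (8-bit, single-channel) flood image."""
    # Select a region of interest (fixed crop here; in practice it could be
    # found automatically, e.g. by face detection).
    roi = flood_image[100:400, 150:450]

    # Difference of Gaussian filter as a simple band-pass.
    dog = cv2.GaussianBlur(roi, (3, 3), 0).astype(np.int16) \
        - cv2.GaussianBlur(roi, (9, 9), 0).astype(np.int16)

    # Sobel operator (horizontal gradient) and Canny edge detector.
    edges_sobel = cv2.Sobel(roi, cv2.CV_64F, 1, 0, ksize=3)
    edges_canny = cv2.Canny(roi, threshold1=50, threshold2=150)

    # Thresholding, creating a binary image (Otsu's method).
    _, binary = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```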
  • the region of interest may be determined manually by a user or may be determined automatically, such as by recognizing the user within the image.
  • the analyzing of the flood image may comprise using at least one image recognition technique, in particular a face recognition technique.
  • An image recognition technique comprises at least one process of identifying the user in an image.
  • the image recognition may comprise using at least one technique selected from the group consisting of: color-based image recognition, e.g. using features such as hue, saturation, and value (HSV) or red, green, blue (RGB); template matching, for example as illustrated on https://www.mathworks.com/help/vision/ug/pattern-matching.html; image segmentation and/or blob analysis, e.g. using size, color, or shape; machine learning and/or deep learning, e.g. using at least one convolutional neural network.
  • the neural network may be trained by the user, such as in a training procedure, in which the user is indicated to take at least one or a plurality of pictures showing himself.
  • the analyzing of the flood image may comprise determining a plurality of facial features.
  • the analyzing may comprise comparing, in particular matching, the determined facial features with template features.
  • the template features may be features extracted from at least one template.
  • the template may be or may comprise at least one image generated in an enrollment process, e.g. when initializing the authentication system. The template may be an image of an authorized user.
  • the template features and/or the facial feature may comprise a vector.
  • Matching of the features may comprise determining a distance between the vectors.
  • the identifying of the user may comprise comparing the distance of the vectors to at least one predefined limit, wherein the user is successfully identified in case the distance is smaller than or equal to the predefined limit, at least within tolerances. The user is declined and/or rejected otherwise.
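  • A minimal sketch of such a vector-distance comparison, assuming a Euclidean distance and an arbitrary example limit (the function name and the limit value are illustrative, not taken from this disclosure):
```python
import numpy as np

def identify(facial_features: np.ndarray,
             template_features: np.ndarray,
             limit: float = 1.0) -> bool:
    """Successful identification if the Euclidean distance between the
    extracted feature vector and the enrolled template is within the limit."""
    return float(np.linalg.norm(facial_features - template_features)) <= limit
```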
  • the image recognition may comprise using at least one model, in particular a trained model comprising at least one face recognition model.
  • the analyzing of the flood image may be performed by using a face recognition system, such as FaceNet, e.g. as described in Florian Schroff, Dmitry Kalenichenko, James Philbin, "FaceNet: A Unified Embedding for Face Recognition and Clustering”, arXiv: 1503.03832.
  • the trained model may comprise at least one convolutional neural network.
  • the convolutional neural network may be designed as described in M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks”, CoRR, abs/1311.2901, 2013, or C. Szegedy et al., “Going deeper with convolutions”, CoRR, abs/1409.4842, 2014.
  • the processor may be configured to correct image artifacts caused by diffraction of the light when passing the transparent display.
  • the term “correct” may mean to partially or fully remove the artifacts, or to tag them so they can be excluded from further processing, in particular from determining whether the imaged person is an authorized person.
  • Correcting image artifacts may take into account the information about the transparent display, in particular the dimensions of the pixels or the distance of repeating features to each other. This information can facilitate identifying artifacts, as diffraction patterns can be calculated and compared to the image. Correcting image artifacts may comprise identifying reflection features, sorting them by brightness and selecting the locally brightest features.
  • the information of the transparent display may be used, in particular a distance in the image by which a light beam may be displaced by diffraction on the transparent display may be calculated based on the information about the transparent display. This method can be particularly useful for pattern images. Further details are disclosed in WO 2021/105265 A1.
  • the processor may be configured to determine the quality of the image from the camera. Determination of the quality of the image can mean determining the brightness of the image, in particular determining if the brightness of the image is within a predetermined range. This predetermined range may be selected such that image recognition yields optimum results.
  • the processor may generate a signal indicative of the brightness level of the image. Such a signal may be used, for example by a controller of the projector, to adjust the illumination power of the projector. The signal may also be used by a controller of the camera to adjust the camera settings according to the signal indicative of the brightness level and/or trigger the camera to generate a new image. Determination of the quality of the image can also mean determining the head position of the person, in particular determining the angle of the face of the person relative to the camera.
  • the processor may generate a signal indicative of the head position of the person. Such a signal may be used, for example, by a controller of the camera to trigger the camera to generate a new image. The signal may also be used to inform the user to turn the head, for example by displaying such information on the transparent display.
  • the processor may be further configured for determining material data based on the pattern image. Particularly therefore, the processor may forward data to a remote device. Alternatively or in addition, the processor may perform the material determination based on the pattern image, particularly by running an appropriate computer program having a respective functionality. Particularly by considering the material as a parameter for validating the authentication process, the authentication process may be robust against being outwitted by using a recorded image of the user.
  • the processor may be configured for extracting the material data from the pattern image by beam profile analysis of the light spots.
  • beam profile analysis can allow for providing a reliable classification of scenes based on a few light spots.
  • Each of the light spots of the pattern image may comprise a beam profile.
  • the term “beam profile” may generally refer to at least one intensity distribution of the light spot on the optical sensor as a function of the pixel.
  • the beam profile may be selected from the group consisting of a trapezoid beam profile; a triangle beam profile; a conical beam profile and a linear combination of Gaussian beam profiles.
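  • The specific beam-profile features used are not spelled out here; as a purely hypothetical illustration, one simple feature is the ratio of the intensity integrated over the spot centre to the intensity integrated over the whole spot, which decreases when diffuse scattering (e.g. in skin) broadens the spot. The function name, the radius and the assumption of a centred spot are illustrative only.
```python
import numpy as np

def beam_profile_feature(spot: np.ndarray, inner_radius: int = 3) -> float:
    """Hypothetical material feature: fraction of the spot intensity that
    falls within a small disc around the spot centre."""
    h, w = spot.shape
    yy, xx = np.ogrid[:h, :w]
    cy, cx = h // 2, w // 2
    centre_mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= inner_radius ** 2
    return float(spot[centre_mask].sum() / spot.sum())
```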
  • the processor may be configured for outsourcing at least one step of the authentication process, such as the identification of the user, and/or at least one step of the validation of the authentication process, such as the consideration of the material data, to a remote device, specifically a server and/or a cloud server.
  • the authentication system and the remote device may be part of a computer network, particularly the internet.
  • the authentication system may transmit the generated data and/or data associated to an intermediate step of the authentication process and/or its validation to the remote device.
  • the processor may be and/or may comprise a connection interface configured for transmitting information to the remote device. Data generated by the remote device used in the authentication process and/or its validation may further be transmitted to the authentication system. This data may be received by the connection interface comprised by the authentication system.
  • connection interface may specifically be configured for transmitting or exchanging information.
  • the connection interface may provide a data transfer connection, e.g. Bluetooth, NFC, or inductive coupling.
  • connection interface may be or may comprise at least one port comprising one or more of a network or internet port, a USB-port, and a disk drive.
  • the processor is configured for using a facial recognition authentication process operating on the pattern image and/or extracted material data.
  • the processor may be configured for extracting material data from the pattern image.
  • extracting material data from the pattern image may comprise generating the material type and/or data derived from the material type.
  • extracting material data may be based on the pattern image.
  • Material data may be extracted by using at least one model. Extracting material data may include providing the pattern image to a model and/or receiving material data from the model.
  • Providing the image to a model may comprise, and may be followed by, receiving the pattern image at an input layer of the model or via a model loss function.
  • the model may be a data-driven model.
  • Data-driven model may comprise a convolutional neural network and/or an encoder-decoder structure such as an autoencoder.
  • generating a representation may be based on FFT, wavelets, or deep learning methods such as CNNs, energy models, normalizing flows, GANs, vision transformers or transformers used for natural language processing, autoregressive image modeling, deep autoencoders, or deep energy-based models.
  • Supervised or unsupervised schemes may be applicable to generate a representation, also called an embedding in machine-learning terminology, e.g. in a cosine or Euclidean metric.
  • the data-driven model may be parametrized according to a training data set including at least one image and material data, preferably at least one pattern image and material data.
  • extracting material data may include providing the image to a model and/or receiving material data from the model.
  • the data-driven model may be trained according to a training data set including at least one image and material data.
  • the data-driven model may be parametrized according to a training data set including at least one image and material data.
  • the data-driven model may be parametrized according to a training data set to receive the image and provide material data based on the received image.
  • the data-driven model may be trained according to a training data set to receive the image and provide material data as output based on the received image.
  • the training data set may comprise at least one image and material data, preferably material data associated with the at least one image.
  • the image may comprise a representation of the image.
  • the representation may be a lower dimensional representation of the image.
  • the representation may comprise at least a part of the data or the information associated with the image.
  • the representation of an image may comprise a feature vector.
  • determining a representation, in particular a lower-dimensional representation, may be based on principal component analysis (PCA) mapping or radial basis function (RBF) mapping. Determining a representation may also be referred to as generating a representation. Generating a representation based on PCA mapping may include clustering based on features in the pattern image and/or partial image. Additionally or alternatively, generating a representation may be based on neural network structures suitable for reducing dimensionality. Neural network structures suitable for reducing dimensionality may comprise an encoder and/or a decoder. In an example, the neural network structure may be an autoencoder.
  • neural network structure may comprise a convolutional neural network (CNN).
  • the CNN may comprise at least one convolutional layer and/or at least one pooling layer.
  • CNNs may reduce the dimensionality of a partial image and/or an image by applying a convolution, e.g. based on a convolutional layer, and/or by pooling. Applying a convolution may be suitable for selecting features related to material information of the pattern image.
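  • A minimal sketch of generating such a lower-dimensional representation with PCA using scikit-learn; the data shapes and the number of components are arbitrary placeholders, and a CNN encoder could be substituted as described above.
```python
import numpy as np
from sklearn.decomposition import PCA

# One flattened pattern image (or image patch) per row; random placeholders.
X = np.random.rand(200, 64 * 64)

pca = PCA(n_components=32)        # target dimensionality chosen arbitrarily
embeddings = pca.fit_transform(X) # lower-dimensional representations
print(embeddings.shape)           # (200, 32)
```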
  • a model may be suitable for determining an output based on an input.
  • model may be suitable for determining material data based on an image as input.
  • a model may be a deterministic model, a data-driven model or a hybrid model.
  • the deterministic model preferably reflects physical phenomena in mathematical form, e.g. including first-principles models.
  • a deterministic model may comprise a set of equations that describe an interaction between the material and the patterned electromagnetic radiation thereby resulting in a condition measure, a vital sign measure or the like.
  • a data-driven model may be a classification model.
  • a hybrid model may be a classification model comprising at least one machine-learning architecture with deterministic or statistical adaptations and model parameters. Statistical or deterministic adaptations may be introduced to improve the quality of the results since those provide a systematic relation between empiricism and theory.
  • the data-driven model may be a classification model.
  • the classification model may comprise at least one machine-learning architecture and model parameters.
  • the machine-learning architecture may be or may comprise one or more of: linear regression, logistic regression, random forest, piecewise linear, nonlinear classifiers, support vector machines, naive Bayes classifications, nearest neighbors, neural networks, convolutional neural networks, generative adversarial networks, support vector machines, or gradient boosting algorithms or the like.
  • the model can be a multi-scale neural network or a recurrent neural network (RNN) such as, but not limited to, a gated recurrent unit (GRU) recurrent neural network or a long short-term memory (LSTM) recurrent neural network.
  • RNN recurrent neural network
  • GRU gated recurrent unit
  • LSTM long short-term memory
  • the data-driven model may be trained based on the training data set. Training the model may include parametrizing the model. The term training may also be denoted as learning. The term specifically may refer to a process of building the classification model, in particular determining and/or updating parameters of the classification model. Updating parameters of the classification model may also be referred to as retraining. Retraining may be included when referring to training herein.
  • the training data set may include at least one image and material information.
  • extracting material data from the image with a data-driven model may comprise providing the image to a data-driven model. Additionally or alternatively, extracting material data from the image with a data-driven model may comprise generating an embedding associated with the image based on the data-driven model.
  • An embedding may refer to a lower dimensional representation associated with the image, such as a feature vector. The feature vector may be suitable for suppressing the background while maintaining the material signature indicating the material data.
  • background may refer to information independent of the material signature and/or the material data. Further, background may refer to information related to biometric features such as facial features.
  • Material data may be determined with the data-driven model based on the embedding associated with the image.
  • extracting material data from the image by providing the image to a data-driven model may comprise transforming the image into material data, in particular a material feature vector indicating the material data.
  • material data may comprise further the material feature vector and/or material feature vector may be used for determining material data.
  • authentication process may be validated based on the extracted material data.
  • the validating based on the extracted material data may comprise determining if the extracted material data corresponds to desired material data. Determining if extracted material data matches the desired material data may be referred to as validating. Allowing or declining the user and/or object to perform at least one operation on the device that requires authentication based on the material data may comprise validating the authentication or authentication process.
  • a comparison of material data with desired material data may result in allowing and/or declining the user and/or object to perform at least one operation that requires authentication.
  • skin as desired material data may be compared with non-skin material or silicone as material data, and the result may be a declination since silicone or non-skin material differs from skin.
  • the authentication process or its validation may include generating at least one feature vector from the material data and matching the material feature vector with an associated reference template vector for material.
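  • A minimal sketch of such a validation step, assuming a cosine-distance match of the material feature vector against a stored skin reference template; the function name and the limit value are illustrative, not taken from this disclosure.
```python
import numpy as np

def validate_material(material_vec: np.ndarray,
                      skin_template: np.ndarray,
                      limit: float = 0.2) -> bool:
    """Accept the authentication only if the cosine distance between the
    extracted material feature vector and the skin template is small."""
    cos_sim = float(material_vec @ skin_template /
                    (np.linalg.norm(material_vec) * np.linalg.norm(skin_template)))
    return (1.0 - cos_sim) <= limit
```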
  • the face recognition system may be configured to output the identity of a driver or it may output an indicator indicating if the person having been identified is an authorized driver, i.e. a driver who has the right to use the vehicle.
  • the authorization may involve data from a database storing data of authorized drivers.
  • the face recognition system may be configured to identify the driver from which the concentration of the body substance has been determined.
  • the spectroscopic device may hence output the concentration of the body substance of the driver associated with the driver's identity as determined by the face recognition system.
  • Such data may be used to ensure that the driver driving the vehicle is the same as the driver for which the concentration of the body substance has been determined.
  • the face recognition system may continuously or repeatedly identify the driver and compare the result with the identity of the driver for which the concentration of the body substance has been determined. In this way, manipulation can be avoided, for example a scenario in which a sober person places a body part on the spectrometer while a different, intoxicated person drives the vehicle.
  • the concentration of a body substance determined by the chemometric model may be output.
  • the term "outputting'' may refer to writing the concentration of a body substance to a non-transitory data storage medium, for example into a file or database, display it on a user interface, for example a screen, or both. Outputting may further mean to forward the concentration of a body substance to a computer system for further processing, for example an electronic control unit (ECU) or the on-board computer system. It is also possible to output the concentration of a body substance through an interface to a cloud system for storage and/or further processing.
  • the concentration of the body substance may be used to determine the driver's fitness to drive a vehicle.
  • the processor of the spectroscopic device may be configured to determine the driver's fitness to drive a vehicle.
  • the board computer of the vehicle or an ECU may be configured to receive the concentration of the body substance and to determine the driver’s fitness to drive a vehicle using the concentration of the body substance.
  • the determination may involve determining if the concentration of the body substance exceeds or falls below a threshold.
  • the threshold may be given by law, for example for the blood alcohol concentration or the THC concentration.
  • the threshold may also be specific for a certain group of drivers, for example a glucose level for patients suffering from type 1 diabetes.
  • the threshold may be specific for a specific driver, i.e. a personal threshold, for example for medical conditions like dehydration, which may depend on the specific skin type of a person.
  • Driver-specific thresholds may be determined using the driver identification described above.
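  • A minimal sketch of such a threshold check with driver-specific overrides; the substance names, units and limit values are placeholders chosen for illustration, not legal values or values from this disclosure.
```python
from typing import Optional

# Illustrative default limits only; legal thresholds vary by jurisdiction.
DEFAULT_LIMITS = {"blood_alcohol_g_per_l": 0.5, "thc_ng_per_ml": 1.0}

def fit_to_drive(substance: str, concentration: float,
                 personal_limits: Optional[dict] = None) -> bool:
    """Driver-specific limits (e.g. 0.0 g/l after a court order) override the
    defaults; the driver is considered fit below the applicable limit."""
    limits = {**DEFAULT_LIMITS, **(personal_limits or {})}
    return concentration < limits[substance]

print(fit_to_drive("blood_alcohol_g_per_l", 0.3))   # True
print(fit_to_drive("blood_alcohol_g_per_l", 0.3,
                   {"blood_alcohol_g_per_l": 0.0})) # False (personal limit)
```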
  • the concentration of a body substance of the driver may be used for controlling a functionality of the vehicle.
  • a control signal may be generated using the concentration of the body substance.
  • the control signal may be usable to control a vehicle access control system, for example to fully exclude a driver from using a vehicle with an ignition interlock if the concentration of a body substance is above a threshold or to partially exclude the driver if the concentration of a body substance is within a certain range, for example by restricting certain functionalities of a vehicle like the engine power, the maximum achievable speed or the entertainment system.
  • the control signal may be a Boolean value indicating whether the access can be granted or not.
  • the control signal may be a numeric value, for example a classifier indicating the extent of access which can be granted to the driver.
  • the control signal may be generated by determining if the concentration of a body substance is above or below a preset threshold.
  • the determination of the control signal may involve region-specific settings, for example a country or state-specific concentration of a body substance threshold.
  • the region-specific settings may be obtained from a storage medium taking into account the geographic location of the vehicle, for example obtained from a GPS system.
  • the determination of the control signal may involve driver data, for example the driver’s age to determine an age-specific threshold of blood alcohol concentration.
  • the determination of the control signal may involve personalized driver data, for example a personalized threshold of blood alcohol concentration which may be lower than the general threshold, for example due to a court order as a consequence of a prior driving under the influence.
  • personalized driver data may be selected from a database using the driver identity obtained from driver identification as described above.
  • the control signal may be used for geofence lockout, for example to prevent the vehicle from leaving a designated area, for example a home or highways, if a preset concentration of a body substance is exceeded; for passive alert, for example to discreetly notify emergency contacts or roadside assistance if a preset concentration of a body substance is exceeded; for adapting autonomous driving functionality, for example to increase the distance kept to vehicles driving in front and increase the brake system pressure to allow for more effective braking and avoid accidents due to reduced reaction time if the concentration of a body substance is within a preset range; for data logging, for example to maintain a discreet log of concentration of a body substance readings for personal health tracking or potential use by law enforcement; for determining eligibility, for example to restrict driving privileges based on concentration of a body substance for individuals with prior driving under the influence convictions or for novice drivers; for insurance premium adjustments, for example to adjust insurance premiums based on concentration of a body substance measurement history to encourage responsible driving behavior; for emergency response decisions, for example to improve decision making of law enforcement and medical personnel, taking into account concentration of a body substance readings.
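  • A minimal sketch of generating such a control signal, assuming a simple three-level access scheme (full, restricted, locked); the thresholds, band width and level encoding are illustrative assumptions.
```python
from dataclasses import dataclass

@dataclass
class ControlSignal:
    access_granted: bool
    access_level: int       # e.g. 0 = locked, 1 = restricted, 2 = full access

def make_control_signal(concentration: float, threshold: float,
                        restricted_band: float = 0.1) -> ControlSignal:
    """Full access below the threshold, restricted access (e.g. limited engine
    power or maximum speed) in a small band above it, locked out otherwise."""
    if concentration < threshold:
        return ControlSignal(True, 2)
    if concentration < threshold + restricted_band:
        return ControlSignal(True, 1)
    return ControlSignal(False, 0)

print(make_control_signal(0.55, threshold=0.5))   # restricted access
```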
  • the present invention further relates to a non-transient computer-readable medium including instructions that, when executed by one or more processors, cause the one or more processors to perform the method according to the present invention.
  • computer-readable data medium may refer to any suitable data storage device or computer readable memory on which is stored one or more sets of instructions (for example software) embodying any one or more of the methodologies or functions described herein.
  • the instructions may also reside, completely or at least partially, within the main memory and/or within the processor during execution thereof by the computer, main memory, and processing device, which may constitute computer-readable storage media.
  • the instructions may further be transmitted or received over a network via a network interface device.
  • Computer-readable data media include hard drives, for example on a server, USB storage devices, CDs, DVDs or Blu-ray discs.
  • the computer program may contain all functionalities and data required for execution of the method according to the present invention or it may provide interfaces to have parts of the method processed on remote systems, for example on a cloud system.
  • Figure 1 illustrates an example for a spectroscopic device.
  • Figure 2 shows possible placements of the spectroscopic device in the interior of a car.
  • Figure 3 illustrates an example for the method of the present invention.
  • Figure 4 illustrates two examples of how the concentration of a body substance can be determined from spectroscopic data, driver data and/or environmental data.
  • Figure 5 illustrates an example for using the invention for controlling a functionality of a vehicle.
  • Figure 1 illustrates an example for a spectroscopic device.
  • the spectroscopic device may be suitable to acquire spectroscopic data of the driver from the driver’s finger 120.
  • the spectroscopic device may comprise a spectroscopic module 100.
  • the spectroscopic module 100 may comprise a substrate 101, for example a printed circuit board (PCB).
  • the spectroscopic module 100 may comprise a light emitting element 102, for example an LED.
  • the LED may emit light of a desired wavelength, for example infrared light in the range of 750 nm to 2.5 µm.
  • the light emitting element 102 may emit a light ray 103 directed towards the finger 120 of the driver.
  • a cover 110 which is at least partially transparent to the light emitted by the light emitting element 102.
  • the cover 110 may be a sheet of glass or a polymer like polycarbonate or polymethyl methacrylate (PMMA).
  • the cover 110 may also be a transparent display, for example the display of a control panel or a multimedia system.
  • the spectroscopic module 100 may comprise a set of photosensors 104 which may be mounted on the substrate 101.
  • the set of photosensors 104 may comprise an array of photosensors, for example a 3×3 array. Each photosensor 105 may be sensitive to light at the wavelength range emitted by the light emitting element 102. The light ray 103 may impinge on the set of photosensors 104 after having penetrated into the finger 120. Each photosensor 105 may be covered with an optical filter 106. The optical filters 106 may be chosen to let pass light at different wavelengths, so each photosensor 105 receives a different wavelength range of light.
  • the photosensor 105 may comprise a photosensitive material, for example a photoconductor like lead sulfide (PbS). The photosensor may generate an electric signal depending on the light intensity of the light impinging on the photosensor 105.
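  • A minimal sketch of how firmware might read such a filtered photosensor array into a coarse spectrum; the filter wavelengths and the ADC interface are hypothetical, as the actual filter set of module 100 is not specified here.
```python
# Hypothetical filter wavelengths for a 3x3 photosensor array.
FILTER_WAVELENGTHS_NM = [850, 900, 950, 1000, 1050, 1100, 1150, 1200, 1250]

def read_spectrum(adc_read) -> dict:
    """Read each photosensor channel through its optical filter and return a
    coarse spectrum mapping wavelength (nm) to detector signal."""
    return {wl: adc_read(ch) for ch, wl in enumerate(FILTER_WAVELENGTHS_NM)}

# Usage with a stub ADC callable that returns a dummy signal per channel:
spectrum = read_spectrum(lambda ch: 0.1 * ch)
print(spectrum)
```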
  • the spectroscopic module 100 may comprise a processor 107 which may be mounted on the substrate 101.
  • the processor 107 may be operatively coupled to the light emitting element 102 and the photosensors 105, for example via electric conductors on the PCB.
  • the processor 107 may be a microcontroller configured to control the light emitting element 102, for example to switch it on during the measurement and switch it off afterwards.
  • the processor 107 may be a microcontroller configured to receive the electric signals from the photosensors 105 and convert them into digital signals by analog-to-digital conversion.
  • the processor may thus generate spectroscopic data, which may either be further processed to determine the concentration of a body substance of the driver, or be forwarded to an electronic control unit (ECU) of the vehicle or the on-board computer of the vehicle for determining the concentration of a body substance of the driver.
  • Figure 2 shows possible placements of the spectroscopic device in the interior of a car.
  • the figure shows the dashboard, the middle console, the steering wheel and the windshield of a car as seen from the inside of the car.
  • the spectroscopic device may be integrated into various places, for example behind a transparent display or as a separate device in parts of the car accessible to the driver.
  • the spectroscopic device may be integrated in the center above the windscreen (201).
  • Another option is to integrate the spectroscopic device into the interior mirror (202). This may be particularly useful if the mirror functionality is only mimicked by a display which displays the rear view recorded by a camera.
  • the spectroscopic device may be integrated into the A-column on the driver’s side (203).
  • the spectroscopic device may further be integrated into the steering wheel rim in a position where the driver puts his palm or fingers (205).
  • Another possibility is to place the driver monitoring system into an engine start-stop button (206), a display in the center of the dashboard (207), the side door, for example just beneath the side window (208), the door handle (209), or the arm rest in the side door (210).
  • the driver monitoring system may also be integrated into the center of the steering wheel (211), for example as part of a control system for the board computer or the entertainment system.
  • the gearshift lever (212) or the center console, such as a display in the center console (213) or a button (214) such as the board computer control button or the park brake button, are further options.
  • the latter may replace traditional controls with a display.
  • the space behind the steering wheel (204), the center of the dashboard (207) and the center console (213) may be combined in a continuous display behind which the spectroscopic device may be placed.
  • Figure 3 illustrates an example for the method of the present invention.
  • Spectroscopic data of the driver may be received (301) from a spectroscopic device integrated into a vehicle.
  • the concentration of a body substance, for example the blood alcohol concentration, of the driver may be determined (302) using a chemometric model which uses the spectroscopic data as input.
  • the concentration of the body substance of the driver may be output (303), for example to an ECU controlling the vehicle access or the on-board computer of a vehicle.
  • the determination of the concentration of the body substance (302) may involve driver data (306), for example received from memory or a sensor.
  • the driver data may be personalized data. Personalized data may be received for the driver, which has been identified (305).
  • the identification may be performed by a fingerprint sensor system or by an optical face recognition, for example by image analysis of an image from an RGB camera in the vehicle.
  • the determination of the concentration of a body substance (302) may involve environmental data (304), for example received from memory or a sensor such as a thermometer or a light sensor.
  • Figure 4 illustrates two examples of how the concentration of a body substance can be determined from spectroscopic data, driver data and/or environmental data.
  • spectroscopic data 411 may be input to a chemometric model comprising pre-processing 421, feature selection 422 and a machine learning model 423.
  • Spectroscopic data 411 may comprise a spectrum, for example a near infrared spectrum, obtained from a measurement with a spectrometer.
  • Pre-processing 421 may comprise baseline correction, for example first-order derivation; scatter correction, for example standard normal variate; smoothing, for example moving average filtering; scaling, for example Pareto scaling; and aggregation, for example spatial median.
  • the pre-processed spectroscopic data may subsequently undergo feature selection 422.
  • Feature selection 422 may reduce the dimensionality of the spectroscopic data 411, so training the machine learning model 423 requires less training data.
  • Feature selection 422 may for example involve principal component regression (PCR).
  • the thus pre-processed and feature-selected spectroscopic data may be passed as input to a machine learning model 423, for example an artificial neural network.
  • the machine learning model 423 may be parametrized to further receive the driver data 412 and the environmental data 413 as further input.
  • the driver data 412 and the environmental data 413 may be pre-processed before being input into the machine learning model 423, for example to adjust the format and the units of the data.
  • the machine learning model 423 may be trained with historic data comprising spectroscopic data, driver data and environmental data.
  • the machine learning model 423 may output the concentration of a body substance 415, for example as a quantity or concentration value.
  • a chemometric model which uses spectroscopic data, driver data and environmental data as input has the advantage that complex interplays between spectroscopic data, driver data and environmental data can be taken into account.
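  • A minimal sketch of such a chemometric pipeline with scikit-learn, using standard normal variate scatter correction, a first-order derivative, and PCA plus linear regression (i.e. PCR) as a stand-in for feature selection 422 and model 423; all data are random placeholders, and driver data 412 and environmental data 413 could be appended as additional input columns.
```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Random placeholder data: 100 NIR spectra with 256 wavelength bins and
# reference concentrations from a laboratory method.
spectra = np.random.rand(100, 256)
concentrations = np.random.rand(100)

def snv(X):
    """Standard normal variate: centre and scale each spectrum individually."""
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# Pre-processing 421: scatter correction followed by a first-order derivative.
X = np.gradient(snv(spectra), axis=1)

# Feature selection 422 + model 423: PCA plus linear regression (PCR).
model = make_pipeline(PCA(n_components=10), LinearRegression())
model.fit(X, concentrations)
predicted = model.predict(X)
```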
  • Figure 4b shows an alternative example for determination of a concentration of a body substance from spectroscopic data, driver data and/or environmental data.
  • Spectroscopic data 411 may be input to a chemometric model 431 which outputs an intermediate concentration of a body substance 432.
  • the chemometric model 431 may not be parametrized to take driver data 412 and/or environmental data 413 into account.
  • the intermediate concentration of a body substance 432 only depends on the spectroscopic data 411.
  • a refining model 433 may be employed.
  • the refining model 433 may be parametrized to receive the intermediate concentration of a body substance 432, the driver data 412 and the environmental data 413 and to output the concentration of a body substance 434.
  • the refining model 433 may comprise two sub-models, one which processes the driver data 412 and one which processes the environmental data 413.
  • the first sub-model may receive the intermediate concentration of a body substance 432 and the driver data 412 as input and output a refined concentration of a body substance.
  • a second sub-model may use the refined concentration of a body substance and the environmental data 413 as input and output the concentration of a body substance 434.
  • the refining model 433 may be a multivariate polynomial regression model which adjusts the intermediate concentration of a body substance 432 according to the driver data 412 and the environmental data 413 to arrive at the concentration of a body substance 434.
  • a refining model 433 has the advantage that the chemometric model 431 does not need a retraining for new driver data types or environmental data types.
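  • A minimal sketch of such a refining model as a multivariate polynomial regression over the intermediate concentration, driver data and environmental data; the data, the polynomial degree and the feature layout are illustrative placeholders.
```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Random placeholders: intermediate concentrations 432 plus two refinement
# inputs (e.g. driver age, ambient temperature).
intermediate = np.random.rand(100, 1)
driver_env = np.random.rand(100, 2)
reference = np.random.rand(100)

X = np.hstack([intermediate, driver_env])
refiner = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
refiner.fit(X, reference)                 # multivariate polynomial regression
refined_concentration = refiner.predict(X)
```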
  • Figure 5 illustrates an example for using the invention for controlling a functionality of a vehicle.
  • a measurement trigger event may be determined 501.
  • the measurement trigger event may be derived from driver data 508 and/or environmental data 509.
  • an RGB camera may have recorded an image of the driver.
  • the image may have been analyzed by an image algorithm.
  • the image algorithm may have detected an anomaly, for example an unusually red color of the face.
  • a spectroscopic measurement may be triggered 502.
  • the spectroscopic device may be triggered to record spectroscopic data of the driver.
  • spectroscopic data may be received 503.
  • the spectroscopic data may be used to determine a concentration of a body substance 504, for example the blood alcohol concentration.
  • the concentration of the body substance may be output 505, for example to an interface to the board computer, or an ECU of a vehicle part. Based on the concentration of the body substance, a control signal may be generated. For example, if a certain blood alcohol concentration is exceeded, the control signal may be directed to deny access to the vehicle or to stop the vehicle. The control signal may be used to control a functionality of the vehicle 507, for example the access control of the vehicle.
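  • A minimal, self-contained sketch of the overall flow of Figure 5 with stubbed-out components; all function names, values and the threshold are placeholders for the steps named above, not a definitive implementation.
```python
def face_looks_anomalous(image) -> bool:
    """Stub for the trigger analysis (501), e.g. unusual facial redness."""
    return True

def determine_concentration(spectrum) -> float:
    """Stub for the chemometric model of step 504 (returns g/l)."""
    return 0.8

def monitoring_step(image, spectrum, threshold: float = 0.5) -> bool:
    """Steps 501-507: returns True if vehicle access remains granted."""
    if not face_looks_anomalous(image):                 # no trigger event (501)
        return True
    concentration = determine_concentration(spectrum)   # steps 502-504
    print(f"concentration: {concentration} g/l")        # output (505)
    return concentration < threshold                    # control signal (506/507)

print(monitoring_step(image=None, spectrum=None))       # -> False
```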
  • any steps presented herein can be performed in any order.
  • the methods disclosed herein are not limited to a specific order of these steps. It is also not required that the different steps are performed at a certain place or in a certain computing node of a distributed system, i.e. each of the steps may be performed at different computing nodes using different equipment/data processing.
  • determining also includes “initiating or causing to determine”.
  • generating also includes “initiating and/or causing to generate”.
  • providing also includes “initiating or causing to determine, generate, select, send and/or receive”.
  • “Initiating or causing to perform an action” includes any processing signal that triggers a computing node or device to perform the respective action.
  • Providing in the scope of this disclosure may include any interface configured to provide data. This may include an application programming interface, a human-machine interface such as a display and/or a software module interface. Providing may include communication of data or submission of data to the interface, in particular display to a user or use of the data by the receiving node, entity or interface.
  • Various units, circuits, entities, nodes or other computing components may be described as “configured to” perform a task or tasks. “Configured to” shall recite structure, meaning “having circuitry that” performs the task or tasks during operation. The units, circuits, entities, nodes or other computing components can be configured to perform the task even when the unit/circuit/component is not operating. The units, circuits, entities, nodes or other computing components that form the structure corresponding to “configured to” may include hardware circuits and/or memory storing program instructions executable to implement the operation. The units, circuits, entities, nodes or other computing components may be described as performing a task or tasks, for convenience in the description. Such descriptions shall be interpreted as including the phrase “configured to”. Any recitation of “configured to” is expressly intended not to invoke 35 U.S.C. § 112(f) interpretation.
  • the methods, apparatuses, systems, computer elements, nodes or other computing components described herein may include memory, software components and hardware components.
  • the memory can include volatile memory such as static or dynamic random-access memory and/or nonvolatile memory such as optical or magnetic disk storage, flash memory, programmable read-only memories, etc.
  • the hardware components may include any combination of combinatorial logic circuitry, clocked storage devices such as flops, registers, latches, etc., finite state machines, memory such as static random-access memory or embedded dynamic random-access memory, custom designed circuitry, programmable logic arrays, etc.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Medical Informatics (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Optics & Photonics (AREA)
  • Chemical & Material Sciences (AREA)
  • Physiology (AREA)
  • Psychology (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Pulmonology (AREA)
  • Emergency Medicine (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Combustion & Propulsion (AREA)
  • Social Psychology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Pharmacology & Pharmacy (AREA)
  • Toxicology (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

The invention relates to the field of spectroscopic devices for vehicles. The invention concerns a spectroscopic device intended to be integrated into a vehicle and to determine a concentration of a body substance of a driver of the vehicle, comprising: a) a spectroscopy module for acquiring spectroscopic data of the driver, b) an input for receiving driver data associated with a characteristic of the driver or environmental data associated with a characteristic of the driver's environment, c) a processor for determining the concentration of the body substance of the driver using the spectroscopic data and the driver data and/or the environmental data, and d) an output for outputting the concentration of the body substance of the driver.
PCT/EP2025/058679 2024-04-09 2025-03-31 Dispositif spectroscopique pour véhicules Pending WO2025214794A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP24169177.3 2024-04-09
EP24169177 2024-04-09

Publications (1)

Publication Number Publication Date
WO2025214794A1 true WO2025214794A1 (fr) 2025-10-16

Family

ID=90720056

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2025/058679 Pending WO2025214794A1 (fr) 2024-04-09 2025-03-31 Dispositif spectroscopique pour véhicules

Country Status (1)

Country Link
WO (1) WO2025214794A1 (fr)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7386152B2 (en) * 2003-04-04 2008-06-10 Lumidigm, Inc. Noninvasive alcohol sensor
US7812712B2 (en) * 2006-02-13 2010-10-12 All Protect, Llc Method and system for controlling a vehicle given to a third party
CN104827900A (zh) 2015-05-07 2015-08-12 西北农林科技大学 一种基于近红外光谱的人体酒精无损检测装置
WO2017222618A1 (fr) 2016-06-23 2017-12-28 Apple Inc. Réseau vcsel á émission haute et diffuseur intégré
WO2018091638A1 (fr) 2016-11-17 2018-05-24 Trinamix Gmbh Détecteur pour détecter optiquement au moins un objet
WO2018091640A2 (fr) 2016-11-17 2018-05-24 Trinamix Gmbh Détecteur pouvant détecter optiquement au moins un objet
WO2018091649A1 (fr) 2016-11-17 2018-05-24 Trinamix Gmbh Détecteur destiné à la détection optique d'au moins un objet
US20230204507A1 (en) 2019-06-12 2023-06-29 Automotive Coalition For Traffic Safety, Inc. System for non-invasive measurement of an analyte in a vehicle driver
WO2021105265A1 (fr) 2019-11-27 2021-06-03 Trinamix Gmbh Mesure de profondeur à l'aide d'un dispositif d'affichage
KR20220083289A (ko) 2020-12-11 2022-06-20 정기순 음주운전 방지시스템

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
C. Szegedy et al., “Going deeper with convolutions”, CoRR, abs/1409.4842, 2014
Florian Schroff, Dmitry Kalenichenko, James Philbin, “FaceNet: A Unified Embedding for Face Recognition and Clustering”, arXiv:1503.03832
G. B. Huang, M. Ramesh, T. Berg, E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments”, Technical Report 07-49, University of Massachusetts, October 2007
L. Wolf, T. Hassner, I. Maoz, “Face recognition in unconstrained videos with matched background similarity”, IEEE Conf. on CVPR, 2011

Similar Documents

Publication Publication Date Title
Fernández et al. Driver distraction using visual-based sensors and algorithms
US10583842B1 (en) Driver state detection based on glycemic condition
Kim et al. Lightweight driver monitoring system based on multi-task mobilenets
TWI666566B (zh) 顯示器整合式使用者分類、安全及指紋系統
EP2848196B1 (fr) Appareil pour la photographie des vaisseaux sanguins
CN113015984A (zh) 卷积神经网络中的错误校正
Liu et al. Real time detection of driver fatigue based on CNN‐LSTM
US20200394390A1 (en) Apparatus and method for vehicle driver recognition and applications of same
CN112232163B (zh) 指纹采集方法及装置、指纹比对方法及装置、设备
CN112016525A (zh) 非接触式指纹采集方法和装置
US20220108805A1 (en) Health management apparatus and health management method
CN112232159B (zh) 指纹识别的方法、装置、终端及存储介质
Needham et al. Watch your step! A frustrated total internal reflection approach to forensic footwear imaging
Zhao et al. Detection method of eyes opening and closing ratio for driver's fatigue monitoring
CN114092974B (zh) 身份识别方法、装置、终端以及存储介质
Quiles-Cucarella et al. Multi-index driver drowsiness detection method based on driver’s facial recognition using haar features and histograms of oriented gradients
Ghosal et al. iNAP: a hybrid approach for noninvasive anemia-polycythemia detection in the IoMT
WO2025214794A1 (fr) Dispositif spectroscopique pour véhicules
Andriyanov Application of computer vision systems for monitoring the condition of drivers based on facial image analysis
CN112232157B (zh) 指纹区域检测方法、装置、设备、存储介质
CN112232152B (zh) 非接触式指纹识别方法、装置、终端和存储介质
WO2025045642A1 (fr) Système de reconnaissance biométrique
WO2025252689A1 (fr) Dispositif spectroscopique
WO2025252688A1 (fr) Dispositif spectroscopique
CN212569821U (zh) 非接触式指纹采集装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25714756

Country of ref document: EP

Kind code of ref document: A1