
WO2025045642A1 - Système de reconnaissance biométrique


Info

Publication number
WO2025045642A1
WO2025045642A1 · PCT/EP2024/073268
Authority
WO
WIPO (PCT)
Prior art keywords
image
hand
user
recognition system
biometric recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/EP2024/073268
Other languages
English (en)
Inventor
Stefan Metz
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TrinamiX GmbH
Original Assignee
TrinamiX GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TrinamiX GmbH filed Critical TrinamiX GmbH
Publication of WO2025045642A1
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/141Control of illumination
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143Sensing or illuminating at different wavelengths
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/145Illumination specially adapted for pattern recognition, e.g. using gratings
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/14Vascular patterns

Definitions

  • the invention is in the area of biometric recognition systems.
  • the invention relates to a biometric recognition system for recognizing a user from their hand, a method for recognizing a user from their hand, a use of a recognition signal obtained by the method according to the present invention for authenticating a user, and a non-transient computer-readable medium.
  • Biometric recognition systems often involve a face scan of a user, as the face contains many features which in sum yield a reliable basis for recognition.
  • data privacy concerns may restrict the usage of face authentication.
  • Fingerprint sensors authenticate a user by their individual skin surface profile. This method is less subject to data privacy concerns.
  • however, it requires physical contact with a reader, which may be perceived as unhygienic. It was therefore an object of the invention to provide a biometric recognition system with high reliability, no requirement for physical contact, and less cause for data privacy concerns.
  • WO 2023/156473 A1 discloses a method for determining an access right of a user to a requesting computer device. The method primarily focuses on face authentication, which can be augmented with additional information such as a palm print. However, no material of the palm is determined.
  • the present invention relates to a biometric recognition system for recognizing a user from their hand comprising: a. a projector for projecting light onto the hand of the user, b. a camera for recording an image of the hand, and c. a processor configured to i. extract features of the hand of the user for comparing them to reference features, ii. determine from the image if the hand is made of skin, and iii. generate a recognition signal based on the feature comparison and the skin determination.
  • the present invention further relates to a biometric recognition system for recognizing a user from their eye comprising: a. a projector for projecting light onto the eye of the user, b. a camera for recording an image of the eye, and c. a processor configured to i. extract features of the eye of the user for comparing them to reference features, ii. determine from the image if the surrounding of the eye is made of skin, and iii. generate a recognition signal based on the feature comparison and the skin determination.
  • the present invention relates to a method for recognizing a user from their hand comprising: a. projecting light onto the hand of the user, b. recording an image of the hand, c. extracting features of the hand of the user for comparing them to reference features, d. determining from the image if the hand is made of skin, and e. generating a recognition signal based on the feature comparison and the skin determination.
  • the present invention relates to a method for recognizing a user from their eye comprising: a. projecting light onto the eye of the user, b. recording an image of the eye, c. extracting features of the eye of the user for comparing them to reference features, d. determining from the image if the surrounding of the eye is made of skin, and e. generating a recognition signal based on the feature comparison and the skin determination.
  • the present invention relates to a use of the recognition signal obtained by the method according to the invention for authenticating a user.
  • the present invention relates to a non-transient computer-readable medium including instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps: a. projecting light onto the hand of the user, b. recording an image of the hand, c. extracting features of the hand of the user for comparing them to reference features, d. determining from the image if the hand is made of skin, and e. generating a recognition signal based on the feature comparison and the skin determination.
  • the present invention relates to a non-transient computer-readable medium including instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps: a. projecting light onto the eye of the user, b. recording an image of the eye, c. extracting features of the eye of the user for comparing them to reference features, d. determining from the image if the surrounding of the eye is made of skin, and e. generating a recognition signal based on the feature comparison and the skin determination.
  • the present invention relates to a method for granting a user access to a device or application comprising a. receiving a request for access to the device or application, b. in response to the request, projecting light onto the hand of the user, c. recording an image of the hand, d. extracting features of the hand of the user for comparing them to reference features, e. determining from the image if the hand is made of skin, f. generating a recognition signal based on the feature comparison and the skin determination, and g. granting access to the device or application based on the recognition signal.
  • the present invention relates to a method for granting a user access to a device or application comprising a. receiving a request for access to the device or application, b. in response to the request, projecting light onto the eye of the user, c. recording an image of the eye, d. extracting features of the eye of the user for comparing them to reference features, e. determining from the image if the surrounding of the eye is made of skin, f. generating a recognition signal based on the feature comparison and the skin determination, and g. granting access to the device or application based on the recognition signal.
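  • For illustration only, the steps a. to g. above can be sketched in software as follows; every helper name and the similarity threshold are hypothetical placeholders, not elements of the application:

```python
# Minimal sketch of the recognition flow, assuming hypothetical helpers.
import numpy as np

THRESHOLD = 0.8  # assumed similarity limit for the feature comparison

def extract_features(image: np.ndarray) -> np.ndarray:
    """Placeholder: reduce the recorded image to a fixed-length feature vector."""
    return np.resize(image.astype(np.float32).ravel(), 128)

def is_skin(pattern_image: np.ndarray) -> bool:
    """Placeholder: skin determination from the pattern image."""
    return bool(pattern_image.mean() > 0)

def recognition_signal(image, pattern_image, reference: np.ndarray) -> bool:
    """Combine feature comparison and skin determination into one signal."""
    features = extract_features(image)
    similarity = float(np.dot(features, reference) /
                       (np.linalg.norm(features) * np.linalg.norm(reference) + 1e-12))
    return similarity >= THRESHOLD and is_skin(pattern_image)

# Access is granted only if the recognition signal is positive:
hand = np.random.rand(64, 64)
print("grant access:", recognition_signal(hand, hand, extract_features(hand)))
```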
  • the hand provides a high degree of detail, for example its vein system, wrinkles, or contour, in particular the contour of the fingers, which can reliably provide a basis to differentiate users. This is in particular true if the user is recognized from the palm, as the palm contains many characteristic features individual to each human.
  • a user may be recognized from the eye, in particular from the iris.
  • the iris contains a lot of individual features which can be used to recognize an authorized user.
  • the term “light” may refer to electromagnetic radiation in one or more of the infrared, the visible and the ultraviolet spectral range.
  • the term “ultraviolet spectral range” generally refers to electromagnetic radiation having a wavelength of 1 nm to 380 nm, preferably of 100 nm to 380 nm.
  • the term “visible spectral range” generally refers to a spectral range of 380 nm to 760 nm.
  • IR infrared spectral range
  • NIR near infrared spectral range
  • MidIR mid infrared spectral range
  • FIR far infrared spectral range
  • light used for the typical purposes of the present invention is light in the infrared (IR) spectral range, more preferably in the near infrared (NIR) and/or the mid infrared (MidIR) spectral range, especially light having a wavelength of 1 µm to 5 µm, preferably of 1 µm to 3 µm.
  • the optical biometric recognition system comprises a projector.
  • the term “projector” may refer to a device configured for generating or providing light in the sense of the above-mentioned definition.
  • the projector may be a pattern projector, a flood projector, or both at the same time, or it may repeatedly switch between projecting patterned light and flood light.
  • the term “pattern projector” may refer to a device configured for generating or providing at least one light pattern, in particular at least one infrared light pattern.
  • the term “light pattern” may refer to at least one pattern comprising a plurality of light spots.
  • the light spot may be at least partially spatially extended.
  • At least one spot or any spot may have an arbitrary shape. In some cases a circular shape of at least one spot or any spot may be preferred.
  • the spots may be arranged by considering a structure of a display comprised by a device that is further comprising the optoelectronic apparatus. Typically, an arrangement of an OLED-pixel-structure of the display may be considered.
  • the term “infrared light pattern” may refer to a light pattern comprising spots in the infrared spectral range.
  • the infrared light pattern may be a near infrared light pattern.
  • the infrared light may be coherent.
  • the infrared light pattern may be a coherent infrared light pattern.
  • the pattern projector may be configured for emitting light at a single wavelength, e.g. in the near infrared region. In other embodiments, the pattern projector may be adapted to emit light with a plurality of wavelengths, e.g. for allowing additional measurements in other wavelengths channels.
  • the infrared light pattern may comprise at least one regular and/or constant and/or periodic pattern such as a triangular pattern, a rectangular pattern, a hexagonal pattern or a pattern comprising further convex tilings.
  • the infrared light pattern is a hexagonal pattern, preferably a hexagonal infrared light pattern.
  • the illumination pattern may comprise a number of rows on which the illumination features are arranged in equidistant positions with distance d. The rows may be orthogonal with respect to the epipolar lines. A distance between the rows may be constant. A different offset may be applied to each of the rows in the same direction. The offset may result in the illumination features of a row being shifted.
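  • As an illustration of the row-and-offset construction just described, the following sketch generates such a spot arrangement; all numeric parameters are arbitrary example values, not values from the application:

```python
import numpy as np

def offset_row_pattern(n_rows=20, spots_per_row=50, d=1.0,
                       row_pitch=1.0, offset_step=0.25):
    """Rows of equidistant spots (spacing d), each row shifted by a growing
    offset in the same direction; parameters are illustrative only."""
    rows = []
    for r in range(n_rows):
        offset = (r * offset_step) % d             # per-row shift along the row
        x = offset + d * np.arange(spots_per_row)  # equidistant positions
        y = np.full(spots_per_row, r * row_pitch)  # constant row distance
        rows.append(np.stack([x, y], axis=1))
    return np.concatenate(rows)

pattern = offset_row_pattern()
print(pattern.shape)  # (1000, 2): a pattern of 1000 spots
```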
  • the light pattern may comprise less than 4000 light spots, for example less than 3000 light spots or less than 2000 light spots or less than 1500 light spots or less than 1000 light spots.
  • the light pattern may comprise patterned coherent infrared light of less than 4000 spots or less than 3000 spots or less than 2000 spots or less than 1500 spots or less than 1000 spots.
  • At least one of the infrared light spots may be associated with a beam divergence of 0.2° to 0.5°, preferably 0.1° to 0.3°.
  • beam divergence may refer to at least one measure of an increase in at least one diameter and/or at least one diameter equivalent, such as a radius, with a distance from an optical aperture from which the beam emerges.
  • the measure may be an angle or an angle equivalent.
  • a beam divergence may be determined at 1/e².
  • the projector may comprise at least one pattern projector configured for generating the infrared light pattern.
  • the pattern projector may comprise at least one emitter, in particular a plurality of emitters.
  • the term “emitter” may refer to at least one arbitrary device configured for providing at least one light beam. The light beam may generate the infrared light pattern.
  • the emitter may comprise at least one element selected from the group consisting of at least one laser source such as at least one semi-conductor laser, at least one double heterostructure laser, at least one external cavity laser, at least one separate confinement heterostructure laser, at least one quantum cascade laser, at least one distributed Bragg reflector laser, at least one polariton laser, at least one hybrid silicon laser, at least one extended cavity diode laser, at least one quantum dot laser, at least one volume Bragg grating laser, at least one Indium Arsenide laser, at least one Gallium Arsenide laser, at least one transistor laser, at least one diode pumped laser, at least one distributed feedback laser, at least one quantum well laser, at least one interband cascade laser, at least one semiconductor ring laser, at least one vertical cavity surface emitting laser (VCSEL); at least one non-laser light source such as at least one LED or at least one light bulb.
  • the pattern projector comprises at least one VCSEL, preferably a plurality of VCSELs.
  • the plurality of VCSELs may be arranged in at least one array, e.g. comprising a matrix of VCSELs.
  • the VCSELs may be arranged on the same substrate, or on different substrates.
  • the term “vertical-cavity surface-emitting laser” may refer to a semiconductor laser diode configured for laser beam emission perpendicular with respect to a top surface. Examples for VCSELs can be found e.g. in en.wikipedia.org/wiki/Vertical-cavity_surface-emitting_laser.
  • VCSELs are generally known to the skilled person, e.g. from WO 2017/222618 A1.
  • Each of the VCSELs is configured for generating at least one light beam.
  • the plurality of generated spots may be associated with the infrared light pattern.
  • the VCSELs may be configured for emitting light beams in a wavelength range from 800 to 1000 nm.
  • the VCSELs may be configured for emitting light beams at 808 nm, 850 nm, 940 nm, and/or 980 nm.
  • the VCSELs may emit light at 850 nm or 940 nm, since terrestrial sun radiation has a local minimum in irradiance at these wavelengths, e.g. as described in CIE 085-1989 “Solar Spectral Irradiance”.
  • the pattern projector may emit monochromatic light, for example with a wavelength accuracy of less than or equal to ±2 % or less than or equal to ±1 %.
  • the wavelength accuracy may be the maximum difference of emitted wavelength relative to the mean wavelength.
  • the pattern projector may emit light of a range of wavelengths, for example with a wavelength accuracy of more than ±5 % or more than ±10 %.
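  • The accuracy definition above can be checked numerically; the wavelength readings below are invented example values:

```python
import numpy as np

wavelengths_nm = np.array([938.0, 940.0, 941.5])  # invented emitter readings
mean = wavelengths_nm.mean()
accuracy = np.abs(wavelengths_nm - mean).max() / mean  # max deviation / mean
print(f"wavelength accuracy: +/- {accuracy:.2%}")  # ~0.20 %, i.e. monochromatic by the criterion above
```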
  • the pattern projector may comprise at least one optical element configured for increasing, e.g. duplicating, the number of spots generated by the pattern projector.
  • the pattern projector, particularly the optical element, may comprise at least one diffractive optical element (DOE) and/or at least one metasurface element.
  • DOE diffractive optical element
  • the DOE and/or the metasurface element may be configured for generating multiple light beams from a single incoming light beam. Further arrangements, particularly comprising a different number of projecting VCSELs and/or at least one different optical element configured for increasing the number of spots, may be possible. Other multiplication factors are possible. For example, a VCSEL or a plurality of VCSELs may be used and the generated laser spots may be duplicated by using at least one DOE.
  • the pattern projector may comprise at least one transfer device.
  • the term “transfer device”, also denoted as “transfer system”, may refer to one or more optical elements which are adapted to modify the light beam, particularly the light beam used for generating at least a portion of the infrared light pattern, such as by modifying one or more of a beam parameter of the light beam, a width of the light beam or a direction of the light beam.
  • the transfer device may comprise at least one imaging optical device.
  • the transfer device specifically may comprise one or more of: at least one lens, for example at least one lens selected from the group consisting of at least one focus-tunable lens, at least one aspheric lens, at least one spherical lens, at least one Fresnel lens; at least one diffractive optical element; at least one concave mirror; at least one beam deflection element, preferably at least one mirror; at least one beam splitting element, preferably at least one of a beam splitting cube or a beam splitting mirror; at least one multilens system; at least one holographic optical element; at least one meta optical element.
  • the transfer device comprises at least one refractive optical lens stack.
  • the transfer device may comprise a multi-lens system having refractive properties.
  • the pattern projector may be configured for emitting modulated or non-modulated light.
  • the different emitters may have different modulation frequencies, e.g. which can be used for distinguishing the light beams.
  • the pattern projector may be configured for emitting polarized or non-polarized light.
  • Polarized light may be linearly polarized, circularly polarized or elliptically polarized.
  • the pattern projector may contain a polarization filter to emit polarized light.
  • the light beam or light beams generated by the pattern projector may propagate parallel to an optical axis.
  • the pattern projector may comprise at least one reflective element, preferably at least one prism, for deflecting the illuminating light beam onto the optical axis.
  • the light beam or light beams, such as the laser light beam, and the optical axis may enclose an angle of less than 10°, preferably less than 5° or even less than 2°. Other embodiments, however, are feasible. Further, the light beam or light beams may be on the optical axis or off the optical axis.
  • the light beam or light beams may be parallel to the optical axis, having a distance of less than 10 mm to the optical axis, preferably less than 5 mm or even less than 1 mm, or may even coincide with the optical axis.
  • the term “flood projector” may refer to at least one device configured for providing substantially continuous spatial illumination.
  • the flood projector may illuminate a measurement area, such as a user, a portion of the user, a hand of the user, and/or a surrounding of the eye of the user, with a spatially constant or essentially constant illumination intensity.
  • the term “flood light” may refer to substantially continuous spatial illumination, in particular diffuse and/or uniform illumination.
  • the flood light may have a wavelength in the infrared spectral range, in particular in the near infrared range, or in the visible spectral range.
  • the flood light may be monochromatic light, for example with a wavelength accuracy of less than or equal to ±2 % or less than or equal to ±1 %.
  • the wavelength accuracy may be the maximum difference of emitted wavelength relative to the mean wavelength.
  • the flood light may be light of a range of wavelengths, for example with a wavelength accuracy of more than ±5 % or more than ±10 %.
  • a relative distance between the flood projector and the pattern projector may be below 3.0 mm.
  • the relative distance between the flood projector and the pattern projector may be below 2.5 mm, preferably below 2.0 mm.
  • the pattern projector and the flood projector may be combined into one module.
  • the pattern projector and the flood projector may be arranged on the same substrate, in particular having a minimum relative distance.
  • the minimum relative distance may be defined by a physical extension of the flood projector and the pattern projector.
  • Arranging the pattern projector and the flood projector having a relative distance below 3.0 mm can result in decreased space requirement of the two projectors.
  • said projectors can even be combined into one module. Such a reduced space requirement can allow reducing the transparent area(s) in a display necessary for operation of the projector(s) behind the display.
  • the projector is positioned such that it can project light through the transparent display. Hence, light emitted by the projector crosses the transparent display before it impinges on the user. From the user’s view, the projector is placed behind the transparent display.
  • the capturing and/or generating and/or determining and/or recording of the image may be caused and/or initiated by the hardware and/or the software interface.
  • the image generation may comprise recording continuously a sequence of images such as a video or a movie.
  • the image generation may be initiated by a user action or may automatically be initiated, e.g. once the presence of at least one object or user within a field of view and/or within a predetermined sector of the field of view of the camera is automatically detected.
  • the camera may comprise at least one optical sensor, in particular at least one pixelated optical sensor.
  • the camera may comprise at least one CMOS sensor or at least one CCD chip.
  • the camera may comprise at least one CMOS sensor, which may be sensitive in the infrared spectral range.
  • the term “image” may refer to data recorded by using the optical sensor, such as a plurality of electronic readings from the CMOS or CCD chip.
  • the image may comprise raw image data or may be a pre-processed image.
  • the pre-processing may comprise applying at least one filter to the raw image data and/or at least one background correction and/or at least one background subtraction.
  • the camera may comprise a color camera, e.g. comprising at least color pixels.
  • the camera may comprise a color CMOS camera.
  • the camera may comprise black and white pixels and color pixels.
  • the color pixels and the black and white pixels may be combined internally in the camera.
  • the camera may comprise a color camera (e.g. RGB) or a black and white camera, such as a black and white CMOS.
  • the camera may comprise a black and white CMOS chip.
  • the camera generally may comprise a one-dimensional or two-dimensional array of image sensors, such as pixels.
  • the color camera may be an internal and/or external camera of a device comprising the optoelectronic apparatus.
  • the internal and/or external camera of the device may be accessed via a hardware and/or a software interface comprised by the optoelectronic apparatus, which is used as the camera.
  • the device is or comprises a smartphone
  • the image generating unit may be a front camera, such as a selfie camera, and/or back camera of the smartphone.
  • the camera may have a field of view between 10°x10° and 75°x75°, preferably 55°x65°.
  • the camera may have a resolution below 2 megapixels (MP), for example between 0.3 MP and 1.5 MP, or the camera may have a resolution of 2 to 4 MP or 3 to 5 MP or 6 to 10 MP or more than 10 MP.
  • the camera may comprise further elements, such as one or more optical elements, e.g. one or more lenses.
  • the optical sensor may be a fix-focus camera, having at least one lens which is fixedly adjusted with respect to the camera.
  • the camera may also comprise one or more variable lenses which may be adjusted, automatically or manually.
  • Other cameras are feasible.
  • the distance between hand and camera may be within a range of 0 to 100 cm, for example 0 to 50 cm or 5 to 25 cm or 5 to 15 cm.
  • the distance between eye and camera may be within a range of 0 to 100 cm, for example 10 to 80 cm or 15 to 50 cm.
  • the biometric recognition system may contain one camera or it may contain more than one camera, for example two cameras.
  • the biometric recognition system may contain two cameras, wherein the two cameras are sensitive in different wavelength regions.
  • a first camera may be sensitive in the visible spectral range and a second camera may be sensitive in the infrared range.
  • the first camera may record an RGB image of the hand or iris, while the second camera records an image in the infrared spectral range.
  • the second camera may record a pattern image.
  • the term “pattern image” may refer to an image generated by the camera while the infrared light pattern is projected, e.g. onto an object and/or a user.
  • the pattern image may comprise an image showing a user, in particular at least parts of a hand of the user or a surrounding of the eye of the user, while the user is being illuminated with the infrared light pattern, particularly on a respective area of interest comprised by the image.
  • the pattern image may be generated by imaging and/or recording light reflected by an object and/or user which is illuminated by the infrared light pattern.
  • the pattern image showing the user may comprise at least a portion of the illuminated infrared light pattern on at least a portion of the user.
  • the illumination by the pattern illumination source and the imaging by using the optical sensor may be synchronized, e.g. by using at least one control unit of the optoelectronic apparatus.
  • the term “flood image” may refer to an image generated by the camera while the illumination source is projecting infrared flood light, e.g. onto an object and/or a user.
  • the flood image may comprise an image showing a user, in particular a hand of the user or a surrounding of the eye of the user, while the user is being illuminated with the flood light.
  • the flood image may be generated by imaging and/or recording light reflected by an object and/or user which is illuminated by the flood light.
  • the flood image showing the user may comprise at least a portion of the flood light on at least a portion of the user.
  • the illumination by the flood illumination source and the imaging by using the optical sensor may be synchronized, e.g. by using at least one control unit of the optoelectronic apparatus.
  • the camera may be configured for imaging and/or recording the pattern image and the flood image at the same time or at different times.
  • the camera may be configured for imaging and/or recording the pattern image and the flood image at measurement areas that at least partially overlap, or at equivalents of the measurement areas.
  • the biometric recognition system may comprise a transparent display.
  • the camera or the projector may be placed behind the transparent display in order to maximize the display area of a device.
  • the term “display” may refer to an arbitrary shaped device configured for displaying an item of information.
  • the item of information may be arbitrary information such as at least one image, at least one diagram, at least one histogram, at least one graphic, text, numbers, at least one sign, or an operating menu.
  • the display may be or may comprise at least one screen.
  • the display may have an arbitrary shape, e.g. a rectangular shape.
  • the display may be a front display of the device.
  • the display may be or may comprise at least one light-emitting diode (LED) display, in particular at least one organic light-emitting diode (OLED) display or a micro light-emitting diode (µLED) display.
  • LED light-emitting diode
  • OLED organic light-emitting diode
  • µLED micro light-emitting diode
  • the term “organic light-emitting diode” may refer to a light-emitting diode (LED) in which an emissive electroluminescent layer is a film of organic compound configured for emitting light in response to an electric current.
  • the OLED display may be configured for emitting visible light.
  • the term “micro light-emitting diode” may refer to a light-emitting diode (LED) with a pixel size of less than 15 µm or less than 10 µm or less than 5 µm. The distance between pixels in a µLED display is often larger than the pixel size, for example equal to or greater than 15 µm.
  • the display, particularly a display area, may be covered by glass. In particular, the display may comprise at least one glass cover.
  • the transparent display is at least partially transparent.
  • the term “at least partially transparent” may refer to a property of the display to allow light, in particular of a certain wavelength range, e.g. in the infrared spectral region, in particular in the near infrared spectral region, to pass at least partially through.
  • the display may be semitransparent in the near infrared region.
  • the display may have a transparency of 20 % to 50 % in the near infrared region.
  • the display may have a different transparency for other wavelength ranges.
  • the display may have a transparency of 30 to 60 % or 40 to 70 % or 50 to 80 % or more than 80 % for the visible spectral range.
  • the transparent display may be at least partially transparent over the entire display area or only parts thereof. Typically, it is sufficient if only those parts of the display area are at least partially transparent through which light needs to pass from the projector or to the camera.
  • the display comprises a display area.
  • the term “display area” may refer to an active area of the display, in particular an area which is activatable.
  • the display may have additional areas such as recesses or cutouts.
  • the display may have a first area associated with a first pixel per inch (PPI) value and a second area associated with a second PPI value.
  • the first PPI value may be lower than the second PPI value; preferably, the first PPI value is equal to or below 400 PPI, and more preferably the second PPI value is equal to or higher than 300 PPI.
  • the first PPI value may be associated with the at least one continuous area being at least partially transparent.
  • Biometric recognition may comprise identifying the user based on the flood image.
  • the term “identifying” may refer to identity check and/or verifying an identity of the user.
  • the identifying of the user may comprise analyzing the flood image.
  • the analyzing of the flood image may comprise verifying that the imaged hand is the user’s hand.
  • the identifying the user may comprise matching the flood image, e.g. showing a contour of parts of the user, in particular parts of the user’s hand, with a template.
  • Determining if the imaged hand is the hand of the user may comprise identifying the user, in particular determining if the imaged hand corresponds to at least one image of the user’s hand stored in at least one memory, e.g. of the device.
  • the analyzing may comprise one or more of the following: a filtering; a selection of at least one region of interest; a formation of a difference image between the flood image and at least one offset; an inversion of the flood image; a background correction; a decomposition into color channels; a decomposition into hue, saturation, and brightness channels; a frequency decomposition; a singular value decomposition; applying a Canny edge detector; applying a Laplacian of Gaussian filter; applying a Difference of Gaussian filter; applying a Sobel operator; applying a Laplace operator; applying a Scharr operator; applying a Prewitt operator; applying a Roberts operator; applying a Kirsch operator; applying a high-pass filter; applying a low-pass filter; applying a Fourier transformation; applying a Radon transformation; applying a Hough transformation; applying a wavelet transformation; a thresholding; creating a binary image.
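  • As a minimal example of one operation from this list, a Sobel gradient magnitude (e.g. as a precursor to contour extraction) could be computed as follows:

```python
import numpy as np
from scipy import ndimage

def sobel_magnitude(flood_image: np.ndarray) -> np.ndarray:
    """Gradient magnitude via the Sobel operator, one of the listed steps."""
    img = flood_image.astype(float)
    gx = ndimage.sobel(img, axis=1)  # horizontal gradient
    gy = ndimage.sobel(img, axis=0)  # vertical gradient
    return np.hypot(gx, gy)

edges = sobel_magnitude(np.random.rand(64, 64))
print(edges.shape)  # (64, 64)
```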
  • the region of interest may be determined manually by a user or may be determined automatically, such as by recognizing the user within the image.
  • the analyzing of the flood image may comprise using at least one image recognition technique, in particular a hand or eye recognition technique.
  • An image recognition technique comprises at least one process of identifying the user in an image.
  • the image recognition may comprise using at least one technique selected from the group consisting of: color-based image recognition, e.g. using features such as hue, saturation, and value (HSV) or red, green, blue (RGB); template matching, for example as illustrated on https://www.mathworks.com/help/vision/ug/pattern-matching.html; and image segmentation and/or blob analysis.
  • the neural network may be trained by the user, such as in a training procedure in which the user is instructed to take at least one or a plurality of pictures of themselves.
  • the analyzing of the flood image may comprise determining a plurality of hand or eye features.
  • the analyzing may comprise comparing, in particular matching, the determined hand or eye features with template features.
  • the template features may be features extracted from at least one template.
  • the template may be or may comprise at least one image generated in an enrollment process, e.g. when initializing the authentication system. The template may be an image of an authorized user.
  • the template features and/or the hand or eye feature may comprise a vector.
  • Matching of the features may comprise determining a distance between the vectors.
  • the identifying of the user may comprise comparing the distance of the vectors to at least one predefined limit, wherein the user is successfully identified in case the distance is smaller than or equal to the predefined limit, at least within tolerances. Otherwise, the user is declined and/or rejected.
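  • A sketch of this distance-to-limit test, assuming a Euclidean metric and an arbitrary limit value (both are illustrative choices, not values from the application):

```python
import numpy as np

def identify(features: np.ndarray, template: np.ndarray,
             limit: float = 1.0) -> bool:
    """Identified if the vector distance is within the predefined limit;
    metric and limit value are illustrative assumptions."""
    return float(np.linalg.norm(features - template)) <= limit

template = np.random.rand(128)
probe = template + 0.01 * np.random.randn(128)  # a close match
print(identify(probe, template))                # True; otherwise rejected
```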
  • the image recognition may comprise using at least one model, in particular a trained model comprising at least one hand or eye recognition model.
  • the analyzing of the flood image may be performed by using a hand or eye recognition system, such as FaceNet, e.g. as described in Florian Schroff, Dmitry Kalenichenko, James Philbin, “FaceNet: A Unified Embedding for Face Recognition and Clustering”, arXiv: 1503.03832.
  • the trained model may comprise at least one convolutional neural network.
  • the convolutional neural network may be designed as described in M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks”, CoRR, abs/1311.2901, 2013, or C.
  • labeled faces may be used from one or more of G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments”, Technical Report 07-49, University of Massachusetts, Amherst, October 2007, the Youtube® Faces Database as described in L. Wolf, T. Hassner, and I. Maoz, “Face recognition in unconstrained videos with matched background similarity”, in IEEE Conf. on CVPR, 2011, or the Google® Facial Expression Comparison dataset.
  • the training of the convolutional neural network may be performed as described in Florian Schroff, Dmitry Kalenichenko, James Philbin, “FaceNet: A Unified Embedding for Face Recognition and Clustering”, arXiv: 1503.03832.
  • Image artifacts caused by diffraction of the light when passing the transparent display may be corrected.
  • the term “correct” may mean partially or fully removing the artifacts, or tagging them so they can be excluded from further processing, in particular from determining if the imaged user is an authorized user.
  • Correcting image artifacts may take into account the information about the transparent display, in particular the dimensions of the pixels or the distance of repeating features to each other. This information can facilitate identifying artifacts as diffraction patterns can be calculated and compared to the image.
  • Correcting image artifacts may comprise identifying reflection features, sorting them by brightness and selecting the locally brightest features.
  • the information about the transparent display may be used; in particular, a distance in the image by which a light beam may be displaced by diffraction on the transparent display may be calculated based on that information. This method can be particularly useful for pattern images. Further details are disclosed in WO 2021/105265 A1.
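  • The brightness-based selection described above might be sketched as follows; the window size is a hypothetical stand-in for a value derived from the display geometry:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def locally_brightest_features(pattern_image: np.ndarray, window: int = 15):
    """Keep only reflection features that are the brightest within a local
    window, discarding dimmer diffraction replicas around them."""
    local_max = maximum_filter(pattern_image, size=window)
    keep = (pattern_image == local_max) & (pattern_image > 0)
    return np.argwhere(keep)  # (row, col) coordinates of selected features
```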
  • Biometric recognition comprises determining material. Material determination may be based on the pattern image. Particularly by considering the material as a parameter for validating the authentication process, the authentication process may be made robust against being spoofed by a recorded image of the user.
  • Extracting the material data from the pattern image may be performed by beam profile analysis of the light spots.
  • for beam profile analysis, reference is made to WO 2018/091649 A1, WO 2018/091638 A1 and WO 2018/091640 A1, the full content of which is included by reference.
  • Beam profile analysis can allow for providing a reliable classification of scenes based on a few light spots.
  • Each of the light spots of the pattern image may comprise a beam profile.
  • the term “beam profile” may generally refer to at least one intensity distribution of the light spot on the optical sensor as a function of the pixel.
  • the beam profile may be selected from the group consisting of a trapezoid beam profile; a triangle beam profile; a conical beam profile and a linear combination of Gaussian beam profiles.
  • extracting material data from the pattern image may comprise generating the material type and/or data derived from the material type.
  • extracting material data may be based on the pattern image.
  • Material data may be extracted by using at least one material model. Extracting material data may include providing the pattern image to a material model and/or receiving material data from the material model.
  • providing the image to a material model may comprise, and may be followed by, receiving the pattern image at an input layer of the material model or via a material model loss function.
  • the material model may be a data-driven model.
  • the data-driven model may comprise a convolutional neural network and/or an encoder-decoder structure such as an autoencoder.
  • generating a representation may be based on FFT, wavelets, or deep learning approaches such as CNNs, energy-based models, normalizing flows, GANs, vision transformers or transformers used for natural language processing, autoregressive image modeling, and deep autoencoders.
  • supervised or unsupervised schemes may be applicable to generate a representation, also called an embedding, e.g. in a cosine or Euclidean metric in ML language.
  • the data-driven model may be parametrized according to a training data set including at least one image and material data, preferably at least one pattern image and material data.
  • extracting material data may include providing the image to a material model and/or receiving material data from the material model.
  • the data-driven model may be trained according to a training data set including at least one image and material data.
  • the data-driven model may be parametrized according to a training data set including at least one image and material data.
  • the data-driven model may be parametrized according to a training data set to receive the image and provide material data based on the received image.
  • the data-driven model may be trained according to a training data set to receive the image and provide material data as output based on the received image.
  • the training data set may comprise at least one image and material data, preferably material data associated with the at least one image.
  • the image may comprise a representation of the image.
  • the representation may be a lower dimensional representation of the image.
  • the representation may comprise at least a part of the data or the information associated with the image.
  • the representation of an image may comprise a feature vector.
  • determining a representation, in particular a lower-dimensional representation may be based on principal component analysis (PCA) mapping or radial basis function (RBF) mapping. Determining a representation may also be referred to as generating a representation.
  • PCA principal component analysis
  • RBF radial basis function
  • Generating a representation based on PCA mapping may include clustering based on features in the pattern image and/or partial image. Additionally or alternatively, generating a representation may be based on neural network structures suitable for reducing dimensionality. Neural network structures suitable for reducing dimensionality may comprise an encoder and/or a decoder. In an example, the neural network structure may be an autoencoder. In an example, the neural network structure may comprise a convolutional neural network (CNN). The CNN may comprise at least one convolutional layer and/or at least one pooling layer. CNNs may reduce the dimensionality of a partial image and/or an image by applying a convolution, e.g. based on a convolutional layer, and/or by pooling. Applying a convolution may be suitable for selecting features related to material information of the pattern image.
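  • A lower-dimensional representation via PCA, as mentioned above, might look like this; the patch data and the component count are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

crops = np.random.rand(200, 32 * 32)       # stand-in for flattened image patches
pca = PCA(n_components=16)                 # 16 components: an arbitrary choice
representation = pca.fit_transform(crops)  # shape (200, 16) feature vectors
print(representation.shape)
```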
  • a material model may be suitable for determining an output based on an input.
  • material model may be suitable for determining material data based on an image as input.
  • a material model may be a deterministic model, a data-driven model or a hybrid model.
  • the deterministic model preferably reflects physical phenomena in mathematical form, e.g. including first-principles models.
  • a deterministic model may comprise a set of equations that describe an interaction between the material and the patterned electromagnetic radiation thereby resulting in a condition measure, a vital sign measure or the like.
  • a data-driven model may be a classification model.
  • a hybrid model may be a classification model comprising at least one machine-learning architecture with deterministic or statistical adaptations and model parameters.
  • the data-driven model may be a classification model.
  • the classification model may comprise at least one machine-learning architecture and model parameters.
  • the machine-learning architecture may be or may comprise one or more of: linear regression, logistic regression, random forest, piecewise linear or nonlinear classifiers, support vector machines, naive Bayes classification, nearest neighbors, neural networks, convolutional neural networks, generative adversarial networks, or gradient boosting algorithms or the like.
  • the material model can be a multi-scale neural network or a recurrent neural network (RNN) such as, but not limited to, a gated recurrent unit (GRU) recurrent neural network or a long short-term memory (LSTM) recurrent neural network.
  • RNN recurrent neural network
  • the data-driven model may be parametrized according to a training data set.
  • the data-driven model may be trained based on the training data set. Training the material model may include parametrizing the material model.
  • the term training may also be denoted as learning.
  • the term specifically may refer to a process of building the classification model, in particular determining and/or updating parameters of the classification model. Updating parameters of the classification model may also be referred to as retraining. Retraining may be included when referring to training herein.
  • the training data set may include at least one image and material information.
  • extracting material data from the image with a data-driven model may comprise providing the image to the data-driven model. Additionally or alternatively, extracting material data from the image with a data-driven model may comprise generating an embedding associated with the image based on the data-driven model. An embedding may refer to a lower-dimensional representation associated with the image, such as a feature vector.
  • the validating based on the extracted material data may comprise determining if the extracted material data corresponds to desired material data. Determining if the extracted material data matches the desired material data may be referred to as validating. Allowing or declining the user and/or object to perform at least one operation on the device that requires authentication based on the material data may comprise validating the authentication or authentication process. Validating may be based on the material data and/or the image. Determining if the extracted material data corresponds to the desired material data may comprise determining a similarity of the extracted material data and the desired material data. Determining a similarity of the extracted material data and the desired material data may comprise comparing the extracted material data with the desired material data. Desired material data may refer to predetermined material data.
  • the authentication process or its validation may include generating at least one feature vector from the material data and matching the material feature vector with an associated reference template vector for the material.
  • Biometric recognition may comprise liveness detection, i.e. determining if the hand corresponds to a living human.
  • the user may be illuminated with coherent patterned infrared illumination, and determining if the hand corresponds to a living human may comprise determining a blood perfusion measure based on the pattern image.
  • Determining a blood perfusion measure may comprise determining a speckle contrast of the pattern image and determining a blood perfusion measure based on the determined speckle contrast.
  • a speckle contrast may represent a measure for a mean contrast of an intensity distribution within an area of a speckle pattern.
  • a speckle contrast K over an area of the speckle pattern may be expressed as the ratio of the standard deviation σ to the mean speckle intensity ⟨I⟩, i.e., K = σ/⟨I⟩.
  • Speckle contrast may comprise a speckle contrast value. Speckle contrast values may be distributed between 0 and 1.
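  • Computed per local patch, the speckle contrast K = σ/⟨I⟩ of a pattern image could be obtained as follows; the patch size is an illustrative choice:

```python
import numpy as np

def speckle_contrast(image: np.ndarray, patch: int = 7) -> np.ndarray:
    """Local speckle contrast K = sigma / <I> over non-overlapping patches."""
    h = (image.shape[0] // patch) * patch
    w = (image.shape[1] // patch) * patch
    tiles = (image[:h, :w]
             .reshape(h // patch, patch, w // patch, patch)
             .swapaxes(1, 2))                 # (rows, cols, patch, patch)
    mean = tiles.mean(axis=(2, 3))
    std = tiles.std(axis=(2, 3))
    return std / np.maximum(mean, 1e-12)      # values between 0 and 1

K = speckle_contrast(np.random.rand(70, 70))
print(K.shape)  # (10, 10) map of local contrast values
```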
  • the blood perfusion measure is determined based on the speckle contrast. Thus, the blood perfusion measure may depend on the determined speckle contrast. If the speckle contrast changes, the blood perfusion measure derived from the speckle contrast may change accordingly.
  • a blood perfusion measure may be a single number or value that may represent a likelihood that the object is a living subject.
  • the complete pattern image may be used.
  • a section of the pattern image may be used. The section of the pattern image, preferably, represents a smaller area of the pattern image than an area of the complete pattern image. The section of the pattern image may be obtained by cropping the pattern image.
  • a data-driven model may be used for determining a blood perfusion measure.
  • the data-driven model may be parametrized and/or trained based on a training data set.
  • the training data set may comprise a pattern image and a blood perfusion measure.
  • the data-driven model may be parametrized and/or trained based on the training data set to output a blood perfusion measure based on receiving a pattern image.
  • a recognition signal may be generated for authentication based on the authentication result, i.e. authenticating the user in case the user can be identified and/or if the material data matches the desired material data.
  • the device may comprise at least one authorization unit configured for allowing the user to perform at least one operation on the device, e.g. unlocking the device, in case of successful authentication of the user or declining the user to perform at least one operation on the device in case of non-successful authentication. Thereby, the user may become aware of the result of the authentication.
  • the biometric recognition system may be integrated into a portable device, for example a smartphone, a tablet computer, a laptop computer or a smartwatch; a point-of-sale terminal, for example a payment terminal; or an access control system, for example a system to control access to a building or a vehicle.
  • the biometric recognition system may be integrated in a payment terminal and at the same time in an access control system, for example to grant access in response to a payment which is authorized by the biometric recognition system.
  • the present invention further relates to a non-transient computer-readable medium including instructions that, when executed by one or more processors, cause the one or more processors to perform the method according to the present invention.
  • computer-readable data medium may refer to any suitable data storage device or computer readable memory on which is stored one or more sets of instructions (for example software) embodying any one or more of the methodologies or functions described herein.
  • the instructions may also reside, completely or at least partially, within the main memory and/or within the processor during execution thereof by the computer; the main memory and the processing device may thus constitute computer-readable storage media.
  • the instructions may further be transmitted or received over a network via a network interface device.
  • Computer-readable data media include hard drives, for example on a server, USB storage devices, CDs, DVDs or Blu-ray discs.
  • the computer program may contain all functionalities and data required for execution of the method according to the present invention or it may provide interfaces to have parts of the method processed on remote systems, for example on a cloud system.
  • the invention further relates to a method for granting a user access to a device or application.
  • a device can be a mobile device, for example a smartphone, a tablet computer, a laptop computer or a smartwatch, or it can be a stationary device such as a payment terminal or an access control system, for example to control access to a building, a subway train station, an airport gate, a production facility, a car rental site, an amusement park, a cinema, or a supermarket for registered customers.
  • the access control system may further be integrated into a vehicle, for example a car, a train, an airplane, or a ship.
  • An application may refer to a local program, for example installed on a smartphone or a laptop, or a remote service, for example a service on a cloud system accessed via the internet. The application may serve several purposes, for example to authorize a payment, to identify the user for a transaction with the public administration, for example to renew a driver’s license, or to authorize the user for high-security communication.
  • Figure 1 illustrates an example for a biometric recognition system.
  • Figure 2 illustrates an embodiment of the method of the present invention.
  • Figure 3 illustrates an access control system containing a biometric recognition system.
  • the processor 103 may be a microcontroller, i.e. containing memory and IO controller functionalities, or it may be a CPU which is connected to memory and IO controllers.
  • the processor 103 may execute program code which determines if the user of the hand 110 is an authorized user. Such determination may involve vectorizing the image into features. The feature vector may be compared to a stored template. If the difference between the feature vector and the stored template is below a predefined threshold, the processor may determine that the user of the hand 110 is authorized.
  • the processor 103 may further determine if the image really shows a human hand rather than a spoofing hand. This may be accomplished by classifying the material of the hand in the image by evaluating reflection characteristics in the reflected light. If no skin is detected, the processor may determine that the user of the hand 110 is not authorized.
  • the processor 103 may be communicatively coupled to memory 104.
  • the memory 104 may be transient memory, for example random access memory (RAM), or persistent memory, for example flash memory.
  • RAM random access memory
  • the memory 104 may contain program code configured to determine if the user of the hand 110 is an authorized person, as well as templates for registered authorized users.
  • the biometric recognition system 100 may contain a display 106.
  • the display 106 may be transparent for the projected light 111 and the reflected light 112, such that the projector 101 and the camera 102 may be placed behind the display 106.
  • the display 106 may be transparent only at the positions at which the projected light 111 and the reflected light 112 pass the display 106. Transparent may mean that at least 30 % or at least 50 % of the incident light passes through the transparent display 106.
  • Figure 2 illustrates an embodiment of the method of the present invention.
  • Light may be projected onto the hand of a user (201).
  • the light may be infrared light, for example with a wavelength of 850 nm or 940 nm, which is invisible to the user.
  • the projected light may be floodlight or patterned light, for example a hexagonal point pattern.
  • the projected light may impinge on the complete hand of the user including the fingers or on parts of it.
  • An image of the hand may be recorded (202).
  • the image may be recorded when the hand is under illumination of the light projected in step 201.
  • the image is processed by extracting features and comparing these features with a reference (203). Extracting features may involve applying a trained model, for example a convolutional neural network (CNN).
  • CNN convolutional neural network
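  • Feature extraction with a trained CNN, as in step 203, could for instance reuse an off-the-shelf backbone truncated before its classifier; ResNet-18 here is an arbitrary illustrative choice, not the network of the application:

```python
import torch
import torchvision.models as models

backbone = models.resnet18(weights=None)  # illustrative backbone only
backbone.fc = torch.nn.Identity()         # drop the classifier, keep embeddings
backbone.eval()

with torch.no_grad():
    hand_image = torch.rand(1, 3, 224, 224)  # stand-in for a recorded image
    features = backbone(hand_image)          # 512-dimensional feature vector
print(features.shape)  # torch.Size([1, 512])
```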
  • Figure 3 illustrates an access control system 300.
  • the access control system 300 may allow access to a restricted area, for example a subway train station, an airport gate, a production facility, an office building, a car rental site, an amusement park, a cinema, or a supermarket for registered customers.
  • the access control system 300 may have an access gate 301 which only opens when an authorized user is recognized.
  • the user may hold his hand 311 close to the biometric recognition system 302.
  • the biometric recognition system 302 may have a projector which projects light 304 onto the user’s hand 311 and may record with a camera the light reflected by the user’s hand 311.
  • the biometric recognition system 302 may generate a signal which is transferred to the access gate 301, whereupon the access gate 301 may open.
  • the access control system 300 may comprise a display 303 which may guide the user and may display the authorization results.
  • the biometric recognition system 302 may be placed behind the display 303, which may be a transparent display.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Security & Cryptography (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Vascular Medicine (AREA)
  • Human Computer Interaction (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention is in the field of biometric recognition. It relates to a biometric recognition system for recognizing a user from their hand, comprising: a. a projector for projecting light onto the hand of the user, b. a camera for recording an image of the hand, and c. a processor configured to i. extract features of the hand of the user for comparing them to reference features, ii. determine from the image if the hand is made of skin, and iii. generate a recognition signal based on the feature comparison and the skin determination.
PCT/EP2024/073268 · Priority date 2023-08-25 · Filing date 2024-08-20 · Système de reconnaissance biométrique · Pending · WO2025045642A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
EP23193428 2023-08-25
EP23193428.2 2023-08-25
EP23196242 2023-09-08
EP23196242.4 2023-09-08

Publications (1)

Publication Number Publication Date
WO2025045642A1 true WO2025045642A1 (fr) 2025-03-06

Family

ID=92424110

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2024/073268 Pending WO2025045642A1 (fr) 2023-08-25 2024-08-20 Système de reconnaissance biométrique

Country Status (1)

Country Link
WO (1) WO2025045642A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240386747A1 (en) * 2023-05-18 2024-11-21 Ford Global Technologies, Llc Scene authentication

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017222618A1 (fr) 2016-06-23 2017-12-28 Apple Inc. Réseau vcsel á émission haute et diffuseur intégré
WO2018091638A1 (fr) 2016-11-17 2018-05-24 Trinamix Gmbh Détecteur pour détecter optiquement au moins un objet
WO2021105265A1 (fr) 2019-11-27 2021-06-03 Trinamix Gmbh Mesure de profondeur à l'aide d'un dispositif d'affichage
WO2023156473A1 (fr) 2022-02-15 2023-08-24 Trinamix Gmbh Procédé de détermination d'un droit d'accès d'un utilisateur, demande de dispositif informatique, dispositif informatique d'authentification et système d'authentification

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017222618A1 (fr) 2016-06-23 2017-12-28 Apple Inc. Réseau vcsel á émission haute et diffuseur intégré
WO2018091638A1 (fr) 2016-11-17 2018-05-24 Trinamix Gmbh Détecteur pour détecter optiquement au moins un objet
WO2018091640A2 (fr) 2016-11-17 2018-05-24 Trinamix Gmbh Détecteur pouvant détecter optiquement au moins un objet
WO2018091649A1 (fr) 2016-11-17 2018-05-24 Trinamix Gmbh Détecteur destiné à la détection optique d'au moins un objet
WO2021105265A1 (fr) 2019-11-27 2021-06-03 Trinamix Gmbh Mesure de profondeur à l'aide d'un dispositif d'affichage
WO2023156473A1 (fr) 2022-02-15 2023-08-24 Trinamix Gmbh Procédé de détermination d'un droit d'accès d'un utilisateur, demande de dispositif informatique, dispositif informatique d'authentification et système d'authentification

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ONG MICHAEL G K ET AL: "Touch-less palm print biometrics: Novel design and implementation", IMAGE AND VISION COMPUTING, ELSEVIER, GUILDFORD, GB, vol. 26, no. 12, 1 December 2008 (2008-12-01), pages 1551 - 1560, XP025470870, ISSN: 0262-8856, [retrieved on 20080705], DOI: 10.1016/J.IMAVIS.2008.06.010 *
ONG MICHEAL ET AL., IMAGE AND VISION COMPUTING, vol. 26, 2008, pages 1551 - 1560

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240386747A1 (en) * 2023-05-18 2024-11-21 Ford Global Technologies, Llc Scene authentication

Similar Documents

Publication Publication Date Title
US12288421B2 (en) Optical skin detection for face unlock
JP2025507401A (ja) 画像から抽出された材料データを含む顔認証
WO2025045642A1 (fr) Système de reconnaissance biométrique
Bhattacharjee et al. A comparative study of human thermal face recognition based on Haar wavelet transform and local binary pattern
JP2025507403A (ja) 画像から抽出した材料データに基づく遮閉検出を含む顔認証
WO2024231531A1 (fr) Projecteur à delo
WO2025040591A1 (fr) Rugosité cutanée en tant qu'élément de sécurité pour déverrouillage facial
JP2025508407A (ja) 材料情報決定のための画像操作
WO2025068196A1 (fr) Génération d'ensemble de données de reconnaissance biométrique
CN120112964A (zh) 作为安全特征的距离
EP4530666A1 (fr) Projecteur 2in1 à vcsel polarisés et diviseur de faisceau
WO2024170254A1 (fr) Système d'authentification pour véhicules
EP4665614A1 (fr) Système d'authentification pour véhicules
WO2024170597A1 (fr) Authentification de diode électroluminescente organique (oled) derrière une oled
EP4666254A1 (fr) Authentification de diode électroluminescente organique (oled) derrière une oled
WO2024200502A1 (fr) Élément de masquage
WO2025012337A1 (fr) Procédé d'authentification d'un utilisateur d'un dispositif
WO2025046067A1 (fr) Éléments optiques sur des vcsel d'inondation pour projecteurs 2in1
CN121195290A (zh) 结合oled的投影仪
WO2024170598A1 (fr) Authentification de diode électroluminescente organique (oled) derrière une oled
EP4666255A1 (fr) Authentification de diode électroluminescente organique (oled) derrière une oled
WO2025046063A1 (fr) Tomographie optique diffuse combinée à laser vcsel unique et lampe-projecteur à faisceau large
WO2025040650A1 (fr) Synchronisation de mesure de référence d'authentification de visage
WO2025252691A1 (fr) Détermination d'une propriété de surface
WO2025247861A1 (fr) Système de surveillance de ceinture de sécurité

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24755612

Country of ref document: EP

Kind code of ref document: A1