WO2025040591A1 - Skin roughness as security feature for face unlock - Google Patents
- Publication number
- WO2025040591A1 (PCT/EP2024/073113)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- surface roughness
- image
- speckle
- electromagnetic radiation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/08—Network architectures or network communication protocols for network security for authentication of entities
- H04L63/0861—Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
Definitions
- the invention relates to a computer-implemented method for authenticating a user of a device, a computer program, a computer-readable storage medium, a non-transitory computer-readable storage medium, and a use of a surface roughness measure.
- CMOS complementary metal-oxide-semiconductor
- Two-dimensional-picture based face recognition algorithms use biometric features such as eye-to-eye or eye-to-nose distances and shapes, or iris scans, wherein the latter requires a high-resolution or close-up picture of the eye.
- known techniques based on two-dimensional algorithms can be tricked, e.g. by using a printout or a picture of a face.
- US 2022/094456 A1 describes an apparatus comprising means for: obtaining a propagation profile for wireless signals transmitted between at least two devices via a creeping wave along a user's skin; causing transmission of electromagnetic radiation towards a plurality of locations on a target user's body to obtain dielectric properties of their skin at the plurality of locations based on an amount of the electromagnetic radiation reflected from each location; determining whether the propagation profile correlates with a realizable creeping wave along the target user's skin, the realizability being based on the obtained dielectric properties of their skin; and forming an association between the target user and the at least two devices based on a strength of correlation between the propagation profile and a realizable creeping wave.
- a computer-implemented method for authenticating a user of a device is disclosed.
- the method comprising: a. receiving a request for accessing one or more functions associated with the device; b. executing at least one authentication process comprising the following steps: b.1 triggering to illuminate the user by coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm, b.2 triggering to generate at least one speckle image of the user while the user is being illuminated by the coherent electromagnetic radiation, b.3 determining at least one surface roughness measure based on the speckle image, b.4 authenticating the user or denial using the surface roughness measure.
- the method steps may be performed in the given order or may be performed in a different order. Further, one or more additional method steps may be present which are not listed. Further, one, more than one or even all of the method steps may be performed repeatedly.
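The claimed flow (step a and steps b.1 to b.4) can be sketched in code. Everything below is an illustrative assumption for demonstration: the class and function names, the list-based "image", the 940 nm wavelength choice and the threshold comparison in step b.4 are not prescribed by the application, which leaves the concrete implementation open.

```python
# Illustrative sketch of the claimed authentication flow (a, b.1-b.4).
# All names, the 940 nm wavelength and the tolerance are assumptions.
import statistics

def surface_roughness_measure(pixels):
    # Speckle contrast C = sigma / mean of the intensity values, one of
    # the surface roughness measures listed in the application.
    mean = statistics.fmean(pixels)
    return statistics.pstdev(pixels) / mean if mean else 0.0

class Illuminator:
    def emit(self, wavelength_nm):
        # b.1: coherent radiation must lie in the claimed 850-1400 nm band.
        assert 850 <= wavelength_nm <= 1400

class Camera:
    def __init__(self, frame):
        self.frame = frame
    def capture(self):
        return self.frame

def authenticate(request, illuminator, camera, reference, tolerance=0.1):
    # a. receive a request for accessing one or more device functions
    if not request.get("access_requested"):
        return False
    # b.1 trigger illumination by coherent electromagnetic radiation
    illuminator.emit(wavelength_nm=940)
    # b.2 trigger generation of a speckle image during illumination
    image = camera.capture()
    # b.3 determine a surface roughness measure based on the speckle image
    roughness = surface_roughness_measure(image)
    # b.4 authenticate the user or deny access using that measure
    return abs(roughness - reference) <= tolerance
```

A caller would compare the measured roughness against a reference value stored at enrollment, granting access only when both match within the tolerance.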
- the term "computer implemented" as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to a method involving at least one computer and/or at least one computer network.
- the computer and/or computer network may comprise at least one processor which is configured for performing at least one of the method steps of the method according to the present invention. Specifically, each of the method steps is performed by the computer and/or computer network. The method may be performed completely automatically, specifically without user interaction.
- the term “user” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to a person intended to and/or using the device.
- the term “request for accessing” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to at least one act and/or instance of asking for access.
- the term “receiving a request” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to a process of obtaining a request, e.g. from a data source. The receiving may fully or partially take place automatically.
- the term “infrared spectral range” (IR) generally refers to electromagnetic radiation of 760 nm to 1000 µm, wherein the range of 760 nm to 1.5 µm is usually denominated as “near infrared spectral range” (NIR).
- NIR near infrared spectral range
- ray as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term “light beam” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to a collection of rays. In the following, the terms “ray” and “beam” will be used as synonyms.
- light beam as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to an amount of light, specifically an amount of light traveling essentially in the same direction, including the possibility of the light beam having a spreading angle or widening angle.
- coherent electromagnetic radiation as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to a light pattern and/or a plurality of light beams that have at least essentially a fixed phase relationship between electric field values at different locations and/or at different times.
- the coherent electromagnetic radiation may refer to electromagnetic radiation that is able to exhibit interference effects.
- coherent may also comprise partial coherence, i.e. a non-perfect correlation between phase values.
- the electromagnetic radiation may be completely coherent, wherein deviations of about ± 10% of the phase relationship are possible.
- the use of coherent electromagnetic radiation within the above-specified region may enable measurements of surface roughness even in the presence of sunlight, such as outdoors. Consequently, measurements of surface roughness can be carried out easily and independently of location. Overall, an improved signal-to-noise ratio can be achieved and the accuracy of surface roughness evaluations can be increased.
- the term “illuminate”, as used herein, is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to the process of exposing at least one element to light.
- the illuminating may comprise using at least one illumination source, in particular of the device.
- the term “illumination source” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to an arbitrary device configured for generating or providing light in the sense of the above-mentioned definition.
- the illumination source may be configured for illuminating the user by coherent electromagnetic radiation and/or may be suitable for emitting coherent electromagnetic radiation.
- the illumination source may be configured for emitting light at a single wavelength, e.g. in the infrared region.
- the illumination source may be adapted to emit light with a plurality of wavelengths, e.g. for allowing additional measurements in other wavelengths channels.
- the illumination source may comprise at least one radiation source.
- radiation source as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to at least one arbitrary device configured for providing at least one light beam.
- the radiation source may be or may comprise at least one light emitter.
- the illumination source may comprise a plurality of radiation sources.
- the illumination source may comprise, for example, at least one laser source and/or at least one semiconductor radiation source.
- a semiconductor radiation source may be, for example, a light-emitting diode, such as an organic or inorganic light-emitting diode, and/or a laser diode. Additionally or alternatively, the radiation source may be a VCSEL array and/or an LED. Additionally or alternatively, the illumination source may comprise a VCSEL array and/or an LED.
- the illumination source may comprise one or more optical elements.
- Optical element may be for example a lens, a metasurface element, a DOE or a combination thereof.
- an illumination source may comprise one or more radiation sources and one or more optical elements.
- the coherent electromagnetic radiation is patterned coherent electromagnetic radiation and/or wherein the coherent electromagnetic radiation comprises one or more light beams.
- patterned coherent electromagnetic radiation as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to a plurality of light beams of coherent electromagnetic radiation, e.g. at least two light beams.
- the coherent electromagnetic radiation may comprise at least two, more preferably at least 5 light beams.
- the coherent electromagnetic radiation may be projected onto the user. Projection of a light beam of the coherent electromagnetic radiation onto a surface, in particular of the user, may result in a light spot.
- a light beam may illuminate at least a part of the user and/or may be associated with a contiguous area of coherent electromagnetic radiation on at least a part of the user.
- a light spot may refer to the contiguous area of coherent electromagnetic radiation on at least a part of the user.
- a light spot may refer to an arbitrarily shaped spot of coherent electromagnetic radiation.
- a light spot may be a result of the projection of a light beam associated with the coherent electromagnetic radiation.
- the light spot may be at least partially spatially extended.
- the patterned coherent electromagnetic radiation may illuminate the user by a light pattern comprising a plurality of light spots.
- the light spots may be overlapping at least partially. For example, the number of light spots may be equal to the number of light beams associated with the patterned coherent electromagnetic radiation.
- the intensity associated with each light spot may be substantially similar. Substantially similar may mean that the intensity values associated with the light spots differ by less than 50%, preferably less than 30%, more preferably less than 20%. Using patterned light may be advantageous since it enables sparing light-sensitive regions such as the eyes.
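The "substantially similar" criterion can be checked numerically. The sketch below is one possible reading, assumed for illustration: each spot is summarized by its mean intensity, and all spots must lie within a relative tolerance (50%, 30% or 20% per the text) of the overall mean. The function name and data layout are not from the application.

```python
# Hedged sketch of the intensity-similarity check for projected light
# spots; the relative-to-the-mean interpretation is an assumption.
import statistics

def spots_substantially_similar(spot_intensities, tolerance=0.2):
    """spot_intensities: one mean intensity value per detected light spot.

    Returns True if every spot deviates from the overall mean by less
    than the given relative tolerance (default 20%).
    """
    reference = statistics.fmean(spot_intensities)
    return all(abs(i - reference) / reference < tolerance
               for i in spot_intensities)
```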
- the one or more light spots may be shown in the speckle image.
- a projection of patterned coherent electromagnetic radiation onto a regular surface may result in a light spot projected onto the regular surface without speckle.
- a projection of patterned coherent electromagnetic radiation onto an irregular surface may result in a light spot projected onto the irregular surface comprising at least one speckle, preferably a plurality of speckles.
- the user may be associated with an at least partially irregular surface.
- the speckle image may comprise a plurality of speckles.
- a plurality of speckles is formed due to the interference of the coherent electromagnetic radiation.
- a light spot may comprise zero, one or more speckles depending on the surface onto which the patterned coherent electromagnetic radiation is projected.
- Skin may have an irregular surface.
- the projection of patterned coherent electromagnetic radiation may result in the formation of speckle within the one or more light spots. Projecting coherent electromagnetic radiation on an irregular surface results in the formation of speckle.
- the light spot may comprise one or more speckle.
- a light spot may have a diameter between 0.5 mm and 5 cm, preferably between 0.6 mm and 4 cm, more preferably between 0.7 mm and 3 cm, most preferably between 0.4 cm and 2 cm.
- patterned coherent electromagnetic radiation may be generated by an illumination source comprising a plurality of light emitters, such as a VCSEL array comprising a plurality of VCSELs.
- An emitter of the plurality of light emitters may emit one light beam.
- an emitter of the plurality of light emitters may be associated with the one light spot, with the formation of one light spot and/or with the projection of one light spot.
- patterned coherent electromagnetic radiation may be generated by one or more light emitters and an optical element such as a DOE or a metasurface element.
- a metasurface element may be a metalens.
- a metalens may be at least partially transparent with respect to the coherent electromagnetic radiation and/or may comprise a material structured on the nanoscale.
- the optical element may replicate the number of light beams associated with the one or more light emitters and/or may be suitable for replicating the number of light beams associated with the one or more light emitters.
- the light emitter may be a laser.
- speckle image is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to an image showing a plurality of speckles.
- the speckle image may show a plurality of speckles.
- the speckle image may comprise an image showing the user, in particular at least one part of the face of the user, while the user is being illuminated with the coherent electromagnetic radiation, particularly on a respective area of interest comprised by the image.
- the speckle image may be generated while the user may be illuminated by coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm.
- the speckle image may show a speckle pattern.
- the speckle pattern may specify a distribution of the speckles.
- the speckle image may indicate the spatial extent of the speckles.
- the speckle image may be suitable for determining a surface roughness measure.
- the speckle image may be generated with at least one camera. For generating the speckle image, the user may be illuminated by the illumination source.
- speckle as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to an optical phenomenon caused by interfering coherent electromagnetic radiation due to non-regular or irregular surfaces. Speckles may appear as contrast variations in an image such as a speckle image.
- speckle pattern is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to a distribution of the plurality of speckles.
- the distribution of the plurality of speckles may refer to a spatial distribution of at least one of the plurality of speckles and/or a spatial distribution of at least two of the plurality of speckles in relation to each other.
- the spatial distribution of the at least one of the plurality of speckles may refer to and/or specify a spatial extent of the at least one of the plurality of speckles.
- the spatial distribution of the at least two of the plurality of speckles may refer to and/or specify a spatial extent of the first speckle of the at least two speckles in relation to the second speckle of the at least two speckles and/or a distance between the first speckle of the at least two speckles and the second speckle of the at least two speckles.
- since the speckles may be caused by the irregularities of the surface, the speckles reflect the roughness of the surface.
- determining the surface roughness measure based on the speckles in the speckle image utilizes the relation between the speckle distribution and the surface roughness. Thereby, a low-cost, efficient and readily available solution for surface roughness evaluation can be enabled.
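The relation between the speckle distribution and the surface roughness can be exploited with very little computation. The sketch below is an assumed, minimal example: it tiles the speckle image and computes the local speckle contrast (standard deviation over mean) per tile, a statistic commonly used to characterize speckle patterns. Tile size and the plain-list image representation are illustrative choices, not part of the application.

```python
# Minimal sketch: local speckle contrast as a cheap statistic of the
# speckle distribution. Tiling granularity is an assumption.
import statistics

def local_speckle_contrast(image, tile=2):
    """image: 2-D list of intensity values.

    Returns one contrast value (sigma / mean) per non-overlapping
    tile x tile block; rougher surfaces tend to give higher contrast.
    """
    h, w = len(image), len(image[0])
    contrasts = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            values = [image[r + i][c + j]
                      for i in range(tile) for j in range(tile)]
            mean = statistics.fmean(values)
            contrasts.append(statistics.pstdev(values) / mean if mean else 0.0)
    return contrasts
```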
- the term “generating” at least one image as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to imaging capturing and/or determining and/or recording at least one image.
- the generating of an image may be performed by using at least one camera.
- the generating of the image may comprise capturing a single image and/or a plurality of images such as a sequence of images such as a video or a movie.
- the generating of the speckle image may be initiated by a user action or may automatically be initiated, e.g. once the presence of a user within a field of view and/or within a predetermined sector of the field of view of the camera is automatically detected.
- the term “field of view” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to an angular extent of the observable world and/or at least one scene that may be captured or viewed by an optical system, such as the image generation unit.
- the field of view may, typically, be expressed in degrees and/or radians, and, exemplarily, may represent the total angle spanned by the image and/or viewable area.
- the term “camera” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to a device having at least one image sensor configured for generating or recording spatially resolved one-dimensional, two-dimensional or even three-dimensional optical data or information.
- the camera may be a digital camera.
- the camera may comprise at least one image sensor, such as at least one CCD sensor and/or at least one CMOS sensor configured for recording images.
- the image may be generated via a hardware and/or a software interface, which may be considered as the camera.
- the camera may comprise at least one image sensor, in particular at least one pixelated image sensor.
- the speckle image is generated by using at least one camera comprising at least one image sensor such as at least one CCD sensor and/or at least one CMOS sensor.
- the camera may comprise at least one CMOS sensor and/or at least one CCD chip.
- the camera may comprise at least one CMOS sensor, which may be sensitive in the infrared spectral range.
- the camera may have a field of view between 10°x10° and 75°x75°, preferably 55°x65°.
- the camera may have a resolution below 2 megapixel, preferably between 0.3 megapixel and 1.5 megapixel. Megapixel may refer to a unit for measuring the number of pixels associated with a camera and/or an image.
- the camera may comprise further elements, such as one or more optical elements, e.g. one or more lenses.
- the camera may be a fixed-focus camera, having at least one lens which is fixedly adjusted with respect to the camera.
- the camera may also comprise one or more variable lenses which may be adjusted, automatically or manually.
- Other cameras are feasible.
- the camera may comprise the at least one image sensor and at least one further optical element.
- the further optical element may be at least one lens.
- a lens may refer to an optical element suitable for influencing the expansion of the light beam associated with the coherent electromagnetic radiation.
- the further optical element may be at least one polarizer.
- the camera may comprise at least one image sensor, at least one lens and at least one polarizer.
- the polarizer may refer to an optical element suitable for selecting the electromagnetic radiation according to its polarization.
- the polarizer may be an optical element suitable for selecting the coherent electromagnetic radiation according to its polarization.
- a part of the electromagnetic radiation, in particular of the coherent electromagnetic radiation, may pass the polarizer while the rest of the electromagnetic radiation may be at least partially deflected and/or at least partially absorbed.
- since the coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm penetrates the skin deeply, a part of the information received from the light reflected from the skin comprises information independent of the surface roughness, which distorts the measurement of the surface roughness.
- a polarizer can be used.
- the coherent electromagnetic radiation reflected from the surface of the object is usually polarized differently from the light reflected from deeper layers of the human skin.
- the polarizer enables a selection of the desired signal from the undesired signal.
- a distance between the user and the camera used for generating the speckle image is between 10 cm and 1.5 m and/or the distance between the user and an illumination source used for illuminating the user is between 10 cm and 1.5 m.
- the distance between the user and the camera may be between 20 cm and 1.2 m.
- the distance between the user and the illumination source may be between 20 cm and 1.2 m. Adjusting the distance between the object and the camera ensures that a speckle image of sufficient quality is generated.
- the above-specified distances enable a correct and reliable determination of the surface roughness. This is especially important in non-static contexts, where a user may operate the device by himself.
- the speckle image may show the user while being illuminated by coherent electromagnetic radiation and the surface roughness of the user’s skin may be determined.
- the user may have generated the speckle image and/or initiated the generation of the speckle image.
- the speckle image may be initiated by the user operating an application of a mobile electronic device.
- the user can decide on his or her own when to determine the surface roughness of her or his skin.
- a non-expert user is enabled to determine surface roughness and measurements can be carried out in more natural and less artificial contexts.
- the surface roughness can be evaluated more realistically, which in turn provides a more realistic measure for the surface roughness.
- the skin may have different surface roughness during the course of the day depending on the activity of the human. Doing sports may influence the surface roughness, as may applying cream to the skin. This influence can be verified with the methods and systems described herein.
- the speckle image may be associated with a resolution of less than 5 megapixel.
- the speckle image may be associated with a resolution of less than 3 megapixel, more preferably less than 2.5 megapixel, most preferably less than 2 megapixel.
- Such speckle images can be generated with readily available, small and cheap smartphone cameras.
- the storage and processing capacities needed for evaluating the surface roughness measure are small.
- the low resolution of the speckle image used for evaluating the surface roughness enables the usage of mobile electronic devices for evaluating the surface roughness, in particular devices like smartphones or wearables, since these devices have strictly limited size, memory and processing capacity.
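The resource argument can be made concrete with a simple example: averaging non-overlapping 2x2 pixel blocks quarters the pixel count before any roughness evaluation. The pure-Python, list-based implementation below is an illustrative assumption; a real mobile pipeline would rather use the sensor's native binning or an image library.

```python
# Hedged sketch: 2x2 block-average downsampling to keep the speckle
# image small enough for memory- and processing-constrained devices.

def downsample_2x2(image):
    """Average non-overlapping 2x2 blocks of a 2-D intensity list.

    The result has half the rows and half the columns, i.e. a quarter
    of the pixels to store and process.
    """
    return [[(image[r][c] + image[r][c + 1]
              + image[r + 1][c] + image[r + 1][c + 1]) / 4
             for c in range(0, len(image[0]) - 1, 2)]
            for r in range(0, len(image) - 1, 2)]
```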
- the method may further comprise reducing the speckle image to a predefined size prior to determining the surface roughness measure.
- Reducing the speckle image to a predefined size may be based on applying one or more image augmentation techniques.
- Reducing the speckle image to a predefined size may comprise selecting an area of the speckle image of the predefined size and cropping the speckle image to that area.
- the area of the speckle image of the predefined size may be associated with the living organism such as the human, in particular with the skin of the living organism such as the skin of the human.
- the part of the image other than the area of the speckle image of the predefined size may be associated with background and/or may be independent of the living organism such as a human.
- a reduced amount of data needs to be processed, which decreases the time needed for determining the surface roughness and requires less storage and processing capacity. Furthermore, the part of the image useful for the analysis is selected. Hence, reducing the size may result in disregarding parts of the speckle image independent of the object or living organism such as a human. Consequently, the surface roughness measure can be determined easily, and disturbing parts not relating to the user are ignored in the analysis.
- image augmentation techniques may comprise at least one of scaling, cutting, rotating, blurring, warping, shearing, resizing, folding, changing the contrast, changing the brightness, adding noise, multiplying at least a part of the pixel values, dropout, adjusting colors, applying a convolution, embossing, sharpening, flipping, averaging pixel values or the like.
- the method may further comprise reducing the speckle image to a predefined size based on detecting the user in the speckle image.
- the speckle image may be reduced to a predefined size based on detecting the user in the speckle image prior to determining the surface roughness measure.
- reducing the speckle image to the predefined size based on detecting the user in the speckle image may comprise detecting the contour of the user, e.g. detecting the contour of a user’s face, and reducing the speckle image to an area associated with the user, in particular an area associated with the user’s face.
- the area associated with the user may be within the contour of the user, in particular the user and/or the contour of a user’s face.
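The contour-based reduction described above can be sketched as a bounding-box crop. The contour is assumed to be given as a list of (row, column) points, e.g. landmark points from a face detector; the detection itself is out of scope here, and the function name and data layout are illustrative, not from the application.

```python
# Hedged sketch: crop the speckle image to the bounding box of the
# detected user contour, discarding background pixels.

def crop_to_contour(image, contour_points):
    """image: 2-D list of intensities; contour_points: (row, col) pairs.

    Returns the sub-image spanned by the contour's bounding box.
    """
    rows = [p[0] for p in contour_points]
    cols = [p[1] for p in contour_points]
    return [row[min(cols):max(cols) + 1]
            for row in image[min(rows):max(rows) + 1]]
```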
- the method may further comprise receiving at least one flood image.
- flood image as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to an image generated by the camera while the illumination source is emitting flood light onto the user.
- the term “flood light” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to substantially continuous spatial illumination, in particular diffuse and/or uniform illumination.
- the flood light may have a wavelength in the infrared range.
- the flood image may comprise an image showing a user, in particular the face of the user, while the user is being illuminated with the flood light.
- the flood image may be generated by imaging and/or recording light reflected by the user which is illuminated by the flood light.
- the flood image showing the user may comprise at least a portion of the flood light on at least a portion of the user.
- the flood image may show the contour of the user.
- the contour of the user may be detected based on the flood image.
- the contour of the user may be detected by providing the flood image to an object detection data-driven model, in particular a user detection model, wherein object detection data-driven model may be parametrized and/or trained to receive the flood image and provide an indication on the contour of the user based on a training data set.
- the training data set may comprise flood images and indications on the contour of objects and/or humans.
- the indication of the contour may include a plurality of points indicating the location of a specific landmark associated with the user. For example, where the speckle image may be associated with a user’s face, the user’s face may be detected based on the contour, wherein the contour may indicate the landmarks of the face such as the nose point or the outer corner of the lips or eyebrows.
- surface roughness is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to a property of a surface associated with the user.
- the surface roughness may characterize lateral and/or vertical extent of surface features.
- the surface roughness may be evaluated based on the surface roughness measure.
- the surface roughness measure may quantify the surface roughness.
- surface feature is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to an arbitrarily shaped structure associated with the surface, in particular of the user.
- the surface feature may refer to a substructure of the surface associated with the user.
- a surface may comprise a plurality of surface features.
- an uplift or a sink may be surface features.
- a surface feature may refer to a part of the surface associated with an angle unequal to 90° against the surface normal.
- surface roughness measure is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to a measure suitable for quantifying the surface roughness.
- Surface roughness measure may be related to the speckle pattern.
- surface roughness measure may comprise at least one of a fractal dimension, speckle size, speckle contrast, speckle modulation, roughness exponent, standard deviation of the height associated with surface features, lateral correlation length, average mean height, root mean square height or a combination thereof.
- the surface roughness measure may be suitable for describing the vertical and lateral surface features.
- Surface roughness measure may comprise a value associated with the surface roughness measure.
- Surface roughness measure may refer to a term of a quantity for measuring the surface roughness and/or to the values associated with the quantity for measuring the surface roughness.
- the determining of the surface roughness measure based on the speckle image may refer to determining the surface roughness measure based on a speckle pattern in the speckle image.
- the surface roughness measure may be determined based on the speckle image by providing the speckle image to a model and receiving the surface roughness measure from the model.
- the model may be suitable for determining an output based on an input.
- the model may be suitable for determining a surface roughness measure based on the speckle image, preferably upon receiving the speckle image.
- the model may be or may comprise one or more of a physical model, a data- driven model or a hybrid model.
- a hybrid model may be a model comprising at least one data-driven model with physical or statistical adaptations and model parameters. Statistical or physical adaptations may be introduced to improve the quality of the results, since they provide a systematic relation between empiricism and theory.
- a data-driven model may represent a correlation between the surface roughness measure and the speckle image.
- the data-driven model may obtain the correlation between the surface roughness measure and the speckle image based on a training data set comprising a plurality of speckle images and a plurality of surface roughness measures.
- the data-driven model may be parametrized based on a training data set to receive the speckle image and provide the surface roughness measure.
- the data-driven model may be trained based on a training data set.
- the training data set may comprise at least one speckle image and at least one corresponding surface roughness measure.
- the training data set may comprise a plurality of speckle images and a plurality of surface roughness measures.
- Training the model may comprise parametrizing the model.
- Providing the surface roughness measure based on the speckle image may comprise mapping the speckle image to the surface roughness measure.
- the data-driven model may be parametrized and/or trained to receive the speckle image.
- Data-driven model may receive the speckle image at an input layer.
- the term training may also be denoted as learning.
- the term specifically may refer, without limitation, to a process of building the data-driven model, in particular determining and/or updating parameters of the data-driven model. Updating parameters of the data-driven model may also be referred to as retraining. Retraining may be included when referring to training herein.
- the data-driven model may adjust to achieve best fit with the training data, e.g. relating the at least one input value with best fit to the at least one desired output value.
- the neural network may be a feedforward neural network such as a convolutional neural network (CNN).
- a backpropagation-algorithm may be applied for training the neural network.
- a gradient descent algorithm or a backpropagation-through-time algorithm may be employed for training purposes.
- Training a data-driven model may comprise or may refer without limitation to calibrating the model.
- the physical model may reflect physical phenomena in mathematical form, e.g., including first-principles models.
- a physical model may comprise a set of equations that describe an interaction between the object and the coherent electromagnetic radiation thereby resulting in a surface roughness measure.
- the physical model may be based on at least one of a fractal dimension, speckle size, speckle contrast, speckle modulation, roughness exponent, standard deviation of the height associated with surface features, lateral correlation length, average mean height, root mean square height or a combination thereof.
- the physical model may comprise one or more equations relating the speckle image and the surface roughness measure based on equations relating to the fractal dimension, speckle size, speckle contrast, speckle modulation, roughness exponent, standard deviation of the height associated with surface features, lateral correlation length, average mean height, root mean square height or a combination thereof.
- the fractal dimension may be determined based on the Fourier transform of the speckle image and/or the inverse of the Fourier transform of the speckle image.
- the fractal dimension may be determined based on the slope of a linear function fitted to a double logarithmic plot of the power spectral density versus a frequency obtained by Fourier transform.
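The slope-based determination described above can be sketched as follows. This is an illustrative sketch, not the implementation of the invention: the function name, the radial averaging of the power spectral density, and the relation D = (8 − β)/2 (one common convention for self-affine surfaces) are assumptions.

```python
import numpy as np

def fractal_dimension_estimate(speckle_image: np.ndarray) -> float:
    # Power spectral density via the 2D Fourier transform (DC shifted to center)
    spectrum = np.fft.fftshift(np.fft.fft2(speckle_image))
    psd = np.abs(spectrum) ** 2

    # Integer radial frequency coordinate for each pixel
    h, w = psd.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w // 2, y - h // 2).astype(int)

    # Radially averaged PSD (the DC bin at r = 0 is skipped below)
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=psd.ravel())
    radial_psd = sums / counts

    # Slope of a linear fit in the double-logarithmic plot of PSD vs. frequency
    n = min(h, w) // 2
    freqs = np.arange(1, n)
    slope = np.polyfit(np.log(freqs), np.log(radial_psd[1:n]), 1)[0]

    # For a self-affine surface, PSD ~ f^(-beta); the mapping of beta to a
    # fractal dimension is convention-dependent (here D = (8 - beta) / 2)
    beta = -slope
    return float((8.0 - beta) / 2.0)
```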
- the speckle size may refer to the spatial extent of one or more speckles. Where the speckle size may refer to the spatial extent of more than one speckle, the speckle size may be determined based on an average of more than one speckle sizes and/or a weighting of the more than one speckle sizes.
- the speckle contrast may refer to a measure for the standard deviation of at least a part of the speckle image in relation to the mean intensity of at least the part of the speckle image.
- the speckle modulation may refer to a measure for the intensity fluctuation associated with the speckles in at least a part of the speckle image.
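The speckle contrast defined above (standard deviation relative to mean intensity of a part of the speckle image) can be sketched in a few lines; the function name is illustrative.

```python
import numpy as np

def speckle_contrast(region: np.ndarray) -> float:
    # Fully developed speckle ideally yields a contrast close to 1;
    # smoother surfaces reduce the contrast.
    return float(np.std(region) / np.mean(region))
```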
- Roughness exponent, standard deviation of the height associated with surface features, lateral correlation length or a combination thereof may be determined based on the autocorrelation function associated with the double logarithmic plot of the power spectral density versus a frequency obtained by Fourier transform.
- determining the surface roughness measure based on the speckle image may comprise determining the surface roughness measure based on a speckle pattern.
- Determining the distribution of the speckles may comprise determining at least one of fractal dimension associated with the speckle image, speckle size associated with the speckle image, speckle contrast associated with the speckle image, speckle modulation associated with the speckle image, roughness exponent associated with the speckle image, standard deviation of the height associated with surface features associated with the speckle image, lateral correlation length associated with the speckle image, average mean height associated with the speckle image, root mean square height associated with the speckle image or a combination thereof.
- determining the surface roughness measure may comprise determining at least one of fractal dimension associated with the speckle image, speckle size associated with the speckle image, speckle contrast associated with the speckle image, speckle modulation associated with the speckle image, roughness exponent associated with the speckle image, standard deviation of the height associated with surface features associated with the speckle image, lateral correlation length associated with the speckle image, average mean height associated with the speckle image, root mean square height associated with the speckle image or a combination thereof.
- determining the surface roughness measure based on the distribution of the speckles in the speckle image may comprise providing the speckle image to a model, in particular a data-driven model, wherein the data-driven model may be parametrized and/or trained based on a training data set comprising one or more speckle image and one or more corresponding surface roughness measure.
- the surface roughness measure may be determined based on the speckle image by providing the speckle image to a model and receiving the surface roughness measure from the model.
- the model may be a data-driven model and may be parametrized and/or trained based on a training data set comprising a plurality of speckle images and corresponding surface roughness measures or indications of surface roughness measures. Additionally or alternatively, the model may be a physical model.
- the method may further comprise generating a partial speckle image.
- a partial speckle image may refer to a partial image generated based on the speckle image.
- the partial speckle image may be generated by applying one or more image augmentation techniques to the speckle image.
- the method may further comprise generating a first speckle image and a second speckle image.
- the speckle image may comprise the first speckle image and the second speckle image.
- the first speckle image may refer to a first part of the speckle image.
- the second speckle image may refer to a second part of the speckle image.
- the first speckle image and the second speckle image may be different from each other.
- the first speckle image and the second speckle image may be non-overlapping.
- the first speckle image and the second speckle image may be generated by applying one or more image augmentation techniques to the speckle image.
- Determining the surface roughness measure based on the speckle image may comprise determining a first surface roughness measure based on the first speckle image and determining a second surface roughness measure based on the second speckle image.
- Providing the surface roughness measure may include providing the first surface roughness measure and the second surface roughness measure.
- the first surface roughness measure and the second surface roughness measure may be provided together.
- the first surface roughness measure and the second surface roughness measure may be provided in a surface roughness measure map indicating the spatial distribution of surface roughness measures.
- the surface roughness measure map may indicate the first surface roughness measure associated with a first area in the surface roughness measure map and the second surface roughness measure associated with a second area in the surface roughness measure map.
- the surface roughness measure map may be similar to a heat map, wherein the surface roughness measures may be plotted against the area associated with the respective surface roughness measures.
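A surface roughness measure map of the kind described above can be sketched by tiling the speckle image and computing a per-tile measure. This is a hypothetical illustration; the function name, the tile size, and the use of the speckle contrast as the per-tile measure are assumptions.

```python
import numpy as np

def roughness_measure_map(speckle_image: np.ndarray, tile: int = 16) -> np.ndarray:
    # Divide the speckle image into non-overlapping tiles and arrange a
    # per-tile roughness measure as a spatial map, similar to a heat map.
    h, w = speckle_image.shape
    rows, cols = h // tile, w // tile
    rmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = speckle_image[i * tile:(i + 1) * tile,
                                  j * tile:(j + 1) * tile]
            rmap[i, j] = np.std(patch) / np.mean(patch)  # per-tile contrast
    return rmap
```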
- the surface roughness measure is determined by using at least one processor.
- processor as generally used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to an arbitrary logic circuitry configured for performing basic operations of a computer or system, and/or, generally, to a device which is configured for performing calculations or logic operations.
- the processor, or computer processor, may be configured for processing basic instructions that drive the computer or system. It may be a semiconductor-based processor, a quantum processor, or any other type of processor configured for processing instructions.
- the processor may be or may comprise a Central Processing Unit ("CPU").
- the processor may be a graphics processing unit (“GPU”), a tensor processing unit (“TPU”), a Complex Instruction Set Computing (“CISC”) microprocessor, a Reduced Instruction Set Computing (“RISC”) microprocessor, a Very Long Instruction Word (“VLIW”) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets.
- the processing means may also be one or more special-purpose processing devices such as an Application-Specific Integrated Circuit (“ASIC”), a Field Programmable Gate Array (“FPGA”), a Complex Programmable Logic Device (“CPLD”), a Digital Signal Processor (“DSP”), a network processor, or the like.
- processor may also refer to one or more processing devices, such as a distributed system of processing devices located across multiple computer systems (e.g., cloud computing), and is not limited to a single device unless otherwise specified.
- the processor may also be an interface to a remote computer system such as a cloud service.
- the processor may include or may be a secure enclave processor (SEP).
- An SEP may be a secure circuit configured for processing sensitive data such as the speckle image.
- a “secure circuit” is a circuit that protects an isolated, internal resource from being directly accessed by an external circuit.
- the processor may be an image signal processor (ISP) and may include circuitry suitable for processing images, in particular images with personal and/or confidential information.
- the device may be a mobile electronic device.
- the surface roughness measure may be determined by a mobile electronic device and/or wherein the speckle image is generated with a camera of a mobile electronic device.
- the human may initiate the generation of the speckle image based on the mobile electronic device. This is beneficial since many humans own a mobile electronic device such as a smartphone. These devices accompany the human and thus, a measurement of the surface roughness is possible at any time and can be carried out in more natural and less artificial contexts. Thereby, the surface roughness can be evaluated more realistically, which in turn provides a more realistic measure for the surface roughness.
- the method further comprises authenticating the user or denial using the surface roughness measure.
- the authenticating may be performed by using at least one authentication unit, e.g. of the device.
- the term “authentication unit” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to at least one unit configured for performing at least one authentication process of a user.
- the authentication unit may be or may comprise at least one processor and/or may be designed as software or application.
- the authenticating of the user or denial using the surface roughness measure is performed by using an authentication unit of the device and/or a remote authentication unit.
- the authentication unit may perform at least one face detection using the flood image.
- the face detection may be performed locally on the device.
- Face identification, i.e. assigning an identity to the detected face, however, may be performed remotely, e.g. in the cloud, especially when identification is needed and not only verification.
- User templates can be stored at the remote device, e.g. in the cloud, and would not need to be stored locally. This can be an advantage in view of storage space and security.
- the authentication unit may be configured for identifying the user based on the flood image. Particularly therefore, the authentication unit may forward data to a remote device. Alternatively or in addition, the authentication unit may perform the identification of the user based on the flood image, particularly by running an appropriate computer program having a respective functionality.
- identifying as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to assigning an identity to a detected face and/or at least one identity check and/or verifying an identity of the user.
- the authentication may comprise a plurality of steps.
- the authentication may comprise performing at least one face detection using the flood image.
- the face detection may comprise analyzing the flood image.
- the analyzing of the flood image may comprise using at least one image recognition technique, in particular a face recognition technique.
- An image recognition technique comprises at least one process of identifying the user in an image.
- the image recognition may comprise using at least one technique selected from the group consisting of: color-based image recognition, e.g. using features such as template matching; segmentation and/or blob analysis, e.g. using size or shape; machine learning and/or deep learning, e.g. using at least one convolutional neural network.
- the authentication may comprise identifying the user.
- the identifying may comprise assigning an identity to a detected face and/or at least one identity check and/or verifying an identity of the user.
- the identifying may comprise performing a face verification of the imaged face to be the user’s face.
- the identifying the user may comprise matching the flood image, e.g. showing a contour of parts of the user, in particular parts of the user’s face, with a template, e.g. a template image generated within an enrollment process.
- the identifying of the user may comprise determining if the imaged face is the face of the user, in particular if the imaged face corresponds to at least one image of the user’s face stored in at least one memory, e.g. of the device.
- Authentication may be successful if the flood image can be matched with an image template.
- Authentication may be unsuccessful if the flood image cannot be matched with an image template.
- the identifying of the user may comprise determining a plurality of facial features.
- the analyzing may comprise comparing, in particular matching, the determined facial features with template features.
- the template features may be features extracted from at least one template.
- the template may be or may comprise at least one image generated in an enrollment process, e.g. when initializing the device. Template may be an image of an authorized user.
- the template features and/or the facial feature may comprise a vector.
- Matching of the features may comprise determining a distance between the vectors.
- the identifying of the user may comprise comparing the distance of the vectors to at least one predefined limit. The user may be successfully identified in case the distance is ≤ the predefined limit, at least within tolerances. The user may be declined and/or rejected otherwise.
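The vector-based matching of facial features against template features described above can be sketched as follows. This is an illustrative sketch under assumptions: the function name, the Euclidean distance, and the value of the predefined limit are not taken from the invention.

```python
import numpy as np

def matches_template(features: np.ndarray, template: np.ndarray,
                     limit: float = 1.0) -> bool:
    # Distance between the determined facial feature vector and the
    # template feature vector (here: Euclidean norm of the difference);
    # the user is accepted if the distance stays within the predefined limit.
    distance = float(np.linalg.norm(features - template))
    return distance <= limit
```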
- the analyzing of the flood image may further comprise one or more of the following: a filtering; a selection of at least one region of interest; a formation of a difference image between the flood image and at least one offset; an inversion of the flood image; a background correction; a decomposition into color channels; a decomposition into hue, saturation, and brightness channels; a frequency decomposition; a singular value decomposition; applying a Canny edge detector; applying a Laplacian of Gaussian filter; applying a Difference of Gaussian filter; applying a Sobel operator; applying a Laplace operator; applying a Scharr operator; applying a Prewitt operator; applying a Roberts operator; applying a Kirsch operator; applying a high-pass filter; applying a low-pass filter; applying a Fourier transformation; applying a Radon transformation; applying a Hough transformation; applying a wavelet transformation; a thresholding; creating a binary image.
- the region of interest may be determined manually by a user.
- the image recognition may comprise using at least one model, in particular a trained model comprising at least one face recognition model.
- the analyzing of the flood image may be performed by using a face recognition system, such as FaceNet, e.g. as described in Florian Schroff, Dmitry Kalenichenko, James Philbin, “FaceNet: A Unified Embedding for Face Recognition and Clustering”, arXiv: 1503.03832.
- the trained model may comprise at least one convolutional neural network.
- the convolutional neural network may be designed as described in M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks”, CoRR, abs/1311.2901, 2013, or trained using databases such as the one described in C. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments”, Technical Report 07-49, University of Massachusetts, Amherst, October 2007, the Youtube® Faces Database as described in L. Wolf, T. Hassner, and I. Maoz, “Face recognition in unconstrained videos with matched background similarity”, in IEEE Conf. on CVPR, 2011, or the Google® Facial Expression Comparison dataset.
- the training of the convolutional neural network may be performed as described in Florian Schroff, Dmitry Kalenichenko, James Philbin, “FaceNet: A Unified Embedding for Face Recognition and Clustering”, arXiv: 1503.03832.
- the face detection and identification of the user may be performed before step b.4 of authenticating the user or denial using the surface roughness measure.
- the authentication process may be aborted in case the user is not successfully identified.
- using two-dimensional images for authentication can be tricked.
- the method according to the present invention proposes to use the surface roughness measure as additional security feature for authentication.
- the authentication using the flood image may be validated using the surface roughness measure.
- step b.4 of authenticating the user or denial using the surface roughness measure may be performed regardless of whether the face detection and identification of the user was performed.
- the surface roughness measure may be used as biometric identifier for uniquely identifying the user.
- the method may comprise determining if the surface roughness measure corresponds to a surface roughness measure of a human being.
- the method may comprise determining if the surface roughness measure corresponds to a surface roughness measure of the specific user. Determining if the surface roughness measure corresponds to a surface roughness measure of a human being and/or of the specific user may comprise comparing the surface roughness measure to at least one pre-defined or pre-determined range of values of surface roughness measure, e.g. stored in at least one database e.g. of the device or of a remote database such as of a cloud.
- if the surface roughness measure is, at least within tolerances, within the pre-defined or pre-determined range of values of the surface roughness measure, the user is authenticated; otherwise the authentication is unsuccessful.
- the surface roughness measure may be a human skin roughness.
- if the determined human skin roughness is within the range of 10 µm to 150 µm, the user is authenticated.
- other ranges are possible.
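The range check described above can be sketched in a few lines. This is a minimal sketch assuming the 10 µm to 150 µm range given as an example; the function name, the default bounds, and the tolerance parameter are illustrative.

```python
def is_human_skin_roughness(roughness_um: float,
                            low: float = 10.0, high: float = 150.0,
                            tolerance: float = 0.0) -> bool:
    # Accept if the measured skin roughness (in micrometers) lies within the
    # pre-defined range, at least within tolerances; otherwise decline.
    return (low - tolerance) <= roughness_um <= (high + tolerance)
```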
- the method further may comprise c. allowing or declining the user to access one or more functions associated with the device depending on the authentication or denial in step b.4.
- the allowing may comprise granting permission to access the one or more functions.
- the method may comprise determining if the user corresponds to an authorized user, wherein the allowing or declining is further based on determining if the user corresponds to an authorized user.
- the method may comprise at least one authorization step, e.g. by using at least one authorization unit.
- authorization step as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to a step of assigning access rights to the user, in particular a selective permission or selective restriction of access to the device and/or at least one resource of the device.
- the authorization unit may be configured for access control.
- authorization unit is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning.
- the term specifically may refer, without limitation, to a unit such as a processor configured for authorization of a user.
- the authorization unit may comprise at least one processor or may be designed as software or application.
- the authorization unit and the authentication unit may be embodied integral, e.g. by using the same processor.
- the authorization unit may be configured for allowing the user to access the one or more functions, e.g. on the device, e.g. unlocking the device, in case of successful authentication of the user or declining the user to access the one or more functions, e.g. on the device, in case of non-successful authentication.
- the method may comprise displaying a result of the authentication and/or the authorization e.g. by using at least one communication interface such as a user interface, e.g. a display.
- the method further may comprise using even further security features such as a three-dimensional information and/or other liveness data.
- the method may comprise conducting at least one distance measurement, in addition to using the surface roughness as security feature.
- the three-dimensional information obtained via the distance measurement may be used as additional security feature.
- the terms determining a distance or performing a distance measurement may refer to measuring a distance.
- the method may comprise performing a distance measurement at a plurality of positions of the user’s face and for determining a depth map.
- the method may comprise authenticating the user based on the surface roughness measure and the depth map.
- the determined depth map may be compared to a predetermined depth map of the user, e.g. determined during an enrollment process.
- the authentication unit may be configured for authenticating the user in case the determined depth map matches with the predetermined depth map of the user, in particular at least within tolerances. Otherwise, the user may be declined.
- the authenticating using the depth map may be performed before or after the authenticating using the surface roughness measure.
- the distance measurement may comprise using one or more of depth from focus, depth from defocus, triangulation, or depth-from-photon-ratio.
- the distance measurement may comprise receiving and/or obtaining at least one reflection image showing at least a part of the user while the user is illuminated at least partially with electromagnetic radiation and determining a distance of the user from the image generation unit and/or from an illumination source based on the at least one reflection image.
- the reflection image as used herein may not be limited to an actual visual representation of a user. Instead, a reflection image comprises data generated based on electromagnetic radiation reflected by an object being illuminated by electromagnetic radiation. Reflection image may comprise at least one pattern. Reflection image may comprise at least one pattern feature. Reflection image may be comprised in a larger reflection image. A larger reflection image may be a reflection image comprising more pixels than the reflection image comprised in it.
- Dividing a reflection image into at least two parts may result in at least two reflection images.
- the at least two reflection images may comprise different data generated based on light reflected by an object being illuminated with light, e.g. one of the at least two reflection images may represent a living organism's nose and the other one of the at least two reflection images may represent a living organism's forehead.
- Reflection image may be suitable for determining a feature contrast for the at least one pattern feature.
- Reflection image may comprise a plurality of pixels.
- a plurality of pixels may comprise at least two pixels, preferably more than two pixels. For determining a feature contrast at least one pixel associated with the reflection feature and at least one pixel not associated with the reflection feature may be suitable.
- the term “reflection image” as used herein can refer to any data based on which an actual visual representation of the imaged object can be constructed.
- the data can correspond to an assignment of color or grayscale values to image positions, wherein each image position can correspond to a position in or on the imaged object.
- the reflection images or the data referred to herein can be two-dimensional, three-dimensional or four-dimensional, for instance, wherein a four-dimensional image is understood as a three-dimensional image evolving over time and, likewise, a two-dimensional image evolving over time might be regarded as a three-dimensional image.
- a reflection image can be considered a digital image if the data are digital data, wherein then the image positions may correspond to pixels or voxels of the image and/or image sensor.
- the living organism may be illuminated with light, possibly being RGB light or preferably IR flood light and/or patterned light.
- Electromagnetic radiation may be patterned electromagnetic radiation. Patterned electromagnetic radiation may comprise at least one pattern. Patterned light may be projected onto the living organism. Patterned electromagnetic radiation may comprise patterned coherent electromagnetic radiation.
- a distance of the user from an image generation unit and/or from an illumination source may be determined based on the at least one reflection image.
- Reflection image may comprise information associated with the distance.
- a variety of methods are known for determining a distance based on an image, e.g. “depth from focus”, “depth from defocus” or triangulation.
- Distance may be determined by “depth from focus”, “depth from defocus” (DFD), triangulation, depth-from-photon-ratio (DPR) or combinations thereof.
- Different methods for determining a distance may provide different advantages depending on the use case as known in the art. Hence, combinations of at least two methods may provide more accurate results and thus, improve the reliability of an authentication process including distance determination.
- the distance obtained from at least two methods may comprise at least two distance values. The at least two distance values may be combined by using at least one recursive filter and/or by using a real function such as the arithmetic or geometric mean, or a polynomial, preferably a polynomial up to the eighth order in the at least two distance values.
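The mean-based combination of two distance values mentioned above can be sketched as follows; the function name and the choice between arithmetic and geometric mean via a flag are illustrative.

```python
import math

def combine_distances(z1: float, z2: float, geometric: bool = False) -> float:
    # Combine two distance values, e.g. from depth-from-defocus and
    # depth-from-photon-ratio, into a single, more reliable estimate.
    if geometric:
        return math.sqrt(z1 * z2)   # geometric mean
    return 0.5 * (z1 + z2)          # arithmetic mean
```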
- the method as described herein may further comprise determining a distance based on the at least one reflection image by using one or more of depth from focus, depth from defocus, triangulation, depth-from-photon-ratio, determining a distance based on the distance between at least two spatial features in a flood image, and combinations thereof.
- Determining a distance of the user from an image generation unit and/or from an illumination source based on the reflection image may comprise one or more of the following techniques: depth from focus, depth from defocus, triangulation, depth-from-photon-ratio, determining a distance based on measuring a distance between at least two spatial features in a flood image, and combinations thereof.
- Spatial feature may be represented with a vector.
- the vector may comprise at least one numerical value.
- Example for spatial features of a face may comprise at least one of the following: the nose, the eyes, the eyebrows, the mouth, the ears, the chin, the forehead, wrinkles, irregularities such as scars, cheeks including cheekbones or the like.
- Other examples for spatial features may include finger, nails or the like.
- determining a distance of the user from an image generation unit and/or from an illumination source based on the at least one reflection image may be based on measuring the distance between at least two spatial features in a flood image and comparing the distance between the at least two spatial features in the flood image with a reference.
- Distance between at least two spatial features associated with the object may be indicative of a distance between the user and an illumination source and/or an image generation unit.
- Distance between at least two spatial features associated with the user may be related to a distance between the object and an illumination source and/or an image generation unit.
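Under a simple pinhole-camera assumption, the distance can be recovered from the spacing of two spatial features by comparison with a reference measurement, since the apparent spacing scales inversely with distance. The sketch below is illustrative; the reference values and function name are assumptions:

```python
def distance_from_feature_spacing(pixel_dist: float,
                                  ref_pixel_dist: float,
                                  ref_distance_m: float) -> float:
    """Estimate the user's distance from the spacing (in pixels) between
    two spatial features, e.g. the eyes, in a flood image. Under a
    pinhole-camera model the apparent spacing scales inversely with
    distance, so one reference measurement calibrates the estimate.
    All reference values are hypothetical."""
    return ref_distance_m * ref_pixel_dist / pixel_dist
```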
- the at least one blurring function f a may be a function or composite function composed from at least one function from the group consisting of: a Gaussian, a sinc function, a pillbox function, a square function, a Lorentzian function, a radial function, a polynomial, a Hermite polynomial, a Zernike polynomial, a Legendre polynomial.
- Distance may be referred to as longitudinal coordinate z.
- the longitudinal coordinate z_DFD may be determined by using at least one convolution-based algorithm such as a depth-from-defocus algorithm. To obtain the distance from the image, the depth-from-defocus algorithm estimates the defocus of the object.
- a quotient of a measure for an intensity associated with a first location in an image and another measure for an intensity associated with a second location comprises one or more of: dividing at least the first measure and/or at least the second measure, dividing multiples of at least the first measure and/or at least the second measure, dividing linear combinations of at least the first measure and/or at least the second measure.
- Electromagnetic radiation used for illumination may be associated with at least one beam profile.
- a measure for an intensity may further comprise at least one information related to at least one beam profile of the light beam associated with the electromagnetic radiation.
- the beam profile may be one of a trapezoid beam profile; a triangle beam profile; a conical beam profile and a linear combination of Gaussian beam profiles.
- determining the distance may be associated with determining the first area of the beam profile and the second area of the beam profile.
- First area of the beam profile may comprise essentially edge information of the beam profile and the second area of the beam profile may comprise essentially center information of the beam profile.
- Edge information may comprise an information relating to a number of photons in the first area of the beam profile and the center information may comprise an information relating to a number of photons in the second area of the beam profile.
- Determining the distance based on the quotient may comprise dividing the edge information and the center information, dividing multiples of the edge information and the center information, dividing linear combinations of the edge information and the center information.
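A minimal sketch of forming such a quotient from a beam profile given as a 2D intensity array; the choice of the center disc radius is an assumption, not specified in the text:

```python
import numpy as np

def edge_center_quotient(profile: np.ndarray, center_radius: int = 2) -> float:
    """Quotient of edge information over center information of a beam
    profile. The 'center' (second area) is taken as a disc of
    center_radius pixels around the intensity maximum; the remainder
    counts as 'edge' (first area). The radius is a hypothetical choice."""
    cy, cx = np.unravel_index(np.argmax(profile), profile.shape)
    yy, xx = np.indices(profile.shape)
    center_mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= center_radius ** 2
    center = profile[center_mask].sum()   # photons in the center area
    edge = profile[~center_mask].sum()    # photons in the edge area
    return float(edge / center)
```

Dividing multiples or linear combinations of the two photon counts, as the text allows, would only rescale this quotient.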
- distance may be determined by using triangulation.
- Triangulation may be based on trigonometrical equations. Trigonometrical equations may be used for determining a distance.
- Triangulation may be based on the at least one reflection image and a baseline. Baseline may refer to distance between the illumination source and the image generation unit. Baseline may be received. In particular, baseline may be received together with the at least one reflection image.
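The triangulation relation can be sketched as follows, assuming a pinhole model in which the distance follows from the baseline, the focal length (in pixels) and the observed disparity of a projected feature; the parameter names are illustrative:

```python
def triangulate_distance(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Triangulation sketch: the distance follows from the baseline
    between illumination source and image generation unit, the focal
    length expressed in pixels, and the disparity (pixel shift) of a
    projected feature observed in the reflection image."""
    return baseline_m * focal_px / disparity_px
```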
- a device for authenticating a user of a device comprises: at least one illumination source configured for illuminating the user with coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm; at least one camera configured for generating at least one speckle image showing the user under illumination with the coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm; at least one processor configured for receiving the speckle image from the camera, determining at least one surface roughness measure based on the speckle image and providing the surface roughness measure; at least one authentication unit configured for receiving the surface roughness measure and configured for authenticating the user or denying the authentication of the user using the surface roughness measure.
- a computer-readable storage medium may refer to any suitable data storage device or computer readable memory on which is stored one or more sets of instructions (for example software) embodying any one or more of the methodologies or functions described herein.
- the instructions may also reside, completely or at least partially, within the main memory and/or within the processor during execution thereof by the computer, main memory, and processing device, which may constitute computer-readable storage media.
- the instructions may further be transmitted or received over a network via a network interface device.
- Computer-readable storage medium includes hard drives, for example on a server, USB storage devices, CDs, DVDs or Blu-ray discs.
- a computer program may comprise program code means in order to perform the method according to the present invention in one or more of the embodiments enclosed herein when the program is executed on a computer or computer network.
- the program code means may be stored on a computer-readable data carrier and/or on a computer-readable storage medium.
- a data carrier having a data structure stored thereon, which, after loading into a computer or computer network, such as into a working memory or main memory of the computer or computer network, may execute the method according to one or more of the embodiments disclosed herein.
- a computer program product with program code means stored on a machine-readable carrier, in order to perform the method according to one or more of the embodiments disclosed herein, when the program is executed on a computer or computer network.
- a computer program product refers to the program as a tradable product.
- the product may generally exist in an arbitrary format, such as in a paper format, or on a computer-readable data carrier and/or on a computer-readable storage medium.
- the computer program product may be distributed over a data network.
- Embodiment 1 A computer-implemented method for authenticating a user of a device, the method comprising: a. receiving a request for accessing one or more functions associated with the device; b. executing at least one authentication process comprising the following steps: b.1 triggering to illuminate the user by coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm, b.2 triggering to generate at least one speckle image of the user while the user is being illuminated by the coherent electromagnetic radiation, b.3 determining at least one surface roughness measure based on the speckle image, b.4 authenticating the user, or denying authentication, using the surface roughness measure.
- Embodiment 8 The method according to the preceding embodiment, wherein the camera comprises at least one lens and at least one polarizer.
- Embodiment 9 The method of any one of the preceding embodiments, wherein a distance between the user and a camera used for generating the speckle image is between 10 cm and 1.5 m and/or wherein the distance between the user and an illumination source used for illuminating the user is between 10 cm and 1.5 m.
- Embodiment 12 The method of any one of the preceding embodiments, wherein the speckle image is associated with a resolution of less than 5 megapixel.
- Embodiment 13 The method of any one of the preceding embodiments, wherein the receiving of the request for accessing one or more functions associated with the device is performed by using at least one communication interface.
- Embodiment 17 The method of any one of the preceding embodiments, wherein steps a) to b) are performed by a mobile electronic device, wherein the speckle image is generated with a camera of the mobile electronic device.
- Embodiment 18 A device for authenticating a user of a device, the device comprising: at least one illumination source configured for illuminating the user with coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm; at least one camera configured for generating at least one speckle image showing the user under illumination with the coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm; at least one processor configured for receiving the speckle image from the camera, determining at least one surface roughness measure based on the speckle image and providing the surface roughness measure; at least one authentication unit configured for receiving the surface roughness measure and configured for authenticating the user or denying the authentication of the user using the surface roughness measure.
- Embodiment 19 The device according to the preceding embodiment, wherein the device is configured for performing the method according to any one of embodiments referring to a method.
- Embodiment 20 Use of a surface roughness measure as obtained by a method according to any one of the embodiments referring to a method and/or as obtained by a device according to any one of the preceding embodiments relating to a device for authenticating a user.
- Embodiment 21 A computer program comprising instructions which, when the program is executed by the device according to any one of the preceding embodiments referring to a device, cause the device to perform the method according to any one of the preceding embodiments referring to a method.
- Embodiment 22 A computer-readable storage medium comprising instructions which, when the instructions are executed by the device according to any one of the preceding embodiments referring to a device, cause the device to perform the method according to any one of the preceding embodiments referring to a method.
- Embodiment 23 A non-transitory computer-readable storage medium, the computer-readable storage medium comprises instructions that when executed by a computer, cause the computer to: receive a request for accessing one or more functions associated with the device; execute at least one authentication process comprising the following steps: - triggering to illuminate the user by coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm,
- FIG. 1 illustrates an exemplary embodiment of a device for authenticating a user of a device
- FIG. 2 illustrates an example for determining a surface roughness measure
- FIG. 3 illustrates an example embodiment of a method for authenticating a user of a device
- FIG. 4 illustrates an embodiment of a surface associated with a user.
- FIG. 1 illustrates an exemplary embodiment of a device 102 for authenticating a user 114 of a device 102.
- the device 102 comprises an illumination source 104, a camera 106 and a processor 108.
- the surface roughness may be determined with respect to skin of a user 114.
- the skin may be associated with a surface roughness.
- the surface roughness can be evaluated based on a surface roughness measure.
- the skin may be exposed to coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm.
- the coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm may be emitted by the illumination source 104.
- the illumination source 104 may comprise one or more radiation sources such as a VCSEL array or a single laser diode.
- the radiation source may be associated with one or more light beams.
- a single laser diode may emit one light beam
- a VCSEL array may emit a plurality of light beams.
- the number of light beams may correspond to the number of VCSELs in the VCSEL array.
- the illumination source may emit a plurality of light beams.
- the plurality of light beams may result in projecting a pattern onto the object.
- the illumination source 104 may emit patterned coherent electromagnetic radiation. Patterned coherent electromagnetic radiation may be suitable for projecting a pattern onto the object.
- the illumination source 104 may comprise one or more optical elements. An optical element may be suitable for splitting and/or multiplying light beams.
- optical elements can be diffractive optical elements, refractive optical elements, meta surface elements, lenses or the like.
- an illumination source 104 comprising a single laser diode or a VCSEL array in combination with an optical element may result in illuminating the object with patterned coherent electromagnetic radiation.
- the illumination source 104 may be associated with a field of illumination as indicated by the two lines originating from the illumination source 104.
- Different skin surface roughness measures may be determined. Different body parts of the user 114 may be associated with different skin roughness 116. For example, a hand may be associated with a higher skin roughness whereas the face may be associated with a lower skin surface roughness. The surface roughness may be characteristic for a body part of the user 114 and/or for the identity of the user 114.
- a speckle image may be generated while the user 114 may be illuminated with coherent electromagnetic radiation, preferably patterned coherent electromagnetic radiation.
- Coherent electromagnetic radiation may interact with the skin of the user 114 once it may be projected onto the skin of the user 114.
- Coherent electromagnetic radiation forms speckle when interacting with a non-homogeneous and uneven surface such as skin.
- the different wavefronts of the coherent electromagnetic radiation may interact by means of interference.
- the interference of the different wavefronts of the coherent electromagnetic radiation may result in contrast variations of the coherent electromagnetic radiation on the skin of the user 114. These contrast variations may depend on the surface roughness associated with the surface the coherent electromagnetic radiation is illuminating. Hence, the roughness associated with the skin may influence the formation of speckle such as the size and orientation of the speckle.
- analysis of the speckle may result in a surface roughness measure.
- At least one speckle image is generated with an image generation unit such as the camera 106.
- the camera 106 may comprise a sensor 110.
- the camera 106 may comprise a lens 112.
- the camera 106 may comprise a polarizer.
- the coherent electromagnetic radiation is in the infrared range.
- the surface roughness measure may specify the surface roughness associated with the surface of skin.
- information obtained by coherent electromagnetic radiation penetrating for example the dermis or deeper may overlay the desired information relating to the surface of the skin.
- a polarizer may be suitable for selecting the coherent electromagnetic radiation reflected from the surface of the skin and may be suitable for deselecting parts of the coherent electromagnetic radiation having interacted with skin layers such as the dermis or deeper layers.
- the camera 106 may be associated with a field of view as indicated by the two lines originating from the camera 106.
- the camera 106 may have a field of view between 10°x10° and 75°x75°, preferably 55°x65°.
- the camera 106 may have a resolution below 2 megapixels, preferably between 0.3 megapixel and 1.5 megapixels. Examples for the speckle image can be found in FIG. 2.
- the field of illumination may correspond at least partially to the field of view. At least a fraction of the field of view associated with the camera 106 may be independent of illumination with coherent electromagnetic radiation.
- the speckle image may show at least in parts the object under illumination with coherent electromagnetic radiation.
- the speckle image may be provided to and/or received by a processor 108.
- the processor 108 may comprise one or more processors.
- the processor 108 may determine the surface roughness measure based on the speckle image.
- the processor 108 may determine the surface roughness measure as described within the context of FIG. 2.
- the device 102 further comprises at least one authentication unit 118.
- the authentication unit 118 may be or may comprise at least one processor (in this embodiment the processor 108) and/or may be designed as software or application.
- the authentication may comprise a plurality of steps.
- the authentication unit 118 may perform at least one face detection using a flood image.
- the face detection may comprise analyzing the flood image.
- the analyzing of the flood image may comprise using at least one image recognition technique, in particular a face recognition technique.
- An image recognition technique comprises at least one process of identifying the user 114 in an image.
- the image recognition may comprise using at least one technique selected from the group consisting of: color-based image recognition, e.g. using features such as template matching; segmentation and/or blob analysis, e.g. using size or shape; machine learning and/or deep learning, e.g. using at least one convolutional neural network.
- the authentication may comprise identifying the user 114.
- the identifying may comprise assigning an identity to a detected face and/or at least one identity check and/or verifying an identity of the user.
- the identifying may comprise performing a face verification of the imaged face to be the user’s face.
- the identifying the user 114 may comprise matching the flood image, e.g. showing a contour of parts of the user, in particular parts of the user’s face, with a template, e.g. a template image generated within an enrollment process.
- the identifying of the user may comprise determining if the imaged face is the face of the user 114, in particular if the imaged face corresponds to at least one image of the user’s face stored in at least one memory, e.g. of the device.
- Authentication may be successful if the flood image can be matched with an image template.
- Authentication may be unsuccessful if the flood image cannot be matched with an image template.
- the identifying of the user 114 may comprise determining a plurality of facial features.
- the analyzing may comprise comparing, in particular matching, the determined facial features with template features.
- the template features may be features extracted from at least one template.
- the template may be or may comprise at least one image generated in an enrollment process, e.g. when initializing the device. Template may be an image of an authorized user.
- the template features and/or the facial feature may comprise a vector. Matching of the features may comprise determining a distance between the vectors.
- the identifying of the user may comprise comparing the distance of the vectors to at least one predefined limit. The user 114 may be successfully identified in case the distance is below or equal to the predefined limit, at least within tolerances. The user 114 may be declined and/or rejected otherwise.
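The vector-matching step described above can be sketched as follows; the Euclidean metric and the limit value are illustrative assumptions:

```python
import math

def verify_identity(features, template, limit=0.8):
    """Compare a facial-feature vector against template features via
    Euclidean distance; the user is identified if the distance is below
    or equal to a predefined limit. The limit is a hypothetical value
    that would be set during enrollment."""
    dist = math.sqrt(sum((f - t) ** 2 for f, t in zip(features, template)))
    return dist <= limit
```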
- the image recognition may comprise using at least one model, in particular a trained model comprising at least one face recognition model.
- the analyzing of the flood image may be performed by using a face recognition system, such as FaceNet, e.g. as described in Florian Schroff, Dmitry Kalenichenko, James Philbin, “FaceNet: A Unified Embedding for Face Recognition and Clustering”, arXiv: 1503.03832.
- the trained model may comprise at least one convolutional neural network.
- the convolutional neural network may be designed as described in M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks”, CoRR, abs/1311.2901, 2013.
- the training may use face datasets such as the Labeled Faces in the Wild database, described in “Labeled faces in the wild: A database for studying face recognition in unconstrained environments”, Technical Report 07-49, University of Massachusetts, Amherst, October 2007, the YouTube® Faces Database as described in L. Wolf, T. Hassner, and I. Maoz, “Face recognition in unconstrained videos with matched background similarity”, in IEEE Conf. on CVPR, 2011, or the Google® Facial Expression Comparison dataset.
- the training of the convolutional neural network may be performed as described in Florian Schroff, Dmitry Kalenichenko, James Philbin, “FaceNet: A Unified Embedding for Face Recognition and Clustering”, arXiv: 1503.03832.
- the authentication unit 118 is configured for receiving the surface roughness measure and configured for authenticating the user 114 or denying the authentication of the user 114 using the surface roughness measure.
- the face detection and identification of the user 114 may be performed before authenticating the user or denying authentication using the surface roughness measure.
- the authentication process may be aborted in case the user is not successfully identified.
- authentication based on two-dimensional images alone can be tricked, for example with a printed photograph of the user.
- the method according to the present invention proposes to use the surface roughness measure as additional security feature for authentication.
- the authentication using the flood image may be validated using the surface roughness measure.
- the authenticating of the user 114, or denying authentication, using the surface roughness measure may be performed regardless of whether the face detection and identification of the user 114 was performed.
- the surface roughness measure may be used as biometric identifier for uniquely identifying the user 114.
- the authentication unit 118 may be configured for determining if the surface roughness measure corresponds to a surface roughness measure of a human being.
- the authentication unit 118 may be configured for determining if the surface roughness measure corresponds to a surface roughness measure of the specific user 114. Determining if the surface roughness measure corresponds to a surface roughness measure of a human being and/or of the specific user may comprise comparing the surface roughness measure to at least one pre-defined or pre-determined range of values of surface roughness measure, e.g. stored in at least one database e.g. of the device 102 or of a remote database such as of a cloud.
- In case the surface roughness measure is, at least within tolerances, within the pre-defined or pre-determined range of values of the surface roughness measure, the user 114 is authenticated; otherwise the authentication is unsuccessful.
- the surface roughness measure may be a human skin roughness. In case the determined human skin roughness is within the range of 10 µm to 150 µm, the user is authenticated. However, other ranges are possible.
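The range check on the surface roughness measure could be sketched as below, using the 10-150 µm human-skin range mentioned in the text; the boundary tolerance is a hypothetical choice:

```python
def roughness_is_human(roughness_um: float,
                       lo_um: float = 10.0, hi_um: float = 150.0,
                       rel_tol: float = 0.05) -> bool:
    """Check whether a determined skin roughness (in micrometres) lies,
    within tolerances, inside the human-skin range stated in the text
    (10 um to 150 um). The relative boundary tolerance is an assumption."""
    return lo_um * (1.0 - rel_tol) <= roughness_um <= hi_um * (1.0 + rel_tol)
```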
- the authentication unit 118 may be configured for allowing or declining the user 114 to access one or more functions associated with the device 102 depending on the authentication or denial.
- the allowing may comprise granting permission to access the one or more functions.
- the device 102 may comprise at least one authorization unit.
- the authorization unit may be configured for access control.
- the authorization unit may comprise at least one processor, e.g. the processor 108, or may be designed as software or application.
- the authorization unit and the authentication unit 118 may be embodied integral, e.g. by using the same processor.
- the authorization unit may be configured for allowing the user to access the one or more functions, e.g. on the device 102, e.g. unlocking the device 102, in case of successful authentication of the user or declining the user to access the one or more functions, e.g. on the device 102, in case of non-successful authentication.
- the device 102 may be configured for displaying a result of the authentication and/or the authorization e.g. by using at least one communication interface such as a user interface, e.g. a display. All system components may be part of the device 102. In other embodiments, the components may be separated between a plurality of devices.
- the processor 108 may be a server, whereas the illumination source 104 and the camera 106 may be part of one device 102 such as a mobile electronic device.
- the camera 106 may provide the speckle image to the processor 108.
- the processor 108 may provide the surface roughness measure to a device for displaying the surface roughness measure and/or a device for processing the surface roughness measure.
- the device 102 comprising the camera 106 and the illumination source 104 may further comprise a display for displaying the surface roughness measure and/or a surface roughness processor configured for processing the surface roughness measure further.
- the device may comprise the processor 108.
- FIG. 2 illustrates an example for determining a surface roughness measure.
- the surface roughness measure is determined based on the speckle images 202a, 202b. Examples for speckle images 202a, 202b are shown in FIG. 2.
- the speckle images 202a, 202b may be cropped to a predefined size.
- the speckle images 202a, 202b may be transformed by means of Fourier transformation.
- the result of Fourier transforming the speckle images 202a, 202b can be referred to as Fourier plots 204a, 204b.
- the Fourier plots may be obtained by means of Fast Fourier transform (FFT).
- the Fourier plots 204a, 204b may represent the speckle images 202a, 202b in the frequency domain.
- the Fourier plots 204a, 204b may represent the distribution of frequencies associated with the speckle images 202a, 202b. Consequently, the Fourier plots 204a, 204b may comprise the magnitude of frequencies associated with the speckle images 202a, 202b. The Fourier plots 204a, 204b may be further transformed into power spectral density (PSD) plots 206a, 206b. The Fourier plots 204a, 204b may be transformed into the PSD plots 206a, 206b by multiplying the magnitudes of the respective Fourier plots 204a, 204b with their conjugates.
- Radial averaging with respect to a predefined point, such as the center point of a square image, may result in the double logarithmic magnitude versus frequency plots as can be seen on the right side of FIG. 2.
- Radial averaging may refer to averaging values with the same distance to the predefined point. Determining the surface roughness based on the PSD is advantageous since the PSD may take vertical and lateral features into account. This provides an in-depth picture of a surface roughness associated with a surface structure. Consequently, a realistic description of the surface of the object can be achieved. Further, an estimation of the distribution of surface irregularities is enabled.
- the double logarithmic plotting may be used to visualize the fractal dimension.
- the fractal dimension may be an example for a surface roughness measure.
- the fractal dimension may be determined by fitting the double logarithmic magnitude versus frequency plots associated with the speckle images 202a, 202b with a linear function.
- the fractal dimension may be determined as the slope of the linear function fitted to the double logarithmic magnitude versus frequency plots associated with the speckle images 202a, 202b.
- a high surface roughness may correspond to a high fractal dimension.
- a low surface roughness may correspond to a low fractal dimension.
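The pipeline described above (FFT of the speckle image, PSD as magnitude times conjugate, radial averaging about the center point, linear fit in double logarithmic coordinates) can be sketched as follows, with the fitted slope serving as the fractal-dimension-like roughness measure. This is an illustrative reading of the text, not the patented implementation:

```python
import numpy as np

def fractal_slope(speckle: np.ndarray) -> float:
    """Roughness measure from a square grayscale speckle image:
    FFT -> PSD (magnitude times conjugate) -> radial average about the
    center -> slope of a linear fit in the double logarithmic
    magnitude versus frequency plot."""
    f = np.fft.fftshift(np.fft.fft2(speckle))
    psd = (f * np.conj(f)).real                 # power spectral density
    cy, cx = psd.shape[0] // 2, psd.shape[1] // 2
    yy, xx = np.indices(psd.shape)
    r = np.hypot(yy - cy, xx - cx).astype(int)  # distance to the center point
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), weights=psd.ravel()) / counts
    radial = radial[1:psd.shape[0] // 2]        # drop DC, keep valid radii
    freqs = np.arange(1, len(radial) + 1)
    slope, _ = np.polyfit(np.log(freqs), np.log(radial + 1e-12), 1)
    return float(slope)
```

A smooth surface concentrates power at low frequencies and yields a steeply falling (strongly negative) slope, whereas a rough surface spreads power across frequencies and flattens the fit.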
- the surface roughness measure may comprise one or more parameters of an autocorrelation function associated with the speckle images 202a, 202b.
- the autocorrelation function may be obtained by inverse Fourier transform of the PSD plots 206a, 206b.
- the autocorrelation function may, for example, be modeled as ACF(r) = σ² · exp(−(r/T)^(2α)), with the rms height σ, the correlation length T and the roughness exponent α.
- the parameters T, σ and α may be further examples for surface roughness measures.
- a high T may reflect a low surface roughness, a high α may reflect a high surface roughness and a high σ may reflect a high surface roughness.
- Further surface roughness measures may be speckle contrast, speckle modulation, speckle size or the like. These examples are readily available from the speckle images 202a, 202b.
- Speckle contrast γ may refer to a ratio of a standard deviation σ_I of intensity values, preferably associated with a predefined area of the speckle images 202a, 202b, to a mean ⟨I⟩ of the respective intensity values. Speckle contrast may be defined according to the following equation: γ = σ_I / ⟨I⟩.
- Speckle modulation M may be calculated based on the following formula: M = (1/N) · Σ_{i,j} (I_max(i,j) − I_min(i,j)) / (I_max(i,j) + I_min(i,j)), wherein N may refer to the total number of predefined areas of the speckle images 202a, 202b, the indices i and j may refer to pixel numbers and thus may define the predefined area of the speckle images 202a, 202b, I_max(i,j) may refer to the maximum intensity value associated with the predefined area and I_min(i,j) may refer to the minimum intensity value associated with the predefined area of the speckle images 202a, 202b.
- the speckle size may be calculated for example by multiplying the number of pixels with the size of the pixels. In some embodiments, the speckle size may be averaged over a part of the one or more speckle images 202a, 202b and/or over the full one or more speckle images 202a, 202b.
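Speckle contrast and speckle modulation, as defined above, can be computed directly from image regions; the helper functions below are an illustrative sketch:

```python
import numpy as np

def speckle_contrast(region: np.ndarray) -> float:
    """Speckle contrast: standard deviation of the intensity values of a
    predefined area divided by their mean."""
    return float(region.std() / region.mean())

def speckle_modulation(regions) -> float:
    """Speckle modulation averaged over N predefined areas:
    M = (1/N) * sum((I_max - I_min) / (I_max + I_min)) over the areas."""
    terms = [(r.max() - r.min()) / (r.max() + r.min()) for r in regions]
    return float(sum(terms) / len(terms))
```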
- Another embodiment for determining a surface roughness measure may comprise providing at least one of the speckle images 202a, 202b to a data-driven model such as a convolutional neural network (CNN).
- the data-driven model may receive at least one of the speckle images 202a, 202b at an input layer.
- the data-driven model may further comprise one or more hidden layers and an output layer.
- the speckle images 202a, 202b may be of a predefined size.
- the input layer may be specified according to the predefined size of the speckle images 202a, 202b.
- the layers of the data-driven model may be connected. Hence, the speckle images 202a, 202b may be passed through the layers.
- the pixel values associated with the speckle images 202a, 202b may pass through the layers of the data-driven model. While the pixel values may pass through the layers of the data-driven model, the pixel values may be allowed to interact with each other and/or may be combined, preferably non-linearly. Additionally or alternatively, the pixel values may be transformed. Preferably the pixel values may be transformed into an indication of the surface roughness measure by the data-driven model.
- the indication of the surface roughness measure may comprise the surface roughness measure and/or the surface roughness measure may be derivable from the indication of the surface roughness measure.
- the surface roughness measure may be received from the data-driven model and/or the data-driven model may provide the surface roughness measure.
- the data-driven model may be configured for providing the surface roughness measure, in particular by transforming the indication of the surface roughness measure into a surface roughness measure.
- Using a data-driven model may be advantageous since these models may learn correlations being non-obvious or may reflect correlations between different factors an expert would consider easily. So, the use of a data-driven model may reduce the time investment while achieving accuracies exceeding those of white box models.
- the data-driven model may provide the indication of the surface roughness measure and/or the indication of the surface roughness measure may be received from the data-driven model.
- the surface roughness measure may be derivable by means of a mathematical operation and/or by means of a look up table.
- the data-driven model may be a classifier classifying the speckle images 202a, 202b into different groups of surface roughness measures.
- the output may indicate the group label.
- the group label may indicate the surface roughness measure.
- the relation between the group label and the surface roughness measure may be specified, e.g., by a look-up table.
- Other embodiments for establishing the relation between the surface roughness measure and the indication of the surface roughness measure may be feasible.
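The classifier-plus-look-up-table embodiment above can be sketched as follows. The group labels and the roughness values (in micrometres) are hypothetical placeholders; the disclosure only states that a look-up table may relate group labels to surface roughness measures.

```python
# Hypothetical look-up table relating classifier group labels to a
# surface roughness measure (values in micrometres, invented for
# illustration only).
ROUGHNESS_LOOKUP = {
    "smooth": 0.5,    # e.g. polished surfaces such as printouts
    "skin": 15.0,     # plausible scale for human skin
    "coarse": 80.0,   # e.g. textured masks or fabrics
}

def roughness_from_label(group_label: str) -> float:
    """Derive the surface roughness measure from the classifier's label."""
    if group_label not in ROUGHNESS_LOOKUP:
        raise ValueError(f"unknown group label: {group_label!r}")
    return ROUGHNESS_LOOKUP[group_label]
```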
- the data-driven model may be parametrized and/or trained according to a training data set.
- the training data set may comprise a plurality of speckle images 202a, 202b and corresponding surface roughness measures and/or indications of the surface roughness measure.
- the surface roughness measure and/or the indication of the surface roughness measure may refer to a label associated with the speckle images 202a, 202b.
- Parametrizing may be a prerequisite for training the data-driven model.
- the data-driven model may be trained based on the parametrizing of the data-driven model.
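A minimal sketch of parametrizing and training a data-driven model on such a labelled training set, assuming each speckle image is summarized by its speckle contrast and that a simple linear model suffices. The synthetic data set, the contrast feature and the least-squares fit are illustrative assumptions, not the disclosed training procedure.

```python
def speckle_contrast(image):
    """Speckle contrast K = standard deviation / mean of the pixel values."""
    n = len(image)
    mean = sum(image) / n
    var = sum((p - mean) ** 2 for p in image) / n
    return (var ** 0.5) / mean

def train(samples):
    """Least-squares fit of roughness ~ a * contrast + b; fitting a and b
    is the 'parametrizing' of this minimal data-driven model."""
    xs = [speckle_contrast(img) for img, _ in samples]
    ys = [label for _, label in samples]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Synthetic training set: (flattened speckle image, surface roughness label).
training_set = [
    ([10, 10, 10, 10], 0.0),   # no contrast -> smooth surface
    ([5, 15, 5, 15], 10.0),    # moderate contrast
    ([1, 19, 1, 19], 18.0),    # high contrast -> rough surface
]
a, b = train(training_set)

def predict(image):
    """Apply the trained model to a new speckle image."""
    return a * speckle_contrast(image) + b
```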
- Another embodiment for determining a surface roughness measure may comprise providing the speckle images 202a, 202b to a physical model.
- the physical model may reflect physical phenomena in mathematical form, e.g., including first-principles models.
- a physical model may comprise a set of equations that describe the interaction between the object, in particular the surface of the object, and the coherent electromagnetic radiation, thereby resulting in a surface roughness measure.
- the physical model may comprise and/or combine at least one of the relations associated with the speckle contrast, the speckle modulation, the speckle size, the fractal dimension or a combination thereof.
- the physical model may be a white box model.
- the physical model may transform the speckle images 202a, 202b into a surface roughness measure.
- the physical model may combine the relations described above linearly, e.g. as a weighted sum.
- the weighting may reflect the relative contribution of these relations, which results in a higher accuracy. This in turn enables the reliable determination of the surface roughness, because a single one of the factors may not be sufficient for significant results.
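A hedged sketch of such a white-box physical model that combines speckle statistics linearly. The concrete relations used here (speckle contrast and a crude speckle-size proxy) and the weights are placeholder assumptions, since the disclosure only states that relations such as speckle contrast, modulation, size and fractal dimension may be combined, e.g. linearly.

```python
def speckle_contrast(pixels):
    """Speckle contrast: standard deviation over mean of the pixel values."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return (var ** 0.5) / mean

def mean_speckle_size(pixels, threshold):
    """Crude speckle-size proxy: mean run length of bright pixels on a line."""
    runs, current = [], 0
    for p in pixels:
        if p > threshold:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return sum(runs) / len(runs) if runs else 0.0

def physical_model(pixels, weights=(10.0, 0.5), threshold=8):
    """Surface roughness measure as a weighted linear combination of the
    individual speckle relations (here: contrast and size)."""
    w_contrast, w_size = weights
    return (w_contrast * speckle_contrast(pixels)
            + w_size * mean_speckle_size(pixels, threshold))
```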
- FIG. 3 illustrates an example embodiment of a computer-implemented method for authenticating a user 114 of a device 102, such as the device described with respect to FIG. 1.
- the method comprises the following steps: a. (302) receiving a request for accessing one or more functions associated with the device; b.
- executing at least one authentication process comprising the following steps: b.1 (304) triggering to illuminate the user by coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm, b.2 (306) triggering to generate at least one speckle image of the user while the user is being illuminated by the coherent electromagnetic radiation, b.3 (308) determining at least one surface roughness measure based on the speckle image, b.4 (310) authenticating the user or denial using the surface roughness measure.
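The blocks 302 to 310 above can be sketched as a control-flow skeleton in which the hardware triggers and the roughness model are stubbed out. The helper names, the 940 nm choice and the acceptance range are illustrative assumptions; the disclosure only fixes the 850 nm to 1400 nm wavelength range and the sequence of steps.

```python
def authenticate_user(request, illuminate, capture_speckle_image,
                      roughness_model, expected_range=(5.0, 40.0)):
    """Blocks 302-310: on a request, illuminate, image, measure surface
    roughness, then authenticate the user or deny access."""
    if not request:                        # block 302: request received?
        return False
    illuminate(wavelength_nm=940)          # block 304: within 850-1400 nm
    image = capture_speckle_image()        # block 306: speckle image
    roughness = roughness_model(image)     # block 308: roughness measure
    lo, hi = expected_range                # block 310: skin-plausible range?
    return lo <= roughness <= hi
```

Real hardware triggers and a trained roughness model would be passed in place of the stubs.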
- Prior to determining the surface roughness measure in block 308, the speckle image may be cut to a predefined size. Thereby, background may be removed and the proportion of speckles associated with the user 114 may be increased. Speckles associated with the user 114 may refer to speckles caused by coherent electromagnetic radiation illuminating at least a part of the user 114.
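The cutting of the speckle image to a predefined size may, for example, be implemented as a centre crop. The centre-crop strategy itself is an assumption, since the disclosure does not fix how the predefined region is chosen.

```python
def center_crop(image, out_h, out_w):
    """Return the central out_h x out_w region of a 2D image (list of rows),
    removing the surrounding background."""
    h, w = len(image), len(image[0])
    if out_h > h or out_w > w:
        raise ValueError("crop size larger than image")
    top = (h - out_h) // 2
    left = (w - out_w) // 2
    return [row[left:left + out_w] for row in image[top:top + out_h]]
```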
- the surface roughness measure may be determined as described within the context of FIG. 2.
- the surface roughness measure may be determined by a processor as described within the context of FIG. 1.
- the processor, the camera and the illumination source may be part of one device and/or system.
- the surface roughness measure may be provided.
- the surface roughness measure may be provided to an application of a mobile electronic device.
- the application may be configured for initiating the determining of the surface roughness measure and/or for initiating the generating of the speckle image.
- the application may display the surface roughness measure, in particular the value of the surface roughness measure, for example to the user 114.
- the application may further process the surface roughness measure to derive properties of the human skin.
- FIG. 4 illustrates an embodiment of a surface 402 associated with a user 114.
- the surface 402 may comprise a plurality of surface features.
- Surface features may be lateral surface features 410 and/or vertical surface features 408.
- the lateral surface feature 410 may be quantified according to the dashed line indicating a length of a sink on the surface 402.
- the vertical surface features 408 may be quantified according to the dashed line indicating a height of an uplift on the surface 402.
- This surface 402 may be illuminated by coherent electromagnetic radiation emitted from the illumination source 406 as described in the context of FIG. 1 and FIG. 3.
- a speckle image may be generated while the surface may be illuminated by coherent electromagnetic radiation with the camera 404 as described in the context of FIG. 1 and FIG. 3.
- List of reference signs: authentication unit; speckle image a; speckle image b; speckle image; Fourier plot a; Fourier plot b; receiving a request; triggering to illuminate the user; triggering to generate at least one speckle image; determining at least one surface roughness measure; authenticating the user; surface; camera
Abstract
A computer-implemented method for authenticating a user (114) of a device (102) is proposed. The method comprising: a. (302) receiving a request for accessing one or more functions associated with the device (102); b. executing at least one authentication process comprising the following steps: b.1 (304) triggering to illuminate the user (114) by coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm, b.2 (306) triggering to generate at least one speckle image of the user (114) while the user is being illuminated by the coherent electromagnetic radiation, b.3 (308) determining at least one surface roughness measure based on the speckle image, b.4 (310) authenticating the user (114) or denial using the surface roughness measure.
Description
Skin Roughness as security feature for face unlock
Technical Field
The invention relates to a computer-implemented method for authenticating a user of a device, a computer program, a computer-readable storage medium, a non-transitory computer-readable storage medium, and a use of a surface roughness measure.
Background art
A large number of techniques for authentication, e.g. based on face identification, in particular as a security feature, are known from the prior art. For example, face identification is used in mobile devices to unlock or prevent the unlock of the mobile device. For this purpose, in these devices a complementary metal-oxide-semiconductor (CMOS) sensor may be combined with a structured light three-dimensional sensor. Two-dimensional-picture based face recognition algorithms use biometric features such as eye to eye or eye to nose distances and shapes, or iris scans, with the latter requiring a high resolution or close-up picture of the eye. However, known techniques based on two-dimensional algorithms can be tricked, e.g. by using a printout or a picture of a face. The use of three-dimensional information for face recognition makes it more difficult to trick, but even such a technique can be tricked, e.g. by using a three-dimensional-shaped head model. Generally, most security systems may be tricked with sufficient effort involved. However, the amount of effort increases the more features are being used in an unlock system.
US 2022/094456 A1 describes an apparatus comprising means for: obtaining a propagation profile for wireless signals transmitted between at least two devices via a creeping wave along a user's skin; causing transmission of electromagnetic radiation towards a plurality of locations on a target user's body to obtain dielectric properties of their skin at the plurality of locations based on an amount of the electromagnetic radiation reflected from each location; determining whether the propagation profile correlates with a realizable creeping wave along the target user's skin, the realizability being based on the obtained dielectric properties of their skin; and forming an association between the target user and the at least two devices based on a strength of correlation between the propagation profile and a realizable creeping wave.
Problem to be solved
It is therefore an object of the present invention to provide devices and methods facing the above-mentioned technical challenges of known devices and methods. Specifically, it is an object of the present invention to provide devices and methods, which allow for improving authentication of a user.
Summary
This problem is addressed by a computer-implemented method for authenticating a user of a device, a computer program, a computer-readable storage medium, a non-transitory computer-readable storage medium, and a use of a surface roughness measure with the features of the independent claims. Advantageous embodiments which might be realized in an isolated fashion or in any arbitrary combinations are listed in the dependent claims as well as throughout the specification.
In a first aspect of the present invention, a computer-implemented method for authenticating a user of a device is disclosed.
The method comprising: a. receiving a request for accessing one or more functions associated with the device; b. executing at least one authentication process comprising the following steps: b.1 triggering to illuminate the user by coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm, b.2 triggering to generate at least one speckle image of the user while the user is being illuminated by the coherent electromagnetic radiation, b.3 determining at least one surface roughness measure based on the speckle image, b.4 authenticating the user or denial using the surface roughness measure.
The method steps may be performed in the given order or may be performed in a different order. Further, one or more additional method steps may be present which are not listed. Further, one, more than one or even all of the method steps may be performed repeatedly.
The term "computer implemented" as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a method involving at least one computer and/or at least one computer network. The computer and/or computer network may comprise at least one processor which is configured for performing at least one of the method steps of the method according to the present invention. Specifically, each of the method steps is performed by the computer and/or computer network. The method may be performed completely automatically, specifically without user interaction.
The term “user” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a person intended to and/or using the device.
The device may be selected from the group consisting of: a mobile device, particularly a cell phone, and/or a smart phone, and/or a tablet computer, and/or a laptop, and/or a tablet, and/or a virtual reality device, and/or a wearable, such as a smart watch, or another type of portable computer; a television device; a game console; a personal computer; an access system such as that of an automotive vehicle.
The term “authenticating” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to verifying an identity of a user. Specifically, the authentication may comprise distinguishing the user from other humans or objects, in particular distinguishing an authorized access from a non-authorized access. The authentication may comprise verifying the identity of a respective user and/or assigning an identity to a user. The authentication may comprise generating and/or providing identity information, e.g. to other devices or units such as to at least one authorization unit for authorization to provide access to the device. The identity information may be verified by the authentication. For example, the identity information may be and/or may comprise at least one identity token. In case of successful authentication, an image of a face recorded by at least one image generation unit may be verified to be an image of the user’s face and/or the identity of the user is verified. The authenticating may be performed using at least one authentication process. The authentication process may comprise a plurality of steps such as at least one face detection, e.g. on at least one flood image as will be described in more detail below, and at least one identification step in which an identity is assigned to the detected face and/or at least one identity check and/or verifying of an identity of the user is performed.
The method may relate to biometric authentication. The term "biometric authentication" as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to authentication using at least one biometric identifier, i.e. distinctive, measurable characteristics used to label and describe individuals. The biometric identifier may be a physiological characteristic, in particular the surface roughness measure.
The term “access” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to entering and/or using the one or more functions.
The term “function associated with the device” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an arbitrary function such as access to at least one element and/or at least one resource of the device or associated with the device. The functions that require authentication of the user may be pre-defined. The one or more functions associated with the device may comprise unlocking the device, and/or access to an application, preferably associated with the device, and/or access to a part of an application, preferably associated with the device. For example, the function may comprise access to a content of the device, e.g. as stored in a database of the device, and/or retrievable by the device. In an embodiment, allowing the user to access a resource may include allowing the user to perform at least one operation with a device and/or system. The resource may be a device, a system, a function of a device, a function of a system and/or an entity. Additionally and/or alternatively, allowing the user to access a resource may include allowing the user to access an entity. The entity may be a physical entity and/or a virtual entity. The virtual entity may be, for example, a database. The physical entity may be an area with restricted access. The area with restricted access may be one of the following: security areas, rooms, apartments, vehicles, parts of the before-mentioned examples, or the like. The device and/or system may be locked. The device and/or the system may only be unlocked by an authorized user.
The term “request for accessing” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to at least one act and/or instance of asking for access. The term “receiving a request” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a process of obtaining a request, e.g. from a data source. The receiving may fully or partially take place automatically. The receiving of the request for accessing one or more functions associated with the device may be performed by using at least one communication interface. The receiving may comprise receiving at least one user input, e.g. via at least one user interface such as a display of the device, and/or a request from a remote device and/or cloud, e.g. via the communication interface of the device such as via the internet. For example, the request may be generated by or triggered by at least one user input, such as by inputting a security number or other unlocking action by the user, and/or may be sent from a remote device and/or cloud such as via a connected account.
The term “trigger” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to at least one process of initiating at least one action and/or causing, in particular causing one or more elements of the device to execute at least one function.
The term “light” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to electromagnetic radiation in one or more of the infrared, the visible and the ultraviolet spectral range. Herein, the term “ultraviolet spectral range”, generally, refers to electromagnetic radiation having a wavelength of 1 nm to 380 nm, preferably of 100 nm to 380 nm. Further, in partial accordance with standard ISO 21348 in a valid version at the date of this document, the term “visible spectral range”, generally, refers to a spectral range of 380 nm to 760 nm. The term “infrared spectral range” (IR) generally refers to electromagnetic radiation of 760 nm to 1000 µm, wherein the range of 760 nm to 1.5 µm is usually denominated as “near infrared spectral range” (NIR), while the range from 1.5 µm to 15 µm is denoted as “mid infrared spectral range” (MidIR) and the range from 15 µm to 1000 µm as “far infrared spectral range” (FIR).
The term “ray” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a line that is perpendicular to wavefronts of light which points in a direction of energy flow. The term “light beam” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a collection of rays. In the following, the terms “ray” and “beam” will be used as synonyms. The term “light beam” specifically may further refer, without limitation, to an amount of light, specifically an amount of light traveling essentially in the same direction, including the possibility of the light beam having a spreading angle or widening angle.
The term “coherent” electromagnetic radiation as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a light pattern and/or a plurality of light beams that have at least essentially a fixed phase relationship between electric field values at different locations and/or at different times. In particular, the coherent electromagnetic radiation may refer to electromagnetic radiation that is able to exhibit interference effects. The term “coherent” may also comprise partial coherence, i.e. a non-perfect correlation between phase values. The electromagnetic radiation may be completely coherent, wherein deviations of about ± 10% of the phase relationship are possible.
The coherent electromagnetic radiation is associated with a wavelength between 850 nm and 1400 nm. Preferably, the coherent electromagnetic radiation may be in the infrared range. Preferably, the coherent electromagnetic radiation may be associated with a wavelength between 880 nm and 1300 nm. In particular, the coherent electromagnetic radiation is associated with a wavelength between 900 nm and 1000 nm and/or between 1100 nm and 1200 nm. This may be advantageous since sunlight exhibits absorption bands in these regions. Consequently, the coherent electromagnetic radiation with the abovementioned wavelengths used for generating the speckle image can be differentiated more easily from incoming sunlight. Hence, the use of coherent electromagnetic radiation within the above-specified region may enable measurements of surface roughness even in the presence of sunlight, such as outdoors. As a result, measurements of surface roughness can be performed easily and location-independently. Overall, an improved signal-to-noise ratio can be achieved and the accuracy of evaluations of surface roughness can be increased.
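The wavelength ranges above can be captured in a small helper: the claimed range of 850 nm to 1400 nm, and the preferred sub-bands of 900 nm to 1000 nm and 1100 nm to 1200 nm where the solar background is low. The function name and return strings are illustrative; only the band limits come from the text.

```python
def wavelength_suitability(nm):
    """Classify an illumination wavelength (in nm) against the ranges
    given in the description."""
    if not 850 <= nm <= 1400:
        return "outside claimed range"
    if 900 <= nm <= 1000 or 1100 <= nm <= 1200:
        return "preferred (low solar background)"
    return "allowed"
```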
The term “illuminate”, as used herein, is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or
customized meaning. The term specifically may refer, without limitation, to the process of exposing at least one element to light. The illuminating may comprise using at least one illumination source, in particular of the device. The term “illumination source” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an arbitrary device configured for generating or providing light in the sense of the above-mentioned definition. The illumination source may be configured for illuminating the user by coherent electromagnetic radiation and/or may be suitable for emitting coherent electromagnetic radiation. The illumination source may be configured for emitting light at a single wavelength, e.g. in the infrared region. In other embodiments, the illumination source may be adapted to emit light with a plurality of wavelengths, e.g. for allowing additional measurements in other wavelengths channels.
The illumination source may comprise at least one radiation source. The term “radiation source” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to at least one arbitrary device configured for providing at least one light beam. For example, the radiation source may be or may comprise at least one light emitter. The illumination source may comprise a plurality of radiation sources. The illumination source may comprise, for example, at least one laser source and/or at least one semiconductor radiation source. A semiconductor radiation source may be for example a light-emitting diode such as an organic, a laser diode and/or inorganic light-emitting diode. Additionally or alternatively, the radiation source may be a VCSEL array and/or a LED. Additionally or alternatively, the illumination source may comprise a VCSEL array and/or a LED.
The illumination source may comprise one or more optical elements. Optical element may be for example a lens, a metasurface element, a DOE or a combination thereof. Hence, an illumination source may comprise one or more radiation sources and one or more optical elements.
The coherent electromagnetic radiation is patterned coherent electromagnetic radiation and/or the coherent electromagnetic radiation comprises one or more light beams. The term “patterned” coherent electromagnetic radiation as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a plurality of light beams of coherent electromagnetic radiation, e.g. at least two light beams. For example, the coherent electromagnetic radiation may comprise at least two, more preferably at least five light beams. The coherent electromagnetic radiation may be projected onto the user. Projection of a light beam of the coherent electromagnetic radiation onto a surface, in particular of the user, may result in a light spot. A light beam may illuminate at least a part of the user and/or may be associated with a contiguous area of coherent electromagnetic radiation on at least a part of the user. A light spot may refer to the contiguous area of coherent electromagnetic radiation on at least a part of the user. A light spot may refer to an arbitrarily shaped spot of coherent electromagnetic radiation. A light spot may be a result of the projection of a light beam associated with the coherent electromagnetic radiation. The light spot may be at least partially spatially extended. The patterned coherent electromagnetic radiation may illuminate the user by a light pattern comprising a plurality of light spots. The light spots may overlap at least partially. For example, the number of light spots may be equal to the number of light beams associated with the patterned coherent electromagnetic radiation. The intensities associated with the light spots may be substantially similar. Substantially similar may mean that the intensity values associated with the light spots differ by less than 50%, preferably less than 30%, more preferably less than 20%. Using patterned light may be advantageous since it can enable the sparing of light-sensitive regions such as the eyes.
The one or more light spots may be shown in the speckle image. A projection of patterned coherent electromagnetic radiation onto a regular surface may result in a light spot projected onto the regular surface independent of speckle. A projection of patterned coherent electromagnetic radiation onto an irregular surface may result in a light spot comprising at least one speckle, preferably a plurality of speckles. The user may be associated with an at least partially irregular surface. Consequently, the speckle image may comprise a plurality of speckles. For example, if the patterned coherent electromagnetic radiation is projected at least partially onto the skin of the user, a plurality of speckles is formed due to the interference of the coherent electromagnetic radiation. Consequently, a light spot may comprise zero, one or more speckles depending on the surface the patterned coherent electromagnetic radiation is projected on. Skin may have an irregular surface. Hence, the projection of patterned coherent electromagnetic radiation may result in the formation of speckles within the one or more light spots. Projecting coherent electromagnetic radiation onto an irregular surface results in the formation of speckles. Consequently, the light spot may comprise one or more speckles. A light spot may have a diameter between 0.5 mm and 5 cm, preferably 0.6 mm and 4 cm, more preferably 0.7 mm and 3 cm, most preferably 0.4 and 2 cm.
For example, patterned coherent electromagnetic radiation may be generated by an illumination source comprising a plurality of light emitters, such as a VCSEL array comprising a plurality of VCSELs. An emitter of the plurality of light emitters may emit one light beam. Hence, an emitter of the plurality of light emitters may be associated with the one light spot, with the formation of one light spot and/or with the projection of one light spot.
Additionally or alternatively, patterned coherent electromagnetic radiation may be generated by one or more light emitters and an optical element such as a DOE or a metasurface element. A metasurface element may be a meta lens. A meta lens may be at least partially transparent with respect to the coherent electromagnetic radiation and/or may comprise a material associated with a structure on the nanoscale. The optical element may replicate the number of light beams associated with the one or more light emitters and/or may be suitable for replicating the number of light beams associated with the one or more light emitters. For example, the light emitter may be a laser.
The term “speckle image” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an image showing a plurality of speckles. The speckle image may show a plurality of speckles. The speckle image may comprise an image showing the user, in particular at least one part of the face of the user, while the user is being illuminated with the coherent electromagnetic radiation, particularly on a respective area of interest comprised by the image. The speckle image may be generated while the user may be illuminated by coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm. The speckle image may show a speckle pattern. The speckle pattern may specify a distribution of the speckles. The speckle image may indicate the spatial extent of the speckles. The speckle image may be suitable for determining a surface roughness measure. The speckle image may be generated with at least one camera. For generating the speckle image, the user may be illuminated by the illumination source.
The term “speckle” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an optical phenomenon caused by interfering coherent electromagnetic radiation due to non-regular or irregular surfaces. Speckles may appear as contrast variations in an image such as a speckle image.
The term “speckle pattern” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a distribution of the plurality of speckles. The distribution of the plurality of speckles may refer to a spatial distribution of at least one of the plurality of speckles and/or a spatial distribution of at least two of the plurality of speckles in relation to each other. The spatial distribution of the at least one of the plurality of speckles may refer to and/or specify a spatial extent of the at least one of the plurality of speckles. The spatial distribution of the at least two of the plurality of speckles may refer to and/or specify a spatial extent of the first speckle of the at least two speckles in relation to the second speckle of the at least two speckles and/or a distance between the first speckle of the at least two speckles and the second speckle of the at least two speckles.
As outlined above, the speckles may be caused by the irregularities of the surface; hence, the speckles reflect the roughness of the surface. Consequently, determining the surface roughness measure based on the speckles in the speckle image utilizes the relation between the speckle distribution and the surface roughness. Thereby, a low-cost, efficient and readily available solution for surface roughness evaluation can be enabled.
The term “generating” at least one image as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to capturing and/or determining and/or recording at least one image. The generating of an image may be performed by using at least one camera. The generating of the image may comprise capturing a single image and/or a plurality of images such as a sequence of images such as a video or a movie.
The generating of the speckle image may be initiated by a user action or may automatically be initiated, e.g. once the presence of a user within a field of view and/or within a predetermined sector of the field of view of the camera is automatically detected. The term “field of view” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an angular extent of the observable world and/or at least one scene that may be captured or viewed by an optical system, such as the image generation unit. The field of view may, typically, be expressed in degrees and/or radians, and, exemplarily, may represent the total angle spanned by the image and/or viewable area.
The term “camera” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a device having at least one image sensor configured for generating or recording spatially resolved one-dimensional, two-dimensional or even three-dimensional optical data or information. The camera may be a digital camera. As an example, the camera may comprise at least one image sensor, such as at least one CCD sensor and/or at least one CMOS sensor configured for recording images. The image may be generated via a hardware and/or a software interface, which may be considered as the camera. The camera may comprise at least one image sensor, in particular at least one pixelated image sensor. For example, the speckle image is generated by using at least one camera comprising at least one image sensor such as at least one CCD sensor and/or at least one CMOS sensor. The camera may comprise at least one CMOS sensor and/or at least one CCD chip. For example, the camera may comprise at least one CMOS sensor, which may be sensitive in the infrared spectral range. The camera may have a field of view between 10°x10° and 75°x75°, preferably 55°x65°. The camera may have a resolution below 2 megapixel, preferably between 0.3 megapixel and 1.5 megapixel. Megapixel may refer to a unit for measuring the number of pixels associated with a camera and/or an image. The camera may comprise further elements, such as one or more optical elements, e.g. one or more lenses. As an example, the camera may be a fix-focus camera, having at least one lens which is fixedly adjusted with respect to the camera. Alternatively, however, the camera may also comprise one or more variable lenses which may be adjusted, automatically or manually. Other cameras, however, are feasible.
For example, the camera may comprise the at least one image sensor and at least one further optical element. For example, the further optical element may be at least one lens. A lens may
refer to an optical element suitable for influencing the expansion of the light beam associated with the coherent electromagnetic radiation. For example, the further optical element may be at least one polarizer. For example, the camera may comprise at least one image sensor, at least one lens and at least one polarizer. The polarizer may refer to an optical element suitable for selecting the electromagnetic radiation according to its polarization. In particular, the polarizer may be an optical element suitable for selecting the coherent electromagnetic radiation according to its polarization. Consequently, a part of the electromagnetic radiation, in particular the coherent electromagnetic radiation, may pass the polarizer while the rest of the electromagnetic radiation, in particular the coherent electromagnetic radiation, may be deflected at least partially and/or may be absorbed at least partially. As the coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm penetrates the skin deeply, a part of the information received from the light reflected from the skin comprises information independent from the surface roughness, which disturbs the measurement of the surface roughness. To increase the signal-to-noise ratio, a polarizer can be used. The coherent electromagnetic radiation reflected from the surface of the object is usually polarized differently than the light reflected from deeper layers of the human skin. Consequently, the polarizer enables a separation of the desired signal from the undesired signal.
For example, a distance between the user and the camera used for generating the speckle image is between 10 cm and 1.5 m and/or the distance between the user and an illumination source used for illuminating the user is between 10 cm and 1.5 m. Preferably, the distance between the user and the camera may be between 20 cm and 1.2 m. Preferably, the distance between the user and the illumination source may be between 20 cm and 1.2 m. Adjusting the distance between object and camera ensures that a speckle image of sufficient quality is generated. Consequently, the above-specified distances enable a correct and reliable determination of the surface roughness. This is especially important in non-static contexts, where a user may operate the device on his or her own.
For example, the speckle image may show the user while being illuminated by coherent electromagnetic radiation, and the surface roughness of the user’s skin may be determined. Preferably, the user may have generated the speckle image and/or initiated the generation of the speckle image. Preferably, the generation of the speckle image may be initiated by the user operating an application of a mobile electronic device. By doing so, the user can decide on his or her own when to determine the surface roughness of her or his skin. Consequently, a non-expert user is enabled to determine the surface roughness, and measurements can be carried out in more natural and less artificial contexts. Thereby, the surface roughness can be evaluated more realistically, which in turn yields a more realistic measure of the surface roughness. For example, the skin may have a different surface roughness during the course of the day depending on the activity of the human. Doing sports may influence the surface roughness, as may applying cream to the skin. This influence can be verified with the herein described methods and systems.
For example, the speckle image may be associated with a resolution of less than 5 megapixel.
Preferably, the speckle image may be associated with a resolution of less than 3 megapixel,
more preferably less than 2.5 megapixel, most preferably less than 2 megapixel. Such speckle images can be generated with readily available, small and cheap smartphone cameras. Furthermore, the storage and processing capacities needed for evaluating the surface roughness measure are small. Thus, the low resolution of the speckle image used for evaluating the surface roughness enables the usage of mobile electronic devices for evaluating the surface roughness, in particular devices like smartphones or wearables, since these devices have strictly limited size, memory and processing capacity.
The method may further comprise reducing the speckle image to a predefined size prior to determining the surface roughness measure. Reducing the speckle image to a predefined size may be based on applying one or more image augmentation techniques. Reducing the speckle image to a predefined size may comprise selecting an area of the speckle image of the predefined size and cutting the speckle image to the area of the speckle image of the predefined size. The area of the speckle image of the predefined size may be associated with the living organism such as the human, in particular with the skin of the living organism such as the skin of the human. The part of the image other than the area of the speckle image of the predefined size may be associated with background and/or may be independent of the living organism such as a human. By doing so, a reduced amount of data needs to be processed, which decreases the time needed for determining the surface roughness and reduces the required storage and processing capacity. Furthermore, the part of the image useful for the analysis is selected. Hence, reducing the size may result in disregarding parts of the speckle image independent of the object or living organism such as a human. Consequently, the surface roughness measure can be determined easily, and parts not relating to the user, which would disturb the analysis, are ignored.
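The cutting step described above can be sketched as follows; the function name, the fixed area coordinates and the use of NumPy are illustrative assumptions, not part of the claimed method:

```python
import numpy as np

def reduce_to_predefined_size(speckle_image, top_left, size):
    """Cut the speckle image to a predefined-size area of interest.

    top_left: (row, col) of the selected area; size: (height, width).
    Pixels outside the area (e.g. background) are discarded.
    """
    r, c = top_left
    h, w = size
    if r + h > speckle_image.shape[0] or c + w > speckle_image.shape[1]:
        raise ValueError("selected area exceeds image bounds")
    return speckle_image[r:r + h, c:c + w].copy()

# Example: keep a 64 x 64 skin region from a 480 x 640 frame.
frame = np.random.default_rng(0).random((480, 640))
patch = reduce_to_predefined_size(frame, top_left=(100, 200), size=(64, 64))
```

Only the selected area is processed further, so memory and processing needs shrink accordingly.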
For example, image augmentation techniques may comprise at least one of scaling, cutting, rotating, blurring, warping, shearing, resizing, folding, changing the contrast, changing the brightness, adding noise, multiplying at least a part of the pixel values, dropout, adjusting colors, applying a convolution, embossing, sharpening, flipping, averaging pixel values or the like.
For example, the method may further comprise reducing the speckle image to a predefined size based on detecting the user in the speckle image. In particular, the speckle image may be reduced to a predefined size based on detecting the user in the speckle image prior to determining the surface roughness measure. In particular, reducing the speckle image to the predefined size based on detecting the user in the speckle image may comprise detecting the contour of the user, e.g. detecting the contour of a user’s face, and reducing the speckle image to an area associated with the user, in particular to an area associated with the user’s face. Preferably, the area associated with the user may be within the contour of the user, in particular within the contour of the user’s face.
The method may further comprise receiving at least one flood image. The term “flood image” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an image generated by the camera while the illumination
source is emitting flood light on the user. The term “flood light” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to substantially continuous spatial illumination, in particular diffuse and/or uniform illumination. The flood light may have a wavelength in the infrared range. The flood image may comprise an image showing a user, in particular the face of the user, while the user is being illuminated with the flood light. The flood image may be generated by imaging and/or recording light reflected by the user while the user is illuminated by the flood light. The flood image showing the user may comprise at least a portion of the flood light on at least a portion of the user.
The flood image may show the contour of the user. The contour of the user may be detected based on the flood image. Preferably, the contour of the user may be detected by providing the flood image to an object detection data-driven model, in particular a user detection model, wherein the object detection data-driven model may be parametrized and/or trained, based on a training data set, to receive the flood image and provide an indication of the contour of the user. The training data set may comprise flood images and indications of the contours of objects and/or humans. The indication of the contour may include a plurality of points indicating the location of a specific landmark associated with the user. For example, where the speckle image may be associated with a user’s face, the user’s face may be detected based on the contour, wherein the contour may indicate the landmarks of the face such as the nose point or the outer corners of the lips or eyebrows.
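As an illustration of reducing the image to an area within the detected contour, the following sketch computes an axis-aligned bounding box around landmark points; the landmark coordinates and the helper name are invented for illustration, and a real face detector would supply the points:

```python
import numpy as np

def crop_to_contour(image, landmarks, margin=0):
    """Reduce the image to the axis-aligned bounding box of the contour
    landmarks (rows of `landmarks` are (row, col) points, e.g. the nose
    point or the corners of lips and eyebrows), plus an optional margin.
    """
    r0 = max(int(landmarks[:, 0].min()) - margin, 0)
    c0 = max(int(landmarks[:, 1].min()) - margin, 0)
    r1 = min(int(landmarks[:, 0].max()) + margin + 1, image.shape[0])
    c1 = min(int(landmarks[:, 1].max()) + margin + 1, image.shape[1])
    return image[r0:r1, c0:c1]

img = np.zeros((480, 640))
# Hypothetical landmark points as a face detector might return them.
pts = np.array([[120, 250], [200, 240], [210, 330], [130, 340]])
face = crop_to_contour(img, pts, margin=10)
```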
The term “surface roughness” as used herein, is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a property of a surface associated with the user. In particular, the surface roughness may characterize lateral and/or vertical extent of surface features. The surface roughness may be evaluated based on the surface roughness measure. The surface roughness measure may quantify the surface roughness.
The term “surface feature” as used herein, is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an arbitrarily shaped structure associated with the surface, in particular of the user. In particular, the surface feature may refer to a substructure of the surface associated with the user. A surface may comprise a plurality of surface features. For example, an uplift or a sink may be surface features. Preferably, a surface feature may refer to a part of the surface associated with an angle unequal to 90° against the surface normal.
The term “surface roughness measure” as used herein, is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a measure suitable for quantifying the surface roughness. Surface roughness measure may be related to
the speckle pattern. For example, the surface roughness measure may comprise at least one of a fractal dimension, speckle size, speckle contrast, speckle modulation, roughness exponent, standard deviation of the height associated with surface features, lateral correlation length, average mean height, root mean square height or a combination thereof. Preferably, the surface roughness measure may be suitable for describing the vertical and lateral surface features. The surface roughness measure may comprise a value associated with the surface roughness measure. The surface roughness measure may refer to the name of a quantity for measuring the surface roughness and/or to the values associated with the quantity for measuring the surface roughness. The determining of the surface roughness measure based on the speckle image may refer to determining the surface roughness measure based on a speckle pattern in the speckle image.
The surface roughness measure may be determined based on the speckle image by providing the speckle image to a model and receiving the surface roughness measure from the model. For example, the model may be suitable for determining an output based on an input. In particular, the model may be suitable for determining the surface roughness measure based on the speckle image, preferably upon receiving the speckle image.
For example, the model may be or may comprise one or more of a physical model, a data-driven model or a hybrid model. A hybrid model may be a model comprising at least one data-driven model with physical or statistical adaptations and model parameters. Statistical or physical adaptations may be introduced to improve the quality of the results since those provide a systematic relation between empiricism and theory. For example, a data-driven model may represent a correlation between the surface roughness measure and the speckle image. The data-driven model may obtain the correlation between surface roughness measure and speckle image based on a training data set comprising a plurality of speckle images and a plurality of surface roughness measures. For example, the data-driven model may be parametrized based on a training data set to receive the speckle image and provide the surface roughness measure. The data-driven model may be trained based on a training data set. The training data set may comprise at least one speckle image and at least one corresponding surface roughness measure. The training data set may comprise a plurality of speckle images and a plurality of surface roughness measures. Training the model may comprise parametrizing the model. The data-driven model may be parametrized and/or trained to provide the surface roughness measure based on the speckle image, in particular upon receiving the speckle image. Determining the surface roughness measure based on the speckle image may comprise providing the speckle image to a data-driven model and receiving the surface roughness measure from the data-driven model. Providing the surface roughness measure based on the speckle image may comprise mapping the speckle image to the surface roughness measure. The data-driven model may be parametrized and/or trained to receive the speckle image. The data-driven model may receive the speckle image at an input layer. The term training may also be denoted as learning.
The term specifically may refer, without limitation, to a process of building the data-driven model, in particular determining and/or updating parameters of the data-driven model. Updating parameters of the data-driven model may also be referred to as retraining. Retraining may be included when
referring to training herein. During the training, the data-driven model may adjust to achieve the best fit with the training data, e.g. relating the at least one input value with best fit to the at least one desired output value. For example, if the neural network is a feedforward neural network such as a CNN, a backpropagation algorithm may be applied for training the neural network. In the case of an RNN, a gradient descent algorithm or a backpropagation-through-time algorithm may be employed for training purposes. Training a data-driven model may comprise or may refer, without limitation, to calibrating the model.
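In practice, the data-driven model described above would typically be a neural network trained by backpropagation; as a minimal, self-contained stand-in, the sketch below fits a linear mapping from a single hand-crafted speckle feature (the speckle contrast) to synthetic roughness labels by least squares. All data, names and the assumed linear roughness-contrast relation are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_contrast(img):
    """Classical speckle statistic: standard deviation over mean intensity."""
    return float(img.std() / img.mean())

# Synthetic training set: gamma-distributed intensities whose theoretical
# contrast is 1/sqrt(k); labels follow an *assumed* linear roughness law.
ks = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
X = np.array([speckle_contrast(rng.gamma(k, size=(64, 64))) for k in ks])
y = 3.0 / np.sqrt(ks) + 0.5            # hypothetical "ground truth" labels

# "Training" = least-squares fit of roughness = w * contrast + b.
A = np.stack([X, np.ones_like(X)], axis=1)
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_roughness(img):
    """Map a new speckle image to a surface roughness measure."""
    return w * speckle_contrast(img) + b
```

The "training data set" here is the pair (X, y), and "parametrizing" the model means determining w and b; a CNN would replace the hand-crafted feature with learned ones.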
For example, the physical model may reflect physical phenomena in mathematical form, e.g., including first-principles models. A physical model may comprise a set of equations that describe an interaction between the object and the coherent electromagnetic radiation thereby resulting in a surface roughness measure. The physical model may be based on at least one of a fractal dimension, speckle size, speckle contrast, speckle modulation, roughness exponent, standard deviation of the height associated with surface features, lateral correlation length, average mean height, root mean square height or a combination thereof. In particular, the physical model may comprise one or more equations relating the speckle image and the surface roughness measure based on equations relating to the fractal dimension, speckle size, speckle contrast, speckle modulation, roughness exponent, standard deviation of the height associated with surface features, lateral correlation length, average mean height, root mean square height or a combination thereof.
For example, the fractal dimension may be determined based on the Fourier transform of the speckle image and/or the inverse of the Fourier transform of the speckle image. For example, the fractal dimension may be determined based on the slope of a linear function fitted to a double logarithmic plot of the power spectral density versus a frequency obtained by Fourier transform. The speckle size may refer to the spatial extent of one or more speckles. Where the speckle size refers to the spatial extent of more than one speckle, the speckle size may be determined based on an average and/or a weighting of the individual speckle sizes. The speckle contrast may refer to a measure for the standard deviation of at least a part of the speckle image in relation to the mean intensity of at least the part of the speckle image. The speckle modulation may refer to a measure for the intensity fluctuation associated with the speckles in at least a part of the speckle image. The roughness exponent, the standard deviation of the height associated with surface features, the lateral correlation length or a combination thereof may be determined based on the autocorrelation function associated with the double logarithmic plot of the power spectral density versus a frequency obtained by Fourier transform.
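Two of the measures defined above, the speckle contrast and a fractal dimension obtained from the log-log slope of the radially averaged power spectral density, can be sketched as follows. The relation D = (8 − β)/2 for an isotropic two-dimensional surface with PSD ∝ f^(−β) is one of several conventions and is an assumption here:

```python
import numpy as np

def speckle_contrast(img):
    """Speckle contrast: standard deviation over mean intensity."""
    return float(img.std() / img.mean())

def fractal_dimension(img):
    """Fractal dimension from the log-log slope of the radially averaged
    power spectral density (PSD). Assumed convention: for an isotropic
    2-D surface with PSD ~ f**(-beta), D = (8 - beta) / 2.
    """
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    psd = np.abs(f) ** 2
    h, w = psd.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.indices(psd.shape)
    r = np.hypot(yy - cy, xx - cx).astype(int)      # integer radial frequency
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), weights=psd.ravel()) / np.maximum(counts, 1)
    freqs = np.arange(1, min(cy, cx))               # skip DC, stay on full rings
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)
    beta = -slope
    return (8.0 - beta) / 2.0
```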
For example, determining the surface roughness measure based on the speckle image may comprise determining the surface roughness measure based on a speckle pattern. For example, determining the surface roughness based on the speckle pattern may comprise determining the surface roughness based on a distribution of a plurality of speckles in the speckle image. Determining the surface roughness measure based on the distribution of the plurality of speckles in the speckle image may refer to determining the distribution of the plurality of speckles in
the speckle image. Determining the distribution of the speckles may comprise determining at least one of fractal dimension associated with the speckle image, speckle size associated with the speckle image, speckle contrast associated with the speckle image, speckle modulation associated with the speckle image, roughness exponent associated with the speckle image, standard deviation of the height associated with surface features associated with the speckle image, lateral correlation length associated with the speckle image, average mean height associated with the speckle image, root mean square height associated with the speckle image or a combination thereof.
Additionally or alternatively, determining the surface roughness measure may comprise determining at least one of fractal dimension associated with the speckle image, speckle size associated with the speckle image, speckle contrast associated with the speckle image, speckle modulation associated with the speckle image, roughness exponent associated with the speckle image, standard deviation of the height associated with surface features associated with the speckle image, lateral correlation length associated with the speckle image, average mean height associated with the speckle image, root mean square height associated with the speckle image or a combination thereof.
For example, determining the surface roughness measure may be based on the distribution of the speckles in the speckle image. Determining the surface roughness measure based on the distribution of the speckles in the speckle image may comprise determining at least one of a size distribution of the speckles, a power spectral density associated with the speckle image, a fractal dimension associated with the speckle image, a speckle contrast, a speckle modulation or a combination thereof.
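The size-distribution aspect mentioned above can be illustrated by estimating a mean speckle size from the intensity autocorrelation (computed via the Wiener-Khinchin theorem); taking the first lag at which the normalized autocovariance drops below one half as the speckle size is one common convention, assumed here purely for illustration:

```python
import numpy as np

def mean_speckle_size(img):
    """Mean speckle size in pixels, estimated as the first horizontal lag
    at which the normalized autocovariance drops below 1/2 (one common
    convention; computed circularly via the Wiener-Khinchin theorem).
    """
    i = img - img.mean()
    f = np.fft.fft2(i)
    acf = np.fft.ifft2(f * np.conj(f)).real     # circular autocovariance
    acf /= acf[0, 0]                            # normalize so acf[0, 0] == 1
    row = acf[0, : img.shape[1] // 2]           # horizontal lags 0, 1, 2, ...
    below = np.nonzero(row < 0.5)[0]
    return float(below[0]) if below.size else float(row.size)

rng = np.random.default_rng(3)
white = rng.random((128, 128))                  # uncorrelated: size ~ 1 pixel
blocky = np.repeat(rng.random((128, 32)), 4, axis=1)   # ~4-pixel wide features
```

Larger speckles correlate the image over longer lags, so the estimated size grows with the lateral extent of the speckles.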
Additionally or alternatively, determining the surface roughness measure based on the distribution of the speckles in the speckle image may comprise providing the speckle image to a model, in particular a data-driven model, wherein the data-driven model may be parametrized and/or trained based on a training data set comprising one or more speckle images and one or more corresponding surface roughness measures.
For example, the surface roughness measure may be determined based on the speckle image by providing the speckle image to a model and receiving the surface roughness measure from the model. The model may be a data-driven model and may be parametrized and/or trained based on a training data set comprising a plurality of speckle images and corresponding surface roughness measures or indications of surface roughness measures. Additionally or alternatively, the model may be a physical model.
For example, the method may further comprise generating a partial speckle image. A partial speckle image may refer to a partial image generated based on the speckle image. The partial speckle image may be generated by applying one or more image augmentation techniques to the speckle image.
For example, the method may further comprise generating a first speckle image and a second speckle image. The speckle image may comprise the first speckle image and the second speckle image. The first speckle image may refer to a first part of the speckle image. The second speckle image may refer to a second part of the speckle image. Preferably, the first speckle image and the second speckle image may be different from each other. In particular, the first speckle image and the second speckle image may be non-overlapping. The first speckle image and the second speckle image may be generated by applying one or more image augmentation techniques to the speckle image. Determining the surface roughness measure based on the speckle image may comprise determining a first surface roughness measure based on the first speckle image and determining a second surface roughness measure based on the second speckle image. Providing the surface roughness measure may include providing the first surface roughness measure and the second surface roughness measure. In particular, the first surface roughness measure and the second surface roughness measure may be provided together. Preferably, the first surface roughness measure and the second surface roughness measure may be provided in a surface roughness measure map indicating the spatial distribution of surface roughness measures. For example, the surface roughness measure map may indicate the first surface roughness measure associated with a first area in the surface roughness measure map and the second surface roughness measure associated with a second area in the surface roughness measure map. In particular, the surface roughness measure map may be similar to a heat map, wherein the surface roughness measures may be plotted against the area associated with the respective surface roughness measures.
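A surface roughness measure map of the kind described above can be sketched by tiling the speckle image into non-overlapping partial speckle images and computing a per-tile measure; speckle contrast is used as the per-tile measure purely for illustration, and the function name and tile size are assumptions:

```python
import numpy as np

def roughness_measure_map(speckle_image, tile=32):
    """Tile the speckle image into non-overlapping partial speckle images
    and compute a per-tile roughness measure (speckle contrast here),
    yielding a heat-map-like array over the image area.
    """
    h, w = speckle_image.shape
    rows, cols = h // tile, w // tile
    out = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            t = speckle_image[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
            out[i, j] = t.std() / t.mean()      # first, second, ... measures
    return out

img = np.random.default_rng(1).gamma(2.0, size=(128, 128))
m = roughness_measure_map(img, tile=32)         # 4 x 4 map of contrasts
```

Each map entry corresponds to one area of the skin, so spatial variations of the roughness become visible, as in a heat map.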
The surface roughness measure is determined by using at least one processor. The term “processor” as generally used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to an arbitrary logic circuitry configured for performing basic operations of a computer or system, and/or, generally, to a device which is configured for performing calculations or logic operations. In particular, the processor, or computer processor, may be configured for processing basic instructions that drive the computer or system. It may be a semiconductor-based processor, a quantum processor, or any other type of processor configured for processing instructions. As an example, the processor may be or may comprise a Central Processing Unit ("CPU"). The processor may be a graphics processing unit ("GPU"), a tensor processing unit ("TPU"), a Complex Instruction Set Computing ("CISC") microprocessor, a Reduced Instruction Set Computing ("RISC") microprocessor, a Very Long Instruction Word ("VLIW") microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing means may also be one or more special-purpose processing devices such as an Application-Specific Integrated Circuit ("ASIC"), a Field Programmable Gate Array ("FPGA"), a Complex Programmable Logic Device ("CPLD"), a Digital Signal Processor ("DSP"), a network processor, or the like. The methods, systems and devices described herein may be implemented as software in a DSP, in a micro-controller, or in any other side-processor or as a hardware circuit within an ASIC, CPLD,
or FPGA. It is to be understood that the term processor may also refer to one or more processing devices, such as a distributed system of processing devices located across multiple computer systems (e.g., cloud computing), and is not limited to a single device unless otherwise specified. The processor may also be an interface to a remote computer system such as a cloud service. The processor may include or may be a secure enclave processor (SEP). An SEP may be a secure circuit configured for processing the data. A "secure circuit" is a circuit that protects an isolated, internal resource from being directly accessed by an external circuit. The processor may be an image signal processor (ISP) and may include circuitry suitable for processing images, in particular images with personal and/or confidential information.
In an embodiment, the device may be a mobile electronic device. The surface roughness measure may be determined by a mobile electronic device and/or the speckle image may be generated with a camera of a mobile electronic device. In particular, the human may initiate the generation of the speckle image via the mobile electronic device. This is beneficial since many humans own a mobile electronic device such as a smartphone. These devices accompany the human and thus, a measurement of the surface roughness is possible at any time and can be carried out in more natural and less artificial contexts. Thereby, the surface roughness can be evaluated more realistically, which in turn yields a more realistic measure of the surface roughness.
As outlined above, the method further comprises authenticating the user, or denying authentication, using the surface roughness measure. The authenticating may be performed by using at least one authentication unit, e.g. of the device. The term “authentication unit” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to at least one unit configured for performing at least one authentication process of a user. The authentication unit may be or may comprise at least one processor and/or may be designed as software or application. The authenticating of the user, or the denial of authentication, using the surface roughness measure is performed by using an authentication unit of the device and/or a remote authentication unit.
For example, the authentication unit may perform at least one face detection using the flood image. The face detection may be performed locally on the device. Face identification, i.e. assigning an identity to the detected face, however, may be performed remotely, e.g. in the cloud, especially when identification rather than mere verification is required. User templates can be stored at the remote device, e.g. in the cloud, and would not need to be stored locally. This can be an advantage in view of storage space and security.
The authentication unit may be configured for identifying the user based on the flood image. Particularly therefore, the authentication unit may forward data to a remote device. Alternatively or in addition, the authentication unit may perform the identification of the user based on the flood image, particularly by running an appropriate computer program having a respective functionality. The term “identifying” as used herein is a broad term and is to be given its ordinary and
customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to assigning an identity to a detected face and/or at least one identity check and/or verifying an identity of the user.
The authentication may comprise a plurality of steps.
For example, the authentication may comprise performing at least one face detection using the flood image. The face detection may comprise analyzing the flood image. In particular, the analyzing of the flood image may comprise using at least one image recognition technique, in particular a face recognition technique. An image recognition technique comprises at least one process of identifying the user in an image. The image recognition may comprise using at least one technique selected from the group consisting of: color-based image recognition, e.g. using features such as template matching; segmentation and/or blob analysis, e.g. using size or shape; machine learning and/or deep learning, e.g. using at least one convolutional neural network.
For example, the authentication may comprise identifying the user. The identifying may comprise assigning an identity to a detected face and/or at least one identity check and/or verifying an identity of the user. The identifying may comprise performing a face verification of the imaged face to be the user’s face. The identifying the user may comprise matching the flood image, e.g. showing a contour of parts of the user, in particular parts of the user’s face, with a template, e.g. a template image generated within an enrollment process. The identifying of the user may comprise determining if the imaged face is the face of the user, in particular if the imaged face corresponds to at least one image of the user’s face stored in at least one memory, e.g. of the device. Authentication may be successful if the flood image can be matched with an image template. Authentication may be unsuccessful if the flood image cannot be matched with an image template.
For example, the identifying of the user may comprise determining a plurality of facial features. The analyzing may comprise comparing, in particular matching, the determined facial features with template features. The template features may be features extracted from at least one template. The template may be or may comprise at least one image generated in an enrollment process, e.g. when initializing the device. The template may be an image of an authorized user. The template features and/or the facial features may comprise a vector. Matching of the features may comprise determining a distance between the vectors. The identifying of the user may comprise comparing the distance of the vectors to at least one predefined limit. The user may be successfully identified in case the distance is below the predefined limit, at least within tolerances. The user may be declined and/or rejected otherwise.
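The vector matching described above can be sketched as follows. This is a minimal illustrative example, not the patented method itself: the feature values, the distance metric (Euclidean), and the limit of 0.6 are all assumptions made for illustration.

```python
import math

def feature_distance(features, template):
    """Euclidean distance between a facial-feature vector and a template vector."""
    return math.sqrt(sum((f - t) ** 2 for f, t in zip(features, template)))

def identify_user(features, template, limit=0.6):
    """Return True if the feature vector matches the template within the predefined limit."""
    return feature_distance(features, template) < limit

probe    = [0.12, 0.83, 0.40, 0.55]   # features extracted from the flood image (assumed values)
enrolled = [0.10, 0.80, 0.42, 0.57]   # template features from enrollment (assumed values)

print(identify_user(probe, enrolled))                   # close vectors: user identified
print(identify_user([0.9, 0.1, 0.9, 0.1], enrolled))    # distant vectors: user declined
```

A real system would extract such vectors with a face recognition model (e.g. an embedding network); the comparison against a limit is the same in principle.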
The analyzing of the flood image may further comprise one or more of the following: a filtering; a selection of at least one region of interest; a formation of a difference image between the flood image and at least one offset; an inversion of the flood image; a background correction; a decomposition into color channels; a decomposition into hue, saturation, and brightness channels; a frequency decomposition; a singular value decomposition; applying a Canny edge detector; applying a Laplacian of Gaussian filter; applying a Difference of Gaussian filter; applying a Sobel operator; applying a Laplace operator; applying a Scharr operator; applying a Prewitt operator; applying a Roberts operator; applying a Kirsch operator; applying a high-pass filter; applying a low-pass filter; applying a Fourier transformation; applying a Radon transformation; applying a Hough transformation; applying a wavelet transformation; a thresholding; creating a binary image. The region of interest may be determined manually by a user or may be determined automatically, such as by recognizing the user within the image.
For example, the image recognition may comprise using at least one model, in particular a trained model comprising at least one face recognition model. The analyzing of the flood image may be performed by using a face recognition system, such as FaceNet, e.g. as described in Florian Schroff, Dmitry Kalenichenko, James Philbin, “FaceNet: A Unified Embedding for Face Recognition and Clustering”, arXiv:1503.03832. The trained model may comprise at least one convolutional neural network. For example, the convolutional neural network may be designed as described in M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks”, CoRR, abs/1311.2901, 2013, or C. Szegedy et al., “Going deeper with convolutions”, CoRR, abs/1409.4842, 2014. For more details with respect to the convolutional neural network for the face recognition system, reference is made to Florian Schroff, Dmitry Kalenichenko, James Philbin, “FaceNet: A Unified Embedding for Face Recognition and Clustering”, arXiv:1503.03832. As training data, labelled image data from an image database may be used. Specifically, labeled faces may be used from one or more of G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments”, Technical Report 07-49, University of Massachusetts, Amherst, October 2007, the Youtube® Faces Database as described in L. Wolf, T. Hassner, and I. Maoz, “Face recognition in unconstrained videos with matched background similarity”, in IEEE Conf. on CVPR, 2011, or the Google® Facial Expression Comparison dataset. The training of the convolutional neural network may be performed as described in Florian Schroff, Dmitry Kalenichenko, James Philbin, “FaceNet: A Unified Embedding for Face Recognition and Clustering”, arXiv:1503.03832.
The face detection and identification of the user may be performed before step b.4 of authenticating the user or denial using the surface roughness measure. The authentication process may be aborted in case the user is not successfully identified. As outlined above, authentication using two-dimensional images can be tricked. The method according to the present invention proposes to use the surface roughness measure as an additional security feature for authentication. The authentication using the flood image may be validated using the surface roughness measure.
Alternatively, step b.4 of authenticating the user or denial using the surface roughness measure may be performed regardless of whether the face detection and identification of the user was
performed. The surface roughness measure may be used as biometric identifier for uniquely identifying the user.
For example, the method may comprise determining if the surface roughness measure corresponds to a surface roughness measure of a human being. For example, the method may comprise determining if the surface roughness measure corresponds to a surface roughness measure of the specific user. Determining if the surface roughness measure corresponds to a surface roughness measure of a human being and/or of the specific user may comprise comparing the surface roughness measure to at least one pre-defined or pre-determined range of values of the surface roughness measure, e.g. stored in at least one database, e.g. of the device or of a remote database such as of a cloud. In case the determined surface roughness measure is, at least within tolerances, within the pre-defined or pre-determined range of values of the surface roughness measure, the user is authenticated; otherwise the authentication is unsuccessful. For example, the surface roughness measure may be a human skin roughness. In case the determined human skin roughness is within the range of 10 µm to 150 µm, the user is authenticated. However, other ranges are possible.
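The range check above can be sketched directly. The 10 µm to 150 µm range is taken from the example in the text; the relative tolerance of 5 % is an illustrative assumption standing in for the unspecified tolerances.

```python
def roughness_is_human(roughness_um, low=10.0, high=150.0, tolerance=0.05):
    """True if the skin roughness (in micrometres) lies within the human range,
    widened by a relative tolerance on each boundary."""
    return low * (1 - tolerance) <= roughness_um <= high * (1 + tolerance)

print(roughness_is_human(80.0))    # within the human range: authenticated
print(roughness_is_human(500.0))   # e.g. a printed photo or mask: rejected
```
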
The method further may comprise c. allowing or declining the user to access one or more functions associated with the device depending on the authentication or denial in step b.4.
The allowing may comprise granting permission to access the one or more functions. The method may comprise determining if the user corresponds to an authorized user, wherein allowing or declining is further based on determining if the user corresponds to an authorized user. In particular, the method may comprise at least one authorization step, e.g. by using at least one authorization unit. The term “authorization step” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a step of assigning access rights to the user, in particular a selective permission or selective restriction of access to the device and/or at least one resource of the device. The authorization unit may be configured for access control. The term “authorization unit” as used herein is a broad term and is to be given its ordinary and customary meaning to a person of ordinary skill in the art and is not to be limited to a special or customized meaning. The term specifically may refer, without limitation, to a unit such as a processor configured for authorization of a user. The authorization unit may comprise at least one processor or may be designed as software or application. The authorization unit and the authentication unit may be embodied integrally, e.g. by using the same processor. The authorization unit may be configured for allowing the user to access the one or more functions, e.g. on the device, e.g. unlocking the device, in case of successful authentication of the user, or declining the user to access the one or more functions, e.g. on the device, in case of non-successful authentication.
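The combined allow/decline decision of step c can be sketched as a small gate. The user names, the in-memory set of authorized users, and the three boolean inputs are illustrative assumptions; the point is only that access requires successful authentication, roughness validation, and authorization together.

```python
AUTHORIZED_USERS = {"alice", "bob"}  # assumed stand-in for the enrollment database

def access_decision(identified_user, face_ok, roughness_ok):
    """Return 'allow' or 'decline' for access to the one or more device functions."""
    if face_ok and roughness_ok and identified_user in AUTHORIZED_USERS:
        return "allow"
    return "decline"

print(access_decision("alice", True, True))    # all checks pass: allow
print(access_decision("alice", True, False))   # roughness check failed: decline
print(access_decision("mallory", True, True))  # not an authorized user: decline
```
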
The method may comprise displaying a result of the authentication and/or the authorization e.g. by using at least one communication interface such as a user interface, e.g. a display.
The method further may comprise using even further security features such as three-dimensional information and/or other liveness data. For example, the method may comprise conducting at least one distance measurement, in addition to using the surface roughness as security feature. The three-dimensional information obtained via the distance measurement may be used as an additional security feature. In an embodiment, the terms determining a distance or performing a distance measurement may refer to measuring a distance.
For example, the method may comprise performing a distance measurement at a plurality of positions of the user’s face and determining a depth map. The method may comprise authenticating the user based on the surface roughness measure and the depth map. The determined depth map may be compared to a predetermined depth map of the user, e.g. determined during an enrollment process. The authentication unit may be configured for authenticating the user in case the determined depth map matches with the predetermined depth map of the user, in particular at least within tolerances. Otherwise, the user may be declined. The authenticating using the depth map may be performed before or after the authenticating using the surface roughness measure.
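The depth-map matching can be sketched as an element-wise comparison within a tolerance. The depth values (in millimetres) and the 5 mm tolerance are illustrative assumptions; the text does not specify the metric.

```python
def depth_maps_match(measured, template, tolerance_mm=5.0):
    """True if every corresponding depth value agrees within the tolerance."""
    return all(
        abs(m - t) <= tolerance_mm
        for row_m, row_t in zip(measured, template)
        for m, t in zip(row_m, row_t)
    )

template = [[400.0, 402.0], [398.0, 405.0]]   # enrollment depth map, mm (assumed)
measured = [[401.5, 400.0], [399.0, 407.5]]   # newly determined depth map (assumed)

print(depth_maps_match(measured, template))                          # within 5 mm: match
print(depth_maps_match([[450.0, 402.0], [398.0, 405.0]], template))  # 50 mm off: no match
```
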
For example, the distance measurement may comprise using one or more of depth from focus, depth from defocus, triangulation, or depth-from-photon-ratio.
For example, the distance measurement may comprise receiving and/or obtaining at least one reflection image showing at least a part of the user while the user is illuminated at least partially with electromagnetic radiation and determining a distance of the user from the image generation unit and/or from an illumination source based on the at least one reflection image. The reflection image as used herein may not be limited to an actual visual representation of a user. Instead, a reflection image comprises data generated based on electromagnetic radiation reflected by an object being illuminated by electromagnetic radiation. The reflection image may comprise at least one pattern. The reflection image may comprise at least one pattern feature. The reflection image may be comprised in a larger reflection image. A larger reflection image may be a reflection image comprising more pixels than the reflection image comprised in it. Dividing a reflection image into at least two parts may result in at least two reflection images. The at least two reflection images may comprise different data generated based on light reflected by an object being illuminated with light, e.g. one of the at least two reflection images may represent a living organism's nose and the other one of the at least two reflection images may represent a living organism's forehead. The reflection image may be suitable for determining a feature contrast for the at least one pattern feature. The reflection image may comprise a plurality of pixels. A plurality of pixels may comprise at least two pixels, preferably more than two pixels. For determining a feature contrast, at least one pixel associated with the reflection feature and at least one pixel not associated with the reflection feature may be suitable. In particular, the term “reflection image” as used herein can refer to any data based on which an actual visual representation of the imaged object can be constructed.
For instance, the data can correspond to an assignment of color or grayscale values to image positions, wherein each image position can correspond to a position
in or on the imaged object. The reflection images or the data referred to herein can be two-dimensional, three-dimensional or four-dimensional, for instance, wherein a four-dimensional image is understood as a three-dimensional image evolving over time and, likewise, a two-dimensional image evolving over time might be regarded as a three-dimensional image. A reflection image can be considered a digital image if the data are digital data, wherein the image positions may then correspond to pixels or voxels of the image and/or image sensor. While generating the reflection image, the living organism may be illuminated with light, possibly RGB light or preferably IR flood light and/or patterned light. The electromagnetic radiation may be patterned electromagnetic radiation. Patterned electromagnetic radiation may comprise at least one pattern. Patterned light may be projected onto the living organism. Patterned electromagnetic radiation may comprise patterned coherent electromagnetic radiation.
In an embodiment, a distance of the user from an image generation unit and/or from an illumination source may be determined based on the at least one reflection image. The reflection image may comprise information associated with the distance. In the art, a variety of methods are known for determining a distance based on an image, e.g. “depth from focus”, “depth from defocus” or triangulation. The distance may be determined by “depth from focus”, “depth from defocus” (DFD), triangulation, depth-from-photon-ratio (DPR) or combinations thereof. Different methods for determining a distance may provide different advantages depending on the use case, as known in the art. Hence, combinations of at least two methods may provide more accurate results and thus improve the reliability of an authentication process including distance determination. The distance obtained from at least two methods may comprise at least two distance values. The at least two distance values may be combined by using at least one recursive filter and/or using a real function such as the arithmetic or geometric mean, a polynomial, preferably a polynomial up to the eighth order in the at least two distance values.
In an embodiment, the method as described herein may further comprise determining a distance based on the at least one reflection image by using one or more of depth from focus, depth from defocus, triangulation, depth-from-photon-ratio, determining a distance based on the distance between at least two spatial features in a flood image, and combinations thereof. Determining a distance of the user from an image generation unit and/or from an illumination source based on the reflection image may comprise one or more of the following techniques: depth from focus, depth from defocus, triangulation, depth-from-photon-ratio, determining a distance based on measuring a distance between at least two spatial features in a flood image, and combinations thereof. A spatial feature may be represented with a vector. The vector may comprise at least one numerical value. Examples for spatial features of a face may comprise at least one of the following: the nose, the eyes, the eyebrows, the mouth, the ears, the chin, the forehead, wrinkles, irregularities such as scars, cheeks including cheekbones or the like. Other examples for spatial features may include fingers, nails or the like.
In an embodiment, determining a distance of the user from an image generation unit and/or from an illumination source based on the at least one reflection image may be based on measuring the distance between at least two spatial features in a flood image and comparing the distance between at least two spatial features in a flood image with a reference. The distance between at least two spatial features associated with the object may be indicative of a distance between the user and an illumination source and/or an image generation unit. The distance between at least two spatial features associated with the user may be related to a distance between the object and an illumination source and/or an image generation unit. The distance of the user from an illumination source and/or an image generation unit may be determined based on a relation of the distance between at least two spatial features associated with the user in a flood image and the distance of the user from an illumination source and/or an image generation unit. Determining a distance of the user from an illumination source and/or an image generation unit may be based on a reference. The reference may include a distance between at least two spatial features associated with the object and a distance of the object from an illumination source and/or an image generation unit. Hence, determining a distance of the object from an illumination source and/or an image generation unit may comprise referencing the distance between at least two spatial features associated with the user in a flood image to a predetermined distance between at least two spatial features associated with a user. For example, the distance between a human’s eyes may be about 5 cm at a distance of 1 m from the camera and the distance between a human’s eyes in a flood image may be 1 cm. Based on this information, the distance of the human from the camera may be determined. The relation between the distance of the user from an illumination source and/or an image generation unit and the distance between at least two spatial features associated with the object may be obtained by using equations of optics or by interpolating between several distance values.
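The referencing step can be sketched with a simple inverse-proportionality relation: under a pinhole model, the apparent spacing of two fixed facial features scales as 1/distance, so one reference pair (apparent spacing at a known distance) calibrates the estimate. The reference values below are illustrative assumptions, not calibrated data.

```python
def distance_from_feature_spacing(apparent_cm, ref_apparent_cm=1.0, ref_distance_m=1.0):
    """Estimate the user-camera distance from the apparent spacing of two
    spatial features (e.g. the eyes) in a flood image, given a reference
    spacing observed at a known reference distance."""
    return ref_distance_m * ref_apparent_cm / apparent_cm

print(distance_from_feature_spacing(1.0))  # reference spacing observed -> 1.0 m
print(distance_from_feature_spacing(2.0))  # spacing doubled -> user at 0.5 m
```

Interpolating between several such reference pairs, as the text mentions, would replace the single-pair formula with a lookup or fitted curve.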
In an embodiment, the distance may be determined by using a model based on determining the distance between at least two spatial features in a flood image. The model may be trained with a training data set comprising at least one flood image and a distance associated with the object shown at least partially in the flood image. In an embodiment, the model may implement the relation of the distance between at least two spatial features associated with the object in a flood image and the distance of the user from an illumination source and/or an image generation unit.
In an embodiment, the distance may be determined by using depth from defocus (DFD). Depth from defocus may comprise optimizing at least one blurring function fa. The blurring function fa, also referred to as blur kernel or point spread function, is a response function of a detector to the illumination from the user. Specifically, the blurring function may model the blur of a defocused object. The at least one blurring function fa may be a function or composite function composed from at least one function from the group consisting of: a Gaussian, a sine function, a pillbox function, a square function, a Lorentzian function, a radial function, a polynomial, a Hermite polynomial, a Zernike polynomial, a Legendre polynomial. The distance may be referred to as longitudinal coordinate z. The longitudinal coordinate zDFD may be determined by using at least one convolution-based algorithm such as a depth-from-defocus algorithm. To obtain the distance from the image, the depth-from-defocus algorithm estimates the defocus of the object. A longitudinal coordinate zDFD may be determined by optimizing the at least one blurring function fa. The blurring function may be optimized by varying the parameters of the at least one blurring function.
The image may be a blurred image ib, in particular a blurred reflection image. A longitudinal coordinate z may be reconstructed from the blurred image ib and the blurring function fa. The longitudinal coordinate zDFD may be determined by minimizing a difference between the blurred image ib and the convolution of the blurring function fa with at least one further image i'b, min ||i'b ∗ fa(σ(z)) − ib||, by varying the parameters σ of the blurring function, wherein σ(z) is a set of distance dependent blurring parameters. The further image may be blurred or sharp. As used herein, the term “sharp” or “sharp image” refers to a blurred image having a maximum contrast. The at least one further image may be generated from the blurred image ib by a convolution with a known blurring function. Thus, the depth-from-defocus algorithm may be used to obtain the longitudinal coordinate zDFD.
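The minimization above can be sketched in one dimension: a sharp signal i'b is convolved with a candidate blurring function fa (here a box kernel, standing in for the listed kernels) and the kernel width minimizing the residual ||i'b ∗ fa − ib|| is selected. The mapping from the recovered blur parameter back to the distance z via σ(z) is omitted; all signals and candidate widths are illustrative assumptions.

```python
def box_blur(signal, width):
    """Convolve a 1-D signal with a normalized box kernel of odd width
    (windows are clipped at the signal boundaries)."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def estimate_blur_width(blurred, sharp, candidates=(1, 3, 5, 7)):
    """Return the candidate blur width minimizing ||sharp * fa(w) - blurred||^2."""
    def residual(width):
        model = box_blur(sharp, width)
        return sum((m - b) ** 2 for m, b in zip(model, blurred))
    return min(candidates, key=residual)

sharp = [0.0] * 5 + [1.0] * 5      # a sharp step edge (assumed scene)
blurred = box_blur(sharp, 5)       # the observed, defocused image
print(estimate_blur_width(blurred, sharp))  # recovers the true width 5
```
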
In an embodiment, the distance may be determined by using depth-from-photon-ratio (DPR). Depth from photon ratio may be based on a combination of a measure for an intensity associated with a first location in an image and a second measure for an intensity associated with a second location. DPR may be based on a quotient of a measure for an intensity associated with a first location in an image and a second measure for an intensity associated with a second location. Preferably, the first location may be a location other than the second location. A measure for an intensity may comprise, but is not limited to, an intensity, an absorbance, an extinction, a relative intensity, e.g. by relating the final intensity to the initial intensity, or the like. A quotient of at least one first measure for an intensity associated with a first location in an image and at least one second measure for an intensity associated with a second location may be related to the distance. A quotient of a first measure for an intensity associated with a first location in an image and a second measure for an intensity associated with a second location may be suitable for determining a distance. The distance may be, in at least one measurement range, independent from the object size in an object plane. A quotient of a measure for an intensity associated with a first location in an image and another measure for an intensity associated with a second location comprises one or more of: dividing at least the first measure and/or at least the second measure, dividing multiples of at least the first measure and/or at least the second measure, dividing linear combinations of at least the first measure and/or at least the second measure. Electromagnetic radiation used for illumination may be associated with at least one beam profile. A measure for an intensity may further comprise at least one item of information related to at least one beam profile of the light beam associated with the electromagnetic radiation.
The beam profile may be one of a trapezoid beam profile; a triangle beam profile; a conical beam profile and a linear combination of Gaussian beam profiles. Furthermore, a first measure for an intensity may comprise information of a first area of the beam profile and a second measure for an intensity may comprise information of a second area of the beam profile. The first area of the beam profile and the second area of the beam profile may be adjacent or overlapping areas. A measure for an intensity may be obtained by integrating the intensities of an area in the at least one reflection image. The first measure for the intensity associated with the first location in an image may be obtained by integrating the intensities of the at least one area associated with the first location in the at least one reflection image. The second measure for the intensity associated with the second location in an image may be obtained by integrating the intensities of the at least one area associated with the second location in the at least one reflection image.
In an embodiment, determining the distance may be associated with determining the first area of the beam profile and the second area of the beam profile. First area of the beam profile may comprise essentially edge information of the beam profile and the second area of the beam profile may comprise essentially center information of the beam profile. Edge information may comprise an information relating to a number of photons in the first area of the beam profile and the center information may comprise an information relating to a number of photons in the second area of the beam profile. Determining the distance based on the quotient may comprise dividing the edge information and the center information, dividing multiples of the edge information and the center information, dividing linear combinations of the edge information and the center information. Quotient Q may be expressed as
Q = ∫∫A1 E(x,y,z0) dx dy / ∫∫A2 E(x,y,z0) dx dy, wherein x and y are transversal coordinates in the image, A1 and A2 are areas of the beam profile, and E(x,y,z0) denotes the beam profile given at the object distance z0. Other embodiments are disclosed in EP17797964A 2017-11-17, which is incorporated by reference herewith.
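The quotient Q can be illustrated numerically by sampling a beam profile on a grid and integrating it over an edge area A1 and a center area A2. The Gaussian profile, the circular split of the two areas, and the grid parameters are illustrative assumptions; the point is only that a more defocused (wider) spot shifts intensity from the center area to the edge area, so Q carries distance information.

```python
import math

def gaussian_profile(x, y, sigma):
    """Illustrative rotationally symmetric beam profile E(x, y)."""
    return math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))

def dpr_quotient(sigma, radius=1.0, extent=4.0, n=81):
    """Q = (integral over edge area A1) / (integral over center area A2),
    approximated by midpoint sampling on an n x n grid."""
    step = 2.0 * extent / n
    center = edge = 0.0
    for i in range(n):
        for j in range(n):
            x = -extent + (i + 0.5) * step
            y = -extent + (j + 0.5) * step
            e = gaussian_profile(x, y, sigma) * step * step
            if math.hypot(x, y) <= radius:
                center += e   # A2: center information
            else:
                edge += e     # A1: edge information
    return edge / center

# A wider (more defocused) spot pushes photons into the edge area, so Q grows.
print(dpr_quotient(0.5) < dpr_quotient(1.5))
```
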
In an embodiment, the distance may be determined by using triangulation. Triangulation may be based on trigonometrical equations. Trigonometrical equations may be used for determining a distance. Triangulation may be based on the at least one reflection image and a baseline. The baseline may refer to the distance between the illumination source and the image generation unit. The baseline may be received. In particular, the baseline may be received together with the at least one reflection image.
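A minimal triangulation sketch: with a baseline b between the illumination source and the image generation unit, a focal length f, and an observed disparity d of a pattern feature on the sensor, similar triangles give z = f · b / d. The numeric values are illustrative assumptions.

```python
def triangulate_distance(focal_mm, baseline_mm, disparity_mm):
    """Distance (in the unit of the baseline) from disparity via similar triangles."""
    return focal_mm * baseline_mm / disparity_mm

# f = 4 mm, baseline = 50 mm, disparity = 0.5 mm  ->  z = 400 mm
print(triangulate_distance(4.0, 50.0, 0.5))
```
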
In an embodiment, the distance may be determined based on a combination of depth from photon ratio and triangulation. In an embodiment, the distance may be determined based on a combination of depth from photon ratio, triangulation and depth from defocus. In an embodiment, a distance may be determined based on a combination of DPR and DFD. For this purpose, the distance is determined by DPR and DFD and the at least two distance values may be combined.
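Combining the at least two distance values can be sketched with a weighted arithmetic mean, one of the real functions mentioned above; the weights and the two input values are illustrative assumptions standing in for method-specific confidence.

```python
def combine_distances(values, weights=None):
    """Weighted arithmetic mean of several distance estimates."""
    if weights is None:
        weights = [1.0] * len(values)
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

z_dpr, z_dfd = 0.98, 1.04                              # metres, from DPR and DFD (assumed)
print(combine_distances([z_dpr, z_dfd]))               # plain arithmetic mean
print(combine_distances([z_dpr, z_dfd], [2.0, 1.0]))   # weighting DPR twice as strongly
```

A recursive filter (e.g. a Kalman-style update) would replace this one-shot mean when estimates arrive over time.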
In an embodiment, the distance may be determined by using a model. In an embodiment, the model is suitable for determining an output based on an input. A model may be a mechanistic model, a data-driven model or a hybrid model. The mechanistic model, preferably, reflects physical phenomena in mathematical form, e.g., including first-principle models. A mechanistic model may comprise a set of equations that describes an interaction between the user (object) and the electromagnetic radiation. Preferably, the data-driven model may be a classification model. The classification model may comprise at least one machine-learning architecture and model parameters. For example, the machine-learning architecture may be or may comprise one or more of: linear regression, logistic regression, random forest, piecewise linear, nonlinear classifiers, support vector machines, naive Bayes classifications, nearest neighbors, neural networks, convolutional neural networks, generative adversarial networks, or gradient
boosting algorithms or the like. In the case of a neural network, the model can be a multi-scale neural network or a recurrent neural network (RNN) such as, but not limited to, a gated recurrent unit (GRU) recurrent neural network or a long short-term memory (LSTM) recurrent neural network. The data-driven model may be trained based on training data. The training may comprise parametrizing. The training may comprise a process of building the model, in particular determining and/or updating parameters of the model. Updating parameters of the classification model may also be referred to as retraining. Retraining may be included when referring to training herein. The classification model may be at least partially data-driven. The classification model may be trained based on training data. Training data may comprise at least one reflection image and at least one distance, preferably the distance may be associated with the reflection image. Training the data-driven model may comprise providing training data to the model. The training data may comprise at least one training dataset. During the training, the data-driven model may be adjusted to achieve a best fit with the training data, e.g. relating the at least one input value with best fit to the at least one desired output value. For example, if the neural network is a feedforward neural network such as a CNN, a backpropagation algorithm may be applied for training the neural network. In case of an RNN, a gradient descent algorithm or a backpropagation-through-time algorithm may be employed for training purposes. In an embodiment, a training data set may comprise at least one input and at least one desired output. A training data set may comprise at least one reflection image and a distance, in particular a distance associated with the at least one reflection image. In particular, a training data set may comprise a plurality of reflection images and a plurality of distances.
Training a model may include or may refer without limitation to calibrating the model. The model may be suitable for measuring a desired value such as a target value and/or a reference value. The model may be referred to as a measuring system, e.g. for measuring a target value and/or a reference value.
In an embodiment, the user may be provided with feedback of an authentication process. Feedback may comprise user-related information and/or process-related information. User-related information may comprise information selected for the user involved in the process. According to an embodiment, the user-related information includes: a user guidance for navigating the user through the process, a required user action, a specific information for the user based on a current status of the user, in particular requesting inputting of authentication information by the user, and/or a user representation.
User guidance for navigating the user through the process may comprise instructions. Instructions may be associated with explanations. Explanations may be suitable for explaining the process to the user. Examples may be advising the user to change the distance between him or her and the device. A required user action can be an action that the user has to perform in order for the process to continue. Examples may be selecting an option out of several, entering additional information such as authentication information, or the like. A user representation may be suitable for representing the user’s physical appearance. Examples may be a representation of the user’s face in a face authentication process.
For example, increased transparency for the user during execution of the process, e.g. a face authentication process, is achieved by representing the user (e.g. during illumination), wherein the representation can be an image of the user recorded with an RGB or IR camera, or an Animoji (animated emoji) generated from image data obtained with the active illumination source emitting light and the camera generating at least one image. Besides increasing transparency for the user, a further advantage of a user representation is that other error sources can be recognized, such as grease or dirt on the display.
Process-related information comprises information selected for the execution of the process. According to an embodiment, the process-related information includes: information associated with a type of the process, upcoming events related to the process, and/or highlighting parts of the display device involved in the process.
Information associated with the type of process refers to a name or a symbolic representation of the process. Exemplary names for processes can be authentication process, payment process, or the like. Upcoming events related to the process may be subsequent processes or termination of the process or an application after the process is completed. Parts of the display device may be highlighted with symbols, representations or text referring to a part of the display device. In an exemplary scenario, the camera may be highlighted by means of text or a camera symbol close to or above the camera. In other scenarios, a fingerprint sensor may be highlighted by representing a fingerprint in the area where the finger of the user needs to be placed.
The enrolment process may be suitable for generating a template. A template may be a low-level representation template. A template may be stored after generation by a device. An enrolment process may be performed before an authentication may be performed. An enrolment process may be a preceding process to at least one authentication process. An enrolment process may be performed at least once per user. An exemplary enrolment process may comprise: - providing an image of at least a part of a user, e.g., a fingerprint feature or a facial feature; - generating a low-level representation template of the image; and - storing the low-level representation template.
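A minimal sketch of such an enrolment process is given below, assuming a hypothetical `enroll` helper; the downsampled, normalised grid merely stands in for a real low-level representation template:

```python
import numpy as np

def enroll(image: np.ndarray, store: dict, user_id: str) -> None:
    """Hypothetical enrolment: derive a low-level representation template
    from an image of part of the user and store it under a user id."""
    # Downsample to a coarse grid as a stand-in for real feature extraction.
    small = image[::4, ::4].astype(np.float32)
    # Normalise so the template is robust to global brightness changes.
    template = (small - small.mean()) / (small.std() + 1e-8)
    store[user_id] = template

store: dict = {}
face = np.random.default_rng(1).random((32, 32))  # synthetic face image
enroll(face, store, "user-1")
```

The stored template can then be matched against features extracted during a later authentication process.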
All described method steps may be performed by using the device. Therefore, a single processing device may be configured to exclusively perform at least one computer program, in particular at least one line of computer program code configured to execute at least one algorithm, as used in at least one of the embodiments of the method according to the present invention. Herein, the computer program as executed on the single processing device may comprise all instructions causing the computer to carry out the method. Alternatively, or in addition, at least one method step may be performed by using at least one remote device, especially selected from at least one of a server or a cloud server, particularly when the device and the remote device may be part of a computer network. In this case, the computer program may comprise at least one remote component to be executed by the at least one remote processing device to carry out the at least one method step. The remote component, e.g. may have the functionality
of performing the identifying of the user. Further, the computer program may comprise at least one interface configured to forward to and/or receive data from the at least one remote component of the computer program.
In a further aspect, a device for authenticating a user of a device is disclosed. The device comprises: at least one illumination source configured for illuminating the user with coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm; at least one camera configured for generating at least one speckle image showing the user under illumination with the coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm; at least one processor configured for receiving the speckle image from the camera, determining at least one surface roughness measure based on the speckle image and providing the surface roughness measure; and at least one authentication unit configured for receiving the surface roughness measure and configured for authenticating the user or denying the authentication of the user using the surface roughness measure.
For details, options and definitions of the device, reference may be made to the method as discussed above. Thus, specifically, the device may be configured for performing the method according to the present invention, such as according to the embodiments described above or described in further detail below. Reference may, therefore, be made to any further aspect of the present disclosure.
The device for authenticating a user may be comprised by the device of the user, e.g. may be an element of the device of the user, or may be the user’s device itself.
Further disclosed and proposed herein is a use of a surface roughness measure as obtained by a method according to the present invention, such as according to the embodiments described above or described in further detail below, and/or as obtained by a device according to the present invention, such as according to the embodiments described above or described in further detail below.
In a further aspect, a computer program is disclosed, which comprises instructions which, when the program is executed by the device, cause the device to perform the method according to any one of the preceding embodiments referring to a method. Specifically, the computer program may be stored on a computer-readable data carrier and/or on a computer-readable storage medium. The computer program may be executed on at least one processor comprised by the device. The computer program may generate input data by accessing and/or controlling at least one unit of the device, such as the projector and/or the flood illumination source and/or the image generation unit. The computer program may generate outcome data based on the input data, particularly by using the authentication unit.
As used herein, the terms “computer-readable data carrier” and “computer-readable storage medium” specifically may refer to non-transitory data storage means, such as a hardware storage medium having stored thereon computer-executable instructions. The stored computer-executable instructions may be associated with the computer program. The computer-readable data carrier or storage medium specifically may be or may comprise a storage medium such as a random-access memory (RAM) and/or a read-only memory (ROM).
In an embodiment, a computer-readable storage medium may refer to any suitable data storage device or computer-readable memory on which is stored one or more sets of instructions (for example software) embodying any one or more of the methodologies or functions described herein. The instructions may also reside, completely or at least partially, within the main memory and/or within the processor during execution thereof by the computer, the main memory and the processing device thereby also constituting computer-readable storage media. The instructions may further be transmitted or received over a network via a network interface device. Computer-readable storage media include hard drives, for example on a server, USB storage devices, CDs, DVDs or Blu-ray discs.
The computer-readable storage medium, in particular the non-transitory computer-readable storage medium, comprises instructions that, when executed by a computer, cause the computer to: receive a request for accessing one or more functions associated with the device; execute at least one authentication process comprising the following steps:
- triggering to illuminate the user by coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm,
- triggering to generate at least one speckle image of the user while the user is being illuminated by the coherent electromagnetic radiation,
- determining at least one surface roughness measure based on the speckle image,
- authenticating the user or denying the authentication of the user using the surface roughness measure.
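These four steps can be sketched as follows; the trigger functions, threshold values and the speckle-contrast proxy are hypothetical stand-ins for the device's illumination source, camera and calibrated roughness model:

```python
import numpy as np

def trigger_illumination() -> None:
    pass  # would switch on the coherent 850-1400 nm illumination source

def trigger_speckle_image(rng) -> np.ndarray:
    return rng.random((64, 64))  # would read a frame from the camera

def surface_roughness(speckle: np.ndarray) -> float:
    # Speckle contrast (std/mean of intensity) as a simple roughness proxy.
    return float(speckle.std() / speckle.mean())

def authenticate(measure: float, low: float = 0.1, high: float = 1.0) -> bool:
    # Accept only measures consistent with human skin; a flat replay
    # medium (e.g. a printed photograph) would fall outside the band.
    return low <= measure <= high

rng = np.random.default_rng(2)
trigger_illumination()                              # step b.1
speckle = trigger_speckle_image(rng)                # step b.2
measure = surface_roughness(speckle)                # step b.3
granted = authenticate(measure)                     # step b.4
```

The acceptance band would in practice be derived from enrolment data rather than fixed constants.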
Thus, specifically, one, more than one or even all of method steps as indicated above may be performed by using a computer or a computer network, preferably by using a computer program.
Further disclosed and proposed herein is a computer program product having program code means, in order to perform the method according to the present invention in one or more of the embodiments enclosed herein when the program is executed on a computer or computer network. Specifically, the program code means may be stored on a computer-readable data carrier and/or on a computer-readable storage medium.
Further disclosed and proposed herein is a data carrier having a data structure stored thereon, which, after loading into a computer or computer network, such as into a working memory or main memory of the computer or computer network, may execute the method according to one or more of the embodiments disclosed herein.
Further disclosed and proposed herein is a computer program product with program code means stored on a machine-readable carrier, in order to perform the method according to one or more of the embodiments disclosed herein, when the program is executed on a computer or computer network. As used herein, a computer program product refers to the program as a tradable product. The product may generally exist in an arbitrary format, such as in a paper format, or on a computer-readable data carrier and/or on a computer-readable storage medium. Specifically, the computer program product may be distributed over a data network.
Further disclosed and proposed herein is a non-transient computer-readable medium including instructions that, when executed by one or more processors, cause the one or more processors to perform the method according to one or more of the embodiments disclosed herein.
Finally, disclosed and proposed herein is a modulated data signal which contains instructions readable by a computer system or computer network, for performing the method according to one or more of the embodiments disclosed herein.
Referring to the computer-implemented aspects of the invention, one or more of the method steps or even all of the method steps of the method according to one or more of the embodiments disclosed herein may be performed by using a computer or computer network. Thus, generally, any of the method steps including provision and/or manipulation of data may be performed by using a computer or computer network. Generally, these method steps may include any of the method steps, typically except for method steps requiring manual work, such as providing the samples and/or certain aspects of performing the actual measurements.
Specifically, further disclosed herein are:
- a computer or computer network comprising at least one processor, wherein the processor is adapted to perform the method according to one of the embodiments described in this description,
- a computer loadable data structure that is adapted to perform the method according to one of the embodiments described in this description while the data structure is being executed on a computer,
- a computer program, wherein the computer program is adapted to perform the method according to one of the embodiments described in this description while the program is being executed on a computer,
- a computer program comprising program means for performing the method according to one of the embodiments described in this description while the computer program is being executed on a computer or on a computer network,
- a computer program comprising program means according to the preceding embodiment, wherein the program means are stored on a storage medium readable to a computer,
- a storage medium, wherein a data structure is stored on the storage medium and wherein the data structure is adapted to perform the method according to one of the embodiments described in this description after having been loaded into a main and/or working storage of a computer or of a computer network, and
- a computer program product having program code means, wherein the program code means can be stored or are stored on a storage medium, for performing the method according to one of the embodiments described in this description, if the program code means are executed on a computer or on a computer network.
By deploying the methods, devices and systems as described herein, the surface roughness can be measured easily with low-cost hardware. Such hardware is readily available, can be integrated easily in mobile electronic devices such as smartphones, and can be used as an additional security feature for access control. Furthermore, resources for time-consuming measurements are saved. Hence, determining a surface roughness measure based on a speckle image showing the object, such as a human, while being illuminated by coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm enables mobile electronic devices to measure surface roughness. These devices can be operated by the non-expert user of the mobile electronic device, thus enabling measurements under more natural conditions. Furthermore, the wavelength of the coherent electromagnetic radiation is invisible to the human and thus, the human and specifically the eyes are not distracted or disturbed by the measurement. Consequently, the measurement, and thus the authentication, can be carried out in darkness. Another benefit is that the measurement of the surface roughness, and thus the authentication, can be performed under ambient light, as the wavelength of the coherent electromagnetic radiation is chosen such that the contribution of the ambient light to the intensity signal associated with the speckle image can be eliminated and/or neglected. Since the measurement of the surface roughness is based on the speckle image, no contact with the skin of the human is needed and contact-free operation is enabled, while the analysis of the speckle image for determining the surface roughness measure is fast.
As used herein, the terms “have”, “comprise” or “include” or any arbitrary grammatical variations thereof are used in a non-exclusive way. Thus, these terms may both refer to a situation in which, besides the feature introduced by these terms, no further features are present in the entity described in this context and to a situation in which one or more further features are present. As an example, the expressions “A has B”, “A comprises B” and “A includes B” may both refer to a situation in which, besides B, no other element is present in A (i.e. a situation in which A solely and exclusively consists of B) and to a situation in which, besides B, one or more further elements are present in entity A, such as element C, elements C and D or even further elements.
Further, it shall be noted that the terms “at least one”, “one or more” or similar expressions indicating that a feature or element may be present once or more than once typically are used only once when introducing the respective feature or element. In most cases, when referring to the respective feature or element, the expressions “at least one” or “one or more” are not repeated, notwithstanding the fact that the respective feature or element may be present once or more than once.
Further, as used herein, the terms "preferably", "more preferably", "particularly", "more particularly", "specifically", "more specifically" or similar terms are used in conjunction with optional features, without restricting alternative possibilities. Thus, features introduced by these terms are optional features and are not intended to restrict the scope of the claims in any way. The invention may, as the skilled person will recognize, be performed by using alternative features. Similarly, features introduced by "in an embodiment of the invention" or similar expressions are intended to be optional features, without any restriction regarding alternative embodiments of the invention, without any restrictions regarding the scope of the invention and without any restriction regarding the possibility of combining the features introduced in such way with other optional or non-optional features of the invention.
Summarizing and without excluding further possible embodiments, the following embodiments may be envisaged:
Embodiment 1. A computer-implemented method for authenticating a user of a device, the method comprising: a. receiving a request for accessing one or more functions associated with the device; b. executing at least one authentication process comprising the following steps: b.1 triggering to illuminate the user by coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm, b.2 triggering to generate at least one speckle image of the user while the user is being illuminated by the coherent electromagnetic radiation, b.3 determining at least one surface roughness measure based on the speckle image, b.4 authenticating the user or denying the authentication of the user using the surface roughness measure.
Embodiment 2. The method according to the preceding embodiment, wherein the method further comprises c. allowing or declining the user to access one or more functions associated with the device depending on the authentication or denial in step b.4.
Embodiment 3. The method according to the preceding embodiment, wherein the method further comprises determining if the user corresponds to an authorized user, wherein allowing or declining is further based on determining if the user corresponds to an authorized user.
Embodiment 4. The method according to any one of the preceding embodiments, wherein the coherent electromagnetic radiation is patterned coherent electromagnetic radiation and/or wherein the coherent electromagnetic radiation comprises one or more light beams.
Embodiment 5. The method according to any one of the preceding embodiments, wherein determining the surface roughness measure based on the speckle image refers to determining the surface roughness measure based on a speckle pattern in the speckle image.
Embodiment 6. The method according to any one of the preceding embodiments, wherein the coherent electromagnetic radiation is associated with a wavelength between 900 nm and 1000 nm and/or wherein the coherent electromagnetic radiation is associated with a wavelength between 1100 nm and 1200 nm.
Embodiment 7. The method according to any one of the preceding embodiments, wherein the speckle image is generated by using at least one camera comprising at least one image sensor such as at least one CCD sensor and/or at least one CMOS sensor.
Embodiment 8. The method according to the preceding embodiment, wherein the camera comprises at least one lens and at least one polarizer.
Embodiment 9. The method of any one of the preceding embodiments, wherein a distance between the user and a camera used for generating the speckle image is between 10 cm and 1.5 m and/or wherein the distance between the user and an illumination source used for illuminating the user is between 10 cm and 1.5 m.
Embodiment 10. The method of any one of the preceding embodiments, wherein the surface roughness measure is determined using the speckle image by providing the speckle image to at least one model and receiving the surface roughness measure from the model.
Embodiment 11. The method of any one of the preceding embodiments, wherein the method comprises reducing the speckle image to a predefined size prior to determining the surface roughness measure.
Embodiment 12. The method of any one of the preceding embodiments, wherein the speckle image is associated with a resolution of less than 5 megapixel.
Embodiment 13. The method of any one of the preceding embodiments, wherein the receiving of the request for accessing one or more functions associated with the device is performed by using at least one communication interface.
Embodiment 14. The method of any one of the preceding embodiments, wherein the surface roughness measure is determined by using at least one processor.
Embodiment 15. The method of any one of the preceding embodiments, wherein the authenticating of the user or the denial of the authentication using the surface roughness measure is performed by using an authentication unit of the device and/or a remote authentication unit.
Embodiment 16. The method of any one of the preceding embodiments, wherein the device is selected from the group consisting of: a mobile device, particularly a cell phone, and/or a smart phone, and/or a tablet computer, and/or a laptop, and/or a tablet, and/or a virtual reality device, and/or a wearable, such as a smart watch, or another type of portable computer; a television device; a game console; a personal computer; an access system, such as an access system of an automotive vehicle.
Embodiment 17. The method of any one of the preceding embodiments, wherein steps a. and b. are performed by a mobile electronic device, wherein the speckle image is generated with a camera of the mobile electronic device.
Embodiment 18. A device for authenticating a user of a device, the device comprising: at least one illumination source configured for illuminating the user with coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm; at least one camera configured for generating at least one speckle image showing the user under illumination with the coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm; at least one processor configured for receiving the speckle image from the camera, determining at least one surface roughness measure based on the speckle image and providing the surface roughness measure; and at least one authentication unit configured for receiving the surface roughness measure and configured for authenticating the user or denying the authentication of the user using the surface roughness measure.
Embodiment 19. The device according to the preceding embodiment, wherein the device is configured for performing the method according to any one of the embodiments referring to a method.
Embodiment 20. Use of a surface roughness measure as obtained by a method according to any one of the embodiments referring to a method and/or as obtained by a device according to any one of the preceding embodiments relating to a device for authenticating a user.
Embodiment 21. A computer program comprising instructions which, when the program is executed by the device according to any one of the preceding embodiments referring to a device, cause the device to perform the method according to any one of the preceding embodiments referring to a method.
Embodiment 22. A computer-readable storage medium comprising instructions which, when the instructions are executed by the device according to any one of the preceding embodiments referring to a device, cause the device to perform the method according to any one of the preceding embodiments referring to a method.
Embodiment 23. A non-transitory computer-readable storage medium, the computer-readable storage medium comprising instructions that, when executed by a computer, cause the computer to: receive a request for accessing one or more functions associated with the device; execute at least one authentication process comprising the following steps:
- triggering to illuminate the user by coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm,
- triggering to generate at least one speckle image of the user while the user is being illuminated by the coherent electromagnetic radiation,
- determining at least one surface roughness measure based on the speckle image,
- authenticating the user or denying the authentication of the user using the surface roughness measure.
Short description of the Figures
Further optional features and embodiments will be disclosed in more detail in the subsequent description of embodiments, preferably in conjunction with the dependent claims. Therein, the respective optional features may be realized in an isolated fashion as well as in any arbitrary feasible combination, as the skilled person will realize. The scope of the invention is not restricted by the preferred embodiments. The embodiments are schematically depicted in the Figures. Therein, identical reference numbers in these Figures refer to identical or functionally comparable elements.
In the Figures:
FIG. 1 illustrates an exemplary embodiment of a device for authenticating a user of a device;
FIG. 2 illustrates an example for determining a surface roughness measure;
FIG. 3 illustrates an example embodiment of a method for authenticating a user of a device; and
FIG. 4 illustrates an embodiment of a surface associated with a user.
Detailed description of the embodiments
The following embodiments are mere examples for implementing the method, the system or application device disclosed herein and shall not be considered limiting.
FIG. 1 illustrates an exemplary embodiment of a device 102 for authenticating a user 114 of a device 102. The device 102 comprises an illumination source 104, a camera 106 and a processor 108. The surface roughness may be determined with respect to skin of a user 114. The skin may be associated with a surface roughness. The surface roughness can be evaluated based on a surface roughness measure. The skin may be exposed to coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm.
The coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm may be emitted by the illumination source 104. The illumination source 104 may comprise one or more radiation sources such as a VCSEL array or a single laser diode. The radiation source may be associated with one or more light beams. For example, a single laser diode may emit one light beam, whereas a VCSEL array may emit a plurality of light beams. Preferably, the number of light beams corresponds to the number of VCSELs in the VCSEL array. Thus, the illumination source may emit a plurality of light beams. The plurality of light beams may result in projecting a pattern onto the object. Preferably, the illumination source 104 may emit patterned coherent electromagnetic radiation. Patterned coherent electromagnetic radiation may be suitable for projecting a pattern onto the object. Additionally or alternatively, the illumination source 104 may comprise one or more optical elements. An optical element may be suitable for splitting and/or multiplying light beams. Examples of optical elements can be diffractive optical elements, refractive optical elements, meta-surface elements, lenses or the like. Hence, an illumination source 104 comprising a single laser diode or a VCSEL array in combination with an optical element may result in illuminating the object with patterned coherent electromagnetic radiation. The illumination source 104 may be associated with a field of illumination as indicated by the two lines originating from the illumination source 104.
Depending on the body part illuminated by the coherent electromagnetic radiation, different skin surface roughness measures may be determined. Different body parts of the user 114 may be associated with different skin roughness 116. For example, a hand may be associated with a higher skin surface roughness, whereas the face may be associated with a lower skin surface roughness. The surface roughness may be characteristic for a body part of the user 114 and/or for the identity of the user 114.
A speckle image may be generated while the user 114 is illuminated with coherent electromagnetic radiation, preferably patterned coherent electromagnetic radiation. Coherent electromagnetic radiation may interact with the skin of the user 114 once it is projected onto the skin of the user 114. Coherent electromagnetic radiation forms speckle when interacting with a non-homogeneous and uneven surface such as skin. The different wavefronts of the coherent electromagnetic radiation may interact by means of interference. The interference of the different wavefronts of the coherent electromagnetic radiation may result in contrast variations of the coherent electromagnetic radiation on the skin of the user 114. These contrast variations may depend on the surface roughness associated with the surface the coherent electromagnetic radiation is illuminating. Hence, the roughness associated with the skin may influence the formation of speckle, such as the size and orientation of the speckle. Consequently, analysis of the speckle may result in a surface roughness measure.
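As an illustrative (and deliberately simplified) way to quantify such contrast variations, the speckle contrast C = σ/μ of the intensity can be computed over small windows of the speckle image; the window size and the helper name are assumptions, not part of the disclosure, and a calibrated mapping from contrast to a surface roughness measure would be required in practice:

```python
import numpy as np

def local_speckle_contrast(img: np.ndarray, win: int = 8) -> np.ndarray:
    """Speckle contrast C = std/mean of intensity over non-overlapping
    windows; surface roughness (relative to the wavelength) changes the
    speckle statistics and hence this contrast map."""
    h, w = img.shape
    h, w = h - h % win, w - w % win          # crop to a whole number of windows
    blocks = img[:h, :w].reshape(h // win, win, w // win, win)
    mean = blocks.mean(axis=(1, 3))
    std = blocks.std(axis=(1, 3))
    return std / (mean + 1e-12)              # avoid division by zero

speckle = np.random.default_rng(3).random((64, 64))  # synthetic speckle image
contrast_map = local_speckle_contrast(speckle)       # one value per window
```

A subsequent step could aggregate the contrast map, e.g. by averaging, into a single surface roughness measure.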
For analyzing the speckle, at least one speckle image is generated with a camera 106. The camera 106 may comprise a sensor 110. Optionally, the camera 106 may comprise a lens 112. In another example, the camera 106 may comprise a polarizer. The coherent electromagnetic radiation is in the infrared range. Thus, the coherent electromagnetic radiation may penetrate deeper than the epidermis. The surface roughness measure may specify the surface roughness associated with the surface of the skin. Consequently, information obtained by coherent electromagnetic radiation penetrating, for example, the dermis or deeper layers may overlay the desired information relating to the surface of the skin. A polarizer may be suitable for selecting the coherent electromagnetic radiation reflected from the surface of the skin and may be suitable for deselecting parts of the coherent electromagnetic radiation having interacted with deeper skin layers such as the dermis.
The camera 106 may be associated with a field of view as indicated by the two lines originating from the camera 106. The camera 106 may have a field of view between 10°x10° and 75°x75°, preferably 55°x65°. The camera 106 may have a resolution below 2 megapixel, preferably between 0.3 megapixel and 1.5 megapixel. Examples of the speckle image can be found in FIG. 2.
The field of illumination may correspond at least partially to the field of view. At least a fraction of the field of view associated with the camera 106 may be independent of illumination with coherent electromagnetic radiation. Consequently, the speckle image may show the object at least in parts under illumination with coherent electromagnetic radiation.
Further, the speckle image may be provided to and/or received by a processor 108. The processor 108 may comprise one or more processors. The processor 108 may determine the surface roughness measure based on the speckle image. The processor 108 may determine the surface roughness measure as described within the context of FIG. 2.
The device 102 further comprises at least one authentication unit 118. The authentication unit 118 may be or may comprise at least one processor (in this embodiment the processor 108) and/or may be designed as software or application. The authentication may comprise a plurality of steps.
For example, the authentication unit 118 may perform at least one face detection using a flood image. The face detection may comprise analyzing the flood image. In particular, the analyzing of the flood image may comprise using at least one image recognition technique, in particular a face recognition technique. An image recognition technique comprises at least one process of identifying the user 114 in an image. The image recognition may comprise using at least one technique selected from the group consisting of: color-based image recognition, e.g. using features such as template matching; segmentation and/or blob analysis, e.g. using size or shape; machine learning and/or deep learning, e.g. using at least one convolutional neural network.
For example, the authentication may comprise identifying the user 114. The identifying may comprise assigning an identity to a detected face and/or at least one identity check and/or verifying an identity of the user. The identifying may comprise performing a face verification of the imaged face to be the user’s face. The identifying the user 114 may comprise matching the flood image, e.g. showing a contour of parts of the user, in particular parts of the user’s face,
with a template, e.g. a template image generated within an enrollment process. The identifying of the user may comprise determining if the imaged face is the face of the user 114, in particular if the imaged face corresponds to at least one image of the user’s face stored in at least one memory, e.g. of the device. Authentication may be successful if the flood image can be matched with an image template. Authentication may be unsuccessful if the flood image cannot be matched with an image template.
For example, the identifying of the user 114 may comprise determining a plurality of facial features. The analyzing may comprise comparing, in particular matching, the determined facial features with template features. The template features may be features extracted from at least one template. The template may be or may comprise at least one image generated in an enrollment process, e.g. when initializing the device. The template may be an image of an authorized user. The template features and/or the facial features may comprise a vector. Matching of the features may comprise determining a distance between the vectors. The identifying of the user may comprise comparing the distance of the vectors to at least one predefined limit. The user 114 may be successfully identified in case the distance is below the predefined limit, at least within tolerances. The user 114 may be declined and/or rejected otherwise.
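As a rough illustration of the matching step described above (not the patented implementation), a feature vector may be compared to a template vector by a distance threshold. The function name, the threshold value, and the embeddings below are hypothetical:

```python
import numpy as np

def identify_user(features: np.ndarray, template_features: np.ndarray,
                  limit: float = 1.0) -> bool:
    """Illustrative check: accept if the Euclidean distance between the
    determined feature vector and the template feature vector is below a
    predefined limit (hypothetical value)."""
    distance = float(np.linalg.norm(features - template_features))
    return distance < limit

# Hypothetical embeddings: a close vector is accepted, a distant one rejected.
template = np.array([0.1, 0.9, 0.3])
assert identify_user(np.array([0.12, 0.88, 0.31]), template, limit=0.5)
assert not identify_user(np.array([0.9, 0.1, 0.8]), template, limit=0.5)
```

In practice the vectors would be embeddings produced by a face recognition model such as the one discussed below, and the limit would be tuned on enrollment data.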
For example, the image recognition may comprise using at least one model, in particular a trained model comprising at least one face recognition model. The analyzing of the flood image may be performed by using a face recognition system, such as FaceNet, e.g. as described in Florian Schroff, Dmitry Kalenichenko, James Philbin, “FaceNet: A Unified Embedding for Face Recognition and Clustering”, arXiv:1503.03832. The trained model may comprise at least one convolutional neural network. For example, the convolutional neural network may be designed as described in M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks”, CoRR, abs/1311.2901, 2013, or C. Szegedy et al., “Going deeper with convolutions”, CoRR, abs/1409.4842, 2014. For more details with respect to the convolutional neural network for the face recognition system, reference is made to Florian Schroff, Dmitry Kalenichenko, James Philbin, “FaceNet: A Unified Embedding for Face Recognition and Clustering”, arXiv:1503.03832. As training data, labelled image data from an image database may be used. Specifically, labeled faces may be used from one or more of G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments”, Technical Report 07-49, University of Massachusetts, Amherst, October 2007, the Youtube® Faces Database as described in L. Wolf, T. Hassner, and I. Maoz, “Face recognition in unconstrained videos with matched background similarity”, in IEEE Conf. on CVPR, 2011, or the Google® Facial Expression Comparison dataset. The training of the convolutional neural network may be performed as described in Florian Schroff, Dmitry Kalenichenko, James Philbin, “FaceNet: A Unified Embedding for Face Recognition and Clustering”, arXiv:1503.03832.
The authentication unit 118 is configured for receiving the surface roughness measure and for authenticating the user 114 or denying the authentication of the user 114 using the surface roughness measure.
For example, the face detection and identification of the user 114 may be performed before authenticating or denying the user using the surface roughness measure. The authentication process may be aborted in case the user is not successfully identified. As outlined above, authentication using two-dimensional images can be tricked. The method according to the present invention proposes to use the surface roughness measure as an additional security feature for authentication. The authentication using the flood image may be validated using the surface roughness measure.
Alternatively, authenticating or denying the user 114 using the surface roughness measure may be performed regardless of whether the face detection and identification of the user 114 was performed. The surface roughness measure may be used as a biometric identifier for uniquely identifying the user 114.
For example, the authentication unit 118 may be configured for determining if the surface roughness measure corresponds to a surface roughness measure of a human being. For example, the authentication unit 118 may be configured for determining if the surface roughness measure corresponds to a surface roughness measure of the specific user 114. Determining if the surface roughness measure corresponds to a surface roughness measure of a human being and/or of the specific user may comprise comparing the surface roughness measure to at least one pre-defined or pre-determined range of values of the surface roughness measure, e.g. stored in at least one database, e.g. of the device 102 or of a remote database such as of a cloud. In case the determined surface roughness measure is, at least within tolerances, within the pre-defined or pre-determined range of values of the surface roughness measure, the user 114 is authenticated; otherwise the authentication is unsuccessful. For example, the surface roughness measure may be a human skin roughness. In case the determined human skin roughness is within the range of 10 µm to 150 µm, the user is authenticated. However, other ranges are possible.
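The range check above can be sketched minimally as follows. The function name, the default tolerance, and the example values are hypothetical; the 10 µm to 150 µm range is the example range from the text:

```python
def authenticate_by_roughness(roughness_um: float,
                              lower_um: float = 10.0,
                              upper_um: float = 150.0,
                              tolerance_um: float = 0.0) -> bool:
    """Accept if the measured skin roughness (in micrometres) falls within
    the pre-defined range, at least within tolerances."""
    return (lower_um - tolerance_um) <= roughness_um <= (upper_um + tolerance_um)

assert authenticate_by_roughness(55.0)                     # typical human skin value
assert not authenticate_by_roughness(500.0)                # e.g. a very rough spoof surface
assert authenticate_by_roughness(152.0, tolerance_um=5.0)  # within tolerances
```

A real implementation would draw the range from a database, per-user or per-population, as the text describes.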
The authentication unit 118 may be configured for allowing or declining the user 114 to access one or more functions associated with the device 102 depending on the authentication or denial. The allowing may comprise granting permission to access the one or more functions. In particular, the device 102 may comprise at least one authorization unit. The authorization unit may be configured for access control. The authorization unit may comprise at least one processor, e.g. the processor 108, or may be designed as software or application. The authorization unit and the authentication unit 118 may be embodied integrally, e.g. by using the same processor. The authorization unit may be configured for allowing the user to access the one or more functions, e.g. on the device 102, e.g. unlocking the device 102, in case of successful authentication of the user, or declining the user to access the one or more functions, e.g. on the device 102, in case of non-successful authentication.
The device 102 may be configured for displaying a result of the authentication and/or the authorization e.g. by using at least one communication interface such as a user interface, e.g. a display.
All system components may be part of the device 102. In other embodiments, the components may be separated between a plurality of devices. For example, the processor 108 may be a server, whereas the illumination source 104 and the camera 106 may be part of one device 102 such as a mobile electronic device. The camera 106 may provide the speckle image to the processor 108. The processor 108 may provide the surface roughness measure to a device for displaying the surface roughness measure and/or a device for processing the surface roughness measure. In an example, the device 102 comprising the camera 106 and the illumination source 104 may further comprise a display for displaying the surface roughness measure and/or a surface roughness processor configured for processing the surface roughness measure further. Optionally, the device may comprise the processor 108.
FIG. 2 illustrates an example for determining a surface roughness measure. The surface roughness measure is determined based on the speckle images 202a, 202b. Examples for speckle images 202a, 202b are shown in FIG. 2. The speckle images 202a, 202b may be cropped to a predefined size. The speckle images 202a, 202b may be transformed by means of Fourier transformation. The result of Fourier transforming the speckle images 202a, 202b can be referred to as Fourier plots 204a, 204b. In particular, the Fourier plots may be obtained by means of Fast Fourier transform (FFT). The Fourier plots 204a, 204b may represent the speckle images 202a, 202b in the frequency domain. The Fourier plots 204a, 204b may represent the distribution of frequencies associated with the speckle images 202a, 202b. Consequently, the Fourier plots 204a, 204b may comprise the magnitude of frequencies associated with the speckle images 202a, 202b. The Fourier plots 204a, 204b may be further transformed into power spectral density (PSD) plots 206a, 206b. The Fourier plots 204a, 204b may be transformed into the PSD plots 206a, 206b by multiplying the magnitudes of the respective Fourier plots 204a, 204b with their conjugates. Radial averaging with respect to a predefined point, such as the center point of a quadratic image, may result in the double logarithmic magnitude versus frequency plots as can be seen on the right side of FIG. 2. Radial averaging may refer to averaging values with the same distance to the predefined point. Determining the surface roughness based on the PSD is advantageous since the PSD may take vertical and lateral features into account. This provides an in-depth picture of a surface roughness associated with a surface structure. Consequently, a realistic description of the surface of the object can be achieved. Further, an estimation of the distribution of surface irregularities is enabled.
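The FFT, magnitude-times-conjugate, and radial-averaging steps above can be sketched as follows; this is an illustrative numpy implementation, not the patented one, and the function name and image size are assumptions:

```python
import numpy as np

def radially_averaged_psd(speckle_image: np.ndarray):
    """Transform a (square, cropped) speckle image into its radially
    averaged power spectral density around the image centre."""
    fourier = np.fft.fftshift(np.fft.fft2(speckle_image))
    psd = (fourier * np.conj(fourier)).real          # magnitude times its conjugate

    n = speckle_image.shape[0]
    y, x = np.indices(psd.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)  # distance to the centre point

    # Average all PSD values sharing the same integer radius (radial averaging).
    sums = np.bincount(r.ravel(), weights=psd.ravel())
    counts = np.bincount(r.ravel())
    radial_psd = sums / counts
    freqs = np.arange(radial_psd.size)
    return freqs, radial_psd

rng = np.random.default_rng(0)
freqs, radial_psd = radially_averaged_psd(rng.random((64, 64)))
```

Plotting `radial_psd` against `freqs` on double logarithmic axes yields the magnitude-versus-frequency plot described above.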
The double logarithmic plotting may be used to visualize the fractal dimension. The fractal dimension may be an example for a surface roughness measure. The fractal dimension may be determined by fitting the double logarithmic magnitude versus frequency plots associated with the speckle images 202a, 202b with a linear function. In particular, the fractal dimension may be determined as the slope of the linear function fitted to the double logarithmic magnitude versus frequency plots associated with the speckle images 202a, 202b. A high surface roughness may correspond to a high fractal dimension. A low surface roughness may correspond to a low fractal dimension.
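The slope fit described above might look as follows; a synthetic power-law PSD is used as stand-in data, and the function name is an assumption:

```python
import numpy as np

def fit_loglog_slope(freqs: np.ndarray, radial_psd: np.ndarray) -> float:
    """Fit a linear function to the double logarithmic magnitude-versus-
    frequency data; its slope serves as the fractal-dimension-style
    surface roughness measure described above."""
    mask = (freqs > 0) & (radial_psd > 0)   # the logarithm is undefined at zero
    slope, _intercept = np.polyfit(np.log10(freqs[mask]),
                                   np.log10(radial_psd[mask]), deg=1)
    return float(slope)

# Synthetic PSD following f^(-2): the fitted slope recovers -2.
freqs = np.arange(64, dtype=float)
psd = np.zeros_like(freqs)
psd[1:] = freqs[1:] ** -2.0
slope = fit_loglog_slope(freqs, psd)   # close to -2.0
```

On real speckle data the slope would then be compared (directly or after conversion to a fractal dimension) against enrolled values.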
Additionally or alternatively, the surface roughness measure may comprise one or more parameters of an autocorrelation function associated with the speckle images 202a, 202b. The autocorrelation function may be obtained by inverse Fourier transform of the PSD plots 206a, 206b. The autocorrelation function may be defined as follows:

R(r) = σ² · exp(−(r/ξ)^(2α))

The parameters ξ, σ and α may be further examples for surface roughness measures. ξ may be referred to as the lateral correlation length, σ may be referred to as the standard deviation of the height associated with surface features of the object, and α may be referred to as the roughness exponent. A high ξ may reflect a low surface roughness, a high α may reflect a high surface roughness and a high σ may reflect a high surface roughness.
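The self-affine autocorrelation model with lateral correlation length ξ, height standard deviation σ, and roughness exponent α can be evaluated directly; this is an illustrative sketch, and the parameter values below are arbitrary:

```python
import numpy as np

def autocorrelation(r, sigma, xi, alpha):
    """Self-affine autocorrelation model R(r) = sigma^2 * exp(-(r/xi)^(2*alpha)):
    xi is the lateral correlation length, sigma the standard deviation of the
    surface height, alpha the roughness exponent."""
    r = np.asarray(r, dtype=float)
    return sigma**2 * np.exp(-(r / xi) ** (2.0 * alpha))

# At r = 0 the model returns sigma^2; at r = xi it has decayed by a factor
# of 1/e, independently of the roughness exponent alpha.
sigma, xi = 2.0, 5.0
assert np.isclose(autocorrelation(0.0, sigma, xi, 0.8), sigma**2)
assert np.isclose(autocorrelation(xi, sigma, xi, 0.8), sigma**2 * np.exp(-1))
```

In practice the parameters would be obtained by fitting this model to the autocorrelation computed from the measured PSD.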
Further examples for surface roughness measures may be speckle contrast, speckle modulation, speckle size or the like. These examples are readily available from the speckle images 202a, 202b.
Speckle contrast γ may refer to a ratio of a standard deviation of intensity values σ_I, preferably associated with a predefined area of the speckle images 202a, 202b, to a mean of the respective intensity values ⟨I⟩. Speckle contrast may be defined according to the following equation:

γ = σ_I / ⟨I⟩
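The contrast ratio above is straightforward to compute from a predefined image area; this sketch uses exponentially distributed random intensities as a stand-in for fully developed speckle:

```python
import numpy as np

def speckle_contrast(region: np.ndarray) -> float:
    """gamma = sigma_I / <I>: standard deviation of the intensity values of a
    predefined image area divided by their mean."""
    return float(np.std(region) / np.mean(region))

# A uniform (perfectly smooth) area has contrast 0; fully developed speckle,
# whose intensity follows negative-exponential statistics, has contrast near 1.
rng = np.random.default_rng(1)
speckle_like = rng.exponential(scale=1.0, size=(128, 128))
assert speckle_contrast(np.full((16, 16), 5.0)) == 0.0
```

A measured contrast well below 1 can thus hint at a surface smoother than skin, e.g. a printed photograph.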
Speckle modulation M may be calculated based on the following formula:

M = (1/N) · Σ_{i,j} (I_max − I_min) / (I_max + I_min)

wherein N may refer to the total number of predefined areas of the speckle images 202a, 202b, the indexes i and j may refer to pixel numbers and thus may define the predefined area of the speckle images 202a, 202b, and wherein I_max may refer to the maximum intensity value associated with the predefined area of the speckle images 202a, 202b and I_min may refer to the minimum intensity value associated with the predefined area of the speckle images 202a, 202b.
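The per-area modulation average can be sketched as follows; the block size of 8×8 pixels is an assumption, not taken from the text:

```python
import numpy as np

def speckle_modulation(image: np.ndarray, block: int = 8) -> float:
    """M = (1/N) * sum over the N predefined areas of
    (I_max - I_min) / (I_max + I_min), with I_max/I_min the extreme
    intensity values inside each area."""
    h, w = image.shape
    ratios = []
    for i in range(0, h - block + 1, block):       # i, j select the predefined area
        for j in range(0, w - block + 1, block):
            area = image[i:i + block, j:j + block]
            i_max, i_min = float(area.max()), float(area.min())
            ratios.append((i_max - i_min) / (i_max + i_min))
    return float(np.mean(ratios))                   # division by the number of areas N

# A uniform image has zero modulation; a high-contrast speckle-like image
# approaches 1.
assert speckle_modulation(np.full((32, 32), 3.0)) == 0.0
```

The same blockwise loop also yields the per-area statistics needed for a local speckle contrast, should both measures be combined.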
The speckle size may be calculated for example by multiplying the number of pixels with the size of the pixels. In some embodiments, the speckle size may be averaged over a part of the one or more speckle images 202a, 202b and/or over the full one or more speckle images 202a, 202b.
Another embodiment for determining a surface roughness measure may comprise providing at least one of the speckle images 202a, 202b to a data-driven model such as a convolutional neural network (CNN). The data-driven model may receive at least one of the speckle images 202a, 202b at an input layer. The data-driven model may further comprise one or more hidden layers
and an output layer. The speckle images 202a, 202b may be of a predefined size. The input layer may be specified according to the predefined size of the speckle images 202a, 202b. The layers of the data-driven model may be connected. Hence, the speckle images 202a, 202b may be passed through the layers. In particular, the pixel values associated with the speckle images 202a, 202b may pass through the layers of the data-driven model. While the pixel values pass through the layers of the data-driven model, the pixel values may be allowed to interact with each other and/or may be combined, preferably non-linearly. Additionally or alternatively, the pixel values may be transformed. Preferably, the pixel values may be transformed into an indication of the surface roughness measure by the data-driven model. The indication of the surface roughness measure may comprise the surface roughness measure and/or the surface roughness measure may be derivable from the indication of the surface roughness measure. Consequently, the surface roughness measure may be received from the data-driven model and/or the data-driven model may provide the surface roughness measure.
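The pass of pixel values through convolutional, non-linear, and output layers can be illustrated with a tiny hand-rolled forward pass. This is a didactic sketch only: the weights are random rather than trained, and the architecture (one 3×3 convolution, ReLU, global average pooling, one dense output) is an assumption, not the patented model:

```python
import numpy as np

def cnn_roughness_sketch(speckle: np.ndarray, rng: np.random.Generator) -> float:
    """Minimal CNN-style forward pass mapping a speckle image to a scalar
    indication of the surface roughness measure. Untrained random weights."""
    kernels = rng.normal(size=(4, 3, 3))            # 4 hidden feature maps
    h, w = speckle.shape
    feature_maps = np.empty((4, h - 2, w - 2))
    for k in range(4):                               # valid 3x3 convolution
        for y in range(h - 2):
            for x in range(w - 2):
                feature_maps[k, y, x] = np.sum(speckle[y:y+3, x:x+3] * kernels[k])
    hidden = np.maximum(feature_maps, 0.0)           # non-linear combination (ReLU)
    pooled = hidden.mean(axis=(1, 2))                # global average pooling
    weights = rng.normal(size=4)                     # dense output layer
    return float(pooled @ weights)                   # scalar indication

rng = np.random.default_rng(42)
indication = cnn_roughness_sketch(rng.random((16, 16)), rng)
```

A production model would instead be a trained network (e.g. in a deep-learning framework) whose output layer is calibrated against labelled roughness values, as the training paragraph below describes.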
In an embodiment, the data-driven model may be configured for providing the surface roughness measure, in particular by transforming the indication of the surface roughness measure into a surface roughness measure.
Using a data-driven model may be advantageous since such models may learn correlations that are non-obvious or may reflect correlations between different factors that an expert would not easily consider. So, the use of a data-driven model may reduce the time invested while achieving accuracies exceeding white-box models.
In an embodiment, the data-driven model may provide the indication of the surface roughness measure and/or the indication of the surface roughness measure may be received from the data-driven model. From the indication of the surface roughness measure, the surface roughness measure may be derivable by means of a mathematical operation and/or by means of a look-up table. For example, the data-driven model may be a classifier classifying speckle images 202a, 202b into different groups of surface roughness measures. Hence, the output may indicate the group label. The group label may indicate the surface roughness measure. The relation between the group label and the surface roughness measure may be specified e.g. by the look-up table. Other embodiments for establishing the relation between the surface roughness measure and the indication of the surface roughness measure may be feasible.
For the purpose described above, the data-driven model may be parametrized and/or trained according to a training data set. The training data set may comprise a plurality of speckle images 202a, 202b and corresponding surface roughness measures and/or indications of the surface roughness measure. The surface roughness measure and/or the indication of the surface roughness measure may refer to a label associated with the speckle images 202a, 202b. Parametrizing may be a prerequisite for training the data-driven model. The data-driven model may be trained based on the parametrizing of the data-driven model.
Another embodiment for determining a surface roughness measure may comprise providing the speckle images 202a, 202b to a physical model. The physical model, preferably, may reflect physical phenomena in mathematical form, e.g. including first-principles models. A physical model may comprise a set of equations that describe an interaction between the object, in particular the surface of the object, and the coherent electromagnetic radiation, thereby resulting in a surface roughness measure. For example, the physical model may comprise and/or combine at least one of the relations associated with the speckle contrast, the speckle modulation, the speckle size, the fractal dimension or a combination thereof. The physical model may be a white box model. The physical model may transform the speckle images 202a, 202b into a surface roughness measure. The physical model may combine the relations described above linearly, e.g. to introduce a weighting between the speckle contrast, the speckle modulation, the speckle size, the fractal dimension or the like. Some of the factors described above may be related closely to the surface roughness measure, while others may be related loosely. Hence, the weighting may reflect these relations, which results in a higher accuracy. This in turn enables the reliable determination of the surface roughness, because a single one of the factors alone may not be sufficient for significant results.
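The linear weighting of the individual factors can be sketched as a plain dot product; the weight values here are hypothetical placeholders, not values from the text:

```python
import numpy as np

def physical_roughness_model(contrast: float, modulation: float,
                             speckle_size: float, fractal_dim: float,
                             weights=(0.4, 0.3, 0.1, 0.2)) -> float:
    """Linear combination of speckle-based factors. The weights (hypothetical
    values) express how closely each factor is related to the surface
    roughness measure."""
    factors = np.array([contrast, modulation, speckle_size, fractal_dim])
    return float(np.dot(weights, factors))

# Example factor values (arbitrary, for illustration only).
measure = physical_roughness_model(0.9, 0.7, 2.5, 1.8)
```

In a white-box setting the weights would be derived from, or validated against, the physical relations rather than fitted blindly.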
FIG. 3 illustrates an example embodiment of a computer-implemented method for authenticating a user 114 of a device 102 such as the device described with respect to FIG. 1. The method comprises the following steps: a. (302) receiving a request for accessing one or more functions associated with the device; b. executing at least one authentication process comprising the following steps: b.1 (304) triggering to illuminate the user by coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm, b.2 (306) triggering to generate at least one speckle image of the user while the user is being illuminated by the coherent electromagnetic radiation, b.3 (308) determining at least one surface roughness measure based on the speckle image, b.4 (310) authenticating the user or denial using the surface roughness measure.
Prior to determining the surface roughness measure in block 308, the speckle image may be cut to a predefined size. Thereby, background may be removed and the degree of speckles associated with the user 114 may be increased. Speckles associated with the user 114 may refer to speckles caused by coherent electromagnetic radiation illuminating at least a part of the user 114.
In block 308 the surface roughness measure may be determined as described within the context of FIG. 2. Preferably, the surface roughness measure may be determined by a processor as described within the context of FIG. 1. The processor, the camera and the illumination source may be part of one device and/or system.
In block 308 the surface roughness measure may be provided. For example, the surface roughness measure may be provided to an application of a mobile electronic device. Further, the application may be configured for initiating the determining of the surface roughness measure and/or for initiating the generating of the speckle image. The application may display the surface roughness measure, in particular the value of the surface roughness measure for example to the user 114. In particular the value of the surface roughness measure may be provided to the user 114. The application may further process the surface roughness measure to derive properties of the human skin.
FIG. 4 illustrates an embodiment of a surface 402 associated with a user 114. The surface 402 may comprise a plurality of surface features. Surface features may be lateral surface features 410 and/or vertical surface features 408. The lateral surface feature 410 may be quantified according to the dashed line indicating a length of a sink on the surface 402. The vertical surface features 408 may be quantified according to the dashed line indicating a height of an uplift on the surface 402. This surface 402 may be illuminated by coherent electromagnetic radiation emitted from the illumination source 406 as described in the context of FIG. 1 and FIG. 3. A speckle image may be generated while the surface may be illuminated by coherent electromagnetic radiation with the camera 404 as described in the context of FIG. 1 and FIG. 3.
List of reference numbers
102 device
104 illumination source
106 camera
108 processor
sensor
lens
114 user
skin roughness
118 authentication unit
202a speckle image
202b speckle image
speckle image
204a Fourier plot
204b Fourier plot
302 receiving a request
304 triggering to illuminate the user
306 triggering to generate at least one speckle image
308 determining at least one surface roughness measure
310 authenticating the user
402 surface
404 camera
406 illumination source
408 vertical surface features
410 lateral surface features
Claims
1. A computer-implemented method for authenticating a user (114) of a device (102), the method comprising: a. (302) receiving a request for accessing one or more functions associated with the device (102); b. executing at least one authentication process comprising the following steps: b.1 (304) triggering to illuminate the user (114) by coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm, b.2 (306) triggering to generate at least one speckle image of the user (114) while the user is being illuminated by the coherent electromagnetic radiation, b.3 (308) determining at least one surface roughness measure based on the speckle image, b.4 (310) authenticating the user (114) or denial using the surface roughness measure.
2. The method according to the preceding claim, wherein the method further comprises c. allowing or declining the user to access one or more functions associated with the device depending on the authentication or denial in step b.4.
3. The method according to the preceding claim, wherein the method further comprises determining if the user (114) corresponds to an authorized user, wherein allowing or declining is further based on determining if the user (114) corresponds to an authorized user.
4. The method according to any one of the preceding claims, wherein the coherent electromagnetic radiation is patterned coherent electromagnetic radiation and/or wherein the coherent electromagnetic radiation comprises one or more light beams.
5. The method according to any one of the preceding claims, wherein determining the surface roughness measure based on the speckle image refers to determining the surface roughness measure based on a speckle pattern in the speckle image.
6. The method of any one of the preceding claims, wherein a distance between the user and a camera used for generating the speckle image is between 10 cm and 1.5 m and/or wherein the distance between the user and an illumination source used for illuminating the user is between 10 cm and 1.5 m.
7. The method of any one of the preceding claims, wherein the surface roughness measure is determined using the speckle image by providing the speckle image to at least one model and receiving the surface roughness measure from the model.
8. The method of any one of the preceding claims, wherein the device (102) is selected from the group consisting of: a mobile device, particularly a cell phone, and/or a smart phone,
and/or a tablet computer, and/or a laptop, and/or a tablet, and/or a virtual reality device, and/or a wearable, such as a smart watch; or another type of portable computer, a television device; a game console; a personal computer; an access system such as of an automotive.
9. The method of any one of the preceding claims, wherein steps a) to b) are performed by a mobile electronic device, wherein the speckle image is generated with a camera of the mobile electronic device.
10. A device (102) for authenticating a user (114) of a device (102), the device (102) comprising: at least one illumination source (104, 406) configured for illuminating the user (114) with coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm; at least one camera (106, 404) configured for generating at least one speckle image showing the user (114) under illumination with the coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm, at least one processor (108) configured for receiving the speckle image from the camera (106) and determining at least one surface roughness measure based on the speckle image and providing the surface roughness measure, at least one authentication unit (118) configured for receiving the surface roughness measure and for authenticating the user (114) or denying the authentication of the user (114) using the surface roughness measure.
11. The device (102) according to the preceding claim, wherein the device (102) is configured for performing the method according to any one of claims referring to a method.
12. Use of a surface roughness measure as obtained by a method according to any one of claims referring to a method and/or as obtained by a device (102) according to any one of the preceding claims relating to a device for authentication a user.
13. A computer program comprising instructions which, when the program is executed by the device (102) according to any one of the preceding claims referring to a device, cause the device (102) to perform the method according to any one of the preceding claims referring to a method.
14. A computer-readable storage medium comprising instructions which, when the instructions are executed by the device (102) according to any one of the preceding claims referring to a device, cause the device (102) to perform the method according to any one of the preceding claims referring to a method.
15. A non-transitory computer-readable storage medium comprising instructions that, when executed by a computer, cause the computer to:
receive a request for accessing one or more functions associated with the device (102); execute at least one authentication process comprising the following steps:
- (304) triggering to illuminate the user (114) by coherent electromagnetic radiation associated with a wavelength between 850 nm and 1400 nm,
- (306) triggering to generate at least one speckle image of the user (114) while the user (114) is being illuminated by the coherent electromagnetic radiation,
- (308) determining at least one surface roughness measure based on the speckle image,
- (310) authenticating the user (114) or denial using the surface roughness measure.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP23192195.8 | 2023-08-18 | ||
| EP23192195 | 2023-08-18 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025040591A1 true WO2025040591A1 (en) | 2025-02-27 |
Family
ID=87748002
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2024/073113 Pending WO2025040591A1 (en) | 2023-08-18 | 2024-08-16 | Skin roughness as security feature for face unlock |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025040591A1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120747042A (en) * | 2025-08-19 | 2025-10-03 | 陕西凝远新材料科技股份有限公司 | Autoclaved aerated concrete plate surface flatness evaluation method |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2014113728A1 (en) * | 2013-01-17 | 2014-07-24 | Sionyx, Inc. | Biometric imaging devices and associated methods |
| US20200134344A1 (en) * | 2018-10-25 | 2020-04-30 | Alibaba Group Holding Limited | Spoof detection using structured light illumination |
| US20220094456A1 (en) | 2020-09-23 | 2022-03-24 | Nokia Technologies Oy | Authentication by dielectric properties of skin |
Non-Patent Citations (5)
| Title |
|---|
| C. Szegedy et al.: "Going deeper with convolutions", CoRR, 2014 |
| Florian Schroff, Dmitry Kalenichenko, James Philbin: "FaceNet: A Unified Embedding for Face Recognition and Clustering", arXiv:1503.03832 |
| G. B. Huang, M. Ramesh, T. Berg, E. Learned-Miller: "Labeled faces in the wild: A database for studying face recognition in unconstrained environments", Technical Report 07-49, University of Massachusetts, October 2007 |
| L. Wolf, T. Hassner, I. Maoz: "Face recognition in unconstrained videos with matched background similarity", IEEE Conf. on CVPR, 2011 |
| M. D. Zeiler, R. Fergus: "Visualizing and understanding convolutional networks", CoRR, 2013 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24755312; Country of ref document: EP; Kind code of ref document: A1 |