WO2021162682A1 - Fingerprint sensors with reduced-illumination patterns
- Publication number
- WO2021162682A1 (PCT/US2020/017745)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- images
- display
- blocks
- user device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/1365—Matching; Classification
- G06V40/1376—Matching features related to ridge properties or fingerprint texture
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/141—Control of illumination
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/13—Sensors therefor
- G06V40/1318—Sensors therefor using electro-optical elements or layers, e.g. electroluminescent sensing
Definitions
- This disclosure describes methods and techniques for using reduced-illumination patterns with fingerprint sensors, as well as apparatuses including those fingerprint sensors. Using these reduced-illumination patterns, the techniques can authenticate a user in a shorter amount of time, with reduced power consumption and reduced damage to a display screen, such as an organic light-emitting diode (OLED) display.
- a computer-implemented method comprises determining, based on a location of a user touch to a display of a display system, small regions of the display, the small regions within a touch area of the display over which the user touch is superimposed, the small regions representing a reduced area relative to all of the touch area. Then the method includes illuminating, with radiation, each of the small regions of the display, the illumination effective to cause the radiation to reflect off a user’s skin touching the touch area. After illuminating, the method includes capturing images at a sensor, the sensor configured to receive the radiation reflected off the user’s skin touching the touch area at one or more of the small regions, the images including one or more images corresponding to the one or more of the small regions, respectively.
- the method includes comparing the one or more images to an enrolled template, the enrolled template associated with a fingerprint of a verified user. The method also includes authenticating the user touch to the display based on the comparing of the one or more images to the enrolled template. Finally, responsive to authenticating the user touch, the method includes enabling use of a function or peripheral.
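- The claimed flow lends itself to a short illustration. The following is a minimal Python sketch of the method under stated assumptions: the display, sensor, and matcher objects, and every function name shown (determine_small_regions, touch_area_at, illuminate, capture, add_block, confident_match), are hypothetical and introduced only for illustration; they are not APIs from this disclosure.

```python
import random

BLOCK = 31  # N: assumed block side length in pixels (e.g., 31x31)

def determine_small_regions(touch_area, count):
    """Pick a reduced set of NxN regions inside the touch area (M << P)."""
    x0, y0, w, h = touch_area
    return [(x0 + random.randrange(max(1, w - BLOCK)),
             y0 + random.randrange(max(1, h - BLOCK))) for _ in range(count)]

def authenticate_touch(touch_xy, display, sensor, matcher, enrolled_template):
    touch_area = display.touch_area_at(touch_xy)     # area under the user touch
    for region in determine_small_regions(touch_area, count=8):
        display.illuminate(region)                   # radiation reflects off the skin
        image = sensor.capture(region)               # captured reflected radiation
        matcher.add_block(image, enrolled_template)  # compare to enrolled template
        if matcher.confident_match():                # authenticate the user touch
            return True                              # enable a function or peripheral
    return False
```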
- a user device comprises a display of a display system, a sensor, one or more processors, and one or more computer-readable media having instructions thereon that, responsive to execution by the one or more processors, perform the operations of the method mentioned above.
- a system, software, or other means performs the operations of the method mentioned above.
- When a computing system (e.g., the user device) analyzes information (e.g., fingerprint images) associated with the user, the computing system uses the information only after it receives explicit permission from the user to collect, store, or analyze the information.
- the user will be provided with an opportunity to control whether programs or features of the user device or a remote system can collect and make use of the fingerprint for a current or subsequent authentication procedure.
- Individual users, therefore, have control over what the computing system can or cannot do with fingerprint images and other information associated with the user.
- When storing information associated with the user, such as an enrolled image (also referred to as an “enrolled template”), the user device may encrypt the enrolled image. Pre-treating the data this way ensures the information cannot be traced back to the user, thereby removing any personally identifiable information that would otherwise be inferable from the enrolled image.
- the user has control over whether information about the user is collected and, if collected, how such information may be used by the computing system.
- FIG. 1 illustrates an example user device that authenticates a user using reduced-illumination patterns for capturing and matching a fingerprint.
- FIG. 2 illustrates another example user device that authenticates a user using reduced-illumination patterns for capturing and matching a fingerprint.
- FIG. 3 illustrates an example of a fingerprint identification system that implements reduced-illumination patterns for capturing and matching a fingerprint.
- FIG. 4 illustrates aspects of block-by-block matching used in reduced-illumination patterns for capturing and matching a fingerprint.
- FIG. 5 illustrates another example user device with a display screen and an optical Under Display Fingerprint Sensor (UDFPS).
- FIG. 6 illustrates a cross-section of an optical UDFPS embedded under an OLED display.
- FIG. 7 illustrates another example user device that utilizes reduced-illumination patterns for capturing and matching a fingerprint.
- FIG. 8 illustrates an example environment of a user device that dynamically illuminates, captures, transfers, and matches an image of a fingerprint to the enrolled template.
- FIG. 9 illustrates another example environment of a user device that dynamically illuminates, captures, transfers, and matches a plurality of fingerprint images to a plurality of enrolled templates.
- FIG. 10-1 illustrates an example logic-flow diagram for a capturing module of the fingerprint identification system of FIG. 3.
- FIG. 10-2 illustrates an example logic-flow diagram for a matching module of the fingerprint identification system of FIG. 3.
- FIG. 11 illustrates a computer-implemented method that implements reduced-illumination patterns for capturing and matching a fingerprint.
- FIG. 12 illustrates examples of patterns and minutiae used in matching fingerprints.
- This document describes apparatuses, methods, and techniques that enable large-area fingerprint sensors to capture a fingerprint and match it to an enrolled template with a reduced-illumination pattern.
- a user device may use a fingerprint identification system to capture a “verify image” and match patterns and/or minutiae of the verify image to an enrolled image.
- a “verify image” is a fingerprint image used for authentication.
- An “enrolled image” is an image that the user device captures during enrollment, such as when the user first sets up the smartphone or an application.
- an “enrolled image template” can be a mathematical representation of the enrolled image.
- the enrolled template can be a vectorized representation of the enrolled image and, among other advantages noted below, take less memory space in the user device.
- a vectorized representation for an enrolled image template is not required for matching a verify image to the enrolled image template.
- the described apparatuses, methods, and techniques can perform image-to-image (rather than vector-to-vector) comparisons, as well as other representations, to compare each verify image to the enrolled image.
- biometric security measurements may include false acceptance rate (FAR) for the proportion of times a fingerprint identification system grants access to an unauthorized person and false rejection rate (FRR) for the proportion of times a fingerprint identification system fails to grant access to an authorized person.
- a fingerprint identification system with a high success rate has a low false acceptance rate and a low false rejection rate.
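- These rates can be expressed as simple proportions. The following formulation is the standard one, stated here for clarity rather than quoted from the disclosure:

$$\mathrm{FAR} = \frac{\text{impostor attempts accepted}}{\text{total impostor attempts}}, \qquad \mathrm{FRR} = \frac{\text{genuine attempts rejected}}{\text{total genuine attempts}}$$

For example, a system that wrongly accepts 1 of 100,000 impostor attempts has a FAR of 0.001%, and a system that wrongly rejects 2 of 100 genuine attempts has an FRR of 2%.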
- With more detail in a large fingerprint image, it is possible to make a more accurate identification (lower false acceptance rate and lower false rejection rate).
- Standalone large-area fingerprint sensors, however, occupy valuable space and limit the size of the display screen when embedded on the front of a user device (e.g., a smartphone).
- Manufacturers may embed optical UDFPSs under an OLED display screen.
- the OLED display screen accommodates a large fingerprint area without sacrificing valuable display-screen “real estate” for the sole purpose of fingerprint authentication.
- the use of optical UDFPSs, however, has some drawbacks.
- When the user device utilizes a standalone localized fingerprint sensor, the user can easily find the fingerprint sensor by sight and/or by the feel of a touch.
- When the user device utilizes an optical UDFPS, by contrast, the user often cannot see or touch the fingerprint sensor because the display screen (e.g., an OLED display) is smooth.
- Another reason for embedding large-area optical UDFPSs is to aid the user in easily presenting a finger or a plurality of fingers for authentication.
- a user device with a large-area optical UDFPS locates the fingerprint location using a touch-sensitive or location-sensitive display screen. Then, in high-brightness mode, the OLED display screen illuminates a large area, such as a square area, around the fingerprint location. The user device captures the verify image in the large area and may use a processor (e.g., an image processor) to match the verify image to the enrolled template.
- Use of these optical UDFPS techniques may require illumination of a large area in high-brightness mode, capture of a large verify image, transfer of the large verify image to the processor, and matching of the verify image to the enrolled template. Further, if the quality of the verify image is low, the user device may repeat this process until it can determine with enough confidence (e.g., low false acceptance rate and low false rejection rate) whether to grant or deny access to the user.
- OLED display screens used in user devices with optical UDFPSs exhibit localized “aging” of the display screen due to the repeated operations in high-brightness mode, which degrade the organic material in OLED display screens.
- Localized aging of the OLED display screen adversely affects the quality of the display screen. For example, when the OLED display screen displays an image with a white background, the user can easily notice locations on an OLED display screen where fingerprint sensing has been used, because those locations appear as an off-white area in a white background.
- this disclosure illustrates how to achieve user authentication in a short amount of time, preserve the battery charge of the user device, and preserve the quality of the OLED display screen by employing block-by-block capture-and-match techniques.
- Block-by-block capturing and matching, where a block is NxN (e.g., 31x31) pixels, leverages a technique of “fusion” of minutiae matching and pattern-correlation matching of a fingerprint.
- some user devices rely on conventional pattern-correlation matching instead of minutiae matching, and they attempt to correlate an entire fingerprint at once.
- This conventional technique is not realistically scalable and cannot easily support large areas, such as several square centimeters, or high-resolution fingerprint images (e.g., resolutions of a thousand Dots-Per-Inch (DPI)).
- the techniques disclosed here fuse minutiae and pattern-correlation matching, which can reduce illumination used to verify fingerprints.
- FIG. 1 illustrates an example user device 100 that authenticates a user input using reduced-illumination patterns for capturing and matching a fingerprint or a plurality of fingerprints.
- the user device 100 fuses minutiae matching with pattern-correlation matching.
- the user device 100 can authenticate a user in a short amount of time (low latency) and preserve the battery charge (less power) of the user device 100.
- the user device 100 utilizes a UDFPS embedded under an OLED display, using reduced illumination (e.g., illuminating visible light) patterns for capturing and matching the fingerprint(s), which preserves the quality of the OLED display screen.
- the user device 100 may be any mobile or non-mobile user device.
- the user device 100 can be a mobile phone (e.g., a smartphone), a laptop computer, a wearable device (e.g., watches, eyeglasses, headphones, clothing), a tablet device, an automotive/vehicular device, a portable gaming device, an electronic reader device, or a remote-control device, or other mobile computing device that relies on fingerprint identification to perform a function.
- the user device 100 may represent a server, a network terminal device, a desktop computer, a television device, a display device, an entertainment set-top device, a steaming media device, a tabletop assistant device, a non-portable gaming device, business conferencing equipment, a payment station, a security checkpoint system, or other non-mobile computing device including a fingerprint identification system.
- the user device 100 includes an application 102, a fingerprint identification system 104, and a display screen 106.
- the fingerprint identification system 104 includes a sensor 108, an enrolled image template 110 (enrolled template 110), and the display screen 106 with a graphical user interface (GUI).
- the GUI may include instructions for the user to follow to authenticate themselves with the sensor 108.
- the GUI may include a target graphical element (e.g., an icon, a designated region) where the user is to touch the display to provide their fingerprint, or the user may be permitted to place their fingertip(s) anywhere on the screen that the user prefers.
- the user device 100 may include additional or fewer components than what is illustrated in FIG. 1.
- the application 102 is some software, applet, peripheral, or other entity that requires or prefers authentication of a user.
- the application 102 can be a secured component of the user device 100 or an access entity to secure information accessible from the user device 100.
- the application 102 can be an online banking application software or webpage that requires fingerprint identification before logging in to an account.
- the application 102 may be part of an operating system (OS) that prevents access (generally) to the user device 100 until the user’s fingerprint is identified.
- the application 102 may execute partially or wholly on the user device 100 or in “the cloud” (e.g., on a remote device accessed through the Internet).
- the application 102 may provide an interface to an online account, such as through an internet browser or an application programming interface (API).
- the sensor 108 can be any sensor able to capture a high-resolution image, such as five hundred (500) Dots-Per-Inch (DPI), seven hundred (700) DPI, one thousand (1000) DPI, and so forth.
- the sensor 108 may be a complementary metal-oxide-semiconductor (CMOS) image sensor, a charge-coupled device (CCD) image sensor, a capacitive image sensor, an ultrasonic image sensor, a thin film transistor (TFT) image sensor, a quanta image sensor (QIS), and so forth.
- the enrolled template 110 represents a predefined, user-specific fingerprint image template.
- the fingerprint identification system 104 records and stores the enrolled template 110 in advance during a coordinated setup session with the user device 100 and a particular user. Using the GUI displayed on the display screen 106, the user device 100 instructs the user to press a finger on the sensor 108 one or more times until the fingerprint identification system 104 has an accurate image of the user’s fingerprint (or fingerprints, palm prints), which the user device 100 retains as the enrolled template 110.
- the fingerprint identification system 104 captures individual blocks of the fingerprint that are recognizable from user input at the sensor 108.
- a block can be an image area of NxN pixels (e.g., 31x31 pixels, 23x23 pixels).
- the fingerprint identification system 104 uses minutiae matching, pattern-correlation, or both, to extract individual blocks initially captured by the sensor 108 that may indicate a match to corresponding blocks of the enrolled template 110. Rather than the sensor 108 capturing the entire fingerprint image presented on the sensor 108, the fingerprint identification system 104 captures blocks representing only some of the entire fingerprint image and matches the captured blocks to blocks of the enrolled template 110.
- the fingerprint identification system 104 can match and verify blocks in parallel or series and from one or more fingerprints.
- the fingerprint identification system 104 can first capture a predetermined number of blocks from the fingerprint(s) that the user presents to the sensor 108 and then the fingerprint identification system 104 matches the captured blocks from the fingerprint image(s) to blocks contained in the enrolled template 110. As the fingerprint identification system 104 captures one or more blocks from the fingerprint(s) that the user presents to the sensor 108, the fingerprint identification system 104, in situ and/or dynamically, can match, operating in parallel, the captured blocks from the fingerprint image(s) to blocks contained in the enrolled template 110.
- the user device 100 detects a user input at the sensor 108.
- the fingerprint identification system 104 divides the user input into a quantity of P blocks, where P is an integer.
- the fingerprint identification system 104, by using the sensor 108, captures M blocks, where M is an integer less than P, from the user input, evaluating part of the full image during each iteration of the fingerprint capture.
- the fingerprint identification system 104 scores each of the captured individual M blocks against corresponding M’ blocks of the enrolled template 110, where M’ is an integer equal to M.
- the M blocks represent a number of blocks in the verify image
- the M’ blocks represent a corresponding same number of blocks in the enrolled template 110.
- the fingerprint identification system 104 may identify and compare the closest-matching rotationally-invariant vectors of the corresponding M’ blocks of the enrolled template 110 in any direction. The outcome of these vectors’ transformation will be the same and map to the same vectors regardless of the orientation.
- the fingerprint identification system 104 replaces the minutiae with a pattern but treats the pattern like minutiae by assigning a location and an orientation to the pattern.
- each respective vector may include the following:
- the block’s Fast Fourier Transforms (FFTs) of the polar representation with a high resolution in the theta (θ) direction, where theta (θ) is further described in FIG. 4.
- the fingerprint identification system 104 determines respective scores of each of the captured individual M blocks from the vectors, and a confidence that the individual blocks M match the corresponding blocks M’ of the enrolled template 110. Based on the scores and confidences of the individual blocks, the fingerprint identification system 104 iteratively computes a confidence and a score for the user input relative to the enrolled template 110.
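- One plausible way to build such a rotation-tolerant block vector is to resample the block into polar coordinates and take FFT magnitudes along the theta axis: a rotation in Cartesian space becomes a circular shift along theta, and the FFT magnitude is invariant to circular shifts. The NumPy sketch below is an illustrative assumption of this idea, not the disclosure’s exact construction; the sampling resolutions n_theta and n_radius are arbitrary.

```python
import numpy as np

def polar_fft_descriptor(block, n_theta=64, n_radius=16):
    """Rotation-tolerant descriptor for an NxN fingerprint block (illustrative)."""
    n = block.shape[0]
    cy = cx = (n - 1) / 2.0
    radii = np.linspace(1.0, n / 2.0 - 1.0, n_radius)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)

    # Sample the block on a polar grid (nearest-neighbor for simplicity).
    ys = np.clip(np.rint(cy + radii[:, None] * np.sin(thetas)), 0, n - 1).astype(int)
    xs = np.clip(np.rint(cx + radii[:, None] * np.cos(thetas)), 0, n - 1).astype(int)
    polar = block[ys, xs]                       # shape: (n_radius, n_theta)

    # FFT along theta; magnitudes are invariant to circular shifts (rotations).
    mags = np.abs(np.fft.rfft(polar, axis=1))
    return (mags / (np.linalg.norm(mags) + 1e-9)).ravel()
```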
- the fingerprint identification system 104 updates the confidence and score that the user input matches the enrolled template 110. If, during this iterative process, the confidence in the user input does not satisfy a confidence threshold within a pre-determined latency time, the user input is marked as unidentifiable or unrecognized. The user device 100 conserves power by denying the user input without needing to capture and verify the whole fingerprint image. However, if at any time prior to or after capturing all the individual M blocks, the fingerprint identification system 104 determines that the confidence and score of the user input satisfy respective thresholds, the fingerprint identification system 104 may automatically match the user input (the verify image) to the enrolled template 110, thereby authenticating the user input and granting access to the application 102 without having to capture an entire fingerprint image.
- the pre-determined latency time and the number of matching blocks to authenticate a user may differ from user to user depending on a user’s fingerprint ridges. For example, some users may have flatter fingerprint ridges compared to other users. In that case, the user device 100 with the associated fingerprint identification system 104 may take a little longer to authenticate users with flatter-than-normal fingerprint ridges. The users with flatter-than-normal fingerprint ridges, however, can still benefit from using the fingerprint identification system 104. Regardless of the “quality” of a user’s fingerprint ridges, the fingerprint identification system 104 can authenticate the user in a shorter amount of time, using less power, and preserving the quality of the OLED display.
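- The iterative update and early exit described above might look like the following sketch. The running-average score rule, the thresholds, and the callables capture_block and score_block are assumptions for illustration; the disclosure does not pin down a specific update formula.

```python
import time

def iterative_match(capture_block, score_block, m_total,
                    score_threshold=0.85, confidence_threshold=0.9,
                    max_latency_s=0.5):
    """Capture up to M blocks, updating score S and confidence C after each."""
    start = time.monotonic()
    scores = []
    for i in range(m_total):
        block = capture_block(i)            # illuminate and capture one block
        scores.append(score_block(block))   # R: score against the enrolled block
        score = sum(scores) / len(scores)   # composite score S (assumed rule)
        confidence = len(scores) / m_total  # confidence C grows with evidence
        if score >= score_threshold and confidence >= confidence_threshold:
            return "authenticated"          # early exit: whole image never captured
        if time.monotonic() - start > max_latency_s:
            return "unrecognized"           # latency budget exceeded: abort
    return "unrecognized"
```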
- FIG. 2 illustrates another example user device 200 that authenticates user inputs using reduced-illumination patterns for capturing and matching a fingerprint.
- the user device 200 is an example of the user device 100 set forth in FIG. 1.
- FIG. 2 illustrates the user device 200 as being a variety of example devices, including a smartphone 200-1, a tablet 200-2, a laptop 200-3, a desktop computer 200-4, a computing watch 200-5, computing eyeglasses 200-6, a gaming system or controller 200-7, a smart speaker system 200-8, and an appliance 200-9.
- the user device 200 can also include other devices, such as televisions, entertainment systems, audio systems, automobiles, unmanned vehicles (in-air, on the ground, or submersible “drones”), trackpads, drawing pads, netbooks, e-readers, home security systems, doorbells, refrigerators, and other devices with a fingerprint identification system.
- the user device 200 includes one or more computer processors 202, one or more computer-readable media 204 (CRM 204), and one or more sensor components 206.
- the user device 200 further includes one or more communication and input/output (I/O) components 208, which can operate as an input device and/or an output device, for example, presenting a GUI and receiving inputs directed to the GUI.
- the one or more CRM 204 includes the application 102, the fingerprint identification system 104, the enrolled template 110, and a secured data store 214.
- the fingerprint identification system 104 includes an identification module 212.
- Other programs, services, and applications can be implemented as computer-readable instructions on the CRM 204, which can be executed by the computer processors 202 to provide functionalities described herein.
- the computer processors 202 and the CRM 204, which includes memory media and storage media, are the main processing complex of the user device 200.
- the sensor 108 is included as one of the sensor components 206.
- the computer processors 202 may include any combination of one or more controllers, microcontrollers, processors, microprocessors, hardware processors, hardware processing units, digital signal processors, graphics processors, graphics processing units, and the like.
- the computer processors 202 may be an integrated processor and memory subsystem (e.g., implemented as a “system-on-chip”), which processes computer- executable instructions to control operations of the user device 200.
- the CRM 204 is configured as persistent and non-persistent storage of executable instructions (e.g., firmware, software, applications, modules, programs, functions) and data (e.g, user data, operational data, online data) to support execution of the executable instructions.
- Examples of the CRM 204 include volatile memory and non-volatile memory, fixed and removable media devices, and any suitable memory device or electronic data storage that maintains executable instructions and supporting data.
- the CRM 204 can include various implementations of random-access memory (RAM), read only memory (ROM), flash memory, and other types of storage memory in various memory device configurations.
- the computer-readable media 204 excludes propagating signals.
- the computer-readable media 204 may be a solid-state drive (SSD) or a hard disk drive (HDD).
- the sensor components 206 may include other sensors for obtaining contextual information (e.g., sensor data) indicative of operating conditions (virtual or physical) of the user device 200 or the user device 200’s surroundings.
- the user device 200 monitors the operating conditions based in part on sensor data generated by the sensor components 206.
- other examples of the sensor components 206 include various types of cameras (e.g., optical, infrared), radar sensors, inertial measurement units, movement sensors, temperature sensors, position sensors, proximity sensors, light sensors, infrared sensors, moisture sensors, pressure sensors, and the like.
- the communication and I/O component 208 provides connectivity to the user device 200 and other devices and peripherals.
- the communication and I/O component 208 includes data network interfaces that provide connection and/or communication links between the device and other data networks, devices, or remote systems (e.g., servers).
- the communication and I/O component 208 couples the user device 200 to a variety of different types of components, peripherals, or accessory devices.
- Data input ports of the communication and I/O component 208 receive data, including image data, user inputs, communication data, audio data, video data, and the like.
- the communication and I/O component 208 enables wired or wireless communicating of device data between the user device 200 and other devices, computing systems, and networks. Transceivers of the communication and I/O component 208 enable cellular phone communication and other types of network data communication.
- the identification module 212 directs the fingerprint identification system 104 to perform reduced-illumination patterns for capturing and matching a fingerprint detected at the sensor components 206.
- the identification module 212 obtains some of the individual blocks of the user input being captured by the sensor 108 and scores each of the captured individual blocks against different blocks of the enrolled template 110. Whether the user device 200 and the fingerprint identification system 104 perform the user authentication serially or in parallel, when the identification module 212 directs the sensor 108 to capture additional individual blocks, the identification module 212 compiles the scores of the individual blocks already captured and generates, from the scores, a confidence value associated with the user input.
- Prior to capturing all the individual blocks of the user input, and as soon as the confidence satisfies a threshold, the identification module 212 automatically matches the user input to the enrolled template 110 and authenticates the user input. By doing so, the latency can be reduced for some of the authentication attempts. In response to the identification module 212 outputting an indication that a fingerprint is identified and matched to the user, the application 102 may grant the user access to the secured data store 214. Otherwise, the identification module 212 outputs an indication that fingerprint identification failed, and the user is restricted from having access to the secured data store 214, or the identification module 212 repeats the authentication process to verify that fingerprint identification failed more than once.
- FIG. 3 illustrates an example of a fingerprint identification system 104-1 that implements reduced-illumination patterns.
- This fingerprint identification system 104-1 illuminates, captures, and matches (or fails to match) a fingerprint or a plurality of fingerprints to the enrolled template 110.
- the fingerprint identification system 104-1 includes the enrolled template 110 and the sensor 108
- the fingerprint identification system 104-1 further includes identification module 302, which includes a capturing module 304 and a matching module 306.
- the capturing module 304 captures a user input at the sensor 108 at the direction of the identification module 302.
- the matching module 306 attempts to match the output from the capturing module 304 to the enrolled template 110. Instead of expecting the capturing module 304 to capture an entire fingerprint, the matching module 306 scores captured blocks against blocks of the enrolled template 110, and the scores R are tracked as new blocks (where M is less than P) are captured by the capturing module 304.
- the matching module 306 determines, based on the confidences and scores R associated with the individual M blocks, an overall composite score S and confidence C associated with the user input matching the enrolled template 110.
- the matching module 306 uses the highest-ranking individual block scores to produce the overall score S indicating whether the fingerprint matches the enrolled template 110.
- the matching module 306 maintains the confidence C in the overall score S, and the confidence C increases as the confidence in the highest-ranking individual block scores also increases. As more blocks are captured and matched, the confidence C in the overall image score grows.
- the matching module 306 determines whether or not the confidence C satisfies a confidence threshold for matching the fingerprint image to the enrolled template 110. Rather than wait for, or expect the capturing module 304 to capture, an entire image, the fingerprint is authenticated as soon as the score S and its confidence C satisfy their respective thresholds.
- FIG. 4 illustrates aspects of block-by-block matching used in reduced-illumination patterns for capturing and matching a fingerprint.
- FIG. 4 is described in the context of the fingerprint identification system 104-1 and illustrates a verify image 402, which is divided into M blocks, including block 404.
- Each of the M blocks, including the block 404, is NxN (N can be any finite integer, such as 23, 31) pixels and is separated from another block by separation distances sDIS.
- the M blocks can be overlapping, non-overlapping and apart (with a sliding distance of more than one pixel between the blocks), or adjacent (with a sliding distance of zero or one pixel between the blocks).
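- Dividing a verify image into blocks with a chosen sliding distance reduces to a strided crop. A minimal Python sketch over a 2D image array (e.g., a NumPy array); the stride parameter standing in for the sliding-distance/sDIS relationship is an assumption about the described layout:

```python
def extract_blocks(image, n=31, stride=31):
    """Split a 2D image array into NxN blocks with a given sliding distance.

    stride > n  -> blocks separated by a gap (sDIS > 0)
    stride == n -> adjacent, non-overlapping blocks
    stride < n  -> overlapping blocks
    """
    h, w = image.shape
    return [((y, x), image[y:y + n, x:x + n])
            for y in range(0, h - n + 1, stride)
            for x in range(0, w - n + 1, stride)]
```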
- FIG. 4 further includes an NxN sized block 406 of the enrolled template 110 and a ranked table 408.
- the rotation and translation matrix between the two images (blocks) 404 and 406 can be defined as:

$$\begin{bmatrix} x_{404} \\ y_{404} \end{bmatrix} = \begin{bmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{bmatrix} \begin{bmatrix} x_{406} \\ y_{406} \end{bmatrix} + \begin{bmatrix} T_x \\ T_y \end{bmatrix} \qquad \text{(Equation 1)}$$

where $\varphi$ represents the angle between the center points $(h_x, h_y)$ and $(h_x', h_y')$ in Cartesian coordinates for the two images 404 and 406, $T_x$ represents the translation along the x-axis between the two images 404 and 406, and $T_y$ represents the translation along the y-axis between the two images (blocks) 404 and 406.
- the x-coordinates and the y-coordinates of image 406 can be transformed into the coordinate system of the image 404 using Equation 1.
- the rotation matrix between the blocks 404 and 406 is herein called $RM_{12}$
- the inverse of the rotation matrix between the blocks 404 and 406 is herein called $RM_{21}$, as shown in Equation 2:

$$RM_{12} = RM_{21}^{-1} \qquad \text{(Equation 2)}$$
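- In code, Equation 1 is a standard rigid 2D transform, and Equation 2 simply states that the two rotation matrices are inverses of one another. A NumPy sketch (the function name and demo angle are illustrative choices, not from the disclosure):

```python
import numpy as np

def rigid_transform(points, phi, tx, ty):
    """Map (x, y) points from block 406's frame into block 404's frame
    per Equation 1: rotate by phi, then translate by (Tx, Ty)."""
    rm12 = np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])
    return points @ rm12.T + np.array([tx, ty])

phi = np.deg2rad(15.0)
rm12 = np.array([[np.cos(phi), -np.sin(phi)],
                 [np.sin(phi),  np.cos(phi)]])
rm21 = np.linalg.inv(rm12)                  # Equation 2: RM12 = RM21^-1
assert np.allclose(rm12, np.linalg.inv(rm21))
```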
- the capturing module 304 determines a similarity between the vectors of the verify blocks 404 and the vectors of the enrolled blocks 406, along with the angle of rotation φ and the correlation.
- the matching module 306 then extracts the x-coordinate, the y-coordinate, and the angle correspondence in the output from the capturing module 304 and calculates the translation in the x-direction and y-direction for each block of the verify image.
- the matching module 306 merges vectors from the enrolled image templates using a rotation and translation matrix and drops redundant vectors based on a quality score.
- the matching module 306 drops redundant vectors, using this quality score to rank the highest (e.g., top ten) translation and rotation vectors.
- the ranked table 408 represents a data structure (table) that the matching module 306 may use to maintain the highest-ranking translation and rotation vectors.
- the outcome of the matching is the number of matching blocks between the verify image 402 and the enrolled image 406 that show similar translation and rotation. To increase the robustness of the matching, a small error is allowed in the translation and rotation to account for variations due to skin-plasticity distortions.
- the described techniques using reduced-illumination patterns for capturing and matching can be used for other forms of biometric matching, such as iris, palm print, and footprint.
- One area where parallel capturing and matching techniques tend to fail is when attempting to match a perfect pattern (e.g., a perfect zebra pattern) because, in that case, each block from the enrolled and a verified image is identical.
- This limitation becomes irrelevant because biometric patterns are not perfect. It is that imperfection and uniqueness that gives the biometric pattern and the reduced-illumination patterns for capturing and matching techniques their value.
- FIG. 5 is an example of a user device 500 (e.g., a smartphone) with a display screen 506 (e.g., an OLED display) and an optical Under Display Fingerprint Sensor (UDFPS).
- the UDFPS (not illustrated in FIG. 5) has an active area 508 that can capture a fingerprint image, such as a verify image 510, when the user places their finger pad on the display screen 506.
- the active area 508 is smaller than the display screen 506 and is located on the bottom portion of the display screen 506 of the user device 500.
- the active area 508, however, can be a portion of the display screen 506 (as is illustrated in FIG. 5) or can be as large as the display screen 506 (not illustrated as such in FIG. 5).
- Because the user device 500 utilizes an optical UDFPS, the user device 500 does not sacrifice valuable display area (e.g., at the bottom of the user device 500) to embed a standalone small-area fingerprint sensor. Instead, the user device 500 sacrifices only the top part of the user device 500 to include a speaker 502 and a front-facing camera 504. Note that the user device 500 may have other hardware features (not illustrated), such as a power button, a volume button, a home button, and so forth, that may be embedded in or on the user device 500.
- the display screen 506 of the user device 500 can be touch-sensitive or location-sensitive, which enables the display screen 506 to determine the location of the user’s finger pad prior to or during a user touch.
- a touch controller (not illustrated) determines the location of the user’s touch on the display screen 506. If the user device 500 utilizes technologies, such as radar, camera(s), motion sensors, or infrared optical imaging, the touch controller of the user device 500 may determine the location of the user’s touch just prior to the user touching the display screen 506.
- the touch controller of the user device 500 can determine the location of the user’s touch when the user touches the display screen 506.
- When the touch controller of the user device 500 determines that the user’s touch is within the active area 508, the user device 500 captures the verify image 510 in a touch area 512 inside the active area 508.
- the touch controller of the user device 500 determines the center of the user’s touch and activates the touch area 512 such that the center of the user’s touch is the center of the touch area 512.
- the display screen 506 is an OLED display (see FIG. 6 for more details)
- the OLED display illuminates in high brightness mode the touch area 512 inside the active area 508 of the display screen 506.
- FIG. 5 includes the verify image 510, which is divided into P blocks, including block 520.
- FIG. 6 illustrates a cross-section 600 of an optical Under Display Fingerprint Sensor (UDFPS) under an OLED display; refer to FIG. 5 for the example location of the described cross-section 600. Note that the cross-section 600 may include greater or fewer elements than are illustrated in FIG. 6.
- FIG. 6 helps introduce the operational principles of the optical UDFPS embedded under the OLED display; FIG. 6 is not intended to fully describe the OLED display nor the UDFPS.
- the OLED display includes a red-green-blue pattern 608 (RGB pattern 608), which includes RGB light-emitting elements that the OLED display uses to illuminate visible light to a user’s finger pad 602. Note that each RGB light-emitting element represents a pixel.
- the OLED display also includes thin film transistors 610 (TFTs 610), which the OLED display uses to turn on and off each pixel in the RGB pattern 608. For simplicity, in this example, the RGB pattern 608 and the TFTs 610 make up the OLED display.
- the touch controller determines the location of the touch area 512; refer to FIG. 5.
- the OLED display illuminates light using all the pixels inside the touch area 512.
- the illuminated light such as illuminated green light 612 and illuminated red light 614, passes through a cover glass 606 above the OLED display. Then, the illuminated light is reflected by ridges of a fingerprint 604 back through the cover glass 606.
- the reflected light such as reflected green light 616 and reflected red light 618, is captured by a sensor 620 (e.g., a CMOS image sensor). Note that the sensor 620 includes cells, where each cell also represents a pixel.
- the pixel includes a red, green, or blue lighting element, a TFT, and a cell of the sensor 620.
- the pixel may include more elements, which are not illustrated in FIG. 6.
- As the reflected light off the user’s finger pad (e.g., the reflected green light 616 and the reflected red light 618) travels towards the sensor 620, the light is directed by a collimator 622, which is embedded between the OLED display and the sensor 620.
- the collimator 622 guides the reflected light to each cell of the sensor 620 in such a way that each cell of the sensor 620 does not measure scattered light.
- the collimator 622 helps the sensor 620 to capture a verify image 510 in high resolution, with sharp instead of blurry images of the ridges of the fingerprint 604.
- the image information captured by the sensor 620 is transmitted to other components (not illustrated), such as peripheral integrated circuits (ICs), through bonding(s) 624.
- the OLED display ages as the user uses the user device 500.
- repetitive use of the OLED display in high-brightness mode accelerates aging and, over time, creates localized aging for portions of the OLED display that are repetitively used.
- some manufacturers may design the OLED display to operate in less than the maximum (100%) of the high-brightness mode, such as at 50%. This solution may cause localized aging at a slower pace; for example, the user may notice the first signs of the localized aging after four years instead of after two years of utilizing the user device 500.
- the user device 500 still uses a considerable amount of power and time to locate the fingerprint position, illuminate the touch area 512, capture a large verify image 510, transfer the large verify image 510 to the processor(s) 202, and match the verify image 510 to the enrolled template 110. Further, if the quality of the verify image 510 is low, the user device 500 may repeat this process until it can determine with enough confidence (e.g., low false acceptance rate and low false rejection rate) that the user is authorized to use the user device 500 or an application software.
- Even if the user device 500 operates the OLED display screen at lower than 100% of the high-brightness mode, such as at 50%, localized aging still occurs, just at a slower pace.
- When the user device 500 does not use reduced-illumination patterns for capturing and matching a fingerprint or a plurality of fingerprints, the OLED display, over time, will degrade.
- FIG. 7 illustrates another example of a user device 700 (e.g., a smartphone) with a display screen 706 (e.g., an OLED display) and an optical Under Display Fingerprint Sensor (UDFPS).
- the UDFPS in FIG. 7 has an active area 708 that can detect a user’s finger pad and capture a fingerprint image, such as a verify image 710, when the user places their finger pad on the display screen 706.
- the user device 700 in FIG. 7 sacrifices only the top part of the user device 700 to include a speaker 702 and a front-facing camera.
- When the touch controller of the user device 700 determines that the user’s touch is within the active area 708, the user device 700 captures the verify image 710 in a touch area 712 inside the active area 708.
- the touch controller of the user device 700 determines the center of the user’s touch (illustrated as finger pad center location 714) and activates the touch area 712 such that the center of the user’s touch 714 is the center of the touch area 712.
- FIG. 7 illustrates a square touch area 712.
- the touch area 712 can be any two-dimensional (2D) shape, such as a circle, an ellipse, a triangle, a rectangle, a hexagon, an octagon, and so forth.
- the OLED display in FIG. 7 illuminates in high-brightness mode only M blocks out of the P blocks (where M is less than P) inside the touch area 712, where block 720 is one of the blocks illuminated in high-brightness mode. So, instead of illuminating in high-brightness mode and capturing the entire verify image 710 to be authenticated against the enrolled template 110, the user device 700 illuminates, captures, and verifies only M out of P blocks of the verify image 710, where each block is NxN pixels (e.g., 31x31 pixels). This is one example way in which the techniques reduce illumination using a pattern, the pattern here being blocks that, in total, cover less area than the touch area 712, and thus less illumination is applied to the display screen 706.
- the reduced illuminating pattern can be random within the touch area 712.
- Using the fingerprint identification system (e.g., fingerprint identification system 104), the user device 700 may utilize a random-number generator (not illustrated) to illuminate only M out of P blocks, randomly.
- the fingerprint identification system divides the enrolled template 110 into a quantity of P’ blocks.
- Chances are that the random-number generator, over the life of the user device 700, causes the OLED display to operate different pixels, blocks, and area patterns in high-brightness mode.
- The use of the random-number generator to illuminate random blocks reduces the usage of any given area of the OLED and subsequently minimizes localized “aging” of the OLED display.
- the user device 700 may use tessellation in two dimensions (2D), which in mathematics is sometimes referred to as “planar tiling,” to select some but not all of the tiles, thereby spreading out the light-exposure to the organic material of the OLED display over the lifetime of the user device 700.
- Planar tiling is a geometric concept that deals with the arrangement of tiles to fill a 2D plane without any gaps, given a set of rules.
- the user device 700 may employ periodic tiling or non-periodic tiling to illuminate patterns made up of blocks (e.g., block 720) inside the touch area 712. Some simple examples of tiling are triangular tiling, checkered tiling, hexagonal tiling, topological square tiling, floret pentagonal tiling, and so forth. Also, the user device may employ tessellation of the touch area 712 within the OLED display in combination with the random-number generator.
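- Both selection strategies are straightforward to sketch. In the following Python illustration, the grid sizes, the checkered (topological square) tiling, and the combination rule are arbitrary assumptions; the disclosure leaves the specific tiling and randomization open:

```python
import random

def random_blocks(rows, cols, m):
    """Pick M of the P candidate block positions uniformly at random,
    spreading high-brightness use across the panel over time."""
    positions = [(r, c) for r in range(rows) for c in range(cols)]
    return random.sample(positions, m)

def checkered_blocks(rows, cols, parity=0):
    """Checkered tiling: keep only tiles whose row+column parity matches,
    i.e., roughly half of the touch area's blocks."""
    return [(r, c) for r in range(rows) for c in range(cols)
            if (r + c) % 2 == parity]

# Combining tessellation with randomization: sample from a checkered tiling.
subset = random.sample(checkered_blocks(8, 8), 6)
```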
- the user device 700, which implements reduced-illumination patterns (instead of illuminating the whole touch area 712) for capturing and matching the verify image 710, can successfully authenticate the user without increasing hardware complexity, by reducing latency, using fewer computational resources, using less power (preserving the battery charge), and preserving the quality of the OLED display.
- FIG. 8 illustrates another example environment 800 of a user device (e.g., a smartphone) that dynamically illuminates, captures, transfers, and matches (verifies) an image of a fingerprint (a verify image) to the enrolled template 110 (not illustrated in FIG. 8).
- the user device in the example environment 800 has an active area of UDFPS 808 (active area 808) inside the display screen of the user device.
- the user device in the example environment 800 determines the center location of the user’s finger pad (in FIG. 8, finger pad center location 814) and determines a touch area 812 based on the finger pad center location 814.
- the user device in the example environment 800 illuminates one block out of the M blocks at a time. For example, at a first time interval, the user device illuminates block 820, captures the block 820, transfers the block’s image data (data transfer 830) for processing (e.g., at computer processors 202), and matches (match 850) a transferred block 840 to a respective block (not illustrated) of the enrolled template 110.
- the user device determines whether to grant or deny access to the user based on the overall score S, as described in FIG. 3. Similar to the description of FIG. 7, the user device in FIG. 8 can illuminate blocks in a random pattern, utilizing tessellation in 2D, or utilizing a combination of randomization and tessellation.
- the fingerprint identification system 104 can capture and match blocks adjacent to the blocks 820, 822, and 824 without illuminating such adjacent blocks. Even though the adjacent blocks may have a lower intensity of reflected light off the user’s finger pad, such data can still be useful to increase the amount of data being evaluated for authentication. Capturing the images of blocks adjacent to the illuminated blocks 820, 822, and 824 is another way to reduce damage caused to the OLED display.
- The user device in the example environment 800 can abort the authentication process at any time the user device determines that the authorized user is not trying to gain access to the user device or any of the biometric-protected applications. For example, assume the authorized user inserts their smartphone in their pocket. As the authorized user’s pocket lining touches the active area 808, the user device may illuminate only a fraction of the M blocks (e.g., one or two blocks) to determine that the authorized user is not trying to gain access to the user device. As another example, the authorized user may hold their smartphone in their hand.
- the user device can determine within one, two, and so forth, time intervals that the user is not presenting their finger for authentication, enabling the user device to preserve power and preserve the OLED display by illuminating a minimum count of pixels.
- FIG. 9 illustrates another example environment 900 of a user device (e.g., a smartphone) that dynamically illuminates, captures, transfers, and matches (verifies) a plurality of fingerprint images (a plurality of verify images) to a plurality of enrolled templates 110 (not illustrated in FIG. 9).
- the example user device in FIG. 9 operates similarly to the example user device in FIG. 8.
- the example user device in FIG. 9 can dynamically authenticate a plurality of fingerprints, such as two, three, four, five, or even ten fingers of the user. Similar to the description of FIG. 7, the user device in FIG. 9 can illuminate blocks in a random pattern, utilizing tessellation in 2D, or utilizing a combination of randomization and tessellation.
- the user device authenticates two fingers of the user.
- the user device in the example environment 900 determines the center location of the user’s finger pad off a first finger (in FIG. 9, center location of first finger 914-1) and determines a touch area 912-1 based on the center location of the first finger 914-1.
- the user device in the example environment 900 determines a touch area 912-2 based on a center location of a second finger 914-2.
- the user device illuminates, captures, transfers, and matches, blocks 920-1 and 920-2 to respective blocks in the enrolled template(s) 110.
- the user device illuminates another block 922-1 from the touch area 912-1 and another block 922-2 from the touch area 912-2. If the user device successfully matches the blocks 922-1 and 922-2, the user device dynamically and progressively illuminates more blocks up to an N-th (last) time interval, as is illustrated by blocks 924-1 and 924-2.
- the user may choose to authenticate the plurality of fingers dynamically, but one finger at a time.
- the user may also choose to request to authenticate the plurality of fingers dynamically and all fingers at the same time.
- the user device in the example environment 900 can abort the authentication process at any time the user device determines that the authorized user is not trying to gain access to the user device or any of the biometric-protected applications, enabling the user device to preserve power and preserve the OLED display by illuminating a minimum count of pixels.
- FIG. 10-1 illustrates an example logic-flow of the capturing module 304 of the fingerprint identification system 104-1
- FIG. 10-2 illustrates an example logic-flow of the matching module 306 of the fingerprint identification system 104-1.
- the logical operations of the capturing module 304 are organized in stages 1000 through 1012. As illustrated in FIG. 10-1, at stage 1000, the capturing module 304 of the fingerprint identification system 104-1 receives an indication of a user touch (e.g., of a user’s finger pad), prior to or during the user touch. At stage 1002, utilizing randomization, tessellation in 2D, or a combination of randomization and tessellation, the fingerprint identification system 104-1 emits radiation (e.g., illuminates visible light using an OLED display) only in M out of P blocks. At stage 1004, the sensor 108 captures reflected images only off the M blocks, including the block 404, out of P blocks. Note that the P blocks represent the entire user input, whereas the capturing module 304 captures only a portion of the entire user input, M out of P blocks, where M is greater than or equal to one (1) block.
- the capturing module 304 runs the blocks of the user input through post-processing, where the images of the M blocks are enhanced for the subsequent stage 1008, where the capturing module 304 computes an individual matching score R for each of the captured M blocks to be compared to the corresponding M’ blocks of the enrolled template 110.
- the capturing module 304 outputs the matching scores R for the M blocks to be used by the matching module 306 for fingerprint identification.
- the capturing module 304 determines whether there are still more blocks to be captured, such as in the case of dynamic illuminating, capturing, and matching, as is illustrated in FIGs. 8 and 9. If so, the fingerprint identification system 104-1 with the capturing module 304 repeats stages 1002 through 1008.
- the matching module 306 may perform the logical operations of stages 1020 through 1030.
- the matching module 306 receives the output from the capturing module 304 and extracts Tx, Ty, θ, and the matching score R from each of the captured blocks.
- the matching module 306 extracts translation vectors Tx and Ty in both the x and y directions for the blocks.
- the matching module 306 also extracts a rotational vector φ based on a calculated rotation angle θ between the M blocks and matching M’ blocks of the enrolled template 110.
- the matching module 306 retains the Tx, Ty, θ, and the matching score R from each of the M blocks at the ranked table 408 (see FIG. 4).
- the matching module 306 sorts the translation vectors in the ranked table 408 based on matching scores R, and groups multiple matching blocks with the closest rotation and translation vectors into bins.
- the matching module 306 determines a confidence C of the matching scores R.
- the rotation and translation vector candidates [Tx, Ty, φ] are subjected to a random sample consensus (RANSAC) voting process to determine a correlation/matching score between the matching blocks. The higher the number of votes, the greater the correlation/matching score, and the greater the confidence C.
- the matching module 306 sorts the translation vectors using the correlation/matching scores within the ranked table 408.
- the matching module 306 groups multiple matching blocks with the closest rotation and translation vectors into bins of Q blocks.
- the matching module 306 selects the bins of the Q blocks with the highest matching scores R at stage 1024. For example, the matching module 306 may retain only the top-ten R scores. At stage 1026, the matching module 306 discards the bins of Q blocks with matching scores or confidences that do not satisfy a confidence threshold. For example, the matching module 306 may remove all R scores that are lower than the top-ten R scores. At stage 1028, the matching module 306 computes a composite score S and confidence C for the verify image, based on the scores R and confidences of the Q blocks in the highest-ranking bin.
- the matching module 306 selects a bin from the ranked table 408 with the highest quantity of matching Q blocks and extracts a final translational and rotation vector [Tx, Ty, φ] for the verify image, which is calculated as the average of the rotation and translation vectors of all the matching Q blocks within the bin.
- After stage 1028, the matching module 306 returns to stage 1020 unless the confidence of the matching Q blocks within the bin satisfies a confidence threshold. At stage 1030, the matching module 306 outputs a successful authentication if the total quantity of votes in the highest-scoring bin is greater than a threshold, granting access to the secured data store 214.
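- The binning and voting of stages 1020 through 1030 can be sketched as a simplified consensus in the spirit of the RANSAC voting described above. The bin widths, the vote threshold, and the averaging rule below are arbitrary illustrative choices, not values from the disclosure:

```python
from collections import defaultdict

def consensus_transform(block_matches, t_bin=4.0, phi_bin=0.05, min_votes=5):
    """block_matches: (tx, ty, phi, score) tuples, one per matched block.

    Bins matches whose translation and rotation agree within a small error
    (absorbing skin-plasticity distortion), then returns the average
    [Tx, Ty, phi] of the most-voted bin, or None when consensus is too weak.
    """
    if not block_matches:
        return None
    bins = defaultdict(list)
    for tx, ty, phi, score in block_matches:
        key = (round(tx / t_bin), round(ty / t_bin), round(phi / phi_bin))
        bins[key].append((tx, ty, phi))
    members = max(bins.values(), key=len)    # highest-voted bin wins
    votes = len(members)
    if votes < min_votes:
        return None                          # too few agreeing blocks: reject
    n = float(votes)
    return (sum(m[0] for m in members) / n,  # final [Tx, Ty, phi]: bin average
            sum(m[1] for m in members) / n,
            sum(m[2] for m in members) / n,
            votes)
```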
- FIG. 11 illustrates an example method 1100 performed by the user device, which authenticates user inputs by implementing reduced-illumination patterns for illuminating, capturing, and matching a fingerprint or a plurality of fingerprints.
- FIG. 11 is described in the context of FIG. 1 and the user device 100. The operations performed in the example method 1100 may be performed in a different order or with additional or fewer steps than what is shown in FIG. 11.
- the user device 100 determines, based on a location of a user touch to a display (e.g., display screen 106) of a display system, small regions (M blocks) of the display.
- the small regions (M blocks) are within a touch area (P blocks) of the display over which the user touch is superimposed.
- the small regions represent a reduced area relative to all of the touch area (M is less than P).
- the user device 100 illuminates with radiation each of the small regions (M blocks) of the display (e.g., an OLED display).
- the illumination causes the radiation to reflect off a user’s skin touching the touch area; refer to FIG. 6 as one aspect.
- the user device 100 captures images at a sensor (e.g., sensor 108).
- the sensor 108 is configured to receive the radiation reflected off the user’s skin touching the touch area at one or more of the M blocks.
- the captured M blocks of the captured images correspond to the illuminated M blocks at stage 1104.
- the user device 100 compares the captured image(s) of the M block(s) (the verify image) to the corresponding M’ block(s) of the enrolled template 110. Specifically, the user device 100 performs block-by-block comparison.
- the fingerprint identification system 104 of the user device 100 determines whether the M blocks being evaluated are the last M blocks needing evaluation.
- the fingerprint identification system 104 of the user device 100 may illuminate, capture, and match additional M blocks if the user device is operating in a dynamic mode (refer to FIGs. 8 and 9), if one or more of the blocks does not match the corresponding M’ block of the enrolled template 110, if the user has set a higher-than-normal security requirement for a specific application, or for other reasons. If the user device 100 determines that additional M blocks need evaluating, the user device 100 repeats stages 1102 through 1108. If the user device 100 determines that no additional M blocks are needed to make a determination on user authenticity, the user device 100 moves to stage 1112.
- the user device 100 authenticates the user touch (user input) by performing block-by-block comparison, as described in FIGs. 1, 4, 10-1, and 10-2.
- the fingerprint identification system 104 of user device 100 requires that confidence C meets a pre-determined threshold level. If the confidence C does not meet the pre-determined threshold level, the fingerprint identification system 104 of the user device 100 issues a deny access 1114 verdict and, possibly, a message on the display screen 106 stating that access is denied. On the other hand, if the confidence C meets the predetermined threshold level, the fingerprint identification system 104 of the user device 100 issues a grant access 1116 verdict, granting access to the user device 100 or peripherals (e.g., application 102) associated with the user device 100.
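The grant/deny flow of method 1100 can be sketched in a few lines of Python. Everything below is an illustrative assumption (the helper names, the thresholds, and the simulated per-block matcher are not defined in the disclosure); it only shows the shape of the loop across stages 1102 through 1116:

import random

def select_m_blocks(touch_center, m=3, n=31):
    # Stage 1102 (assumed selection rule): pick M block centers near the touch.
    x, y = touch_center
    offsets = [(-n, 0), (0, 0), (n, 0), (0, n), (0, -n)]
    return [(x + dx, y + dy) for dx, dy in random.sample(offsets, m)]

def illuminate_capture_match(block, enrolled):
    # Stages 1104 through 1108 collapsed into a stub: a real device would
    # illuminate the block, capture the reflection, and compare it to the
    # enrolled template. Here, enrolled is a dict of block -> confidence.
    return enrolled.get(block, 0.0)   # per-block confidence in [0, 1]

def method_1100(touch_center, enrolled, threshold=0.8, max_rounds=5):
    confidences = []
    for _ in range(max_rounds):                       # stage 1110 loop
        for block in select_m_blocks(touch_center):
            confidences.append(illuminate_capture_match(block, enrolled))
        if sum(confidences) / len(confidences) >= threshold:
            return "grant access"                     # stage 1116
    return "deny access"                              # stage 1114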
- FIG. 12 illustrates examples of patterns (1202, 1204, and 1206) and minutiae (1210 through 1230) used in matching fingerprints.
- the analysis of fingerprints for matching purposes generally requires the comparison of patterns and/or minutiae.
- the three main patterns of fingerprint ridges are an arch 1202, a loop 1204, and a whorl 1206.
- the arch 1202 is a fingerprint ridge that enters from one side of the finger, rises in the center forming an arc, and then exits the other side of the finger.
- the loop 1204 is a fingerprint ridge that enters from one side of the finger, forms a curve, and then exits on that same side of the finger.
- the whorl 1206 is a fingerprint ridge that is circular around a central point.
- the minutiae 1210 through 1230 are features of fingerprint ridges, such as ridge ending 1210, bifurcation 1212, short ridge 1214, dot 1216, bridge 1218, break 1220, spur 1222, island 1224, double bifurcation 1226, delta 1228, trifurcation 1230, lake or ridge enclosure (not illustrated), core (not illustrated), and so forth.
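Since the matching stages treat each pattern like a minutia by assigning it a location and an orientation, one plausible in-memory representation is the following Python sketch; it is purely illustrative, and none of these names appear in the disclosure:

from dataclasses import dataclass
from enum import Enum, auto

class MinutiaType(Enum):
    # The ridge features of FIG. 12 (1210 through 1230).
    RIDGE_ENDING = auto()
    BIFURCATION = auto()
    SHORT_RIDGE = auto()
    DOT = auto()
    BRIDGE = auto()
    BREAK = auto()
    SPUR = auto()
    ISLAND = auto()
    DOUBLE_BIFURCATION = auto()
    DELTA = auto()
    TRIFURCATION = auto()

@dataclass
class Minutia:
    kind: MinutiaType
    x: float       # position in sensor pixels
    y: float
    theta: float   # local ridge orientation, in radians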
- Example 1 A computer-implemented method comprising: determining, based on a location of a user touch to a display of a display system, small regions of the display, the small regions within a touch area of the display over which the user touch is superimposed, the small regions representing a reduced area relative to all of the touch area; illuminating with radiation, each of the small regions of the display, the illumination effective to cause the radiation to reflect off a user’s skin touching the touch area; capturing images at a sensor, the sensor configured to receive the radiation reflected off the user’s skin touching the touch area at one or more of the small regions, the images including one or more images corresponding to the one or more of the small regions, respectively; comparing the one or more images to an enrolled template, the enrolled template associated with a fingerprint of a verified user; authenticating the user touch to the display based on the comparing of the one or more images to the enrolled template; and responsive to authenticating the user touch, enabling use of a function or peripheral.
- Example 2 The method of example 1, wherein: the sensor of the display system includes one or more under-display fingerprint sensors; and the display of the display system is an organic light-emitting diode, OLED, display capable of operating in high-brightness mode, and wherein the illuminating uses the high-brightness mode.
- Example 3 The method of examples 1 or 2, wherein the sensor includes a complementary metal-oxide-semiconductor, CMOS, image sensor.
- Example 4 The computer-implemented method of any of examples 1 to 3, wherein the images corresponding to the small regions include captured blocks, and the small regions of which the images are captured are part of a tessellation, in two dimensions, of the touch area.
- Example 5 The computer-implemented method of any of examples 1 to 4, further comprising determining the location of the user touch.
- Example 6 The computer-implemented method of example 5, further comprising determining the location prior to the user touch, and wherein determining the small regions of the display is performed prior to, or after, the user touch.
- Example 7 The computer-implemented method of example 6, wherein determining the location prior to the user touch is performed using radar.
- Example 8 The computer-implemented method of any of examples 1 to 7, wherein determining the small regions of the display is determined using a randomness function, the randomness function effective to reduce repetitive use of portions of the display.
- Example 9 The computer-implemented method of any of examples 1 to 8, wherein the illuminating emits visible light, the visible light emitted by thin film transistors in conjunction with red-green-blue, RGB, light-emitting elements of the organic light-emitting diode, OLED, display.
- Example 10 The computer-implemented method of any of examples 1 to 9, wherein the radiation passes through a glass layer of the display of the display system and the radiation then reflects off the user’s skin touching the touch area, after which the reflected radiation passes back through the glass layer to the sensor.
- Example 11 The computer-implemented method of any of examples 1 to 10, wherein the one or more images is a plurality of images, the illuminating is performed in series to capture the plurality of images, the capturing the plurality of images is performed in series, and the comparing the plurality of images to the enrolled template is performed in series.
- Example 12 The computer-implemented method of any of examples 1 to 10, wherein the one or more images is a plurality of images, the illuminating is performed in parallel to capture the plurality of images, the capturing the plurality of images is performed in parallel, and the comparing the plurality of images to the enrolled template is performed in parallel.
- Example 13 The computer-implemented method of any of examples 1 to 12, wherein the enrolled template includes vector-based templates, and wherein comparing the one or more images compares a vector conversion of the one or more images to the vector-based templates.
- Example 14 The computer-implemented method of any of examples 1 to 13, wherein the enrolled template includes multiple blocks, the multiple blocks arranged to be the same size as the small regions, and wherein comparing the one or more images to the enrolled template compares each of the one or more images to the multiple blocks to determine a confidence level for each of the one or more images, and wherein authenticating the user touch is performed responsive to a confidence threshold being met by the determined confidence level.
- Example 15 The computer-implemented method of example 14, wherein the individual blocks are: overlapping; non-overlapping and apart, with a sliding distance of more than one pixel between the blocks; or adjacent, with a sliding distance of zero or one pixel between the blocks.
- Example 16 The computer-implemented method of any of examples 1 to 15, wherein enabling use of the function or peripheral comprises unlocking a user device with which the display system is associated.
- Example 17 The computer-implemented method of any of examples 1 to 16, wherein the user touch includes two or more touches from two or more fingertips and the small regions include at least one small region for each of the two or more fingertips.
- Example 18 A user device comprising: a display of a display system; a sensor; one or more processors; and one or more computer-readable media having instructions thereon that, responsive to execution by the one or more processors, perform the operations of the method of any of examples 1 to 17.
Description
FINGERPRINT SENSORS WITH REDUCED-ILLUMINATION PATTERNS
BACKGROUND
[0001] To address usability of fingerprint sensors without using valuable “real estate” on a display side of a user device, some device manufacturers embed sensors under their displays, such as with a large optical Under Display Fingerprint Sensor (UDFPS) embedded under an Organic Light-Emitting Diode (OLED) display. The use of a large optical UDFPS embedded under the OLED display, however, requires considerable computational time and power to authenticate a user. Also, over time, such a solution degrades the OLED display.
SUMMARY
[0002] This disclosure describes methods and techniques for using reduced- illumination patterns with fingerprint sensors, as well as apparatuses including those fingerprint sensors. Using these reduced-illumination patterns, the techniques can authenticate a user in a shorter amount of time, with reduced power consumption and reduced damage to a display screen, such as an organic light-emitting diode (OLED) display.
[0003] In one aspect, a computer-implemented method comprises determining, based on a location of a user touch to a display of a display system, small regions of the display, the small regions within a touch area of the display over which the user touch is superimposed, the small regions representing a reduced area relative to all of the touch
area. Then the method includes illuminating with radiation, each of the small regions of the display, the illumination effective to cause the radiation to reflect off a user’s skin touching the touch area. After illuminating, the method includes capturing images at a sensor, the sensor configured to receive the radiation reflected off the user’s skin touching the touch area at one or more of the small regions, the images including one or more images corresponding to the one or more of the small regions, respectively. Once the sensor has received the radiation reflected off the user’s skin, the method includes comparing the one or more images to an enrolled template, the enrolled template associated with a fingerprint of a verified user. The method also includes authenticating the user touch to the display based on the comparing of the one or more images to the enrolled template. Finally, responsive to authenticating the user touch, the method includes enabling use of a function or peripheral.
[0004] In another aspect, a user device comprises a display of a display system, a sensor, one or more processors, and one or more computer-readable media having instructions thereon that, responsive to execution by the one or more processors, perform the operations of the method mentioned above. In yet other aspects, a system, software, or other means perform the operations of the method mentioned above.
[0005] Throughout the disclosure, examples are described where a computing system (e.g., the user device) analyzes information (e.g., fingerprint images) associated with a user or a user device. The computing system uses the information associated with the user after the computing system receives explicit permission from the user to collect, store, or analyze the information. For example, in situations discussed below in which a
user device authenticates a user based on fingerprints, the user will be provided with an opportunity to control whether programs or features of the user device or a remote system can collect and make use of the fingerprint for a current or subsequent authentication procedure. Individual users, therefore, have control over what the computing system can or cannot do with fingerprint images and other information associated with the user. Information associated with the user (e.g., an enrolled image), if ever stored, is pre-treated in one or more ways so that personally identifiable information is removed before being transferred, stored, or otherwise used. For example, before a user device stores an enrolled image (also referred to as an “enrolled template”), the user device may encrypt the enrolled image. Pre-treating the data this way ensures the information cannot be traced back to the user, thereby removing any personally identifiable information that would otherwise be inferable from the enrolled image. Thus, the user has control over whether information about the user is collected and, if collected, how such information may be used by the computing system.
[0006] This summary introduces simplified concepts for fingerprint sensors with reduced-illumination patterns for capturing and matching a fingerprint, which is further described below in the Detailed Description and Drawings. For ease of description, the disclosure focuses on optical UDFPSs that use the OLED display to transmit radiation with various illumination patterns for capturing and matching the fingerprint. The techniques, however, are not limited to the use of visible light, UDFPS, or OLEDs. Also, the techniques are not limited to fingerprint identification on hands and feet; the techniques also apply to other forms of biometric identification, such as for retinal identification. This
summary is not intended to identify essential features of the claimed subject matter, nor is it intended for use in determining the scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The details of one or more aspects of user devices with fingerprint identification systems that utilize reduced-illumination patterns for capturing and matching a fingerprint are disclosed. The same numbers are used throughout the drawings to reference like features and components:
FIG. 1 illustrates an example user device that authenticates a user using reduced- illumination patterns for capturing and matching a fingerprint.
FIG. 2 illustrates another example user device that authenticates a user using reduced- illumination patterns for capturing and matching a fingerprint.
FIG. 3 illustrates an example of a fingerprint identification system that implements reduced-illumination patterns for capturing and matching a fingerprint.
FIG. 4 illustrates aspects of block-by-block matching used in reduced-illumination patterns for capturing and matching a fingerprint.
FIG. 5 illustrates another example user device with a display screen and an optical Under Display Fingerprint Sensor (UDFPS).
FIG. 6 illustrates a cross-section of an optical UDFPS embedded under an OLED display.
FIG. 7 illustrates another example user device that utilizes reduced-illumination patterns for capturing and matching a fingerprint.
FIG. 8 illustrates an example environment of a user device that dynamically illuminates, captures, transfers, and matches an image of a fingerprint to the enrolled template.
FIG. 9 illustrates another example environment of a user device that dynamically illuminates, captures, transfers, and matches a plurality of fingerprint images to a plurality of enrolled templates.
FIG. 10-1 illustrates an example logic-flow diagram for a capturing module of the fingerprint identification system of FIG. 3.
FIG. 10-2 illustrates an example logic-flow diagram for a matching module of the fingerprint identification system of FIG. 3.
FIG. 11 illustrates a computer-implemented method that implements reduced- illumination patterns for capturing and matching a fingerprint.
FIG. 12 illustrates examples of patterns and minutiae used in matching fingerprints.
DETAILED DESCRIPTION
Overview
[0008] This document describes apparatuses, methods, and techniques that enable large-area fingerprint sensors to capture a fingerprint and match it to an enrolled template with a reduced-illumination pattern.
[0009] For example, a user device (e.g., a smartphone, a tablet computer, a wristwatch) may use a fingerprint identification system to capture a “verify image” and
match patterns and/or minutiae of the verify image to an enrolled image. As described herein, a “verify image” is a fingerprint image used for authentication. An “enrolled image” is an image that the user device captures during enrollment, such as when the user first sets up the smartphone or an application. Further, as described herein, an “enrolled image template” (an enrolled template) can be a mathematical representation of the enrolled image. The enrolled template can be a vectorized representation of the enrolled image and, among other advantages noted below, take less memory space in the user device. While beneficial in some respects, the use of a vectorized representation for an enrolled image template is not required for matching a verify image to the enrolled image template. The described apparatuses, methods, and techniques can perform image-to-image (rather than vector-to-vector) comparisons, as well as other representations, to compare each verify image to the enrolled image.
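As a purely illustrative sketch of what a vectorized enrolled template might look like in memory (the field names below are assumptions, not drawn from the disclosure):

from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class EnrolledBlock:
    # One M' block of an enrolled template, stored as vectors instead of pixels.
    x: int                  # block center, in sensor pixels
    y: int
    afft: np.ndarray        # rotation-invariant magnitude spectrum of the block
    polar_fft: np.ndarray   # FFT of the block's polar representation

@dataclass
class EnrolledTemplate:
    blocks: List[EnrolledBlock]   # covers the enrolled fingerprint image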
[0010] In more detail, small-area fingerprint sensors, whether embedded on the back or the front of the user device, limit the number of patterns and/or minutiae of the fingerprint image that are captured for authentication. On the other hand, when the user device uses a large-area fingerprint sensor, patterns and/or minutiae matching is achieved with high success. Note that in biometric security the success rate is often characterized using a receiver operating characteristic (ROC) curve, which is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. More specifically, biometric security measurements may include the false acceptance rate (FAR), the proportion of times a fingerprint identification system grants access to an unauthorized person, and the false rejection rate (FRR), the proportion of times a fingerprint
identification system fails to grant access to an authorized person. Qualitatively speaking, a fingerprint identification system with a high success rate has a low false acceptance rate and a low false rejection rate. With more detail in a large fingerprint image, it is possible to make a more-accurate identification (lower false acceptance rate and lower false rejection rate). Standalone large-area fingerprint sensors, however, occupy valuable space and limit the size of the display screen when embedded on the front of a user device (e.g., a smartphone).
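For clarity, FAR and FRR can be computed from labeled match scores with a few lines of code; this is a generic sketch of the definitions above, not a procedure taken from the disclosure:

def far_frr(impostor_scores, genuine_scores, threshold):
    # FAR: fraction of unauthorized attempts wrongly granted access.
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    # FRR: fraction of authorized attempts wrongly denied access.
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Sweeping the threshold and plotting FAR against (1 - FRR) traces out the ROC.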
[0011] To address issues that arise from the use of fingerprint sensors that are embedded as standalone localized sensors, some manufacturers may use optical Under Display Fingerprint Sensors (UDFPSs). Manufacturers may embed optical UDFPSs under an OLED display screen. The OLED display screen accommodates a large fingerprint area without sacrificing valuable “real estate” of the display screen to be used for the primary purpose of fingerprint authentication. Nevertheless, the use of optical UDFPSs has some drawbacks. When the user device utilizes a standalone localized fingerprint sensor, the user can easily find the fingerprint sensor by sight and/or by the feel of a touch. On the other hand, when the user device utilizes an optical UDFPS, the user often cannot see or touch the fingerprint sensor because the display screen (e.g., an OLED display) is smooth. Thus, another reason for embedding large-area optical UDFPSs is to aid the user in easily presenting a finger or a plurality of fingers for authentication.
[0012] These large-area optical UDFPSs may require, however, the OLED display screen to operate in high-brightness mode (HBM) to capture the patterns and/or minutiae. Generally, a user device with a large-area optical UDFPS locates the fingerprint
location using a touch-sensitive or location-sensitive display screen. Then, in high-brightness mode, the OLED display screen illuminates a large area, such as a square area, around the fingerprint location. The user device captures the verify image in the large area and may use a processor (e.g., an image processor) to match the verify image to the enrolled template.
[0013] Another issue with using the optical UDFPS with an OLED display screen for fingerprint authentication is that it requires a considerable amount of power and time. Use of these optical UDFPS techniques may require illumination of a large area in high-brightness mode, capture of a large verify image, transfer of the large verify image to the processor, and matching of the verify image to the enrolled template. Further, if the quality of the verify image is low, the user device may repeat this process until it can determine with enough confidence (e.g., low false acceptance rate and low false rejection rate) whether to grant or deny access to the user. Moreover, OLED display screens used in user devices with optical UDFPSs exhibit localized “aging” of the display screen due to the repeated operations in high-brightness mode, which degrades the organic material in OLED display screens. Localized aging of the OLED display screen adversely affects the quality of the display screen. For example, when the OLED display screen displays an image with a white background, the user can easily notice locations on an OLED display screen where fingerprint sensing has been used, because those locations appear as an off-white area in a white background.
[0014] To address issues that arise from standalone all-area fingerprint sensors (e.g., capacitive fingerprint sensors) and issues that arise from the large optical UDFPSs
embedded under the OLED display, this disclosure illustrates how to achieve user authentication in a short amount of time, preserve the battery charge of the user device, and preserve the quality of the OLED display screen by employing block-by-block capture- and-match techniques.
[0015] Block-by-block capturing and matching, where a block is NxN (e.g., 31x31) pixels, leverages a technique of “fusion” of minutiae matching and pattern-correlation matching of a fingerprint. Note that some user devices rely on conventional pattern-correlation matching instead of minutiae matching, and they attempt to correlate an entire fingerprint at once. This conventional technique is not realistically scalable and cannot easily support large areas, such as several square centimeters, or high-resolution fingerprint images (e.g., resolutions of a thousand Dots-Per-Inch (DPI)). In contrast, the techniques disclosed here fuse minutiae and pattern-correlation matching, which can reduce the illumination used to verify fingerprints.
[0016] While features and concepts of the described apparatuses, methods, and techniques for fingerprint identification systems of user devices can be implemented in any number of different environments, systems, devices, and/or various configurations, aspects that enable the fingerprint identification system with large-area fingerprint sensor(s) to capture a fingerprint (e.g., a verify image) and match it to an enrolled template are described in the context of the following example devices, systems, methods, and configurations.
Example Environments and Principles of Block-by-Block Matching
[0017] FIG. 1 illustrates an example user device 100 that authenticates a user input using reduced-illumination patterns for capturing and matching a fingerprint or a plurality of fingerprints. As described below, the user device 100 fuses minutiae matching with pattern-correlation matching. By using reduced-illumination patterns for capturing and matching the fingerprint(s), the user device 100 can authenticate a user in a short amount of time (low latency) and preserve the battery charge (less power) of the user device 100. In addition, the user device 100 utilizes a UDFPS embedded under an OLED display, using reduced illumination (e.g., illuminating visible light) patterns for capturing and matching the fingerprint(s), which preserves the quality of the OLED display screen.
[0018] The user device 100 may be any mobile or non-mobile user device. As a mobile user device, the user device 100 can be a mobile phone (e.g., a smartphone), a laptop computer, a wearable device (e.g., watches, eyeglasses, headphones, clothing), a tablet device, an automotive/vehicular device, a portable gaming device, an electronic reader device, a remote-control device, or another mobile computing device that relies on fingerprint identification to perform a function. As a non-mobile user device, the user device 100 may represent a server, a network terminal device, a desktop computer, a television device, a display device, an entertainment set-top device, a streaming media device, a tabletop assistant device, a non-portable gaming device, business conferencing equipment, a payment station, a security checkpoint system, or other non-mobile computing device including a fingerprint identification system.
[0019] The user device 100 includes an application 102, a fingerprint identification system 104, and a display screen 106. The fingerprint identification system 104 includes a sensor 108, an enrolled image template 110 (enrolled template 110), and the display screen 106 with a graphical user interface (GUI). The GUI may include instructions for the user to follow to authenticate themselves with the sensor 108. For example, the GUI may include a target graphical element (e.g., an icon, a designated region) where the user is to touch the display to provide their fingerprint, or the user may be permitted to place their fingertip(s) anywhere on the screen that the user prefers. These and other components of the user device 100 are communicatively coupled in various ways, including through the use of wired and wireless buses and links. The user device 100 may include additional or fewer components than what is illustrated in FIG. 1.
[0020] The application 102 is some software, applet, peripheral, or other entity that requires or prefers authentication of a user. For example, the application 102 can be a secured component of the user device 100 or an access entity to secure information accessible from the user device 100. The application 102 can be an online banking application software or webpage that requires fingerprint identification before logging in to an account. Or the application 102 may be part of an operating system (OS) that prevents access (generally) to the user device 100 until the user’s fingerprint is identified. The application 102 may execute partially or wholly on the user device 100 or in “the cloud” (e.g., on a remote device accessed through the Internet). For example, the application 102 may provide an interface to an online account, such as through an internet browser or an application programming interface (API).
[0021] The sensor 108 can be any sensor able to capture a high-resolution image, such as five hundred (500) Dots-Per-Inch (DPI), seven hundred (700) DPI, one thousand (1000) DPI, and so forth. Depending on the imaging technology that the user device 100 utilizes to capture the fingerprint image, the sensor 108 may be a complementary metal-oxide-semiconductor (CMOS) image sensor, a charge-coupled device (CCD) image sensor, a capacitive image sensor, an ultrasonic image sensor, a thin film transistor (TFT) image sensor, a quanta image sensor (QIS), and so forth. Thus, the techniques and methods described herein are not limited to a specific imaging technology.
[0022] The enrolled template 110 represents a predefined, user-specific fingerprint image template. The fingerprint identification system 104 records and stores the enrolled template 110 in advance during a coordinated setup session with the user device 100 and a particular user. Using the GUI being displayed in the display screen 106, the user device 100 instructs the user to press a finger on the sensor 108 one or more times until the fingerprint identification system 104 has an accurate image of the user’s fingerprint (or fingerprints, palm prints), which the user device 100 retains as an enrolled template 110.
[0023] The fingerprint identification system 104 captures individual blocks of the fingerprint that are recognizable from user input at the sensor 108. A block can be an image area of NxN pixels (e.g., 31x31 pixels, 23x23 pixels). The fingerprint identification system 104 uses minutiae matching, pattern-correlation, or both, to extract individual blocks initially captured by the sensor 108 that may indicate a match to corresponding blocks of the enrolled template 110. Rather than the sensor 108 capturing the entire fingerprint image presented on the sensor 108, the fingerprint identification system 104 captures blocks
representing only some of the entire fingerprint image and matches the captured blocks to blocks of the enrolled template 110. The fingerprint identification system 104 can match and verify blocks in parallel or series and from one or more fingerprints. Serially, the fingerprint identification system 104 can first capture a predetermined number of blocks from the fingerprint(s) that the user presents to the sensor 108, and then the fingerprint identification system 104 matches the captured blocks from the fingerprint image(s) to blocks contained in the enrolled template 110. As the fingerprint identification system 104 captures one or more blocks from the fingerprint(s) that the user presents to the sensor 108, the fingerprint identification system 104, in situ and/or dynamically, can match, operating in parallel, the captured blocks from the fingerprint image(s) to blocks contained in the enrolled template 110.
[0024] As one example, assume that the user device 100 detects a user input at the sensor 108. The fingerprint identification system 104 divides the user input into a quantity of P blocks, where P is an integer. The fingerprint identification system 104, however, by using the sensor 108, captures M number of blocks, where M is an integer less than P, from the user input, evaluating part of the full image during each iteration of the fingerprint capture.
[0025] The fingerprint identification system 104 scores each of the captured individual M blocks against corresponding M’ blocks of the enrolled template 110, where M’ is an integer equal to M. Note that the M blocks represent a number of blocks in the verify image, while the M’ blocks represent a corresponding same number of blocks in the enrolled template 110. For example, by transforming the M blocks of the fingerprint image
(the verify image) into rotationally-invariant vectors, the fingerprint identification system 104 may identify and compare the closest matching rotationally-invariant vectors of the corresponding M’ blocks of the enrolled template 110 in any direction. This transformation maps a block to the same vectors regardless of the block’s orientation. In one aspect, the fingerprint identification system 104 replaces the minutiae with a pattern but treats the pattern like minutiae by assigning a location and an orientation to the pattern.
[0026] The fingerprint identification system 104 generates vectors for each block of the captured blocks. For example, each respective vector may include the following:
1) Rotationally invariant Absolute-value Fast Fourier Transforms (AFFTs) of the block;
2) The block’s x-position and y-position — the Cartesian coordinates;
3) The block’s polar representation of the Cartesian coordinates; and
4) The block’s Fast Fourier Transforms (FFTs) of the polar representation with a high resolution in the theta (θ) direction, where theta (θ) is further described in FIG. 4.
The fingerprint identification system 104 determines respective scores of each of the captured individual M blocks from the vectors, and a confidence that the individual blocks M match the corresponding blocks M’ of the enrolled template 110. Based on the scores and confidences of the individual blocks, the fingerprint identification system 104 iteratively computes a confidence and a score for the user input relative to the enrolled template 110.
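The four vector items above can be approximated in a short NumPy sketch. The polar-resampling details, the grid sizes, and the function name below are assumptions made for illustration; the disclosure does not specify them:

import numpy as np

def block_vectors(block, cx, cy, n_theta=64, n_r=16):
    # block: NxN grayscale array; (cx, cy): the block's center in sensor pixels.
    n = block.shape[0]
    # 1) Rotationally invariant AFFT: the magnitude spectrum of the block.
    afft = np.abs(np.fft.fft2(block))
    # 2) The block's Cartesian coordinates are carried along as (cx, cy).
    # 3) Polar resampling of the block around its center.
    radii = np.linspace(1, n // 2 - 1, n_r)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    xs = (n // 2 + rr * np.cos(tt)).astype(int)
    ys = (n // 2 + rr * np.sin(tt)).astype(int)
    polar = block[ys, xs]
    # 4) FFT of the polar representation, high resolution along theta:
    # rotating the block becomes a phase shift along the theta axis.
    polar_fft = np.fft.fft(polar, axis=1)
    return afft, (cx, cy), polar, polar_fft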
[0027] The fingerprint identification system 104 may initially capture one, two, three, and incrementally more, blocks (i.e., M = 1, M = 2, M = 3, ...), extracting the above-mentioned vectors from each captured block, and serially or in parallel, match the user input M blocks to corresponding M’ blocks of the enrolled template 110. The fingerprint identification system 104 updates the confidence and score that the user input matches the enrolled template 110. If during this iterative process the confidence in the user input does not satisfy a confidence threshold within a pre-determined latency time, the user input is marked as unidentifiable or unrecognized. The user device 100 conserves power by denying access to the user input without needing to capture and verify the whole fingerprint image. However, if at any time prior to or after capturing all the individual M blocks, the fingerprint identification system 104 determines that the confidence and score of the user input satisfy respective thresholds, the fingerprint identification system 104 may automatically match the user input (the verify image) to the enrolled template 110, thereby authenticating the user input and granting access to the application 102 without having to capture an entire fingerprint image. Note that the pre-determined latency time and the number of matching blocks to authenticate a user may differ from user to user, depending on a user’s fingerprint ridges. For example, some users may have flatter fingerprint ridges compared to other users. In that case, the user device 100 with the associated fingerprint identification system 104 may take a little longer to authenticate users with flatter-than-normal fingerprint ridges. The users with flatter-than-normal fingerprint ridges, however, can still benefit from using the fingerprint identification system 104. Regardless of the “quality” of a user’s fingerprint ridges, the fingerprint identification system 104 can
authenticate the user in a shorter amount of time, using less power, and preserving the quality of the OLED display.
[0028] Alternatively, the fingerprint identification system 104 may initially capture a predetermined number of M out of P blocks (e.g., M = 9) and extract the above-mentioned vectors from each captured block. Then, the fingerprint identification system 104 matches the user input M blocks to the corresponding M’ blocks of the enrolled template 110. The fingerprint identification system 104 calculates the confidence scores that the user input matches the enrolled template 110 and determines to either grant or deny access to the user. Additionally, the fingerprint identification system 104 may authenticate the user by evaluating fingerprints from one finger or a plurality of fingers, as will become clearer in the description below.
[0029] FIG. 2 illustrates another example user device 200 that authenticates user inputs using reduced-illumination patterns for capturing and matching a fingerprint. The user device 200 is an example of the user device 100 set forth in FIG. 1. FIG. 2 illustrates the user device 200 as being a variety of example devices, including a smartphone 200-1, a tablet 200-2, a laptop 200-3, a desktop computer 200-4, a computing watch 200-5, computing eyeglasses 200-6, a gaming system or controller 200-7, a smart speaker system 200-8, and an appliance 200-9. The user device 200 can also be other devices, such as televisions, entertainment systems, audio systems, automobiles, unmanned vehicles (in-air, on-the-ground, or submersible “drones”), trackpads, drawing pads, netbooks, e-readers, home security systems, doorbells, refrigerators, and other devices with a fingerprint identification system.
[0030] The user device 200 includes one or more computer processors 202, one or more computer-readable media 204 (CRM 204), and one or more sensor components 206. The user device 200 further includes one or more communication and input/output (I/O) components 208, which can operate as an input device and/or an output device, for example, presenting a GUI and receiving inputs directed to the GUI. The one or more CRM 204 includes the application 102, the fingerprint identification system 104, the enrolled template 110, and a secured data store 214. In the user device 200, the fingerprint identification system 104 includes an identification module 212. Other programs, services, and applications (not shown) can be implemented as computer-readable instructions on the CRM 204, which can be executed by the computer processors 202 to provide functionalities described herein. The computer processors 202 and the CRM 204, which include memory media and storage media, are the main processing complex of the user device 200. The sensor 108 is included as one of the sensor components 206.
[0031] The computer processors 202 may include any combination of one or more controllers, microcontrollers, processors, microprocessors, hardware processors, hardware processing units, digital signal processors, graphics processors, graphics processing units, and the like. The computer processors 202 may be an integrated processor and memory subsystem (e.g., implemented as a “system-on-chip”), which processes computer- executable instructions to control operations of the user device 200.
[0032] The CRM 204 is configured as persistent and non-persistent storage of executable instructions (e.g., firmware, software, applications, modules, programs, functions) and data (e.g, user data, operational data, online data) to support execution of
the executable instructions. Examples of the CRM 204 include volatile memory and non-volatile memory, fixed and removable media devices, and any suitable memory device or electronic data storage that maintains executable instructions and supporting data. The CRM 204 can include various implementations of random-access memory (RAM), read-only memory (ROM), flash memory, and other types of storage memory in various memory device configurations. The computer-readable media 204 excludes propagating signals. The computer-readable media 204 may be a solid-state drive (SSD) or a hard disk drive (HDD).
[0033] In addition to the sensor 108, the sensor components 206 may include other sensors for obtaining contextual information (e.g., sensor data) indicative of operating conditions (virtual or physical) of the user device 200 or the user device 200’s surroundings. The user device 200 monitors the operating conditions based in part on sensor data generated by the sensor components 206. In addition to the examples given in FIG. 1 for the sensor 108 to detect fingerprints, other examples of the sensor components 206 include various types of cameras (e.g., optical, infrared), radar sensors, inertial measurement units, movement sensors, temperature sensors, position sensors, proximity sensors, light sensors, infrared sensors, moisture sensors, pressure sensors, and the like.
[0034] The communication and I/O component 208 provides connectivity to the user device 200 and other devices and peripherals. The communication and I/O component 208 includes data network interfaces that provide connection and/or communication links between the device and other data networks, devices, or remote systems (e.g., servers). The communication and I/O component 208 couples the user device 200 to a variety of
different types of components, peripherals, or accessory devices. Data input ports of the communication and I/O component 208 receive data, including image data, user inputs, communication data, audio data, video data, and the like. The communication and I/O component 208 enables wired or wireless communication of device data between the user device 200 and other devices, computing systems, and networks. Transceivers of the communication and I/O component 208 enable cellular phone communication and other types of network data communication.
[0035] The identification module 212 directs the fingerprint identification system 104 to perform reduced-illumination patterns for capturing and matching a fingerprint detected at the sensor components 206. In response to receiving an indication that the sensor components 206 detect a user touch, the identification module 212 obtains some of the individual blocks of the user input being captured by the sensor 108 and scores each of the captured individual blocks against different blocks of the enrolled template 110. Whether the user device 200 and the fingerprint identification system 104 perform the user authentication serially or in parallel, when the identification module 212 directs the sensor 108 to capture additional individual blocks, the identification module 212 compiles the scores of the individual blocks already captured and generates, from the scores, a confidence value associated with the user input. In some cases, prior to capturing all the individual blocks of the user input, and as soon as the confidence satisfies a threshold, the identification module 212 automatically matches the user input to the enrolled template 110 and authenticates the user input. By doing so, the latency can be reduced for some of the authentication attempts.
[0036] In response to the identification module 212 outputting an indication that a fingerprint is identified and matched to the user, the application 102 may grant the user access to the secured data store 214. Otherwise, the identification module 212 outputs an indication that fingerprint identification failed, and the user is restricted from having access to the secured data store 214, or the identification module 212 repeats the authentication process to verify that fingerprint identification failed more than once.
[0037] FIG. 3 illustrates an example of a fingerprint identification system 104-1 that implements reduced-illumination patterns. This fingerprint identification system 104-1 illuminates, captures, and matches (or fails to match) a fingerprint or a plurality of fingerprints to the enrolled template 110. Similar to the fingerprint identification system 104, the fingerprint identification system 104-1 includes the enrolled template 110 and the sensor 108. The fingerprint identification system 104-1 further includes an identification module 302, which includes a capturing module 304 and a matching module 306.
[0038] The capturing module 304 captures a user input at the sensor 108 at the direction of the identification module 302. The matching module 306 attempts to match the output from the capturing module 304 to the enrolled template 110. Instead of expecting the capturing module 304 to capture an entire fingerprint, the matching module 306 scores captured blocks against blocks of the enrolled template 110, and the scores R are tracked as new blocks (where M is less than P) are captured by the capturing module 304. The matching module 306 determines, based on the confidences and scores R associated with the individual M blocks, an overall composite score S and confidence C associated with the user input matching the enrolled template 110.
[0039] The matching module 306 uses the highest-ranking individual block scores to produce the overall score S indicating whether the fingerprint matches the enrolled template 110. The matching module 306 maintains the confidence C in the overall score S and the confidence C increases as the confidence in the highest-ranking individual block scores also increase. As more blocks are captured and matched, the confidence C in the overall image score grows. The matching module 306 determines whether or not the confidence C satisfies a confidence threshold for matching the fingerprint image to the enrolled template 110. Rather than wait for, or expect the capturing module 304 to capture, an entire image, the fingerprint is authenticated as soon as the score S and its confidence C satisfy their respective thresholds.
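A compact sketch of this early-exit scoring follows. The aggregation rule and the thresholds are illustrative assumptions; the disclosure only requires that S and C be derived from the highest-ranking block scores and compared against thresholds:

def update_overall(block_scores, top_k=10, s_threshold=0.7, c_threshold=0.8):
    # block_scores: per-block matching scores R gathered so far, each in [0, 1].
    best = sorted(block_scores, reverse=True)[:top_k]
    s = sum(best) / len(best)             # overall score S from the top blocks
    c = s * min(1.0, len(best) / top_k)   # confidence C grows as blocks accrue
    return (s >= s_threshold and c >= c_threshold), s, c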
[0040] FIG. 4 illustrates aspects of block-by-block matching used in reduced-illumination patterns for capturing and matching a fingerprint. FIG. 4 is described in the context of the fingerprint identification system 104-1 and illustrates a verify image 402, which is divided into M blocks, including block 404. Each of the M blocks, including the block 404, is NxN (N can be any finite integer, such as 23 or 31) pixels and is separated from another block by separation distances sDIS. The M blocks can be overlapping, non-overlapping and apart (with a sliding distance of more than one pixel between the blocks), or adjacent (with a sliding distance of zero or one pixel between the blocks). FIG. 4 further includes an NxN sized block 406 of the enrolled template 110 and a ranked table 408.
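To make the block geometry concrete, the following sketch (illustrative only; the tiling rule is an assumption) slices a verify image into NxN blocks with a configurable sliding distance, producing the overlapping, adjacent, or spaced-apart arrangements described above:

import numpy as np

def split_into_blocks(image, n=31, slide=31):
    # slide < n: overlapping blocks; slide of n or n + 1: adjacent blocks;
    # slide > n + 1: non-overlapping blocks that are apart.
    blocks = []
    h, w = image.shape
    for y in range(0, h - n + 1, slide):
        for x in range(0, w - n + 1, slide):
            blocks.append(((x, y), image[y:y + n, x:x + n]))
    return blocks

verify_image = np.zeros((155, 155))          # stand-in for the verify image 402
m_blocks = split_into_blocks(verify_image)   # yields 5 x 5 = 25 adjacent blocks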
[0041] The angular rotation around the center points of the blocks 404 and 406 in Cartesian coordinates transforms into a translation along the theta (θ) direction in the polar-coordinate representation; this is called “phase shifting.” Fast Fourier Transforms (FFTs) assume periodic boundary conditions. As such, the Absolute-value FFT (AFFT) of the block 404 represented in polar coordinates is rotationally invariant, and the rotation angle of the block 404 is the location of the maximum correlation between the FFTs of the blocks 404 and 406, represented in polar coordinates.
[0042] The rotation and the translation between the two blocks 404 and 406 can be combined into a single rotation-and-translation matrix, which in homogeneous coordinates can be defined as:

$$RM = \begin{bmatrix} \cos\varphi & -\sin\varphi & T_x \\ \sin\varphi & \cos\varphi & T_y \\ 0 & 0 & 1 \end{bmatrix}$$

where φ represents the rotation angle between the center points of the two blocks 404 and 406 in Cartesian coordinates, $T_x$ represents the translation along the x-axis between the two blocks 404 and 406, and $T_y$ represents the translation along the y-axis between the two blocks 404 and 406.

[0043] The x-coordinates and the y-coordinates of the block 406 can be transformed into the coordinate system of the block 404 using Equation 1:

$$\begin{bmatrix} x_{404} \\ y_{404} \\ 1 \end{bmatrix} = RM_{21} \begin{bmatrix} x_{406} \\ y_{406} \\ 1 \end{bmatrix} \quad \text{(Equation 1)}$$

[0044] Furthermore, the rotation-and-translation matrix that maps coordinates of the block 404 into the coordinate system of the block 406, herein called $RM_{12}$, is the inverse of the matrix $RM_{21}$ of Equation 1, as shown in Equation 2:

$$RM_{12} = (RM_{21})^{-1} \quad \text{(Equation 2)}$$
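As a small numerical check of Equations 1 and 2 (an illustrative sketch, not code from the disclosure):

import numpy as np

def rigid_transform(phi, tx, ty):
    # The homogeneous rotation-and-translation matrix defined above.
    return np.array([[np.cos(phi), -np.sin(phi), tx],
                     [np.sin(phi),  np.cos(phi), ty],
                     [0.0,          0.0,         1.0]])

rm21 = rigid_transform(np.deg2rad(15), 4.0, -2.0)
p406 = np.array([10.0, 5.0, 1.0])        # a point in the frame of block 406
p404 = rm21 @ p406                       # Equation 1: map into block 404's frame
rm12 = np.linalg.inv(rm21)               # Equation 2: RM12 = (RM21)^-1
assert np.allclose(rm12 @ p404, p406)    # round-trips back to block 406's frame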
[0045] The capturing module 304 determines a similarity between the vectors of the verify blocks 404 and the vectors of the enrolled blocks 406 along with the angle of rotation φ and the correlation. The matching module 306 then extracts the x-coordinate, the y-coordinate, and the angle correspondence in the output from the capturing module 304 and calculates the translation in the x-direction and y-direction for each block of the verify image.
[0046] At this stage, the matching module 306 merges vectors from the enrolled image templates using a rotation and translation matrix and drops redundant vectors based on a quality score, using this quality score to rank the highest (e.g., top-ten) translation and rotation vectors. The ranked table 408 represents a data structure (table) that the matching module 306 may use to maintain the highest-ranking translation and rotation vectors.
[0047] The outcome of the matching is the number of matching blocks between the verify image 402 and the enrolled image 406 that show similar translation and rotation. To increase the robustness of the matching, a small error is allowed in the translation and rotation to account for variations due to skin-plasticity distortions.
[0048] The described techniques using reduced-illumination patterns for capturing and matching can be used for other forms of biometric matching, such as iris, palm print, and footprint. One area where parallel capturing and matching techniques tend to fail is when attempting to match a perfect pattern (e.g., a perfect zebra pattern) because, in that
case, each block from the enrolled image and the verify image is identical. This limitation, however, becomes irrelevant because biometric patterns are not perfect. It is that imperfection and uniqueness that give the biometric pattern, and the reduced-illumination patterns for capturing and matching techniques, their value.
[0049] The implementation of reduced-illumination patterns for capturing and matching the fingerprint image is achieved without increasing complexity while reducing latency, using less computational and battery power, and, in some cases, preserving the quality of the display screen 106 (e.g., an OLED display).
Operational Principles of Optical UDFPSs embedded under OLED Displays
[0050] FIG. 5 is an example of a user device 500 (e.g., a smartphone) with a display screen 506 (e.g., an OLED display) and an optical Under Display Fingerprint Sensor (UDFPS). The UDFPS (not illustrated in FIG. 5) has an active area 508 that can capture a fingerprint image, such as a verify image 510, when the user places their finger pad on the display screen 506. In the example user device 500, the active area 508 is smaller than the display screen 506 and is located on the bottom portion of the display screen 506 of the user device 500. The active area 508, however, can be a portion of the display screen 506 (as is illustrated in FIG. 5) or can be as large as the display screen 506 (not illustrated as such in FIG. 5). Given that the user device 500 utilizes an optical UDFPS, the user device 500 does not sacrifice valuable display area (e.g., at the bottom of the user device 500) to embed a standalone small-area fingerprint sensor. Instead, the user device 500 sacrifices only the top part of the user device 500 to include a speaker 502 and a front-facing camera
504. Note that the user device 500 may have other hardware features (not illustrated), such as a power button, a volume button, a home button, and so forth, that may be embedded in or on the user device 500.
[0051] The display screen 506 of the user device 500 can be touch-sensitive or location-sensitive, which enables the display screen 506 to determine the location of the user’s finger pad prior to or during a user touch. Depending on what technology the user device 500 utilizes, a touch controller (not illustrated) determines the location of the user’s touch on the display screen 506. If the user device 500 utilizes technologies, such as radar, camera(s), motion sensors, or infrared optical imaging, the touch controller of the user device 500 may determine the location of the user’s touch just prior to the user touching the display screen 506. If the user device 500 utilizes technologies, such as resistive touch, surface capacitive touch, projected capacitive touch, or surface acoustic wave (SAW) touch, the touch controller of the user device 500 can determine the location of the user’s touch when the user touches the display screen 506.
[0052] As the touch controller of the user device 500 determines that the user’s touch is within the active area 508, the user device 500 captures the verify image 510 in a touch area 512 inside the active area 508. In detail, the touch controller of the user device 500 determines the center of the user’s touch and activates the touch area 512 such that the center of the user’s touch is the center of the touch area 512. If the display screen 506 is an OLED display (see FIG. 6 for more details), the OLED display illuminates in high-brightness mode the touch area 512 inside the active area 508 of the display screen 506. FIG. 5 includes the verify image 510, which is divided into P blocks, including block 520.
[0053] FIG. 6 illustrates a cross-section 600 of an optical Under Display Fingerprint Sensor (UDFPS) under an OLED display; refer to FIG. 5 for the example location of the described cross-section 600. Note that the cross-section 600 may include greater or fewer elements than are illustrated in FIG. 6. FIG. 6 helps introduce the operational principles of the optical UDFPS embedded under the OLED display; FIG. 6 is not intended to fully describe either the OLED display or the UDFPS.
[0054] The OLED display includes a red-green-blue pattern 608 (RGB pattern 608), which includes RGB light-emitting elements that the OLED display uses to illuminate visible light to a user’s finger pad 602. Note that each RGB light-emitting element represents a pixel. The OLED display also includes thin film transistors 610 (TFTs 610), which the OLED display uses to turn on and off each pixel in the RGB pattern 608. For simplicity, in this example, the RGB pattern 608 and the TFTs 610 make up the OLED display.
[0055] As the user touches the display screen 506 (the OLED display) inside the active area 508, the touch controller determines the location of the touch area 512; refer to FIG. 5. In some conventional systems, the OLED display illuminates light using all the pixels inside the touch area 512. The illuminated light, such as illuminated green light 612 and illuminated red light 614, passes through a cover glass 606 above the OLED display. Then, the illuminated light is reflected by ridges of a fingerprint 604 back through the cover glass 606. The reflected light, such as reflected green light 616 and reflected red light 618, is captured by a sensor 620 (e.g., a CMOS image sensor). Note that the sensor 620 includes cells, where each cell also represents a pixel. For simplicity, the pixel includes a red, green,
or blue lighting element, a TFT, and a cell of the sensor 620. In practice, however, the pixel may include more elements, which are not illustrated in FIG. 6. As the reflected light off the user’s finger pad (e.g., the reflected green light 616 and the reflected red light 618) travels towards the sensor 620, the light is directed by a collimator 622, which is embedded between the OLED display and the sensor 620. The collimator 622 guides the reflected light to each cell of the sensor 620 in such a way that each cell of the sensor 620 does not measure scattered light. The collimator 622 helps the sensor 620 to capture a verify image 510 in high resolution, with sharp instead of blurry images of the ridges of the fingerprint 604. The image information captured by the sensor 620 is transmitted to other components (not illustrated), such as peripheral integrated circuits (ICs), through bonding(s) 624.
[0056] In general, the OLED display ages as the user uses the user device 500. Moreover, repetitive use of the OLED display in high-brightness mode accelerates aging and, over time, creates localized aging in portions of the OLED display that are repetitively used. To minimize the damage caused to the organic material of the OLED display by the use of high-brightness mode, some manufacturers may design the OLED display to operate at less than the maximum (100%) of the high-brightness mode, such as at 50%. This solution may cause localized aging at a slower pace; for example, the user may notice the first signs of localized aging after four years instead of after two years of utilizing the user device 500. Although operating the OLED display at less than 100% of high-brightness mode is favorable, such a solution is not optimal. The user device 500 still uses a considerable amount of power and time to locate the fingerprint position, illuminate the touch area 512, capture a large verify image 510, transfer the large verify image 510 to the processor(s) 202, and match the verify image 510 to the enrolled template 110. Further, if the quality of the verify image 510 is low, the user device 500 may repeat this process until it can determine with enough confidence (e.g., low false acceptance rate and low false rejection rate) that the user is authorized to use the user device 500 or application software. Also, even if the user device 500 operates the OLED display screen at lower than 100% of the high-brightness mode, such as at 50%, localized aging still occurs, just at a slower pace. Operating the OLED display at far less than 100% of the high-brightness mode, such as at 10%, limits the amount and intensity of light reflected off the user’s finger pad 602, causing poor capturing of the verify image 510. Thus, when the user device 500 does not use reduced-illumination patterns for capturing and matching a fingerprint or a plurality of fingerprints, the OLED display, over time, will degrade.
Reduced-Illumination Patterns
[0057] FIG. 7 illustrates another example of a user device 700 (e.g., a smartphone) with a display screen 706 (e.g., an OLED display) and an optical Under Display Fingerprint Sensor (UDFPS). Similar to the example user device 500 in FIG. 5, the UDFPS in FIG. 7 has an active area 708 that can detect a user’s finger pad and capture a fingerprint image, such as a verify image 710, when the user places their finger pad on the display screen 706. Similar to the example user device 500 in FIG. 5, the user device 700 in FIG. 7 sacrifices only the top part of the user device 700 to include a speaker 702 and a front-facing camera 704.
[0058] As the touch controller of the user device 700 determines that the user’s touch is within the active area 708, the user device 700 captures the verify image 710 in a touch area 712 inside the active area 708. In detail, the touch controller of the user device 700 determines the center of the user’s touch (illustrated as finger pad center location 714) and activates the touch area 712 such that the finger pad center location 714 is the center of the touch area 712. FIG. 7 illustrates a square touch area 712. In practice, however, the touch area 712 can be any two-dimensional (2D) shape, such as a circle, an ellipse, a triangle, a rectangle, a hexagon, an octagon, and so forth.
[0059] Unlike the example user device 500 in FIG. 5, the OLED display in FIG. 7 illuminates in high-brightness mode only M blocks out of the P blocks (where M is less than P) inside the touch area 712, where block 720 is one of the blocks illuminated in high-brightness mode. So, instead of illuminating in high-brightness mode and capturing the entire verify image 710 to be authenticated against the enrolled template 110, the user device 700 illuminates, captures, and verifies only M out of P blocks of the verify image 710, where each block is NxN pixels (e.g., 31x31 pixels). This is one example of how the techniques reduce illumination using a pattern: here, the pattern is a set of blocks that, in total, are smaller than the touch area 712, so less of the display screen 706 is illuminated.
[0060] Given that this disclosure leverages block-by-block matching as described in FIGs. 1 through 4, in one aspect, the reduced-illumination pattern can be random within the touch area 712. For example, once the touch controller of the user device 700 determines the touch area 712, the fingerprint identification system (e.g., fingerprint identification system 104) of the user device 700 divides the touch area 712 into P blocks. Then the user device 700 may utilize a random-number generator (not illustrated) to illuminate only M out of P blocks, randomly. Similarly, the fingerprint identification system divides the enrolled template 110 into a quantity of P’ blocks. Over the life of the user device 700, the random-number generator causes the OLED display to operate different pixels, blocks, and area patterns in high-brightness mode. The use of the random-number generator to illuminate random blocks reduces usage of any one area of the OLED display and subsequently minimizes localized “aging” of the OLED display.
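A minimal Python sketch of this random selection follows; the grid dimensions, the value of M, and the use of Python’s random module as the random-number generator are assumptions for illustration:

```python
import random

# A minimal sketch: divide the touch area into P = grid_w * grid_h blocks
# (row-major indices) and pick M distinct blocks at random to illuminate.

def random_blocks(grid_w: int, grid_h: int, m: int,
                  rng: random.Random) -> list:
    p = grid_w * grid_h
    assert m <= p, "M must be less than or equal to P"
    return rng.sample(range(p), m)  # each index names one NxN block

rng = random.Random()                       # stand-in random-number generator
print(sorted(random_blocks(6, 6, 8, rng)))  # e.g., 8 of 36 blocks
```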
[0061] In other aspects, when the OLED display illuminates a user’s finger pad (e.g., 602) in high-brightness mode, the user device 700 may use tessellation in two dimensions (2D), sometimes referred to in mathematics as “planar tiling,” to select some but not all of the tiles, thereby spreading the light exposure of the OLED display’s organic material over the lifetime of the user device 700. Note that tessellation in 2D is a geometric concept that deals with the arrangement of tiles to fill a 2D plane without any gaps, given a set of rules. The user device 700 may employ periodic tiling or non-periodic tiling to illuminate patterns made up of blocks (e.g., block 720) inside the touch area 712. Some simple examples of tiling are triangular tiling, checkered tiling, hexagonal tiling, topological square tiling, floret pentagonal tiling, and so forth. Also, the user device 700 may employ tessellation of the touch area 712 within the OLED display in combination with the random-number generator.
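As one hedged illustration of periodic tiling, a checkered pattern over the block grid can be computed as follows; the alternating phase, which spreads light exposure across successive authentications, is an assumption rather than a requirement of the disclosure:

```python
# A minimal sketch of checkered (checkerboard) tiling over the block grid;
# alternating the phase between authentications spreads OLED light exposure.

def checkered_blocks(grid_w: int, grid_h: int, phase: int) -> list:
    """Return row-major block indices whose checkerboard parity matches
    phase (0 or 1)."""
    return [row * grid_w + col
            for row in range(grid_h)
            for col in range(grid_w)
            if (row + col) % 2 == phase]

print(len(checkered_blocks(6, 6, 0)))  # 18 of 36 blocks on even parity
```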
[0062] Thus, the user device 700, which implements reduced-illumination patterns (instead of illuminating the whole touch area 712) for capturing and matching the verify image 710, can successfully authenticate the user without increasing hardware complexity, while reducing latency, using fewer computational resources, using less power (preserving the battery charge), and preserving the quality of the OLED display.
[0063] FIG. 8 illustrates another example environment 800 of a user device (e.g., a smartphone) that dynamically illuminates, captures, transfers, and matches (verifies) an image of a fingerprint (a verify image) to the enrolled template 110 (not illustrated in FIG. 8). Similar to FIG. 7, the user device in the example environment 800 has an active area of the UDFPS 808 (active area 808) inside the display screen of the user device. Prior to or during a user touch, the user device in the example environment 800 determines the center location of the user’s finger pad (in FIG. 8, finger pad center location 814) and determines a touch area 812 based on the finger pad center location 814. Instead of concurrently illuminating M out of P blocks of the touch area 812, the user device in the example environment 800 illuminates one block out of the M blocks at a time. For example, at a first time interval t1, the user device illuminates block 820, captures the block 820, transfers the block’s image data (data transfer 830) for processing (e.g., at computer processors 202), and matches (match 850) a transferred block 840 to a respective block (not illustrated) of the enrolled template 110. If the block 820 matches the respective block of the enrolled template 110, the user device illuminates another block 822 at a second time interval t2. Similarly, the user device captures the block 822, transfers the block’s image data (data transfer 832) for processing, and matches (match 852) a transferred block 842 to a respective block of the enrolled template 110. If the block 822 matches the respective block of the enrolled template 110, the user device dynamically and progressively illuminates more blocks up to an N-th (last) time interval tN, as is illustrated by block 824, data transfer 834, transferred block 844, and match 854. Finally, the user device in FIG. 8 determines whether to grant or deny access to the user based on the overall score S, as described in FIG. 3. Similar to the description of FIG. 7, the user device in FIG. 8 can illuminate blocks in a random pattern, utilizing tessellation in 2D, or utilizing a combination of randomization and tessellation.
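The block-at-a-time flow of FIG. 8 can be summarized with the following Python sketch; illuminate_and_capture() and match_block() are hypothetical stand-ins for the display/sensor path and the block matcher, and the thresholds and the averaging rule for the overall score S are illustrative assumptions:

```python
# A minimal sketch of dynamic, progressive authentication: one block per
# time interval, aborting early when a block clearly does not match.

def dynamic_authenticate(block_ids, illuminate_and_capture, match_block,
                         per_block_threshold=0.6,
                         overall_threshold=0.8) -> bool:
    scores = []
    for block_id in block_ids:           # time intervals t1, t2, ..., tN
        image = illuminate_and_capture(block_id)   # illuminate + capture
        score = match_block(block_id, image)       # compare to template
        if score < per_block_threshold:  # e.g., pocket lining or a palm
            return False                 # abort to save power and OLED wear
        scores.append(score)
    overall = sum(scores) / len(scores)  # overall score S (simplified)
    return overall >= overall_threshold

# Example with stubs standing in for the hardware path:
print(dynamic_authenticate([20, 22, 24],
                           illuminate_and_capture=lambda b: None,
                           match_block=lambda b, img: 0.9))  # True
```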
[0064] In addition to capturing and matching the illustrated illuminated blocks 820, 822, and 824 in FIG. 8, the fingerprint identification system 104 can capture and match blocks adjacent to the blocks 820, 822, and 824 without illuminating those adjacent blocks. Even though the adjacent blocks may receive a lower intensity of light reflected off the user’s finger pad, such data can still be useful to increase the amount of data being evaluated for authentication. Capturing the images of blocks adjacent to the illuminated blocks 820, 822, and 824 is another way to reduce damage caused to the OLED display.
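A short sketch of locating those adjacent blocks on the block grid follows; the row-major indexing is an assumption for illustration:

```python
# A minimal sketch: the (up to eight) grid neighbors of an illuminated
# block, whose dimmer reflections can be captured without illumination.

def adjacent_blocks(block_id: int, grid_w: int, grid_h: int) -> list:
    row, col = divmod(block_id, grid_w)
    return [r * grid_w + c
            for r in range(max(row - 1, 0), min(row + 2, grid_h))
            for c in range(max(col - 1, 0), min(col + 2, grid_w))
            if (r, c) != (row, col)]

print(adjacent_blocks(20, 6, 6))  # [13, 14, 15, 19, 21, 25, 26, 27]
```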
[0065] In addition to the advantages described in FIGs. 1 through 7, the approach in the example environment 800 allows the user device to abort the authentication process at any time the user device determines that the authorized user is not trying to gain access to the user device or any of the biometric-protected applications. For example, assume the authorized user inserts their smartphone in their pocket. As the pocket lining touches the active area 808, the user device may illuminate only a fraction of the M blocks (e.g., one or two blocks) to determine that the authorized user is not trying to gain access to the user device. As another example, the authorized user may hold their smartphone in their hand. As the user’s palm touches the active area 808, the user device can determine within one, two, or more time intervals that the user is not presenting their finger for authentication, enabling the user device to preserve power and preserve the OLED display by illuminating a minimum count of pixels.
[0066] FIG. 9 illustrates another example environment 900 of a user device (e.g., a smartphone) that dynamically illuminates, captures, transfers, and matches (verifies) a plurality of fingerprint images (a plurality of verify images) to a plurality of enrolled templates 110 (not illustrated in FIG. 9). The example user device in FIG. 9 operates similarly to the example user device in FIG. 8. In addition to the capabilities of the example user device in FIG. 8, the example user device in FIG. 9 can dynamically authenticate a plurality of fingerprints, such as two, three, four, five, or even ten fingers of the user. Similar to the description of FIG. 7, the user device in FIG. 9 can illuminate blocks in a random pattern, utilizing tessellation in 2D, or utilizing a combination of randomization and tessellation.
[0067] For simplicity, assume the user device authenticates two fingers of the user. Prior to or during a user touch, the user device in the example environment 900 determines the center location of the user’s finger pad of a first finger (in FIG. 9, center location of first finger 914-1) and determines a touch area 912-1 based on the center location of the first finger 914-1. Similarly, the user device in the example environment 900 determines a touch area 912-2 based on a center location of a second finger 914-2. At time interval t1, the user device illuminates, captures, transfers, and matches blocks 920-1 and 920-2 to respective blocks in the enrolled template(s) 110. If the matching of blocks 920-1 and 920-2 is successful, the user device illuminates another block 922-1 from the touch area 912-1 and another block 922-2 from the touch area 912-2. If the user device successfully matches the blocks 922-1 and 922-2, the user device dynamically and progressively illuminates more blocks up to an N-th (last) time interval tN, as is illustrated by blocks 924-1 and 924-2.
[0068] The user may choose to authenticate the plurality of fingers dynamically but one finger at a time. The user may instead request to authenticate the plurality of fingers dynamically and all fingers at the same time. Also, similar to the example in FIG. 8, the user device in the example environment 900 can abort the authentication process at any time the user device determines that the authorized user is not trying to gain access to the user device or any of the biometric-protected applications, enabling the user device to preserve power and preserve the OLED display by illuminating a minimum count of pixels.
Example Methods
[0069] FIG. 10-1 illustrates an example logic-flow of the capturing module 304 of the fingerprint identification system 104-1, and FIG. 10-2 illustrates an example logic-flow of the matching module 306 of the fingerprint identification system 104-1.
[0070] The logical operations of the capturing module 304 are organized in stages 1000 through 1012. As illustrated in FIG. 10-1, at stage 1000, the capturing module 304 of the fingerprint identification system 104-1 receives an indication of a user touch (e.g., of a user’s finger pad), prior to or during the user touch. At stage 1002, utilizing randomization, tessellation in 2D, or a combination of randomization and tessellation, the fingerprint identification system 104-1 emits radiation (e.g., illuminates visible light using an OLED display) in only M out of P blocks. At stage 1004, the sensor 108 captures reflected images off only the M blocks (including the block 404) out of the P blocks. Note that the P blocks represent the entire user input, whereas the capturing module 304 captures only a portion of the entire user input, M out of P blocks, where M is greater than or equal to one (1) block.
[0071] At stage 1006, the capturing module 304 runs the blocks of the user input through post-processing, where the images of the M blocks are enhanced for the subsequent stage 1008, where the capturing module 304 computes an individual matching score R for each of the captured M blocks to be compared to the corresponding M’ blocks of the enrolled template 110. At stage 1012, the capturing module 304 outputs the matching scores R for the M blocks to be used by the matching module 306 for fingerprint identification. At stage 1010, the capturing module 304 determines whether there are still more blocks to be captured, such as in the case of dynamic illuminating, capturing, and matching, as is illustrated in FIGs. 8 and 9. If so, the fingerprint identification system 104-1 with the capturing module 304 repeats stages 1002 through 1008.
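The disclosure does not prescribe a particular scoring function for R; as one hedged illustration, a zero-mean normalized cross-correlation between a captured block and its enrolled counterpart could serve:

```python
import math

# A minimal sketch of an individual matching score R for one block, using
# zero-mean normalized cross-correlation as an assumed similarity measure.

def block_score(captured: list, enrolled: list) -> float:
    """Similarity in [-1, 1] between two equal-length pixel lists."""
    n = len(captured)
    mc, me = sum(captured) / n, sum(enrolled) / n
    num = sum((c - mc) * (e - me) for c, e in zip(captured, enrolled))
    den = math.sqrt(sum((c - mc) ** 2 for c in captured) *
                    sum((e - me) ** 2 for e in enrolled))
    return num / den if den else 0.0

print(round(block_score([10, 50, 90, 130], [12, 48, 92, 128]), 3))  # ~0.999
```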
[0072] Turning to FIG. 10-2, the matching module 306 may perform the logical operations of stages 1020 through 1030. At stage 1020, the matching module 306 receives the output from the capturing module 304 and extracts Tx, Ty, θ, and the matching score R from each of the captured blocks.
[0073] The matching module 306 extracts translation vectors Tx and Ty in both the x and y directions for the blocks. The matching module 306 also extracts a rotational vector φ based on a calculated rotation angle θ between the M blocks and the matching M’ blocks of the enrolled template 110. The matching module 306 retains the Tx, Ty, θ, and the matching score R from each of the M blocks in the ranked table 408 (see FIG. 4). The matching module 306 sorts the translation vectors in the ranked table 408 based on matching scores R, and groups multiple matching blocks with the closest rotation and translation vectors into bins.
[0074] At stage 1022, the matching module 306 determines a confidence C of the matching scores R. The rotation and translation vector candidates [Tx, Ty, φ] are subjected to a random sample consensus (RANSAC) voting process to determine a correlation/matching score between the matching blocks. The higher the number of votes, the greater the correlation/matching score, and the greater the confidence C. The matching module 306 sorts the translation vectors using the correlation/matching scores within the ranked table 408. The matching module 306 groups multiple matching blocks with the closest rotation and translation vectors into bins of Q blocks.
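The following Python sketch illustrates the binning-and-voting idea in a simplified, RANSAC-like form; the tolerances and the greedy binning strategy are assumptions and not the disclosure’s exact procedure:

```python
# A minimal sketch: group per-block transform candidates [Tx, Ty, phi] into
# bins of mutually consistent transforms; a bin's size is its vote count.

def bin_transforms(candidates, tol_t=3.0, tol_phi=2.0):
    """candidates: list of (tx, ty, phi, score) tuples. Returns bins
    sorted by vote count, highest first."""
    bins = []
    for cand in candidates:
        for b in bins:
            tx, ty, phi, _ = b[0]       # compare against bin representative
            if (abs(cand[0] - tx) <= tol_t and abs(cand[1] - ty) <= tol_t
                    and abs(cand[2] - phi) <= tol_phi):
                b.append(cand)
                break
        else:
            bins.append([cand])          # start a new bin
    return sorted(bins, key=len, reverse=True)

cands = [(5, -2, 1.0, 0.9), (6, -1, 1.2, 0.8), (40, 17, 9.0, 0.4)]
best = bin_transforms(cands)[0]
print(len(best), "votes near transform", best[0][:3])  # 2 votes near (5, -2, 1.0)
```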
[0075] The matching module 306 selects the bins of the Q blocks with the highest matching scores R at stage 1024. For example, the matching module 306 may retain only the top-ten R scores. At stage 1026, the matching module 306 discards the bins of Q blocks with matching scores or confidences that do not satisfy a confidence threshold. For example, the matching module 306 may remove all R scores that are lower than the top-ten R scores. At stage 1028, the matching module 306 computes a composite score S and confidence C for the verify image, based on the scores R and confidences of the Q blocks in the highest-ranking bin. The matching module 306 selects the bin from the ranked table 408 with the highest quantity of matching Q blocks and extracts a final translation and rotation vector [Tx, Ty, φ] for the verify image, which is calculated as the average of the rotation and translation vectors of all the matching Q blocks within the bin.
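A short sketch of stage 1028 follows; combining the winning bin by simple averaging is an assumption chosen for illustration:

```python
# A minimal sketch: composite score S and final [Tx, Ty, phi] computed as
# averages over the matching blocks in the highest-ranking bin.

def composite(best_bin):
    """best_bin: list of (tx, ty, phi, score) from the winning bin."""
    q = len(best_bin)
    s = sum(c[3] for c in best_bin) / q                      # composite S
    final = tuple(sum(c[i] for c in best_bin) / q for i in range(3))
    return s, final                                          # [Tx, Ty, phi]

s, transform = composite([(5, -2, 1.0, 0.9), (6, -1, 1.2, 0.8)])
print(round(s, 2), transform)  # 0.85 (5.5, -1.5, 1.1)
```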
[0076] After stage 1028, the matching module 306 returns to stage 1020 unless the confidence of the matching Q blocks within the bin satisfies a confidence threshold. At stage 1030, the matching module 306 outputs a successful authentication if the total quantity of votes in the highest-scoring bin is greater than a threshold, granting access to the secured data 218.
[0077] FIG. 11 illustrates an example method 1100 performed by the user device, which authenticates user inputs by implementing reduced-illumination patterns for illuminating, capturing, and matching a fingerprint or a plurality of fingerprints. FIG. 11 is described in the context of FIG. 1 and user device 100. The operations performed in the example method 1100 may be performed in a different order or with additional or fewer steps than are shown in FIG. 11.
[0078] At stage 1102, the user device 100 determines, based on a location of a user touch to a display (e.g., display screen 106) of a display system, small regions (M blocks) of the display. The small regions (M blocks) are within a touch area (P blocks) of the display over which the user touch is superimposed. Thus, the small regions represent a reduced area relative to all of the touch area (M is less than P).
[0079] At stage 1104, the user device 100 illuminates with radiation each of the small regions (M blocks) of the display (e.g., an OLED display). The illumination causes the radiation to reflect off a user’s skin touching the touch area; refer to FIG. 6 as one aspect.
[0080] At stage 1106, the user device 100 captures images at a sensor (e.g., sensor 108). The sensor 108 is configured to receive the radiation reflected off the user’s skin touching the touch area at one or more of the M blocks. The captured images correspond to the M blocks illuminated at stage 1104.
[0081] At stage 1108, the user device 100 compares the captured image(s) of the M block(s) (the verify image) to the corresponding M’ block(s) of the enrolled template 110. Specifically, the user device 100 performs block-by-block comparison.
[0082] At stage 1110, the fingerprint identification system 104 of the user device 100 determines whether the M blocks being evaluated are the last M blocks needing evaluation. The fingerprint identification system 104 of the user device 100 may illuminate, capture, and match additional M blocks if the user device is operating in a dynamic mode (refer to FIGs. 8 and 9), if one or more of the blocks does not match the corresponding M’ block of the enrolled template 110, if the user has set a higher-than-normal security requirement for a specific application, or for other reasons. If the user device 100 determines that additional M blocks need evaluating, the user device 100 repeats stages 1102 through 1108. If the user device 100 determines that no additional M blocks are needed to make a determination on user authenticity, the user device 100 moves to stage 1112.
[0083] At stage 1112, the user device 100 authenticates the user touch (user input) by performing block-by-block comparison, as described in FIGs. 1, 4, 10-1, and 10-2. At this stage 1112, the fingerprint identification system 104 of the user device 100 requires that the confidence C meet a pre-determined threshold level. If the confidence C does not meet the pre-determined threshold level, the fingerprint identification system 104 of the user device 100 issues a deny-access 1114 verdict and, possibly, a message on the display screen 106 stating that access is denied. On the other hand, if the confidence C meets the pre-determined threshold level, the fingerprint identification system 104 of the user device 100 issues a grant-access 1116 verdict, granting access to the user device 100 or peripherals (e.g., application 102) associated with the user device 100.
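The overall decision loop of method 1100 can be outlined in Python as follows; select_blocks() and evaluate_blocks() are hypothetical stand-ins, and the threshold and round limit are illustrative assumptions:

```python
# A minimal sketch of method 1100's outer loop: evaluate batches of M
# blocks until the confidence C satisfies the threshold or rounds run out.

def method_1100(select_blocks, evaluate_blocks,
                confidence_threshold=0.9, max_rounds=4) -> str:
    for _ in range(max_rounds):               # stages 1102-1110
        blocks = select_blocks()              # choose M blocks this round
        confidence = evaluate_blocks(blocks)  # compare to enrolled template
        if confidence >= confidence_threshold:
            return "grant access"             # stage 1116
    return "deny access"                      # stage 1114

# Example with stubs:
print(method_1100(lambda: [20, 22, 24], lambda blocks: 0.95))  # grant access
```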
[0084] FIG. 12 illustrates examples of patterns (1202, 1204, and 1206) and minutiae (1210 through 1230) used in matching fingerprints. The analysis of fingerprints for matching purposes generally requires the comparison of patterns and/or minutiae. The three main patterns of fingerprint ridges are an arch 1202, a loop 1204, and a whorl 1206. The arch 1202 is a fingerprint ridge that enters from one side of the finger, rises in the center forming an arc, and then exits the other side of the finger. The loop 1204 is a fingerprint ridge that enters from one side of the finger, forms a curve, and then exits on that same side of the finger. The whorl 1206 is a fingerprint ridge that is circular around a central point. The minutiae 1210 through 1230 are features of fingerprint ridges, such as ridge ending 1210, bifurcation 1212, short ridge 1214, dot 1216, bridge 1218, break 1220, spur 1222, island 1224, double bifurcation 1226, delta 1228, trifurcation 1230, lake or ridge enclosure (not illustrated), core (not illustrated), and so forth.
[0085] The following are additional examples of the described systems and techniques for capturing and matching a fingerprint using reduced-illumination patterns.
Example 1: A computer-implemented method comprising: determining, based on a location of a user touch to a display of a display system, small regions of the display, the
small regions within a touch area of the display over which the user touch is superimposed, the small regions representing a reduced area relative to all of the touch area; illuminating with radiation, each of the small regions of the display, the illumination effective to cause the radiation to reflect off a user’s skin touching the touch area; capturing images at a sensor, the sensor configured to receive the radiation reflected off the user’s skin touching the touch area at one or more of the small regions, the images including one or more images corresponding to the one or more of the small regions, respectively; comparing the one or more images to an enrolled template, the enrolled template associated with a fingerprint of a verified user; authenticating the user touch to the display based on the comparing of the one or more images to the enrolled template; and responsive to authenticating the user touch, enabling use of a function or peripheral.
Example 2: The method of example 1, wherein: the sensor of the display system includes one or more under-display fingerprint sensors; and the display of the display system is an organic light-emitting diode, OLED, display capable of operating in high-brightness mode, and wherein the illuminating uses the high-brightness mode.
Example 3: The method of examples 1 or 2, wherein the sensor includes a complementary metal-oxide-semiconductor, CMOS, image sensor.
Example 4: The computer-implemented method of any of examples 1 to 3, wherein the images corresponding to the small regions include captured blocks, and the small regions of which the images are captured are part of a tessellation in two dimensions of the touch area.
Example 5: The computer-implemented method of any of examples 1 to 4, further comprising determining the location of the user touch.
Example 6: The computer-implemented method of example 5, further comprising determining the location prior to the user touch, and wherein determining the small regions of the display is performed prior to, or after, the user touch.
Example 7: The computer-implemented method of example 6, wherein determining the location prior to the user touch is performed using radar.
Example 8: The computer-implemented method of any of examples 1 to 7, wherein the small regions of the display are determined using a randomness function, the randomness function effective to reduce repetitive use of portions of the display.
Example 9: The computer-implemented method of any of examples 1 to 8, wherein illuminating with the radiation emits visible light, the visible light emitted by thin film transistors in conjunction with red-green-blue, RGB, light-emitting elements of the organic light-emitting diode, OLED, display.
Example 10: The computer-implemented method of any of examples 1 to 9, wherein the radiation passes through a glass layer of the display of the display system and the radiation then reflects off the user’s skin touching the touch area, after which the reflected radiation passes back through the glass layer to the sensor.
Example 11: The computer-implemented method of any of examples 1 to 10, wherein the one or more images is a plurality of images, the illuminating is performed in series to capture the plurality of images, the capturing the plurality of images is performed in series, and the comparing the plurality of images to the enrolled template is performed in series.
Example 12: The computer-implemented method of any of examples 1 to 10, wherein the one or more images is a plurality of images, the illuminating is performed in parallel to
capture the plurality of images, the capturing the plurality of images is performed in parallel, and the comparing the plurality of images to the enrolled template is performed in parallel.
Example 13: The computer-implemented method of any of examples 1 to 12, wherein the enrolled template includes vector-based templates, and wherein comparing the one or more images compares a vector conversion of the one or more images to the vector-based templates.
Example 14: The computer-implemented method of any of examples 1 to 13, wherein the enrolled template includes multiple blocks, the multiple blocks arranged to create the same size as the small regions, and wherein comparing the one or more images to the enrolled template compares each of the one or more images to the multiple blocks to determine a confidence level for each of the one or more images, and wherein authenticating the user touch is performed responsive to a confidence threshold being met by the determined confidence level.
Example 15: The computer-implemented method of example 14, wherein the multiple blocks are: overlapping; non-overlapping and apart, with a sliding distance of more than one pixel between the blocks; or adjacent, with a sliding distance of zero or one pixel between the blocks.
Example 16: The computer-implemented method of any of examples 1 to 15, wherein enabling use of the function or peripheral comprises unlocking a user device with which the display system is associated.
Example 17: The computer-implemented method of any of examples 1 to 16, wherein the user touch includes two or more touches from two or more fingertips and the small regions include at least one small region for each of the two or more fingertips.
Example 18: A user device comprising: a display of a display system; a sensor; one or more processors; and one or more computer-readable media having instructions thereon that, responsive to execution by the one or more processors, perform the operations of the method of any of examples 1 to 17.
Conclusion
[0086] While various embodiments of the disclosure are described in the foregoing description and shown in the drawings, it is to be understood that this disclosure is not limited thereto but may be variously embodied to practice within the scope of the following claims. From the foregoing description, it will be apparent that various changes may be made without departing from the spirit and scope of the disclosure as defined by the following claims.
Claims
1. A computer-implemented method comprising: determining, based on a location of a user touch to a display of a display system, small regions of the display, the small regions within a touch area of the display over which the user touch is superimposed, the small regions representing a reduced area relative to all of the touch area; illuminating with radiation, each of the small regions of the display, the illumination effective to cause the radiation to reflect off a user’s skin touching the touch area; capturing images at a sensor, the sensor configured to receive the radiation reflected off the user’s skin touching the touch area at one or more of the small regions, the images including one or more images corresponding to the one or more of the small regions, respectively; comparing the one or more images to an enrolled template, the enrolled template associated with a fingerprint of a verified user; authenticating the user touch to the display based on the comparing of the one or more images to the enrolled template; and responsive to authenticating the user touch, enabling use of a function or peripheral.
2. The method of claim 1, wherein: the sensor of the display system includes one or more under-display fingerprint sensors; and the display of the display system is an organic light-emitting diode, OLED, display capable of operating in high-brightness mode, and wherein the illuminating uses the high-brightness mode.
3. The method of claims 1 or 2, wherein the sensor includes a complementary metal- oxide-semiconductor, CMOS, image sensor.
4. The computer-implemented method of any of claims 1 to 3, wherein the images corresponding to the small regions include captured blocks, and the small regions of which the images are captured are part of a tessellation in two dimensions of the touch area.
5. The computer-implemented method of any of claims 1 to 4, further comprising determining the location of the user touch.
6. The computer-implemented method of claim 5, further comprising determining the location prior to the user touch, and wherein determining the small regions of the display is performed prior to, or after, the user touch.
7. The computer-implemented method of claim 6, wherein determining the location prior to the user touch is performed using radar.
8. The computer-implemented method of any of claims 1 to 7, wherein the small regions of the display are determined using a randomness function, the randomness function effective to reduce repetitive use of portions of the display.
9. The computer-implemented method of any of claims 1 to 8, wherein illuminating with the radiation emits visible light, the visible light emitted by thin film transistors in conjunction with red-green-blue, RGB, light-emitting elements of the organic light-emitting diode, OLED, display.
10. The computer-implemented method of any of claims 1 to 9, wherein the radiation passes through a glass layer of the display of the display system and the radiation then reflects off the user’s skin touching the touch area, after which the reflected radiation passes back through the glass layer to the sensor.
11. The computer-implemented method of any of claims 1 to 10, wherein the one or more images is a plurality of images, the illuminating is performed in series to capture the plurality of images, the capturing the plurality of images is performed in series, and the comparing the plurality of images to the enrolled template is performed in series.
12. The computer-implemented method of any of claims 1 to 10, wherein the one or more images is a plurality of images, the illuminating is performed in parallel to capture the plurality of images, the capturing the plurality of images is performed in parallel, and the comparing the plurality of images to the enrolled template is performed in parallel.
13. The computer-implemented method of any of claims 1 to 12, wherein the enrolled template includes vector-based templates, and wherein comparing the one or more images compares a vector conversion of the one or more images to the vector-based templates.
14. The computer-implemented method of any of claims 1 to 13, wherein the enrolled template includes multiple blocks, the multiple blocks arranged to create the same size as the small regions, and wherein comparing the one or more images to the enrolled template compares each of the one or more images to the multiple blocks to determine a confidence level for each of the one or more images, and wherein authenticating the user touch is performed responsive to a confidence threshold being met by the determined confidence level.
15. The computer-implemented method of claim 14, wherein the multiple blocks are: overlapping; non-overlapping and apart, with a sliding distance of more than one pixel between the blocks; or adjacent, with a sliding distance of zero or one pixel between the blocks.
16. The computer-implemented method of any of claims 1 to 15, wherein enabling use of the function or peripheral comprises unlocking a user device with which the display system is associated.
17. The computer-implemented method of any of claims 1 to 16, wherein the user touch includes two or more touches from two or more fingertips and the small regions include at least one small region for each of the two or more fingertips.
18. A user device comprising: a display of a display system; a sensor; one or more processors; and one or more computer-readable media having instructions thereon that, responsive to execution by the one or more processors, perform the operations of the method of any of claims 1 to 17.