
EP4042267A1 - Techniques for fingerprint detection and user authentication - Google Patents

Techniques for fingerprint detection and user authentication

Info

Publication number
EP4042267A1
Authority
EP
European Patent Office
Prior art keywords
user
fingerprint
finger
array
touch sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP20874559.6A
Other languages
German (de)
English (en)
Inventor
John T. Apostolos
William Mouyos
James D. Logan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AMI Research and Development LLC
Original Assignee
AMI Research and Development LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US16/595,017 (US11048786B2)
Application filed by AMI Research and Development LLC filed Critical AMI Research and Development LLC
Publication of EP4042267A1 (fr)

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/044 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means

Definitions

  • This patent application relates to touchscreens and, more particularly, to techniques for fingerprint detection and user authentication.
  • a typical touchscreen uses a projected capacitive grid structure where every electrode intersection can unambiguously be identified as a touch point or “cell”. As the user's finger slides up and down along the grid, the ridges and valleys of the finger also move across a small subset of these touch points.
  • - representing a fingerprint by prompting a user to swipe a finger along two or more paths on a sparse 2D touch array, detecting two or more one-dimensional (1D) time-varying signals representative of ridge and valley detail from the sparse touch array for each swipe, and fusing the two or more 1D signals together; - authenticating a user of a second device having a sparse 1D touch sensor by detecting fingerprint information from the second device and forwarding the data for matching to a cloud processor or back to a first device with the sparse 2D touch array;
  • 1D: one-dimensional
  • detailed fingerprint information is obtained by operating a touch sensitive array in two modes.
  • in a first mode, the touch sensitive elements of the array are scanned at a first rate to provide outputs sufficient to detect a position of the finger.
  • in a second mode, the touch sensitive elements of the array are scanned at a second rate higher than the first rate.
  • the second mode provides outputs from the array comprising a time-varying sequence of digital amplitude values that, over time, are representative of the ridge and valley detail of a rectangular portion of the fingerprint that passes adjacent the corresponding one of the touchscreen array elements as the person uses the touchscreen.
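  • As an illustration of the two scanning modes just described, the following is a minimal Python sketch; the DualModeScanner class, the read_crosspoint callback, the 21 x 21 size and the sample counts are assumptions for illustration, not part of this disclosure. A slow scan of every crosspoint locates the finger, and a fast scan of the crosspoint nearest the detected centroid captures ridge and valley detail.

      import numpy as np

      class DualModeScanner:
          """Illustrative two-mode scan of a capacitive crosspoint array.
          read_crosspoint(i, j) stands in for the demux/mux/A-D chain and is
          assumed to return a touch-response value (baseline minus measured
          mutual capacitance), so larger means the finger is closer."""

          def __init__(self, read_crosspoint, rows=21, cols=21):
              self.read = read_crosspoint
              self.rows, self.cols = rows, cols

          def scan_position(self):
              # First mode: scan every crosspoint once (slow rate) and return
              # the coordinates of the strongest response, i.e. the finger.
              frame = np.array([[self.read(i, j) for j in range(self.cols)]
                                for i in range(self.rows)])
              return np.unravel_index(np.argmax(frame), frame.shape)

          def scan_ridges(self, centroid, n_samples=1400):
              # Second mode: repeatedly sample only the crosspoint nearest the
              # finger at a high rate; the 1-D sequence tracks ridge/valley
              # detail as the finger moves past that crosspoint.
              i, j = centroid
              return np.array([self.read(i, j) for _ in range(n_samples)])

      # Usage with a stand-in sensor that returns random values.
      rng = np.random.default_rng(0)
      scanner = DualModeScanner(lambda i, j: rng.random())
      centroid = scanner.scan_position()            # first (position) mode
      ridge_signal = scanner.scan_ridges(centroid)  # second (fingerprint) mode
      print(centroid, ridge_signal.shape)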
  • the outputs obtained in the first mode can be used to determine one or more gestures from one or more fingers.
  • Kalman filtering can be used on the outputs of the array provided in the first mode to determine a likely position of the finger on subsequent scans.
  • the array may be a capacitive array comprising a set of parallel transmit electrodes located along a first axis in a first plane, and a set of parallel receive electrodes located along a second axis in a second plane, with the elements located adjacent where the transmit and receive electrodes cross.
  • the array may be a sparse array where a spacing between adjacent elements of the array is at least ten times greater than a ridge and valley spacing of the person’s fingerprint.
  • the outputs of the array obtained in the second mode may be matched against templates to authenticate the user.
  • the templates may each consist of two or more overlapping rectangular sub-templates. The sub-templates can be rotated independently of one another to improve the matching process.
  • Fig. 1 is a block diagram of a representative electronic system.
  • Figs. 2-1, 2-2, 2-3 and 2-4 illustrate a touch sensitive grid, a particular crosspoint, and the resulting signal produced by a ridge-valley detector as the finger moves past one of the crosspoints in the array.
  • Fig. 3-1 and 3-2 illustrate switching between a first and second scanning mode.
  • Fig. 4 illustrates an enrollment phase.
  • Fig. 5 illustrates a single tap fingerprint recognition mode.
  • Fig. 6 shows enrollment on one device and fingerprint detection on a second device.
  • Fig. 7 is a handgun use case.
  • Fig. 8 illustrates overlapping sub-templates.
  • Figs. 9A-9C are a swipe to unlock use case.
  • Fig. 10 illustrates a higher resolution touch sensitive array, and a selected sub-array near a finger.
  • Fig. 11 is a logical flow for producing a set of fingerprint codes.
  • Fig. 12 is a logical flow for capturing a fingerprint image.
  • Example System Fig. 1 is a block diagram of a representative electronic system 100 which may implement the techniques described herein.
  • the system 100 may be a smartphone, tablet, personal computer, automobile dashboard, vending machine, small appliance, hand-held device, or some other system that has a touch sensitive surface 102 that includes a touch array 120. It is now common for the touch array 120 to be an integral part of a display assembly 122.
  • Other components of the system 100 may include a central processing unit 200, memory 210, one or more wireless interfaces 220, other input devices such as buttons 220, and other peripheral devices such as cameras, microphones, speakers and the like 240.
  • the sensor array 120 detects touches of the person’s finger on or near the surface 102.
  • the sensor array includes elements 125 that are disposed as a two-dimensional matrix.
  • each sensor element 125 is also referred to as a “crosspoint” or “cell” herein.
  • the sensor array 120 may not be an actual wire grid but may include capacitive pads that overlap in a diamond pattern, a totem-pole pattern, or other geometric patterns of overlaid conductive elements. What is important is that the array 120 provide a set of intersections or crosspoints arranged along an X and Y axis as a logical grid.
  • Other circuits in the system 100 may include a clock generator 300, signal generator 302, demultiplexer 304, multiplexer 310, demodulator 312, and analog-to-digital converter 320.
  • the clock generator 300, signal generator 302, demux 304 and mux 310 are controlled by the CPU 200 or other controller to scan the outputs of each individual capacitive element 125 of the touch array 120 in a regular repeating pattern.
  • Processing logic may include touch coordinate determination 330, fingerprint ridge-valley detection 340, gesture recognition 350 and other signal processing implemented in hardware or software.
  • the processing logic may provide additional outputs to functions such as user authentication 370 or software applications 380 executed by the CPU 200.
  • the processing logic uses outputs from the touch array 120 in various ways.
  • the touch coordinate signal processing 330 may identify one or more local maxima in the signals provided by the array 120, providing information representing the X-Y coordinates of one or more centroids of the user’s finger.
  • the centroid data may then be further processed over time by gesture recognition 350 and other applications 380.
  • when an object such as a finger approaches the touch array 120, the object causes a decrease in the mutual capacitance between only some of the electrodes in the array 120. For example, when a finger is placed near or on the intersection 125, the presence of the finger will decrease the charge coupling between only a few of the nearby electrodes. Thus, the location of the finger on the touchpad can be determined by identifying the one or more electrodes having a decrease in measured mutual capacitance.
  • the specific crosspoint can be identified by noting the particular transmit electrode to which a signal was applied by the demultiplexer 304 at the time the decrease in capacitance was measured through the multiplexer 310 (demodulator 312) and analog-to-digital converter 320. In this way, the precise locations of one or more conductive objects such as one or more finger centroids may be determined.
  • the output of signal generator 302 is connected with demultiplexer 304, which allows one or more transmit (TX) signals to be applied to any of the M transmit electrodes 305 of touch array 120.
  • demultiplexer 304 is controlled so that a TX signal is applied to each transmit electrode 305 in a controlled sequence.
  • Demultiplexer 304 may also be used to ground, float, or connect an alternate signal to the other transmit electrodes to which the TX signal is not currently being applied. Because of the capacitive coupling between the transmit TX and receive RX electrodes 306, the TX signal applied to each transmit electrode 305 induces a current within each of several receive electrodes 306.
  • the RX signal on each of the receive electrodes 306 can then be measured in sequence by using multiplexer 310 to connect each of the N receive electrodes to demodulation circuit 312 in sequence.
  • the mutual capacitance associated with each intersection between a TX electrode 305 and an RX electrode 306 is sensed by selecting every available combination of TX electrode and RX electrode using demultiplexer 304 and multiplexer 310.
  • multiplexer 310 may also be segmented to allow more than one of the receive electrodes in the matrix to be routed to additional demodulation circuits.
  • when an object, such as a finger, approaches the touch array 120, the object causes a decrease in the measured mutual capacitance between only some of the electrodes. For example, if a finger is placed near the crosspoint 125 of a given one of the transmit electrodes 305 and receive electrodes 306, the presence of the finger will decrease the charge coupled between electrodes 305 and 306.
  • the location of the finger on the touchpad can be determined by identifying the one or more receive electrodes having a decrease in measured mutual capacitance in addition to identifying the transmit electrode to which the TX signal was applied at the time the decrease in capacitance was measured on the one or more receive electrodes.
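  • A minimal sketch of the mutual-capacitance scan just described, assuming hypothetical drive_tx/read_rx callbacks standing in for the demultiplexer 304 and the multiplexer/demodulator/A-D chain; the 4 x 4 demo grid, noise level and threshold are illustrative only.

      import numpy as np

      def scan_mutual_capacitance(drive_tx, read_rx, n_tx, n_rx):
          """Drive each TX electrode in sequence and measure every RX electrode,
          returning an n_tx x n_rx map of measured mutual capacitance."""
          cap = np.zeros((n_tx, n_rx))
          for tx in range(n_tx):
              drive_tx(tx)                   # demultiplexer applies the TX signal
              for rx in range(n_rx):
                  cap[tx, rx] = read_rx(rx)  # multiplexer + demodulator + A/D
          return cap

      def locate_touches(cap, baseline, threshold):
          """A finger reduces mutual capacitance; report crosspoints whose
          measured value dropped by more than `threshold` from the baseline."""
          return np.argwhere((baseline - cap) > threshold)

      # Stand-in demo with a simulated 4 x 4 grid touched at crosspoint (2, 1).
      rng = np.random.default_rng(1)
      baseline = np.full((4, 4), 1.0)
      touched, state = {(2, 1)}, {"tx": 0}

      def drive(tx):
          state["tx"] = tx                   # remember which TX row is driven

      def read(rx):
          drop = 0.3 if (state["tx"], rx) in touched else 0.0
          return 1.0 - drop + rng.normal(0, 0.01)

      cap = scan_mutual_capacitance(drive, read, n_tx=4, n_rx=4)
      print(locate_touches(cap, baseline, threshold=0.1))   # -> [[2 1]]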
  • the presence and locations of one or more conductive objects may be determined. The determination may be sequential, in parallel, or may occur more frequently at commonly used electrodes.
  • a finger or other conductive object may be used where the finger or conductive object causes an increase in measured capacitance at one or more electrodes, which may be arranged in a grid or other pattern.
  • a finger placed near an electrode of a capacitive sensor may introduce an additional capacitance to ground that increases the total capacitance between the electrode and ground.
  • the location of the finger can be determined based on the locations of one or more electrodes at which a change in measured capacitance is detected.
  • the ridge and valley detail of the user’s fingerprint may optionally be detected by processing the output of the A/D converter 320 over time.
  • the output of the A/D 320 is a signal representative of the ridge and valley detail of the user’s fingerprint.
  • Fig. 2-1 shows a sample grid 120 and adjacent fingertip 400.
  • the spacing between the sparse grid intersection points may be 0.25 inches, whereas a typical ridge and valley spacing is much smaller, on the order of 0.5 mm.
  • Fig. 2-2 shows this situation in more detail where a particular crosspoint 420 of the grid is identified as being closest to the centroid 410 of the fingertip 400.
  • a signal such as that shown in Fig. 2-3 is produced by the ridge-valley detector 340 as different capacitance values are presented at different instances in time, depending upon whether a ridge or a valley is immediately adjacent the crosspoint 420.
  • the signal is a sequence of grayscale values as shown in the strip 450; the signal can also be thought of as a one-dimensional time varying signal 460 where amplitude represents changes in the detected ridge-valley depth over time. More details for how to detect the ridge and valley detail of the fingerprint are described in U.S. Patent 9,432,366, issued August 30, 2016, entitled “FINGERPRINT BASED SMARTPHONE USER VERIFICATION”, which is hereby incorporated by reference.
  • 1. Location of fast sample rate crossover points based on detected finger centroid
  • centroid location of the finger as detected by touch coordinate 330 may be used to control the clock generator, signal generator, and A/D converter to determine at which points in time to increase the sample rate of the array.
  • in some implementations, only a single A/D converter 320 is available to sample all of the crossover points 125 in the array 120.
  • the approach here is to operate in a first mode to locate the coordinates of the finger using the touch coordinate processing 330, and then in a second mode to adjust the clock generator controlling the A/D 320 to provide a higher sample rate in an area of the grid 120 closest to where the finger is known to be located.
  • for the rest of the grid, a much slower sampling rate is adequate, as long as it is fast enough to, for example, track the possible movement of a finger centroid to another position on the grid.
  • as shown in Fig. 3-1, we time multiplex the A/D such that for a first, longer period of time (a first mode designated by the letter A) the portions of the grid located away from the last known position of the finger are sampled at a slow rate, and for a second, shorter period of time (a second mode designated by the letter B) the portions of the grid nearest the finger are sampled at least at the desired 14,000 samples per second.
  • the grid 120 may consist of a 21 x 21 array, or a total of 441 crossover points.
  • the A/D 320 can sample at 150,000 samples per second (sps)
  • gesture recognition 350 may only need to sample a smaller window adjacent the finger (say a 9 x 9 subset) of the entire 21 x 21 array.
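  • The time-multiplexing budget implied by these numbers can be checked with a few lines of arithmetic; the assumption that three crosspoints nearest the finger are sampled in mode B follows the slide-to-unlock example later in this description.

      ADC_RATE = 150_000       # total A/D throughput, samples per second
      GRID_POINTS = 21 * 21    # 441 crossover points in the full array
      FAST_POINTS = 3          # crosspoints nearest the finger, sampled in mode B
      FAST_RATE = 14_000       # samples/s per point needed for ridge/valley detail

      fast_budget = FAST_POINTS * FAST_RATE           # 42,000 sps for mode B
      slow_budget = ADC_RATE - fast_budget            # 108,000 sps left for mode A
      slow_rate = slow_budget / (GRID_POINTS - FAST_POINTS)

      print(f"mode B uses {fast_budget} sps, leaving {slow_budget} sps for mode A")
      print(f"each remaining crosspoint still gets about {slow_rate:.0f} sps,")
      print("which is ample for tracking a moving finger centroid")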
  • a filtering algorithm such as a Kalman filter may be applied to a series of detected finger centroid locations.
  • This filtered information may then be used to provide a more accurate prediction of the next most likely position of a moving finger in the middle of a swipe.
  • Amplitude data from two or more crossover points adjacent the current centroid may also be used to improve the estimate.
  • This filtered information may be used to better determine where the closest crossover points are likely to be on the next scan of the array, in turn improving the ability to accurately place the higher sampling rate periods B.
  • the Kalman filtering operation may be implemented in the touch coordinate detector 330.
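  • A minimal constant-velocity Kalman filter of the kind contemplated above might look as follows; the state layout, noise values and scan interval are illustrative assumptions, not values taken from this disclosure.

      import numpy as np

      class CentroidKalman:
          """Constant-velocity Kalman filter over (x, y) centroid measurements."""

          def __init__(self, dt=0.01, meas_noise=0.5, proc_noise=1e-2):
              self.x = np.zeros(4)                       # state: [x, y, vx, vy]
              self.P = np.eye(4)
              self.F = np.eye(4)
              self.F[0, 2] = self.F[1, 3] = dt           # position += velocity * dt
              self.H = np.zeros((2, 4))
              self.H[0, 0] = self.H[1, 1] = 1.0          # only position is measured
              self.R = np.eye(2) * meas_noise
              self.Q = np.eye(4) * proc_noise

          def predict(self):
              self.x = self.F @ self.x
              self.P = self.F @ self.P @ self.F.T + self.Q
              return self.x[:2]                          # predicted (x, y) next scan

          def update(self, z):
              y = np.asarray(z, float) - self.H @ self.x
              S = self.H @ self.P @ self.H.T + self.R
              K = self.P @ self.H.T @ np.linalg.inv(S)
              self.x = self.x + K @ y
              self.P = (np.eye(4) - K @ self.H) @ self.P

      # Feed in detected centroids, then ask where to place the fast-scan window.
      kf = CentroidKalman()
      for measured in [(5.0, 5.0), (5.2, 5.4), (5.4, 5.8)]:
          kf.predict()
          kf.update(measured)
      print(kf.predict())   # likely centroid position on the next scan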
  • Using the resulting fingerprint data for an authentication process 370 also requires obtaining a data set representing one or more authorized users. The data set is then used for matching a currently detected fingerprint against one or more enrolled fingerprints. The aforementioned techniques may also be leveraged during initial enrollment of an authorized user.
  • Fig. 4 shows an example enrollment phase.
  • the process for obtaining a data set representing the enrolled, valid user may involve prompting a series of swipe inputs on the touch array 120. Each swipe results in a one-dimensional (1D) time-varying signal such as the signal 450 shown in Fig. 2-4.
  • the user may provide the series of swipes by following a deliberate pattern (such as the star pattern shown in Fig. 4 or some other predetermined pattern), or the user may be prompted to make a series of freeform swipe inputs anywhere they find comfortable.
  • the time varying grayscale data shown in Fig. 2-4 may be maintained as a time domain signal; however, in other instances this signal may be converted to a frequency domain representation to obtain phase information. It has been found that the phase information is also helpful in improving the fingerprint recognition process.
  • as the one-dimensional time varying signal is obtained for each finger swipe, it may be desirable to not only record the amplitude versus time information as per Fig. 2-3 but also record how the corresponding X-Y finger centroid changed as the user moved their finger.
  • the result is at least a two-dimensional set of data including time and amplitude information for a set of swipes; or multidimensional information, if the X-Y finger centroid information is also recorded.
  • the set of swipe data are fused to form an enrolled data set.
  • the data fusing may use a number of techniques, such as a least square two-dimensional polynomial fit, or a sinusoidal fit.
  • for amplitude versus time swipe data, this results in a three-dimensional surface that is representative of the enrolled finger.
  • Other multidimensional curve fitting algorithms may be used when the finger swipe data includes the X-Y centroids, with the fusion resulting in a multi-dimensional manifold that is representational of the fingerprint. This fused data will not be a visual or photographic image of the fingerprint but will be representative of the fingerprint detail.
  • This fused data can then be used as the enrolled template against which the detected fingerprint data is matched during user authentication 370.
  • the authentication process 370 may be a “deep” learning process that continues to improve the representational surface / manifold with each subsequently detected swipe from a user who is already known to be authorized. For example, as the user engages in other operations with the device (such as interacting with an application 380 with finger swipes on the touch array) the new fingerprint data can be detected and fused with the existing data set.
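  • One way the least-squares fusion of swipes into a representational surface could be sketched is shown below; the polynomial degree, the data layout and the synthetic demo values are assumptions for illustration only.

      import numpy as np

      def fit_fingerprint_surface(swipes, deg=4):
          """Fuse several swipes into one least-squares 2-D polynomial surface
          z = f(x, y). Each swipe is a dict with arrays 'x', 'y' (centroid path)
          and 'amp' (ridge/valley amplitude sampled along that path)."""
          x = np.concatenate([s["x"] for s in swipes])
          y = np.concatenate([s["y"] for s in swipes])
          z = np.concatenate([s["amp"] for s in swipes])
          terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
          A = np.column_stack([x**i * y**j for i, j in terms])   # design matrix
          coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)

          def surface(xq, yq):
              return sum(c * xq**i * yq**j for c, (i, j) in zip(coeffs, terms))
          return surface

      # Tiny demo: three synthetic swipes crossing the same region of the finger.
      rng = np.random.default_rng(2)
      t = np.linspace(0.0, 1.0, 200)
      paths = [(10 * t, np.full(t.size, 5.0)),    # horizontal swipe
               (np.full(t.size, 5.0), 10 * t),    # vertical swipe
               (10 * t, 10 * t)]                  # diagonal swipe
      swipes = [{"x": x, "y": y,
                 "amp": 0.05 * x * y + 0.2 * x + rng.normal(0, 0.01, t.size)}
                for x, y in paths]
      surface = fit_fingerprint_surface(swipes)
      print(surface(5.0, 5.0))    # close to 0.05*25 + 0.2*5 = 2.25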
  • fingerprint information may also be obtained from a low-density sensor grid 120 without relying on finger motion to develop time-varying 1D signals.
  • a user may simply tap the touchscreen and this event is still detected at one or more crossover points 502, 503, 504 closest to a centroid 501.
  • the three grayscale values resulting from this single tap plus the centroid X-Y information may then be used to provide at least a rough match via self-correlation against a previously enrolled data set. While detecting the 3 closest crossovers (or even the 9 closest crossovers) from a single tap is a very sparse sample of the fingerprint data, it may be sufficient to provide accurate authentication in some instances.
  • Self-correlating with a series of detected taps and associated centroids, collected as the user taps two or more places on the touchscreen while interacting with it, may provide further information for an improved user authentication 370.
  • 5. Enrollment on touchscreen; detection on IoT device
  • Fig. 6 illustrates a use case for the above fingerprint detection methods and a unique authentication process.
  • An authorized user first enrolls their fingerprint using a touchscreen device 100 as has been described above, using a tablet, smartphone, personal computer or other device which has a fully operational two-dimensional touchscreen. This full registration process results in a three-dimensional surface (taken from the set of one-dimensional time varying signals 450) or even a multidimensional manifold (with additional information such as the X-Y centroids) as has been described above.
  • Once the user’s finger data is enrolled it is then stored in a non-volatile memory in the device 100 itself or may be transmitted to cloud processor 650. This data is then subsequently made available to a simpler device 600 via a Bluetooth pairing, or other wireless connection to device 100, or via another wired or wireless connection from the simpler device 600 to the cloud processor 650.
  • the device 600 (which may be a so-called “Internet of Things (IoT) device”) may serve a very simple function such as a light switch or a door lock. But even this simple device can benefit by ensuring that only an authorized user can manipulate it.
  • the IoT device may only have a one-dimensional, single sparse row 630 of capacitive touch sensors spaced for example 0.25 inches apart. This low-cost sparse 1-D sensor is then used to collect at least some information from the user's finger either from a swipe of the user’s finger or even a static tap. In any event, the information collected from the sensor 630 in the IoT device 600 is forwarded to either the cloud processor 650 or the device 100 for self-correlation against the enrolled fingerprint data.
  • IoT: Internet of Things
  • an inexpensive IoT device such as an outdoor light switch may still take advantage of the authentication processes described above to prevent unauthorized tampering.
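  • The forwarding step of Fig. 6 might be sketched as follows; the JSON-over-HTTP transport, the endpoint URL, the element count and the callback names are all assumptions, since this description does not fix a particular protocol.

      import json
      import urllib.request

      CLOUD_MATCH_URL = "https://example.invalid/fingerprint/match"  # hypothetical

      def capture_sparse_swipe(read_sensor, n_elements=8, n_samples=2000):
          """Read every element of the sparse 1-D sensor 630 repeatedly while the
          user swipes or taps, producing one short time series per element."""
          return [[read_sensor(i) for _ in range(n_samples)]
                  for i in range(n_elements)]

      def request_match(device_id, samples):
          """Forward the raw samples for correlation against the enrolled data
          set held by the cloud processor 650 (or by device 100)."""
          body = json.dumps({"device": device_id, "samples": samples}).encode()
          req = urllib.request.Request(CLOUD_MATCH_URL, data=body,
                                       headers={"Content-Type": "application/json"})
          with urllib.request.urlopen(req, timeout=5) as resp:
              return json.load(resp).get("authorized", False)

      def on_user_touch(read_sensor, actuate):
          samples = capture_sparse_swipe(read_sensor)
          if request_match("door-lock-01", samples):
              actuate()            # only an enrolled user may operate the device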
  • in the handgun use case of Fig. 7, the handgun 700 again incorporates a fingerprint sensor 702 in the trigger and/or handle.
  • this type of device may justify providing a higher density two-dimensional sensor 702, with a crosspoint density somewhat greater than that of the typical sparse touchscreen array 120.
  • Fingerprint information may be matched by processing logic located within the handgun itself or by pairing the gun over a wireless link to device 100 or cloud processor 650 as described in connection with Fig. 6.
  • biometric sensors such as a moisture sensor (to detect moisture on the user’s palm), a heartbeat sensor, or a camera (which obtains face data, facial expression data, or iris scan information) may also be incorporated into handgun 700 and fused with data from the fingerprint sensor 702. A match against an enrolled data set by user authentication 370 may be needed in order to determine whether or not to electronically enable the trigger 710.
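  • A sketch of how the additional biometric readings might be fused with the fingerprint match before enabling the trigger 710; the weights, threshold and function name are illustrative assumptions rather than values from this description.

      def authorize_trigger(fingerprint_score, moisture_ok, heartbeat_ok,
                            face_score=None, threshold=0.8):
          """Combine the fingerprint correlation score with the other biometric
          checks; return True only when the fused evidence is strong enough."""
          score = 0.6 * fingerprint_score
          score += 0.15 if moisture_ok else 0.0    # plausible palm contact
          score += 0.15 if heartbeat_ok else 0.0   # live user rather than a mold
          if face_score is not None:
              score += 0.10 * face_score
          return score >= threshold

      # A strong fingerprint match plus both liveness checks enables the trigger.
      print(authorize_trigger(0.92, moisture_ok=True, heartbeat_ok=True))  # True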
  • the techniques described herein may also use a novel approach to representing the enrolled data set.
  • a typical prior art approach considers a fingerprint data set (whether it be a photograph (visual scan) of the fingerprint, a set of 1-D time varying swipes, or a fused surface/manifold) to be a single template of a regular array of values.
  • a grayscale photograph of an authorized fingerprint has been divided into 32 overlapping pieces.
  • This template thus consists of a first set of 16 templates as indicated by the solid lines, and a second set of 16 templates (which each overlap with pieces of the first set) as indicated by the dashed lines.
  • the detected fingerprint data is individually correlated against each of the 32 pieces before making a match / no-match decision.
  • each template or sub-template can be independently rotated during self-correlation (users also typically touch the screen with their finger in different orientations / angles or with different sides or edges of the finger from swipe to swipe).
  • scaling of detected data versus enrolled data can provide more accurate results (for example, a user may touch the touchpad applying different finger pressure at different times, resulting in distortion of the detected fingerprint or stretching of the skin).
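  • The overlapping sub-template correlation could be sketched as follows, using scipy for the independent rotations; the tile geometry follows the solid-line set of Fig. 8, while the offset second set is simplified here by dropping tiles that fall off the edge, and the angles and acceptance threshold are assumptions.

      import numpy as np
      from scipy.ndimage import rotate

      def split_overlapping(template, grid=4):
          """Split a square template into grid x grid tiles, plus a second set
          offset by half a tile so that it overlaps the first set."""
          step = template.shape[0] // grid
          tiles = []
          for offset in (0, step // 2):      # solid-line set, then dashed-line set
              for r in range(grid):
                  for c in range(grid):
                      r0, c0 = offset + r * step, offset + c * step
                      tile = template[r0:r0 + step, c0:c0 + step]
                      if tile.shape == (step, step):
                          tiles.append(((r0, c0), tile))
          return tiles

      def tile_score(tile, detected, origin, angles=(-10, -5, 0, 5, 10)):
          """Best normalized correlation of one tile against the detected data,
          trying a few independent rotations of that tile."""
          r0, c0 = origin
          region = detected[r0:r0 + tile.shape[0], c0:c0 + tile.shape[1]]
          return max(np.corrcoef(rotate(tile, a, reshape=False, mode="nearest")
                                 .ravel(), region.ravel())[0, 1] for a in angles)

      def match(template, detected, accept=0.5):
          scores = [tile_score(t, detected, o)
                    for o, t in split_overlapping(template)]
          return np.mean(scores) > accept    # match / no-match decision

      # Demo: an enrolled template matched against a slightly noisy copy of itself.
      rng = np.random.default_rng(3)
      enrolled = rng.random((64, 64))
      print(match(enrolled, enrolled + rng.normal(0, 0.05, (64, 64))))   # True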
  • fingerprint-based authentication may be performed using the touchscreen grid during an operation such as a “slide to unlock”.
  • a slide to unlock prompt may encompass a small number of grid crossover points (for example only three or four of them).
  • the touchscreen logic and A/D may sample each of three crossover points at approximately 14,000 samples per second (a total of 42,000 samples per second); ridge-valley fingerprint detection 340 can then detect the fingerprint and match it against an enrolled data set.
  • the user may be prompted to swipe their finger along a set of enrollment “profile” lines presented on the display.
  • the enrollment lines may for example appear on the screen one by one with the user being prompted to swipe his finger along each respective enrollment line.
  • the techniques described above use a single intersection on a grid-like touchscreen as the user swipes across the touchscreen to create a 1-D time varying signal representing the ridges and valleys of the fingerprint as the finger travels across the intersection in a straight line.
  • This 1-D signal can be thought of as a “barcode” or barcode fingerprint.
  • this barcode was then compared to an actual full 2-D image of a user’s fingerprint in order to discern whether there was a match or relevant correlation between the just-generated barcode and virtual, or synthetic, barcodes created from the stored actual fingerprint image.
  • a possible solution to this is to execute a calibration or setup routine whereby a user makes a multiplicity of swipes, generating a large number of 1-D barcodes.
  • one or more of the barcodes may be “stitched together” to create the equivalent of a full fingerprint, or enough of a full fingerprint in order to correctly match future barcode scans with the stored calibrated set.
  • this process can lack precision due to the difficulty in capturing enough calibration swipes, and enough different calibration swipes, to have a suitable basis for comparison later.
  • a multiplicity of calibration barcodes might be stored and then used for later correlations with user swipes, without need to try to construct a representation of the full fingerprint image. Again, however, this approach does not work well without the user providing a large number of calibration swipes, something device users would not necessarily want to endure merely to allow the manufacturer to eliminate the fingerprint sensor.
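  • Correlating a newly captured barcode against a stored calibration set might be sketched as below; the normalized cross-correlation, the threshold and the synthetic demo signal are assumptions for illustration.

      import numpy as np

      def xcorr_peak(a, b):
          """Peak of the normalized cross-correlation between two 1-D barcodes,
          which tolerates the unknown time offset between swipes."""
          a = (a - a.mean()) / (a.std() + 1e-9)
          b = (b - b.mean()) / (b.std() + 1e-9)
          return np.correlate(a, b, mode="full").max() / min(len(a), len(b))

      def matches_enrolled(new_barcode, stored_barcodes, threshold=0.6):
          """Accept the swipe if any stored calibration barcode correlates with
          the new barcode above the (assumed) threshold."""
          return any(xcorr_peak(new_barcode, s) >= threshold
                     for s in stored_barcodes)

      # Demo: the same ridge pattern, time-shifted and noisy, still matches.
      rng = np.random.default_rng(4)
      ridge = np.sin(np.linspace(0, 40, 800))
      stored = [ridge + rng.normal(0, 0.1, ridge.size)]
      probe = np.roll(ridge, 37) + rng.normal(0, 0.1, ridge.size)
      print(matches_enrolled(probe, stored))   # True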
  • Two approaches can be used to create a functional representation of a fingerprint, or an actual fingerprint image, which can then be registered as the reference against which subsequent data from user swipes can later be compared.
  • an array of 12 x 12, or 144, intersections may be used. With a 40 mil spacing, an array of this size covers about a 480 x 480 mil (or 1/2 inch by 1/2 inch) portion of the finger.
  • a sample rate of 15 kHz per intersection is determined to be sufficient to validly create each of the 1-D scan signals.
  • Each 1-D signal is still a time-varying signal representative of the ridge and valley detail of the user’s fingerprint as shown in Fig. 2-4. It would therefore take a 2.160 MHz sample rate to pick up the 144 signals generated by all such intersections. Controller clocks currently run at 3 MHz, so it should be possible to validly sample that many intersections on each pass.
  • the required computing power to process the signals from the 144 intersection is estimated to be something less than 0.5 gigaflops, which is a small fraction of the average smart phone processing capability.
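  • The arithmetic behind these figures, written out as a quick check (the per-sample operation count is an assumption used only to show the load stays under the 0.5 gigaflop estimate).

      N_INTERSECTIONS = 12 * 12        # 144 crosspoints in the fingerprint sub-array
      SAMPLE_RATE_HZ = 15_000          # per-intersection rate judged sufficient
      CONTROLLER_CLOCK_HZ = 3_000_000  # available controller clock

      total_rate = N_INTERSECTIONS * SAMPLE_RATE_HZ      # 2,160,000 samples/s
      print(f"aggregate sample rate: {total_rate / 1e6:.2f} MHz")
      print("fits within the controller clock:", total_rate <= CONTROLLER_CLOCK_HZ)

      OPS_PER_SAMPLE = 200             # assumed floating-point ops per sample
      print(f"estimated load: {total_rate * OPS_PER_SAMPLE / 1e9:.2f} GFLOPs")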
  • the finger sensing sub-array could be selected to be located within any portion of the touchscreen (as described in Section 1 above) since the device itself can detect the general area of the user’s touch and then immediately set up the more precise sensing array around that location.
  • the selected portion of the touchscreen used as the fingerprint sensing array could also move if needed by the application or use of such a swipe. For instance, a verification swipe used to unlock a phone might have to traverse the whole screen in order to prevent accidental unlocking.
  • the purpose of the calibration or registration process in this sensing array embodiment would be to create a set of touch screen-generated fingerprint “barcodes” as a replacement for the previously required fingerprint image.
  • the user would first be prompted at 1110 to place the finger to be identified anywhere on the screen.
  • the system would then detect the location of the finger 1112 and set up 1114 the fingerprint sub-array in that area.
  • the device then senses the outputs of the array 1116, during which the user might be asked to move the finger enough such that one or more ridges or valleys pass over a multiplicity of sensors located in the path of motion.
  • each intersection generates a 1-D time varying signal as a fingerprint barcode.
  • the 1-D signal generated by an intersection point will therefore be quite similar to the 1-D signal generated by an adjacent intersection point, but translated or rotated depending upon the direction of travel of the finger and its orientation.
  • the finger need only move the distance between adjacent sensors, in the example described here that being 40 mils, to capture information concerning ridges and valleys not completely aligned with the direction of travel.
  • the registration process therefore may consist of a user merely lightly rubbing the screen, thus permitting the 12x12 array to generate sufficient information from a variety of directions.
  • the resulting 1-D signals then become, at step 1118, the registered reference set of “fingerprint barcodes”. These barcodes are then matched at a later time 1120 when it is desired to authenticate a current user.
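  • The logical flow of Fig. 11 might be sketched as below; each argument is a hypothetical callback standing in for device-specific behavior, since this description does not name concrete APIs.

      def register_fingerprint_barcodes(prompt, detect_finger_location,
                                        configure_subarray, sense_subarray,
                                        store_reference):
          """Steps 1110-1118: enroll the reference set of fingerprint barcodes."""
          prompt("Place the finger to be enrolled anywhere on the screen")  # 1110
          location = detect_finger_location()                               # 1112
          subarray = configure_subarray(location, size=(12, 12))            # 1114
          prompt("Lightly rub the screen so ridges cross several sensors")
          barcodes = sense_subarray(subarray)  # 1116: one 1-D signal per crosspoint
          store_reference(barcodes)            # 1118: registered reference set
          return barcodes

      def authenticate(sense_barcodes, load_reference, correlate):
          """Step 1120: match freshly sensed barcodes against the stored set."""
          return correlate(sense_barcodes(), load_reference())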
  • generation of the fingerprint barcodes at 1116 could be done with a set of simpler motions via one or more swipes in one direction. This will create a fingerprint skewed at some angle, dependent upon the direction of the swipe or swipes. The variation in swipe angle may be compensated for in the processing using the neuromorphic fast pattern recognizer (US Patent 8,401,297).
  • the user may execute a registration application.
  • the flow of an example application is shown in Fig. 12.
  • the user is prompted to take a photo of the finger of interest.
  • the app would optimize the shot to be used as the reference fingerprint.
  • the app would also put the camera in macro mode 1212 and set the exposure settings in such a way so as to optimize the clarity of the ridges and valleys.
  • Such an app could then take a photo 1214.
  • the captured photo may then at 1216 be converted to grayscale or resampled or otherwise processed as needed for use in later fingerprint matching.
  • Any step of storing 1218 the photo would preferably preserve the privacy of the user by not storing such photo in the photo gallery, but only in a file accessible to the fingerprint recognition applications (which may even be stored in encrypted form).
  • Many smartphones now have cameras with distance sensing. This distance information may be used in the processing step 1216 to ascertain how far away the finger was being held from the camera, thus permitting the image to be scaled appropriately.
  • the sensed distance information, or image analysis software, may also determine if the finger was being held at an angle with respect to the camera. If it was, compensating parallax adjustments may be made to the image to make it appear as it would have had the finger been held at a correct angle facing the camera.
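  • The photo processing of steps 1216-1218 could be sketched as follows; the use of Pillow, the reference distance and the output path are assumptions, and the parallax correction mentioned above is omitted for brevity.

      from PIL import Image

      REFERENCE_DISTANCE_CM = 10.0   # assumed distance the enrollment model expects

      def process_finger_photo(path, sensed_distance_cm, out_path):
          """Convert the captured photo to grayscale, rescale it so ridge spacing
          matches the reference capture distance, and save it to an app-private
          location rather than the photo gallery."""
          img = Image.open(path).convert("L")            # grayscale conversion
          scale = sensed_distance_cm / REFERENCE_DISTANCE_CM
          size = (max(1, round(img.width * scale)),
                  max(1, round(img.height * scale)))
          img = img.resize(size, Image.LANCZOS)          # distance-based scaling
          img.save(out_path)                             # private, app-only file
          return img

      # e.g. process_finger_photo("finger.jpg", sensed_distance_cm=12.5,
      #                           out_path="/data/app_private/reference_finger.png")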

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Input (AREA)
  • Collating Specific Patterns (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Several unique techniques are presented for using touch sensor arrays to detect fingerprint information and authenticate a user.
EP20874559.6A 2019-10-07 2020-10-05 Techniques for fingerprint detection and user authentication Withdrawn EP4042267A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/595,017 US11048786B2 (en) 2016-04-13 2019-10-07 Techniques for fingerprint detection and user authentication
PCT/US2020/054198 WO2021071762A1 (fr) 2019-10-07 2020-10-05 Techniques for fingerprint detection and user authentication

Publications (1)

Publication Number Publication Date
EP4042267A1 true EP4042267A1 (fr) 2022-08-17

Family

ID=75437595

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20874559.6A Withdrawn EP4042267A1 (fr) 2019-10-07 2020-10-05 Techniques pour la détection d'empreintes digitales et l'authentification d'utilisateur

Country Status (2)

Country Link
EP (1) EP4042267A1 (fr)
WO (1) WO2021071762A1 (fr)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8514284B2 (en) * 2009-12-17 2013-08-20 Raytheon Company Textured pattern sensing and detection, and using a charge-scavenging photodiode array for the same
CN106233306B (zh) * 2014-09-06 2019-09-27 深圳市汇顶科技股份有限公司 Method for registering and recognizing fingerprint profiles on a mobile device, and mobile device

Also Published As

Publication number Publication date
WO2021071762A1 (fr) 2021-04-15

Similar Documents

Publication Publication Date Title
US10438041B2 (en) Techniques for fingerprint detection and user authentication
US9349035B1 (en) Multi-factor authentication sensor for providing improved identification
JP6361942B2 (ja) Electronic device including an extremely small sensing area and fingerprint information processing method therefor
US20160140379A1 (en) Improvements in or relating to user authentication
US10013597B2 (en) Multi-view fingerprint matching
US20160342826A1 (en) Fingerprint based smart phone user verification
WO2016099382A1 (fr) Authentification d'empreinte digitale à l'aide de données de capteur de toucher
EP3158501A1 (fr) Procédé et appareil de sécurité à base de biométrique faisant appel à des profils capacitifs
JP2002352234A (ja) Fingerprint sensor and position adjusting device
CN104932753A (zh) Touch screen, touch method thereof, and display device
WO2016209594A1 (fr) Capteur d'empreintes digitales multi-résolution
US20190080065A1 (en) Dynamic interface for camera-based authentication
WO2017180819A2 (fr) Creating virtual intersection points on a touch screen to enable static, swipe-free fingerprint authentication of the user
US9785863B2 (en) Fingerprint authentication
CN110073354A (zh) Electronic device for biometric authentication of a user
GB2543323A (en) Secure biometric authentication
US11048786B2 (en) Techniques for fingerprint detection and user authentication
US11354934B2 (en) Location matched small segment fingerprint reader
EP3642759B1 (fr) Method of enrolling a fingerprint
US11580211B2 (en) Indication for multi-factor authentication
WO2021071762A1 (fr) Techniques for fingerprint detection and user authentication
CN111052133A (zh) Method and fingerprint sensing system for determining that a finger is in contact with a fingerprint sensor
KR100629410B1 (ko) Pointing device and method with fingerprint authentication function, and portable terminal therefor
KR20190028970A (ko) Security system using hand gesture recognition
JP2013246567A (ja) Information processing device, authentication method, and program

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220505

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20230503