WO2019021264A1 - System for and method of classifying a fingerprint - Google Patents
- Publication number
- WO2019021264A1 (PCT/IB2018/055687)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- quasi
- fingerprint
- singular point
- orientation
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/1347—Preprocessing; Feature extraction
- G06V40/1359—Extracting features related to ridge properties; Determining the fingerprint type, e.g. whorl or loop
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/1365—Matching; Classification
- G06V40/1376—Matching features related to ridge properties or fingerprint texture
Definitions
- This invention relates to fingerprint classification. More particularly, but not exclusively, this invention relates to a system for and a method of classifying a captured image of a fingerprint into preselected classes.
- With fingerprint verification, a subject first claims a particular identity by, for example, entering a unique PIN or presenting a personalized card.
- the recognition system then extracts the fingerprint template associated with that PIN or card, and compares it to the template generated from the fingerprint presented by the subject. It is a 1:1 comparison.
- a subject does not claim any identity.
- the individual merely presents his or her fingerprint to the recognition system, for the system to identify the individual.
- the system then has to go through the entire database of stored fingerprint templates, comparing the template generated from the presented fingerprint with all the stored templates in the database. It is a 1:M comparison, where M is the total number of records in the database.
- the database search time, T, is directly proportional to the value of M. This implies that, if the number of records in the template database is significantly large, the system takes longer to generate a result.
- For a database with a large value of M, it is necessary to fragment the database into a few partitions, so that a database search need not traverse the entire database.
- a search through one of the partitions should, in most transactions, be able to generate the required result.
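The partitioning idea described above can be sketched as follows. This is an illustrative outline only, not the patent's implementation; the `classify` and `match_score` callables and the score threshold are hypothetical placeholders.

```python
# Sketch: partition a template database by fingerprint class so that a
# 1:M identification search only visits one partition instead of all M
# records. classify(), match_score(), and the threshold are placeholders.

def build_partitions(records, classify):
    """Group (template, identity) records by their fingerprint class."""
    partitions = {}
    for template, identity in records:
        partitions.setdefault(classify(template), []).append((template, identity))
    return partitions

def identify(probe, partitions, classify, match_score, threshold=0.8):
    """Search only the partition whose class matches the probe's class."""
    candidates = partitions.get(classify(probe), [])
    best = max(candidates, key=lambda rec: match_score(probe, rec[0]), default=None)
    if best is not None and match_score(probe, best[0]) >= threshold:
        return best[1]
    return None  # no sufficiently close template in the relevant partition
```

With this layout, the expected search cost per transaction drops from M comparisons to roughly the size of one partition.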
- these partitions are known as fingerprint classes, and they are determined by a fingerprint classifier, one of the modules of an automated fingerprint recognition system, by performing fingerprint analytics on captured images of fingerprints.
- fingerprint classes include the Left Loop (LL), Right Loop (RL), Central Twins (CT), Tented Arch (TA), and Plain Arch (PA).
- LL Left Loop
- RL Right Loop
- CT Central Twins
- TA Tented Arch
- PA Plain Arch
- fingerprints that belong to the LL class have a ridge pattern that emanates from the left-hand side of the fingerprint, flows inwards, and returns in the same direction.
- Fingerprints that belong to the RL class have a ridge pattern that emanates from the right-hand side of the fingerprint, flows inwards, and returns in the same direction.
- Fingerprints that belong to the CT class have a circular ridge pattern. Fingerprints that belong to the TA class have a ridge pattern that emanates from one side of the fingerprint and returns in the opposite direction; the convex ridges in the middle of a fingerprint that belongs to the TA class have significant curvature. Fingerprints that belong to the PA class likewise have a ridge pattern that emanates from one side of the fingerprint and returns in the opposite direction, but the convex ridges in the middle of a fingerprint that belongs to the PA class have insignificant curvature.
- In order to classify a captured fingerprint image into one of the preselected fingerprint classes, it is well known to make use of fingerprint characteristics or landmarks known as fingerprint singular points or singularities.
- the terms 'singularity' or 'singular point' of a fingerprint image are often used by those skilled in the art to refer to a fingerprint core or a fingerprint delta.
- a fingerprint core is forensically defined as the inner-most turning point of a fingerprint loop.
- a fingerprint delta is a point where the fingerprint ridges tend to form a triangular shape. Accordingly, these terms should be understood, for purposes of this specification, as embracing such meaning.
- a problem associated with conventional singularity detection techniques is that, although a particular subject's fingerprint contains both a fingerprint core and a fingerprint delta, for example, in some instances it may happen that the captured image thereof does not include one of the delta and core. For such cases, the default classification rule would fail to order the captured image into the correct fingerprint class.
- a method of classifying a captured image of a fingerprint into a preselected class comprising the steps of: providing an orientation image of at least part of the captured image, the orientation image comprising a matrix of pixels that represent the local orientation of every ridge in the at least part of the captured image; detecting a first quasi-singular point corresponding with a first singular point of the fingerprint in a first region of the orientation image; detecting a second quasi-singular point corresponding with a second singular point of the fingerprint in a second region of the orientation image; and utilising data relating to the detected first and second quasi-singular points to classify the fingerprint in the captured image.
- the step of detecting the first quasi-singular point may include, navigating across the orientation image in the first region thereof in a first, preferably horizontal direction defined by a row of adjacent pixels of the orientation image, so as to locate a first location where adjacent pixels represent a change in orientation values from acute-to-obtuse or obtuse-to-acute, which first location marks the presence of the first quasi-singular point which corresponds with the first singular point, and wherein the change in orientation values is measured against a line extending in the first, preferably horizontal direction.
- the step of detecting the second quasi-singular point may include, navigating across the orientation image in the second region thereof in the first, preferably horizontal direction defined by a row of adjacent pixels of the orientation image at the second region thereof, so as to locate a second location where adjacent pixels represent a change in orientation values from acute-to-obtuse or obtuse-to-acute, which second location marks the presence of the second quasi-singular point which corresponds with the second singular point, and wherein the change in orientation values is measured against a line extending in the first, preferably horizontal direction.
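The row navigation described in these steps can be sketched as a scan over one row of block-wise orientation values (degrees in [0, 180), measured against the horizontal line). The helper below is a hypothetical illustration, not the patent's algorithm.

```python
# Sketch: scan a row of adjacent orientation values and report the first
# column where neighbouring blocks change from acute (< 90 deg) to obtuse
# (> 90 deg), or vice versa, relative to the horizontal navigation line.

def find_transition(row, start=0):
    """Return (column, kind) of the first acute/obtuse transition, else None."""
    for col in range(start, len(row) - 1):
        a, b = row[col], row[col + 1]
        if a < 90 < b:
            return col, "acute-to-obtuse"
        if a > 90 > b:
            return col, "obtuse-to-acute"
    return None  # no quasi-singular point evidence in this row
```

The location returned by such a scan marks the quasi-singular point used in place of the exact forensic singularity.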
- the step of providing the orientation image may comprise overlaying at least a part of the captured image of the fingerprint with a matrix or an array of non-overlapping pixel blocks, i.e. overlaying at least a part of the captured image of the fingerprint with block-wise orientation values.
- the orientation image may include a grid or an array comprising a plurality of horizontally spaced pixels that define rows and a plurality of vertically spaced pixels that define columns.
- the first region of the orientation image may be a peripheral region of the orientation image, preferably an upper peripheral region of the orientation image.
- the second region of the orientation image may be a lower peripheral region of the orientation image.
- the data relating to the detected first and second quasi-singular points may include: their respective presence or absence; their relative locations, preferably their coordinates relative to a reference point defined on the orientation image; and/or the respective types of singular points.
- the method may also include the step of, detecting a third quasi-singular point that corresponds with a third singular point of the fingerprint, at the second region of the orientation image, wherein the step of detecting the third quasi-singular point includes navigating across the orientation image in the second region thereof in the first, preferably horizontal direction defined by a row of adjacent pixels of the orientation image at the second region thereof, so as to locate a third location where adjacent pixels represent a change in orientation values.
- the method may also include the step of, detecting a fourth quasi-singular point that corresponds with a fourth singular point in the second region of the orientation image.
- the step of using the data of the detected first and second quasi-singular points to classify the fingerprint may include forming a slope of a line between the first quasi-singular point and the second quasi-singular point if both the first and second quasi-singular points are present in the fingerprint; and utilising the slope to classify the fingerprint.
- the step of utilizing the data of the detected first and second quasi-singular points to classify the fingerprint may include the steps of determining x-coordinate positions of the first and second quasi-singular points; determining the difference between the x-coordinate positions of the first and second quasi-singular points to determine a first value; comparing the first value to a predefined threshold value; and classifying the fingerprint into one of the preselected classes when the first value is less than the predefined threshold value.
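A minimal sketch of the x-coordinate comparison just described; the threshold value, the function name, and the returned class label are assumed placeholders rather than values from the patent.

```python
# Sketch: compare the x-coordinates of two quasi-singular points against a
# predefined threshold. The patent associates near-equal x-coordinates with
# one of the preselected classes; "TA" and threshold=3 are assumptions.

def classify_by_x_difference(p1, p2, threshold=3, class_if_close="TA"):
    """p1, p2: (x, y) quasi-singular points on the orientation image grid."""
    if p1 is None or p2 is None:
        return None  # rule only applies when both points were detected
    if abs(p1[0] - p2[0]) < threshold:
        return class_if_close
    return None  # defer to the other classification rules
```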
- the method may include the step of determining the class of the fingerprint in the captured image if none of the first and second quasi-singular points are detected in the fingerprint.
- each of the first, second, third and fourth singular points may either be a fingerprint core or a fingerprint delta.
- a system for classifying a captured image of a fingerprint into a preselected class comprising: a processor; and a memory that is connected to the processor, the memory containing instructions which when executed by the processor cause the processor to: provide data relating to the captured image of the fingerprint; provide an orientation image of at least part of the captured image, the orientation image comprising a matrix of pixels that represent the local orientation of every ridge in the at least part of the captured image; detect a first quasi-singular point in a first region of the orientation image, which first quasi-singular point corresponds with a first singular point of the fingerprint; detect a second quasi-singular point in a second region of the orientation image, which second quasi-singular point corresponds with a second singular point of the fingerprint; and use data relating to the detected first and second quasi-singular points to classify the fingerprint in the captured image.
- a non-transitory computer-readable device storing instructions thereon which when executed by a processor of a computing device performs the functions of: providing an orientation image of at least part of the captured image, the orientation image comprising a matrix of pixels that represent the local orientation of every ridge in the at least part of the captured image; detecting a first quasi-singular point corresponding with a first singular point of the fingerprint in a first region of the orientation image; detecting a second quasi-singular point corresponding with a second singular point of the fingerprint in a second region of the orientation image; and utilising data relating to the detected first and second quasi-singular points to classify the fingerprint in the captured image.
- a method of classifying a fingerprint in a captured image into one of preselected fingerprint classes comprising: providing an orientation image of the captured image of the fingerprint; detecting a first singular point of the fingerprint in a first region of the orientation image; detecting a second quasi-singular point corresponding with a second singular point of the fingerprint in a second region of the orientation image, wherein the second singular point is different from the first singular point; and using data of the detected first singular point and second quasi-singular point to classify the fingerprint.
- the step of detecting the first singular point may include, making horizontal row by row navigations or vertical column by column navigations, in a first region of the orientation image, so as to locate transition points defining locations where adjacent pixels represent a change in orientation values from acute-to-obtuse or obtuse-to-acute with respect to the direction of the navigation; and connecting the transition points to define a transition path, the end of which path defining the location of the first singular point.
- the step of detecting the second quasi-singular point may include the step of determining the type of first singular point which was previously detected; and accordingly navigate across the orientation image in a second region thereof in the first, preferably horizontal direction defined by a row of adjacent pixels of the orientation image at the second region thereof, so as to locate a location where adjacent pixels represent a change in orientation values from acute-to-obtuse or obtuse-to-acute, which second location marks the presence of the second quasi-singular point which corresponds with the second singular point that is different from the first singular point, and wherein the change in orientation values is measured against a line extending in the navigation direction (i.e. first direction, preferably horizontal direction).
- the step of providing the orientation image may comprise overlaying at least a part of the captured image of the fingerprint with a matrix or an array of non-overlapping pixel blocks, i.e. overlaying at least a part of the captured image of the fingerprint with block-wise orientation values.
- the orientation image may include a grid or an array comprising a plurality of horizontally spaced pixels that define rows and a plurality of vertically spaced pixels that define columns.
- the first region of the orientation image may extend between an upper peripheral region and middle region of the orientation image.
- the second region of the orientation image may be a lower peripheral region of the orientation image.
- the method may also include the step of, detecting a third quasi-singular point that corresponds with a third singular point of the fingerprint, at the second region of the orientation image, wherein the step of detecting the third quasi-singular point includes navigating across the orientation image in the second region thereof in the first, preferably horizontal direction defined by a row of adjacent pixels of the orientation image at the second region thereof, so as to locate a third location where adjacent pixels represent a change in orientation values.
- the method may also include the step of, detecting a fourth quasi- singular point that corresponds with a fourth singular point in the second region of the orientation image.
- the step of using the data of the detected first singular point and second quasi-singular point to classify the fingerprint may include forming a slope of a line between the first singular point and the quasi-singular point; and utilising the slope to classify the fingerprint.
- the method may include the step of determining the class of the fingerprint in the captured image if none of the first singular point and quasi-singular point are detected in the fingerprint.
- the first singular point may either be a fingerprint core or a fingerprint delta.
- a system for classifying a fingerprint in a captured image into one of preselected fingerprint classes comprising: a processor; and a memory that is connected to the processor, the memory containing instructions which when executed by the processor cause the processor to: provide data relating to the captured image of the fingerprint; provide an orientation image of the captured image of the fingerprint; detect a first singular point of the fingerprint in a first region of the orientation image; detect a second quasi-singular point corresponding with a second singular point of the fingerprint in a second region of the orientation image, in which the second singular point is different from the first singular point; and use data of the detected first singular point and second quasi-singular point to classify the fingerprint.
- a non-transitory computer-readable device storing instructions thereon which when executed by a processor of a computing device performs the functions of: providing an orientation image of the captured image of the fingerprint; detecting a first singular point of the fingerprint in a first region of the orientation image; detecting a second quasi-singular point corresponding with a second singular point of the fingerprint in a second region of the orientation image, wherein the second singular point is different from the first singular point; and using data of the detected first singular point and second quasi-singular point to classify the fingerprint.
- FIG. 1 shows a flow diagram illustrating steps of a method of classifying a captured image of a fingerprint into preselected classes according to the invention;
- FIG. 2 shows an example captured image of a fingerprint;
- FIG. 3 shows an example orientation image representation of a fingerprint;
- FIG. 4 shows an example algorithm that could be executed in order to determine a quasi-location of a convex core in the image of FIG. 3;
- FIG. 5 shows an example algorithm that could be executed in order to determine a quasi-location of a concave core in the image of FIG. 3;
- FIG. 6 shows an example algorithm that could be executed in order to determine a quasi-location of a delta in the image of FIG. 3;
- FIG. 7 shows a high-level block diagram illustrating a system for classifying a captured image of a fingerprint into preselected classes according to the invention.
- FIGS. 8 - 13 show example uses of the invention as herein described.
- example embodiments of a method of and a system for classifying a captured image of a fingerprint into preselected classes are generally designated by the reference numeral 10 in FIGs. 1 and 7.
- FIG. 1 shows a flow diagram of the method 10 of classifying a captured image of a fingerprint into preselected classes.
- the method 10 comprises, at 12, providing a captured image of a fingerprint.
- the captured image is an electronic representation of a subject's fingerprint. It should be appreciated that the captured image could have been captured by a fingerprint reader, or by scanning a physical representation of a fingerprint.
- data relating to an orientation image of at least part of the captured image is generated.
- the orientation image comprises a matrix of pixels that represent the local orientation of every ridge in the at least part of the captured image.
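The patent does not prescribe how the block-wise ridge orientations are estimated; one standard possibility is the gradient-based least-squares estimator, sketched here for a single pixel block as an assumed, illustrative choice.

```python
import math

# Sketch: estimate the dominant ridge orientation of one pixel block from
# its image gradients (a well-known gradient-based method; the patent does
# not mandate this estimator). Result is in degrees, in [0, 180), measured
# against the horizontal, matching the orientation image described above.

def block_orientation(gx, gy):
    """gx, gy: lists of x- and y-gradient values over one pixel block."""
    gxx = sum(x * x for x in gx)
    gyy = sum(y * y for y in gy)
    gxy = sum(x * y for x, y in zip(gx, gy))
    theta = 0.5 * math.atan2(2.0 * gxy, gxx - gyy)  # dominant gradient direction
    angle = math.degrees(theta) + 90.0              # ridge runs orthogonal to it
    return angle % 180.0
```

For example, a block of purely horizontal ridges (gradients pointing vertically) yields an orientation near 0 degrees, and vertical ridges yield about 90 degrees.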
- quasi-singular points of the fingerprint are detected in first and second peripheral regions of the orientation image.
- FIG. 2 shows a first example captured image 20 of a fingerprint 22.
- the fingerprint 22 comprises alternating ridges (24.1, ..., 24.n), which are represented by the dark lines, and furrows 26, which are indicated by the white spaces in between the dark lines of the ridges 24.
- FIG. 3 is an example orientation image 28 of another fingerprint, being a fingerprint region of interest (ROI) overlaid with block-wise orientation values 30 obtained from a preceding ridge orientation estimation module.
- the orientation image 28 illustrates the local orientation of the fingerprint ridges and is seen as a matrix of pixels 32 that represent the local orientation of every ridge in a captured image of a fingerprint. As can be seen in FIG. 3, the angle θ denotes an acute angle formed by a ridge with reference to a horizontal line A, and the angle φ denotes an obtuse angle formed by a ridge with reference to the horizontal line A.
- Part of the method includes generating data that relates to the orientation image 28 that represents at least part of a captured image of a fingerprint.
- each pixel-block 32 is treated as a pixel, and a collection of horizontal pixel blocks 32 is treated as a row (R), while a collection of vertical pixel blocks is treated as a column (C).
- the orientation image 28 thus includes a grid comprising a plurality of rows (for example, 20 rows in total as shown in FIG. 3) and columns (for example, 20 columns in total as shown in FIG. 3).
- the orientation image 28 accordingly is provided with a point of origin or reference point, C, at a left, upper corner thereof.
- the point of origin has the coordinates (0,0) from which a navigation for locating quasi-singularities (i.e. quasi- singular points) will emanate, as will be described below.
- the orientation image 28 has a second point D at a right, upper corner thereof.
- the second point D has the coordinates (20,0).
- the orientation image 28 has a third point E at a left, lower corner thereof, having the coordinates (0,20).
- the orientation image 28 has a fourth point F at a lower, right corner thereof, having the coordinates (20,20).
- the orientation image 28 includes two cores 34, 36 and one delta 38, which are detected by a conventional singularity detection module (not shown).
- In order to detect the singular points 34, 36, 38, the entire image 28 would have to be processed by the conventional singularity detection module, which can be a computationally expensive process, especially when many images need to be processed. This is largely due to its repetitive, iterative character.
- singular points 34, 36, 38 are, in the conventional application, used as inputs to a conventional model-based fingerprint classification module.
- the said classifier simply uses the analytical geometry of these singular points 34, 36, 38 to order the fingerprint into one of the five fingerprint classes. This geometry - in many instances - does not have to be exact, as long as the structural characteristics are preserved. This suggests that the location of these singular points 34, 36, 38 does not have to be exact.
- features that serve as representatives of the singular points 34, 36, 38 could equally be used in classifying a fingerprint.
- these features are - collectively - introduced as quasi-singularities or quasi-singularity points.
- the representation of a forensic core is referred to as a quasi-core, while the representation of a forensic delta is referred to as a quasi-delta.
- a quasi-location of a core should be understood to mean a quasi-core, and vice versa.
- a quasi-location of a delta should be understood to mean a quasi-delta, and vice versa.
- the method of classifying a fingerprint 10 in accordance with the present invention includes, navigating across an orientation image 28, preferably from the point of origin C, marked in FIG. 3 with coordinates (0,0), as mentioned before, in order to locate a first quasi-location 44 of the core 34 (i.e. the quasi-core), which first quasi-location represents a first, forensic singular point (i.e. a forensic core).
- a first row of the pixel blocks 30 is located in a first peripheral region 40 of the orientation image 28, wherein the first peripheral region 40 delineates an upper periphery of the orientation image 28.
- the first row of the pixel blocks extends from the point of origin C (0,0) and terminates at the second point D (20,0).
- the method 10 accordingly includes the step of detecting the first quasi-location 44 (i.e. first quasi-singularity), which step includes, navigating across the orientation image 28 in a first direction (being the x-direction of a Cartesian x-axis) from the point of origin C (0,0) up to the second point D (20,0), to obtain a first location where adjacent pixel blocks 32 represent a change in orientation values 30 from acute (i.e. θ) to obtuse (i.e. φ), measured against the horizontal line A extending in the first direction.
- a second quasi-location 46 (i.e. second quasi-singularity point) of the core 36, being a second singular point, is detected.
- the lower periphery 42 is a bottom row of the image 28 which comprises a row of pixel blocks spanning between the point E (0,20) and F (20,20).
- the step of detecting the second quasi-location 46 includes, navigating in the first direction (i.e. from point E to point F, or vice versa), to obtain a second location where adjacent pixels 32 represent a change in orientation 30 from obtuse-to-acute, measured against the horizontal line extending in the first direction (i.e. from left to right).
- a third quasi-location 48 of the delta 38 is located in the second peripheral region 42 of the orientation image 28.
- the step of detecting the third quasi-location 48 includes, navigating in the first direction, to obtain a third location where adjacent pixels 32 represent a change in orientation 30 from acute-to-obtuse, measured against the horizontal line A extending in the first direction.
- Representations of the cores 34, 36 and delta 38, being quasi-cores and a quasi-delta, are respectively indicated by reference numerals 44, 46 and 48 in FIG. 3.
- Orientation values 30 range from 0 degrees to 180 degrees, measured against a horizontal plane (denoted by the horizontal line A), counterclockwise, from left to right.
- the orientation of a fingerprint ridge is either acute (i.e. less than 90 degrees) or obtuse (greater than 90 degrees).
- a path 50 is formed at the point where the orientation values 30 change from acute-to-obtuse or obtuse-to-acute. It is known to refer to this path 50 as a transition line 50.
- a transition line 50 is a trajectory of transition points.
- a transition point is formed where there is a change in orientation values (acute-to-obtuse or obtuse-to-acute) when moving horizontally from one pixel-block 32 to the next 32.
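The transition-point and transition-line notions above can be sketched by scanning each row for an acute/obtuse change and connecting the hits into a trajectory. This is a simplified illustration (keeping at most one transition point per row), not the patent's exact algorithm.

```python
# Sketch: trace a transition line across an orientation image given as a
# list of rows of block orientations in degrees. A transition point occurs
# where adjacent blocks in a row change between acute (< 90) and obtuse
# (> 90); the end of the traced path approximates a forensic singular point.

def trace_transition_line(orientation_image):
    """Return the list of (x, y) transition points, one per row at most."""
    path = []
    for y, row in enumerate(orientation_image):
        for x in range(len(row) - 1):
            a, b = row[x], row[x + 1]
            if (a < 90 < b) or (a > 90 > b):
                path.append((x, y))
                break  # simplification: keep one transition point per row
    return path  # path[-1], if any, approximates the singular point location
```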
- the fingerprint of the type shown in FIG. 3 has two forensic cores 34, 36 (one convex and one concave), two quasi-cores 44, 46 (one convex and one concave), one forensic delta 38, and one quasi-delta 48.
- An acute change in orientation values 30 from one pixel-block 32 to the next/adjacent pixel block 32 in the same row indicates a transition point on a transition line 50 that leads to a core.
- the absolute difference between these two adjacent transition blocks 32, O_diff, is less than a heuristically determined threshold, T_quasi-core.
- a forensic core 34, 36 is located at the end of the transition line 50, a point where there is a high degree of acuteness in the orientation value 30 change between the two pixel-blocks 32 that form the transition point.
- the absolute difference between these two transition blocks 32, O_diff, is less than a heuristically determined threshold, T_core.
- An obtuse change in orientation values 30 from one pixel-block 32 to the next 32 indicates a transition point on a transition line 50 that leads to a forensic delta 38.
- the absolute difference between these two transition blocks 32, O_diff, is less than a heuristically determined threshold, T_quasi-delta.
- a forensic delta 38 is located at the end of the transition line 50, a point where there is a high degree of obtuseness in the orientation value 30 change between the two pixel-blocks 32 that form the transition point.
- the absolute difference between these two transition blocks 32, O_diff, is less than a heuristically determined threshold, T_delta.
- the method 10 further uses data relating to the determined quasi-locations to classify the captured fingerprint into the correct class.
- such data includes, but is not limited to: the presence or absence of the quasi-singular points; the relative locations of the quasi-singularities on the orientation image with respect to, for example, a predefined point of reference such as the point of origin C; and the type(s) of detected quasi-singular point(s).
- a gradient (slope) of a vector or line linking the points is determined as part of classifying the fingerprint (for fingerprints that belong to the RL and LL class).
- Fingerprints that belong to the CT class are classified based on the mere presence of quasi-singular points, while images that belong to the TA class are classified on the basis of the relationship between the x-coordinate of a quasi-core (if it exists) and a quasi-delta (if it exists).
- Images that belong to the PA class are classified on the basis of the absence of quasi-singular points.
- a quasi-core is defined as the transition point located at the beginning of a transition line that leads to a forensic core.
- a quasi-delta is defined as the transition point located at the beginning of a transition line that leads to a forensic delta.
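Taken together, the rules outlined above (absence of quasi-singular points for PA, presence of additional singularities for CT, near-equal x-coordinates for TA, and the slope of the core-delta line for LL/RL) might be combined as sketched below. The x-threshold and the slope-sign convention are assumptions, not values from the patent.

```python
# Sketch: combine the classification rules described in the text. The
# quasi-core/quasi-delta arguments are (x, y) points or None; extra_points
# counts any further detected singularities. Threshold and the mapping of
# slope direction to LL vs RL are illustrative assumptions.

def classify(quasi_core, quasi_delta, extra_points=0, x_threshold=3):
    if quasi_core is None and quasi_delta is None:
        return "PA"                      # no quasi-singular points present
    if extra_points > 0:
        return "CT"                      # additional singularities present
    if quasi_core is not None and quasi_delta is not None:
        dx = quasi_core[0] - quasi_delta[0]
        if abs(dx) < x_threshold:
            return "TA"                  # nearly vertical core-delta line
        return "LL" if dx < 0 else "RL"  # slope direction (assumed convention)
    return None                          # insufficient evidence to decide
```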
- the example algorithm shown in FIG. 4 outlines the procedure used to detect a convex quasi-core 44, if it exists, in a given fingerprint.
- the instructions in the algorithm are executed only once before getting the location of the convex quasi-core 44.
- the example algorithm shown in FIG. 5 outlines the procedure used to detect a concave quasi-core 46, if it exists, in a given fingerprint.
- the instructions in each algorithm are executed only once before obtaining the location (such as x-coordinates) of the convex or concave quasi-core 46.
- the example algorithm shown in FIG. 6 outlines the procedure used to detect a quasi-delta 48, if it exists, in a given fingerprint.
- the instructions in the algorithm are executed only once before obtaining the location of the quasi-delta 48.
- the orientation image 28 for a portion of the fingerprint in the captured image may be provided and the method may include detecting a forensic singular point, such as a first forensic core 34 and a second forensic core 36 in accordance with the conventional method of detecting the singular points in the fingerprint.
- the conventional detection module (not shown) may accordingly only be able to detect the first and second forensic cores 34, 36.
- the method in accordance with the present invention would comprise establishing the type of detected forensic singularities 34, 36 in the orientation image 28, and accordingly detect a third quasi-singular point 48 of a third forensic singular point 38 which is different in character to the previously detected first and second forensic singular points.
- the step of detecting the third quasi-singular point 48 may include navigating horizontally across a second periphery of the orientation image 28 (i.e. between point E and F) as described above to locate a transition point of the orientation values, which transition point would indicate the presence of the third quasi-singular point 48.
- the method would accordingly include the step of using the data of the detected first and second forensic singularities along with the detected third quasi-singularity to establish that the fingerprint has three singularities, and accordingly classify the fingerprint in the CT class.
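The counting step above can be sketched as a simple rule set. This is a hedged illustration only: the function name and the exact rules are assumptions, with the class labels (PA, TA, LL, RL, CT) taken from the examples in this description; the CT rule reflects the cases of two forensic cores, or three detected singularities, described here.

```python
def classify_by_singularities(cores, deltas):
    """Map detected quasi-cores and quasi-deltas to a coarse class.

    cores, deltas: lists of (x, y) locations.  A loop (LL vs RL) or
    tented arch (TA) decision would additionally need the slope between
    the core and the delta, so that case is left undecided here.
    """
    n = len(cores) + len(deltas)
    if n == 0:
        return "PA"            # plain arch: no singularities detected
    if len(cores) >= 2 or n >= 3:
        return "CT"            # two cores, or three-plus singularities
    if len(cores) == 1 and len(deltas) == 1:
        return "LOOP_OR_TA"    # needs the core-delta slope to split LL / RL / TA
    return "UNDECIDED"
```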
- the method described in this version can equally be used to establish the classes of other fingerprints whose forensic singularities have been established by the conventional singularity detection module (not shown).
- the provided orientation image 28 may have been processed by the traditional singularity detection module (not shown) to establish the location of the forensic singularity, in most cases a forensic core. Accordingly, the same orientation image 28 may be processed by the quasi-singularity detection module 16 according to the present invention, which would navigate across a second peripheral region of the orientation image 28 to locate a second quasi-singular point, typically a quasi-delta, which corresponds with a second forensic singularity (i.e. a forensic delta) different from the first singular point (i.e. the forensic core).
- the data (such as the coordinates) of the located second quasi-singular point and the data of the first singular point (as detected by the traditional singularity detection module) can be used to form a slope between the first singular point and second quasi-singular point in order to properly classify the fingerprint.
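The slope-based step can be sketched as follows. The sign convention is an assumption introduced for illustration; the description says only that a slope is formed between the first singular point and the second quasi-singular point in order to classify the fingerprint.

```python
def loop_direction(core, delta):
    """Classify a loop from the relative positions of a core and a
    (quasi-)delta, given as (x, y) pixel coordinates with the origin
    at the top-left of the image.

    The convention below is illustrative only: a delta to the right of
    the core suggests a left loop (LL), a delta to the left a right
    loop (RL), and a delta directly below the core a tented arch (TA).
    """
    dx = delta[0] - core[0]
    if dx == 0:
        return "TA"
    return "LL" if dx > 0 else "RL"
```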
- FIG. 7 shows a high-level block diagram illustrating a system 10 for classifying a captured image of a fingerprint into preselected fingerprint classes.
- a plurality of distributed fingerprint readers 52.1 to 52.n capture a plurality of images relating to fingerprints. Data relating to a captured image of a fingerprint could then be subjected to a plurality of computing processes 54, including, but not limited to, contrast enhancement and foreground segmentation, and is then stored in a first database 56.
- the system 10 preferably comprises the first database 56 and a processor 58 which is connected to the first database 56.
- the system 10 further comprises a memory (not shown) which is connected to the processor 58 and configured to utilise the data relating to the fingerprint image and contains several instructions which can be executed by the processor 58.
- the algorithm comprises an orientation image module 14 for providing an orientation image of at least part of the captured image of the fingerprint 12.
- the provision of the orientation image may comprise overlaying a grid of pixels, comprising rows and columns of non-overlapping pixel blocks, over the captured image of the fingerprint.
- the algorithm further comprises a quasi-singularity detection module 16 for detecting quasi-singular points of the fingerprint in the orientation image.
- the algorithm yet further comprises a classification module 18 for utilizing data relating to the detected quasi-singular points to classify the captured image 12 into preselected fingerprint classes. Data relating to the classification of the image is stored in a second database 60. It will be appreciated that a backend 62 for an operator may be utilised, comprising the processor 58, a memory (not shown) and the second database 60.
- first and second databases 56, 60 form a single database.
- Backend 62 may hence receive data relating to a plurality of scanned fingerprint images which were captured by the plurality of fingerprint readers 52.1 to 52.n.
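The data flow of the system 10 described above can be sketched as a minimal backend: preprocessed images from the first database 56 pass through the orientation, quasi-singularity detection and classification modules, and the resulting class labels are kept in the second database 60. All names, the use of plain dictionaries as databases, and the callable-module interface are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ClassificationBackend:
    """Illustrative sketch of backend 62 with two stores standing in
    for the first database 56 and second database 60."""
    first_db: dict = field(default_factory=dict)   # image_id -> preprocessed image data
    second_db: dict = field(default_factory=dict)  # image_id -> class label

    def process(self, image_id, orient, detect, classify):
        image = self.first_db[image_id]
        orientation = orient(image)          # orientation image module 14
        singularities = detect(orientation)  # quasi-singularity detection module 16
        label = classify(singularities)      # classification module 18
        self.second_db[image_id] = label
        return label
```

Passing the three modules in as callables keeps the sketch independent of any particular detection or classification rule.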
- in FIG. 8A there is shown a region of interest (i.e. ROI) extracted from a fingerprint with one forensic core and no forensic delta.
- the fingerprint sensor was not able to capture the forensic delta of this fingerprint.
- the quasi-singularity detection module 16 was able to detect a quasi-delta 64 and a quasi-core 66.
- the fingerprint classification module was able to order this fingerprint into the correct LL class.
- in FIG. 9A there is shown a ROI extracted from a fingerprint with one forensic core and no forensic delta.
- the fingerprint sensor was not able to capture the forensic delta of this fingerprint.
- the quasi-singularity detection module was able to detect a quasi-delta 68 and a quasi-core 70.
- the fingerprint classification module was able to order this fingerprint into the correct RL class.
- FIG. 10A shows a ROI extracted from a fingerprint with two forensic cores and no forensic delta.
- the fingerprint sensor was not able to capture the two forensic deltas of this fingerprint.
- the quasi-singularity detection module was able to detect a quasi-delta 72 that represents one of the missing forensic deltas, as well as two quasi-cores 74.1, 74.2.
- as seen in FIG. 10B, this led to the accurate classification of the fingerprint in class CT.
- the classification result is more reliable if the input is very close to being ideal (in this case, three singularities out of four), than in a case where the input is less close to being ideal (in this case, two singularities out of four).
- FIG. 11A shows a ROI extracted from a fingerprint with one forensic core and one forensic delta.
- the fingerprint sensor was able to capture both these features.
- the quasi-singularity detection module was able to detect both the quasi-core 76 and the quasi-delta 78.
- the fingerprint classification module was able to order this fingerprint into the correct TA class.
- FIG. 12A shows a ROI extracted from a fingerprint with no forensic core and no forensic delta.
- the quasi-singularity detection module did not detect any quasi-core or quasi-delta. As seen in FIG. 12B, this led to the accurate classification of the fingerprint in class PA.
- FIG. 13A shows a ROI extracted from a fingerprint with one forensic core and no forensic delta.
- the fingerprint sensor was not able to capture the forensic delta of this fingerprint. In addition, this fingerprint is not properly aligned; it appears to have been slightly rotated clockwise.
- the quasi-singularity detection module was able to detect a quasi-core 80 and a quasi-delta 82. As seen in FIG. 13B, this led to the accurate classification of the fingerprint in class
- a quasi-singularity can either be a quasi-core or a quasi-delta.
- a quasi-core is defined as the transition point located at the beginning of the transition line that leads to a forensic core.
- a quasi-delta is defined as the transition point located at the beginning of the transition line that leads to a forensic delta.
- Quasi-singularity detection is more computationally efficient than the conventional detection of forensic singularities.
- the detection time is reduced, while maintaining the same classification accuracy, and, in some instances, the classification accuracy is improved.
- Even fingerprint images whose fingerprint foreground is skewed or rotated are correctly classified, because it is possible to detect their quasi-singularities.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| ZA2020/01144A ZA202001144B (en) | 2017-07-24 | 2020-02-24 | System for and method of classifying a fingerprint |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| ZA2017/05009 | 2017-07-24 | ||
| ZA201705009 | 2017-07-24 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019021264A1 true WO2019021264A1 (en) | 2019-01-31 |
Family
ID=65040961
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IB2018/055687 Ceased WO2019021264A1 (en) | 2017-07-24 | 2018-07-30 | System for and method of classifying a fingerprint |
Country Status (2)
| Country | Link |
|---|---|
| WO (1) | WO2019021264A1 (en) |
| ZA (1) | ZA202001144B (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TW201015450A (en) * | 2008-10-07 | 2010-04-16 | Univ Nat Kaohsiung Applied Sci | Fingerprint classification method of using hierarchical singular point detection and traced orientation flow, and fingerprint classification system thereof |
| US20120195478A1 (en) * | 2011-02-01 | 2012-08-02 | Wen-Hsing Hsu | High-speed fingerprint feature identification system and method thereof according to triangle classifications |
| US20150347804A1 (en) * | 2014-03-03 | 2015-12-03 | Tsinghua University | Method and system for estimating fingerprint pose |
- 2018-07-30: WO PCT/IB2018/055687 patent/WO2019021264A1/en not_active Ceased
- 2020-02-24: ZA ZA2020/01144A patent/ZA202001144B/en unknown
Non-Patent Citations (2)
| Title |
|---|
| AWAD ET AL.: "Efficient Fingerprint Classification Using Singular Point", INTERNATIONAL JOURNAL OF DIGITAL INFORMATION AND WIRELESS COMMUNICATIONS (IJDIWC), vol. 1, no. 3, 2011, pages 611 - 616, XP055570496, Retrieved from the Internet <URL:http://sdiwc.net/digital-library/download.php?id=00000168.pdf> * |
| JEYALAKSHMI ET AL.: "Fingerprint Image Classification using Singular Points and Orientation Information", INT. JOURNAL OF ENGINEERING RESEARCH AND APPLICATION, September 2017 (2017-09-01), pages 33 - 42, XP055570501, Retrieved from the Internet <URL:https://www.ijera.com/papers/Vol7_issue9/Part-2/F0709023342.pdf> * |
Also Published As
| Publication number | Publication date |
|---|---|
| ZA202001144B (en) | 2021-09-29 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18838937 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 18838937 Country of ref document: EP Kind code of ref document: A1 |
|
| 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 22.01.2021) |