
WO2025120713A1 - Information processing device, information processing method, and recording medium - Google Patents

Information processing device, information processing method, and recording medium

Info

Publication number
WO2025120713A1
WO2025120713A1 (PCT/JP2023/043341)
Authority
WO
WIPO (PCT)
Prior art keywords
information processing
feature points
image
clusters
fragmented
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2023/043341
Other languages
French (fr)
Japanese (ja)
Inventor
祥明 外山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corp
Priority to PCT/JP2023/043341
Publication of WO2025120713A1


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints

Definitions

  • This disclosure relates to the technical fields of information processing devices, information processing methods, and recording media.
  • Patent Document 1 describes a technology that replaces the coordinate position of each search fingerprint feature point included in the search fingerprint data with a coordinate position in a coordinate system based on a coordinate reference point included in the file fingerprint data. For the search fingerprint feature points located around the coordinate reference point and the group of file fingerprint feature points included in the file fingerprint data, the technology checks for the presence or absence of any singular points scattered within a coordinate range near the coordinate reference point defined in the fingerprint matching parameters, performs a zone check of that coordinate range and a quality check of the group of feature points located within it, and determines whether to use the axis matching method or the position matching method.
  • The objective of this disclosure is to provide an information processing device, an information processing method, and a recording medium that improve upon the technology disclosed in the prior art documents.
  • One aspect of the information processing device includes an extraction means for extracting a plurality of feature points from a pattern image, a classification means for classifying the plurality of feature points into a plurality of clusters, a generation means for generating a fragmented image from the pattern image based on the plurality of clusters, and an output means for outputting the fragmented image.
  • One aspect of the information processing method is to extract a plurality of feature points from a pattern image, classify the plurality of feature points into a plurality of clusters, generate a fragmented image from the pattern image based on the plurality of clusters, and output the fragmented image.
  • In one aspect of the recording medium, a computer program is recorded that causes a computer to execute an information processing method that extracts a plurality of feature points from a pattern image, classifies the plurality of feature points into a plurality of clusters, generates a fragmented image from the pattern image based on the plurality of clusters, and outputs the fragmented image.
  • FIG. 1 is a block diagram showing an example of the configuration of an information processing device according to the present disclosure.
  • FIG. 2 is a block diagram showing an example of the configuration of an information processing device according to the present disclosure.
  • FIG. 3 is a flowchart illustrating an example of an information processing operation of the information processing device according to the present disclosure.
  • FIG. 4 is a conceptual diagram illustrating an example of an information processing operation of an information processing device according to the present disclosure.
  • FIGS. 5 to 7 are block diagrams each showing an example of the configuration of an information processing device according to the present disclosure.
  • [1: First embodiment]
  • A first embodiment of an information processing device, an information processing method, and a recording medium will be described below using an information processing device 1 according to this disclosure.
  • FIG. 1 is a block diagram showing the configuration of an information processing device 1 according to this disclosure. As shown in FIG. 1, the information processing device 1 includes an extraction unit 11, a classification unit 12, a generation unit 13, and an output unit 14.
  • The extraction unit 11 extracts a plurality of feature points from a pattern image.
  • The classification unit 12 classifies the plurality of feature points into a plurality of clusters.
  • The generation unit 13 generates a fragmented image from the pattern image based on the plurality of clusters.
  • The output unit 14 outputs the fragmented image.
  • A pattern image is an image of a pattern that includes a plurality of feature points.
  • Here, a pattern is one that can be acquired from a living body, and includes at least one of a fingerprint, a palm print, and a footprint.
  • The information processing device 1 generates a fragmented image based on clusters of feature points. Therefore, the information processing device 1 can include meaningful areas of the pattern image in the fragmented image. Because the fragmented image is generated based on clusters of feature points, it is unlikely to include meaningless areas, such as areas that contain no feature points. The information processing device 1 can thus increase the proportion of meaningful areas of the pattern image that are included in the fragmented image.
  • [2: Second embodiment]
  • A second embodiment of an information processing device, an information processing method, and a recording medium will be described below using an information processing device 2 according to this disclosure.
  • FIG. 2 is a block diagram showing the configuration of the information processing device 2.
  • The information processing device 2 includes an arithmetic device 21 and a storage device 22.
  • The information processing device 2 may include a communication device 23, an input device 24, and an output device 25.
  • The information processing device 2 does not have to include at least one of the communication device 23, the input device 24, and the output device 25.
  • The arithmetic device 21, the storage device 22, the communication device 23, the input device 24, and the output device 25 may be connected via a data bus 26.
  • The arithmetic device 21 includes, for example, at least one of a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and an FPGA (Field Programmable Gate Array).
  • The arithmetic device 21 reads a computer program.
  • The arithmetic device 21 may read a computer program stored in the storage device 22.
  • The arithmetic device 21 may read a computer program stored in a computer-readable, non-transitory recording medium using a recording medium reading device (e.g., the input device 24 described later) that is provided in the information processing device 2 and not shown in the figure.
  • The arithmetic device 21 may acquire (i.e., download or read) a computer program from a device (not shown) located outside the information processing device 2 via the communication device 23 (or another communication device).
  • The arithmetic device 21 executes the read computer program.
  • As a result, a logical function block for executing the operation to be performed by the information processing device 2 is realized within the arithmetic device 21.
  • That is, the arithmetic device 21 can function as a controller for realizing a logical function block for executing the operation (in other words, processing) to be performed by the information processing device 2.
  • The arithmetic device 21 may output information to a device (not shown), such as another computer or a cloud server, provided outside the information processing device 2, via the communication device 23 (or another communication device).
  • The arithmetic device 21 realizes an extraction unit 211, which is a specific example of the "extraction means" described in the appendix below, a classification unit 212, which is a specific example of the "classification means", a generation unit 213, which is a specific example of the "generation means", an output unit 214, which is a specific example of the "output means", an acquisition unit 215, and a reception unit 216, which is a specific example of the "reception means".
  • At least one of the acquisition unit 215 and the reception unit 216 may not be realized in the arithmetic device 21. Details of the operations of the extraction unit 211, classification unit 212, generation unit 213, output unit 214, acquisition unit 215, and reception unit 216 will be described later with reference to FIG. 3.
  • The storage device 22 can store desired data.
  • The storage device 22 may temporarily store a computer program executed by the arithmetic device 21.
  • The storage device 22 may temporarily store data that is used by the arithmetic device 21 while the arithmetic device 21 is executing a computer program.
  • The storage device 22 may store data that the information processing device 2 stores for a long period of time.
  • The storage device 22 may include at least one of a RAM (Random Access Memory), a ROM (Read Only Memory), a hard disk device, a magneto-optical disk device, an SSD (Solid State Drive), and a disk array device.
  • That is, the storage device 22 may include a non-transitory recording medium.
  • The communication device 23 is capable of communicating with devices external to the information processing device 2 via a communication network (not shown).
  • The communication device 23 may be a communication interface based on standards such as Ethernet (registered trademark), Wi-Fi (registered trademark), Bluetooth (registered trademark), and USB (Universal Serial Bus).
  • The input device 24 is a device that accepts information input to the information processing device 2 from outside the information processing device 2.
  • The input device 24 may include an operating device (e.g., at least one of a keyboard, a mouse, and a touch panel) that can be operated by an operator of the information processing device 2.
  • The input device 24 may include a reading device that can read information recorded as data on a recording medium that can be attached externally to the information processing device 2.
  • The output device 25 is a device that outputs information to the outside of the information processing device 2.
  • The output device 25 may output information as an image. That is, the output device 25 may include a display device (a so-called display) capable of displaying an image showing the information to be output.
  • The output device 25 may output information as sound. That is, the output device 25 may include an audio device (a so-called speaker) capable of outputting sound.
  • The output device 25 may output information on paper. That is, the output device 25 may include a printing device (a so-called printer) capable of printing desired information on paper.
  • [2-2: Information Processing Operation Performed by Information Processing Device 2]
  • FIG. 3 is a flowchart showing an example of the flow of the information processing operation performed by the information processing device 2.
  • The acquisition unit 215 acquires a pattern image (step S20).
  • The acquisition unit 215 may acquire a pattern image stored in the storage device 22.
  • The acquisition unit 215 may also acquire a pattern image from an external device via the communication device 23.
  • The extraction unit 211 extracts a plurality of feature points from the acquired pattern image (step S21).
  • FIG. 4(a) illustrates an example of a plurality of feature points x extracted from the pattern image PI.
  • The reception unit 216 receives the designation of a predetermined number (n) (step S22).
  • The reception unit 216 may receive the designation of the predetermined number (n) that the user inputs to the input device 24.
  • The operation of step S22 may be performed before step S20. Furthermore, the operation of step S22 is not a required operation in a single information processing operation, and may be omitted in some cases. For example, when the user checks information related to the feature points extracted from the pattern image and recognizes that a designation is necessary, the reception unit 216 may receive the input of the designation of the predetermined number (n).
  • The feature points contained in the pattern image can each form a cohesive group, i.e., a cluster.
  • The classification unit 212 classifies the plurality of feature points into a plurality of clusters (step S23).
  • The feature points classified into a cluster are called feature points belonging to the cluster. Details of the operation of step S23 will be described with reference to FIG. 3(b).
  • The classification unit 212 determines the number of clusters (C) from the number (N) of extracted feature points and a predetermined number (n), such that each of the multiple clusters contains the predetermined number (n) of feature points (step S23-1).
  • The classification unit 212 may determine the number of clusters (C) by dividing the number (N) of extracted feature points by the predetermined number (n). In other words, when a pattern image contains a relatively large number of feature points, the classification unit 212 classifies them into a relatively large number of clusters, and when a pattern image contains a relatively small number of feature points, the classification unit 212 classifies them into a relatively small number of clusters.
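As a rough illustration of step S23-1, the cluster count C can be derived from N and n as described above. The Python sketch below is illustrative only; the function name and the choice of ceiling division (so that no feature point is left without a cluster) are assumptions, not taken from the disclosure.

```python
import math

def cluster_count(num_feature_points: int, points_per_cluster: int) -> int:
    """Determine the number of clusters C so that each cluster holds
    roughly `points_per_cluster` (n) of the N extracted feature points."""
    if points_per_cluster <= 0:
        raise ValueError("points_per_cluster (n) must be positive")
    # Round up so that every extracted feature point can be placed in a cluster.
    return max(1, math.ceil(num_feature_points / points_per_cluster))
```

For example, 53 extracted feature points with n = 10 would yield C = 6 under this sketch.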
  • The classification unit 212 selects C feature points as initial representative points (step S23-2).
  • The classification unit 212 may select the initial representative points randomly, or may select feature points that are appropriately separated from each other as the initial representative points.
  • The classification unit 212 repeats a first operation (step S23-3) of classifying the multiple feature points into C clusters each including n feature points centered on a representative point, and a second operation (step S23-4) of selecting a new representative point from each of the classified C clusters.
  • The classification unit 212 may repeat the first and second operations until the classified clusters become cohesive and meaningful groups (step S23-5: Yes).
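One plausible reading of the first and second operations (steps S23-3 and S23-4) is a capped variant of k-means: each feature point is assigned to the nearest representative point whose cluster still has room for n members, and each cluster's new representative point is its centroid. The sketch below is a hypothetical interpretation under those assumptions, not the algorithm fixed by the disclosure.

```python
import random

def classify_feature_points(points, num_clusters, points_per_cluster,
                            iterations=10, seed=0):
    """Capped k-means-style clustering: up to `points_per_cluster` (n)
    feature points per cluster, starting from `num_clusters` (C) clusters."""
    rng = random.Random(seed)
    reps = rng.sample(points, num_clusters)  # step S23-2: initial representative points
    clusters = []
    for _ in range(iterations):
        clusters = [[] for _ in reps]
        for p in points:
            # First operation (S23-3): nearest representative with remaining capacity.
            order = sorted(range(len(reps)),
                           key=lambda i: (p[0] - reps[i][0]) ** 2 + (p[1] - reps[i][1]) ** 2)
            for i in order:
                if len(clusters[i]) < points_per_cluster:
                    clusters[i].append(p)
                    break
        # Second operation (S23-4): new representative point = centroid of each cluster.
        reps = [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
                for c in clusters if c]
    return [c for c in clusters if c]
```

A fixed iteration count stands in here for the "cohesive and meaningful groups" stopping test of step S23-5, which the disclosure leaves open.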
  • FIG. 4(b) illustrates multiple feature points x classified into five clusters C1, C2, C3, C4, and C5.
  • The generation unit 213 generates fragmented images from the pattern image based on the multiple clusters (step S24).
  • The generation unit 213 generates multiple fragmented images, one corresponding to each of the multiple clusters.
  • The generation unit 213 generates each fragmented image so as to include the feature points belonging to the corresponding cluster.
  • The generation unit 213 generates each fragmented image according to the distribution of the feature points belonging to the corresponding cluster. In other words, the generation unit 213 generates the fragmented image corresponding to a cluster so that areas where the feature points included in the cluster are not distributed are small.
  • FIG. 4(c) illustrates fragmented images I1, I2, I3, I4, and I5 generated from the pattern image based on the five clusters C1, C2, C3, C4, and C5. The shapes of the fragmented images will be described in other embodiments.
  • The generation unit 213 may generate a fragmented image based on the distance between the representative point of a cluster and each of the feature points belonging to the cluster.
  • The representative point of a cluster may be a feature point located at the center of the cluster, or a feature point located at the center of gravity of the cluster. Alternatively, the representative point of a cluster may be a position where no feature point exists.
  • The output unit 214 outputs the fragmented image (step S25).
  • The output unit 214 may control a display serving as the output device 25 to display the fragmented image.
  • The output unit 214 may control the storage device 22 to store the fragmented image in the storage device 22.
  • The output unit 214 may control the communication device 23 to transmit the fragmented image to an external device.
  • The information processing device 2 may be configured to allow the number of necessary fragmented images to be specified.
  • The reception unit 216 may receive a specification of the desired number (C) of fragmented images.
  • The reception unit 216 may receive a specification of the desired number (C) that the user inputs to the input device 24.
  • When the number of necessary fragmented images is specified, the number of feature points contained in each fragmented image changes depending on the number of feature points contained in the pattern image.
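Conversely to step S23-1, when the desired number of fragmented images C is specified, the per-cluster feature-point count follows from N. A hypothetical sketch; the floor division and the lower bound of 1 are assumptions of this illustration, not stated in the disclosure.

```python
def points_per_cluster(num_feature_points: int, num_fragments: int) -> int:
    """Feature points per fragmented image (n) when the number of
    fragmented images (C) is fixed by the user."""
    if num_fragments <= 0:
        raise ValueError("num_fragments (C) must be positive")
    return max(1, num_feature_points // num_fragments)
```

Under this sketch, 50 feature points with C = 5 give roughly 10 feature points per fragmented image, while a sparser pattern image gives fewer.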
  • Machine learning is used for image enhancement processing and feature extraction processing in fingerprint matching, and machine learning requires a large amount of training data.
  • Fingerprint matching may be performed using a fragmentary fingerprint image, such as a latent fingerprint image, which shows only a partial area of the entire fingerprint, rather than a fingerprint image that includes the entire fingerprint, such as an imprint fingerprint image.
  • A fragmentary fingerprint image of an area that does not include minutiae is not useful for matching, and is not useful for machine learning either. Therefore, for example, a fragmentary fingerprint image obtained by randomly dividing an imprint fingerprint is often not useful for machine learning.
  • The information processing device 2 disclosed herein is useful for mass generation of fragmentary fingerprint images that are useful for machine learning.
  • The information processing device 2 may prepare a fragmented fingerprint image that resembles a latent fingerprint by compositing a shading or other image as a background, as training data to be used in machine learning for the various processes of latent fingerprint matching.
  • Compared with fingerprint images, palm print images are larger in size and contain more feature points to detect. As the number of feature points increases, the matching cost increases.
  • Palm print images also have a wide distribution of feature points, so they are susceptible to image distortion. This makes it difficult, for example, to overlay entire palm print images on each other, and matching the entire palm print image is therefore difficult. In addition, the entire palm print image often contains areas that are difficult to use for matching.
  • The information processing device 2 disclosed herein is useful for generating a fragmented palm print image that includes an area useful for palm print matching.
  • The information processing device 2 disclosed herein determines the shape of the fragmented image corresponding to a cluster based on at least one of the distance between the representative point of the cluster and each of the feature points belonging to the cluster and the shape of the distribution of the feature points belonging to the cluster, and is therefore able to generate a fragmented image that includes many meaningful areas containing feature points.
  • The information processing device 2 can treat the areas included in each fragmented image as independently meaningful areas.
  • Because each cluster contains approximately the same number of feature points, each fragmented image has an approximately equal amount of information. Since the number of feature points included in a fragmented image or the number of fragmented images can be specified, the information processing device 2 can generate fragmented images having a desired amount of information.
  • [3: Third embodiment]
  • A third embodiment of an information processing device, an information processing method, and a recording medium will be described below using an information processing device 3 according to this disclosure.
  • The information processing device 3 differs from the second embodiment in the operation of its generation unit 313.
  • The generation unit 313 generates an elliptical fragmented image.
  • The generation unit 313 may determine the elliptical shape according to the distribution of the feature points belonging to the cluster.
  • For example, the generation unit 313 may determine the elliptical shape based on the shape of the distribution of the feature points belonging to the cluster.
  • The generation unit 313 may determine the shape of the ellipse based on the distance between the representative point of a cluster and each of the feature points belonging to the cluster. For example, the generation unit 313 may determine a major axis based on the distance between the representative point of a cluster and each of the feature points belonging to that cluster, and generate the fragmented image corresponding to the cluster based on an ellipse defined by this major axis. The generation unit 313 may set the major axis of the ellipse based on the longest distance between the representative point of the cluster and the feature points belonging to that cluster.
  • The generation unit 313 may instead generate a circular fragmented image.
  • The generation unit 313 may determine the radius of the circle based on the longest distance between the representative point of the cluster and the feature points belonging to that cluster.
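A minimal sketch of the circular variant: the fragment is cut from the pattern image by masking everything outside a circle centered on the representative point, with radius equal to the longest representative-to-member distance. Representing the image as a list of pixel rows, the optional margin, and the zero background are assumptions of this sketch, not details from the disclosure.

```python
def circular_fragment(pattern, rep, members, margin=0.0):
    """Return a copy of `pattern` (a list of pixel rows) with every pixel
    outside the cluster's circle set to 0.  The circle is centered on the
    representative point `rep` = (row, col), and its radius is the longest
    distance from `rep` to a member feature point, plus an optional margin."""
    cy, cx = rep
    radius = max(((y - cy) ** 2 + (x - cx) ** 2) ** 0.5 for y, x in members) + margin
    return [
        [px if (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2 else 0
         for x, px in enumerate(row)]
        for y, row in enumerate(pattern)
    ]
```

An elliptical fragment would differ only in the mask test, scaling the two squared terms by the squared semi-axes instead of a single radius.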
  • The information processing device 3 disclosed herein generates a fragmented image corresponding to a cluster based on an ellipse whose major axis is determined from the distance between the representative point of the cluster and the feature points belonging to that cluster, and can therefore generate a fragmented image that follows the distribution shape of the feature points belonging to the cluster.
  • [4: Fourth embodiment]
  • A fourth embodiment of an information processing device, an information processing method, and a recording medium will be described below using an information processing device 4 according to this disclosure.
  • The information processing device 4 differs from the second and third embodiments in the operation of its generation unit 413.
  • The generation unit 413 generates a polygonal fragmented image.
  • The polygon may be a convex polygon.
  • The generation unit 413 may determine the polygonal shape according to the distribution of the feature points belonging to the cluster.
  • For example, the generation unit 413 may determine the polygonal shape based on the shape of the distribution of the feature points belonging to the cluster.
  • The generation unit 413 may determine the polygonal shape based on the distance between the representative point of the cluster and each of the feature points belonging to the cluster.
  • [4-2: Technical Effects of Information Processing Device 4]
  • The information processing device 4 generates polygonal fragmented images, and can therefore generate fragmented images that follow the distribution shape of the feature points belonging to each cluster.
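One natural choice for a convex polygonal fragment boundary is the convex hull of the cluster's feature points, the smallest convex polygon containing all of them. The monotone-chain implementation below is a standard algorithm offered as an illustrative sketch; the disclosure does not prescribe how the polygon is chosen.

```python
def convex_hull(points):
    """Andrew's monotone chain: convex hull of 2-D points in
    counter-clockwise order (collinear boundary points dropped)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive for a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Concatenate the two chains, dropping the duplicated endpoints.
    return lower[:-1] + upper[:-1]
```

A fragmented image could then be produced by masking the pattern image outside this polygon, analogously to the circular sketch above but with a point-in-polygon test.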
  • [5: Fifth embodiment]
  • A fifth embodiment of an information processing device, an information processing method, and a recording medium will be described below using an information processing device 5 according to this disclosure.
  • The fifth embodiment differs from the second to fourth embodiments in that a determination unit 517 is realized in the arithmetic device 21.
  • The determination unit 517 determines whether the fragmented image should be elliptical or polygonal in shape, depending on the distribution of the feature points belonging to the cluster. For each of the multiple clusters, the determination unit 517 determines whether the fragmented image corresponding to the cluster should be elliptical or polygonal.
  • The determination unit 517 may determine whether the fragmented image should be elliptical or polygonal based on the shape of the distribution of the feature points belonging to the cluster (referred to as the "distribution shape").
  • For example, the determination unit 517 may determine which of a plurality of predefined shapes the distribution shape matches.
  • The predefined shapes include an ellipse and polygons including at least one of a triangle, a rectangle, a pentagon, and a hexagon. The predefined shape determined by the determination unit 517 is referred to as the "area shape".
  • The determination unit 517 may also determine whether the fragmented image should be elliptical or polygonal based on the distance between the representative point of the cluster and each of the feature points belonging to the cluster.
  • Alternatively, the information processing device 5 may accept an area shape specified by a user who has confirmed the distribution shape.
  • The generation unit 513 generates a fragmented image of the area shape.
  • The generation unit 513 may deform the area shape so that it includes the region of the distribution shape while the region outside the distribution shape is reduced, and generate a fragmented image of the deformed area shape.
  • The deformation of the area shape by the generation unit 513 may include enlarging or reducing the area shape.
  • The deformation of the area shape by the generation unit 513 may include making any side of a polygon convex toward the inside of the area shape.
  • The deformation of the area shape by the generation unit 513 may include changing the length of at least one of the major axis and the minor axis of an elliptical shape.
  • The information processing device 5 disclosed herein determines whether the shape of a fragmented image should be elliptical or polygonal, and can therefore generate a fragmented image that follows the distribution shape of the feature points belonging to each cluster.
  • [6: Notes]
  • The information processing device according to claim 1, wherein the generating means generates a plurality of the fragmented images corresponding to each of the plurality of clusters, and each fragmented image includes the feature points that belong to the corresponding cluster.
  • The generating means generates, for each of the plurality of clusters, the fragmented image corresponding to the cluster in accordance with a distribution of the feature points belonging to the cluster.
  • [Appendix 12] The information processing device according to claim 1, further comprising a reception unit configured to receive a designation of a desired number of the fragmented images.
  • The pattern image includes at least one of a fingerprint image and a palm print image.
  • [Appendix 14] An information processing method comprising: extracting a plurality of feature points from a pattern image; classifying the plurality of feature points into a plurality of clusters; generating a fragmented image from the pattern image based on the plurality of clusters; and outputting the fragmented image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

This information processing device includes: an extraction means for extracting a plurality of feature points from a pattern image; a classification means for classifying the plurality of feature points into a plurality of clusters; a generation means for generating a fragmented image from the pattern image on the basis of the plurality of clusters; and an output means for outputting the fragmented image.

Description

Information processing device, information processing method, and recording medium


JP 2004-265038 A

The objective of this disclosure is to provide an information processing device, an information processing method, and a recording medium that improve upon the technology disclosed in the prior art document.

One aspect of the information processing device includes: an extraction means for extracting a plurality of feature points from a pattern image; a classification means for classifying the plurality of feature points into a plurality of clusters; a generation means for generating a fragmented image from the pattern image based on the plurality of clusters; and an output means for outputting the fragmented image.

One aspect of the information processing method extracts a plurality of feature points from a pattern image, classifies the plurality of feature points into a plurality of clusters, generates a fragmented image from the pattern image based on the plurality of clusters, and outputs the fragmented image.

In one aspect of the recording medium, a computer program is recorded that causes a computer to execute an information processing method of extracting a plurality of feature points from a pattern image, classifying the plurality of feature points into a plurality of clusters, generating a fragmented image from the pattern image based on the plurality of clusters, and outputting the fragmented image.

FIG. 1 is a block diagram showing an example of the configuration of an information processing device according to this disclosure.
FIG. 2 is a block diagram showing an example of the configuration of an information processing device according to this disclosure.
FIG. 3 is a flowchart showing an example of an information processing operation of an information processing device according to this disclosure.
FIG. 4 is a conceptual diagram showing an example of an information processing operation of an information processing device according to this disclosure.
FIG. 5 is a block diagram showing an example of the configuration of an information processing device according to this disclosure.
FIG. 6 is a block diagram showing an example of the configuration of an information processing device according to this disclosure.
FIG. 7 is a block diagram showing an example of the configuration of an information processing device according to this disclosure.

Hereinafter, embodiments of an information processing device, an information processing method, and a recording medium will be described with reference to the drawings.
[1: First embodiment]

A first embodiment of an information processing device, an information processing method, and a recording medium will be described. Hereinafter, the first embodiment will be described using an information processing device 1 according to this disclosure.
[1-1: Configuration of information processing device 1]

FIG. 1 is a block diagram showing the configuration of an information processing device 1 according to this disclosure. As shown in FIG. 1, the information processing device 1 includes an extraction unit 11, a classification unit 12, a generation unit 13, and an output unit 14.

The extraction unit 11 extracts a plurality of feature points from a pattern image. The classification unit 12 classifies the plurality of feature points into a plurality of clusters. The generation unit 13 generates a fragmented image from the pattern image based on the plurality of clusters. The output unit 14 outputs the fragmented image.

In this disclosure, a pattern image is an image of a pattern that includes a plurality of feature points. A pattern image is an image that can be acquired from a living body, and includes at least one of a fingerprint, a palm print, and a footprint.
[1-2: Technical Effects of Information Processing Device 1]

The information processing device 1 according to this disclosure generates a fragmented image based on clusters of feature points. The information processing device 1 can therefore include meaningful regions of the pattern image in the fragmented image. Because the fragmented image is generated based on clusters of feature points, it is unlikely to include meaningless regions, such as regions that contain no feature points. The information processing device 1 can increase the proportion of the fragmented image occupied by meaningful regions of the pattern image.
[2: Second embodiment]

A second embodiment of an information processing device, an information processing method, and a recording medium will be described. Hereinafter, the second embodiment will be described using an information processing device 2 according to this disclosure.
[2-1: Configuration of information processing device 2]

FIG. 2 is a block diagram showing the configuration of the information processing device 2. As shown in FIG. 2, the information processing device 2 includes an arithmetic device 21 and a storage device 22. The information processing device 2 may further include a communication device 23, an input device 24, and an output device 25. However, the information processing device 2 need not include at least one of the communication device 23, the input device 24, and the output device 25. The arithmetic device 21, the storage device 22, the communication device 23, the input device 24, and the output device 25 may be connected via a data bus 26.

The arithmetic device 21 includes, for example, at least one of a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and an FPGA (Field Programmable Gate Array). The arithmetic device 21 reads a computer program. For example, the arithmetic device 21 may read a computer program stored in the storage device 22. For example, the arithmetic device 21 may read a computer program stored in a computer-readable, non-transitory recording medium using a recording medium reading device (for example, the input device 24 described later) that is provided in the information processing device 2 but not shown in the figure. The arithmetic device 21 may acquire (that is, download or read) a computer program from a device (not shown) located outside the information processing device 2 via the communication device 23 (or another communication device). The arithmetic device 21 executes the read computer program. As a result, logical functional blocks for executing the operations to be performed by the information processing device 2 are realized within the arithmetic device 21. In other words, the arithmetic device 21 can function as a controller that realizes logical functional blocks for executing the operations (in other words, processing) to be performed by the information processing device 2. The arithmetic device 21 may output information to a device (not shown), such as another computer or a cloud server, provided outside the information processing device 2, via the communication device 23 (or another communication device).

FIG. 2 shows an example of the logical functional blocks realized within the arithmetic device 21 to execute information processing operations. As shown in FIG. 2, the arithmetic device 21 realizes an extraction unit 211, which is a specific example of the "extraction means" described in the appendix below; a classification unit 212, which is a specific example of the "classification means" described in the appendix below; a generation unit 213, which is a specific example of the "generation means" described in the appendix below; an output unit 214, which is a specific example of the "output means" described in the appendix below; an acquisition unit 215; and a reception unit 216, which is a specific example of the "reception means" described in the appendix below. However, at least one of the acquisition unit 215 and the reception unit 216 need not be realized within the arithmetic device 21. Details of the operations of the extraction unit 211, the classification unit 212, the generation unit 213, the output unit 214, the acquisition unit 215, and the reception unit 216 will be described later with reference to FIG. 3.

The storage device 22 can store desired data. For example, the storage device 22 may temporarily store a computer program executed by the arithmetic device 21. The storage device 22 may temporarily store data that the arithmetic device 21 uses while executing a computer program. The storage device 22 may store data that the information processing device 2 retains over the long term. The storage device 22 may include at least one of a RAM (Random Access Memory), a ROM (Read Only Memory), a hard disk device, a magneto-optical disk device, an SSD (Solid State Drive), and a disk array device. In other words, the storage device 22 may include a non-transitory recording medium.

The communication device 23 can communicate with devices external to the information processing device 2 via a communication network (not shown). The communication device 23 may be a communication interface based on a standard such as Ethernet (registered trademark), Wi-Fi (registered trademark), Bluetooth (registered trademark), or USB (Universal Serial Bus).

The input device 24 is a device that accepts input of information to the information processing device 2 from outside the information processing device 2. For example, the input device 24 may include an operating device (for example, at least one of a keyboard, a mouse, and a touch panel) that an operator of the information processing device 2 can operate. For example, the input device 24 may include a reading device that can read information recorded as data on a recording medium that can be externally attached to the information processing device 2.

The output device 25 is a device that outputs information to the outside of the information processing device 2. For example, the output device 25 may output information as an image. That is, the output device 25 may include a display device (a so-called display) capable of displaying an image representing the information to be output. For example, the output device 25 may output information as sound. That is, the output device 25 may include an audio device (a so-called speaker) capable of outputting sound. For example, the output device 25 may output information on paper. That is, the output device 25 may include a printing device (a so-called printer) capable of printing desired information on paper.
[2-2: Information Processing Operation Performed by Information Processing Device 2]

The information processing operation performed by the information processing device 2 will be described with reference to FIG. 3. FIG. 3 is a flowchart showing an example of the flow of the information processing operation performed by the information processing device 2.

As shown in FIG. 3(a), the acquisition unit 215 acquires a pattern image (step S20). The acquisition unit 215 may acquire a pattern image stored in the storage device 22. The acquisition unit 215 may also acquire a pattern image from an external device via the communication device 23. The extraction unit 211 extracts a plurality of feature points from the acquired pattern image (step S21). FIG. 4(a) illustrates an example of a plurality of feature points x extracted from a pattern image PI.

The reception unit 216 receives a designation of a predetermined number (n) (step S22). The reception unit 216 may receive the designation of the predetermined number (n) that a user inputs to the input device 24. The operation of step S22 may be performed before step S20. Furthermore, the operation of step S22 is not essential within a single information processing operation and may be omitted. For example, when the user checks information about the feature points extracted from the pattern image and determines that a designation is necessary, the reception unit 216 may receive the input of the designation of the predetermined number (n).

The feature points contained in the pattern image can form cohesive groups, that is, clusters. The classification unit 212 classifies the plurality of feature points into a plurality of clusters (step S23). A feature point classified into a cluster is called a feature point belonging to that cluster. Details of the operation of step S23 will be described with reference to FIG. 3(b).

As shown in FIG. 3(b), the classification unit 212 determines the number (C) of clusters to classify into from the number (N) of extracted feature points and the predetermined number, such that each of the plurality of clusters contains the predetermined number (n) of feature points (step S23-1). The classification unit 212 may determine the number of clusters (C) by dividing the number of extracted feature points (N) by the predetermined number (n). In other words, when the pattern image contains a relatively large number of feature points, the classification unit 212 classifies them into a relatively large number of clusters, and when the pattern image contains a relatively small number of feature points, it classifies them into a relatively small number of clusters.

The classification unit 212 selects C feature points as initial representative points (step S23-2). The classification unit 212 may select the initial representative points at random. Alternatively, the classification unit 212 may select feature points suitably separated from each other as the initial representative points.

The classification unit 212 repeats a first operation (step S23-3) of classifying the plurality of feature points into C clusters each containing n feature points centered on a representative point, and a second operation (step S23-4) of selecting a new representative point from each of the C classified clusters. The classification unit 212 may repeat the first and second operations until the classified clusters become cohesive, meaningful groups (step S23-5: yes). FIG. 4(b) illustrates a plurality of feature points x classified into five clusters C1, C2, C3, C4, and C5.
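As an illustration only (not part of the claimed disclosure), the iteration of steps S23-1 through S23-5 resembles a k-means-style procedure. The following Python sketch makes assumptions the text leaves open: representatives are actual feature points, assignment is plain nearest-representative (so cluster sizes are only approximately n), and convergence is detected when the representatives stop changing.

```python
import random

def classify_feature_points(points, n, max_iter=20, seed=0):
    """Cluster 2-D feature points so each cluster holds roughly n points.

    points: list of (x, y) tuples; n: desired points per cluster.
    Representatives are actual feature points (step S23-2), assignment is
    nearest-representative (step S23-3), and each new representative is the
    member closest to the cluster mean (step S23-4). Exact n-sized clusters
    are not enforced in this sketch.
    """
    rng = random.Random(seed)
    C = max(1, round(len(points) / n))   # step S23-1: C determined from N / n
    reps = rng.sample(points, C)         # step S23-2: initial representatives

    def d2(p, q):
        # Squared Euclidean distance (monotone in distance, cheaper to compute).
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    for _ in range(max_iter):
        clusters = [[] for _ in range(C)]
        for p in points:                 # step S23-3: assign to nearest rep
            clusters[min(range(C), key=lambda i: d2(p, reps[i]))].append(p)
        new_reps = []
        for members, rep in zip(clusters, reps):
            if not members:
                new_reps.append(rep)
                continue
            cx = sum(p[0] for p in members) / len(members)
            cy = sum(p[1] for p in members) / len(members)
            # step S23-4: new representative = member nearest the cluster mean
            new_reps.append(min(members, key=lambda p: d2(p, (cx, cy))))
        if new_reps == reps:             # step S23-5: representatives stable
            break
        reps = new_reps
    return clusters, reps
```

With N = 6 points and n = 3, the sketch yields C = 2 clusters, matching the example of FIG. 4(b) only in spirit; the real classification unit 212 may use any balanced clustering method.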

The generation unit 213 generates fragmented images from the pattern image based on the plurality of clusters (step S24). The generation unit 213 generates a plurality of fragmented images, one corresponding to each of the plurality of clusters. The generation unit 213 generates each fragmented image so as to include the feature points belonging to the corresponding cluster. The generation unit 213 generates the fragmented image according to the distribution of the feature points belonging to the corresponding cluster. That is, the generation unit 213 generates the fragmented image corresponding to a cluster so that the region in which the feature points of the cluster are not distributed is small. FIG. 4(c) illustrates fragmented images I1, I2, I3, I4, and I5 generated from the pattern image based on the five clusters C1, C2, C3, C4, and C5. The shapes of the fragmented images will be described in the other embodiments.

The generation unit 213 may generate the fragmented image based on the distance between the representative point of a cluster and each of the feature points belonging to the cluster. The representative point of a cluster may be the feature point located at the center of the cluster, or the feature point located at the centroid of the cluster. Alternatively, the representative point of a cluster may be a position at which no feature point exists.

The output unit 214 outputs the fragmented images (step S25). The output unit 214 may control a display serving as the output device 25 to display the fragmented images. The output unit 214 may control the storage device 22 to store the fragmented images. The output unit 214 may control the communication device 23 to transmit the fragmented images to an external device.
[2-3: Modifications]

The information processing device 2 may be configured so that the required number of fragmented images can be designated. In this case, the reception unit 216 may receive a designation of the desired number (C) of fragmented images. The reception unit 216 may receive the designation of the desired number (C) that a user inputs to the input device 24. When the required number of fragmented images is designated, the number of feature points contained in each fragmented image varies according to the number of feature points contained in the pattern image.
[2-4-1: Necessity of fragmented images 1]

Machine learning is used for image enhancement processing, feature extraction processing, and similar processing in fingerprint matching. Machine learning requires a large amount of training data.

Fingerprint matching may be performed using a fragmentary fingerprint image that shows only a partial region of the entire fingerprint, such as a latent fingerprint image, rather than a fingerprint image that captures the entire fingerprint, such as an imprint fingerprint image. When machine learning is used for the various processes in latent fingerprint matching, a large amount of training data consisting of fragmentary fingerprint images is required. However, there are not many samples of fragmentary fingerprint images that are useful for machine learning. For example, a fragmentary fingerprint image of a region that contains no feature points is useful neither for matching nor for machine learning. Therefore, fragmentary fingerprint images obtained by randomly dividing an imprint fingerprint are often not useful for machine learning. The information processing device 2 of this disclosure is useful for mass-generating fragmentary fingerprint images that are useful for machine learning.

Furthermore, latent fingerprint images often contain information other than the fingerprint. For this reason, the information processing device 2 may prepare fragmentary fingerprint images that more closely resemble latent fingerprints by compositing shading or other images as the background, for use as training data in machine learning for the various processes of latent fingerprint matching.
[2-4-2: Necessity of fragmented images 2]

Compared with fingerprint images, palm print images are larger and contain more detectable feature points. As the number of feature points increases, the matching cost increases. In addition, because the feature point distribution of a palm print image is wide, it is susceptible to image distortion. This makes it difficult, for example, to superimpose entire palm print images; matching entire palm print images is thus difficult. Moreover, an entire palm print image often contains regions that are difficult to use for matching.

Generating fragmented palm print images reduces the number of feature points extracted from each individual image, which reduces the matching cost. In addition, by including regions useful for palm print matching in the fragmented palm print images, the effects of distortion can be suppressed, important information is not missed, and unnecessary matching can be reduced. The information processing device 2 of this disclosure is useful for generating fragmented palm print images that contain regions useful for palm print matching.
[2-5: Technical Effects of Information Processing Device 2]

The information processing device 2 according to this disclosure determines the shape of the fragmented image corresponding to a cluster based on at least one of the distance between the representative point of the cluster and each of the feature points belonging to the cluster, and the shape of the distribution of the feature points belonging to the cluster. It can therefore generate fragmented images that largely consist of meaningful regions containing feature points. The information processing device 2 can make the region contained in each fragmented image an independently meaningful region.

Because the information processing device 2 generates each fragmented image so as to contain the predetermined number of feature points, the fragmented images carry equivalent amounts of information. Since the number of feature points per fragmented image, or the number of fragmented images, can be designated, the information processing device 2 can generate fragmented images having a desired amount of information.
[3: Third embodiment]

A third embodiment of an information processing device, an information processing method, and a recording medium will be described. Hereinafter, the third embodiment will be described using an information processing device 3 according to this disclosure. In the third embodiment, the operation of the generation unit 313 differs from that of the second embodiment.
[3-1: Information Processing Operation Performed by Information Processing Device 3]

The generation unit 313 generates an elliptical fragmented image. The generation unit 313 may determine the elliptical shape according to the distribution of the feature points belonging to the cluster. The generation unit 313 may determine the elliptical shape based on the shape of the distribution of the feature points belonging to the cluster.

The generation unit 313 may determine the elliptical shape based on the distance between the representative point of the cluster and each of the feature points belonging to the cluster. For example, the generation unit 313 may determine the major axis based on the distance between the representative point of a cluster and each of the feature points belonging to that cluster, and generate the fragmented image corresponding to the cluster based on the ellipse defined by this major axis. The generation unit 313 may determine the major axis of the ellipse based on the longest of the distances between the representative point of the cluster and the feature points belonging to that cluster.

The generation unit 313 may generate a circular fragmented image. The generation unit 313 may determine the radius of the circle based on the longest of the distances between the representative point of the cluster and the feature points belonging to that cluster.
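As a non-limiting illustration of the circular case, the crop radius can be taken as the longest representative-to-member distance. The Python sketch below adds a small padding margin so that boundary feature points are not clipped; the margin and the function names are assumptions, not part of the disclosure.

```python
import math

def circular_fragment_region(rep, members, margin=1.1):
    """Return (center, radius) of a circular crop region for one cluster.

    rep: representative point (x, y); members: feature points of the cluster.
    The radius follows the rule above: the longest representative-to-member
    distance, padded by a small margin (the margin is an assumption).
    """
    r = max(math.dist(rep, p) for p in members)
    return rep, r * margin

def crop_mask(shape, center, radius):
    """Boolean mask (nested lists, mask[y][x]) selecting pixels inside the circle."""
    h, w = shape
    cx, cy = center
    return [[(x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2 for x in range(w)]
            for y in range(h)]
```

An elliptical region would be derived analogously, with the major axis set from the same maximum distance and the minor axis from the spread of the members transverse to it.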
[3-2: Technical Effects of Information Processing Device 3]
The information processing device 3 disclosed herein generates a fragmented image corresponding to a cluster based on an ellipse whose major axis is determined based on the distance between the representative point of the cluster and each of the feature points belonging to the cluster corresponding to the representative point, and can therefore generate a fragmented image according to the distribution shape of each of the feature points belonging to the cluster.
[4: Fourth embodiment]

A fourth embodiment of an information processing device, an information processing method, and a recording medium will be described. Hereinafter, the fourth embodiment will be described using an information processing device 4 according to this disclosure. In the fourth embodiment, the operation of the generation unit 413 differs from those of the second and third embodiments.
[4-1: Information Processing Operation Performed by Information Processing Device 4]

The generation unit 413 generates a polygonal fragmented image. The polygon may be a convex polygon. The generation unit 413 may determine the polygonal shape according to the distribution of the feature points belonging to the cluster. The generation unit 413 may determine the polygonal shape based on the shape of the distribution of the feature points belonging to the cluster. The generation unit 413 may determine the polygonal shape based on the distance between the representative point of the cluster and each of the feature points belonging to the cluster.
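One possible (not the only, and purely illustrative) way to realize a convex-polygonal region is the convex hull of the cluster's feature points, which by construction encloses every feature point. The sketch below uses the standard monotone-chain algorithm; the disclosure itself does not prescribe this method.

```python
def convex_hull(points):
    """Convex hull (Andrew's monotone chain) of 2-D feature points.

    Returns hull vertices in counter-clockwise order. The hull is one
    candidate for the convex-polygonal fragment region, since it encloses
    every feature point belonging to the cluster.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Concatenate the two chains, dropping the duplicated endpoints.
    return lower[:-1] + upper[:-1]
```

A padded or simplified version of this hull (for example, dilated by a fixed margin) could then be used as the crop boundary of the fragmented image.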
[4-2: Technical Effects of Information Processing Device 4]

The information processing device 4 according to this disclosure generates polygonal fragmented images, and therefore can generate fragmented images according to the distribution shape of each of the feature points belonging to a cluster.
[5: Fifth embodiment]

A fifth embodiment of an information processing device, an information processing method, and a recording medium will be described below using an information processing device 5 according to this disclosure. The fifth embodiment differs from the second through fourth embodiments in that a determination unit 517 is realized in the calculation device 21.
[5-1: Information Processing Operation Performed by Information Processing Device 5]

The determination unit 517 determines whether the fragmented image should be elliptical or polygonal in shape, depending on the distribution of the feature points belonging to the cluster. For each of the multiple clusters, the determination unit 517 determines whether the fragmented image corresponding to that cluster should be elliptical or polygonal.

The determination unit 517 may determine whether the fragmented image should be elliptical or polygonal depending on the distribution of the feature points belonging to the cluster. The determination unit 517 may make this determination based on the shape of the distribution of the feature points belonging to the cluster (referred to as the "distribution shape"). The determination unit 517 may determine which of a plurality of predefined shapes the distribution shape corresponds to. The predefined shapes include an ellipse and polygons including at least one of a triangle, a quadrilateral, a pentagon, and a hexagon. The shape among the predefined shapes determined by the determination unit 517 is referred to as the "region shape". Alternatively, the determination unit 517 may determine whether the fragmented image should be elliptical or polygonal based on the distance between the representative point of the cluster and each of the feature points belonging to the cluster.
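A possible decision rule for choosing between an elliptical and a polygonal region is to measure how round the cluster's convex outline is. This is purely illustrative: the circularity metric, the 0.85 threshold, and the function name below are assumptions, not taken from the publication.

```python
import math

def choose_region_shape(hull, threshold=0.85):
    """Hypothetical decision rule: return "ellipse" for clusters whose
    convex outline is nearly round, otherwise "polygon".

    `hull` is a list of polygon vertices in order. Circularity is
    4*pi*A/P^2, which equals 1.0 for a circle and decreases for
    elongated or angular outlines; the threshold is an assumption.
    """
    n = len(hull)
    area = 0.0
    perim = 0.0
    for i in range(n):
        x0, y0 = hull[i]
        x1, y1 = hull[(i + 1) % n]
        area += x0 * y1 - x1 * y0      # shoelace formula
        perim += math.hypot(x1 - x0, y1 - y0)
    area = abs(area) / 2.0
    circularity = 4.0 * math.pi * area / (perim * perim)
    return "ellipse" if circularity >= threshold else "polygon"
```

A compact, roughly round cluster outline then yields an elliptical fragmented image, while a thin or angular one yields a polygonal image.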

Alternatively, the information processing device 5 may accept a region shape specified by a user who has examined the distribution shape.

The generation unit 513 generates a fragmented image of the region shape. The generation unit 513 may deform the region shape so that it contains the region of the distribution shape while the area outside the distribution shape is reduced, and generate a fragmented image of the deformed region shape.

The deformation of the region shape by the generation unit 513 may include enlarging or reducing the region shape, may include bending any side of a polygon inward toward the interior of the region, and may include changing the length of at least one of the major axis and the minor axis of an ellipse.
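As one sketch of the axis-length deformation mentioned above, an ellipse's axes can be scaled uniformly to the smallest size that still covers every feature point, shrinking the area outside the distribution. The function name and the uniform-scaling choice are illustrative assumptions.

```python
import numpy as np

def fit_axes_to_points(points, center, major_dir, a0, b0):
    """Scale an ellipse's semi-axes (a0, b0) so the ellipse just covers
    the given points, keeping its center and orientation fixed.

    Illustrative sketch of the "change the length of the major and/or
    minor axis" deformation; not an implementation from the publication.
    """
    pts = np.asarray(points, float) - np.asarray(center, float)
    u = np.asarray(major_dir, float)
    u = u / np.linalg.norm(u)
    v = np.array([-u[1], u[0]])        # minor-axis direction
    x, y = pts @ u, pts @ v
    # Smallest uniform scale s with (x/(s*a0))^2 + (y/(s*b0))^2 <= 1
    # for every point.
    s = np.sqrt(((x / a0) ** 2 + (y / b0) ** 2).max())
    return s * a0, s * b0
```

Scaling both axes by the same factor keeps the ellipse's aspect ratio; scaling each axis independently would shrink the excess area further, at the cost of a slightly more involved fit.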
[5-2: Technical Effects of Information Processing Device 5]

The information processing device 5 disclosed herein determines whether the shape of a fragmented image should be elliptical or polygonal, and can generate a fragmented image according to the distribution shape of each of the feature points belonging to a cluster.
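The overall pipeline shared by the embodiments — extract feature points from a pattern image, classify them into clusters, generate one fragmented image per cluster, and output the fragments — can be sketched end to end as follows. The k-means classifier, the axis-aligned rectangular patches, and all names are illustrative assumptions; the publication does not fix these choices, and feature-point extraction from an actual fingerprint or palm print image is stubbed out by supplying the points directly.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means, standing in for the classification means."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    centers = pts[rng.choice(len(pts), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((pts[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return labels, centers

def fragment(image, points, labels, centers, pad=4):
    """Cut one axis-aligned patch per cluster: a crude fragmented image
    that contains every feature point of its cluster, plus a margin."""
    points = np.asarray(points, float)
    patches = []
    for j in range(len(centers)):
        cluster = points[labels == j]
        x0, y0 = np.maximum(cluster.min(axis=0) - pad, 0).astype(int)
        x1, y1 = (cluster.max(axis=0) + pad).astype(int) + 1
        patches.append(image[y0:y1, x0:x1])
    return patches
```

Each patch corresponds to one cluster and contains that cluster's feature points, matching the structure of the extraction, classification, generation, and output means described in the supplementary notes.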
[6: Notes]

The following supplementary notes are further disclosed with respect to the embodiments described above.
[Appendix 1]
An information processing device comprising:
an extraction means for extracting a plurality of feature points from a pattern image;
a classification means for classifying the plurality of feature points into a plurality of clusters;
a generating means for generating a fragmented image from the pattern image based on the plurality of clusters; and
an output means for outputting the fragmented image.
[Appendix 2]
The information processing device according to Appendix 1, wherein each of the plurality of clusters includes a predetermined number of feature points from among the plurality of feature points.
[Appendix 3]
The information processing device according to Appendix 2, further comprising a reception means for receiving a designation of the predetermined number.
[Appendix 4]
The information processing device according to Appendix 1, wherein the generating means generates a plurality of the fragmented images corresponding to the plurality of clusters, and each fragmented image includes the feature points belonging to the corresponding cluster.
[Appendix 5]
The information processing device according to Appendix 1, wherein the generating means generates, for each of the plurality of clusters, the fragmented image corresponding to the cluster in accordance with a distribution of each of the feature points belonging to the cluster.
[Appendix 6]
The information processing device according to Appendix 1, wherein the generating means generates a plurality of the fragmented images corresponding to the plurality of clusters based on a distance between a representative point of each of the plurality of clusters and each of the feature points belonging to that cluster.
[Appendix 7]
The information processing device according to Appendix 6, wherein the generating means generates the fragmented image corresponding to the cluster based on an ellipse whose major axis is determined based on a distance between the representative point and each of the feature points belonging to the cluster corresponding to the representative point.
[Appendix 8]
The information processing device according to Appendix 1, wherein the generating means determines, for each of the plurality of clusters, a shape of the fragmented image corresponding to the cluster based on a distribution shape of the feature points belonging to the cluster.
[Appendix 9]
The information processing device according to Appendix 8, wherein the generating means generates the fragmented image having a polygonal shape.
[Appendix 10]
The information processing device according to Appendix 9, wherein the polygonal shape is a convex polygonal shape.
[Appendix 11]
The information processing device according to Appendix 5, further comprising a determination means for determining whether the shape of the fragmented image is to be an ellipse or a polygon.
[Appendix 12]
The information processing device according to Appendix 1, further comprising a reception means for receiving a designation of a desired number of the fragmented images.
[Appendix 13]
The information processing device according to Appendix 1, wherein the pattern image includes at least one of a fingerprint image and a palm print image.
[Appendix 14]
An information processing method comprising:
extracting a plurality of feature points from a pattern image;
classifying the plurality of feature points into a plurality of clusters;
generating a fragmented image from the pattern image based on the plurality of clusters; and
outputting the fragmented image.
[Appendix 15]
A recording medium on which is recorded a computer program that causes a computer to execute an information processing method comprising:
extracting a plurality of feature points from a pattern image;
classifying the plurality of feature points into a plurality of clusters;
generating a fragmented image from the pattern image based on the plurality of clusters; and
outputting the fragmented image.

Although this disclosure has been described above with reference to the embodiments, it is not limited to the above-mentioned embodiments. Various modifications that those skilled in the art can understand may be made to the configuration and details of this disclosure within its scope. Furthermore, each embodiment can be combined with other embodiments as appropriate.

1, 2, 3, 4, 5 Information processing device
11, 211 Extraction unit
12, 212 Classification unit
13, 213, 313, 413, 513 Generation unit
14, 214 Output unit
215 Acquisition unit
216 Reception unit
517 Determination unit

Claims (15)

An information processing device comprising:
an extraction means for extracting a plurality of feature points from a pattern image;
a classification means for classifying the plurality of feature points into a plurality of clusters;
a generating means for generating a fragmented image from the pattern image based on the plurality of clusters; and
an output means for outputting the fragmented image.
The information processing apparatus according to claim 1 , wherein each of a plurality of clusters includes a predetermined number of the plurality of feature points.
The information processing apparatus according to claim 2 , further comprising a reception unit for receiving a designation of the predetermined number.
The information processing apparatus according to claim 1, wherein the generating means generates a plurality of the fragmented images corresponding to the plurality of clusters, and each fragmented image includes feature points that belong to the corresponding cluster.
The information processing apparatus according to claim 1 , wherein the generating means generates, for each of the plurality of clusters, the fragmented image corresponding to the cluster in accordance with a distribution of each of the feature points belonging to the cluster.
The information processing apparatus according to claim 1 , wherein the generating means generates a plurality of the fragmented images corresponding to each of the plurality of clusters based on a distance between a representative point of each of the plurality of clusters and each of feature points belonging to each of the plurality of clusters.
The information processing device according to claim 6 , wherein the generating means generates the fragmented image corresponding to the cluster based on an ellipse whose major axis is determined based on a distance between the representative point and each of the feature points belonging to the cluster corresponding to the representative point.
The information processing apparatus according to claim 1 , wherein the generating means determines, for each of the plurality of clusters, a shape of the fragmented image corresponding to the cluster based on a distribution shape of each of the feature points belonging to the cluster.
The information processing apparatus according to claim 8 , wherein the generating means generates the fragmented image having a polygonal shape.
The information processing device according to claim 9 , wherein the polygonal shape is a convex polygonal shape.
The information processing apparatus according to claim 5 , further comprising a determination unit that determines whether the shape of the fragmented image is to be an ellipse or a polygon.
The information processing apparatus according to claim 1 , further comprising a reception unit that receives a designation of a desired number of the fragmented images.
The information processing device according to claim 1 , wherein the pattern image includes at least one of a fingerprint image and a palm print image.
An information processing method comprising:
extracting a plurality of feature points from a pattern image;
classifying the plurality of feature points into a plurality of clusters;
generating a fragmented image from the pattern image based on the plurality of clusters; and
outputting the fragmented image.
A recording medium on which is recorded a computer program that causes a computer to execute an information processing method comprising:
extracting a plurality of feature points from a pattern image;
classifying the plurality of feature points into a plurality of clusters;
generating a fragmented image from the pattern image based on the plurality of clusters; and
outputting the fragmented image.
PCT/JP2023/043341 2023-12-04 2023-12-04 Information processing device, information processing method, and recording medium Pending WO2025120713A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2023/043341 WO2025120713A1 (en) 2023-12-04 2023-12-04 Information processing device, information processing method, and recording medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2023/043341 WO2025120713A1 (en) 2023-12-04 2023-12-04 Information processing device, information processing method, and recording medium

Publications (1)

Publication Number Publication Date
WO2025120713A1 true WO2025120713A1 (en) 2025-06-12

Family

ID=95979655

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/043341 Pending WO2025120713A1 (en) 2023-12-04 2023-12-04 Information processing device, information processing method, and recording medium

Country Status (1)

Country Link
WO (1) WO2025120713A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004265038A (en) * 2003-02-28 2004-09-24 Nec Corp Fingerprint collation device and method therefor
US20140233812A1 (en) * 2011-03-02 2014-08-21 Precise Biometrics Ab Method of matching, biometric matching apparatus, and computer program
WO2018074601A1 (en) * 2016-10-21 2018-04-26 日本電気株式会社 Synthesizing device, synthesizing method, and program
CN108875907A (en) * 2018-04-23 2018-11-23 北方工业大学 A kind of fingerprint identification method and device based on deep learning



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23960728

Country of ref document: EP

Kind code of ref document: A1