Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
It should be noted that all actions for acquiring signals, information or data in the present application are performed in compliance with the corresponding data protection legislation policy of the country of location and obtaining the authorization granted by the owner of the corresponding device.
In recent years, biometric technology has been widely applied to various terminal devices and electronic apparatuses. Biometric techniques include, but are not limited to, fingerprint recognition, palm print recognition, vein recognition, iris recognition, face recognition, living body recognition, anti-counterfeit recognition, and the like. Among them, fingerprint recognition generally includes optical fingerprint recognition, capacitive fingerprint recognition, and ultrasonic fingerprint recognition. With the rise of full-screen technology, the fingerprint identification module can be arranged in a local area or the whole area below the display screen so as to form under-screen (Under-display) optical fingerprint identification, or part or all of the optical fingerprint identification module can be integrated into the display screen of the electronic equipment so as to form in-screen (In-display) optical fingerprint identification. The display screen may be an Organic Light-Emitting Diode (OLED) display screen, a Liquid Crystal Display (LCD) screen, or the like. The fingerprint identification method generally comprises steps such as image acquisition, preprocessing, feature extraction, and feature matching. Some or all of the above steps may be implemented by conventional Computer Vision (CV) algorithms, or by deep learning algorithms based on Artificial Intelligence (AI). Fingerprint identification technology can be applied to portable or mobile terminals such as smart phones, tablet computers, and game devices, and to other electronic devices such as intelligent door locks, automobiles, and bank automatic teller machines, and is used for fingerprint unlocking, fingerprint payment, fingerprint attendance checking, identity authentication, and the like.
In the field of biometric identification, it is often necessary to segment an image to cut out the biometric region in the image in order to improve the accuracy of subsequent biometric identification. In the related art, a biological feature extraction model or an image segmentation model can be trained on manually labeled training data. This approach requires a large amount of data to be annotated and is therefore labor intensive. Moreover, because labeling errors exist, the extracted biological features express texture details poorly, so the accuracy of biological feature extraction is low. The present application provides a biological feature extraction model training method, which is beneficial to reducing labor cost and improving the accuracy of biological features.
Referring to FIG. 1, a flow 100 of one embodiment of a biological feature extraction model training method in accordance with the present application is shown. The biological feature extraction model training method can be applied to various electronic devices, which may include, but are not limited to, servers, desktop computers, laptop computers, and the like. The biological feature extraction model training method comprises the following steps:
Step 101, performing data enhancement processing of fixed pixel positions on each first image in the image set containing the target biological feature, and obtaining a second image corresponding to each first image.
In this embodiment, the execution subject of the biometric extraction model training method may acquire in advance an image set containing the target biometric feature. The target biometric feature may be any predetermined biometric feature, such as a fingerprint feature, a palm print feature, an iris feature, a lip print feature, etc. Accordingly, the image set containing the target biometric feature may be a fingerprint image set, a palm print image set, an iris image set, or a lip print image set. The image set described above may be acquired in a variety of ways. For example, an existing image set may be acquired, through a wired or wireless connection, from another server (e.g., a database server) used to store training data. For another example, images may be collected by a terminal device, and the collected images may be aggregated into an image set. The image collection environment (such as the image collection module, the collection temperature, etc.) and the finger state during image collection (such as finger temperature, finger humidity, pressing force, etc.) can be diversified, so as to ensure differences among the images and improve the generalization of the trained biological feature extraction model.
In this embodiment, the execution subject may use each image in the image set as a first image, and perform data enhancement processing that keeps pixel positions fixed on each first image, so as to obtain a second image corresponding to each first image. Here, data enhancement refers to a technique that produces similar but different data by making a series of random changes to the data. The size of the data set can be enlarged by data enhancement processing. Data enhancement processing may include processing that keeps pixel positions fixed and processing that transforms pixel positions.
In some alternative implementations of the present embodiment, the data enhancement processing that keeps pixel positions fixed may include, but is not limited to, at least one of Gaussian blur processing, random noise processing, brightness conversion, contrast conversion, hue conversion, and saturation conversion. For each first image, the execution subject may perform at least one of Gaussian blur processing, random noise processing, brightness conversion, contrast conversion, hue conversion, and saturation conversion on the first image to obtain a second image corresponding to the first image.
As an example, fig. 2 shows a feature intensity comparison chart before and after the data enhancement processing that keeps pixel positions fixed is performed on a first image. The original biometric map of the first image is shown at reference numeral 201. After the data enhancement processing that keeps pixel positions fixed is performed on the first image, the biometric map is as shown at reference numeral 202, and its feature intensity is changed compared with the biometric map shown at reference numeral 201.
It should be noted that, the data enhancement processing for fixing the pixel position may include, but is not limited to, the above list, and will not be described in detail here.
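As an illustrative, non-limiting sketch, the data enhancement processing that keeps pixel positions fixed may be implemented in Python along the following lines. The function name, the probabilities, and the parameter ranges are assumptions chosen only for illustration and are not prescribed by the present application; the input is assumed to be a single-channel grayscale image stored as a NumPy array.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixed_position_augment(img, rng):
    """Appearance-only augmentation: pixel positions are left unchanged."""
    out = img.astype(np.float32)
    if rng.random() < 0.5:                               # Gaussian blur processing
        out = gaussian_filter(out, sigma=rng.uniform(0.5, 1.5))
    if rng.random() < 0.5:                               # random noise processing
        out = out + rng.normal(0.0, 5.0, size=out.shape)
    gain = rng.uniform(0.8, 1.2)                         # contrast conversion
    bias = rng.uniform(-20.0, 20.0)                      # brightness conversion
    out = out * gain + bias
    return np.clip(out, 0, 255).astype(np.uint8)

# Example usage: second = fixed_position_augment(first, np.random.default_rng(0))
```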
Step 102, performing data enhancement processing that transforms pixel positions on each second image to obtain a third image corresponding to each first image.
In this embodiment, after obtaining the second images corresponding to the first images, the execution subject may perform data enhancement processing for transforming the pixel positions of the second images to obtain third images corresponding to the second images, that is, obtain third images corresponding to the first images.
In some alternative implementations of the present embodiment, the data enhancement processing that transforms pixel positions may include, but is not limited to, at least one of flipping, rotating, translating, and mirroring. For each first image, the execution subject may perform at least one of flipping, rotating, translating, and mirroring on the second image corresponding to the first image, to obtain a third image corresponding to the first image. It should be noted that the first image and the third image may have the same size. For example, if a rotation operation is performed, the rotated image may be cropped or otherwise processed so that the processed image has the same size as the first image.
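As a further non-limiting sketch under the same assumptions as above, the data enhancement processing that transforms pixel positions may be illustrated as follows; rotation uses reshape=False so the output keeps the size of the input, consistent with the note above, and the wrap-around translation is chosen purely for brevity.

```python
import numpy as np
from scipy.ndimage import rotate

def transform_position_augment(img, rng):
    """Geometric augmentation: pixel positions are moved; output size equals input size."""
    out = img
    if rng.random() < 0.5:
        out = np.fliplr(out)                             # horizontal mirroring
    if rng.random() < 0.5:
        out = np.flipud(out)                             # vertical flipping
    angle = rng.uniform(-30.0, 30.0)                     # rotation, kept at original size
    out = rotate(out, angle, reshape=False, mode='nearest')
    shift = rng.integers(-8, 9, size=2)                  # small translation
    out = np.roll(out, tuple(shift), axis=(0, 1))
    return out
```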
Step 103, combining each first image and the corresponding second image into a positive sample pair, combining each first image and the corresponding third image into a negative sample pair, and training the neural network based on the obtained positive sample pair and negative sample pair to obtain a biological feature extraction model.
In this embodiment, the second image is obtained by performing data enhancement processing that keeps pixel positions fixed on the first image, so the positions of the biometric lines (such as fingerprint lines, palm lines, lip lines, etc.) in the first image and the corresponding second image are the same. Thus, each first image and its corresponding second image can be combined into a positive sample pair. The third image is obtained by performing data enhancement processing that transforms pixel positions, so the positions of the biometric lines (such as fingerprint lines, palm lines, lip lines, etc.) in the first image and the corresponding third image are different. Thus, each first image and its corresponding third image can be combined into a negative sample pair. Further, the neural network may be trained based on the obtained positive and negative sample pairs to obtain a biological feature extraction model.
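For illustration only, and reusing the augmentation sketches above as assumptions (rng is a NumPy random Generator), the positive and negative sample pairs of this step could be assembled roughly as follows.

```python
def build_sample_pairs(first_images, rng):
    """Build (first, second) positive pairs and (first, third) negative pairs."""
    positive_pairs, negative_pairs = [], []
    for first in first_images:
        second = fixed_position_augment(first, rng)        # same line positions -> positive pair
        third = transform_position_augment(second, rng)    # line positions moved -> negative pair
        positive_pairs.append((first, second))
        negative_pairs.append((first, third))
    return positive_pairs, negative_pairs
```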
Here, the training goal of the model may be to pull the feature distance of the positive sample pair closer while expanding the feature distance of the negative sample pair. Taking the training of a fingerprint feature extraction model as an example, the training target can make the biological features extracted by the model from images with the same fingerprint lines as similar as possible, so that the same features are extracted as much as possible for the same finger under different acquisition environments and finger states, improving the accuracy of subsequent fingerprint identification; at the same time, it can make the biological features extracted by the model from images with different fingerprint lines as different as possible, so as to avoid subsequent fingerprint identification errors caused by the model extracting similar features from different fingers. The similarity of features can be characterized by the Euclidean distance between the features. For the biological feature extraction model obtained after training, the module length of the features extracted from a clear region is longer than the module length of the features extracted from a blurred region, so the region where the biological features in an image to be identified are located can be determined based on the feature module length, thereby facilitating subsequent biological feature identification, image segmentation, and the like.
In this embodiment, the neural network may be a Convolutional Neural Network (CNN) using any of various existing structures (e.g., UNet, etc.). In practice, a convolutional neural network is a feedforward neural network whose artificial neurons respond to surrounding units within part of the coverage area, so it performs excellently in image processing and can be used to extract image features. In this embodiment, the neural network may include, but is not limited to, convolution layers, deconvolution layers, and the like. The convolution layer may be used to extract image features and downsample the feature map (downsample). The deconvolution layer may be used to extract image features and upsample the feature map (upsample). For the neural network used herein, the output feature map may have the same size as the image input to the neural network.
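A minimal sketch of such a network is given below, assuming a PyTorch implementation; the layer widths, the number of output channels, and the use of two stride-2 convolutions followed by two transposed convolutions are illustrative assumptions only, chosen so that the output feature map matches the input size for inputs whose side lengths are divisible by 4.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Convolution layers downsample; deconvolution (transposed convolution) layers
    upsample back to the input resolution, so the output feature map has the same
    spatial size as the input image."""
    def __init__(self, out_channels=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, out_channels, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):                 # x: (N, 1, H, W) -> feature map: (N, C, H, W)
        return self.decoder(self.encoder(x))
```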
In some optional implementations of this embodiment, the execution body may train to obtain the biometric extraction model by using the following sub-steps:
Substep S11, generating a triplet based on the positive and negative sample pairs comprising the same first image. The triplet may include a first image, the second image in the positive sample pair to which the first image belongs, and the third image in the negative sample pair to which the first image belongs. For each first image, one triplet may be constructed, so that multiple triplets may be obtained. Each triplet may be used as a training sample, and multiple samples may constitute a sample set for training the biological feature extraction model. Because the sample set is generated from an unlabeled image set, unsupervised training of the biological feature extraction model is realized, labor cost is reduced, and annotation errors are avoided.
Substep S12, iteratively performing the following model training steps:
First, extracting target triplets from the triplets. The extraction manner and the number of extracted target triplets are not limited in the present application. For example, at least one target triplet may be randomly extracted, or at least one target triplet may be extracted in a specified order.
And secondly, inputting the target triplets into the neural network to obtain a first characteristic diagram, a second characteristic diagram and a third characteristic diagram. Here, the first feature map, the second feature map, and the third feature map may be a biometric map corresponding to the first image, a biometric map corresponding to the second image, and a biometric map corresponding to the third image in the target triplet, respectively.
And thirdly, determining a loss value of the neural network based on the first feature map, the second feature map, the third feature map and a preset triplet loss function. Here, the loss value is a value of a loss function (loss function). The loss function is a non-negative real value function that can be used to characterize the difference between the detected result and the actual result. In general, the smaller the loss value, the better the robustness of the model. The loss function may be set according to actual requirements, where a triplet loss function may be used.
And a fourth step of updating parameters of the neural network based on the loss value. Here, the gradient of the loss value with respect to the neural network parameters may be found using a back propagation algorithm, and the parameters of the neural network may then be updated based on the gradient using a gradient descent algorithm. Specifically, the gradient of the loss value relative to the parameters of each layer of the neural network can be obtained by using the chain rule and a back propagation (Back Propagation, BP) algorithm. In practice, the back propagation algorithm may also be referred to as an error back propagation (Error Back Propagation, BP) algorithm, which is a learning algorithm suitable for multi-layer neural networks. In the back propagation process, the partial derivative of the loss function with respect to the weight of each neuron is obtained layer by layer to form the gradient of the loss function with respect to the weight vector, which serves as the basis for modifying the weights. The gradient descent algorithm is a method commonly used in the machine learning field to solve for model parameters. In solving for the minimum of the loss function, the neuron weights (e.g., parameters of the convolution kernels in the convolution layers, etc.) may be adjusted by the gradient descent algorithm based on the calculated gradients.
Substep S13, stopping iteration in response to a stop-iteration condition being satisfied, and obtaining the biological feature extraction model. Here, each time a target triplet is input, the parameters of the neural network may be updated once based on the loss value of the neural network, until the stop-iteration condition is satisfied. In practice, the stop-iteration condition may be set in various ways as needed. For example, if the number of training iterations of the neural network equals a preset number, it may be determined that training is complete. As another example, training may be determined to be complete when the loss value of the neural network converges. When the neural network training is completed, the trained neural network can be determined as the biological feature extraction model.
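The sub-steps above may be pieced together roughly as in the following sketch, again assuming a PyTorch implementation; the model class from the earlier sketch, the margin value, the learning rate, and the fixed-epoch stop-iteration condition are illustrative assumptions, and triplet_batches is assumed to yield batches of (first, second, third) image tensors built per sub-step S11.

```python
import torch
import torch.nn.functional as F

def train_extraction_model(model, triplet_batches, epochs=10, margin=1.0, lr=1e-3):
    """Iterative model training step with a triplet loss."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):                                  # stop-iteration condition: preset epoch count
        for first, second, third in triplet_batches:
            f1 = model(first)                                # first feature map
            f2 = model(second)                               # second feature map
            f3 = model(third)                                # third feature map
            d_pos = F.pairwise_distance(f1.flatten(1), f2.flatten(1))   # positive pair distance
            d_neg = F.pairwise_distance(f1.flatten(1), f3.flatten(1))   # negative pair distance
            loss = torch.clamp(d_pos - d_neg + margin, min=0).mean()    # triplet loss
            optimizer.zero_grad()
            loss.backward()                                  # back propagation of the loss value
            optimizer.step()                                 # gradient descent update of parameters
    return model
```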
The method provided by the embodiment of the application performs data enhancement processing that keeps pixel positions fixed on each first image in an image set to obtain a second image corresponding to each first image, then performs data enhancement processing that transforms pixel positions on each second image to obtain a third image corresponding to each first image, and finally combines each first image and the corresponding second image into a positive sample pair, combines each first image and the corresponding third image into a negative sample pair, and trains a neural network based on the obtained positive sample pairs and negative sample pairs to obtain a biological feature extraction model, realizing automatic generation of positive and negative sample pairs and unsupervised training of the biological feature extraction model. On the one hand, the biological feature extraction model can be obtained through training without manually labeling training data, which reduces labor cost. On the other hand, because the manual labeling step is omitted, labeling errors are avoided, and the biological features extracted by the biological feature extraction model obtained in this training manner express texture details more strongly, so the accuracy of biological feature extraction is improved.
With further reference to fig. 3, a flow 300 of one embodiment of a biometric feature extraction method is shown. The biometric feature extraction method can be applied to various electronic devices, which may include, but are not limited to, smart phones, tablet computers, laptop computers, car computers, palmtop computers, desktop computers, set top boxes, smart televisions, cameras, wearable devices, and the like.
The flow 300 of the biometric extraction method includes the steps of:
Step 301, acquiring a target image.
In this embodiment, the execution subject of the biometric extraction method may acquire a target image, where the target image may be any image to be biometric extracted, such as a fingerprint image recorded in a fingerprint acquisition area by a user, a palm print recorded in a palm print acquisition area, and so on.
Step 302, inputting the target image into a biological feature extraction model to obtain a biological feature map of the target image.
In this embodiment, the execution subject may input the target image into the biological feature extraction model to obtain a biometric map of the target image. The biological feature extraction model may be trained using the biological feature extraction model training method described in the above embodiments. The specific training process is described in the above embodiments and will not be repeated here.
After the biometric map is extracted, it may be subjected to other processing as needed, such as biometric identification, biometric region segmentation, and the like, which is not particularly limited herein.
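As a simple illustration under the assumptions of the earlier sketches (a trained PyTorch model and a single-channel grayscale NumPy image), extracting the biometric map of a target image may look as follows.

```python
import numpy as np
import torch

def extract_biometric_map(model, image):
    """Run one grayscale image through the trained biological feature extraction model."""
    x = torch.from_numpy(image.astype(np.float32)).unsqueeze(0).unsqueeze(0)  # (1, 1, H, W)
    with torch.no_grad():
        feature_map = model(x)
    return feature_map.squeeze(0).numpy()                                     # (C, H, W)
```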
The biological feature extraction method of the embodiment can be used for extracting biological features in the image, and the extracted biological features have stronger expression capability on texture details, so that the accuracy of biological feature extraction can be improved.
With further reference to fig. 4, a flow 400 of one embodiment of an image segmentation method is shown. The image segmentation method is applicable to various electronic devices, which may include, but are not limited to, smart phones, tablet computers, laptop computers, car computers, palmtop computers, desktop computers, set top boxes, smart televisions, cameras, wearable devices, and the like.
The image segmentation method flow 400 includes the steps of:
Step 401, acquiring a target image.
In this embodiment, the execution subject of the image segmentation method may acquire a target image, where the target image may be any image to be subjected to biometric feature extraction, such as a fingerprint image recorded in a fingerprint acquisition area by a user, a palm print recorded in a palm print acquisition area, and the like.
Step 402, inputting the target image into a biological feature extraction model to obtain a biological feature map of the target image.
In this embodiment, the execution subject may input the target image into the biological feature extraction model to obtain a biometric map of the target image. The biological feature extraction model may be trained using the biological feature extraction model training method described in the above embodiments. The specific training process is described in the above embodiments and will not be repeated here.
Step 403, determining the feature module length of each pixel point in the target image based on the biometric map.
In this embodiment, the execution subject may determine the feature module length of each pixel point in the target image based on the biometric map. For each pixel point, the feature module length of the pixel point is the feature module length of the corresponding feature point in the biometric map, namely the module length of the vector formed by the feature values of that feature point across the channels of the biometric map. It can be calculated as the square root of the sum of squares of these feature values.
As an example, the image and the feature map may have the same size. In this case, for each feature point in the biometric map, the execution subject may first determine the feature module length corresponding to the feature point based on the feature values of the feature point in the channels of the biometric map. That is, the square root of the sum of squares of the feature values of the feature point across the channels is calculated to obtain the feature module length. Then, this feature module length is determined as the feature module length of the pixel point corresponding to the feature point in the image. It should be noted that the feature map may include one or more channels, and the number of channels may be determined by the number of convolution kernels of the last convolution layer; the specific number of channels is not limited here.
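A short sketch of this computation, assuming the biometric map is a NumPy array of shape (C, H, W) with the same spatial size as the target image:

```python
import numpy as np

def feature_module_length(biometric_map):
    """Per-pixel feature module length: square root of the sum of squares of the
    feature values across the channels of the biometric map."""
    return np.sqrt(np.sum(biometric_map.astype(np.float32) ** 2, axis=0))  # shape (H, W)
```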
Step 404, determining a biometric region in the target image based on the feature module lengths.
In this embodiment, for the biological feature extraction model, the module length of features extracted from a clear region (e.g., a fingerprint region, palm print region, lip print region, or iris region, etc.) is longer than the module length of features extracted from a blurred region (e.g., a background region). After obtaining the feature module length of each pixel point in the target image, the execution subject can obtain a module length map that uses the feature module lengths as pixel values. Based on the module length map, the execution subject can determine a biometric region (such as a fingerprint region, palm print region, lip print region, or iris region, etc.) in the target image. As an example, a region constituted by pixel points whose module length values are greater than a preset threshold value may be determined as the biometric region. As yet another example, the module length map may first be subjected to image processing (e.g., truncation, normalization, correction, etc.) to obtain a processed module length map. Then, the region constituted by the pixel points whose module length values in the processed module length map are greater than the preset threshold value is determined as the biometric region. Thereby, effective segmentation of the image is achieved.
In some optional implementations of this embodiment, the executing body may determine the biometric area in the target image by:
In the first step, feature module lengths greater than a first threshold are adjusted to the first threshold, so as to prevent outliers from affecting subsequent processing.
In the second step, the feature module length of each pixel point is normalized. For example, the feature module length of each pixel point after the first step can be divided by the first threshold, so that the feature module lengths of the pixel points are normalized and the data becomes easier to process.
In the third step, the normalized feature module length of each pixel point is corrected, and the region formed by the pixel points whose corrected feature module lengths are greater than a second threshold is determined as the biometric region. Here, an image correction method such as gamma correction can be used to adjust the pixel value distribution, facilitating threshold control. After correction, the region constituted by the pixel points greater than the second threshold may be determined as the biometric region. Thereby, effective segmentation of the image is achieved. Based on the determined biometric region, convenience can be provided for subsequent operations such as biometric identification.
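A sketch of these three steps is given below; the first threshold, the gamma value, and the second threshold are placeholder assumptions chosen only to make the example concrete, not values taken from the present application.

```python
import numpy as np

def segment_biometric_region(module_length_map, first_threshold=4.0,
                             gamma=0.5, second_threshold=0.3):
    """Clip, normalize, and gamma-correct the module length map, then threshold it."""
    m = np.minimum(module_length_map, first_threshold)   # step 1: clip outliers to the first threshold
    m = m / first_threshold                              # step 2: normalize to [0, 1]
    m = m ** gamma                                       # step 3: gamma correction of the distribution
    return m > second_threshold                          # boolean mask of the biometric region
```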
As an example, fig. 5 is a schematic view of the processing effect of the image segmentation method. The original image is a fingerprint image, as indicated by reference numeral 501. An image processed in accordance with the alternative implementation described above is shown at 502, in which the biometric area, i.e., the fingerprint area, is effectively segmented.
The image segmentation method of the embodiment can be used for extracting the biological characteristics in the image, and the extracted biological characteristics have stronger expression capability on texture details, so that the accuracy of extracting the biological characteristics can be improved, and the accuracy of image segmentation is further improved.
With further reference to fig. 6, as an implementation of the method shown in the foregoing figures, the present application provides an embodiment of a biometric extraction model training apparatus, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 1, and the apparatus is particularly applicable to various electronic devices.
As shown in fig. 6, the biological feature extraction model training device 600 of the present embodiment includes: a first processing unit 601 configured to perform data enhancement processing that keeps pixel positions fixed on each first image in an image set containing a target biological feature, to obtain a second image corresponding to each first image; a second processing unit 602 configured to perform data enhancement processing that transforms pixel positions on each second image, to obtain a third image corresponding to each first image; and a training unit 603 configured to combine each first image and the corresponding second image into a positive sample pair, combine each first image and the corresponding third image into a negative sample pair, and train a neural network based on the obtained positive sample pairs and negative sample pairs to obtain a biological feature extraction model.
In some optional implementations of this embodiment, the training unit 603 is further configured to: generate a triplet based on the positive sample pair and the negative sample pair comprising the same first image; iteratively perform a model training step of extracting a target triplet from the obtained triplets, inputting the target triplet to the neural network to obtain a first feature map, a second feature map, and a third feature map, determining a loss value of the neural network based on the first feature map, the second feature map, the third feature map, and a preset triplet loss function, and updating parameters of the neural network based on the loss value; and stop iteration in response to a stop-iteration condition being satisfied, to obtain the biological feature extraction model.
In some optional implementations of this embodiment, the second processing unit 602 is further configured to, for each first image, perform at least one of flipping, rotating, translating, and mirroring on a second image corresponding to the first image, to obtain a third image corresponding to the first image.
In some optional implementations of this embodiment, the first processing unit 601 is further configured to, for each first image in the image set containing the target biometric feature, perform at least one of Gaussian blur processing, random noise processing, brightness transformation, contrast transformation, hue transformation, and saturation transformation on the first image, to obtain a second image corresponding to the first image.
The device provided by the embodiment of the application obtains the second image corresponding to each first image by carrying out data enhancement processing on the fixed pixel positions of each first image in the image set, then carries out data enhancement processing on the positions of the converted pixels of each second image to obtain the third image corresponding to each first image, finally combines each first image and the corresponding second image into a positive sample pair, combines each first image and the corresponding third image into a negative sample pair, trains the neural network based on the obtained positive sample pair and the obtained negative sample pair, obtains a biological feature extraction model, and realizes automatic generation of the positive and negative sample pairs and unsupervised training of the biological feature extraction model. On the one hand, the biological feature extraction model can be obtained through training without manually marking training data, and the labor cost is reduced. On the other hand, because the manual labeling link is omitted, labeling errors are avoided, the expression capability of the biological features extracted by the biological feature extraction model obtained in the training mode on texture details is stronger, and therefore the accuracy of biological feature extraction is improved.
With further reference to fig. 7, as an implementation of the method shown in the above figures, the present application provides an embodiment of a biometric feature extraction device, which corresponds to the method embodiment shown in fig. 3, and which is particularly applicable to various electronic apparatuses.
As shown in fig. 7, the biometric extraction apparatus 700 of the present embodiment includes an acquisition unit 701 for acquiring a target image, and a feature extraction unit 702 for inputting the target image into a biometric extraction model to obtain a biometric map of the target image.
In this embodiment, the biometric extraction model may be trained using the biometric extraction model training method described in the above embodiment. The specific generation process may be described in the above embodiments, and will not be described herein.
The device provided by the embodiment of the application can be used for extracting the biological characteristics in the image, and the extracted biological characteristics have stronger expression capability on texture details, so that the accuracy of extracting the biological characteristics can be improved.
With further reference to fig. 8, as an implementation of the method shown in the foregoing figures, the present application provides an embodiment of an image segmentation apparatus, which corresponds to the method embodiment shown in fig. 4, and which is particularly applicable to various electronic devices.
As shown in fig. 8, the image segmentation apparatus 800 of the present embodiment includes: an acquisition unit 801 for acquiring a target image; a feature extraction unit 802 for inputting the target image into a biological feature extraction model to obtain a biometric map of the target image; a first determination unit 803 for determining a feature module length of each pixel point in the target image based on the biometric map; and a second determination unit 804 for determining a biometric region in the target image based on the feature module lengths.
In some optional implementations of this embodiment, the second determination unit is further configured to: adjust feature module lengths greater than a first threshold to the first threshold; normalize the feature module length of each pixel point; correct the normalized feature module length of each pixel point; and determine the region formed by the pixel points whose corrected feature module lengths are greater than a second threshold as the biometric region.
In some optional implementations of this embodiment, the image and the feature map have the same size, and the first determination unit is further configured to, for each feature point in the biometric map, determine the feature module length corresponding to the feature point based on the feature values of the feature point in the channels of the biometric map, and determine that feature module length as the feature module length of the pixel point corresponding to the feature point in the target image.
The device provided by the embodiment of the application can be used for extracting the biological characteristics in the image, and the extracted biological characteristics have stronger expression capability on texture details, so that the accuracy of extracting the biological characteristics can be improved, and the accuracy of image segmentation is further improved.
The embodiment of the application also provides electronic equipment, which comprises one or more processors and a storage device, wherein one or more programs are stored on the storage device, and when the one or more programs are executed by the one or more processors, the one or more processors are enabled to realize the method for training the biological feature extraction model.
Referring now to fig. 9, a schematic diagram of an electronic device for implementing some embodiments of the present application is shown. The electronic device shown in fig. 9 is only an example and should not impose any limitation on the functionality and scope of use of embodiments of the present application.
As shown in fig. 9, the electronic device 900 may include a processing means (e.g., a central processor, a graphics processor, etc.) 901, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage means 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data necessary for the operation of the electronic device 900 are also stored. The processing device 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
In general, the following devices may be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 907 including, for example, a Liquid Crystal Display (LCD), speaker, vibrator, etc.; storage devices 908 including, for example, a magnetic disk, hard disk, etc.; and communication devices 909. The communication devices 909 may allow the electronic device 900 to communicate wirelessly or by wire with other devices to exchange data. While fig. 9 shows an electronic device 900 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 9 may represent one device or a plurality of devices as needed.
The embodiment of the application also provides a computer program product, which comprises a computer program, wherein the computer program realizes the biological feature extraction model training method when being executed by a processor.
In particular, according to some embodiments of the application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communication device 909, or installed from storage device 908, or installed from ROM 902. The above-described functions defined in the methods of some embodiments of the present application are performed when the computer program is executed by the processing device 901.
The embodiment of the application also provides a computer readable medium, on which a computer program is stored, which when being executed by a processor, implements the above-mentioned method for training a biometric extraction model.
It should be noted that, the computer readable medium according to some embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the application, however, the computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to electrical wiring, fiber optic cable, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be included in the electronic device or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods of the above-described embodiments.
Computer program code for carrying out operations of some embodiments of the present application may be written in one or more programming languages or any combination thereof, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (e.g., connected via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the application may be implemented in software or in hardware. The described units may also be provided in a processor, for example as a processor comprising a first determination unit, a second determination unit, a selection unit and a third determination unit. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The above description is merely illustrative of some preferred embodiments of the present application and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the present application is not limited to technical solutions formed by the specific combination of the above technical features, and also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the application, for example, technical solutions formed by mutually replacing the above features with technical features having similar functions disclosed in (but not limited to) the embodiments of the present application.