
CN114882308B - Biological feature extraction model training method and image segmentation method - Google Patents

Biological feature extraction model training method and image segmentation method

Info

Publication number
CN114882308B
CN114882308B · CN202210378236.3A
Authority
CN
China
Prior art keywords
image
biometric
feature
target
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210378236.3A
Other languages
Chinese (zh)
Other versions
CN114882308A (en)
Inventor
周世豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TIANJIN JIHAO TECHNOLOGY CO LTD
Original Assignee
TIANJIN JIHAO TECHNOLOGY CO LTD
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TIANJIN JIHAO TECHNOLOGY CO LTD, Beijing Kuangshi Technology Co Ltd, Beijing Megvii Technology Co Ltd filed Critical TIANJIN JIHAO TECHNOLOGY CO LTD
Priority to CN202210378236.3A priority Critical patent/CN114882308B/en
Publication of CN114882308A publication Critical patent/CN114882308A/en
Application granted granted Critical
Publication of CN114882308B publication Critical patent/CN114882308B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the application disclose a biological feature extraction model training method and an image segmentation method. The method comprises: performing data enhancement processing of fixed pixel positions on each first image in an image set containing a target biological feature to obtain a second image corresponding to each first image; performing data enhancement processing of transforming pixel positions on each second image to obtain a third image corresponding to each first image; combining each first image with its corresponding second image into a positive sample pair and each first image with its corresponding third image into a negative sample pair; and training a neural network based on the obtained positive and negative sample pairs to obtain a biological feature extraction model. This embodiment reduces the labor cost of training the biological feature extraction model and improves the accuracy of biological feature extraction.

Description

Biological feature extraction model training method and image segmentation method
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a biological feature extraction model training method and an image segmentation method.
Background
With the development of computer technology, image segmentation technology has been widely applied to the field of computer vision. In image recognition scenarios, it is often necessary to perform image segmentation on an image to crop out a biometric region in the image in order to improve accuracy of subsequent biometric recognition.
In the prior art, a biological feature extraction model or an image segmentation model can be trained on manually annotated training data. This approach requires a large amount of data to be annotated and is therefore labor intensive. Moreover, because of annotation errors, the extracted biological features express texture details poorly, so the accuracy of biological feature extraction is low.
Disclosure of Invention
The embodiments of the application provide a biological feature extraction model training method and an image segmentation method, to address the problems in the prior art of high labor cost in training a biological feature extraction model and low accuracy of biological feature extraction.
In a first aspect, an embodiment of the present application provides a method for training a biological feature extraction model. The method includes: performing data enhancement processing of fixed pixel positions on each first image in an image set containing a target biological feature to obtain a second image corresponding to each first image; performing data enhancement processing of transforming pixel positions on each second image to obtain a third image corresponding to each first image; combining each first image with its corresponding second image into a positive sample pair and each first image with its corresponding third image into a negative sample pair; and training a neural network based on the obtained positive and negative sample pairs to obtain a biological feature extraction model.
In a second aspect, an embodiment of the present application provides a method for extracting a biological feature, where the method includes obtaining a target image, and inputting the target image to a biological feature extraction model trained by the method described in the first aspect, to obtain a biological feature map of the target image.
In a third aspect, an embodiment of the present application provides an image segmentation method. The method includes: obtaining a target image; inputting the target image into a biometric extraction model trained by the method described in the first aspect to obtain a biometric map of the target image; determining a feature module length of each pixel point in the target image based on the biometric map; and determining a biometric region in the target image based on the feature module lengths.
In a fourth aspect, an embodiment of the application provides an electronic device comprising one or more processors, storage means having stored thereon one or more programs which, when executed by the one or more processors, cause the one or more processors to implement a method as described in the first, second or third aspects.
In a fifth aspect, an embodiment of the present application provides a computer readable medium having stored thereon a computer program which when executed by a processor implements a method as described in the first, second or third aspects.
In a sixth aspect, embodiments of the present application provide a computer program product comprising a computer program which, when executed by a processor, implements the method described in the first, second or third aspects.
In the biological feature extraction model training method and the image segmentation method provided by the embodiments of the application, data enhancement processing of fixed pixel positions is performed on each first image in an image set containing a target biological feature to obtain a second image corresponding to each first image; data enhancement processing of transforming pixel positions is then performed on each second image to obtain a third image corresponding to each first image; finally, each first image is combined with its corresponding second image into a positive sample pair and with its corresponding third image into a negative sample pair, and a neural network is trained based on the obtained positive and negative sample pairs to obtain a biological feature extraction model. Positive and negative sample pairs are thus generated automatically, and the biological feature extraction model is trained without supervision. On the one hand, the biological feature extraction model can be trained without manually annotating training data, which reduces labor cost. On the other hand, because the manual annotation step is omitted, annotation errors are avoided, and the biological features extracted by a model trained in this way express texture details better, which improves the accuracy of biological feature extraction.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
FIG. 1 is a flow chart of one embodiment of a biometric extraction model training method of the present application;
FIG. 2 is a graph showing the contrast of the feature intensities before and after the data enhancement processing for fixing the pixel positions of the first image in the training method of the biological feature extraction model according to the present application;
FIG. 3 is a flow chart of one embodiment of a biometric extraction method of the present application;
FIG. 4 is a flow chart of one embodiment of an image segmentation method of the present application;
FIG. 5 is a schematic view showing the processing effect of the image segmentation method of the present application;
FIG. 6 is a schematic diagram of the architecture of one embodiment of a biometric extraction model training apparatus of the present application;
FIG. 7 is a schematic diagram of the structure of one embodiment of the biometric extraction device of the present application;
FIG. 8 is a schematic view of the structure of an embodiment of the image segmentation apparatus of the present application;
fig. 9 is a schematic diagram of a computer system for implementing an electronic device according to an embodiment of the present application.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
It should be noted that all actions for acquiring signals, information or data in the present application are performed in compliance with the corresponding data protection legislation policy of the country of location and obtaining the authorization granted by the owner of the corresponding device.
In recent years, biometric technology has been widely applied to various terminal devices or electronic apparatuses. Biometric techniques include, but are not limited to, fingerprint recognition, palm print recognition, vein recognition, iris recognition, face recognition, living body recognition, anti-counterfeit recognition, and the like. Among them, fingerprint recognition generally includes optical fingerprint recognition, capacitive fingerprint recognition, and ultrasonic fingerprint recognition. With the rise of full-screen technology, the fingerprint identification module can be arranged in a local area or the whole area below the display screen to form under-screen (under-display) optical fingerprint identification, or part or all of the optical fingerprint identification module can be integrated into the display screen of the electronic equipment to form in-screen (in-display) optical fingerprint identification. The display screen may be an Organic Light-Emitting Diode (OLED) display screen, a Liquid Crystal Display (LCD), or the like. The fingerprint identification method generally comprises the steps of image acquisition, preprocessing, feature extraction, feature matching, and the like. Some or all of the above steps may be implemented by conventional Computer Vision (CV) algorithms, or by deep learning algorithms based on Artificial Intelligence (AI). The fingerprint identification technology can be applied to portable or mobile terminals such as smart phones, tablet computers and game devices, and to other electronic devices such as intelligent door locks, automobiles and bank automatic teller machines, for fingerprint unlocking, fingerprint payment, fingerprint attendance checking, identity authentication, and the like.
In the field of biometric identification, it is often necessary to segment an image to cut out a biometric region in the image in order to improve the accuracy of subsequent biometric identification. In the related art, the biological feature extraction model or the image segmentation model can be trained by manually labeling training data. This approach requires a large amount of data to be annotated, and is therefore labor intensive. Meanwhile, due to the existence of labeling errors, the expression capability of the biological characteristics on texture details is weak, so that the accuracy of biological characteristic extraction is low. The application provides a biological feature extraction model training method which is beneficial to reducing the labor cost and improving the accuracy of biological features.
Referring to FIG. 1, a flow 100 of one embodiment of a biometric extraction model training method in accordance with the present application is shown. The biological feature extraction model training method can be applied to various electronic devices. For example, may include, but is not limited to, servers, desktop computers, laptop portable computers, and the like. The biological feature extraction model training method comprises the following steps:
Step 101, performing data enhancement processing of fixed pixel positions on each first image in the image set containing the target biological feature, and obtaining a second image corresponding to each first image.
In this embodiment, the execution subject of the biometric extraction model training method may acquire in advance an image set containing the target biometric. The target biometric characteristic may be any predetermined biometric characteristic, such as a fingerprint characteristic, a palm print characteristic, an iris characteristic, a lip print characteristic, etc. Accordingly, the image set containing the target biometric feature may be a fingerprint image set, a palm print image set, an iris image set, or a lip print image set. The image set described above may be acquired in a variety of ways. For example, the existing image set stored therein may be acquired from another server (e.g., database server) for storing training data through a wired connection or a wireless connection. For another example, images may be collected by a terminal device, and the collected images may be aggregated into an image set. The image collection environment (such as image collection module, collection temperature, etc.) and the finger state (such as finger temperature, finger humidity, pressing force, etc.) of the image collection can be diversified, so as to ensure the difference of the images and improve the generalization of the trained biological feature extraction model.
In this embodiment, the execution subject may treat each image in the image set as a first image and perform data enhancement processing of fixed pixel positions on each first image to obtain a second image corresponding to each first image. Here, data enhancement refers to a technique that produces similar but different data by making a series of random changes to the data; data enhancement processing can enlarge the size of the data set. The data enhancement processing used in this application includes data enhancement processing that keeps pixel positions fixed and data enhancement processing that transforms pixel positions.
In some alternative implementations of the present embodiment, the data enhancement processing for fixed pixel point locations may include, but is not limited to, at least one of Gaussian blur processing, random noise processing, brightness conversion, contrast conversion, hue conversion, and saturation conversion. For each first image, the execution subject may perform at least one of Gaussian blur processing, random noise processing, brightness conversion, contrast conversion, hue conversion, and saturation conversion on the first image to obtain a second image corresponding to the first image.
As an example, fig. 2 shows a characteristic intensity comparison chart before and after the data enhancement processing for fixing the pixel positions of the first image. The original biometric map of the first image is shown at 201. After the data enhancement processing of the fixed pixel position is performed on the first image, the biometric image is shown as a reference numeral 202, and the feature intensity is changed compared with the biometric image shown as a reference numeral 201.
It should be noted that, the data enhancement processing for fixing the pixel position may include, but is not limited to, the above list, and will not be described in detail here.
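As an illustration only, the following Python sketch composes such position-preserving augmentations with PyTorch/torchvision. The function names, blur kernel, jitter ranges, and noise level are illustrative assumptions, not values prescribed by this application.

```python
# A minimal sketch of augmentations that keep pixel positions fixed: blur,
# photometric jitter, and noise change intensities but leave the ridge/valley
# layout of the biometric untouched. Parameter values are illustrative only.
import torch
from torchvision import transforms

def add_gaussian_noise(img: torch.Tensor, std: float = 0.02) -> torch.Tensor:
    # Perturb pixel values, not positions; img is a float tensor in [0, 1].
    return (img + std * torch.randn_like(img)).clamp(0.0, 1.0)

def jitter_brightness_contrast(img: torch.Tensor) -> torch.Tensor:
    # Random brightness/contrast change; ridge positions remain fixed.
    brightness = 1.0 + 0.3 * (2 * torch.rand(1).item() - 1)
    contrast = 1.0 + 0.3 * (2 * torch.rand(1).item() - 1)
    mean = img.mean()
    return ((img - mean) * contrast + mean * brightness).clamp(0.0, 1.0)

fixed_position_augment = transforms.Compose([
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    transforms.Lambda(jitter_brightness_contrast),
    transforms.Lambda(add_gaussian_noise),
])
# second_image = fixed_position_augment(first_image)  # same texture layout, new appearance
```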
Step 102, performing data enhancement processing for transforming pixel positions on each second image to obtain a third image corresponding to each first image.
In this embodiment, after obtaining the second images corresponding to the first images, the execution subject may perform data enhancement processing for transforming the pixel positions of the second images to obtain third images corresponding to the second images, that is, obtain third images corresponding to the first images.
In some alternative implementations of the present embodiment, the data enhancement processing that transforms pixel locations may include, but is not limited to, at least one of flipping, rotating, translating, mirroring. For each first image, the executing body may perform at least one of flipping, rotating, translating, and mirroring on the second image corresponding to the first image, to obtain a third image corresponding to the first image. It should be noted that the first image and the third image may have the same size. For example, if the rotation operation is performed, the rotated image may be subjected to processing such as cropping so that the processed image is the same as the first image in size.
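A hedged sketch of position-transforming augmentation follows; the rotation and translation ranges are assumptions, and torchvision's affine transform is used because it keeps the output canvas the same size as the input, consistent with the size requirement above.

```python
# A minimal sketch of augmentations that transform pixel positions: flips,
# rotations, and translations move the biometric texture, so the result no
# longer aligns pixel-for-pixel with the first image. Ranges are illustrative.
import random
import torchvision.transforms.functional as F

def transform_position_augment(img):
    if random.random() < 0.5:
        img = F.hflip(img)                      # mirror / horizontal flip
    if random.random() < 0.5:
        img = F.vflip(img)                      # vertical flip
    angle = random.uniform(-30.0, 30.0)         # illustrative rotation range
    dx, dy = random.randint(-8, 8), random.randint(-8, 8)  # illustrative shift in pixels
    # affine keeps the output canvas equal to the input size, so the first and
    # third images share the same dimensions.
    return F.affine(img, angle=angle, translate=(dx, dy), scale=1.0, shear=0.0)

# third_image = transform_position_augment(second_image)
```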
Step 103, combining each first image and the corresponding second image into a positive sample pair, combining each first image and the corresponding third image into a negative sample pair, and training the neural network based on the obtained positive sample pair and negative sample pair to obtain a biological feature extraction model.
In this embodiment, the second image is obtained by performing data enhancement processing of fixed pixel positions on the first image, so the positions of the biometric lines (such as fingerprint ridges, palm print lines, lip print lines, etc.) in the first image and its corresponding second image are the same. Thus, each first image and its corresponding second image can be combined into a positive sample pair. The third image is obtained by further performing data enhancement processing of transforming pixel positions, so the positions of the biometric lines in the first image and its corresponding third image are different. Thus, each first image and its corresponding third image can be combined into a negative sample pair. Further, the neural network may be trained based on the obtained positive and negative sample pairs to obtain a biometric extraction model.
Here, the training goal of the model may be to pull the features of a positive sample pair closer together while pushing the features of a negative sample pair apart. Taking a fingerprint feature extraction model as an example, the training target makes the features the model extracts from images with the same fingerprint ridges as similar as possible, so that the same finger yields the same features as far as possible across different acquisition environments and finger states, which improves the accuracy of subsequent fingerprint identification; at the same time, it makes the features the model extracts from images with different fingerprint ridges as different as possible, which avoids recognition errors caused by similar features being extracted from different fingers. The similarity of features can be measured by the Euclidean distance between them. For the trained biological feature extraction model, the module length (norm) of features extracted from a clear region is longer than that of features extracted from a blurred region, so the region where the biometric features are located in an image to be identified can be determined based on the feature module length, which facilitates subsequent biometric identification, image segmentation, and the like.
In this embodiment, the neural network may be a convolutional neural network (Convolutional Neural Network, CNN) with any of various existing structures (e.g., UNet, etc.). In practice, a convolutional neural network is a feed-forward neural network whose artificial neurons respond to surrounding units within part of their coverage area, so it performs well on image processing and can be used here to extract image features. In this embodiment, the neural network may include, but is not limited to, convolution layers, deconvolution layers, and the like. A convolution layer may be used to extract image features and downsample the feature map. A deconvolution layer may be used to extract image features and upsample the feature map. For the neural network used here, the output feature map may have the same size as the input image.
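The following is one possible minimal backbone, not the network mandated by this application (which may equally use UNet or another architecture): a small fully convolutional encoder-decoder whose output feature map matches the input size, assuming the input height and width are divisible by 4. The class name, layer widths, and depth are assumptions for illustration.

```python
# A small fully convolutional encoder-decoder sketch: convolution layers
# downsample, transposed-convolution layers upsample back to the input size,
# producing a per-pixel biometric feature map.
import torch.nn as nn

class TinyFeatureNet(nn.Module):
    def __init__(self, in_channels: int = 1, feat_channels: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1),  # downsample x2
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),           # downsample x4
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # upsample x2
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, feat_channels, 4, stride=2, padding=1),  # back to input size
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))  # per-pixel biometric feature map
```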
In some optional implementations of this embodiment, the execution body may train to obtain the biometric extraction model by using the following sub-steps:
Sub-step S11: generate triplets based on the positive and negative sample pairs that contain the same first image. A triplet may include a first image, the second image in the positive sample pair to which the first image belongs, and the third image in the negative sample pair to which the first image belongs. One triplet may be constructed for each first image, so multiple triplets may be obtained. Each triplet may be used as a training sample, and the samples together constitute a sample set for training the biological feature extraction model. Because the sample set is generated from an unannotated image set, unsupervised training of the biological feature extraction model is achieved, which reduces labor cost and avoids annotation errors.
In the substep S12, the following model training steps are iteratively performed:
First, a target triplet is extracted from the obtained triplets. The extraction manner and the number of target triplets extracted are not limited in the present application. For example, at least one target triplet may be extracted randomly, or at least one target triplet may be extracted in a specified order.
Second, the target triplet is input into the neural network to obtain a first feature map, a second feature map and a third feature map. Here, the first feature map, the second feature map and the third feature map may be the biometric map corresponding to the first image, the biometric map corresponding to the second image and the biometric map corresponding to the third image in the target triplet, respectively.
Third, a loss value of the neural network is determined based on the first feature map, the second feature map, the third feature map and a preset triplet loss function. Here, the loss value is the value of a loss function, a non-negative real-valued function that can be used to characterize the difference between the detected result and the actual result. In general, the smaller the loss value, the better the robustness of the model. The loss function may be set according to actual requirements; a triplet loss function may be used here.
Fourth, the parameters of the neural network are updated based on the loss value. Here, the gradient of the loss value with respect to the neural network parameters may be obtained using a back-propagation algorithm, and the parameters may then be updated based on the gradient using a gradient descent algorithm. Specifically, the gradient of the loss value with respect to the parameters of each layer of the neural network can be obtained using the chain rule and the back-propagation (Back Propagation, BP) algorithm. In practice, the back-propagation algorithm is also called the error back-propagation (Error Back Propagation, BP) algorithm, a learning algorithm suitable for multi-layer neural networks. During back propagation, the partial derivatives of the loss function with respect to the weight of each neuron are obtained layer by layer, forming the gradient of the loss function with respect to the weight vector, which serves as the basis for modifying the weights. Gradient descent is a method commonly used in machine learning to solve for model parameters. When minimizing the loss function, the neuron weights (e.g., the parameters of convolution kernels in convolution layers) may be adjusted by a gradient descent algorithm based on the computed gradients.
Sub-step S13: in response to a stop-iteration condition being satisfied, the iteration is stopped and the biological feature extraction model is obtained. Here, each time a target triplet is input, the parameters of the neural network may be updated once based on the loss value, until the stop-iteration condition is satisfied. In practice, the stop-iteration condition may be set as needed. For example, training may be determined to be complete when the number of training iterations reaches a preset number, or when the loss value of the neural network converges. When training is complete, the trained neural network can be determined as the biological feature extraction model. A compact sketch of this training procedure follows.
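Putting sub-steps S11-S13 together, the sketch below illustrates one possible training loop. It uses the standard triplet form max(d(anchor, positive) - d(anchor, negative) + margin, 0) computed per pixel over the channel dimension and averaged; the margin, learning rate, epoch count, the use of Adam, and a `triplet_loader` yielding (first, second, third) image batches are all illustrative assumptions rather than parameters fixed by this application.

```python
# A minimal training-loop sketch under the assumptions stated above.
import torch

def triplet_map_loss(f_anchor, f_positive, f_negative, margin: float = 1.0):
    # Per-pixel Euclidean distances over the channel dimension (dim=1 for NCHW),
    # averaged over all pixels: pull the positive pair together, push the
    # negative pair apart, following max(d_pos - d_neg + margin, 0).
    d_pos = (f_anchor - f_positive).pow(2).sum(dim=1).add(1e-8).sqrt().mean()
    d_neg = (f_anchor - f_negative).pow(2).sum(dim=1).add(1e-8).sqrt().mean()
    return torch.relu(d_pos - d_neg + margin)

def train(model, triplet_loader, epochs: int = 10, lr: float = 1e-3):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for first_img, second_img, third_img in triplet_loader:
            f_a = model(first_img)    # feature map of the first image (anchor)
            f_p = model(second_img)   # positive: same ridge positions
            f_n = model(third_img)    # negative: transformed ridge positions
            loss = triplet_map_loss(f_a, f_p, f_n)
            optimizer.zero_grad()
            loss.backward()           # back-propagation of the loss gradient
            optimizer.step()          # gradient-descent style parameter update
    return model
```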
In the method provided by this embodiment of the application, data enhancement processing of fixed pixel positions is performed on each first image in an image set to obtain a second image corresponding to each first image; data enhancement processing of transforming pixel positions is then performed on each second image to obtain a third image corresponding to each first image; finally, each first image is combined with its corresponding second image into a positive sample pair and with its corresponding third image into a negative sample pair, and a neural network is trained based on the obtained positive and negative sample pairs to obtain a biological feature extraction model, thereby realizing automatic generation of positive and negative sample pairs and unsupervised training of the biological feature extraction model. On the one hand, the model can be trained without manually annotating training data, which reduces labor cost. On the other hand, because the manual annotation step is omitted, annotation errors are avoided, and the biological features extracted by a model trained in this way express texture details better, which improves the accuracy of biological feature extraction.
With further reference to fig. 3, a flow 300 of one embodiment of a biometric extraction method is shown. The biometric feature extraction method can be applied to various electronic devices. For example, may include, but is not limited to, smart phones, tablet computers, laptop portable computers, car computers, palm top computers, desktop computers, set top boxes, smart televisions, cameras, wearable devices, and the like.
The flow 300 of the biometric extraction method includes the steps of:
in step 301, a target image is acquired.
In this embodiment, the execution subject of the biometric extraction method may acquire a target image, where the target image may be any image to be biometric extracted, such as a fingerprint image recorded in a fingerprint acquisition area by a user, a palm print recorded in a palm print acquisition area, and so on.
Step 302, inputting the target image into a biological feature extraction model to obtain a biological feature map of the target image.
In this embodiment, the execution subject may input the target image into the biometric extraction model to obtain a biometric map of the target image. The biological feature extraction model may be trained using the biological feature extraction model training method described in the above embodiments. The specific generation process may be described in the above embodiments, and will not be described herein.
After the extraction of the biometric image, the biometric image may be subjected to other processing as needed, such as biometric identification, biometric region segmentation, and the like, which is not particularly limited herein.
The biological feature extraction method of the embodiment can be used for extracting biological features in the image, and the extracted biological features have stronger expression capability on texture details, so that the accuracy of biological feature extraction can be improved.
With further reference to fig. 4, a flow 400 of one embodiment of an image segmentation method is shown. The image segmentation method is applicable to various electronic devices. For example, may include, but is not limited to, smart phones, tablet computers, laptop portable computers, car computers, palm top computers, desktop computers, set top boxes, smart televisions, cameras, wearable devices, and the like.
The image segmentation method flow 400 includes the steps of:
step 401, acquiring a target image.
In this embodiment, the execution subject of the image segmentation method may acquire a target image, where the target image may be any image to be subjected to biometric feature extraction, such as a fingerprint image recorded in a fingerprint acquisition area by a user, a palm print recorded in a palm print acquisition area, and the like.
Step 402, inputting the target image into a biological feature extraction model to obtain a biological feature map of the target image.
In this embodiment, the execution subject may input the target image into the biometric extraction model to obtain a biometric map of the target image. The biological feature extraction model may be trained using the biological feature extraction model training method described in the above embodiments. The specific generation process may be described in the above embodiments, and will not be described herein.
Step 403, determining the characteristic module length of each pixel point in the target image based on the biological characteristic map.
In this embodiment, the execution subject may determine the feature module length of each pixel point in the target image based on the biometric map. For each pixel point, the feature module length of the pixel point is the feature module length of the corresponding feature point in the biometric map, i.e., the norm of the vector formed by the feature values of that feature point across the channels of the biometric map. It can be calculated as the square root of the sum of the squared feature values.
As an example, the image and the feature map may have the same size. In this case, for each feature point in the biometric map, the execution subject may first determine the feature module length corresponding to the feature point based on the feature values of that feature point in each channel of the biometric map, i.e., compute the square root of the sum of the squared per-channel feature values. Then, this feature module length is determined as the feature module length of the pixel point corresponding to the feature point in the image. It should be noted that the feature map may include one or more channels; the number of channels may be determined by the number of convolution kernels in the last convolution layer, and its specific value is not limited here.
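A minimal sketch of this computation follows, assuming a feature map laid out as channels × height × width; the function name is hypothetical.

```python
# Per-pixel L2 norm across channels: the feature module length of each pixel.
import torch

def feature_module_length(feature_map: torch.Tensor) -> torch.Tensor:
    # feature_map: (C, H, W); returns an (H, W) map whose value at each pixel is
    # the square root of the sum of squared per-channel feature values.
    return feature_map.pow(2).sum(dim=0).sqrt()
```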
Step 404, determining a biometric region in the target image based on the feature module lengths.
In this embodiment, the module length of features the biometric extraction model extracts from a clear region (e.g., a fingerprint, palm print, lip print, or iris region) is longer than that of features extracted from a blurred region (e.g., the background). After obtaining the feature module length of each pixel point in the target image, the execution subject can build a module-length map that uses the feature module lengths as pixel values. Based on this module-length map, the execution subject can determine the biometric region (such as a fingerprint, palm print, lip print, or iris region) in the target image. As an example, the region formed by pixel points whose module length is greater than a preset threshold may be determined as the biometric region. As yet another example, the module-length map may first be processed (e.g., truncation, normalization, correction, etc.) to obtain a processed module-length map, and the region formed by pixel points whose module lengths in the processed map are greater than a preset threshold is then determined as the biometric region. Effective segmentation of the image is thereby achieved.
In some optional implementations of this embodiment, the execution subject may determine the biometric region in the target image through the following steps (a minimal sketch is given after these steps):
First, feature module lengths greater than a first threshold are adjusted to the first threshold, so that outliers do not affect subsequent processing.
Second, the feature module length of each pixel point is normalized. For example, the feature module length of each pixel point after the first step can be divided by the first threshold, so that the feature module lengths are normalized and the data become easier to process.
Third, the normalized feature module length of each pixel point is corrected, and the region formed by pixel points whose corrected values are greater than a second threshold is determined as the biometric region. Here, an image correction method such as gamma correction can be used to adjust the distribution of values, which makes thresholding easier. After correction, the region formed by pixel points greater than the second threshold can be determined as the biometric region. Effective segmentation of the image is thereby achieved, and the determined biometric region facilitates subsequent operations such as biometric recognition.
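A minimal sketch of these three steps follows; the two thresholds, the gamma value, and the function name are illustrative assumptions, not values fixed by this application.

```python
# Clip / normalize / gamma-correct / threshold the module-length map to obtain
# a binary mask of the biometric region.
import torch

def segment_biometric_region(module_length_map: torch.Tensor,
                             first_threshold: float = 4.0,
                             second_threshold: float = 0.5,
                             gamma: float = 0.5) -> torch.Tensor:
    clipped = module_length_map.clamp(max=first_threshold)   # step 1: suppress outliers
    normalized = clipped / first_threshold                    # step 2: scale into [0, 1]
    corrected = normalized.pow(gamma)                         # step 3: gamma correction
    return corrected > second_threshold                       # boolean biometric-region mask
```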
As an example, fig. 5 is a schematic view of the processing effect of the image segmentation method. The original image is a fingerprint image, as indicated by reference numeral 501. An image processed in accordance with the alternative implementation described above is shown at 502, in which the biometric area, i.e., the fingerprint area, is effectively segmented.
The image segmentation method of the embodiment can be used for extracting the biological characteristics in the image, and the extracted biological characteristics have stronger expression capability on texture details, so that the accuracy of extracting the biological characteristics can be improved, and the accuracy of image segmentation is further improved.
With further reference to fig. 6, as an implementation of the method shown in the foregoing figures, the present application provides an embodiment of a biometric extraction model training apparatus, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 1, and the apparatus is particularly applicable to various electronic devices.
As shown in fig. 6, the training device 600 for a biological feature extraction model according to the present embodiment includes a first processing unit 601 configured to perform data enhancement processing for fixing pixel positions on each first image in an image set including a target biological feature to obtain second images corresponding to each first image, a second processing unit 602 configured to perform data enhancement processing for converting pixel positions on each second image to obtain third images corresponding to each first image, and a training unit 603 configured to combine each first image and the corresponding second image into a positive sample pair, combine each first image and the corresponding third image into a negative sample pair, and train a neural network based on the obtained positive sample pair and negative sample pair to obtain a biological feature extraction model.
In some optional implementations of this embodiment, the training unit 603 is further configured to generate a triplet based on a positive sample pair and a negative sample pair including the same first image, iteratively perform a model training step of extracting a target triplet from the obtained triplet, inputting the target triplet to a neural network to obtain a first feature map, a second feature map, and a third feature map, determining a loss value of the neural network based on the first feature map, the second feature map, the third feature map, and a preset triplet loss function, updating parameters of the neural network based on the loss value, and stopping iteration in response to satisfying a stop iteration condition, to obtain a biometric extraction model.
In some optional implementations of this embodiment, the second processing unit 602 is further configured to, for each first image, perform at least one of flipping, rotating, translating, and mirroring on a second image corresponding to the first image, to obtain a third image corresponding to the first image.
In some optional implementations of this embodiment, the first processing unit 601 is further configured to, for each first image in the image set including the target biometric feature, perform at least one of gaussian blur processing, random noise processing, brightness transformation, contrast transformation, hue transformation, and saturation transformation on the first image data, and obtain a second image corresponding to the first image.
The device provided by this embodiment of the application performs data enhancement processing of fixed pixel positions on each first image in an image set to obtain a second image corresponding to each first image, then performs data enhancement processing of transforming pixel positions on each second image to obtain a third image corresponding to each first image, and finally combines each first image with its corresponding second image into a positive sample pair and with its corresponding third image into a negative sample pair, trains a neural network based on the obtained positive and negative sample pairs, and obtains a biological feature extraction model, thereby realizing automatic generation of positive and negative sample pairs and unsupervised training of the biological feature extraction model. On the one hand, the model can be trained without manually annotating training data, which reduces labor cost. On the other hand, because the manual annotation step is omitted, annotation errors are avoided, and the biological features extracted by a model trained in this way express texture details better, which improves the accuracy of biological feature extraction.
With further reference to fig. 7, as an implementation of the method shown in the above figures, the present application provides an embodiment of a biometric extraction device, which corresponds to the embodiment of the method shown in fig. 1, and which is particularly applicable to various electronic apparatuses.
As shown in fig. 7, the biometric extraction apparatus 700 of the present embodiment includes an acquisition unit 701 for acquiring a target image, and a feature extraction unit 702 for inputting the target image into a biometric extraction model to obtain a biometric map of the target image.
In this embodiment, the biometric extraction model may be trained using the biometric extraction model training method described in the above embodiment. The specific generation process may be described in the above embodiments, and will not be described herein.
The device provided by the embodiment of the application can be used for extracting the biological characteristics in the image, and the extracted biological characteristics have stronger expression capability on texture details, so that the accuracy of extracting the biological characteristics can be improved.
With further reference to fig. 8, as an implementation of the method shown in the foregoing figures, the present application provides an embodiment of an image segmentation apparatus, which corresponds to the method embodiment shown in fig. 1, and which is particularly applicable to various electronic devices.
As shown in fig. 8, the image segmentation apparatus 800 of the present embodiment includes an acquisition unit 801 for acquiring a target image, a feature extraction unit 802 for inputting the target image into a biometric extraction model to obtain a biometric image of the target image, a first determination unit 803 for determining a feature pattern length of each pixel point in the target image based on the biometric image, and a second determination unit 804 for determining a biometric region in the target image based on the feature pattern length.
In some optional implementations of this embodiment, the second determining unit is further configured to adjust a feature module length greater than a first threshold to the first threshold, normalize the feature module length of each pixel, correct the feature module length of each pixel after normalization, and determine an area formed by the pixels greater than the second threshold after correction as the biological feature area.
In some optional implementations of this embodiment, the image and the feature map have the same size, and the first determining unit is further configured to, for each feature point in the biometric map, determine, based on a feature value of the feature point in each channel of the biometric map, a feature mode length corresponding to the feature point, and determine, as a feature mode length of a pixel point corresponding to the feature point in the target image, the feature mode length corresponding to the feature point.
The device provided by the embodiment of the application can be used for extracting the biological characteristics in the image, and the extracted biological characteristics have stronger expression capability on texture details, so that the accuracy of extracting the biological characteristics can be improved, and the accuracy of image segmentation is further improved.
The embodiment of the application also provides electronic equipment, which comprises one or more processors and a storage device, wherein one or more programs are stored on the storage device, and when the one or more programs are executed by the one or more processors, the one or more processors are enabled to realize the method for training the biological feature extraction model.
Referring now to fig. 9, a schematic diagram of an electronic device for implementing some embodiments of the present application is shown. The electronic device shown in fig. 9 is only an example and should not impose any limitation on the functionality and scope of use of embodiments of the present application.
As shown in fig. 9, the electronic device 900 may include a processing means (e.g., a central processor, a graphics processor, etc.) 901, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage means 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data necessary for the operation of the electronic device 900 are also stored. The processing device 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
In general, the following devices may be connected to the I/O interface 905: input devices 906 such as a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 907 including a liquid crystal display (LCD), speaker, vibrator, and the like; storage devices 908 including a magnetic disk, hard disk, and the like; and communication devices 909. The communication devices 909 may allow the electronic device 900 to communicate wirelessly or by wire with other devices to exchange data. While fig. 9 shows an electronic device 900 having various devices, it is to be understood that not all illustrated devices are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 9 may represent one device or a plurality of devices as needed.
The embodiment of the application also provides a computer program product, which comprises a computer program, wherein the computer program realizes the biological feature extraction model training method when being executed by a processor.
In particular, according to some embodiments of the application, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communication device 909, or installed from storage device 908, or installed from ROM 902. The above-described functions defined in the methods of some embodiments of the present application are performed when the computer program is executed by the processing device 901.
The embodiment of the application also provides a computer readable medium, on which a computer program is stored, which when being executed by a processor, implements the above-mentioned method for training a biometric extraction model.
It should be noted that, the computer readable medium according to some embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the application, however, the computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to electrical wiring, fiber optic cable, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The computer readable medium may be included in the electronic device or may exist alone without being incorporated into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods of the above-described embodiments.
Computer program code for carrying out operations for some embodiments of the present application may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., via the internet using an internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the application may be implemented in software or in hardware. The described units may also be provided in a processor, for example as a processor comprising a first determination unit, a second determination unit, a selection unit and a third determination unit. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The above description is only illustrative of the few preferred embodiments of the present application and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the application in the embodiments of the present application is not limited to the specific combination of the above technical features, but also encompasses other technical features formed by any combination of the above technical features or their equivalents without departing from the spirit of the application. Such as the above-described features, are mutually replaced with the technical features having similar functions (but not limited to) disclosed in the embodiments of the present application.

Claims (11)

1.一种生物特征提取模型训练方法,其特征在于,所述方法包括:1. A method for training a biometric feature extraction model, characterized in that the method comprises: 对包含目标生物特征的图像集中的各第一图像进行固定像素点位置的数据增强处理,得到各第一图像对应的第二图像;Data augmentation processing with fixed pixel positions is performed on each first image in the image set containing the target biometric features to obtain a second image corresponding to each first image; 对各第二图像进行变换像素点位置的数据增强处理,得到各第一图像对应的第三图像;Data augmentation processing is performed on each second image to transform the pixel positions, resulting in a third image corresponding to each first image. 将每个第一图像与对应的第二图像组合为正样本对,将每个第一图像与对应的第三图像组合为负样本对,基于所得到的正样本对和负样本对,对神经网络进行训练,得到生物特征提取模型。Each first image is combined with its corresponding second image to form a positive sample pair, and each first image is combined with its corresponding third image to form a negative sample pair. Based on the obtained positive and negative sample pairs, the neural network is trained to obtain a biometric feature extraction model. 2.根据权利要求1所述的方法,其特征在于,所述基于所得到的正样本对和负样本对,对神经网络进行训练,得到生物特征提取模型,包括:2. The method according to claim 1, characterized in that, the step of training the neural network based on the obtained positive sample pairs and negative sample pairs to obtain a biometric feature extraction model includes: 基于包含相同第一图像的正样本对和负样本对,生成三元组;Triples are generated based on positive and negative sample pairs containing the same first image; 迭代执行如下模型训练步骤:从所得到的三元组中提取目标三元组;将所述目标三元组输入至神经网络,得到第一特征图、第二特征图和第三特征图;基于所述第一特征图、所述第二特征图、所述第三特征图和预设的三元组损失函数,确定所述神经网络的损失值;基于所述损失值更新所述神经网络的参数;The model training process is iteratively executed as follows: extracting target triples from the obtained triples; inputting the target triples into the neural network to obtain a first feature map, a second feature map, and a third feature map; determining the loss value of the neural network based on the first feature map, the second feature map, the third feature map, and a preset triple loss function; and updating the parameters of the neural network based on the loss value. 响应于满足停止迭代条件,停止迭代,得到生物特征提取模型。The iteration stops when the stopping condition is met, and the biometric extraction model is obtained. 3.根据权利要求1所述的方法,其特征在于,所述对各第二图像进行变换像素点位置的数据增强处理,得到各第一图像对应的第三图像,包括:3. The method according to claim 1, characterized in that, the step of performing data enhancement processing on each second image to change the pixel position, thereby obtaining a third image corresponding to each first image, includes: 对于每个第一图像,对该第一图像对应的第二图像进行翻转、旋转、平移、镜像中的至少一项处理,得到与该第一图像对应的第三图像。For each first image, perform at least one of the following operations on the corresponding second image: flip, rotate, translate, or mirror, to obtain the third image corresponding to the first image. 4.根据权利要求1-3之一所述的方法,其特征在于,所述对包含目标生物特征的图像集中的各第一图像进行固定像素点位置的数据增强处理,得到各第一图像对应的第二图像,包括:4. The method according to any one of claims 1-3, characterized in that, the step of performing data augmentation processing on each first image in the image set containing the target biometric features at fixed pixel positions to obtain a second image corresponding to each first image includes: 对于包含目标生物特征的图像集中的每个第一图像,将该第一图像数据进行高斯模糊处理、随机噪声处理、亮度变换、对比度变换、色调变换、饱和度变换中的至少一项处理,得到该第一图像对应的第二图像。For each first image in the image set containing the target biometric features, the first image data is processed by at least one of Gaussian blurring, random noise processing, brightness transformation, contrast transformation, hue transformation, and saturation transformation to obtain the second image corresponding to the first image. 
5. A biometric feature extraction method, characterized in that the method comprises:
acquiring a target image;
inputting the target image into a biometric feature extraction model trained by the method according to any one of claims 1-4, to obtain a biometric feature map of the target image.

6. An image segmentation method, characterized in that the method comprises:
acquiring a target image;
inputting the target image into a biometric feature extraction model trained by the method according to any one of claims 1-4, to obtain a biometric feature map of the target image;
determining, based on the biometric feature map, the feature modulus of each pixel in the target image;
determining, based on the feature moduli, the biometric region in the target image.

7. The method according to claim 6, characterized in that determining the biometric region in the target image based on the feature moduli comprises:
adjusting feature moduli greater than a first threshold to the first threshold;
normalizing the feature modulus of each pixel;
correcting the normalized feature modulus of each pixel, and determining the region formed by the pixels whose corrected feature modulus is greater than a second threshold as the biometric region.

8. The method according to claim 6, characterized in that the target image and the biometric feature map have the same size, and determining the feature modulus of each pixel in the target image based on the biometric feature map comprises:
for each feature point in the biometric feature map, determining the feature modulus of that feature point based on its feature values across the channels of the biometric feature map, and taking that feature modulus as the feature modulus of the pixel in the target image corresponding to that feature point.

9. An electronic device, characterized in that it comprises:
one or more processors;
a storage device on which one or more programs are stored,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-8.

10. A computer-readable medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1-8.

11. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1-8.
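For illustration only, a sketch of the modulus-based segmentation in claims 6-8, assuming the biometric feature map is a (C, H, W) tensor with the same spatial size as the target image. The threshold values and the box blur standing in for the unspecified "correction" step are assumptions, not details given in the claims.

```python
import torch
import torch.nn.functional as F

def biometric_region(feature_map: torch.Tensor,
                     first_threshold: float = 10.0,
                     second_threshold: float = 0.5) -> torch.Tensor:
    """Return a boolean (H, W) mask of the biometric region from a (C, H, W) feature map."""
    # Claim 8: per-pixel feature modulus taken over the channel dimension.
    modulus = feature_map.norm(p=2, dim=0)          # shape (H, W)

    # Claim 7: clip moduli larger than the first threshold to that threshold.
    modulus = modulus.clamp(max=first_threshold)

    # Claim 7: normalize the per-pixel moduli to [0, 1].
    modulus = (modulus - modulus.min()) / (modulus.max() - modulus.min() + 1e-8)

    # Claim 7: the "correction" step is not detailed; a 3x3 box blur is used here
    # purely as a placeholder for it.
    kernel = torch.ones(1, 1, 3, 3) / 9.0
    corrected = F.conv2d(modulus[None, None], kernel, padding=1)[0, 0]

    # Pixels whose corrected modulus exceeds the second threshold form the region.
    return corrected > second_threshold
```

The intuition behind the design is that pixels belonging to the target biometric feature produce feature vectors with larger norms than background pixels, so thresholding the (clipped, normalized, corrected) per-pixel modulus yields a segmentation mask.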
CN202210378236.3A 2022-04-12 2022-04-12 Biological feature extraction model training method and image segmentation method Active CN114882308B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210378236.3A CN114882308B (en) 2022-04-12 2022-04-12 Biological feature extraction model training method and image segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210378236.3A CN114882308B (en) 2022-04-12 2022-04-12 Biological feature extraction model training method and image segmentation method

Publications (2)

Publication Number Publication Date
CN114882308A CN114882308A (en) 2022-08-09
CN114882308B true CN114882308B (en) 2025-11-28

Family

ID=82670107

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210378236.3A Active CN114882308B (en) 2022-04-12 2022-04-12 Biological feature extraction model training method and image segmentation method

Country Status (1)

Country Link
CN (1) CN114882308B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503614B (en) * 2023-04-27 2024-07-02 杭州食方科技有限公司 Dinner plate shape feature extraction network training method and dinner plate shape information generation method
CN119339415A (en) * 2024-09-27 2025-01-21 深圳市诚创液晶显示有限公司 Liquid crystal display fingerprint module reading method, system, device and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368934A (en) * 2020-03-17 2020-07-03 腾讯科技(深圳)有限公司 Image recognition model training method, image recognition method and related device
CN113657411A (en) * 2021-08-23 2021-11-16 北京达佳互联信息技术有限公司 Neural network model training method, image feature extraction method and related device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797976B (en) * 2020-06-30 2024-04-12 北京灵汐科技有限公司 Training method, image recognition method, device, equipment and medium for neural network
CN112069921A (en) * 2020-08-18 2020-12-11 浙江大学 A Small-Sample Visual Object Recognition Method Based on Self-Supervised Knowledge Transfer
CN112232384B (en) * 2020-09-27 2024-11-05 北京迈格威科技有限公司 Model training method, image feature extraction method, target detection method and device
CN112329785A (en) * 2020-11-25 2021-02-05 Oppo广东移动通信有限公司 Image management method, device, terminal and storage medium
CN113538235B (en) * 2021-06-30 2024-01-09 北京百度网讯科技有限公司 Training method and device for image processing model, electronic equipment and storage medium
CN114187459A (en) * 2021-11-05 2022-03-15 北京百度网讯科技有限公司 Training method and device of target detection model, electronic equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368934A (en) * 2020-03-17 2020-07-03 腾讯科技(深圳)有限公司 Image recognition model training method, image recognition method and related device
CN113657411A (en) * 2021-08-23 2021-11-16 北京达佳互联信息技术有限公司 Neural network model training method, image feature extraction method and related device

Also Published As

Publication number Publication date
CN114882308A (en) 2022-08-09

Similar Documents

Publication Publication Date Title
US12387527B2 (en) Detecting forged facial images using frequency domain information and local correlation
US12266211B2 (en) Forgery detection of face image
CN108509915B (en) Method and device for generating face recognition model
CN108520220B (en) Model generation method and device
WO2022134971A1 (en) Noise reduction model training method and related apparatus
CN111369427B (en) Image processing method, image processing device, readable medium and electronic equipment
CN111695421B (en) Image recognition method and device and electronic equipment
CN107507153B (en) Image denoising method and device
CN108241855B (en) Image generation method and device
CN113505848B (en) Model training method and device
CN114359289A (en) An image processing method and related device
CN112507897A (en) Cross-modal face recognition method, device, equipment and storage medium
JP2021174529A (en) Methods and devices for detecting living organisms
CN114882308B (en) Biological feature extraction model training method and image segmentation method
CN116977195A (en) Adjustment method, device, equipment and storage medium for restoration model
CN117197134A (en) Defect detection method, device, equipment and storage medium
CN112070022A (en) Face image recognition method and device, electronic equipment and computer readable medium
CN113205530B (en) Shadow area processing method and device, computer readable medium and electronic device
CN114529750A (en) Image classification method, device, equipment and storage medium
CN113256556B (en) Image selection method and device
CN113298731B (en) Image color migration method and device, computer readable medium and electronic equipment
CN114386596A (en) Method for generating countermeasure sample, electronic device and computer readable medium
CN114120423A (en) Face image detection method and device, electronic equipment and computer readable medium
CN119006835B (en) Skin image segmentation method for reducing influence of confounding factors
CN113947178B (en) An interpretability method based on model dynamic behavior and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230403

Address after: No. S, 17/F, No. 1, Zhongguancun Street, Haidian District, Beijing 100082

Applicant after: Beijing Jigan Technology Co.,Ltd.

Address before: No. 1268, 1f, building 12, neijian Middle Road, Xisanqi building materials City, Haidian District, Beijing 100096

Applicant before: BEIJING KUANGSHI TECHNOLOGY Co.,Ltd.

Applicant before: MEGVII (BEIJING) TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20230707

Address after: 201-1, Floor 2, Building 4, No. 188, Rixin Road, Binhai Science Park, Binhai, Tianjin, 300451

Applicant after: Tianjin Jihao Technology Co.,Ltd.

Address before: No. S, 17/F, No. 1, Zhongguancun Street, Haidian District, Beijing 100082

Applicant before: Beijing Jigan Technology Co.,Ltd.

GR01 Patent grant