WO2023071180A1 - Authenticity identification method, device, electronic equipment and storage medium - Google Patents
- Publication number
- WO2023071180A1 (PCT/CN2022/096019)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- preset
- dimension
- identified
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Definitions
- the present disclosure relates to the technical field of image processing, and in particular to an authenticity identification method, device, electronic equipment and storage medium.
- Embodiments of the present disclosure at least provide an authenticity identification method, device, electronic equipment, and storage medium.
- an embodiment of the present disclosure provides a method for authenticity identification, which is applied to a target deep neural network, including:
- At least one of a forged region in the image to be identified and authenticity result information of the object to be identified is determined.
- using the target deep neural network, the image features describing the image to be recognized in multiple preset dimensions can be determined more accurately, and the determined image features can then be used to identify more accurately whether the object to be recognized is a forged object, that is, to obtain more accurate authenticity result information and a more accurate forged region.
- these two detection tasks can promote each other and share some specific image features, which not only improves detection efficiency but also enhances the ability to extract image features, improving the detection accuracy of both detection tasks.
- the determining at least one of the forged region in the image to be identified and the authenticity result information of the object to be identified based on the image features includes:
- the number of feature points in the first image feature of the highest preset dimension is small, and the feature data dimension corresponding to each feature point is relatively high, which effectively removes redundant information in the image to be recognized while retaining and enriching the information effective for authenticity identification; this can therefore effectively improve the accuracy of the determined authenticity result information;
- the number of feature points in the second image feature corresponding to the lowest preset dimension is large, so the authenticity information corresponding to each pixel in the image to be identified can be determined more accurately, and the forged region in the image to be recognized can then be determined more accurately.
- the feature data dimension corresponding to each feature point in the second image feature corresponding to the lowest preset dimension is low, so the speed of determining the forged region can be improved.
- the image features include a second image feature corresponding to each preset dimension in multiple preset dimensions, and a first image feature corresponding to each preset dimension in multiple preset dimensions;
- the extraction of image features corresponding to multiple preset dimensions of the image to be recognized includes:
- performing operations such as dimensionality reduction and fusion on the first image features of each preset dimension makes it possible to determine more accurately the second image feature of the image to be recognized for each preset dimension in the multiple preset dimensions.
- the determining, based on the first image features corresponding to each preset dimension, that the image to be recognized corresponds to a second image feature of each preset dimension in multiple preset dimensions includes:
- the first dimension group includes a first preset dimension and a second preset dimension, and the first preset dimension is higher than the second preset dimension;
- a first feature processing operation is performed on the second image feature corresponding to the first preset dimension in the first dimension group, to obtain a third image feature matching the second preset dimension; wherein the feature map corresponding to the third image feature has the same image resolution as the feature map corresponding to the first image feature of the second preset dimension;
- the image feature corresponding to the higher preset dimension of the two adjacent preset dimensions is processed; the processed third image feature has the same data dimension as the first image feature corresponding to the lower preset dimension, and the corresponding feature maps have the same image resolution; subsequent feature fusion based on a third image feature and a first image feature with the same data dimension and image resolution can improve the fusion accuracy and yield a more accurate second image feature.
- the determining, based on the obtained third image feature and the first image feature corresponding to the second preset dimension in the first dimension group, of the second image feature corresponding to the second preset dimension in the first dimension group includes:
- determining a second image feature corresponding to the second preset dimension in the first dimension group.
- the above-mentioned third image feature and first image feature have the same data dimension and the same image resolution, so the two can be spliced accurately; after that, operations such as feature extraction and processing can yield a more accurate processing result, namely the above-mentioned second image feature.
- the extracting the first image feature of the image to be recognized corresponding to each preset dimension in multiple preset dimensions includes:
- the second dimension group includes a third preset dimension and a fourth preset dimension, and the third preset dimension is lower than the fourth preset dimension;
- a second feature processing operation is performed on the first image feature corresponding to the third preset dimension in the second dimension group, to obtain a fourth image feature matching the fourth preset dimension;
- the first image feature is processed according to the fourth preset dimension, so the fourth image feature matching the fourth preset dimension can be determined more accurately; the fourth image feature is then further processed to obtain a first image feature matching the fourth preset dimension.
- the first image features of lower preset dimensions in each second dimension group are sequentially processed, so that the first image features corresponding to each preset dimension can be determined more accurately.
- the obtaining the authenticity result information of the object to be identified based on the first image feature corresponding to the highest preset dimension among the plurality of preset dimensions includes:
- in the first image feature corresponding to the highest preset dimension, the feature data corresponding to each feature point has more dimensions and the number of feature points is smaller, so these image features can describe more accurately the information effective for authenticity identification. Therefore, using the first image feature, the above-mentioned first predicted probability and second predicted probability can be determined more accurately, and more accurate authenticity result information can then be obtained based on the first predicted probability and the second predicted probability.
- the determining the forged region in the image to be identified based on the second image feature corresponding to the lowest preset dimension among the plurality of preset dimensions includes:
- a forged area in the image to be identified is determined.
- the second image feature corresponding to the lowest preset dimension is obtained by reducing the dimensionality of, adding feature points to, and concatenating the first image features corresponding to each level of preset dimension. Therefore, the feature data corresponding to each feature point in the second image feature can characterize more accurately whether the corresponding feature point is a forged feature point; the forgery result information of whether each pixel in the image to be recognized is a forged pixel can thus be determined more accurately, and a more accurate forged region can then be determined.
- the acquiring of an image to be recognized that includes the object to be recognized includes:
- an image region corresponding to the object to be recognized is extracted from the original image to obtain the image to be recognized.
- the detection frame and the key points can each determine the image region occupied by the object to be recognized in the original image, and combining the two to determine this image region allows them to calibrate each other, so a more accurate image region, and thus a more accurate image to be recognized, can be obtained.
- the extracting the image region corresponding to the object to be recognized from the original image based on the detection frame and the plurality of key points to obtain the image to be recognized includes:
- an image area corresponding to the object to be identified is extracted from the original image to obtain the image to be identified.
- the region corresponding to the target region information contains the complete object to be identified, which helps improve the accuracy of the authenticity identification of the object to be identified.
- the extracting the image region corresponding to the object to be recognized from the original image based on the detection frame and the plurality of key points to obtain the image to be recognized includes:
- the image region corresponding to the object to be identified is extracted from the original image to obtain the image to be identified.
- the image region corresponding to the object to be recognized that occupies the larger area in the original image is extracted, which ensures a higher resolution for the obtained image to be recognized and helps improve the accuracy of authenticity identification.
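The mutual calibration of the detection frame and the key points can be pictured as taking the union of the two candidate regions. Below is a minimal numpy sketch under assumed conventions: an `(x1, y1, x2, y2)` box, an `(N, 2)` keypoint array, and a hypothetical `margin` parameter; the disclosure does not specify this exact procedure.

```python
import numpy as np

def crop_region(box, keypoints, img_w, img_h, margin=0.1):
    """Union of the detection box and the keypoints' bounding box,
    expanded by a margin and clamped to the original image.
    box: (x1, y1, x2, y2); keypoints: (N, 2) array of (x, y)."""
    kx1, ky1 = keypoints.min(axis=0)
    kx2, ky2 = keypoints.max(axis=0)
    # Union of the two regions, so each source calibrates the other.
    x1, y1 = min(box[0], kx1), min(box[1], ky1)
    x2, y2 = max(box[2], kx2), max(box[3], ky2)
    # Expand slightly so the complete object stays inside the crop.
    dx, dy = margin * (x2 - x1), margin * (y2 - y1)
    x1, y1 = max(0, x1 - dx), max(0, y1 - dy)
    x2, y2 = min(img_w, x2 + dx), min(img_h, y2 + dy)
    return x1, y1, x2, y2
```

The returned rectangle always covers both the detection frame and every key point, which matches the "complete object" property described above.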
- the above method further includes:
- a heat map is generated based on the forged area and the image to be identified; wherein, the heat value of a pixel point in the heat map corresponding to the forged area is higher than a preset value.
- the heat map is used to visualize the forged region corresponding to the image to be recognized, which makes the forged region more intuitive.
- the above-mentioned authenticity identification method also includes the step of training the target deep neural network:
- the sample images are input into the target deep neural network to be trained and processed through the target neural network to obtain a first prediction score, a second prediction score that the sample object is a forged object, and predicted probability information that each pixel in each sample image is a forged pixel;
- the target deep neural network to be trained is trained by using the network loss information until the preset training condition is met, and a trained target deep neural network is obtained.
- the prediction result of the forged region can be directly obtained, so the above predicted probability information can be used to characterize the prediction result of the forged region; the first prediction score and the second prediction score can determine the authenticity identification result of the sample object. Therefore, the network loss information for training the target neural network is established based on the first prediction score and the second prediction score (corresponding to the authenticity identification task), and on the predicted probability information and the standard probability information (corresponding to the forged-region detection task); through the mutual promotion of these two detection tasks, the detection accuracy of the trained target neural network can be effectively improved.
- the generating network loss information based on the first prediction score, the second prediction score, prediction probability information and standard probability information corresponding to each sample image includes:
- the network loss information is generated based on the first loss information and the second loss information.
- the above-mentioned first loss information can be determined more accurately by using the authenticity identification information of the sample image, that is, the first prediction score and the second prediction score; by using the prediction result of the forged region (the predicted probability information) and the standard result (the standard probability information), the above-mentioned second loss information can be determined more accurately; then, based on the first loss information and the second loss information, network loss information representing the losses of both detection tasks can be generated.
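One plausible way to combine the two losses is sketched below. This is a hypothetical numpy illustration, not the disclosure's concrete loss: `network_loss`, the softmax cross-entropy over the two scores, and the `region_weight` parameter are all assumptions made for illustration.

```python
import numpy as np

def network_loss(real_score, fake_score, is_forged,
                 pred_prob, standard_prob, region_weight=1.0):
    """Hypothetical combination of the two task losses:
    a softmax cross-entropy over the two authenticity scores (first loss),
    plus a per-pixel binary cross-entropy for the forged region (second loss)."""
    # First loss: authenticity classification from the two prediction scores.
    scores = np.array([real_score, fake_score])
    logp = scores - np.log(np.sum(np.exp(scores)))
    first_loss = -logp[1] if is_forged else -logp[0]
    # Second loss: forged-region prediction vs. standard probability information,
    # averaged over all pixels.
    eps = 1e-7
    p = np.clip(pred_prob, eps, 1 - eps)
    second_loss = -np.mean(standard_prob * np.log(p)
                           + (1 - standard_prob) * np.log(1 - p))
    return first_loss + region_weight * second_loss
```

Because both terms share the same feature extractor during training, lowering either term pushes the shared features toward forgery-relevant information, which is the mutual-promotion effect described above.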
- an authenticity identification device including:
- an image acquisition module configured to acquire an image to be identified including an object to be identified;
- a feature extraction module configured to extract a first image feature corresponding to a plurality of preset dimensions of the image to be identified; wherein, the number of feature points corresponding to the first image feature is negatively correlated with the value of the corresponding preset dimension;
- a detection module configured to determine at least one of a forged region in the image to be identified and authenticity result information of the object to be identified based on the image features.
- an embodiment of the present disclosure further provides an electronic device, including a processor, a memory, and a bus; the memory stores machine-readable instructions executable by the processor; when the electronic device is running, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps of the above-mentioned first aspect, or of any possible implementation of the first aspect, are executed.
- embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above-mentioned first aspect, or of any possible implementation of the first aspect, are executed.
- FIG. 1 shows a flow chart of a method for authenticity identification provided by an embodiment of the present disclosure
- FIG. 2 shows a flow chart of another authenticity identification method provided by an embodiment of the present disclosure
- FIG. 3 shows a flowchart of a network training method provided by an embodiment of the present disclosure
- FIG. 4 shows a schematic diagram of an authenticity identification device provided by an embodiment of the present disclosure;
- FIG. 5 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
- the target deep neural network can be used to accurately determine the image features describing the image to be identified in multiple preset dimensions, and the determined image features can then be used to accurately identify whether the object to be identified is a forged object, that is, to obtain more accurate authenticity result information and a more accurate forged region.
- these two detection tasks can promote each other and share some specific image features, which not only improves detection efficiency but also enhances the ability to extract image features, so the key information in the image corresponding to authenticity identification can be extracted and the detection accuracy of the two detection tasks improved.
- the authenticity identification method provided by the embodiments of the present disclosure will be described below, taking as an example an execution subject that is a device with computing capability.
- the authenticity identification method provided by the present disclosure is applied to the target deep neural network, and may include the following steps:
- the object to be identified may be an object that needs to be authenticated, for example, a human face, and the object to be identified may be determined according to a specific application scenario, which is not limited in the present disclosure.
- the above-mentioned image to be recognized may be taken by the above-mentioned device with computing capability, or it may be taken by other shooting devices and transmitted to the above-mentioned device with computing capability, which is not limited in the present disclosure.
- the above image to be recognized may be a captured original image, or may be a sub-image corresponding to the object to be recognized intercepted from the captured original image, which is not limited in the present disclosure.
- the aforementioned image to be recognized may be selected from a video clip, or may be an independently existing image, which is not limited in the present disclosure.
- the aforementioned image features may include a second image feature corresponding to each preset dimension in the plurality of preset dimensions, and a first image feature corresponding to each preset dimension in the plurality of preset dimensions.
- the above-mentioned image features can be extracted as follows: first extract the first image feature of the image to be recognized for each preset dimension in the multiple preset dimensions; then, based on the first image feature corresponding to each preset dimension, determine the second image feature of the image to be recognized for each preset dimension in the multiple preset dimensions.
- Different preset dimensions correspond to different numbers of feature points of the first image feature, and data dimensions of feature data corresponding to the feature points are also different.
- the higher the preset dimension, the smaller the number of feature points in the corresponding first image feature, the higher the data dimension of the feature data corresponding to each feature point, and the smaller the feature map corresponding to the first image feature.
- the preset dimensions are pre-set according to specific application scenarios.
- feature extraction is first performed on the image to be recognized to obtain the original image features corresponding to the lowest preset dimension, and the extracted image features are then processed by a convolution layer and the like to obtain the first image feature corresponding to the lowest preset dimension.
- the adjacent first image features corresponding to the higher preset dimensions are obtained.
- the feature map corresponding to the second image feature has the same size and resolution as the image to be recognized.
- the second image feature corresponding to the lowest preset dimension is finally determined by performing operations such as reducing the data dimension of the feature data and concatenating the first image feature in descending order of the preset dimensions.
- the feature data corresponding to each feature point of the second image feature can more accurately represent whether the corresponding feature point is a forged feature point.
- the authenticity result information of the object to be identified can be obtained based on the first image feature corresponding to the highest preset dimension among the plurality of preset dimensions; and/or the forged region in the image to be identified can be determined based on the second image feature corresponding to the lowest preset dimension among the plurality of preset dimensions.
- the feature data corresponding to each feature point in the first image feature corresponding to the highest preset dimension has more dimensions, and there are fewer feature points; such image features can describe more accurately the image features effective for authenticity identification, so the authenticity result information of the object to be identified can be determined more accurately using the first image feature.
- a convolutional layer is used to process the first image feature corresponding to the highest preset dimension to obtain the authenticity result information.
- the second image feature corresponding to the lowest preset dimension is obtained from the first image features corresponding to each level of preset dimension through dimensionality reduction, addition of feature points, and splicing operations. Therefore, the feature data corresponding to each feature point of the second image feature can characterize more accurately whether the corresponding feature point is a forged feature point; and because the feature map corresponding to the second image feature has the same resolution as the image to be recognized, the second image feature can be used to determine more accurately the forgery result information of whether each pixel in the image to be recognized is a forged pixel, and a more accurate forged region can then be determined.
- a convolutional layer is used to process the second image feature corresponding to the aforementioned lowest preset dimension to obtain the aforementioned forged region.
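For intuition, the two convolutional heads can be sketched with 1×1 convolutions, which reduce to linear maps over the channel axis. The numpy sketch below uses assumed shapes (256 channels at 7×7 for the highest preset dimension, 16 channels at full 224×224 resolution for the lowest) and random weights; none of these values come from the disclosure.

```python
import numpy as np

def conv1x1(feat, weights):
    """A 1x1 convolution is a linear map over the channel axis.
    feat: (C, H, W); weights: (C_out, C)."""
    return np.einsum('oc,chw->ohw', weights, feat)

rng = np.random.default_rng(0)
high_feat = rng.normal(size=(256, 7, 7))    # first image feature, highest preset dimension (assumed shape)
low_feat = rng.normal(size=(16, 224, 224))  # second image feature, lowest preset dimension (assumed shape)

# Authenticity head: pool the high-dimension feature, then score real vs forged.
pooled = high_feat.mean(axis=(1, 2))         # (256,) global average over feature points
scores = rng.normal(size=(2, 256)) @ pooled  # two authenticity scores

# Forgery-region head: per-pixel forgery probability at full resolution.
logits = conv1x1(low_feat, rng.normal(size=(1, 16)))[0]  # (224, 224)
prob_map = 1.0 / (1.0 + np.exp(-logits))                 # sigmoid per pixel
```

The sketch makes the division of labor concrete: the small, channel-rich feature yields a single image-level verdict, while the full-resolution feature yields one probability per pixel, from which the forged region is read off.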
- the following steps can be used to determine the second image features corresponding to each preset dimension:
- the preset dimensions are grouped by taking every two adjacent preset dimensions as a group to obtain multiple first dimension groups; each first dimension group includes a first preset dimension and a second preset dimension, and the first preset dimension is higher than the second preset dimension.
- the dimension of a certain first dimension group may be the first preset dimension in the first dimension group.
- a feature processing operation is performed on the second image feature corresponding to the first preset dimension in the first dimension group to obtain a third image feature matching the second preset dimension; wherein the feature map corresponding to the third image feature has the same image resolution and size as the feature map corresponding to the first image feature of the second preset dimension. Based on the obtained third image feature and the first image feature corresponding to the second preset dimension in the first dimension group, the second image feature corresponding to the second preset dimension in the first dimension group is determined.
- the feature map corresponding to the fifth image feature has the same image resolution and size as the feature map corresponding to the first image feature of the second preset dimension. Based on the obtained fifth image feature and the first image feature corresponding to the second preset dimension in the first dimension group, the second image feature corresponding to the second preset dimension in the first dimension group is determined.
- the above-mentioned feature processing operation may be a transposed convolution operation, through which the data dimension of the feature data corresponding to the feature points can be reduced while increasing the number of feature points, that is, the resolution of the corresponding feature map can be improved.
- a feature processing operation is performed on the first image feature 291 corresponding to the higher preset dimension of the two adjacent preset dimensions in the first dimension group; the processed fifth image feature has the same data dimension as the first image feature corresponding to the lower preset dimension, and the corresponding feature maps have the same image resolution; after that, the third image feature obtained after the feature processing operation is spliced with the first image feature corresponding to the second preset dimension in the first dimension group to obtain a spliced image feature, such as the spliced image feature 21 shown in FIG. 2.
- the second image features 23 corresponding to the second preset dimensions in the first dimension group are determined. For example, at least one convolution process may be performed on the spliced image features 21 to obtain the second image features 23.
- a spliced image feature 21, a second image feature 23, etc. are shown in FIG. 2.
- a feature processing operation is performed on the second image feature 23 corresponding to the higher preset dimension of the two adjacent preset dimensions in the first dimension group; the processed third image feature has the same data dimension as the first image feature corresponding to the lower preset dimension, and the corresponding feature maps have the same image resolution; after that, the third image feature is concatenated with the first image feature corresponding to the second preset dimension in the first dimension group to obtain the concatenated image feature 22 shown in FIG. 2.
- the second image features 24 corresponding to the second preset dimensions in the first dimension group are determined. For example, at least one convolution process may be performed on the spliced image features 22 to obtain the second image features 24.
- the third image feature corresponding to the same first dimension group and the first image feature corresponding to the second preset dimension have the same data dimension and the same image resolution, so the two can be spliced accurately; after that, operations such as feature extraction and processing are performed on the spliced image feature, and a relatively accurate processing result, namely the above-mentioned second image feature, can be obtained.
- the first image feature and the second image feature in each first dimension group are sequentially processed in order of dimensions from high to low, and the second image feature corresponding to the lowest preset dimension can be obtained.
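One step of this top-down fusion can be sketched purely as shape bookkeeping. The sketch below is a hypothetical numpy stand-in: nearest-neighbour upsampling plus channel halving replaces the transposed convolution, and the function names and concrete shapes are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def first_feature_processing(feat):
    """Stand-in for the transposed convolution: double the feature-map
    resolution (more feature points) and halve the channel/data dimension."""
    c, h, w = feat.shape
    up = feat.repeat(2, axis=1).repeat(2, axis=2)  # 2x nearest-neighbour upsample
    return up[:c // 2]                             # keep half the channels

def fuse(higher_feat, lower_first_feat):
    """One first-dimension-group step: process the higher-dimension feature,
    then splice it with the lower-dimension first image feature."""
    third = first_feature_processing(higher_feat)
    # Same resolution, so the two can be spliced accurately.
    assert third.shape[1:] == lower_first_feat.shape[1:]
    return np.concatenate([third, lower_first_feat], axis=0)

f_high = np.zeros((128, 14, 14))  # feature of the higher preset dimension (assumed)
f_low = np.zeros((64, 28, 28))    # first image feature, lower preset dimension (assumed)
spliced = fuse(f_high, f_low)     # spliced image feature
```

In the disclosure, at least one convolution would then be applied to the spliced feature to obtain the second image feature; that step is omitted here since the point is the resolution/dimension matching.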
- the following steps can be used to determine the forged region in the image to be identified:
- the probability information that each pixel in the image to be recognized is a fake pixel is determined.
- the second image feature 25 of the lowest preset dimension is shown in FIG. 2 .
- the feature map corresponding to the second image feature of the lowest preset dimension has the same resolution and size as the image to be recognized, so each pixel in that feature map corresponds one-to-one to a pixel in the image to be recognized; the second image feature can therefore be used to determine more accurately the forgery result information of whether each pixel in the image to be recognized is a forged pixel.
- the second image feature 25 corresponding to the lowest preset dimension can be processed by using a fully connected network layer, a classifier, etc., to obtain probability information that each pixel in the image to be recognized is a fake pixel.
- after the probability information corresponding to each pixel in the image to be recognized is obtained, the forgery result information of whether each pixel in the image to be recognized is a forged pixel is determined based on the determined probability information and a preset probability threshold; based on the forgery result information corresponding to each pixel, the forged region in the image to be identified is determined.
- the aforementioned preset probability thresholds are flexibly set according to specific application scenarios.
- when the probability value corresponding to the above probability information is greater than or equal to the preset probability threshold, the pixel corresponding to the probability information is determined to be a forged pixel; when the probability value corresponding to the probability information is less than the preset probability threshold, the pixel corresponding to the probability information is determined to be a pixel that has not been tampered with.
- the pixel points determined to be fake pixel points may form at least one fake region.
- a mask map M_pred with the same size as the image to be identified is created; afterwards, the mask map M_pred is filled according to the following formula:
- M_pred(i, j) = 1 if I_pred(i, j) ≥ τ, and M_pred(i, j) = 0 otherwise;
- where (i, j) represents the row and column identifiers of the corresponding pixel, τ represents the above-mentioned preset probability threshold, and I_pred represents the probability value corresponding to the above-mentioned probability information.
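Assuming I_pred is stored as a 2D probability array, the mask-filling rule is a single vectorized threshold. A minimal numpy sketch (the array layout is an assumption):

```python
import numpy as np

def fill_mask(pred_prob, tau=0.5):
    """M_pred(i, j) = 1 if I_pred(i, j) >= tau, else 0.
    pred_prob: 2D array of per-pixel forgery probabilities."""
    return (pred_prob >= tau).astype(np.uint8)
```

The pixels set to 1 then form the forged region (or regions) referred to above.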
- a heat map may be generated based on the forged area and the image to be identified; wherein, the heat value of a pixel point in the heat map corresponding to the forged area is higher than a preset value.
- the resolution and size of the heat map are determined according to the resolution and size of the image to be recognized; for example, the heat map may be set to have the same size and resolution as the image to be recognized. Afterwards, the heat values of the pixels in the heat map corresponding to the forged region can be set higher than the preset value, with the heat values of those pixels equal to one another; alternatively, the heat value of each corresponding pixel in the heat map can be set according to the above probability information, specifically such that the heat value of a pixel increases as the probability value corresponding to its probability information increases.
- the heat values corresponding to pixels outside the forged area may be set to be equal, or set according to the probability value corresponding to the above probability information, which is not limited in the present disclosure.
- the visualization of the forged region corresponding to the image to be recognized is realized by using the heat map, which improves the intuitiveness and interpretability of the forged region.
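A possible rendering of such a heat map, assuming the probability map and binary mask from the previous steps are available as numpy arrays; the mapping of probabilities into heat values and the background value are illustrative choices, not specified by the disclosure.

```python
import numpy as np

def heat_map(prob_map, mask, preset_value=0.5, background=0.1):
    """Heat values inside the forged region scale with the predicted
    probability and stay above the preset value; pixels outside the
    forged region share one lower heat value."""
    heat = np.full(prob_map.shape, background)
    inside = mask.astype(bool)
    # Map probability into (preset_value, 1] so forged pixels stand out.
    heat[inside] = preset_value + (1 - preset_value) * prob_map[inside]
    return heat
```

Rendering `heat` with any colormap then highlights the forged region while keeping the rest of the image visually quiet.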
- the following steps can be used to extract the first image feature of the image to be recognized corresponding to each preset dimension in multiple preset dimensions:
- the image to be recognized can be input into the target deep neural network and subjected to at least one sequence of depthwise separable convolution operations to obtain the first image feature 26 of the lowest preset dimension.
- every two adjacent preset dimensions can be taken as a group to obtain a plurality of second dimension groups; each second dimension group includes a third preset dimension and a fourth preset dimension, and the third preset dimension is lower than the fourth preset dimension.
- the dimension of the second dimension group may be the third preset dimension in the second dimension group.
- after the above-mentioned second dimension groups are determined, the following operations are performed on each second dimension group in order of dimension from low to high, until the first image feature of each preset dimension other than the lowest preset dimension is determined:
- a second feature processing operation is performed on the first image feature corresponding to the third preset dimension in the second dimension group to obtain a fourth image feature matching the fourth preset dimension; based on the obtained fourth image feature, the first image feature corresponding to the fourth preset dimension in the second dimension group is determined.
- that is, the first image feature 26 corresponding to the lower preset dimension in the second dimension group undergoes a feature processing operation to obtain the fourth image feature 29 matching the higher preset dimension; then, based on the obtained fourth image feature 29 and after operations such as convolution, the first image feature 27 corresponding to the higher preset dimension is determined.
- the above-mentioned second feature processing operation may be a separable convolution operation, which is specifically used to increase the data dimension of the feature data of the feature points in the first image feature and reduce the number of feature points.
- the fourth image feature obtained after the second feature processing operation matches the higher fourth preset dimension in the second dimension group.
- then, at least one convolution operation can be performed on the fourth image feature to obtain the first image feature corresponding to the higher preset dimension in the second dimension group; this first image feature and the fourth image feature have the same preset dimension, and their corresponding feature maps have the same image resolution.
- the first image feature in the corresponding second dimension group is processed, and the fourth image feature matching the fourth preset dimension can be determined more accurately;
- the fourth image feature is further processed, and the obtained first image feature matches the fourth preset dimension.
- the first image features of lower preset dimensions in each second dimension group are sequentially processed, so that the first image features corresponding to each preset dimension can be determined more accurately.
- FIG. 2 shows the first image feature 27, the first image feature 28, etc. in the second dimension group, and the fourth image feature 29, etc. in the second dimension group.
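As shape bookkeeping only (the concrete channel and resolution numbers are assumptions, not values taken from FIG. 2), the progression of first image features across preset dimensions can be sketched as each step doubling the channel dimension of the feature data while halving the feature-map resolution, so the number of feature points falls as the preset dimension rises:

```python
def first_feature_shapes(num_dims, base_channels=32, base_hw=64):
    """Return (channels, height, width) for each preset dimension,
    from lowest to highest. Doubling channels while halving spatial
    size at each step is an illustrative assumption."""
    shapes = []
    c, hw = base_channels, base_hw
    for _ in range(num_dims):
        shapes.append((c, hw, hw))
        c, hw = c * 2, hw // 2  # higher dimension: more channels, fewer feature points
    return shapes
```

This mirrors the statement that the number of feature points is negatively correlated with the value of the preset dimension.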
- the following steps can be used to determine the authenticity result information of the object to be identified:
- First, two scores are determined. Specifically, based on the first image feature corresponding to the highest preset dimension, a first score for the object to be recognized being a real object and a second score for the object to be recognized being a forged object are determined.
- the first image feature may be processed through at least one convolution operation to obtain a first score indicating that the object to be identified is a real object and a second score indicating that the object to be identified is a fake object.
- Next, two predicted probabilities are determined. Specifically, based on the first score and the second score, a first predicted probability that the object to be identified is a real object and a second predicted probability that the object to be identified is a forged object are determined.
- the predicted probabilities can be computed from the scores with a softmax: p_class = e^{x[class]} / ∑_i e^{x[i]}
- where i and class both index the identity of the object to be recognized as a real object (value 0) or a forged object (value 1)
- x represents the score: x[0] represents the first score for the object to be recognized being a real object, and x[1] represents the second score for it being a forged object
- p represents the predicted probability: when class is 0, p_class represents the first predicted probability that the object to be recognized is a real object; when class is 1, p_class represents the second predicted probability that it is a forged object
- a classifier may be used to determine the above two prediction probabilities.
- Finally, the authenticity result is predicted. Specifically, the authenticity result information of the object to be identified is determined based on the first predicted probability and the second predicted probability.
- the first predicted probability can be compared with the second predicted probability, and the identification result corresponding to the larger predicted probability is used as the authenticity result information. For example, if the first predicted probability is greater than the second predicted probability, the authenticity result information indicates that the object to be identified is a real object; if the first predicted probability is less than or equal to the second predicted probability, the authenticity result information indicates that the object to be identified is a forged object.
- c is a parameter used to characterize the authenticity result information.
- the feature data corresponding to each feature point in the first image feature of the highest preset dimension has more dimensions, and the number of feature points is smaller,
- so the image can be described more accurately, which is effective for authenticity identification. Therefore, using this first image feature, the above first and second predicted probabilities can be determined more accurately, and based on them, more accurate authenticity result information can be obtained.
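The score-to-probability step and the comparison rule above can be sketched as follows: a plain two-class softmax, with ties resolved toward "forged" following the "less than or equal" rule described above. This is an illustrative sketch, not the disclosure's classifier:

```python
import math

def authenticity_from_scores(first_score, second_score):
    """Turn the two scores into predicted probabilities with a softmax,
    then pick the identification result with the larger probability."""
    e_real = math.exp(first_score)   # score for "real object"
    e_fake = math.exp(second_score)  # score for "forged object"
    z = e_real + e_fake
    p_real, p_fake = e_real / z, e_fake / z
    result = "real" if p_real > p_fake else "forged"
    return result, p_real, p_fake
```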
- the image to be recognized in the above embodiments may be directly captured by the photographing device, or may be a sub-image intercepted from the captured original image.
- the image to be recognized corresponding to the object to be recognized can be intercepted by the following steps:
- the above-mentioned original image may be obtained from a video clip.
- for example, frames may be extracted at equal intervals from a video clip to obtain multiple frames of original images, and the disclosed method is then applied to each frame, realizing authenticity identification for every frame image.
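The equal-interval frame extraction reduces to picking frame indices; actually reading the frames would need a video library, which is omitted in this sketch:

```python
def equal_interval_indices(total_frames, interval):
    """Indices of the frames sampled at equal intervals from a clip of
    `total_frames` frames; each indexed frame becomes one original image."""
    if interval <= 0:
        raise ValueError("interval must be positive")
    return list(range(0, total_frames, interval))
```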
- Both the detection frame and the key points can respectively determine the image area occupied by the object to be recognized in the original image. Combining the two to determine the above image area can play a role in mutual calibration, so a more accurate image area can be obtained. That is, a more accurate image to be recognized can be obtained.
- the number of key points located inside the detection frame is counted first; then, based on the counted number, the proportion of these key points among all key points is determined. When this proportion is greater than a preset proportion, the above image region can be determined based on the positions of the key points and the position of the detection frame.
- the determined image area may be only the area corresponding to the detection frame, or may include the area corresponding to the detection frame and all key points.
- otherwise, recognition can be performed again to re-determine the detection frame and key points of the object to be recognized.
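The proportion check can be sketched like this; the detection frame is taken as (x1, y1, x2, y2), and the preset proportion value here is an assumption, not one given in the disclosure:

```python
def keypoints_support_box(keypoints, box, preset_proportion=0.8):
    """Count the key points inside the detection frame and check whether
    their proportion among all key points exceeds the preset proportion.
    Returns True when the image region can be determined from the box and
    key points, False when recognition should be redone."""
    x1, y1, x2, y2 = box
    inside = sum(1 for (x, y) in keypoints if x1 <= x <= x2 and y1 <= y <= y2)
    return inside / len(keypoints) > preset_proportion
```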
- the image region corresponding to the object to be recognized is extracted from the original image to obtain the image to be recognized. Specifically, this can be achieved with the following steps:
- the area corresponding to the above initial area information may be an area including a detection frame and a preset number of key points.
- the area corresponding to the target area information includes the complete object to be identified and a small part of the environment around the object, which is conducive to improving the accuracy of authenticity identification corresponding to the object to be identified.
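The expansion from the initial region to the target region can be sketched as scaling the box around its center and clipping to the original image; the scale factor used below is illustrative, since the disclosure only speaks of preset ratio information:

```python
def expand_region(box, scale, img_w, img_h):
    """Expand (x1, y1, x2, y2) around its center by `scale`, clipped to
    the image, so the target region keeps the whole object plus a small
    margin of surrounding environment."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    half_w, half_h = (x2 - x1) * scale / 2.0, (y2 - y1) * scale / 2.0
    return (max(0.0, cx - half_w), max(0.0, cy - half_h),
            min(float(img_w), cx + half_w), min(float(img_h), cy + half_h))
```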
- when the original image is recognized, not only the detection frame and the key points can be obtained, but also the area occupied by the object to be recognized in the original image (hereinafter referred to as region area information) can be determined,
- and the region area information can be saved in a json file for subsequent processing.
- the required information can be extracted from the json file.
- specifically, the region area information can be obtained from the json file first; in the case that the region area corresponding to the region area information is larger than a preset area, the image region corresponding to the object to be recognized is extracted from the original image based on the detection frame and the multiple key points, obtaining the image to be recognized.
- the obtained image to be recognized can be saved as a picture in png format.
- Extracting the image region corresponding to the object to be recognized that occupies a larger area in the original image can ensure a larger resolution of the obtained image to be recognized, which is conducive to improving the accuracy of authenticity recognition.
- the present disclosure also provides a training method of a target deep neural network, as shown in FIG. 3 , which may include the following steps:
- the sample image above is an image including a sample object, for example, a sample image including a human face.
- the sample image here may be the original image taken by the shooting device, or a sub-image intercepted from the original image including the object to be recognized.
- the above network loss information includes first loss information corresponding to authenticity detection of the sample object and second loss information corresponding to the determination of the forged region.
- the first sample probability that the sample object is a real object and the second sample probability that the sample object is a fake object can be determined.
- the calculation methods of the first sample probability and the second sample probability are the same as the above-mentioned calculation methods of the first prediction probability and the second prediction probability, and will not be repeated here.
- the first loss information can be generated using the following cross-entropy formula:
- L_c = -∑_i q_i · log(p_i)
- where L_c represents the first loss information
- i represents the identity of the sample object as a real object (i = 0) or a forged object (i = 1)
- p_i represents the sample probability: p_0 represents the first sample probability that the sample object is a real object, and p_1 represents the second sample probability that the sample object is a forged object
- q_i represents the standard probability: q_0 represents the first standard probability that the sample object is a real object, and q_1 represents the second standard probability that the sample object is a forged object
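A minimal sketch of this first loss over sample probabilities p and standard probabilities q; the cross-entropy form and the epsilon guard against log(0) are implementation assumptions added here:

```python
import math

def first_loss(p, q, eps=1e-12):
    """Cross-entropy L_c = -sum_i q_i * log(p_i) over i in {0, 1},
    where p are the sample probabilities and q the standard probabilities."""
    return -sum(q_i * math.log(p_i + eps) for p_i, q_i in zip(p, q))
```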
- the second loss information can be generated using a formula that compares, at each pixel, the predicted probability information with the standard probability information, with the following notation:
- L_region represents the second loss information
- (i, j) represents the row and column identifier of the corresponding pixel
- I_pred represents the predicted probability information corresponding to the pixel
- M_target represents the standard probability information corresponding to the pixel
- the network loss information may be generated based on the first loss information and the second loss information.
- the following formula can be used:
- L = a · L_c + b · L_region
- where L represents the network loss information, and a and b represent the preset weights.
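With preset weights a and b, the combination can be sketched as a weighted sum; the weight values below are illustrative, since the disclosure leaves them open:

```python
def network_loss(first_loss_value, second_loss_value, a=1.0, b=1.0):
    """Network loss L = a * L_c + b * L_region with preset weights a, b."""
    return a * first_loss_value + b * second_loss_value
```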
- through the predicted probability information, the prediction result of the forged region can be directly obtained, so the predicted probability information can be used to characterize the prediction result of the forged region; through the first predicted score and the second predicted score, the authenticity identification result of the sample object can be determined. Therefore, establishing the network loss information for training the target neural network based on the first predicted score and the second predicted score (corresponding to the detection task of authenticity identification) together with the predicted probability information and the standard probability information (corresponding to the detection task of the forged region) lets the two detection tasks promote each other, which can effectively improve the detection accuracy of the trained target neural network.
- the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible inner logic.
- an embodiment of the present disclosure also provides an authenticity identification device corresponding to the authenticity identification method. Since the problem-solving principle of the device is similar to that of the above-mentioned authenticity identification method, the implementation of the device can refer to the implementation of the method, and repeated parts will not be described again.
- FIG. 4 it is a schematic diagram of the structure of an authenticity identification device provided by an embodiment of the present disclosure, and the device includes:
- An image acquiring module 410 configured to acquire an image to be recognized including an object to be recognized.
- the feature extraction module 420 is configured to extract image features corresponding to multiple preset dimensions of the image to be recognized; wherein, the number of feature points corresponding to the image feature is negatively correlated with the value of the corresponding preset dimension.
- the detection module 430 is configured to determine at least one of a forged region in the image to be recognized and authenticity result information of the object to be recognized based on the image feature.
- the detection module 430 is specifically configured to:
- the image feature includes a second image feature corresponding to each preset dimension in a plurality of preset dimensions, and a first image feature corresponding to each preset dimension in a plurality of preset dimensions;
- when the feature extraction module 420 extracts the image features of the image to be recognized corresponding to multiple preset dimensions, it is configured to:
- when the feature extraction module 420 determines, based on the first image feature corresponding to each preset dimension, the second image feature of the image to be recognized corresponding to each of the multiple preset dimensions, it is specifically configured to:
- the first dimension group includes a first preset dimension and a second preset dimension, and the first preset dimension is higher than the second preset dimension;
- the first feature processing operation is performed on the first image feature corresponding to the first preset dimension in the first dimension group to obtain a third image feature matching the second preset dimension; wherein, the feature map corresponding to the third image feature has the same image resolution as the feature map corresponding to the first image feature of the second preset dimension;
- when the feature extraction module 420 determines, based on the obtained third image feature and the first image feature corresponding to the second preset dimension in the first dimension group, the second image feature corresponding to the second preset dimension in the first dimension group, it is configured to:
- a second image feature corresponding to a second preset dimension in the first dimension group is determined.
- when the feature extraction module 420 extracts the first image feature of the image to be recognized corresponding to each of the multiple preset dimensions, it is configured to:
- the second dimension group includes a third preset dimension and a fourth preset dimension, and the third preset dimension is lower than said fourth preset dimension;
- a second feature processing operation is performed on the first image feature corresponding to the third preset dimension in the second dimension group to obtain a fourth image feature matching the fourth preset dimension;
- when the detection module 430 obtains the authenticity result information of the object to be identified based on the first image feature corresponding to the highest preset dimension among the multiple preset dimensions, it is configured to:
- the detection module 430 is configured to:
- a forged area in the image to be identified is determined.
- when the image acquisition module 410 acquires the image to be identified including the object to be identified, it is configured to:
- an image region corresponding to the object to be recognized is extracted from the original image to obtain the image to be recognized.
- when the image acquisition module 410 extracts the image region corresponding to the object to be recognized from the original image based on the detection frame and the plurality of key points to obtain the image to be recognized, it is configured to:
- an image area corresponding to the object to be identified is extracted from the original image to obtain the image to be identified.
- when the image acquisition module 410 extracts the image region corresponding to the object to be recognized from the original image based on the detection frame and the plurality of key points to obtain the image to be recognized, it is configured to:
- the image region corresponding to the object to be identified is extracted from the original image to obtain the image to be identified.
- the detection module 430 is further configured to:
- a heat map is generated based on the forged area and the image to be identified; wherein, the heat value of a pixel point in the heat map corresponding to the forged area is higher than a preset value.
- the above-mentioned device also includes a training module 440 for training the target deep neural network, and the training module 440 is used for:
- the sample images are input into the target deep neural network to be trained and processed by the target neural network, obtaining, for each sample image, a first predicted score that the sample object is a real object, a second predicted score that the sample object is a forged object, and predicted probability information that each pixel in the sample image is a forged pixel;
- the target deep neural network to be trained is trained by using the network loss information until the preset training condition is met, and a trained target deep neural network is obtained.
- the training module 440 is used to:
- the network loss information is generated based on the first loss information and the second loss information.
- an embodiment of the present disclosure also provides an electronic device.
- FIG. 5 it is a schematic structural diagram of an electronic device 500 provided by an embodiment of the present disclosure, including a processor 51 , a memory 52 , and a bus 53 .
- the memory 52 is used to store execution instructions and includes a memory 521 and an external memory 522; the memory 521, also called an internal memory, is used to temporarily store computation data in the processor 51 and data exchanged with an external memory 522 such as a hard disk;
- the processor 51 exchanges data with the external memory 522 through the memory 521.
- the processor 51 communicates with the memory 52 through the bus 53, so that the processor 51 executes the following instructions:
- Embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is run by a processor, the steps of the authenticity identification method described in the foregoing method embodiments are executed.
- the storage medium may be a volatile or non-volatile computer-readable storage medium.
- the computer program product of the authenticity identification method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code, and the instructions included in the program code can be used to execute the steps of the authenticity identification method described in the above method embodiments.
- the computer program product can be specifically realized by means of hardware, software or a combination thereof.
- in one optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
- the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
- if the functions are realized in the form of software functional units and sold or used as independent products, they can be stored in a non-volatile computer-readable storage medium executable by a processor.
- the technical solution of the present disclosure, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for making a computer device (which may be a personal computer, a server, a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
- the aforementioned storage media include: USB flash drives, mobile hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical discs, and other media that can store program code.
Claims (17)
- An authenticity identification method, applied to a target deep neural network, comprising: acquiring an image to be recognized that includes an object to be recognized; extracting image features of the image to be recognized corresponding to multiple preset dimensions, wherein the number of feature points corresponding to an image feature is negatively correlated with the value of the corresponding preset dimension; and determining, based on the image features, at least one of a forged region in the image to be recognized and authenticity result information of the object to be recognized.
- The method according to claim 1, wherein determining, based on the image features, at least one of a forged region in the image to be recognized and authenticity result information of the object to be recognized comprises: obtaining the authenticity result information of the object to be recognized based on a first image feature corresponding to the highest preset dimension among the multiple preset dimensions; and/or determining the forged region in the image to be recognized based on a second image feature corresponding to the lowest preset dimension among the multiple preset dimensions.
- The method according to claim 1 or 2, wherein the image features include a second image feature corresponding to each of the multiple preset dimensions and a first image feature corresponding to each of the multiple preset dimensions; and extracting the image features of the image to be recognized corresponding to the multiple preset dimensions comprises: extracting the first image feature of the image to be recognized corresponding to each of the multiple preset dimensions; and determining, based on the first image features corresponding to the respective preset dimensions, the second image feature of the image to be recognized corresponding to each of the multiple preset dimensions.
- The method according to claim 3, wherein determining, based on the first image features corresponding to the respective preset dimensions, the second image feature of the image to be recognized corresponding to each of the multiple preset dimensions comprises: taking every two adjacent preset dimensions as one group to obtain multiple first dimension groups, wherein a first dimension group includes a first preset dimension and a second preset dimension, and the first preset dimension is higher than the second preset dimension; and performing, in order of dimension from high to low, the following operations on each first dimension group other than that of the highest dimension, until the second image feature corresponding to the lowest preset dimension is determined: performing, according to the second preset dimension in the first dimension group, a first feature processing operation on the second image feature corresponding to the first preset dimension in the first dimension group to obtain a third image feature matching the second preset dimension, wherein a feature map corresponding to the third image feature has the same image resolution as a feature map corresponding to the first image feature of the second preset dimension; and determining, based on the obtained third image feature and the first image feature corresponding to the second preset dimension in the first dimension group, the second image feature corresponding to the second preset dimension in the first dimension group.
- The method according to claim 4, wherein determining, based on the obtained third image feature and the first image feature corresponding to the second preset dimension in the first dimension group, the second image feature corresponding to the second preset dimension in the first dimension group comprises: concatenating the obtained third image feature with the first image feature corresponding to the second preset dimension in the first dimension group to obtain a concatenated image feature; and determining, based on the concatenated image feature, the second image feature corresponding to the second preset dimension in the first dimension group.
- The method according to any one of claims 3 to 5, wherein extracting the first image feature of the image to be recognized corresponding to each of the multiple preset dimensions comprises: extracting the first image feature of the image to be recognized corresponding to the lowest preset dimension; taking every two adjacent preset dimensions as one group to obtain multiple second dimension groups, wherein a second dimension group includes a third preset dimension and a fourth preset dimension, and the third preset dimension is lower than the fourth preset dimension; and performing, in order of dimension from low to high, the following operations on each second dimension group, until the first image feature of each preset dimension other than the lowest preset dimension is determined: performing, according to the fourth preset dimension in the second dimension group, a second feature processing operation on the first image feature corresponding to the third preset dimension in the second dimension group to obtain a fourth image feature matching the fourth preset dimension; and determining, based on the obtained fourth image feature, the first image feature corresponding to the fourth preset dimension in the second dimension group.
- The method according to claim 2, wherein obtaining the authenticity result information of the object to be recognized based on the first image feature corresponding to the highest preset dimension among the multiple preset dimensions comprises: determining, based on the first image feature corresponding to the highest preset dimension, a first score for the object to be recognized being a real object and a second score for the object to be recognized being a forged object; determining, based on the first score and the second score, a first predicted probability that the object to be recognized is a real object and a second predicted probability that the object to be recognized is a forged object; and determining the authenticity result information of the object to be recognized based on the first predicted probability and the second predicted probability.
- The method according to claim 2 or 7, wherein determining the forged region in the image to be recognized based on the second image feature corresponding to the lowest preset dimension among the multiple preset dimensions comprises: determining, based on the second image feature corresponding to the lowest preset dimension, probability information that each pixel in the image to be recognized is a forged pixel; determining, based on the determined probability information and a preset probability threshold, forgery result information of whether each pixel in the image to be recognized is a forged pixel; and determining the forged region in the image to be recognized based on the forgery result information corresponding to each pixel.
- The method according to any one of claims 1 to 8, wherein acquiring the image to be recognized including the object to be recognized comprises: acquiring an original image; recognizing the original image to determine a detection frame of the object to be recognized and multiple key points corresponding to the object to be recognized; and extracting, based on the detection frame and the multiple key points, an image region corresponding to the object to be recognized from the original image to obtain the image to be recognized.
- The method according to claim 9, wherein extracting, based on the detection frame and the multiple key points, the image region corresponding to the object to be recognized from the original image to obtain the image to be recognized comprises: determining, based on the multiple key points and the detection frame, initial region information of the object to be recognized in the original image; expanding the region corresponding to the initial region information according to preset ratio information to obtain target region information of the object to be recognized in the original image; and extracting, according to the target region information, the image region corresponding to the object to be recognized from the original image to obtain the image to be recognized.
- The method according to claim 9, wherein extracting, based on the detection frame and the multiple key points, the image region corresponding to the object to be recognized from the original image to obtain the image to be recognized comprises: determining region area information of the object to be recognized in the original image; and in the case that the region area corresponding to the region area information is larger than a preset area, extracting, based on the detection frame and the multiple key points, the image region corresponding to the object to be recognized from the original image to obtain the image to be recognized.
- The method according to any one of claims 1 to 11, further comprising, after the forged region is determined: generating a heat map based on the forged region and the image to be recognized, wherein heat values of pixels in the heat map corresponding to the forged region are higher than a preset value.
- The method according to any one of claims 1 to 12, further comprising the step of training the target deep neural network: acquiring multiple sample images; inputting the sample images into the target deep neural network to be trained, and processing the multiple sample images through the target neural network to obtain, for each sample image, a first predicted score that the sample object in the sample image is a real object, a second predicted score that the sample object is a forged object, and predicted probability information that each pixel in the sample image is a forged pixel; generating network loss information based on the first predicted score, the second predicted score, the predicted probability information, and standard probability information corresponding to each sample image; and training the target deep neural network to be trained using the network loss information until a preset training condition is met, to obtain a trained target deep neural network.
- The method according to claim 13, wherein generating the network loss information based on the first predicted score, the second predicted score, the predicted probability information, and the standard probability information corresponding to each sample image comprises: generating first loss information based on the first predicted score and the second predicted score corresponding to each sample image; generating second loss information based on the predicted probability information and the standard probability information corresponding to each sample image; and generating the network loss information based on the first loss information and the second loss information.
- An authenticity identification device, comprising: an image acquisition module configured to acquire an image to be recognized including an object to be recognized; a feature extraction module configured to extract image features of the image to be recognized corresponding to multiple preset dimensions, wherein the number of feature points corresponding to an image feature is negatively correlated with the value of the corresponding preset dimension; and a detection module configured to determine, based on the image features, at least one of a forged region in the image to be recognized and authenticity result information of the object to be recognized.
- An electronic device, comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the authenticity identification method according to any one of claims 1 to 14.
- A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when run by a processor, performs the steps of the authenticity identification method according to any one of claims 1 to 14.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111275648.6 | 2021-10-29 | ||
| CN202111275648.6A CN113920565A (zh) | 2021-10-29 | 2021-10-29 | Authenticity identification method and apparatus, electronic device and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023071180A1 true WO2023071180A1 (zh) | 2023-05-04 |
Family
ID=79244013
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2022/096019 Ceased WO2023071180A1 (zh) | 2021-10-29 | 2022-05-30 | Authenticity identification method and apparatus, electronic device and storage medium |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN113920565A (zh) |
| WO (1) | WO2023071180A1 (zh) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116630729A (zh) * | 2023-05-26 | 2023-08-22 | 太平金融科技服务(上海)有限公司深圳分公司 | Forged image recognition method, apparatus, device and medium |
| CN118864479A (zh) * | 2024-09-26 | 2024-10-29 | 杭州海康威视数字技术股份有限公司 | Forged image detection method, system and apparatus |
| CN119274024A (zh) * | 2024-12-10 | 2025-01-07 | 国网数字科技控股有限公司 | Image authenticity detection model training method and image authenticity detection method |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113920565A (zh) | 2021-10-29 | 2022-01-11 | 上海商汤智能科技有限公司 | Authenticity identification method and apparatus, electronic device and storage medium |
| CN116824211A (zh) * | 2023-05-08 | 2023-09-29 | 中国银联股份有限公司 | Image processing method, apparatus, device and storage medium |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130163829A1 (en) * | 2011-12-21 | 2013-06-27 | Electronics And Telecommunications Research Institute | System for recognizing disguised face using gabor feature and svm classifier and method thereof |
| CN110555481A (zh) * | 2019-09-06 | 2019-12-10 | 腾讯科技(深圳)有限公司 | Portrait style recognition method and apparatus, and computer-readable storage medium |
| CN111310616A (zh) * | 2020-02-03 | 2020-06-19 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, electronic device and storage medium |
| CN112668462A (zh) * | 2020-12-25 | 2021-04-16 | 平安科技(深圳)有限公司 | Vehicle damage detection model training and vehicle damage detection method, apparatus, device and medium |
| CN113920565A (zh) * | 2021-10-29 | 2022-01-11 | 上海商汤智能科技有限公司 | Authenticity identification method and apparatus, electronic device and storage medium |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111738244B (zh) * | 2020-08-26 | 2020-11-24 | 腾讯科技(深圳)有限公司 | Image detection method and apparatus, computer device and storage medium |
| CN112465807A (zh) * | 2020-12-14 | 2021-03-09 | 深圳市芊熠智能硬件有限公司 | License plate image authenticity recognition method, apparatus, device and medium |
- 2021-10-29: CN application CN202111275648.6A filed, published as CN113920565A (pending)
- 2022-05-30: PCT application PCT/CN2022/096019 filed, published as WO2023071180A1 (ceased)
Non-Patent Citations (1)
| Title |
|---|
| WEN WU, ZHANG XU-HONG, WEN ZHI-YUN: "Text Feature Selection based on Improved CHI and PCA", COMPUTER ENGINEERING & SCIENCE, vol. 43, no. 9, 1 September 2021 (2021-09-01), XP093061965 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN113920565A (zh) | 2022-01-11 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22885090 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 22885090 Country of ref document: EP Kind code of ref document: A1 |
|
| 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 26/11/2024) |
|