US20230076017A1 - Method for training neural network by using de-identified image and server providing same - Google Patents
- Publication number
- US20230076017A1
- Authority
- US
- United States
- Prior art keywords
- image
- neural network
- training
- dimensions
- decoding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/7715—Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/7747—Organisation of the process, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/50—Maintenance of biometric data or enrolment thereof
- G06V40/53—Measures to keep reference information secret, e.g. cancellable biometrics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0499—Feedforward networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
Definitions
- the disclosure relates to a neural network training method, and more particularly, to a neural network training method using a de-identified image.
- artificial intelligence technology is also replacing conventional data analysis technology and expanding the scope of its application.
- as a method for user authentication, the artificial intelligence uses biometric information such as a fingerprint, a face or the like to verify the identity of a user, and provides a personalized service based thereon.
- a neural network used for the artificial intelligence may need to learn various biometric information, and the training data used for this training may itself include personal information in the form of the biometric information. It is thus necessary to be cautious about the usage and management of the personal information.
- however, the current training methods used for the artificial intelligence handle data in a general format, and do not consider the protection and de-identification of the personal information.
- An object of the disclosure is to provide a neural network training method using a de-identified image.
- Another object of the disclosure is to provide a method to de-identify an image by considering prediction accuracy of a neural network.
- a neural network training method using a de-identified image includes: encoding a first image represented by an n-th dimensional vector into a predetermined p-th dimensional second image; decoding the second image into a q-th dimensional third image; inputting the third image to the neural network and extracting object information included in the third image; and training at least one parameter information used for a computation in the neural network by using the extracted object information.
- the training may include training the parameter information used for the encoding or decoding computation based on the extracted object information.
- the training may include training the parameter information by using a correlation between a degree of de-identification defining similarity between the third image and the first image and prediction accuracy of the neural network for the object information.
- a size of p-th dimensions may be determined based on the degree of the de-identification.
- N-th dimensions and q-th dimensions may have the same size.
- the method may further include storing the second image encoded in the determined p-th dimensions, wherein the decoding may include decoding the stored second image into the third image when the neural network performs the training.
- the decoding may include decoding the second image to have a data value different from that of the first image when decoding the second image in the q-th dimensions.
- a neural network training server using a de-identified image includes: a de-identification unit encoding a first image represented by an n-th dimensional vector into a predetermined p-th dimensional second image, and then decoding the second image into a q-th dimensional third image; and a training unit inputting the third image to the neural network and extracting object information included in the third image to train at least one parameter information used for a computation in the neural network.
- the training unit may train the parameter information used for the encoding or decoding computation based on the extracted object information.
- the training unit may train the parameter information by using a correlation between a degree of de-identification defining similarity between the third image and the first image and prediction accuracy of the neural network for the object information.
- the server may further include a database storing the second image encoded in determined p-th dimensions, wherein the de-identification unit decodes the stored second image into the third image when the neural network performs the training.
- the de-identification unit may decode the second image to have a data value different from that of the first image when decoding the second image in the q-th dimensions.
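- As a minimal sketch, the claimed flow could be expressed as follows, assuming PyTorch-style modules (the disclosure names no framework; `encoder`, `decoder`, `task_net`, `loss_fn`, `optimizer` and `target` are hypothetical placeholders):

```python
def train_step(x_n, encoder, decoder, task_net, loss_fn, optimizer, target):
    z_p = encoder(x_n)           # S10: encode the n-th dimensional first image into the p-th dimensional second image
    x_q = decoder(z_p)           # S20: decode the second image into the q-th dimensional third image
    obj = task_net(x_q)          # S30: input the third image to the network and extract object information
    loss = loss_fn(obj, target)
    optimizer.zero_grad()
    loss.backward()              # S40: train the parameter information by using the extracted object information
    optimizer.step()
    return loss.item()
```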
- according to the disclosure, it is possible for the neural network to perform the training without using the personal information included in the image, because the de-identified image is used for the training of the neural network.
- it is also possible to process the image more efficiently by adjusting the dimensions of the information included in the image based on the prediction performance of the neural network and encoding the same.
- FIG. 1 is an exemplary diagram showing a system providing a service based on de-identified image recognition according to an embodiment of the disclosure.
- FIG. 2 is an exemplary diagram showing object information detection in a neural network training method performed by a service operation server according to an embodiment of the disclosure.
- FIG. 3 is a flowchart showing a training process of the service operation server according to an embodiment of the disclosure.
- FIG. 4 is an exemplary diagram showing an image encoding method of the service operation server according to an embodiment of the disclosure.
- FIG. 5 is an exemplary diagram showing an image decoding method of the service operation server according to an embodiment of the disclosure.
- FIG. 6 is an exemplary diagram showing a configuration of the neural network of the service operation server according to an embodiment of the disclosure.
- FIG. 7 is an exemplary diagram showing the neural network training method performed by the service operation server according to an embodiment of the disclosure.
- FIG. 8 is a block diagram showing a configuration of the service operation server according to another embodiment of the disclosure.
- FIG. 1 is an exemplary diagram showing a system providing a service based on de-identified image recognition according to an embodiment of the disclosure.
- a user 5 may capture the user's own face by using a user terminal 10 .
- a service operation server 100 may receive a captured face image 20 from the user 5, compare an identity image with the captured face image 20, perform user authentication, and provide a service based thereon.
- the service operation server 100 may perform the user authentication by using a neural network and perform a faster and more accurate user verification procedure than a conventional image processing method by using the neural network.
- iterative training may be required in order for the neural network used for the user authentication to determine the identity of the user through the image, and the service operation server 100 may thus store various training data in a separate database 200.
- the neural network used for the user authentication may require, as the training data, images including facial information of the user 5, and may thus be required to directly use personal information.
- the service operation server 100 may protect the personal information by de-identifying the image including the personal information, such as the facial information used as the training data, and making it impossible to restore the de-identified image to an original image.
- the service operation server 100 may de-identify the image received from the user terminal 10 and store the de-identified image as the training data in the database 200 to be used for the training of the neural network.
- the neural network used for detecting the image may use data values in the images represented as multi-dimensional vectors in a color space.
- the data values in the image may be defined as a height, a width and a depth, which define its size as a three-dimensional vector.
- the depth may be defined as a size of a channel, which is a method of representing information on each pixel.
- the depth of the image may be represented as a red-green-blue (RGB) code as information that defines its color.
- a size of the channel may be three (3), and each channel may store the R, G or B value of the corresponding pixel.
- it is also possible to represent the information in the image as the multi-dimensional vector by allowing the channel to include grayscale or hue, saturation and value (HSV) components.
- a plurality of images may be used for the training of the neural network, and the training data may also be represented as a four-dimensional vector by adding the number of these images as a dimension.
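- For illustration only, the shapes described above could look as follows in NumPy (the image size and batch size are arbitrary assumptions):

```python
import numpy as np

img = np.zeros((224, 224, 3), dtype=np.uint8)  # height x width x depth (3 RGB channels)
batch = np.stack([img] * 32)                   # adding the number of images: (32, 224, 224, 3)
print(img.shape, batch.shape)                  # (224, 224, 3) (32, 224, 224, 3)
```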
- the neural network may apply a filter defined for each channel to the data values in the image represented by the multi-dimensional vector to perform a convolutional computation and extract feature information in the image.
- the convolutional computation may be sequentially performed in units of layers included in the neural network, and the neural network may include a network in which the convolutional layers are stacked.
- the neural network may be trained by adjusting a weight of a filter that emphasizes a specific color.
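- A sketch of such stacked convolutional layers, assuming PyTorch (the disclosure names no framework, and the channel counts are arbitrary choices):

```python
import torch
import torch.nn as nn

# two stacked convolutional layers; each filter spans all input channels,
# and its weights are the trainable parameters adjusted during training
features = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
)
feature_map = features(torch.randn(1, 3, 64, 64))  # (1, 32, 64, 64)
```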
- the neural network may be trained to extract object information 30 from the facial image including the facial information of the user 5 for the user verification.
- the object information 30 may be the feature information having the highest importance for distinguishing among a large amount of data, and may be, for example, unique information having the highest identification power in the facial information.
- the neural network may extract eyes, nose, mouth and the like, included in the facial information, from the image, and may be trained to extract, as the object information 30 , information determined to be necessary to identify whose facial image these extracted images belong to, such as their shapes, relative positional relationship and the like.
- the object information 30 may include one or more pieces of feature information, and a weight may be given to each piece of feature information based on the importance required for the identification.
- the facial image here may include the personal information as it is, and the training data itself used for the training may be the personal information.
- the neural network may use an actual image as the training data to have a robust performance on images under various capturing conditions, and the actual image may include more diverse personal information.
- an additional processing process for separately protecting the personal information may be required for using the facial image for the training of the neural network, and the training method according to the disclosure may de-identify the image including the personal information and train the neural network.
- the service operation server 100 may include a de-identification unit 110 and a training unit 120 .
- the de-identification unit 110 may de-identify the face image captured by and transmitted from the user terminal 10 .
- the de-identification unit 110 may include an encoder 112 and a decoder 114 .
- the de-identification unit 110 may encode the received face image by using the encoder 112 and convert the image through decoding again to generate training data.
- the image converted by the decoder 114 may include information represented in a format different from that of the original face image.
- the converted image may be converted so that the identity of the object included in the image cannot be recognized with the naked eye, and may have a format which cannot be restored to the original face image.
- the converted image may be used as input data of a neural network model, and may thus be information that makes it impossible to recognize the identity with the naked eye while being represented in a more efficient format for the neural network model to extract the object information 30.
- the training unit 120 may train the neural network which extracts the object information 30 by using the de-identified image received from the de-identification unit 110 .
- the neural network may output the object information 30 .
- the training unit 120 may train and update weights of various parameters used for extracting the feature information in the neural network by using the output object information 30 .
- the training unit 120 may also train various parameters of the encoder 112 or decoder 114 , used to de-identify the image together with the neural network.
- the first image 20 represented by an n-th dimensional vector may be encoded into a predetermined p-th dimensional second image 21 (S 10 ).
- N-th dimensions may define the number of features representing information included in the first image 20 captured by the user terminal 10 and transmitted to the service operation server 100 .
- the n-th dimensions may be a size of the image and defined as three dimensions of height, width and depth of the image.
- the n-th dimensional first image 20 may be compressed using the encoder 112 and converted into the p-th dimensional second image 21 .
- the received first image 20 may be input to the encoder 112 , and the encoder 112 may generate the p-th dimensional second image 21 according to a predetermined compression rule.
- the second image 21 may be encoded into the p-th dimensions, which are reduced dimensions different from dimensions representing the first image 20 .
- data values of the n-th dimensions representing the first image 20 may be compressed according to an encoding standard, and the second image 21 may thus be represented in the p-th dimensions having a reduced data size compared to the n-th dimensions.
- the encoder 112 may encode the first image 20 in the p-th dimensions in which the number of dimensions itself is reduced compared to the n-th dimensions or the data size representing the data value of each dimension is reduced.
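- A minimal encoder sketch under these descriptions, assuming strided convolutions reduce the dimensions (the compression rule, channel counts and the size of the p-th dimensions are design choices not specified by the disclosure):

```python
import torch
import torch.nn as nn

# strided convolutions reduce height/width (and hence the data size) step by step
encoder = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=4, stride=2, padding=1),  # 64x64 -> 32x32
    nn.ReLU(),
    nn.Conv2d(8, 4, kernel_size=4, stride=2, padding=1),  # 32x32 -> 16x16
)
second_image = encoder(torch.randn(1, 3, 64, 64))         # p-th dimensional second image: (1, 4, 16, 16)
```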
- the p-th dimensional second image 21 generated by the encoder 112 may be decoded into a q-th dimensional third image 22 (S 20 ).
- the q-th dimensions may be obtained by re-expanding the p-th dimensions through the decoder 114, which redefines the data values in the input second image 21.
- the decoder 114 may expand the data values of the second image 21 defined in the p-th dimensions back to data values in the q-th dimensions.
- the decoder 114 may expand the data values of the second image 21 into q-th dimensional information by using weight parameters.
- the q-th dimensions decoded in this embodiment may be defined as the same dimensions as the n-th dimensions of the first image 20 before being encoded.
- the third image 22 decoded in the q-th dimensions may be generated from the encoded second image 21 and may have different data values from the original first image 20 on the same dimensions.
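- A matching decoder sketch (assuming transposed convolutions re-expand the p-th dimensions so that the q-th dimensions equal the n-th dimensions of the first image):

```python
import torch
import torch.nn as nn

# transposed convolutions expand the p-th dimensions back to the q-th dimensions;
# the output has the original shape but different data values than the first image
decoder = nn.Sequential(
    nn.ConvTranspose2d(4, 8, kernel_size=4, stride=2, padding=1),  # 16x16 -> 32x32
    nn.ReLU(),
    nn.ConvTranspose2d(8, 3, kernel_size=4, stride=2, padding=1),  # 32x32 -> 64x64
)
third_image = decoder(torch.randn(1, 4, 16, 16))                   # q-th dimensional third image: (1, 3, 64, 64)
```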
- the first image 20 may be an image captured in a general way, and it is thus possible to identify the object included therein with the naked eye.
- in contrast, the third image 22 may include information redefined after being encoded, and may include data values in a form in which the object cannot be identified with the naked eye.
- the training method according to this embodiment may generate the second image 21 and the third image 22 by de-identifying the first image 20 .
- the de-identified third image 22 may be input to the neural network, and the object information 30 included in the third image 22 may be extracted (S 30 ).
- the neural network may be trained using the extracted object information 30 (S 40 ).
- the neural network may update parameters in the network to have a more robust performance on the de-identified images, which are the input data.
- the neural network may include a plurality of hidden layers 124 and 126, which perform the convolutional computation, between an input layer 122 and an output layer 128.
- the hidden layers 124 and 126 may include nodes performing the weight computation in response to the input data.
- the neural network may include the convolutional layers performing the convolutional computation, and the nodes in the hidden layers 124 and 126 may be filters performing the convolutional computation on the data values in the de-identified image for each unit region.
- the data values for each channel in the input image may be sequentially computed while passing through the plurality of convolutional layers, the characteristic distribution or pattern of the emphasized data values may be found through this computation, and the object information 30 in the image may be extracted.
- the object information 30 extracted as an output signal for the training may be compared with a target value of the actual image to calculate an error, and the calculated error may be propagated in the neural network in a reverse direction according to a backpropagation algorithm.
- the neural network may be trained to find a cause of the error by using the backpropagation algorithm and change a connection weight between the nodes of the neural network.
- the training method according to this embodiment may be performed by training a weight parameter W 2 between each node performing the convolutional computation in the neural network, together with a de-identification parameter W 1 used for the encoding or the decoding.
- the training may be performed to update the parameters used for the computation, such as the size and sampling rate of a unit block that serves as the data sampling standard when the encoder 112 reduces the dimensions.
- the training may also be performed to update the relevant weight parameters when the dimensions used for the decoding are expanded.
- the training method according to this embodiment may update the parameters in the neural network by using the extracted object information 30 and simultaneously update the parameters of the encoder 112 and decoder 114 , which perform the de-identification.
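- A sketch of this simultaneous update, assuming a single optimizer over both parameter sets (the architecture, sizes and hyperparameters are placeholders):

```python
import itertools
import torch
import torch.nn as nn

encoder = nn.Conv2d(3, 4, kernel_size=4, stride=2, padding=1)           # de-identification parameter W1 (encoding side)
decoder = nn.ConvTranspose2d(4, 3, kernel_size=4, stride=2, padding=1)  # de-identification parameter W1 (decoding side)
task_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))      # weight parameter W2 (task network)

optimizer = torch.optim.Adam(
    itertools.chain(encoder.parameters(), decoder.parameters(), task_net.parameters()),
    lr=1e-4,
)

x, target = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = nn.CrossEntropyLoss()(task_net(decoder(encoder(x))), target)
optimizer.zero_grad()
loss.backward()   # backpropagation reaches W2 and W1 in the same pass
optimizer.step()  # updates the network and the encoder/decoder simultaneously
```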
- the degree of the de-identification may be determined based on the compression rate of the encoder 112, and it may thus be advantageous for protecting the personal information included in the image when the degree of the de-identification is increased.
- on the other hand, the prediction accuracy of the neural network using the decoded third image 22 may be lower as the degree of the de-identification increases.
- the training unit 120 may also relatively adjust the parameters used for the de-identification in consideration of the degree of the de-identification and the prediction accuracy. For example, the training unit 120 may update the encoding or decoding parameters to weaken information necessary to recognize the identity with the naked eyes and to strengthen information advantageous for extracting the feature information of the neural network.
- the training unit may further train the parameter information by using a correlation between the degree of the de-identification defining similarity between the third image 22 and the first image 20 and the prediction accuracy of the neural network for the object information 30 .
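- One plausible way to express this correlation as a training objective (an assumption, not a formula given by the disclosure) is to reward prediction accuracy while penalizing similarity between the third image and the first image:

```python
import torch.nn.functional as F

def combined_loss(obj_pred, target, third_image, first_image, lam=0.1):
    task_loss = F.cross_entropy(obj_pred, target)       # prediction-accuracy term
    similarity = -F.mse_loss(third_image, first_image)  # higher when the third image resembles the first
    # minimizing (task_loss + lam * similarity) trades accuracy off against
    # de-identification; in practice the similarity term would need to be bounded
    return task_loss + lam * similarity
```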
- the training method may train the encoder 112 , the decoder 114 and the neural network together, change the image to specify a portion that may be recognized well by a machine, not a human, and simultaneously allow the neural network to easily adapt thereto.
- the neural network may be trained to emphasize a portion necessary to mechanically extract the object from the image and weaken the rest.
- the service operation server 100 may include the de-identification unit 110 and the training unit 120 .
- the de-identification unit 110 may include the encoder 112 and the decoder 114 .
- the encoder 112 may compress the n-th dimensional first image 20 captured by the user terminal 10 and generate the compressed p-th dimensional second image 21 .
- the encoder 112 may compress the data values of the n-th dimensions in the first image 20 according to the encoding standard, and thus may generate the p-th dimensional second image 21 having the reduced dimensions compared to the n-th dimensions.
- the decoder 114 may change the second image 21 having the compressed information compared to the first image 20 to the q-th dimensional third image 22 .
- the q-th dimensions may be defined as the same dimensions as the n-th dimensions of the first image 20 .
- the third image 22 changed to the q-th dimensions may be decoded from the second image 21 and may have different data values from the original first image 20 .
- the third image 22 may include the information redefined after being encoded, and may include the data values in the form in which the object cannot be identified with the naked eye.
- the de-identification unit 110 may use the encoder 112 and the decoder 114 to use the second image 21 as an intermediate image and finally generate the third image 22 as the training data.
- the training unit 120 may use the third image 22 as the input data and extract the object information 30 by using the neural network.
- the neural network may include the plurality of layers sequentially performing the convolutional computation on the input data, and each layer may include the plurality of nodes including the data for the convolutional computation.
- the neural network may perform the convolutional computation on the data values in the q-th dimensions of the third image 22 . That is, the node in the convolution layer may perform the convolutional computation on the data values through the filter.
- each node may define, as the parameter, the weight used when transmitting the data value, and in this embodiment, the training may be performed to update the weight by using the object information 30 output from the neural network.
- the training unit 120 may also train the parameters of the de-identification unit 110 in addition to those of the neural network.
- the de-identification unit 110 may encode the first image 20 into the second image 21 and decode the same back into the q-th dimensions, and the number of the p-th dimensions defining the dimensions of the second image 21 may thus be determined through the training.
- the training unit 120 may train the neural network to output an optimal result for the de-identified image in consideration of both the degree of the de-identification and the degree of the prediction of the neural network.
- the service operation server 100 may include a separate database 200 storing the de-identified image.
- the iterative training may be required to be performed through a large amount of the training data for the training of the neural network, and it is thus also possible to store the de-identified image in the database 200 and train the neural network through the same.
- the service operation server 100 may de-identify the received first images 20 and then delete the same, and store only the de-identified second image 21 or third image 22 in the database 200 to prevent damage caused by leakage of the personal information.
- when the second image 21 is stored in the database 200, it is possible to further increase efficiency in the training of the neural network by accumulating a determined amount of the second images 21, and bulk-decoding the accumulated second images 21 to generate the third images 22.
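- A sketch of this bulk-decoding idea (the decoder and tensor sizes are placeholders):

```python
import torch
import torch.nn as nn

decoder = nn.ConvTranspose2d(4, 3, kernel_size=4, stride=2, padding=1)  # placeholder decoder
stored = [torch.randn(4, 16, 16) for _ in range(256)]  # accumulated p-th dimensional second images
batch = torch.stack(stored)                            # (256, 4, 16, 16)
with torch.no_grad():
    third_images = decoder(batch)                      # bulk-decoded q-th dimensional third images: (256, 3, 32, 32)
```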
- according to the disclosure, it is possible for the neural network to perform the training without directly using the personal information included in the image, because the de-identified image is used for the training of the network.
- the various embodiments described herein may be implemented in a computer-readable recording medium by using, for example, software, hardware or a combination thereof.
- the embodiments described herein may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors or electric units for performing other functions.
- the embodiments described in the disclosure may be implemented by the processor itself.
- the embodiments such as procedures and functions described in the specification may be implemented by separate software modules.
- Each of the software modules may perform one or more functions and operations described in the specification.
- a software code may be implemented as a software application written in a suitable programming language.
- the software code may be stored in a memory module and executed by a control module.
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR10-2020-0010271 | 2020-01-29 | ||
| KR1020200010271A KR102126197B1 (ko) | 2020-01-29 | 2020-01-29 | Method for training neural network by using de-identified image and server providing same |
| PCT/KR2021/001023 WO2021153971A1 (fr) | 2021-01-26 | Neural network training method using anonymized image and server implementing same |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20230076017A1 true US20230076017A1 (en) | 2023-03-09 |
Family
ID=71407813
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/795,597 Pending US20230076017A1 (en) | 2020-01-29 | 2021-01-26 | Method for training neural network by using de-identified image and server providing same |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US20230076017A1 (fr) |
| KR (1) | KR102126197B1 (fr) |
| WO (1) | WO2021153971A1 (fr) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20240161541A1 (en) * | 2022-11-14 | 2024-05-16 | DeCloak Intelligences Co. | Face recognition system and method |
| US20240177521A1 (en) * | 2022-11-14 | 2024-05-30 | DeCloak Intelligences Co. | Monitoring system and monitoring method |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR102126197B1 (ko) * | 2020-01-29 | 2020-06-24 | 주식회사 카카오뱅크 | Method for training neural network by using de-identified image and server providing same |
| KR102397837B1 (ko) * | 2020-06-25 | 2022-05-16 | 주식회사 자비스넷 | Apparatus, system and method for providing edge-computing-based security surveillance service |
| WO2022097766A1 (fr) * | 2020-11-04 | 2022-05-12 | 한국전자기술연구원 | Method and device for restoring masked area |
| WO2022097982A1 (fr) * | 2020-11-06 | 2022-05-12 | 주식회사 아이온커뮤니케이션즈 | Method and server for providing digital signature service based on face recognition |
Family Cites Families (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR101378295B1 (ko) * | 2009-12-18 | 2014-03-27 | 한국전자통신연구원 | Method and apparatus for privacy masking of video |
| MX2019000713A (es) * | 2016-07-18 | 2019-11-28 | Nant Holdings Ip Llc | Systems, apparatus and methods for distributed machine learning |
| KR101713089B1 (ko) * | 2016-10-31 | 2017-03-07 | (주)아이엠시티 | Automatic image mosaic processing apparatus and method for protecting personal information |
| US20180129900A1 (en) * | 2016-11-04 | 2018-05-10 | Siemens Healthcare Gmbh | Anonymous and Secure Classification Using a Deep Learning Network |
| KR101861520B1 (ko) * | 2017-03-15 | 2018-05-28 | 광주과학기술원 | Face de-identification method |
| KR101793510B1 (ko) * | 2017-03-27 | 2017-11-06 | 한밭대학교 산학협력단 | Face learning and recognition system and method |
| US20180374105A1 (en) * | 2017-05-26 | 2018-12-27 | Get Attached, Inc. | Leveraging an intermediate machine learning analysis |
| KR102126197B1 (ko) * | 2020-01-29 | 2020-06-24 | 주식회사 카카오뱅크 | Method for training neural network by using de-identified image and server providing same |
- 2020-01-29: KR KR1020200010271A patent/KR102126197B1/ko, active Active
- 2021-01-26: US US17/795,597 patent/US20230076017A1/en, active Pending
- 2021-01-26: WO PCT/KR2021/001023 patent/WO2021153971A1/fr, not_active Ceased
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100074322A1 (en) * | 2008-09-25 | 2010-03-25 | Renesas Technology Corp. | Image processing apparatus |
| US20180247193A1 (en) * | 2017-02-24 | 2018-08-30 | Xtract Technologies Inc. | Neural network training using compressed inputs |
| US20200177470A1 (en) * | 2018-07-03 | 2020-06-04 | Kabushiki Kaisha Ubitus | Method for enhancing quality of media |
| US20200034520A1 (en) * | 2018-07-26 | 2020-01-30 | Deeping Source Inc. | Method for training and testing obfuscation network capable of processing data to be concealed for privacy, and training device and testing device using the same |
| US20200167966A1 (en) * | 2018-11-27 | 2020-05-28 | Raytheon Company | Computer architecture for artificial image generation using auto-encoder |
| US20190273948A1 (en) * | 2019-01-08 | 2019-09-05 | Intel Corporation | Method and system of neural network loop filtering for video coding |
Non-Patent Citations (5)
| Title |
|---|
| Akyazi et al., "Learning-Based Image Compression using Convolutional Autoencoder and Wavelet Decomposition," 2019, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (Year: 2019) * |
| Al-Allaf, "Improving the Performance of Backpropagation Neural Network Algorithm for Image Compression/Decompression System," 2010, Journal of Computer Science 6 (11): 1347-1354 (Year: 2010) * |
| Baluja et al., "Task-specific color spaces and compression for machine-based object recognition", 2019, Technical Disclosure Commons, https://www.tdcommons.org/dpubs_series/2067 (Year: 2019) * |
| Dimililer, "Backpropagation Neural Network Implementation for Medical Image Compression," 2013, http://dx.doi.org/10.1155/2013/453098 (Year: 2013) * |
| Hooge, "Evaluating the Effectiveness of Automated Identity Masking (AIM) Methods with Human Perception and a Deep Convolutional Neural Network (CNN)," 2019, arXiv (Year: 2019) * |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2021153971A1 (fr) | 2021-08-05 |
| KR102126197B1 (ko) | 2020-06-24 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: KAKAOBANK CORP., KOREA, REPUBLIC OF. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CHOI, HO YEOL;REEL/FRAME:060639/0620. Effective date: 20220726 |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |