
CN109409962A - Image processing method, device, electronic equipment, computer readable storage medium - Google Patents


Info

Publication number
CN109409962A
CN109409962A
Authority
CN
China
Prior art keywords
face
customer
image
score
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811328072.3A
Other languages
Chinese (zh)
Inventor
李林泽
姚聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201811328072.3A priority Critical patent/CN109409962A/en
Publication of CN109409962A publication Critical patent/CN109409962A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0281Customer communication at a business location, e.g. providing product or service information, consulting
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Finance (AREA)
  • General Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Development Economics (AREA)
  • Accounting & Taxation (AREA)
  • General Business, Economics & Management (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. The method comprises: acquiring a face image of a customer to be identified; calculating the similarity between the face image and each pre-saved customer image to obtain at least one similarity; judging whether none of the at least one similarity reaches a pre-saved similarity threshold; if so, determining that the customer is a first-time-to-store customer and saving the face image as a new customer image; otherwise, determining that the customer is a repeat-to-store customer. In this way, whether a customer arriving at the store is new or returning can be recognized automatically, and the store owner or sales staff no longer need to rely on their own memory to identify customers.

Description

Image processing method, image processing device, electronic equipment and computer readable storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
Background
With the emergence and promotion of new-retail and smart-retail concepts, physical stores increasingly want to provide users with a better experience; for example, a store owner or salesperson may want to know whether a customer is a new customer arriving at the store for the first time or an old customer who has visited many times, so as to provide targeted services.
However, store owners and sales staff have limited manpower and memory, so it is difficult for them to remember every customer who has visited the store many times, and therefore difficult to know how many times a given customer has arrived at the store.
Disclosure of Invention
Embodiments of the present invention provide an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium to alleviate the above problem.
In a first aspect, an embodiment of the present invention provides an image processing method, the method including: acquiring a face image of a customer to be identified; calculating the similarity between the face image and pre-stored customer images to obtain at least one similarity; judging whether none of the at least one similarity reaches a pre-stored similarity threshold; if so, determining that the customer is a first-time-to-store customer and saving the face image as a new customer image; if not, determining that the customer is a repeat-to-store customer.
With reference to an embodiment of the first aspect, the method further includes: extracting age characteristics of the face image; when the age characteristic represents that the customer is in a preset age stage, determining a first threshold value stored in advance as the similarity threshold value; otherwise, determining a second threshold value which is saved in advance as the similarity threshold value; wherein the first threshold is greater than the second threshold.
With reference to an embodiment of the first aspect, the method further includes: acquiring a test set comprising a plurality of sample pictures; testing the accuracy of the test set under different thresholds; and taking the obtained threshold value under the highest accuracy as the similarity threshold value.
With reference to an embodiment of the first aspect, the method further includes: extracting feature information of the face image; performing score evaluation based on the characteristic information to obtain a face quality score; judging whether the face quality score reaches a pre-stored quality score threshold value; correspondingly, calculating the similarity between the face image and the pre-stored customer image comprises the following steps: and when the face quality score reaches the quality score threshold value, calculating the similarity between the face image and a pre-stored customer image.
With reference to an implementation manner of the first aspect, each of the repeat-to-store customers corresponds to one of the customer images, and each of the customer images corresponds to one of the pre-stored original face quality scores, and the method further includes: replacing the customer image with the face image when the face quality score is judged to be greater than the original face quality score of the customer image corresponding to the repeat-to-store customer, wherein the similarity between the face image and the customer image reaches the similarity threshold.
In combination with an embodiment of the first aspect, each of the repeat-to-store customers has a plurality of customer images, each of the customer images having a pre-stored original face quality score, and the method further includes: when the face quality score is judged to be larger than a plurality of original face quality scores corresponding to the repeat-to-store customers, determining a customer image corresponding to the minimum original face quality score in the plurality of original face quality scores as a target customer image; replacing the target customer image with the facial image; wherein the similarity between the face image and the target customer image reaches the similarity threshold.
With reference to an implementation manner of the first aspect, the feature information includes face occlusion feature information, face blurriness feature information, and face pose feature information, and performing score evaluation based on the feature information to obtain a face quality score includes: obtaining a pose score based on the face pose feature information; obtaining a face blurriness value based on the face blurriness feature information; obtaining a face occlusion score based on the face occlusion feature information; and calculating the face quality score based on the formula face_quality_score = 1/(f1 × pose + f2 × blurriness + f3 × occlusion), where face_quality_score is the face quality score, pose is the pose score, blurriness is the face blurriness value, occlusion is the face occlusion score, and f1, f2, and f3 are preset weight values.
With reference to an implementation manner of the first aspect, obtaining a pose score based on the face pose feature information includes: acquiring the face pose feature information according to a pre-stored face pose model, the face pose feature information including the three Euler angle values corresponding to the face image; and calculating the pose score based on the formula pose = p1 × yaw + p2 × roll + p3 × pitch, where pose is the pose score, pitch is the pitch angle about the X axis, yaw is the yaw angle about the Y axis, roll is the roll angle about the Z axis, and p1, p2, and p3 are preset weight values. Obtaining a face blurriness value based on the face blurriness feature information includes: acquiring, according to a pre-stored face blurriness model, face blurriness feature information that includes blurriness, where blurriness is the face blurriness value. Obtaining a face occlusion score based on the face occlusion feature information includes: acquiring the areas of a plurality of key regions corresponding to the face image according to a pre-stored face key-point detection model; acquiring the face occlusion feature information according to a pre-stored face occlusion model, the face occlusion feature information including the occluded region corresponding to each key region; for each key region, dividing the overlap area between the key region and its corresponding occluded region by the area of that occluded region to obtain the occlusion score of the key region; and calculating the face occlusion score based on the formula occlusion = k1 × A1_occlusion + k2 × A2_occlusion + … + kn × An_occlusion, where occlusion is the face occlusion score, A1_occlusion, A2_occlusion, …, An_occlusion are the occlusion scores of the key regions, and k1, k2, …, kn are preset score weight values.
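A minimal numeric sketch of the three component scores and the combined quality formula above; all weight values below are placeholders, since the patent only states that they are preset:

```python
def pose_score(yaw, roll, pitch, p=(0.4, 0.2, 0.4)):
    # pose = p1*yaw + p2*roll + p3*pitch; taking absolute Euler angles is an assumption
    return p[0] * abs(yaw) + p[1] * abs(roll) + p[2] * abs(pitch)

def occlusion_score(region_occlusions, k):
    # occlusion = k1*A1_occlusion + ... + kn*An_occlusion
    return sum(ki * ai for ki, ai in zip(k, region_occlusions))

def face_quality_score(pose, blurriness, occlusion, f=(0.25, 0.5, 0.25)):
    # face_quality_score = 1 / (f1*pose + f2*blurriness + f3*occlusion)
    return 1.0 / (f[0] * pose + f[1] * blurriness + f[2] * occlusion)
```

Note that the reciprocal makes the quality score larger when pose deviation, blurriness, and occlusion are all small, which matches the intent of keeping only high-quality face images.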
With reference to an implementation manner of the first aspect, the customer images corresponding to the same repeat-to-store customer are associated with a pre-stored to-store count, and the method further includes: incrementing the to-store count of a customer determined to be a repeat-to-store customer by one; or, setting the to-store count of a customer determined to be a first-time-to-store customer to one.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including: the acquisition module is used for acquiring a face image of a customer to be identified; the calculating module is used for calculating the similarity between the face image and a pre-stored customer image to obtain at least one similarity; the judgment execution module is used for judging whether all the at least one similarity does not reach a pre-stored similarity threshold value; when the judgment result is yes, determining that the customer is a first-time customer arriving at a store, and storing the face image as a new customer image; and when the judgment result is negative, determining that the customer is a repeat-to-store customer.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor and a memory connected to the processor, where the memory stores a computer program, and when the computer program is executed by the processor, the electronic device is caused to perform the method described in any one of the embodiments of the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the method of any one of the first aspect.
Compared with the prior art, in the image processing method, image processing apparatus, electronic device, and computer-readable storage medium provided by the embodiments of the invention, after the electronic device acquires the face image of a customer to be identified, it calculates the similarity between the face image and the pre-stored customer images to obtain at least one similarity; it then judges whether none of the at least one similarity reaches a pre-stored similarity threshold; if so, the customer is determined to be a first-time-to-store customer and the face image is saved as a new customer image; otherwise, the customer is determined to be a repeat-to-store customer. With this method, whether a customer arriving at the store is a new customer or a repeat customer can be recognized automatically, and the store owner or salesperson does not need to rely on his or her own memory to identify customers.
Additional features and advantages of the embodiments disclosed herein will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the embodiments disclosed herein.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a schematic diagram of an electronic device according to an embodiment of the present invention;
FIG. 2 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a second flowchart of an image processing method according to an embodiment of the present invention;
fig. 4a is a schematic diagram of a face key point detection result provided in the embodiment of the present invention;
FIG. 4b is a schematic diagram of a detection result of a face occlusion region according to an embodiment of the present invention;
FIG. 4c is a schematic diagram illustrating a method for calculating an occlusion score according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of three-axis Euler angles provided by an embodiment of the present invention;
fig. 6 is a block diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
In conventional solutions, identifying whether a customer is visiting a store for the first time or has visited many times typically relies on human memory. However, store owners and sales staff have limited manpower and memory, so it is difficult for them to remember every customer who has visited many times, and therefore difficult to know how many times a given customer has arrived at the store.
In order to solve the above problem, embodiments of the present invention provide an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium, which can be implemented by corresponding software, hardware, and a combination of software and hardware. The following describes embodiments of the present invention in detail.
First, an electronic device 100 for implementing an image processing method and apparatus according to an embodiment of the present invention is described with reference to fig. 1.
The electronic device 100 may include a processor 110, a memory 120, and an image processing apparatus.
The processor 110 and the memory 120 may be interconnected by a bus system and/or another form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, not limiting; the electronic device 100 may have other components and structures as desired. The image processing apparatus includes at least one software function module, which may be stored in the memory 120 in the form of software or firmware or solidified in an operating system (OS) of the electronic device 100. The processor 110 is configured to execute executable modules stored in the memory 120, such as the software function modules or computer programs included in the image processing apparatus.
The memory 120 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, or flash memory. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 110 to implement the functions described in the embodiments of the invention below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The processor 110 may be an integrated circuit chip having signal processing capabilities. The Processor 110 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components. Processor 110 may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present invention.
Referring to fig. 2, fig. 2 is a flowchart illustrating an image processing method according to an embodiment of the present invention. The flow shown in fig. 2 will be described in detail below, and the method includes:
step S110: and acquiring a face image of the customer to be identified.
As an alternative implementation mode, an intelligent camera pre-packaged with a face detection module can be installed at a door entrance or a cash register of a new retail place to acquire face images of store customers. After the intelligent camera detects a face image and acquires the face image, the face image can be sent to the electronic device 100, so that the electronic device 100 can acquire the face image for recognition.
The electronic device 100 may be connected to the smart camera in a wired or wireless manner to communicate data. Of course, the smart camera may also be integrated within the electronic device 100. The larger the angle between the intelligent camera and the face is, the worse the feature representation of the face is, so that the position of the intelligent camera can be kept horizontal with the face of a customer as much as possible to ensure the quality of the acquired face image.
Step S120: and calculating the similarity between the face image and a pre-stored customer image to obtain at least one similarity.
Optionally, a plurality of customer IDs may be pre-established in a local database or a cloud database of the electronic device 100, with the customer images of each customer stored under the corresponding customer ID. A single customer image may be stored under one customer ID, or multiple customer images taken from different shooting angles, or multiple customer images taken at different times.
After the face image is acquired, the electronic device 100 may perform similarity calculation between the face image and each customer image pre-stored in the database.
Because the feature vectors extracted from two face images of the same person are close to each other, optionally, a multi-dimensional feature vector (for example, a 6400-dimensional feature vector) may be extracted from the face image and from each customer image through a convolutional neural network. The Euclidean distance between the multi-dimensional feature vector of the face image and that of each customer image is then calculated and normalized to obtain a plurality of distance values; these distance values are filled into a pre-constructed M × N similarity matrix, where M and N are positive integers, and the resulting values of the similarity matrix are the similarities between the face image and the customer images.
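A sketch of the distance-based similarity just described; mapping the normalized Euclidean distance to a 0..1 similarity is one common convention and an assumption here, since the exact normalization is not specified in the text:

```python
import numpy as np

def similarity(feat_a, feat_b):
    """Similarity from the normalized Euclidean distance between two
    feature vectors (e.g. 6400-dimensional CNN embeddings)."""
    a = np.asarray(feat_a, dtype=float)
    b = np.asarray(feat_b, dtype=float)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    d = np.linalg.norm(a - b)  # in [0, 2] for unit vectors
    return 1.0 - d / 2.0       # 1 = identical direction, 0 = opposite

def similarity_matrix(face_feats, customer_feats):
    """M x N matrix of similarities between M face images and N customer images."""
    return np.array([[similarity(f, c) for c in customer_feats]
                     for f in face_feats])
```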
Step S130: and judging whether none of the at least one similarity reaches a pre-stored similarity threshold.
When determining the similarity threshold, optionally, a test set including a plurality of sample pictures (for example, ten thousand pictures) may be established first, then the accuracy of the test set under different thresholds is tested through experiments, and finally the obtained threshold under the highest accuracy is used as the similarity threshold.
Of course, the similarity threshold may also be determined in other manners, such as directly obtaining an empirical value determined by a developer according to personal experience.
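The test-set sweep described above can be sketched as follows; the candidate threshold grid and the 0..1 similarity scale are illustrative assumptions:

```python
def best_threshold(labelled_pairs, candidates):
    """labelled_pairs: (similarity, same_person) tuples from a labelled test set.
    Returns the candidate threshold giving the highest verification accuracy."""
    def accuracy(t):
        correct = sum((sim >= t) == same for sim, same in labelled_pairs)
        return correct / len(labelled_pairs)
    return max(candidates, key=accuracy)
```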
In addition, similarity judgments for very young and very old people tend to have larger errors, which makes the result less robust. Therefore, before the similarity judgment, the similarity threshold can be optimized according to age features. As an optional implementation manner, before performing the similarity determination, the electronic device 100 may extract an age feature from the face image through an age extraction model; when the age feature indicates that the customer is in a preset age stage, a pre-stored first threshold is used as the similarity threshold; otherwise, a pre-stored second threshold is used as the similarity threshold. That is, when the customer corresponding to the face image is judged to be in the preset age stage, the larger similarity threshold is adopted.
Optionally, the preset age stage may include one age stage, for example, 0 to 12 years or over 60 years, and may also include multiple age stages, for example, 0 to 12 years and over 60 years.
It is worth pointing out that the first threshold is greater than the second threshold. The second threshold may be the similarity threshold determined from the test set as described above, and the first threshold may be obtained by adding a small constant to the second threshold; of course, both thresholds remain less than 1.
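As a toy sketch of this age-dependent threshold selection, where the concrete numbers, the added constant, and the age ranges are all assumptions for illustration:

```python
def similarity_threshold(age, second=0.75, bump=0.05,
                         sensitive_stages=((0, 12), (60, 200))):
    """Return the first threshold (= second + small constant, still < 1)
    for sensitive age stages, else the second (test-set-derived) threshold."""
    for low, high in sensitive_stages:
        if low <= age <= high:
            return min(second + bump, 0.99)  # first threshold > second, < 1
    return second
```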
When the age extraction model is trained, a large amount of face sample data can be obtained in advance, wherein each face sample data comprises a real age data label which is marked artificially and corresponds to the face sample, and then the large amount of face sample data is input into a convolutional neural network for training, so that the age extraction model is obtained through training.
Step S140: if yes, determining that the customer is a first-time customer in the store, and saving the face image as a new customer image.
Of course, when the face image is stored in the database as a new customer image, it is necessary to establish a new customer ID for the first-time store customer in the database, and then store and associate the face image with the new customer ID.
When a new customer ID is established, optionally, the new ID may be formed by adding one to the largest existing customer ID, or it may be generated by random encoding.
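A minimal in-memory stand-in for the largest-ID-plus-one scheme; the dict is a hypothetical placeholder for the local or cloud database:

```python
customers = {}  # hypothetical stand-in: customer_id -> list of customer images

def register_first_time_customer(face_image):
    """Create a new customer ID (max existing ID + 1) and store the face
    image under it as the customer's first customer image."""
    new_id = max(customers, default=0) + 1
    customers[new_id] = [face_image]
    return new_id
```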
Step S150: if not, the customer is determined to be a repeat-to-store customer.
As an alternative embodiment, the electronic device 100 may also be connected to an audio device. After determining that the customer corresponding to the facial image is a repeat-to-store customer, the electronic device 100 may further prompt the salesperson through an audio device that the customer is an old customer; after determining that the customer corresponding to the facial image is a first-time-to-store customer, the electronic device 100 may further prompt the salesperson that the customer is a first-time-to-store customer.
As another alternative, the to-store count corresponding to each customer ID may also be stored in the database. After determining whether the customer is a repeat-to-store customer or a first-time-to-store customer, the to-store count corresponding to that customer's ID can then be updated.
In this embodiment, optionally, after the customer is determined to be a repeat-to-store customer, the corresponding to-store count of that repeat-to-store customer may be incremented by one. After the customer is determined to be a first-time-to-store customer and the face image is stored in the database as a new customer image, the to-store count can simply be set to one when the new customer ID is created, since the customer is new; this completes the equivalent of incrementing the count.
It should be noted that, when the customer is determined to be a repeat-to-store customer and that customer's to-store count needs to be increased, the similarity between the face image and customer images belonging to different customer IDs in the database may all reach the similarity threshold. For example, if the similarity threshold is 75%, the similarity between the face image and the customer image of repeat-to-store customer A may reach 76% while the similarity with the customer image of repeat-to-store customer B reaches 80%. As a fault-tolerance measure, among the repeat-to-store customers whose customer images passed the similarity comparison, the one with the highest similarity may be considered the same person as the customer in the face image. In the above example, the customer corresponding to the face image is therefore determined to be customer B, whose customer image has a similarity of 80%, and the to-store count of customer B is increased by one.
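Putting the threshold check and the highest-similarity fault-tolerance rule together, a minimal decision sketch might look like this; the function name and the 0..1 similarity scale are assumptions:

```python
def identify_customer(similarities, threshold):
    """Decide new vs. repeat customer from similarity scores.

    similarities: similarity between the acquired face image and each
    pre-stored customer image (one entry per stored customer image).
    Returns ("new", None) when no similarity reaches the threshold,
    otherwise ("repeat", index_of_best_match) using the highest
    similarity as the fault-tolerant tie-breaker.
    """
    if all(s < threshold for s in similarities):
        return ("new", None)  # first-time customer: caller saves the image
    best = max(range(len(similarities)), key=lambda i: similarities[i])
    return ("repeat", best)
```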
Optionally, the storage period of the database may be adjusted according to actual needs, so that the merchant can better analyze business conditions and customer service. A daily face database may be established if the merchant needs the number of customers who visit the store multiple times a day, and a long-term face database may be established if the merchant needs the number of customers who visit multiple times a month.
Due to the unique complexity of new-retail scenes (for example, low quality of the images captured by the camera, large amounts of redundant information, variable clothing and dress, and frequent occlusion by door curtains, goods, and the like), face recognition in such scenes is prone to a high error rate.
In order to improve the recognition accuracy, as an optional implementation manner, before the similarity between the face image and the customer image is calculated in step S120, the face image may be further subjected to face quality score evaluation, so as to screen out a face image with a qualified face quality score (which reaches a quality score threshold value) and perform similarity calculation. Referring to fig. 3, the process of performing face quality score evaluation may include:
step S111: and extracting the characteristic information of the face image.
Step S112: performing score evaluation based on the characteristic information to obtain a face quality score;
step S113: and judging whether the face quality score reaches a pre-stored quality score threshold value.
Correspondingly, when the face quality score reaches the quality score threshold, step S120 is executed.
That is, when the electronic device 100 determines that the face quality score of the captured face image reaches the quality score threshold, the similarity calculation is performed on the face image and the customer image, and when it determines that the face quality score of the captured face image does not reach the quality score threshold, the face image may be directly discarded.
As an alternative, each customer image may correspond to a pre-stored original face quality score. In this embodiment of face quality score evaluation for face images, optionally, if each repeat-to-store customer has one customer image, then when the customer is determined to be a repeat-to-store customer and the face quality score of the current face image is greater than the original face quality score corresponding to that repeat-to-store customer, the stored customer image is replaced with the face image. For example, when the customer corresponding to the current face image is determined to be repeat-to-store customer A and the face quality score of the current face image is greater than the original face quality score of the customer image stored in the database for customer A, the electronic device 100 replaces that customer image in the database with the current face image.
Optionally, a repeat-to-store customer may have multiple customer images stored in the database at the same time (for example, three customer images: customer image A, customer image B, and customer image C), each with a pre-stored original face quality score. The electronic device 100 may then replace customer image A with the face image when it determines that the customer is a repeat-to-store customer and the face quality score of the face image is greater than the original face quality score corresponding to customer image A. If the face quality score is greater than the original face quality scores corresponding to several of the customer images, for example greater than those of both customer image A and customer image B, the electronic device 100 replaces the customer image with the lowest original face quality score among them; for example, if the face quality score of the face image > the original face quality score of customer image A > the original face quality score of customer image B, the electronic device 100 replaces customer image B with the face image.
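The replacement policy described above (overwrite the stored image with the lowest original face quality score, but only when the new image scores higher) can be sketched as follows; the list-of-tuples data layout is an assumption for illustration:

```python
def maybe_replace_customer_image(stored, new_image, new_score):
    """stored: list of (image_id, original_face_quality_score) tuples for one
    repeat-to-store customer. Replaces the lowest-scoring stored image in place
    when the new face image scores higher; returns the replaced image_id, or
    None if nothing was replaced."""
    idx = min(range(len(stored)), key=lambda i: stored[i][1])
    if new_score > stored[idx][1]:
        replaced_id = stored[idx][0]
        stored[idx] = (new_image, new_score)
        return replaced_id
    return None

# Usage: the new image (0.8) beats images A (0.7) and B (0.5);
# the lowest-scoring one, B, is the one replaced.
gallery = [("A", 0.7), ("B", 0.5), ("C", 0.9)]
assert maybe_replace_customer_image(gallery, "new", 0.8) == "B"
```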
The calculation of the face quality score of a face image by a pre-trained model will be described below.
The pre-trained model may include a face occlusion model, which helps avoid the high recognition error rate easily caused by changeable clothing and adornment or by occluding items worn on the face. In addition, the pre-trained model may also include at least one of a face pose model, a face ambiguity model, and an age model.
Model training process:
When training a model, the electronic device 100 may obtain a large amount of face sample data in advance, where each face sample includes a manually annotated label corresponding to the feature the model needs to learn. The face sample data is then fed into a convolutional neural network for training, yielding the corresponding model.
When the face pose model needs to be trained, the labels included in the face sample data are manually annotated face pose values; when the face ambiguity model needs to be trained, the labels are manually annotated face ambiguity values; when the face occlusion model needs to be trained, the labels are manually annotated occlusion types; and when the age model needs to be trained, the labels are the real ages of the faces corresponding to the face sample data.
The following description takes a pre-trained model that includes a face occlusion model, a face pose model, and a face ambiguity model as an example. Correspondingly, the feature information extracted from the face image by the pre-trained model may include: face occlusion feature information, face ambiguity feature information, and face pose feature information.
The electronic device 100 may calculate a face occlusion score, a face ambiguity value, and a pose score based on the face occlusion feature information, the face ambiguity feature information, and the face pose feature information, respectively. In this embodiment, performing score evaluation based on the feature information to obtain a face quality score includes:
The face quality score is calculated based on the formula face_quality_score = 1/(f1 × pose + f2 × blurriness + f3 × occlusion), where face_quality_score is the face quality score, pose is the pose score obtained from the face pose feature information, blurriness is the face ambiguity value obtained from the face ambiguity feature information, occlusion is the face occlusion score obtained from the face occlusion feature information, and f1, f2 and f3 are preset normalized weight values. f1, f2 and f3 can be adjusted according to actual conditions; the higher the face quality score, the higher the face quality and the more easily the face is recognized correctly.
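The formula can be transcribed directly; the weight values below are placeholders, since the text only says f1-f3 are preset and adjustable:

```python
def face_quality_score(pose, blurriness, occlusion, f1=0.4, f2=0.3, f3=0.3):
    """face_quality_score = 1 / (f1*pose + f2*blurriness + f3*occlusion).

    pose, blurriness and occlusion are all 'badness' measures, so a smaller
    weighted sum yields a higher quality score. f1-f3 are illustrative."""
    return 1.0 / (f1 * pose + f2 * blurriness + f3 * occlusion)

# A face with low pose deviation, low blur and low occlusion scores higher
# than a degraded one.
good = face_quality_score(pose=0.1, blurriness=0.1, occlusion=0.1)
bad = face_quality_score(pose=0.9, blurriness=0.8, occlusion=0.7)
assert good > bad
```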
The following introduces how the electronic device 100 calculates the face occlusion score, the face ambiguity value, and the pose score based on the face occlusion feature information, the face ambiguity feature information, and the face pose feature information, respectively.
When calculating the face occlusion score occlusion, a plurality of (for example, 84) face key points are first detected by an existing face key point detection model, as shown in fig. 4a, and the face occlusion model is then used to detect the occluded regions of the face, as shown in fig. 4b. The face occlusion model gives the occluded fraction of each face key region (the key regions include the left/right eyebrows, the left/right eyes, the nose, and the mouth), calculated as shown in fig. 4c. Taking the mouth occlusion score (mouth_occlusion) as an example, mouth_occlusion = (area of overlap between the mouth region detected from the key points and the occluded region) / (area of the mouth region detected from the key points). The left eyebrow occlusion score (left_eyebrow_occlusion), right eyebrow occlusion score (right_eyebrow_occlusion), left eye occlusion score (left_eye_occlusion), right eye occlusion score (right_eye_occlusion), nose occlusion score (nose_occlusion), and face contour occlusion score (contour_occlusion) are calculated similarly. The overall occlusion score is then occlusion = k1 × contour_occlusion + k2 × mouth_occlusion + k3 × nose_occlusion + k4 × left_eye_occlusion + k5 × right_eye_occlusion + k6 × left_eyebrow_occlusion + k7 × right_eyebrow_occlusion, where k1-k7 are the weight values of the respective scores. In general, k1 > k3 > k2 > k4 = k5 > k6 = k7. The higher the occlusion score, the more severely the face is occluded and the lower the face quality score, i.e., the harder it is to determine to whom the face belongs.
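With the key regions and occluded regions approximated as axis-aligned rectangles, the per-region occlusion fraction (overlap area / region area) and the weighted total can be sketched as follows; the rectangle representation and the weight values are illustrative assumptions, not the patent's exact geometry:

```python
def overlap_area(a, b):
    """Intersection area of two rectangles given as (x1, y1, x2, y2)."""
    w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    return w * h

def region_occlusion(region, occluded):
    """Occlusion fraction of one key region, e.g. mouth_occlusion:
    overlap area divided by the key region's own area."""
    area = (region[2] - region[0]) * (region[3] - region[1])
    return overlap_area(region, occluded) / area

def total_occlusion(fractions, weights):
    """occlusion = k1*contour + k2*mouth + k3*nose + ... (weighted sum)."""
    return sum(k * f for k, f in zip(weights, fractions))

# Half of the mouth rectangle is covered by the occluded region.
mouth = (0.0, 0.0, 2.0, 1.0)
occluded = (1.0, 0.0, 3.0, 1.0)
assert region_occlusion(mouth, occluded) == 0.5
```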
When calculating the face ambiguity value blurriness, the face ambiguity model outputs, for the input face image, a blurriness value between 0 and 1. The larger the value, the blurrier the face and the lower the face quality score, i.e., the harder it is to determine to whom the face belongs.
As shown in fig. 5, when calculating the pose score pose, the face pose model outputs three Euler angle values of the face for the input face image: yaw, roll, and pitch, where pitch is the pitch angle rotating around the X axis, yaw is the yaw angle rotating around the Y axis, and roll is the roll angle rotating around the Z axis. The face pose score is pose = p1 × yaw + p2 × roll + p3 × pitch, where p1, p2 and p3 are the weights of the respective angles. In general, p1 > p3 > p2. The higher the pose score, the lower the face quality score, and the harder it is to determine to whom the face belongs.
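A direct transcription of the pose score; taking absolute values is an assumption here, since a pose model may report negative Euler angles and the text treats larger deviation from frontal as worse. The weights satisfy p1 > p3 > p2 as stated:

```python
def pose_score(yaw, roll, pitch, p1=0.5, p2=0.2, p3=0.3):
    """pose = p1*yaw + p2*roll + p3*pitch, with p1 > p3 > p2.

    abs() is an assumption: the text does not say how negative angles are
    handled, but the score should grow with deviation from a frontal face."""
    return p1 * abs(yaw) + p2 * abs(roll) + p3 * abs(pitch)

# A frontal face scores 0; turning the head increases the pose score
# (and thus lowers the face quality score).
assert pose_score(0, 0, 0) == 0
assert pose_score(30, 0, 0) > pose_score(0, 0, 10)
```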
An embodiment of the invention provides an image processing method applied to an electronic device 100. After the electronic device 100 acquires a face image of a customer to be identified, it calculates the similarity between the face image and each pre-stored customer image to obtain at least one similarity; it then judges whether none of the at least one similarity reaches a pre-stored similarity threshold; if so, the customer is determined to be a first-time customer arriving at the store and the face image is saved as a new customer image; otherwise, the customer is determined to be a repeat-to-store customer. With this method, whether a customer arriving at the store is a new customer or a repeat customer can be recognized automatically, and the store or salesperson no longer needs to recognize customers from memory.
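The core decision of the method (first-time vs. repeat customer, based on whether any stored similarity reaches the threshold) is small enough to sketch end to end; the list-of-similarities input is an assumption about how the gallery comparison is surfaced:

```python
def classify_customer(similarities, threshold):
    """similarities: one similarity value per pre-stored customer image.
    The customer is a first-time customer iff none of the similarities
    reaches the threshold (an empty gallery also means first-time)."""
    if all(s < threshold for s in similarities):
        return "first_time"  # save the face image as a new customer image
    return "repeat"

# Usage against an assumed threshold of 0.8:
assert classify_customer([], 0.8) == "first_time"          # empty database
assert classify_customer([0.3, 0.5], 0.8) == "first_time"  # no match
assert classify_customer([0.3, 0.9], 0.8) == "repeat"      # matched a stored image
```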
Corresponding to the image processing method provided in fig. 2, please refer to fig. 6, an embodiment of the invention further provides an image processing apparatus 400, where the image processing apparatus 400 may include:
an obtaining module 410, configured to obtain a face image of a customer to be identified;
a calculating module 420, configured to calculate a similarity between the face image and a pre-stored customer image to obtain at least one similarity;
a judgment execution module 430, configured to judge whether none of the at least one similarity reaches a pre-stored similarity threshold; when the judgment result is yes, determining that the customer is a first-time customer arriving at a store, and storing the face image as a new customer image; and when the judgment result is negative, determining that the customer is a repeat-to-store customer.
Optionally, the apparatus may further include: the device comprises an extraction module, an evaluation module and a judgment module.
The extraction module is used for extracting the characteristic information of the face image;
the evaluation module is used for performing score evaluation based on the characteristic information to obtain a face quality score;
and the judging module is used for judging whether the face quality score reaches a pre-stored quality score threshold value.
Correspondingly, the calculating module 420 is configured to calculate the similarity between the face image and the pre-stored customer image only when the judging module determines that the face quality score reaches the quality score threshold.
Optionally, the obtaining module 410 may be further configured to extract an age feature of the face image;
the judgment execution module 430 may be further configured to determine a first threshold value stored in advance as the similarity threshold value when it is judged that the age characteristic represents that the customer is in a preset age stage; otherwise, determining a second threshold value which is saved in advance as the similarity threshold value; wherein the first threshold is greater than the second threshold.
Optionally, the apparatus may further include a testing module and a determining module.
The obtaining module 410 may be further configured to obtain a test set including a plurality of sample pictures;
the test module is used for testing the accuracy of the test set under different thresholds;
and the determining module is used for taking the obtained threshold value under the highest accuracy as the similarity threshold value.
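The test-set-based threshold selection described by these modules amounts to a grid search for the candidate threshold with the best accuracy on labeled sample pairs; the pair-based evaluation below is an assumption about how "accuracy" is measured, since the text does not spell it out:

```python
def pick_similarity_threshold(pairs, candidates):
    """pairs: list of (similarity, is_same_person) tuples from the test set.
    Returns the candidate threshold with the highest accuracy, where a pair
    counts as correct when (similarity >= threshold) == is_same_person."""
    def accuracy(t):
        return sum((s >= t) == same for s, same in pairs) / len(pairs)
    return max(candidates, key=accuracy)

# Usage: 0.7 is the only candidate that separates all five pairs correctly.
pairs = [(0.9, True), (0.8, True), (0.4, False), (0.2, False), (0.6, False)]
assert pick_similarity_threshold(pairs, [0.3, 0.5, 0.7]) == 0.7
```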
Optionally, each of the repeat-to-store customers corresponds to one of the customer images, and each of the customer images corresponds to one of the original face quality scores stored in advance.
The judgment execution module 430 may be further configured to replace the customer image with the face image when it is determined that the face quality score is greater than the original face quality score of the customer image corresponding to the repeat-to-store customer, where the similarity between the face image and the customer image reaches the similarity threshold.
Optionally, each of the repeat-to-store customers corresponds to a plurality of customer images, and each of the customer images corresponds to a pre-stored original face quality score.
The judgment execution module 430 may be further configured to determine, when it is determined that the face quality score is greater than a plurality of original face quality scores corresponding to the repeat-to-store customer, the customer image corresponding to the smallest of those original face quality scores as a target customer image, and to replace the target customer image with the face image; wherein the similarity between the face image and the target customer image reaches the similarity threshold.
Optionally, the feature information may include face occlusion feature information, face ambiguity feature information, and face pose feature information, and the evaluation module is configured to calculate the face quality score based on the formula face_quality_score = 1/(f1 × pose + f2 × blurriness + f3 × occlusion), where face_quality_score is the face quality score, pose is the pose score obtained from the face pose feature information, blurriness is the face ambiguity value obtained from the face ambiguity feature information, occlusion is the face occlusion score obtained from the face occlusion feature information, and f1, f2 and f3 are preset weight values.
The computing module 420 is further configured to obtain the face pose feature information according to a pre-stored face pose model, where the face pose feature information includes three Euler angle values corresponding to the face image, and to calculate the pose score based on the formula pose = p1 × yaw + p2 × roll + p3 × pitch, where pose is the pose score, pitch is the pitch angle rotating around the X axis, yaw is the yaw angle rotating around the Y axis, roll is the roll angle rotating around the Z axis, and p1, p2 and p3 are preset weight values;
the computing module 420 is further configured to obtain the face ambiguity feature information, including blurriness, according to a pre-stored face ambiguity model, where blurriness is the face ambiguity value;
and the computing module 420 is further configured to obtain the areas of a plurality of key regions corresponding to the face image according to a pre-stored face key point detection model; to obtain the face occlusion feature information according to a pre-stored face occlusion model, where the face occlusion feature information includes the occluded region corresponding to each key region; for each key region, to divide the area of overlap between the key region and its corresponding occluded region by the area of the key region, obtaining the occlusion score of that key region; and to calculate the face occlusion score based on the formula occlusion = k1 × A1_occlusion + k2 × A2_occlusion + … + kn × An_occlusion, where occlusion is the face occlusion score, A1_occlusion, A2_occlusion, … An_occlusion are the occlusion scores of the respective key regions, and k1, k2, … kn are preset score weight values.
Optionally, the customer images corresponding to the same repeat-to-store customer together correspond to a pre-stored number of store visits. The apparatus may further include an increment module, configured to increment the number of store visits by one when the customer is determined to be a repeat-to-store customer, or to set the number of store visits to one when the customer is determined to be a first-time customer arriving at the store.
The implementation principle and technical effects of the apparatus provided in this embodiment are the same as those of the foregoing method embodiments. For brevity, where this apparatus embodiment does not mention a detail, reference may be made to the corresponding contents of fig. 2 to 5 in the method embodiments.
Furthermore, an embodiment of the present invention further provides an electronic device, which includes a processor and a memory connected to the processor, where the memory stores a computer program, and when the computer program is executed by the processor, the electronic device is caused to execute the image processing method provided in any of the foregoing embodiments. A structural schematic diagram of the electronic device can be seen in fig. 1.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the image processing method provided in any one of the embodiments of the present invention.
In addition, an embodiment of the present invention further provides a computer program, where the computer program may be stored in a cloud or a local storage medium, and when the computer program runs on a computer, the computer is caused to execute the image processing method provided in any embodiment of the present invention.
In summary, according to the image processing method, the image processing apparatus, the electronic device, and the computer-readable storage medium provided in the embodiments of the present invention, after the face image of the customer to be identified is obtained, the similarity between the face image and each pre-stored customer image is first calculated to obtain at least one similarity; it is then judged whether none of the at least one similarity reaches a pre-stored similarity threshold; if so, the customer is determined to be a first-time customer arriving at the store, and the face image is saved as a new customer image; otherwise, the customer is determined to be a repeat-to-store customer. With this method, whether a customer arriving at the store is a new customer or a repeat customer can be recognized automatically, and the store or salesperson no longer needs to recognize customers from memory.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. An image processing method, characterized in that the method comprises:
acquiring a face image of a customer to be identified;
calculating the similarity between the face image and a pre-stored customer image to obtain at least one similarity;
judging whether none of the at least one similarity reaches a pre-stored similarity threshold;
if so, determining that the customer is a first-time customer arriving at a store, and saving the face image as a new customer image;
if not, determining that the customer is a repeat-to-store customer.
2. The method of claim 1, further comprising:
extracting age characteristics of the face image;
when the age characteristic represents that the customer is in a preset age stage, determining a first threshold value stored in advance as the similarity threshold value;
otherwise, determining a second threshold value which is saved in advance as the similarity threshold value; wherein the first threshold is greater than the second threshold.
3. The method of claim 1, further comprising:
acquiring a test set comprising a plurality of sample pictures;
testing the accuracy of the test set under different thresholds;
and taking the obtained threshold value under the highest accuracy as the similarity threshold value.
4. The method of claim 1, further comprising:
extracting feature information of the face image;
performing score evaluation based on the characteristic information to obtain a face quality score;
judging whether the face quality score reaches a pre-stored quality score threshold value;
correspondingly, calculating the similarity between the face image and the pre-stored customer image comprises the following steps:
and when the face quality score reaches the quality score threshold value, calculating the similarity between the face image and a pre-stored customer image.
5. The method of claim 4, wherein each of said repeat-to-store customers corresponds to one of said customer images, each of said customer images corresponding to a pre-stored original face quality score, said method further comprising:
replacing the customer image with the face image when the face quality score is judged to be greater than the original face quality score of the customer image corresponding to the repeat-to-store customer, wherein the similarity between the face image and the customer image reaches the similarity threshold.
6. The method of claim 4, wherein there are a plurality of said customer images for each said repeat-to-store customer, each said customer image having a pre-stored raw face quality score, said method further comprising:
when the face quality score is judged to be larger than a plurality of original face quality scores corresponding to the repeat-to-store customers, determining a customer image corresponding to the minimum original face quality score in the plurality of original face quality scores as a target customer image;
replacing the target customer image with the facial image; wherein the similarity between the face image and the target customer image reaches the similarity threshold.
7. The method of claim 4, wherein the feature information comprises face occlusion feature information, face ambiguity feature information, and face pose feature information, and wherein performing score evaluation based on the feature information to obtain the face quality score comprises:
obtaining a pose score based on the face pose feature information;
obtaining a face ambiguity value based on the face ambiguity feature information;
obtaining a face shielding score based on the face shielding characteristic information;
and calculating the face quality score based on a formula face_quality_score = 1/(f1 × pose + f2 × blurriness + f3 × occlusion), wherein face_quality_score is the face quality score, pose is the pose score, blurriness is the face ambiguity value, occlusion is the face occlusion score, and f1, f2 and f3 are preset weight values respectively.
8. The method of claim 7,
obtaining a pose score based on the face pose feature information, comprising:
acquiring the face pose feature information according to a pre-stored face pose model, wherein the face pose feature information comprises three Euler angle values corresponding to the face image;
calculating the pose score based on a formula pose = p1 × yaw + p2 × roll + p3 × pitch, wherein pose is the pose score, pitch is a pitch angle rotating around an X axis, yaw is a yaw angle rotating around a Y axis, roll is a roll angle rotating around a Z axis, and p1, p2 and p3 are preset weight values; and,
the obtaining of the face ambiguity value based on the face ambiguity feature information comprises:
acquiring the face ambiguity feature information, including blurriness, according to a pre-stored face ambiguity model, wherein blurriness is the face ambiguity value; and,
the obtaining of the face shielding score based on the face shielding characteristic information includes:
acquiring the areas of a plurality of key areas corresponding to the face image according to a pre-stored face key point detection model;
acquiring the face shielding characteristic information according to a face shielding model which is stored in advance, wherein the face shielding characteristic information comprises a shielded area corresponding to each key area;
for each key region, dividing the area of overlap between the key region and the occluded region corresponding to the key region by the area of the key region to obtain the occlusion score of the key region;
calculating the face occlusion score based on a formula occlusion = k1 × A1_occlusion + k2 × A2_occlusion + … + kn × An_occlusion, wherein occlusion is the face occlusion score, A1_occlusion, A2_occlusion, … An_occlusion are the occlusion scores of the respective key regions, and k1, k2, … kn are preset score weight values.
9. The method of any of claims 1-8, wherein the customer images corresponding to the same repeat-to-store customer together correspond to a pre-stored number of store visits, the method further comprising:
incrementing the number of store visits by one when the customer is determined to be a repeat-to-store customer; or,
setting the number of store visits of the customer to one when the customer is determined to be a first-time customer arriving at the store.
10. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a face image of a customer to be identified;
the calculating module is used for calculating the similarity between the face image and a pre-stored customer image to obtain at least one similarity;
the judgment execution module is used for judging whether all the at least one similarity does not reach a pre-stored similarity threshold value; when the judgment result is yes, determining that the customer is a first-time customer arriving at a store, and storing the face image as a new customer image; and when the judgment result is negative, determining that the customer is a repeat-to-store customer.
11. An electronic device, comprising a processor and a memory coupled to the processor, the memory storing a computer program that, when executed by the processor, causes the electronic device to perform the method of any of claims 1-9.
12. A computer-readable storage medium, in which a computer program is stored which, when run on a computer, causes the computer to carry out the method according to any one of claims 1-9.
CN201811328072.3A 2018-11-08 2018-11-08 Image processing method, device, electronic equipment, computer readable storage medium Pending CN109409962A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811328072.3A CN109409962A (en) 2018-11-08 2018-11-08 Image processing method, device, electronic equipment, computer readable storage medium

Publications (1)

Publication Number Publication Date
CN109409962A true CN109409962A (en) 2019-03-01

Family

ID=65472336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811328072.3A Pending CN109409962A (en) 2018-11-08 2018-11-08 Image processing method, device, electronic equipment, computer readable storage medium

Country Status (1)

Country Link
CN (1) CN109409962A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111723678A (en) * 2020-05-27 2020-09-29 上海瀛之杰汽车信息技术有限公司 Human face passenger flow identification method, device, equipment and medium suitable for multi-person scene
CN111797773A (en) * 2020-07-07 2020-10-20 广州广电卓识智能科技有限公司 Method, device and equipment for detecting occlusion of key parts of human face
CN111814569A (en) * 2020-06-12 2020-10-23 深圳禾思众成科技有限公司 A method and system for detecting a face occlusion area
CN112016469A (en) * 2020-08-28 2020-12-01 Oppo广东移动通信有限公司 Image processing method and device, terminal and readable storage medium
CN112131915A (en) * 2019-06-25 2020-12-25 杭州海康威视数字技术股份有限公司 Face attendance system, camera and code stream equipment
CN113448925A (en) * 2021-06-25 2021-09-28 东莞市小精灵教育软件有限公司 Test question picture optimization method and device, computer equipment and storage medium
CN113762181A (en) * 2021-09-13 2021-12-07 联想(北京)有限公司 Image processing method and electronic equipment
CN116959071A (en) * 2023-07-24 2023-10-27 厦门立林科技有限公司 A children's mouth and nose occlusion detection method and terminal equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106096513A (en) * 2016-06-01 2016-11-09 深圳信炜科技有限公司 Fingerprint identification method, fingerprint recognition system and electronic equipment
CN107679613A (en) * 2017-09-30 2018-02-09 同观科技(深圳)有限公司 A kind of statistical method of personal information, device, terminal device and storage medium
CN107689069A (en) * 2017-08-24 2018-02-13 深圳市唯特视科技有限公司 A kind of image automatic synthesis method blocked based on identification face
CN108109044A (en) * 2017-12-26 2018-06-01 南京开为网络科技有限公司 A kind of intelligence retail crm system
CN108230293A (en) * 2017-05-31 2018-06-29 深圳市商汤科技有限公司 Determine method and apparatus, electronic equipment and the computer storage media of quality of human face image
CN108241836A (en) * 2016-12-23 2018-07-03 同方威视技术股份有限公司 Method and device for security inspection

Similar Documents

Publication Publication Date Title
CN109409962A (en) Image processing method, device, electronic equipment, computer readable storage medium
CN109145766B (en) Model training method and device, recognition method, electronic device and storage medium
CN108875522B (en) Face clustering method, device and system and storage medium
CN108229322B (en) Video-based face recognition method and device, electronic equipment and storage medium
US10635890B2 (en) Facial recognition method and apparatus, electronic device, and storage medium
CN113313053B (en) Image processing method, device, apparatus, medium, and program product
CN110414550B (en) Training method, device and system of face recognition model and computer readable medium
CN110298249A (en) Face identification method, device, terminal and storage medium
US20170161591A1 (en) System and method for deep-learning based object tracking
US20160371537A1 (en) Method, system, and computer program product for recognizing face
CN108171158B (en) Living body detection method, device, electronic device and storage medium
CN108009466B (en) Pedestrian detection method and device
CN109359214A (en) Video description generation method, storage medium and terminal device based on neural network
Sharma et al. A new pose invariant face recognition system using PCA and ANFIS
CN109840503B (en) A method and device for determining category information
CN114495006B (en) Method and device for detecting left-behind objects, and storage medium
WO2018121287A1 (en) Target re-identification method and device
CN110334605A (en) Gesture recognition method, device, storage medium and equipment based on neural network
CN111797709A (en) A real-time dynamic gesture trajectory recognition method based on regression detection
JP6969878B2 (en) Discriminator learning device and discriminator learning method
CN114387496A (en) A target detection method and electronic device
CN107741996A (en) Method and device for constructing family map based on face recognition, and computing equipment
CN111401339A (en) Method and device for identifying age of person in face image and electronic equipment
CN110135245B (en) Store arrival confirmation method and device, electronic equipment and readable storage medium
KR102599020B1 (en) Method, program, and apparatus for monitoring behaviors based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2019-03-01