CN111325806A - Clothing color recognition method, device and system based on semantic segmentation - Google Patents
Clothing color recognition method, device and system based on semantic segmentation
- Publication number
- CN111325806A CN111325806A CN202010098415.2A CN202010098415A CN111325806A CN 111325806 A CN111325806 A CN 111325806A CN 202010098415 A CN202010098415 A CN 202010098415A CN 111325806 A CN111325806 A CN 111325806A
- Authority
- CN
- China
- Prior art keywords
- clothing
- picture
- pictures
- portrait
- color
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention provides a clothing color identification method based on semantic segmentation, which comprises the following steps: collecting a number of portrait pictures with different parameters in different scenes, and labeling the collected portrait pictures; performing random parameter transformation on the labeled portrait pictures to generate an initial sample set; creating a clothing region extraction model based on the JPPNet network, and extracting from each portrait picture a local picture of every clothing region it contains; performing background transformation and color classification labeling on the extracted local pictures; unifying the size of the labeled local pictures and applying random parameter transformation to generate a training sample set; and creating a clothing color recognition model based on a classifier, and importing the training sample set into the clothing color recognition model to train it. The method achieves a higher recognition success rate and recognition accuracy, with a particularly large improvement for complex cases such as pictures containing only a half-length portrait or partial occlusion.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a method, a device and a system for identifying clothing color based on semantic segmentation.
Background
Conventionally, clothing colors are recognized by feeding the whole portrait into a classifier that directly classifies the colors of the upper and lower garments. This approach, however, struggles when the portrait is incomplete or the person's posture is complicated, for example because of the shooting angle or occlusion.
Besides direct classification, the industry also performs multi-label classification based on graph neural networks, the rationale being that the relationships among upper-garment color, lower-garment color, and other human attributes can be exploited. However, because clothing color is not obviously correlated with other human attributes such as garment style or garment length, this approach improves the recognition of those other attributes to some extent but has no positive effect on clothing color recognition itself.
There are also methods based on an attention mechanism, which first roughly locate the upper and lower garments with attention and then judge color from the located features. Because the human body is not a rigid body of constant shape, changes in posture cause occlusion and distortion, and a video surveillance scene cannot always capture a complete full-body view of each person. The attention mechanism therefore locates inaccurately in such complex scenes, which directly degrades the final classification result and produces recognition errors.
Because human dress is complex, the position and area of the clothing vary greatly with how the portrait image is obtained, with the occlusion pattern, with changes in body posture, with shooting angle, and so on. Existing clothing color recognition schemes either roughly locate the clothing, judge directly from the positions of human key points, or judge directly from the whole picture. For example, a prior patent discloses a fashion garment image segmentation method based on deep learning that can recognize the semantic information of upper and lower garments in complex scenes: source images are fed into a deep learning network specially designed for the fashion domain for training, and the upper garments, lower garments, and whole-body outfits in the images are recognized automatically. In its local feature extraction module, garment key-point information and key-point visibility information are used to pool the global image features produced by the image feature extraction module around the key-point positions, obtaining local features that are insensitive to garment deformation and occlusion and thereby greatly improving the accuracy of garment recognition and segmentation. The garment key-point information consists of coordinate points defined for each garment type; for an upper-body garment, for example, coordinate points such as the left collar, right collar, left sleeve, right sleeve, left lower hem, and right lower hem are defined.
In practical applications, however, such schemes have difficulty adapting to complex cases. In the aforementioned patent, the garment region must be located from the garment's coordinate-point information; when some coordinate points of a garment region are lost because of occlusion or the shooting angle, the lost points must be fitted. The fitting process is computationally heavy, which slows recognition, and because the fitted coordinate points have low accuracy, the final recognition effect in practice is greatly reduced.
In summary, the above techniques still suffer from misjudgment, missed judgment, and similar problems in color determination, which often make the color judgment inaccurate and difficult to use in practice.
Disclosure of Invention
The invention aims to provide a method, a device and a system for identifying clothing colors based on semantic segmentation. Compared with other clothing color identification methods on the same data set, the method achieves a higher recognition success rate and recognition accuracy. The improvement is especially large for complex cases such as pictures containing only a half-length portrait or partial occlusion; for half-length portraits, for example, the method achieves almost completely correct judgment.
In order to achieve the above object, with reference to fig. 1, the present invention provides a clothing color identification method based on semantic segmentation, where the clothing color identification method includes the following steps:
s1: collecting a certain amount of portrait pictures with different parameters under different scenes, and labeling the collected portrait pictures, wherein the labeling content comprises clothing region segmentation information labeling and human body joint point information labeling; carrying out random parameter transformation processing on the marked portrait picture to generate an initial sample set;
s2: establishing a clothing region extraction model based on a JPPNet network, wherein the clothing region extraction model is used for extracting clothing regions on a human image picture by combining human body joint point information marking and clothing region segmentation information marking; leading the portrait pictures in the initial sample set into a clothing region extraction model, and extracting local pictures of each clothing region contained in the portrait pictures;
s3: carrying out background transformation and color classification labeling on the extracted local picture of each clothing region; carrying out size unification and random parameter transformation processing on the marked local pictures to generate a training sample set;
s4: establishing a clothing color recognition model based on the classifier, and importing a training sample set into the clothing color recognition model to train the clothing color recognition model;
s5: the method comprises the steps of collecting portrait pictures containing clothing information in real time, and identifying clothing colors of one or more clothing areas in the portrait pictures by adopting a clothing area extraction model and a clothing color identification model.
In a further embodiment, in step S1, the parameters of the portrait picture include shooting parameters and human posture parameters;
the shooting parameters comprise illumination conditions, shooting scenes, shooting angles and shooting distances;
the human body posture parameters comprise the body pose, full-body shots and half-body shots.
In a further embodiment, in step S1, the information labels for the garment region segmentation include information labels for head, upper garment, lower garment, limbs, and foot regions;
the information labels of the human body joint points comprise information labels of joint points of wrists, elbows, shoulders, heads, chests, knee joints and ankles of the human body.
In a further embodiment, the random parameter transformation processing refers to performing random clipping, rotation, flipping, and color transformation processing on the picture.
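The random parameter transformation can be sketched as follows. This is an illustrative NumPy version only: the patent names the operations but not their parameters, so rotation is restricted here to 90° multiples and the color transformation is a simple per-channel scaling.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_transform(img, crop=0.8):
    """Apply random cropping, rotation, flipping and color transformation
    to an H x W x 3 image, as described for sample-set generation."""
    h, w = img.shape[:2]
    # random crop to `crop` of the original size
    ch, cw = int(h * crop), int(w * crop)
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    out = img[y : y + ch, x : x + cw]
    # random rotation by a multiple of 90 degrees
    out = np.rot90(out, k=rng.integers(0, 4))
    # random horizontal flip
    if rng.random() < 0.5:
        out = out[:, ::-1]
    # random color transformation: scale each channel slightly
    scale = rng.uniform(0.8, 1.2, size=3)
    return np.clip(out * scale, 0, 255).astype(np.uint8)
```

Applying this repeatedly to each labeled picture multiplies a small collection into a much larger sample set.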
In a further embodiment, in step S3, performing background transformation on the extracted local picture of each clothing region means unifying the background regions of the extracted local pictures into a pure white background.
In a further embodiment, in step S3, the marked local pictures are unified to the same size by using a bilinear interpolation method.
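Bilinear interpolation can be implemented directly; the following is a self-contained NumPy sketch of the size-unification step (the patent only names the method, so the function and parameter names are illustrative):

```python
import numpy as np

def resize_bilinear(img, out_h, out_w):
    """Resize an H x W x C image to out_h x out_w by bilinear interpolation."""
    h, w = img.shape[:2]
    # map each output pixel back to a (possibly fractional) source coordinate
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :, None]   # horizontal interpolation weights
    img = img.astype(float)
    # interpolate horizontally along the two bracketing rows, then vertically
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

In practice a library routine (e.g. an OpenCV or Pillow resize with bilinear resampling) would be used instead; the hand-rolled version just makes the interpolation explicit.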
In a further embodiment, in step S5, the identifying the clothing color of one or more clothing regions in the portrait picture by using the clothing region extraction model and the clothing color identification model includes the following steps:
s51: acquiring a portrait picture containing clothing information in real time, and importing the portrait picture into a clothing region extraction model to extract a local picture of each clothing region contained in the portrait picture;
s52: and carrying out size unification and background transformation processing on the extracted local pictures of each clothing region, and introducing the processed local pictures of each clothing region into a clothing color identification model to identify the corresponding clothing color.
Based on the clothing color identification method, the invention provides a clothing color identification device based on semantic segmentation, and the clothing color identification device comprises:
the garment region extraction model is created based on a JPPNet network and is used for extracting a garment region on a human image picture by combining human body joint point information marking and garment region segmentation information marking;
the garment color identification model is created based on the classifier and used for identifying the garment colors in the imported local pictures of each garment region;
the portrait picture acquisition module is used for acquiring portrait pictures with different parameters in different scenes;
the sample set generating module is used for carrying out random parameter transformation processing on the imported picture to generate a corresponding training picture sample set;
and the image preprocessing module is used for performing background transformation and size unification processing on the imported pictures.
Based on the clothing color recognition method, the invention provides a clothing color recognition system based on semantic segmentation, which comprises a memory, a processor and a computer program, wherein the computer program is stored in the memory and can run on the processor;
the processor, when executing the computer program, implements the steps of the garment color identification method as described above.
Compared with the prior art, the technical scheme of the invention has the following remarkable beneficial effects:
(1) A large amount of labeled semantic segmentation data is used to segment the clothing in a portrait picture more accurately and finely, so that a local picture of each clothing region can be extracted. Because human joint point labeling is introduced, the extracted local pictures are not limited by factors such as incompleteness of the portrait or shooting direction. Each local clothing picture is then processed into a new picture and sent to a classifier for color recognition, which narrows the scope of color recognition and improves its efficiency. Compared with other clothing color identification methods on the same data set, the method achieves a higher recognition success rate and recognition accuracy.
(2) The method and device are little affected by complex cases such as pictures containing only a half-length portrait or occlusion; for half-length portraits, for example, they achieve almost completely correct judgment.
(3) A clothing region extraction model is established based on the JPPNet network, the clothing region extraction speed is high, and the whole clothing color recognition time is short.
(4) By carrying out random parameter transformation processing on the image, a large number of sample pictures are generated on the basis of a small number of portrait pictures, and the generation efficiency of the sample set is high.
(5) The original background area of the generated local picture is transformed into a uniform background (for example, a pure white background), so that the problem of background interference is avoided.
It should be understood that all combinations of the foregoing concepts and additional concepts described in greater detail below can be considered as part of the inventive subject matter of this disclosure unless such concepts are mutually inconsistent. In addition, all combinations of claimed subject matter are considered a part of the presently disclosed subject matter.
The foregoing and other aspects, embodiments and features of the present teachings can be more fully understood from the following description taken in conjunction with the accompanying drawings. Additional aspects of the present invention, such as features and/or advantages of exemplary embodiments, will be apparent from the description which follows, or may be learned by practice of specific embodiments in accordance with the teachings of the present invention.
Drawings
The drawings are not intended to be drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures may be represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. Embodiments of various aspects of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:
fig. 1 is a flow chart of a semantic segmentation-based garment color identification method of the present invention.
Fig. 2 is a diagram of the garment color identification procedure of the present invention.
Fig. 3 is a schematic diagram of a specific recognition scenario of the present invention.
Detailed Description
In order to better understand the technical content of the present invention, specific embodiments are described below with reference to the accompanying drawings.
In this disclosure, aspects of the present invention are described with reference to the accompanying drawings, in which a number of illustrative embodiments are shown. Embodiments of the present disclosure are not necessarily defined to include all aspects of the invention. It should be appreciated that the various concepts and embodiments described above, as well as those described in greater detail below, may be implemented in any of numerous ways, as the disclosed concepts and embodiments are not limited to any one implementation. In addition, some aspects of the present disclosure may be used alone, or in any suitable combination with other aspects of the present disclosure.
Detailed description of the preferred embodiment
With reference to fig. 1, the present invention provides a clothing color identification method based on semantic segmentation, wherein the clothing color identification method includes the following steps:
s1: collecting a certain amount of portrait pictures with different parameters under different scenes, and labeling the collected portrait pictures, wherein the labeling content comprises clothing region segmentation information labeling and human body joint point information labeling; and carrying out random parameter transformation processing on the marked portrait picture to generate an initial sample set.
S2: establishing a clothing region extraction model based on a JPPNet (Joint Body Parsing & Pose Estimation Network), wherein the clothing region extraction model is used for extracting clothing regions from a portrait picture by combining human body joint point information labels with clothing region segmentation information labels; and importing the portrait pictures in the initial sample set into the clothing region extraction model to extract the local pictures of each clothing region contained in the portrait pictures. JPPNet is a TensorFlow-based deep learning method for joint human parsing and pose estimation that is commonly used in the prior art.
S3: carrying out background transformation and color classification labeling on the extracted local picture of each clothing region; and carrying out size unification and random parameter transformation processing on the marked local pictures to generate a training sample set.
S4: and creating a clothing color recognition model based on the classifier, and importing the training sample set into the clothing color recognition model to train the clothing color recognition model.
S5: the method comprises the steps of collecting portrait pictures containing clothing information in real time, and identifying clothing colors of one or more clothing areas in the portrait pictures by adopting a clothing area extraction model and a clothing color identification model.
The foregoing steps are described in detail below with reference to specific examples.
Step one, generating an initial sample set
Firstly, portrait pictures with different postures, illumination, scenes and angles are collected, including both whole-body and half-body pictures, and the pictures are then annotated in two respects. The first is segmentation labeling of the head, upper garment, lower garment, limbs and other regions; the second is labeling of 15 human joint points, such as the wrists, elbows, shoulders, head, chest, knees and ankles.
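As an illustration, the two annotation aspects might be encoded as label maps like the following. The exact class IDs are hypothetical, and the patent states there are 15 joint points while naming only seven of them, so only those seven are listed here.

```python
# Aspect 1: semantic segmentation classes for the portrait (illustrative IDs)
REGION_LABELS = {
    0: "background",
    1: "head",
    2: "upper_garment",
    3: "lower_garment",
    4: "limbs",
    5: "feet",
}

# Aspect 2: the joint points named in the text (seven of the 15 total)
NAMED_JOINTS = ["wrist", "elbow", "shoulder", "head", "chest", "knee", "ankle"]
```

A per-pixel label map over `REGION_LABELS` plus per-joint coordinates would together constitute one annotated training sample.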
The labeled data is the data basis for extracting the local picture of each clothing region from the portrait picture with the JPPNet network in step two. Different garments of the human body, such as coats, jackets and dresses, can be distinguished in this step by labeling. The number of collected portrait pictures is limited, yet the more training samples of more kinds are imported into the clothing color recognition model, the more robust the finally trained model is and the higher its recognition rate. To increase the number of training samples as much as possible, the invention proposes applying random parameter transformation to the labeled portrait pictures (for example, random cropping, rotation, flipping, color transformation and so on) to generate the initial sample set.
Step two, extracting local pictures of each clothing region contained in the portrait pictures
And establishing a clothing region extraction model based on a JPPNet network, importing the portrait pictures in the initial sample set into the clothing region extraction model for training, wherein the clothing region extraction model is used for extracting clothing regions on the portrait picture by combining with human body joint point information marking and clothing region segmentation information marking, and finally extracting local pictures of each clothing region contained in the portrait picture. In the present invention, the type of the finally extracted clothing region is determined by the user according to the actual requirement, for example, only the partial pictures respectively including the upper garment and the lower garment are extracted.
The network adopted by the clothing region extraction model is JPPNet, which uses the human body joint points to assist in segmenting the different regions of the human body. Thanks to this assistance, the clothing region extraction model greatly reduces mis-segmentation compared with a common semantic segmentation model, significantly enhancing the generalization capability of the whole model.
Step three, generating a training sample set
After the upper and lower garments of a person are extracted by the semantic segmentation model, the upper garment and the lower garment are each extracted into a new picture, the sizes of the new pictures are unified by a method such as bilinear interpolation, and the new pictures are then sent as training data to the classifier of the next stage for classification.
In the process of forming the local picture, considering that the color of the background can influence the effect of the color classifier, the invention provides that the area which is originally the background is replaced by the pure white background so as to avoid the problem of background interference.
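Replacing the original background with pure white is a single masked assignment. A minimal sketch, assuming the segmentation step yields a boolean garment mask alongside each local picture (the mask format is an assumption, not specified by the patent):

```python
import numpy as np

def whiten_background(region, garment_mask):
    """Set every pixel outside the garment mask to pure white (255, 255, 255),
    so the color classifier sees a uniform background instead of the scene."""
    out = region.copy()
    out[~garment_mask] = 255
    return out
```

Because every training and inference picture receives the same uniform background, the classifier cannot latch onto background colors as spurious features.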
Similarly, in order to improve the robustness and the recognition rate of the garment color recognition model, the invention provides that random parameter transformation processing (such as processing of random cutting, rotation, turning, color transformation and the like on the picture) is carried out on the marked local picture, the number of training samples is increased as much as possible, and a training sample set is generated.
Step four, establishing and training a clothing color recognition model
The training sample set generated in step three is imported into the classifier for color recognition to complete the training of the clothing color recognition model. During training, the training sample set can be divided into a training set and a testing set according to a set proportion: the training set is used to train the clothing color recognition model, the testing set is then used to verify it (for example, by judging whether the recognition success rate and recognition accuracy meet preset requirements), training is complete when verification passes, and otherwise the model parameters are adjusted and the model retrained until verification passes.
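The train/verify/retrain cycle described above can be sketched as follows. The split ratio, accuracy threshold, and the `train`/`evaluate` callables are all illustrative, since the patent leaves them unspecified.

```python
import random

def split_samples(samples, train_ratio=0.8, seed=0):
    """Shuffle and split the sample set into a training set and a testing
    set according to a set proportion."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]

def train_until_valid(samples, train, evaluate, threshold=0.9, max_rounds=5):
    """Train the color recognition model, verify it on the testing set, and
    retrain with adjusted parameters until the accuracy threshold is met."""
    train_set, test_set = split_samples(samples)
    for round_no in range(max_rounds):
        model = train(train_set, round_no)  # round_no stands in for adjusted parameters
        if evaluate(model, test_set) >= threshold:
            return model
    raise RuntimeError("validation threshold not reached")
```

In practice `train` would fit the classifier and `evaluate` would compute recognition success rate and accuracy on the held-out testing set.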
Detailed description of the invention
With reference to fig. 2, on the basis of successful training of the clothing color recognition model, in step S5, the identifying clothing colors of one or more clothing regions in the portrait image by using the clothing region extraction model and the clothing color recognition model includes the following steps:
s51: and acquiring a portrait picture containing clothing information in real time, and importing the portrait picture into a clothing region extraction model to extract a local picture of each clothing region contained in the portrait picture.
S52: and carrying out size unification and background transformation processing on the extracted local pictures of each clothing region, and introducing the processed local pictures of each clothing region into a clothing color identification model to identify the corresponding clothing color.
As shown in fig. 3(a), when the clothing color recognition method of the present invention is adopted, two local pictures are first extracted by segmentation, one containing only the upper garment and the other only the lower garment. The generated local pictures are then processed by background unification, size unification and so on, yielding the two pictures shown in fig. 3(b) and fig. 3(c), and clothing color recognition is finally performed on fig. 3(b) and fig. 3(c).
The method analyzes the correspondence between human joint points and clothing regions over a large number of samples, and uses the human joint points to accurately extract each clothing region during actual clothing recognition. For example, in a half-length photograph lacking a complete view of the lower garment, the lower-garment region can still be extracted from the whole portrait picture by combining the limbs, knee joints, hips and so on, after which background processing and color recognition are performed on the extracted lower-garment picture.
Practice shows that schemes based on direct classification, graph convolutional neural networks, or attention mechanisms reach only 60%-70% accuracy for upper- and lower-garment color classification on surveillance-scene data, whereas testing on the same data set shows that the present method reaches 85%. In complex cases such as pictures containing only a half-length portrait or occlusion, conventional methods almost always judge the color of the occluding object as the clothing color, while the clothing color identification method of the invention judges most of these cases correctly and achieves almost completely correct judgment for half-length portraits.
Application scenario one
Under the scene of a person bayonet, the acquired pictures of people are changed due to the fact that the time, the illumination, the angle, the shielding, the posture and the like are complicated and changeable. The garment color identification method can effectively process the complex situation, accurately extracts different garment regions such as an upper garment region and a lower garment region by combining human body joint point information in the picture, extracts an upper garment from the upper garment region, extracts a lower garment from the lower garment region, changes the background colors of the upper garment and the lower garment, highlights the upper garment or the lower garment part, and finally carries out color identification on the upper garment picture and the lower garment picture.
Application scenario two
Very serious occlusion problems often exist in captured pictures, such as occlusion by backpacks, by large handheld articles, or by crowds, and crowd occlusion very easily causes mis-segmentation in a semantic segmentation model. The auxiliary segmentation with human joint points proposed by the invention can avoid this to a certain extent, and when the upper and lower garments of people in a crowd are in a separated state, a scheme of cutting from the blank area can be adopted to avoid mis-segmentation.
Detailed description of the preferred embodiment
Based on the above clothing color recognition method, the invention further provides a clothing color recognition device based on semantic segmentation, which comprises:
(1) A clothing region extraction model, created based on the JPPNet network, for extracting clothing regions from a portrait picture by combining the human body joint-point labels with the clothing region segmentation labels.
(2) A clothing color recognition model, created based on a classifier, for recognizing the clothing color in each imported clothing-region local picture.
(3) A portrait picture acquisition module for acquiring portrait pictures with different parameters in different scenes.
(4) A sample set generation module for applying random parameter transformation processing to imported pictures to generate the corresponding training picture sample sets.
(5) An image preprocessing module for applying background transformation and size unification processing to imported pictures.
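How these components could be wired together is sketched below. All names, signatures, and the stub components are assumptions for illustration; in the device itself the extractor is the JPPNet-based model and the classifier is the trained color recognition model.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ClothingColorDevice:
    """Illustrative wiring of the device's processing components."""
    extract_regions: Callable  # clothing region extraction model (JPPNet-based)
    preprocess: Callable       # image preprocessing: background transform + resize
    classify_color: Callable   # clothing color recognition model

    def recognize(self, portrait) -> List[str]:
        # Extract each clothing region, preprocess it, then classify its color.
        regions = self.extract_regions(portrait)
        return [self.classify_color(self.preprocess(r)) for r in regions]

# Stub components standing in for the trained models.
device = ClothingColorDevice(
    extract_regions=lambda img: [("upper", img), ("lower", img)],
    preprocess=lambda region: region,
    classify_color=lambda region: {"upper": "red", "lower": "blue"}[region[0]],
)
print(device.recognize("portrait.jpg"))  # ['red', 'blue']
```

The design keeps segmentation and color classification as separate, swappable stages, which matches the method's two-model pipeline.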
Detailed description of the invention
Based on the above clothing color recognition method, the invention further provides a clothing color recognition system based on semantic segmentation, which comprises a memory, a processor, and a computer program stored in the memory and executable on the processor. When executing the computer program, the processor implements the steps of the clothing color recognition method described above.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments can be implemented by hardware driven by program instructions; the program can be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The storage medium includes any medium that can store program code, such as ROM, RAM, magnetic disks, or optical disks.
Although the present invention has been described with reference to the preferred embodiments, it is not intended to be limited thereto. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the invention. Therefore, the protection scope of the present invention should be determined by the appended claims.
Claims (9)
1. A clothing color recognition method based on semantic segmentation is characterized by comprising the following steps:
S1: collecting a certain number of portrait pictures with different parameters in different scenes, and labeling the collected portrait pictures, wherein the labeled content comprises clothing region segmentation labels and human body joint-point labels; and performing random parameter transformation processing on the labeled portrait pictures to generate an initial sample set;
S2: creating a clothing region extraction model based on the JPPNet network, wherein the clothing region extraction model extracts clothing regions from a portrait picture by combining the human body joint-point labels with the clothing region segmentation labels; and importing the portrait pictures in the initial sample set into the clothing region extraction model to extract a local picture of each clothing region contained in the portrait pictures;
S3: performing background transformation and color classification labeling on the extracted local picture of each clothing region; and performing size unification and random parameter transformation processing on the labeled local pictures to generate a training sample set;
S4: creating a clothing color recognition model based on a classifier, and importing the training sample set into the clothing color recognition model to train it;
S5: collecting, in real time, portrait pictures containing clothing information, and recognizing the clothing color of one or more clothing regions in the portrait pictures by using the clothing region extraction model and the clothing color recognition model.
2. The method for recognizing clothing color based on semantic segmentation as claimed in claim 1, wherein in step S1, the parameters of the portrait picture include shooting parameters and human posture parameters;
the shooting parameters comprise illumination conditions, shooting scenes, shooting angles and shooting distances;
the human body posture parameters comprise the human body pose and whether the picture is a whole-body or half-body close-up.
3. The method for recognizing clothing color according to claim 1, wherein in step S1, the clothing region segmentation labels comprise labels for the head, upper-garment, lower-garment, limb and foot regions;
the human body joint-point labels comprise labels for the wrist, elbow, shoulder, head, chest, knee and ankle joints of the human body.
4. The method for recognizing clothing color based on semantic segmentation as claimed in claim 1, wherein the random parameter transformation processing comprises performing random cropping, rotation, flipping and color transformation on the picture.
5. The method for recognizing clothing color according to claim 1, wherein performing background transformation on the extracted local picture of each clothing region in step S3 means unifying the background region of each extracted local picture into a pure white background.
6. The method for recognizing clothing color based on semantic segmentation as claimed in claim 1, wherein in step S3, the labeled local pictures are unified to the same size by using bilinear interpolation.
7. The method for recognizing clothing color based on semantic segmentation according to claim 1, wherein in step S5, recognizing the clothing color of one or more clothing regions in the portrait picture by using the clothing region extraction model and the clothing color recognition model comprises the following steps:
S51: collecting, in real time, a portrait picture containing clothing information, and importing the portrait picture into the clothing region extraction model to extract a local picture of each clothing region contained in the portrait picture;
S52: performing size unification and background transformation processing on the extracted local pictures of each clothing region, and importing the processed local pictures into the clothing color recognition model to recognize the corresponding clothing colors.
8. A clothing color recognition apparatus based on semantic segmentation, the clothing color recognition apparatus comprising:
the garment region extraction model is created based on a JPPNet network and is used for extracting a garment region on a human image picture by combining human body joint point information marking and garment region segmentation information marking;
the garment color identification model is created based on the classifier and used for identifying the garment colors in the imported local pictures of each garment region;
the portrait picture acquisition module is used for acquiring portrait pictures with different parameters in different scenes;
the sample set generating module is used for carrying out random parameter transformation processing on the imported picture to generate a corresponding training picture sample set;
and the image preprocessing module is used for performing background transformation and size unification processing on the imported pictures.
9. A clothing color recognition system based on semantic segmentation, characterized in that the clothing color recognition system comprises a memory, a processor and a computer program stored in the memory and executable on the processor;
the processor, when executing the computer program, performs the steps of the garment color recognition method as claimed in any one of claims 1 to 7.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010098415.2A CN111325806A (en) | 2020-02-18 | 2020-02-18 | Clothing color recognition method, device and system based on semantic segmentation |
| PCT/CN2020/121515 WO2021164283A1 (en) | 2020-02-18 | 2020-10-16 | Clothing color recognition method, device and system based on semantic segmentation |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010098415.2A CN111325806A (en) | 2020-02-18 | 2020-02-18 | Clothing color recognition method, device and system based on semantic segmentation |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN111325806A true CN111325806A (en) | 2020-06-23 |
Family
ID=71172768
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010098415.2A Pending CN111325806A (en) | 2020-02-18 | 2020-02-18 | Clothing color recognition method, device and system based on semantic segmentation |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN111325806A (en) |
| WO (1) | WO2021164283A1 (en) |
Cited By (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112419249A (en) * | 2020-11-12 | 2021-02-26 | 厦门市美亚柏科信息股份有限公司 | Special clothing picture conversion method, terminal device and storage medium |
| CN112528855A (en) * | 2020-12-11 | 2021-03-19 | 南方电网电力科技股份有限公司 | Electric power operation dressing standard identification method and device |
| CN112990012A (en) * | 2021-03-15 | 2021-06-18 | 深圳喜为智慧科技有限公司 | Tool color identification method and system under shielding condition |
| WO2021164283A1 (en) * | 2020-02-18 | 2021-08-26 | 苏州科达科技股份有限公司 | Clothing color recognition method, device and system based on semantic segmentation |
| CN113487619A (en) * | 2020-06-28 | 2021-10-08 | 青岛海信电子产业控股股份有限公司 | Data processing method, device, equipment and medium |
| CN113516062A (en) * | 2021-06-24 | 2021-10-19 | 深圳开思信息技术有限公司 | Customer identification method and system for automobile repair shop |
| CN114093011A (en) * | 2022-01-12 | 2022-02-25 | 北京新氧科技有限公司 | Hair classification method, device, equipment and storage medium |
| CN114201681A (en) * | 2021-12-13 | 2022-03-18 | 支付宝(杭州)信息技术有限公司 | Method and device for recommending clothes |
| CN117132805A (en) * | 2023-07-20 | 2023-11-28 | 江苏范特科技有限公司 | A target positioning method and device for identifying pedestrians based on clothing |
| CN117409208A (en) * | 2023-12-14 | 2024-01-16 | 武汉纺织大学 | A real-time semantic segmentation method and system for clothing images |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114048489B (en) * | 2021-09-01 | 2022-11-18 | 广东智媒云图科技股份有限公司 | Human body attribute data processing method and device based on privacy protection |
| CN113848736B (en) * | 2021-09-13 | 2024-06-28 | 青岛海尔科技有限公司 | Clothing information processing method and device based on smart wardrobe |
| CN113919998B (en) * | 2021-10-14 | 2024-05-14 | 天翼数字生活科技有限公司 | Picture anonymizing method based on semantic and gesture graph guidance |
| CN113963374A (en) * | 2021-10-19 | 2022-01-21 | 中国石油大学(华东) | Pedestrian attribute recognition method based on multi-level features and identity information assistance |
| CN119516319B (en) * | 2024-11-15 | 2025-11-07 | 杭州市拱墅区边缘智能创新研究院 | AI technology-based image-driven multi-region feature fusion system for clothing |
| CN119380418B (en) * | 2024-12-27 | 2025-04-22 | 悦谛美斯健康科技(无锡)有限公司 | A motion form recognition method and system based on fusion vision and clothing AI |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106227827A (en) * | 2016-07-25 | 2016-12-14 | 华南师范大学 | Image of clothing foreground color feature extracting method and costume retrieval method and system |
| CN108229288A (en) * | 2017-06-23 | 2018-06-29 | 北京市商汤科技开发有限公司 | Neural metwork training and clothes method for detecting color, device, storage medium, electronic equipment |
| CN109325952A (en) * | 2018-09-17 | 2019-02-12 | 上海宝尊电子商务有限公司 | Fashion clothing image partition method based on deep learning |
| CN110263605A (en) * | 2018-07-18 | 2019-09-20 | 桂林远望智能通信科技有限公司 | Pedestrian's dress ornament color identification method and device based on two-dimension human body guise estimation |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107766861A (en) * | 2017-11-14 | 2018-03-06 | 深圳码隆科技有限公司 | The recognition methods of character image clothing color, device and electronic equipment |
| CN111325806A (en) * | 2020-02-18 | 2020-06-23 | 苏州科达科技股份有限公司 | Clothing color recognition method, device and system based on semantic segmentation |
- 2020-02-18: CN application CN202010098415.2A filed (CN111325806A, pending)
- 2020-10-16: PCT application PCT/CN2020/121515 filed (WO2021164283A1, ceased)
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106227827A (en) * | 2016-07-25 | 2016-12-14 | 华南师范大学 | Image of clothing foreground color feature extracting method and costume retrieval method and system |
| CN108229288A (en) * | 2017-06-23 | 2018-06-29 | 北京市商汤科技开发有限公司 | Neural metwork training and clothes method for detecting color, device, storage medium, electronic equipment |
| CN110263605A (en) * | 2018-07-18 | 2019-09-20 | 桂林远望智能通信科技有限公司 | Pedestrian's dress ornament color identification method and device based on two-dimension human body guise estimation |
| CN109325952A (en) * | 2018-09-17 | 2019-02-12 | 上海宝尊电子商务有限公司 | Fashion clothing image partition method based on deep learning |
Non-Patent Citations (2)
| Title |
|---|
| XIAODAN LIANG et al.: "Look into Person: Joint Body Parsing & Pose Estimation Network and a New Benchmark", IEEE Transactions on Pattern Analysis and Machine Intelligence * |
| XULUHONGSHANG: "FashionAI Global Challenge: post-competition technical notes on apparel attribute recognition", CSDN blog, https://blog.csdn.net/xuluhongshang/article/details/80616331 * |
Cited By (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2021164283A1 (en) * | 2020-02-18 | 2021-08-26 | 苏州科达科技股份有限公司 | Clothing color recognition method, device and system based on semantic segmentation |
| CN113487619A (en) * | 2020-06-28 | 2021-10-08 | 青岛海信电子产业控股股份有限公司 | Data processing method, device, equipment and medium |
| CN112419249B (en) * | 2020-11-12 | 2022-09-06 | 厦门市美亚柏科信息股份有限公司 | Special clothing picture conversion method, terminal device and storage medium |
| CN112419249A (en) * | 2020-11-12 | 2021-02-26 | 厦门市美亚柏科信息股份有限公司 | Special clothing picture conversion method, terminal device and storage medium |
| CN112528855A (en) * | 2020-12-11 | 2021-03-19 | 南方电网电力科技股份有限公司 | Electric power operation dressing standard identification method and device |
| CN112528855B (en) * | 2020-12-11 | 2021-09-03 | 南方电网电力科技股份有限公司 | Electric power operation dressing standard identification method and device |
| CN112990012A (en) * | 2021-03-15 | 2021-06-18 | 深圳喜为智慧科技有限公司 | Tool color identification method and system under shielding condition |
| CN113516062A (en) * | 2021-06-24 | 2021-10-19 | 深圳开思信息技术有限公司 | Customer identification method and system for automobile repair shop |
| CN114201681A (en) * | 2021-12-13 | 2022-03-18 | 支付宝(杭州)信息技术有限公司 | Method and device for recommending clothes |
| CN114093011B (en) * | 2022-01-12 | 2022-05-06 | 北京新氧科技有限公司 | Hair classification method, device, equipment and storage medium |
| CN114093011A (en) * | 2022-01-12 | 2022-02-25 | 北京新氧科技有限公司 | Hair classification method, device, equipment and storage medium |
| CN117132805A (en) * | 2023-07-20 | 2023-11-28 | 江苏范特科技有限公司 | A target positioning method and device for identifying pedestrians based on clothing |
| CN117409208A (en) * | 2023-12-14 | 2024-01-16 | 武汉纺织大学 | A real-time semantic segmentation method and system for clothing images |
| CN117409208B (en) * | 2023-12-14 | 2024-03-08 | 武汉纺织大学 | A real-time semantic segmentation method and system for clothing images |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2021164283A1 (en) | 2021-08-26 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111325806A (en) | Clothing color recognition method, device and system based on semantic segmentation | |
| Park et al. | Articulated pose estimation with tiny synthetic videos | |
| CN109325952B (en) | Fashionable garment image segmentation method based on deep learning | |
| EP1260935B1 (en) | Face detection device, face pose detection device, partial image extraction device, and methods for said devices | |
| CN106022343B (en) | A Garment Style Recognition Method Based on Fourier Descriptor and BP Neural Network | |
| CN106384126B (en) | Apparel style recognition method based on contour curvature feature points and support vector machine | |
| Hu et al. | Clothing segmentation using foreground and background estimation based on the constrained Delaunay triangulation | |
| Cychnerski et al. | Clothes detection and classification using convolutional neural networks | |
| Yu et al. | Inpainting-based virtual try-on network for selective garment transfer | |
| CN106570480A (en) | Posture-recognition-based method for human movement classification | |
| CN110263605A (en) | Pedestrian's dress ornament color identification method and device based on two-dimension human body guise estimation | |
| Kheirkhah et al. | A hybrid face detection approach in color images with complex background | |
| CN106952312B (en) | A logo-free augmented reality registration method based on line feature description | |
| CN113378799A (en) | Behavior recognition method and system based on target detection and attitude detection framework | |
| CN109166172B (en) | Clothing model construction method, device, server and storage medium | |
| Aonty et al. | Multi-person pose estimation using group-based convolutional neural network model | |
| CN110543817A (en) | Pedestrian Re-Identification Method Based on Pose-Guided Feature Learning | |
| Roy et al. | LGVTON: A landmark guided approach to virtual try-on | |
| WO2013160663A2 (en) | A system and method for image analysis | |
| Zhou et al. | A method to automatic create dataset for training object detection neural networks | |
| Li et al. | Toward accurate and realistic virtual try-on through shape matching and multiple warps | |
| Bourbakis et al. | Skin-based face detection-extraction and recognition of facial expressions | |
| Desai et al. | Review on human pose estimation and human body joints localization | |
| Wang et al. | GA-STIP: Action recognition in multi-channel videos with geometric algebra based spatio-temporal interest points | |
| Karanth et al. | Automatic Classification and Color Changing of Saree Components Using Deep Learning Techniques |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| RJ01 | Rejection of invention patent application after publication | ||
| RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200623 |