
WO2013160663A2 - A system and method for image analysis - Google Patents

A system and method for image analysis

Info

Publication number
WO2013160663A2
Authority
WO
WIPO (PCT)
Prior art keywords
image
garment
colour
background
person
Prior art date
Legal status
Ceased
Application number
PCT/GB2013/051011
Other languages
French (fr)
Other versions
WO2013160663A3 (en)
Inventor
Salman VALIBEIK
Bjoern RENNHAK
Current Assignee
CLOTHES NETWORK Ltd
Original Assignee
CLOTHES NETWORK Ltd
Priority date
Filing date
Publication date
Application filed by CLOTHES NETWORK Ltd
Publication of WO2013160663A2
Publication of WO2013160663A3


Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/70 — Determining position or orientation of objects or cameras
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 — 2D [Two Dimensional] image generation
    • G06T11/60 — Editing figures and text; Combining figures or text
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 — Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content
    • G06F16/5838 — Retrieval characterised by using metadata automatically derived from the content, using colour
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 — Manipulating 3D models or images for computer graphics
    • G06T19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 — Geometric image transformations in the plane of the image
    • G06T3/14 — Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 — Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods

Definitions

  • The present invention relates to a system and method for the identification of an item in an image, the analysis of an item in an image and the matching of the image with other images including similar items. Additionally, the present invention relates to the fitting of an item in a first image to a particular shape stored in a second image. More specifically, the present invention relates to the identification of a garment in an image, the matching of a garment in the image with other, similar garments in images and the virtual fitment of a garment shown in an image on a representation (or image) of a person. Particularly, it is desirable to be able to search for a garment or item of clothing which is similar to another, particular type or style of garment or item of clothing, view similar garments or items of clothing and virtually 'fit' the garment or item of clothing to an image of a person.
  • Previous systems have been suggested which allow limited searching of images by using an image as the basis of the search (or the 'search parameters'). The tools which exist presently, such as Google's "Google Goggles"®, use analysis of the shape and colour contained in an image to provide similar images. Google Goggles uses object recognition algorithms, and is reliant on the extensive library of images that Google already possesses in its databases (as evidenced by Google's image search function). It is known that Goggles does not work well with images of 'soft' or 'organic' shapes, such as clothing, furniture or animals. Additionally, there are known services which take input in the form of uploaded images containing garments and seek to match the garment in the image with similar garments in a database. These systems, however, require user interaction to identify which type of garment is present in the image. Such services are limited because they are heavily reliant upon user interaction.
  • WO01/11886 discloses a system for the virtual fitting of clothing which analyses only the 'critical' points of a garment. Additionally, systems are known which require the preparation of time-consuming 3D scans of garments to be 'fitted' (on a 3D torso based on measurements) prior to the garments being available for online virtual fitting. Additionally, other known systems utilise pre-acquired images of clothing on an adjustable model which has been preset to model particular body shapes. It is an object of the present invention to provide a system and method which accurately determines a garment type in an image, provides image results from a database which contain similar garments to those in the source image, and allows a user to virtually 'fit' the garments in an image to a representation of themselves.
  • one aspect of the present invention provides a method for analysing an image to locate and isolate an object within the image, the image containing at least an object against a background, the method comprising classifying the colour of each pixel of the image, estimating, based on the colour of each pixel, the colour descriptor of the object and the colour descriptor of the background of the image, determining, based on the colour of the object and the colour of the background, the locations in the image of the object and the background, isolating the object in the image, and identifying the shape descriptor of the object.
  • classifying the colour of each pixel comprises first determining whether the pixel is white, grey or black, and if the pixel is not white, grey or black, determining whether the pixel is red, orange, yellow, green, cyan, blue, purple or pink.
  • the method further includes the step of transferring the pixel colour space from RGB (Red, Green, Blue) to HSI (Hue, Saturation, Intensity).
  • estimating the respective colour descriptors of the object and the background includes creating a colour histogram based on the pixel colours of the object and background.
  • estimating the respective colour descriptors of the object and background includes determining whether the colour of the object and the colour of substantially all of the background are similar.
  • the method includes the step of calculating the ratio of the number of pixels in the image having one colour to the total number of pixels in the image; if the ratio is calculated as being 0.8 or higher, it is concluded that the background and object are the same colour.
  • the step of determining the location of the object and the background comprises using analysis of the edges of the object.
  • the step of determining the location of the object and the background comprises discarding the pixel data relating to the background.
  • estimating the colour descriptors of the object and background further includes determining whether the background includes regions of a colour similar to the colour of the object.
  • the method further comprises clustering pixels forming the image and analysing the clusters by way of a k-means algorithm, separating the regions of similar colour in the background from the object.
  • the method includes the further steps of analysing the isolated object to identify areas of the object of a similar colour to the background, and if there are areas of a similar colour present in the object, applying a morphological operation to the image of the object to remove these areas.
  • the step of determining the locations of the object and background in the image includes assuming that the object is in a central region of the image.
  • the step of determining the locations of the object and the background in the image includes making an assumption regarding the location of the object with respect to the background, and comparing the estimated colours of the object and background to determine which is the object and which is the background.
  • the step of identifying the shape descriptor of the object includes comparing the object in the image with a selection of pre-determined objects having known shapes.
  • the method further includes the steps of comparing the shape descriptor of the object in the image with other images containing a similar object, and using the data obtained from the comparison to improve the shape descriptor identification.
  • the method further includes the step of identifying the pattern descriptor of the object in the image.
  • the step of identifying the pattern descriptor comprises using a k-means algorithm; using the k-means algorithm clusters similar patterns on the object, and the dominant pattern is determined to identify the pattern descriptor.
  • Another aspect of the present invention provides a method of searching for an image containing an object, the method comprising the steps of identifying an image to be used as the basis for the search, determining the complexity of the image, determining the bounds of the object within the image, identifying the shape descriptor of the object within the image based on the identified bounds of the object, determining the colour descriptor of the object within the image, comparing the object in the image with the content of other, predetermined images, based upon the colour and shape descriptors of the object, and returning images which, based on the comparison of the object of the image and the predetermined images, include content similar to the basis of the search.
  • the step of identifying the image to be used as the basis for the search includes receiving an image from a user.
  • the step of determining the complexity of the image includes performing pixel colour analysis of the image.
  • the step of determining the complexity includes using a human detection algorithm.
  • the step of determining the complexity further includes analysis of the background of the image.
  • the step of determining the complexity of the image includes analysing shapes present in the image.
  • the step of determining the bounds of the image includes performing edge analysis.
  • the step of determining the bounds of the image includes performing colour and texture analysis.
  • the step of determining the bounds of the image comprises manually determining the bounds.
  • the step of identifying the shape descriptor of the object within the image includes comparing the determined image bounds with a selection of pre-determined objects having known shape descriptors.
  • the step of determining the colour descriptor of the object includes analysing the colours of the pixels within the image.
  • the method further includes the step of creating a colour histogram based upon the pixel data.
  • the step of comparing the object in the image with predetermined images includes analysis of a database of pre-processed images.
  • the step of returning the results includes providing images and data relating to the images.
  • the data relating to the images includes price, size, colour, pattern, texture, textile and/or supply data.
  • the basis for the search further includes providing data relating to price, size, colour, pattern, texture, textile and/or supply data, which may be analysed and compared with predetermined catalogued data associated with the predetermined images.
  • the method further includes the step of identifying the pattern descriptor of the object in the image.
  • the step of identifying the pattern descriptor comprises using a k-means algorithm.
  • A yet further embodiment of the present invention provides a method of aligning a representation of a garment contained in a first image with a representation of a person contained in a second image, the method including the steps of identifying the garment in the first image and the person in the second image, analysing the shape of the garment in the first image, allocating nodes to predetermined points on the garment, associated with the garment shape and predetermined shape data, analysing the shape of the person in the second image to find the outline of the person in the second image, allocating nodes to predetermined points on the person, associated with the shape of the person and predetermined shape data, analysing the predetermined points of both the garment and the person and determining alignment information, manipulating the first and second images to align the predetermined points, scaling the first image based upon the dimensions of the second image, and overlaying the first image onto the second image.
  • identifying the garment in the first image includes comparing the garment shape with a selection of pre-determined objects having known shapes.
  • the method further includes the step of isolating the garment in the first image from the background of the image.
  • the step of isolating is carried out automatically.
  • the step of isolating is carried out manually.
  • analysing the shape of the garment includes analysing the outline of the garment and creating a map of the shape of the garment.
  • the step of allocating nodes to predetermined points on the garment includes performing shape analysis of the outline found for the garment and subsequently identifying the points based on predetermined criteria.
  • analysing the shape of the person includes analysing the outline of the person and creating a map of the shape of the person.
  • the method further includes the step of displaying the map of the shape of the person as an estimation of the outline of the person.
  • the step of allocating nodes to predetermined points on the person includes analysing the shape of the outline found for the person and subsequently identifying the points based on predetermined criteria.
  • the method further includes the step of placing the predetermined points of the garment in the first image on at least one of neck, shoulders, elbows, wrists, chest, navel, waist, hips, knees or ankles.
  • the method further includes the step of placing the predetermined points of the person in the second image on at least one of head, neck, shoulders, elbows, wrists, chest, navel, waist, hips, knees, ankles or feet.
  • the method further includes the step of analysing the predetermined points on both the garment and person to determine alignment information, which includes forming correspondences between the predetermined points on the garment in particular locations and predetermined points on the person in particular locations.
  • the particular locations on the person include the joints of the person.
  • the particular locations on the garment include the areas of the garment which correspond to joints of a person.
  • the step of manipulating the first and second images uses one-to-one mapping of the predetermined points.
  • the step of scaling the first image based on the dimensions of the second image includes analysing the distances between the predetermined points in both the first and second images and subsequently scaling the first image based upon the distances in the second image (a sketch of this appears after this list).
  • the step of scaling the first image further takes into account the outline determined for the person in the second image and includes scaling the garment accordingly.
  • the scaling takes into account the height and weight of the person in the second image.
  • the method further includes the step of analysing the lighting of the person in the second image and applying simulated lighting to the garment in the first image based on the lighting in the second image.
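  • As an illustration of the distance-based scaling step, the sketch below computes a single scale factor from corresponding node pairs. The function name, the use of consecutive node distances and the median ratio are assumptions for illustration; the patent does not prescribe a specific formula.

```python
import numpy as np

def scale_factor(garment_nodes, person_nodes):
    """Estimate a scale for the garment image from matching node pairs.

    Both inputs are (n, 2) arrays of points at corresponding predetermined
    locations (e.g. shoulders, waist, hips) in the two images.
    """
    # Distances between consecutive nodes in each image.
    g = np.linalg.norm(np.diff(garment_nodes, axis=0), axis=1)
    p = np.linalg.norm(np.diff(person_nodes, axis=0), axis=1)
    # Median of the per-pair ratios is robust to a single bad node.
    return float(np.median(p / g))
```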
  • Embodiments of the present invention will now be described, by way of example, with reference to the accompanying figures, in which:
  • FIGURE 1 shows a flow diagram incorporating both image search and garment fitting;
  • FIGURE 2 shows examples of images of garments to be isolated from the background;
  • FIGURE 3 shows examples of the isolation of the shapes of two of the garments shown in Figure 2;
  • FIGURE 4 shows a schematic view of the garment isolation process;
  • FIGURE 5 shows a schematic view of the garment fitting process;
  • FIGURE 6 shows a flow diagram of the fitment process;
  • FIGURE 7 shows stages of the virtual fitting process;
  • FIGURE 8 shows further stages of the virtual fitting process; and
  • FIGURE 9 shows the relative location of a vertex during a fitting process.
  • Each aspect of the present invention will be discussed in turn, and it is to be understood that each of the aspects may, if desired, be used in isolation from the others. Figure 1 shows a flow diagram setting out some of the steps associated with the image search and garment fitting aspects of the invention.
  • Image Search
  • The image search tool allows a user to search for images of garments, based upon matches to an existing image rather than keywords, and comprises two portions, which will be discussed in turn.
  • Analysis and Cataloguing
  • The first portion of the search tool is the analysis and cataloguing of images which are to be made available to be searched.
  • This analysis process is, in general, carried out as an offline cataloguing operation as new garment images are to be incorporated into the database. These images may be provided by a feed from a fashion house or fashion retailer, or may be manually incorporated.
  • The images may be analysed 'on-the-fly' as they are imported into the database, or alternatively the images may be imported and then analysed.
  • Image Isolation
  • In the analysis, the database images may first be segmented to isolate the garment in the image, and then their colour, shape and pattern descriptors extracted.
  • the analysis must locate and identify varying garment types which are pictured on varying backgrounds, because different fashion houses and clothing retailers use different types and configurations of images.
  • Figure 2 shows three such examples of garments to be analysed, with backgrounds of varying complexity.
  • For instance, some retailers use pictures of garments on a white or neutral background, some use images of garments worn by models, some use images with a shaded background, and some use images which include objects in the background or are more complex. Further, some images include a combination of some or all of the above.
  • the method employed in this analysis is a 'pixel colour'-based method.
  • When analysing images to extract the garment contained therein, there are a number of issues which must be overcome. These include situations in which the garment is a similar colour to the skin of the model (making it difficult to ascertain easily and with confidence where the garment ends and the model begins) and situations where the garment is a similar colour to the background of the picture (making it difficult to locate the garment against the background).
  • Colour Classification
  • each pixel is classified as a colour.
  • This colour is, in general, chosen from a set which includes white, black, grey, red, orange, yellow, green, cyan, blue, purple and pink.
  • First, each pixel is analysed to establish if the pixel colour is white, black or grey. If not, as a second step, one of the other colours is assigned, as will be explained in detail later.
  • The image is transferred from an RGB (Red, Green, Blue) colour space to an HSI (Hue, Saturation, Intensity) colour space, where three colour channels are extracted from the or each image. Additionally, a fourth channel, V, may be used which represents the minimum value of the RGB components in each pixel.
  • If a pixel is determined to be white, grey or black, the pixel colour information is recorded. However, if a pixel is not identified as being white, black or grey, further analysis is required.
  • The colour of each pixel may be determined using threshold equations on the hue channel, for example: red if 0.96 ≤ H or H ≤ 0.03 (with similar hue bands for the remaining colours).
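  • A minimal sketch of such a classifier is shown below. Only the red band is taken from the text; the remaining hue bands and the white/black/grey thresholds are illustrative assumptions, and HSV is used as a readily available stand-in for HSI.

```python
import colorsys

# Hue bands as fractions of the hue circle. Only the red band
# (0.96 <= H or H <= 0.03) comes from the text; the others are
# illustrative assumptions for the remaining named colours.
HUE_BANDS = [
    (0.03, 0.10, "orange"),
    (0.10, 0.17, "yellow"),
    (0.17, 0.45, "green"),
    (0.45, 0.55, "cyan"),
    (0.55, 0.75, "blue"),
    (0.75, 0.85, "purple"),
    (0.85, 0.96, "pink"),
]

def classify_pixel(r, g, b, white_i=0.85, black_i=0.15, grey_s=0.15):
    """Classify an RGB pixel (components in [0, 1]) as one named colour."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)  # HSV used as a stand-in for HSI
    # Step 1: decide white / black / grey from intensity and saturation.
    if v <= black_i:
        return "black"
    if s <= grey_s:
        return "white" if v >= white_i else "grey"
    # Step 2: otherwise assign a chromatic colour from the hue band.
    if h >= 0.96 or h <= 0.03:
        return "red"
    for lo, hi, name in HUE_BANDS:
        if lo < h <= hi:
            return name
    return "red"  # wrap-around safety net

print(classify_pixel(0.9, 0.1, 0.1))  # -> 'red'
```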
  • This cataloguing method assigns a colour to all pixels of the image, including background pixels. Once this has been completed, it is necessary to isolate the garment from the remainder (the background) of the image so as to obtain the information regarding the garment.
  • FIGS. 3a and 3b show examples of isolation - two of the garments shown in Figure 2 are shown in isolation, with the shape of the garment accurately isolated in the image.
  • images of garments which are to be catalogued into the database include the or each garment presented against a white or monotonic background. If the background is white (or a monotonic colour), it is possible to discard parts of the image which are white (or the monotonic colour).
  • In the case of a white garment on a white background (or indeed a monotonic garment on a monotonic background of the same colour), purely discarding the white (or monotonic) pixels would remove the background and the garment, and the process would fail.
  • If the garment shown in an image is white, the majority of the pixels in the image will be white in colour. If the ratio of white pixels to all pixels in the image is larger than a threshold ratio, preferably a ratio of 0.8, it can be said that the garment in the image is white. However, in a situation where there are other objects in the background of the image, the threshold ratio will not be reached. Therefore, it is also necessary to consider one third of the width of the image (in the horizontal direction) and the full height of the image (in the vertical direction) and calculate the ratio of white pixels in that section.
  • the background may be removed, with the foreground of the image (the area which contains the garment) being retained.
  • some pixels from the foreground may also be removed.
  • The non-white pixels of the garment may be considered as being 'object' pixels, and the white pixels located in the garment may be considered as 'holes'. Therefore, any white colour regions in the texture of a garment appear to be small 'holes'. These 'holes' may then be removed using morphological methods, without affecting the white pixels of the background. This makes it possible to distinguish between removed white pixels in the garment and the non-removed pixels as the background colour, allowing removal of the white background.
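  • The following sketch illustrates the ratio test and the hole-filling idea on a boolean mask of white pixels; the helper name and the use of scipy's binary_fill_holes are assumptions, not the patent's own implementation.

```python
import numpy as np
from scipy import ndimage

def isolate_on_white(white_mask, ratio_threshold=0.8):
    """Separate a garment from a white background.

    `white_mask` is a boolean HxW array marking pixels classified as white.
    Returns a foreground mask, or None when the garment itself appears white.
    """
    h, w = white_mask.shape
    # Global white ratio: above the threshold the garment itself is white.
    if white_mask.mean() > ratio_threshold:
        return None  # white-on-white: fall back to edge-based analysis
    # Also check the central third of the width over the full height,
    # since background clutter can depress the global ratio.
    centre = white_mask[:, w // 3 : 2 * w // 3]
    if centre.mean() > ratio_threshold:
        return None
    # Foreground = non-white pixels; white 'holes' inside the garment are
    # filled morphologically so garment texture is kept, while white pixels
    # connected to the border remain background.
    foreground = ~white_mask
    return ndimage.binary_fill_holes(foreground)
```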
  • the present invention includes a skin and hair detection algorithm to remove the skin and hair from an image.
  • However, where the garment is a similar colour to the skin or hair of the model, simple removal of pixels having the same colour as the skin or hair also results in removal of the garment.
  • In such images, the garment either has small regions of skin and/or hair colour among other colours, or it has a uniform, dominant colour similar to that of skin and/or hair.
  • In the first case, these small regions may be detected using morphological operations. After this morphological step, if the ratio of skin and/or hair pixels to the whole number of foreground pixels is larger than a threshold, preferably 0.85, it may indicate that the garment's colour is similar to skin and/or hair.
  • In this case, a k-means algorithm may be used: the k-means algorithm partitions a number, n, of observations into k clusters, with each observation belonging to the cluster with the nearest mean, through an iterative refinement process. Taking into account that skin and/or hair are usually present in the outer part of the foreground, the algorithm may then determine which cluster belongs to the garment and which belongs to skin and/or hair.
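  • A sketch of this clustering step is shown below; choosing the garment cluster as the one nearest the image centre is an assumed reading of "skin and/or hair are usually present in the outer part of the foreground".

```python
import numpy as np
from sklearn.cluster import KMeans

def split_garment_from_skin(pixels_rgb, coords, img_shape, k=2):
    """Cluster foreground pixels by colour and keep the more central cluster.

    pixels_rgb: (n, 3) colours of foreground pixels; coords: (n, 2) their
    (row, col) positions; img_shape: shape of the source image.
    """
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pixels_rgb)
    centre = np.array(img_shape[:2]) / 2.0
    # Mean distance of each cluster from the image centre: the garment
    # cluster is assumed to be the one closest to the centre.
    mean_dist = [np.linalg.norm(coords[labels == c] - centre, axis=1).mean()
                 for c in range(k)]
    garment_cluster = int(np.argmin(mean_dist))
    return labels == garment_cluster
```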
  • the background may be removed semi-automatically - a user may be prompted to identify the region of the image which contains the garment and the background region of the image, by way of clicking, or drawing a line or number of lines in part of the object or garment, along with a line or number of lines in part of the background.
  • a user may be asked to draw a bounding box around the object or garment in the image to aid in the isolation.
  • the line-drawing or box-drawing process may be repeated in an iterative fashion to improve the accuracy of the isolation of the image or garment.
  • the background may also be removed manually.
  • Figure 4 shows a schematic view of the isolation process, including a dotted-line indication of the background of the image and a solid line indication of the garment.
  • The object or garment in the foreground may then be isolated.
  • Once the garment has been isolated, the colour, pattern and shape descriptors may be extracted from the image.
  • the colour information obtained in the analysis step may be used to generate a colour histogram for the garment. This allows the estimation of the colour of the garment.
  • To create the pattern descriptor, the edge information in areas of the garment is analysed. This analysis is carried out using a k-means algorithm to cluster the similar patterns in the garment (one garment may include different patterns). Then, the most dominant clusters determined using the k-means algorithm are used to create the pattern descriptor for the garment.
  • To describe the shape of the garment, an effective shape descriptor may be used. If the image being analysed shows the garment worn by a model, the garment silhouette is not the same as the original shape of the garment.
  • the shape of the garment in the image is dependent, in general, on the pose adopted by the model in the image, and the way in which some parts of the garment may be hidden by the model's limbs.
  • the present invention addresses the problems associated with this shape difference.
  • In some cases, retailers provide both 'flat' images of a garment and images of the garment being worn by a model.
  • In this case, the two garment images are analysed using the above-discussed methods, and subsequently compared, with the relationship between the corresponding images used to determine the shape descriptor of the garment.
  • the images may be compared with images of other, similar garments in the database which are shown in both 'flat' and 'worn' configurations to find the most similar picture and therefore to identify the shape descriptor of the unworn (or 'flat') garment.
  • The most similar picture may be determined based on the pose of the model and garment silhouette, and the shape descriptor of the garment is assigned based upon this data.
  • The data is then stored in a database and made available to be searched. Retailers may provide a stream of images of garments to be included in the database, with the above method being carried out in an automated fashion, or in batches.
  • The second portion of the search tool is the upload and processing step for searching.
  • the user may upload an image to a host or into a database accessible by the searching algorithm in a known fashion, or alternatively, link to an image which is already available online. This image may then be processed to be used as the basis for the image search.
  • the processing step may then be carried out on the image.
  • the complexity of the image may be determined. This is preferably an automated step, and the complexity is determined using details extracted from the image.
  • It is likely that the images used will contain a human; therefore, human detection algorithms may be used to detect whether any human parts (face, torso, legs, arms, body and the like) are present in the image.
  • the colours clustered in the image borders and/or background are analysed to determine the complexity of the image based upon the number of colours present in the borders and/or background. The human detection coupled with the analysis of the colours may then be used to determine the complexity of the background.
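  • By way of illustration, the sketch below combines an off-the-shelf person detector with a count of distinct quantised border colours; the specific detector (OpenCV's HOG people detector) and the quantisation scheme are assumptions, not the patent's own choices.

```python
import cv2
import numpy as np

def estimate_complexity(image_bgr, border=10, colour_bins=8):
    """Rough image-complexity signals: person detection plus a count of
    distinct quantised colours in the image border."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, _ = hog.detectMultiScale(image_bgr)
    has_person = len(rects) > 0
    # Collect the border pixels and quantise them coarsely.
    top, bottom = image_bgr[:border], image_bgr[-border:]
    left, right = image_bgr[:, :border], image_bgr[:, -border:]
    border_px = np.concatenate(
        [x.reshape(-1, 3) for x in (top, bottom, left, right)])
    quantised = (border_px // (256 // colour_bins)).astype(np.uint8)
    n_colours = len(np.unique(quantised, axis=0))
    return {"person_detected": has_person, "border_colours": n_colours}
```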
  • Next, the bounds of the garment within the image may be determined.
  • a user may be asked to identify the bounds of the garment in the image that is to be used as the search parameter, by indication of e.g. a bounding box or indication line.
  • the bounds of the garment may be determined automatically, or using the methods of identification discussed above.
  • characteristics of the garment such as shape, colour and the like may be extracted from the image using the above algorithms and image processing techniques.
  • the garment class (e.g. tops, trousers, dresses, shirts, etc.) may also be determined, as discussed in more detail below. Search characteristics may then be compared with predetermined characteristics extracted from images of garments in a database, to provide the user with garment images and details which are similar to the garment in the uploaded image, also discussed in more detail below.
  • Garment Classes
  • the analysis engine supports various garment classes, such that when an image is uploaded, and the image is analysed, no user input is required to identify the garment class.
  • the analysis engine may include both a selection of pre-set 'template' garment classes, which may include exaggerated or 'stylised' garments and a selection of real garment images which represent each of the garment classes.
  • the processing of the uploaded image assesses the content of the image, and as a first step, may analyse the intensity of the regions of the image, clustering them together to form areas of the image. The areas may then be merged, based upon assumptions made about the foreground and background of the image. This assumption regarding the foreground and background may be based upon input from a user on image upload, but if no user input is available, the system may assume that the corners of the image are background, with the rest of the image being the foreground. This may be carried out in the same way as for the image cataloguing analysis discussed above.
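  • A minimal sketch of the corner assumption, using intensity clustering: the cluster that dominates the four corners is taken to be background. The cluster count and corner size are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def foreground_mask(grey_img, k=4, corner=8):
    """Cluster pixel intensities, then treat the cluster that dominates the
    four image corners as background (used when no user input is given)."""
    h, w = grey_img.shape
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(
        grey_img.reshape(-1, 1)).reshape(h, w)
    corners = np.concatenate([
        labels[:corner, :corner].ravel(), labels[:corner, -corner:].ravel(),
        labels[-corner:, :corner].ravel(), labels[-corner:, -corner:].ravel()])
    # The most frequent corner cluster is assumed to be the background.
    bg_label = np.bincount(corners).argmax()
    return labels != bg_label
```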
  • an iterative process may be undertaken, using regression techniques, to extract the block shape of the garment in the image. If the outline of the garment in the image is particularly 'jagged', a final step may be undertaken, which smoothes out the outline of the garment. The class of the garment may then be detected. This detection method may be applied to images of garments provided by retailers. Alternatively, the isolation process may be carried out in line with the above discussion.
  • Because the detectable classes of garments are pre-set, there are only a limited number of garment classes which may be identified; the image may therefore be uploaded by a user, analysed by the system without input from the user, and then passed to the search engine.
  • The class of garment is deduced from the outline of the garment by comparison with existing data in the database, as discussed above, and the garment class which returns the most likely match is assigned.
  • the garment class may be identified by the user.
  • the data extracted from the garment is then compared against the images of garments in the database, to find garments which 'match' the image uploaded by the user, and the results of the search are displayed to the user.
  • the search may also include parameters which are associated with the image of the garment, such as (but not limited to) price, size, colour, pattern, texture, textile and/or supply data. These parameters may be input by the user when the image which is to be the basis of the search is identified, and may also be stored in the database with the analysed images as meta-data to improve the search experience and usefulness of the search.
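  • A sketch of how such a comparison might be ranked is shown below, using histogram intersection over colour descriptors and filtering by garment class; the entry fields and scoring scheme are assumptions for illustration, not the patent's specification.

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity of two normalised colour histograms (1.0 = identical)."""
    return float(np.minimum(h1, h2).sum())

def search(query, catalogue, top_n=5):
    """Rank pre-processed catalogue entries against the query descriptors.

    Each entry is a dict with 'colour' (a normalised histogram), 'shape'
    (a garment-class label) and optional metadata such as price or size.
    """
    scored = []
    for entry in catalogue:
        if entry["shape"] != query["shape"]:
            continue  # only compare garments of the same class
        score = histogram_intersection(query["colour"], entry["colour"])
        scored.append((score, entry))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [entry for _, entry in scored[:top_n]]
```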
  • Images uploaded by a user and/or images from the database may also be virtually 'fitted' on to an image of a person.
  • Figure 5 shows a schematic view of the virtual garment fitting process.
  • the virtual fitting aspect of the present invention allows for the alignment of a representation of a garment contained in an image with a representation of a person stored in a second image.
  • the process will now be discussed in detail.
  • the virtual fitting is the alignment of an image of a garment with an image of a person, such that the garment is scaled and deformed to 'fit' onto the image of the person.
  • images of garments may be processed to analyse the shape and fit of the garments. This analysis may follow on from the image processing steps set out above.
  • The shape and fit analysis process involves analysis of the shape characteristics of the garments, and places a number of 'nodes' at points of the garment to enable accurate and effective scaling and fitment of the garment.
  • Nodes and Predetermined Points
  • the 'nodes' are, in general, placed at predetermined 'key' points of the garment, for instance with a jumper, the nodes may be placed at the portions of the garment which correspond to wrists, elbows, shoulders and chest, and in the case of trousers, the nodes may be placed at ankles, knees, hips and waist. It is to be understood that the nodes may be placed in positions which, dependent on the shape and fit of the garment, will allow accurate virtual fitment.
  • the system identifies the outline of the shape of the person in the image. If this cannot be achieved automatically, the user may be asked to position an 'outline' shape of a person on the image of the person.
  • the outline shape may be scaled and deformed to fit with the shape of the person in the picture.
  • the outline may, alternatively, be determined using the background isolation steps described above.
  • the predetermined 'key' points on the body of the person may then be identified using image analysis techniques. Alternatively, the predetermined points may be indicated by a user. Then, nodes may be identified which correspond to the shape and position of the person. These nodes may be joined together to form an outline of a rough 'skeleton' of the person, within the outline of the person determined previously.
  • the user may be asked to confirm the location of key 'nodes', such as those placed at elbows, knees and head. Once the nodes have been positioned, the image and associated spatial data is stored, and the image is ready for the subsequent fitment of a garment.
  • the user may then select an image of a garment (which may be an image returned by the search discussed above) and 'fit' it to their image, virtually.
  • the image of the garment will have been analysed as above, and will then be manipulated and scaled, using image transformation algorithms, to 'fit' with the image of the person, using the nodes at the predetermined points in both the garment and person images.
  • the present invention may further include a measurement module which may be used to measure the sizes of portions of a person in an image, based on the height and weight of the person in the image. This measurement module can be used to provide more accurate scaling and fitment of garments.
  • the lighting conditions in the picture of the user may be analysed, and similar lighting is applied to the virtually fitted garment to improve the realism and quality of the virtual fitting.
  • the virtual fitting system allows for the recognition and overlaying of a user's hair in the uploaded picture.
  • the hair in the uploaded image may be identified using the processing techniques detailed above, with the hair placed over the fitted garment where necessary.
  • The virtual fitting algorithm requires correspondence between the garment silhouette and the model in the image of the user (the 'target image'). To provide optimal fitting results, the algorithm serves to preserve the characteristic features of the garment, i.e. the general fit of the garment and its specific shape. Also, the algorithm reflects the specific body shape of the model and deforms the garment accordingly, and minimises deviations from the original garment shape in order not to introduce unnecessary visual artefacts.
  • the virtual fitting algorithm consists of three main phases, which are shown in Figure 6 and are outlined in the following sections.
  • the algorithm requires three inputs, which are:
  • While the silhouette templates may differ in appearance to reflect garment and model body shape characteristics, both templates share the same number of points, which are ordered in a similar fashion. Therefore, there is a specific one-to-one mapping between points at key positions along the silhouette on both images. For the rest of the algorithm, it may be assumed that the input silhouettes are generally correctly aligned to the garment and model respectively.
  • the contour selection phase shown schematically in Figure 7 guarantees the best compromise between conformance to the model silhouette and garment shape fidelity. Selecting contour points exclusively from the edge of the garment binary mask perfectly preserves the garment shape but does not provide any information about the body position in relation to the garment itself. Conversely, selecting points from the silhouette template would cause the garment shape to be significantly altered during morphing and fitment, which may result, for example, in loose dresses being fitted tightly to the body.
  • The algorithm proceeds by automatically detecting the garment contour from the binary mask. This contour g(i) is then checked against the silhouette template on the garment image, s_g(j), for all point pairs (i, j), and a decision is made according to: c(i) = s_g(j) if ||g(i) − s_g(j)|| < τ (for some threshold τ), and c(i) = g(i) otherwise.
  • The final contour c(i) is obtained by selecting a point from the body silhouette template s_g(j) if this is close to the garment contour, thus providing body alignment information, while the other points are taken from the extracted garment contour g(i), thus preserving the original garment shape.
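  • The decision rule can be sketched as follows; the threshold value tau is an assumption.

```python
import numpy as np

def select_contour(garment_contour, silhouette_pts, tau=12.0):
    """Build the final contour c(i): take the nearest silhouette template
    point where it lies within distance tau of the garment contour point
    (providing body alignment), otherwise keep the garment contour point
    (preserving the garment shape)."""
    final, from_silhouette = [], []
    for g in garment_contour:
        d = np.linalg.norm(silhouette_pts - g, axis=1)
        j = int(np.argmin(d))
        if d[j] < tau:
            final.append(silhouette_pts[j])
            from_silhouette.append(True)
        else:
            final.append(g)
            from_silhouette.append(False)
    return np.array(final), np.array(from_silhouette)
```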
  • The geometric relationship between the two silhouettes is defined by the general affine transform s_m(j) ≈ A·s_g(j) + t, where s_m(i) is the silhouette template in the model image.
  • The transform parameters are estimated by least squares: [A | t] = argmin over (A, t) of Σ_j ||A·s_g(j) + t − s_m(j)||². This is computed using all points from the two templates, regardless of which ones have been used in the final contour c(i). This ensures that as much information as possible about the geometric relationship between the two templates in all areas of the image is provided for the estimated transform.
  • The computed transform is only applied to points in c(i) chosen from the garment contour, whereas those originally belonging to s_g(i) are replaced with their matches from s_m(i).
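  • The least-squares estimate can be computed directly, as in the sketch below (plain numpy; the function name is illustrative).

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform (A, t) with dst ~ A @ src + t.

    src, dst: (n, 2) arrays of corresponding template points; all template
    points are used, as described above."""
    n = src.shape[0]
    # Homogeneous design matrix: [x, y, 1] per point.
    X = np.hstack([src, np.ones((n, 1))])
    # Solve for the 3x2 parameter matrix in one shot.
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A, t = params[:2].T, params[2]
    return A, t

# Applying the transform to contour points taken from the garment:
# transformed = pts @ A.T + t
```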
  • the algorithm iteratively identifies segments originating from the garment contour that are discontinuous with respect to their surrounding segments.
  • The discontinuities can fall under two possible cases. The first is segments obtained from the transformed g(i) that are sandwiched between others obtained from s_m(i), which are fixed; the second is segments whose end points are separated by a distance that exceeds a predefined threshold.
  • The algorithm proceeds by recursively finding the translation aligning the end points of two consecutive misaligned segments. The recursion is iterated until a fixed segment is found. When the algorithm terminates, two translations t_1 and t_2 are found for the two endpoints of each segment P_k(m), m ∈ [0..M], where k is the index of the segment and m is the index of the point within the k-th segment. The translation is then propagated to all other points in the segment by a weighted average of t_1 and t_2.
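  • A sketch of the propagation step is shown below; the linear weighting by point index is an assumed form of the 'weighted average', which the text does not spell out.

```python
import numpy as np

def propagate_translation(segment, t1, t2):
    """Blend the endpoint translations across a segment's points.

    segment: (M+1, 2) array of points P_k(0..M); t1, t2: translations found
    for the segment's two endpoints."""
    M = len(segment) - 1
    out = segment.astype(float).copy()
    for m in range(M + 1):
        w = m / M if M else 0.0  # 0 at the first endpoint, 1 at the last
        out[m] += (1.0 - w) * np.asarray(t1) + w * np.asarray(t2)
    return out
```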
  • The propagation of the translation concludes the algorithm, whose output is a set of two contours in the garment and model images with known correspondences. This then constitutes the input of the morphing stage for the final phase of the virtual fitting algorithm.
  • the relationship between these corresponding key points is used to obtain the way in which the garment should be deformed to fit on the model.
  • the corresponding key points form a relationship between the picture of the model and the picture of the garment, as shown in Figure 8. This relationship is used, along with a regular rectangular grid to deform the garment based on the relationship between corresponding points to fit it on the model.
  • a rectangular grid is superimposed on to the model, as can be seen in Figure 8. For each vertex of the grid, its relative location with respect to the key points in close proximity to it is calculated. For each vertex of the grid, those key points that lie inside the cells containing that vertex are considered as neighbours of that vertex.
  • The relative location of each vertex is obtained with respect to its neighbouring key points. For each pair of neighbouring key points, the relative location of the vertex is obtained and saved. These relative locations are applied to the corresponding key points on the garment to obtain the corresponding vertex on the garment. Doing this for all vertices of the grid, the corresponding grid for the garment is obtained.
  • v is taken to be a vertex point and p_1 and p_2 two neighbouring key points of v.
  • The relative location of v is then expressed by the equation v = p_1 + a·(p_2 − p_1) + b·(p_2 − p_1)^⊥, where (p_2 − p_1)^⊥ denotes the perpendicular of (p_2 − p_1).
  • (a, b) is the relative location of v with respect to p_1 and p_2, and can be obtained by solving a linear system of equations.
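  • The sketch below solves the 2×2 linear system for (a, b) and re-applies the coordinates to the corresponding garment key points. Using the perpendicular vector as the second basis direction is an assumption, chosen so that a linear system in (a, b) is well posed.

```python
import numpy as np

def perp(v):
    """90-degree rotation of a 2D vector."""
    return np.array([-v[1], v[0]])

def relative_coords(v, p1, p2):
    """Solve v = p1 + a*(p2 - p1) + b*perp(p2 - p1) for (a, b)."""
    d = p2 - p1
    basis = np.column_stack([d, perp(d)])  # 2x2 linear system
    a, b = np.linalg.solve(basis, v - p1)
    return a, b

def apply_coords(a, b, q1, q2):
    """Re-express the vertex against the corresponding garment key points."""
    d = q2 - q1
    return q1 + a * d + b * perp(d)

# Round trip: the vertex keeps its position relative to the key points.
v = np.array([3.0, 2.0]); p1 = np.array([0.0, 0.0]); p2 = np.array([4.0, 0.0])
a, b = relative_coords(v, p1, p2)
print(apply_coords(a, b, p1 + 1, p2 + 1))  # -> [4. 3.]
```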
  • In the first step, the corresponding vertices for those vertices that have neighbouring key points are obtained.
  • In the second step, the corresponding vertices of vertices inside the convex hull formed by the key points are found, using the correspondences determined in the previous step.
  • The vertices considered in the first step are the first set and those considered in the second step are the second set.
  • For each vertex in the set, its local coordinates with respect to points that are in the same row, as well as the points that are in the same column, are obtained and applied to the corresponding points on the garment. Doing so, the corresponding vertices of the set on the garment are obtained.
  • Then the first set is extended, and further vertices of the grid in a margin around the second set are considered.
  • For these, the relative coordinates of a point are obtained with respect to the two nearest key points in the same line, and these coordinates are applied to the corresponding key points on the garment to obtain the corresponding points on the garment.
  • Finally, the texture inside each cell of the grid may be mapped from the garment to its corresponding cell on the model to synthesize the deformed picture of the garment on the model.
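  • Cell-by-cell texture mapping between two corresponding grids can be sketched with scikit-image's piecewise affine warp, which triangulates the grids and applies a local transform per cell; treating this as equivalent to the per-cell mapping described above is an assumption.

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def render_garment(garment_img, garment_grid, model_grid, out_shape):
    """Map the garment texture cell-by-cell onto the model image.

    garment_grid / model_grid: (n, 2) arrays of matching grid vertices in
    (x, y) order. PiecewiseAffineTransform triangulates the grid, giving a
    local affine transform per cell."""
    tform = PiecewiseAffineTransform()
    # warp() pulls pixels from the input, so estimate model -> garment.
    tform.estimate(model_grid, garment_grid)
    return warp(garment_img, tform, output_shape=out_shape)
```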

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Architecture (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Description

Title: A System and Method for Image Analysis Description of Invention
The present invention relates to a system and method for the identification of an item in an image, and the analysis of an item in an image and the matching of the image with other images including similar items. Additionally, the present invention relates to the fitting of an item in a first image to a particular shape stored in a second image. More specifically, the present invention relates to the identification of a garment in an image, the matching of a garment in the image with other, similar garments in images and the virtual fitment of a garment shown in an image on a representation (or image) of a person.
Particularly, it is desirable to be able to search for a garment or item of clothing which is similar to another, particular type or style of garment or item of clothing, view similar garments or items of clothing and virtually 'fit' the garment or item of clothing to an image of a person.
Previous systems have been suggested which allow limited searching of images by using an image as the basis of the search (or the 'search parameters'). The tools which exist presently, such as Google's "Google Goggles"®, use analysis of the shape and colour contained in an image to provide similar images. Google Goggles uses object recognition algorithms, and is reliant on the extensive library of images that Google already possesses in their databases (as evidenced by Google's image search function). It is known that Goggles does not work well with images of 'soft' or 'organic' shapes, such as clothing, furniture or animals. Additionally, there are known services which take input in the form of uploaded images containing garments, and seek to match the garment in the image with similar garments in the database. These systems, however, require a user interaction to identify which type of garment is present in the image. Such services are limited, because they are heavily reliant upon user interaction.
WO01/1 1886 discloses a system for the virtual fitting of clothing which analyses only the 'critical' points of a garment. Additionally, systems are known which require the preparation of time-consuming 3D scans of garments to be 'fitted' prior to the garments being available for online virtual fitting (on a 3D torso based on measurements. Additionally, other known systems utilise pre-acquired images of clothing on an adjustable model which has been preset to model particular body shapes. It is an object of the present invention to provide a system and method which accurately determines a garment type in an image, provides image results from a database which contain similar garments to those in the source image, and allows a user to virtually 'fit' the garments in an image to a representation of themselves.
Accordingly, one aspect of the present invention provides a method for analysing an image to locate and isolate an object within the image, the image containing at least an object against a background, the method comprising classifying the colour of each pixel of the image, estimating, based on the colour of each pixel, the colour descriptor of the object and the colour descriptor of the background of the image, determining, based on the colour of the object and the colour of the background, the locations in the image of the object and the background, isolating the object in the image, and identifying the shape descriptor of the object. Preferably, classifying the colour of each pixel comprises first determining whether the pixel is white, grey or black, and if the pixel is not white, grey or black, determining whether the pixel is, grey, red, orange, yellow, green, cyan, blue, purple or pink.
Conveniently, after the colour classification step, the method further includes the step of transferring the pixel colour space from RGB (Red, Green, Blue) to HSI (Hue, Saturation, Intensity). Advantageously, estimating the respective colour descriptors of the object and the background includes creating a colour histogram based on the pixel colours of the object and background.
Preferably, estimating the respective colour descriptors of the object and background includes determining whether the colour of the object and the colour of substantially all of the background are similar.
Conveniently, the method includes the step of calculating the ratio of the number of pixels in the image having one colour to the total number of pixels in the image.
Advantageously, if the ratio is calculated as being 0.8 or higher, concluding that the background and object are the same colour. Preferably, if the estimated colour descriptors of the object and the background are similar, the step of determining the location of the object and the background comprises using analysis of the edges of the object.
Conveniently if the colour descriptors of the object and background are not similar, the step of determining the location of the object and the background comprises discarding the pixel data relating to the background. Advantageously, estim ati ng the colour descriptors of the object and background further includes determining whether the background includes regions of a colour similar to the colour of the object.
Preferably, if it is determined that the background includes regions of a colour sim ilar to the colour of the object, the method further comprises clustering pixels form ing the image and analysing the clusters by way of a k-means algorithm, separating the regions of similar colour in the background from the object.
Conveniently, the method includes the further steps of analysing the isolated object to identify areas of the object of a similar colour to the background, and if there are areas of a similar colour present in the object, applying a morphological operation to the image of the object to remove these areas.
Advantageously, the step of determining the locations of the object and background in the image includes assuming that the object is in a central region of the image.
Preferably, the step of determining the locations of the object and the background in the image includes making an assumption regarding the location of the object with respect to the background, and comparing the estimated colours of the object and background to determine which is the object and which is the background.
Conveniently, the step of identifying the shape descriptor of the object includes comparing the object in the image with a selection of pre-determined objects having known shapes. Preferably, the method further includes the steps of comparing the shape descriptor of the object in the image with other images containing a similar object, and using the data obtained from the comparison to improve the shape descriptor identification.
Conveniently, the method further includes the step of identifying the pattern descriptor of the object in the image.
Advantageously, the step of identifying the pattern descriptor comprises using a k-means algorithm.
Preferably, using the k-means algorithm clusters similar patterns on the object, and determining the dominant pattern to identify the pattern descriptor. Another aspect of the present invention provides a method of searching for an image containing an object, the method comprising the steps of identifying an image to be used as the basis for the search, determining the complexity of the image, determining the bounds of the object within the image, identifying the shape descriptor of the object within the image based on the identified bounds of the object, determining the colour descriptor of the object within the image, com pari ng the object in the im age with the content of other, predetermined images, based upon the colour and shape descriptors of the object, and returning images which, based on the comparison of the object of the image and the predetermined images, include content similar to the basis of the search.
Preferably, the step of identifying the image to be used as the basis for the search includes receiving an image from a user. Conveniently, the step of determining the complexity of the image includes performing pixel colour analysis of the image. Advantageously, the step of determ ining the complexity includes using a human detection algorithm. Preferably, the step of determining the complexity further includes analysis of the background of the image.
Conveniently, the step of determining the complexity of the image includes analysing shapes present in the image.
Advantageously, the step of determining the bounds of the image includes performing edge analysis.
Preferably, the step of determ in ing the bounds of the image includes performing colour and texture analysis.
Conveniently, the step of determining the bounds of the image comprises manually determining the bounds. Advantageously, the step of identifying the shape descriptor of the object within the image includes comparing the determined image bounds with shapes with a selection of p re-determined objects having known shape descriptors. Preferably, the step of determining the colour descriptor of the object includes analysing the colours of the pixels within the image.
Conveniently, the method further includes the step of creating a colour histogram based upon the pixel data. Advantageously, the step of com pari ng the obj ect i n the i m age with predetermined images includes analysis of a database of pre-processed images. Preferably, the step of returning the results includes providing images and data relating to the images.
Conveniently, the data relating to the images includes price, size, colour, pattern, texture, textile and/or supply data.
Advantageously, the basis for the search further includes providing data relating to price, size, colour, pattern, texture, textile and/or supply data.
Preferably, further including analysing and comparing the data with predetermined catalogued data associated with the predetermined images.
Conveniently, the method further includes the step of identifying the pattern descriptor of the object in the image. Advantageously, the step of identifying the pattern descriptor comprises using a k-means algorithm.
A yet further embodiment of the present invention provides a method of aligning a representation of a garment contained in a first image with a representation of a person contained in a second image, the method including the steps of identifying the garment in the first image and the person in the second image, analysing the shape of the garment in the first image allocating nodes to predetermined points on the garment, associated with the garment shape and predetermined shape data, analysing the shape of the person in the second image to find the outline of the person in the second image, allocating nodes to predetermined points on the person, associated with the shape of the person and predeterm ined shape data, analysing the predetermined points of both the garment and the person and determining alignment information, manipulating the first and second images to align the predetermined points, scaling the first image based upon the dimensions of the second image, and overlaying the first image onto the second image.
Preferably, identifying the garment in the first image includes comparing the garment with shapes with a selection of pre-determined objects having known shapes.
Conveniently, the method further includes the step of isolating the garment in the first image from the background of the image.
Advantageously, the step of isolating is carried out automatically.
Preferably, the step of isolating is carried out manually.
Conveniently, analysing the shape of the garment includes analysing the outline of the garment and creating a map of the shape of the garment.
Advantageously, the step of allocating nodes to predetermined points on the garment includes perform ing shape analysis of the outline found for the garment and subsequently identifying the points based on predetermined criteria.
Preferably, analysing the shape of the person includes analysing the outline of the person and creating a map of the shape of the person.
Conveniently, the method further includes the step of displaying the map of the shape of the person as an estimation of the outline of the person. Advantageously, the step of allocating nodes to predetermined points on the person includes analysing the shape of the outline found for the person and subsequently identifying the points based on predetermined criteria. Preferably, the method further includes the step of placing the predetermined points of the garment in the first image on at least one of neck, shoulders, elbows, wrists, chest, navel, waist, hips, knees or ankles.
Conveniently, the m ethod further incl udes the step of placing the predetermined points of the person in the second image on at least one of head, neck, shoulders, elbows, wrists, chest, navel, waist, hips, knees, ankles or feet.
Advantageously, the method further includes the step of analysing the predetermined points on both the garment and person to determine alignment information includes form ing correspondences between the predetermined points on the garment in particular locations and predetermined points on the person in particular locations. Preferably, the particular locations on the person include the joints of the person.
Conveniently, the particular locations on the garment include the areas of the garment which correspond to joints of a person.
Advantageously, the step of manipulating the first and second images uses one-to-one mapping of the predetermined points.
Preferably, the step of scaling the first image based on the dimensions of the second image include analysing the distances between the predetermined points in both the first and second images and subsequently scaling the first image based upon the distances in the second image.
Conveniently, the step of scaling the first image further takes into account the outline determined for the person in the second image and includes scaling the garment accordingly.
Advantageously, the scaling takes into account the height and weight of the person in the second image.
Preferably, the method further includes the step of analysing the lighting of the person in the second image and applying simulated lighting to the garment in the first image based on the lighting in the second image.

Embodiments of the present invention will now be described, by way of example, with reference to the accompanying figures, in which:
FIGURE 1 shows a flow diagram incorporating both image search and garment fitting;
FIGURE 2 shows examples of images of garments to be isolated from the background;
FIGURE 3 shows examples of the isolation of the shapes of two of the garments shown in Figure 2;
FIGURE 4 shows a schematic view of the garment isolation process;
FIGURE 5 shows a schematic view of the garment fitting process;
FIGURE 6 shows a flow diagram of the fitment process;

FIGURE 7 shows stages of the virtual fitting process;
FIGURE 8 shows further stages of the virtual fitting process; and
FIGURE 9 shows the relative location of a vertex during a fitting process.
Each aspect of the present invention will be discussed in turn, and it is to be understood that each of the aspects may, if desired, be used in isolation from the others.

Figure 1 shows a flow diagram setting out some of the steps associated with the image search and garment fitting aspects of the invention.
Image Search
The image search tool allows a user to search for images of garments, based upon matches to an existing image rather than keywords, and comprises two portions, which will be discussed in turn.
Analysis and Cataloguing
The first portion of the search tool is the analysis and cataloguing of images which are to be made available to be searched. This analysis process is, in general, carried out as an offline cataloguing operation as new garment images are to be incorporated into the database. These images may be provided by a feed from a fashion house or fashion retailer, or may be manually incorporated. The images may be analysed 'on-the-fly' as they are imported into the database, or alternatively the images may be imported and then analysed.
Image Isolation
In the analysis, the database images may first be segmented to isolate the garment in the image, and then their colour, shape and pattern descriptors are extracted. The analysis must locate and identify varying garment types which are pictured on varying backgrounds, because different fashion houses and clothing retailers use different types and configurations of images. Figure 2 shows three such examples of garments to be analysed, with backgrounds of varying complexity.
For instance, some retailers use pictures of garments on a white or neutral background, some use images of garments worn by models, some use images with a shaded background, and some use images which include objects in the background or are more complex. Further, some images include a combination of some or all of the above.
The method employed in this analysis is a 'pixel colour'-based method. When analysing images to extract the garment contained therein, there are a number of issues which must be overcome. These include situations in which the garment is a similar colour to the skin of the model (making it difficult to ascertain easily and with confidence where the garment ends and the model begins) and situations where the garment is a similar colour to the background of the picture (making it difficult to locate the garment against the background).

Colour Classification
In carrying out the analysis on an image, each pixel is classified as a colour. This colour is, in general, chosen from a set which includes white, black, gray, red, orange, yellow, green, cyan, blue, purple and pink. As a first step, each pixel is analysed to establish if the pixel colour is white, black or gray. If not, as a second step, one of the other colours is assigned, as will be explained in detail later.
Once the initial analysis is completed, the image is transferred from an RGB (Red, Green, Blue) colour space to an HSI (Hue, Saturation, Intensity) colour space, where three colour channels are extracted from the or each image. Additionally, a fourth channel, V, may be used which represents the minimum value of the RGB components in each pixel.
Next, the values of all channels are normalized to the range [0,1]. For each pixel, a process may be carried out to establish whether each pixel is white, black or gray, using the following equations:

white if 0.75 < V and S < 0.25 + 3(0.25 − (1 − V))

black if I < 0.25 and S < 0.25 + 3(0.25 − I)

gray if 0.25 < I < 0.75 and S < 0.2
If a pixel is determined to be white, grey or black, the pixel colour information is recorded. However, if a pixel is not identified as being white, black or grey, further analysis is required. The colour of each pixel may be determined from its hue using the following equations:

red if 0.96 < H or H < 0.03

orange if 0.03 < H and H ≤ 0.10

yellow if 0.10 < H and H ≤ 0.21

green if 0.21 < H and H ≤ 0.44

cyan if 0.44 < H and H ≤ 0.56

blue if 0.56 < H and H ≤ 0.78

purple if 0.78 < H and H ≤ 0.88

pink if 0.88 < H and H ≤ 0.96
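By way of illustration only, the following Python sketch implements this two-step classification as reconstructed above. The RGB-to-HSI conversion is the standard one; the function names, the inclusive handling of the hue boundaries and the use of the V channel in the white test are illustrative assumptions rather than a definitive implementation.

```python
import numpy as np

# Hue boundaries as reconstructed above (hue normalised to [0, 1]).
HUE_BINS = [("red", 0.00, 0.03), ("orange", 0.03, 0.10),
            ("yellow", 0.10, 0.21), ("green", 0.21, 0.44),
            ("cyan", 0.44, 0.56), ("blue", 0.56, 0.78),
            ("purple", 0.78, 0.88), ("pink", 0.88, 0.96),
            ("red", 0.96, 1.00)]

def hsi_hue(r, g, b):
    """Standard HSI hue, normalised to [0, 1]."""
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = theta if b <= g else 2.0 * np.pi - theta
    return h / (2.0 * np.pi)

def classify_pixel(r, g, b):
    """Classify one RGB pixel (components already normalised to [0, 1])."""
    i = (r + g + b) / 3.0               # intensity channel I
    v = min(r, g, b)                    # fourth channel V: min of RGB
    s = 0.0 if i == 0 else 1.0 - v / i  # HSI saturation S
    # Step 1: achromatic colours, per the thresholds above.
    if 0.75 < v and s < 0.25 + 3.0 * (0.25 - (1.0 - v)):
        return "white"
    if i < 0.25 and s < 0.25 + 3.0 * (0.25 - i):
        return "black"
    if 0.25 < i < 0.75 and s < 0.2:
        return "gray"
    # Step 2: chromatic colours, from the hue intervals.
    h = hsi_hue(r, g, b)
    for name, lo, hi in HUE_BINS:
        if lo <= h <= hi:
            return name
    return "red"                        # hue wraps around at 1.0
```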
This cataloguing method assigns a colour to all pixels of the image, including background pixels. Once this has been completed, it is necessary to isolate the garment from the remainder (the background) of the image so as to obtain the information regarding the garment.
Background Removal and Isolation

Figures 3a and 3b show examples of isolation - two of the garments shown in Figure 2 are shown in isolation, with the shape of the garment accurately isolated in the image. Most often, images of garments which are to be catalogued into the database include the or each garment presented against a white or monotonic background. If the background is white (or a monotonic colour), it is possible to discard parts of the image which are white (or the monotonic colour). However, in the case of a white garment on a white background (or indeed a monotonic garment on a monotonic background of the same colour), purely discarding the white (or monotonic) pixels would remove the background and the garment, and the process would fail.
Additionally, it is necessary to detect body parts present in the image, for instance hair, skin, and face (if the image includes a model wearing the garment). This is discussed in more detail later.
If the garment shown in an image is white, the majority of the pixels in the image will be white in colour. If the ratio of white pixels to all pixels in the image is larger than a threshold ratio, preferably a ratio of 0.8, it can be said that the garment in the image is white. However, in a situation where there are other objects in the background of the image, the threshold ratio will not be reached. Therefore, it is also necessary to consider one third of the width of the image (in the horizontal direction) and the full height of the image (in the vertical direction) and calculate the ratio of white pixels in that section.
However, since garments are not always presented in the centre of the image, this second, 'partial area' approach may fail in cases where the first approach succeeds. Therefore, it may be necessary to consider both criteria to establish whether or not the garment in the image is white. If it is determined that the garment in the image is white, edge information is used to obtain an approximate boundary for the outline and shape of the garment.
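By way of example, the two ratio criteria described above might be sketched as follows; here `labels` is assumed to be a 2-D numpy array of per-pixel colour names produced by the classification step, and the 0.8 threshold follows the text.

```python
import numpy as np

def garment_is_white(labels, threshold=0.8):
    """Apply the two white-garment criteria described above."""
    white = (labels == "white")
    # Criterion 1: ratio of white pixels over the whole image.
    if white.mean() > threshold:
        return True
    # Criterion 2: central third of the width, over the full height.
    height, width = labels.shape
    central = white[:, width // 3: 2 * width // 3]
    return central.mean() > threshold
```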
If the garment is determined as not being white, the background may be removed, with the foreground of the image (the area which contains the garment) being retained. However, there is a possibility that when removing the white pixels, some pixels from the foreground (where the garment has small regions of white colour among other colours) may also be removed. To prevent this from occurring, the non-white pixels of the garment may be considered as '1' and the white pixels located in the garment may be considered as '0'. Therefore, any white colour regions in the texture of a garment appear as small 'holes'. These 'holes' may then be removed using morphological methods, without affecting the white pixels of the background. This makes it possible to distinguish between the removed white pixels in the garment and the non-removed background pixels, allowing removal of the white background.
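A minimal sketch of this hole-removal step is given below, assuming scipy is available; the 5×5 structuring element is an illustrative choice, not one prescribed by the text.

```python
import numpy as np
from scipy import ndimage

def garment_mask_without_holes(labels):
    """Treat non-white garment pixels as '1' and white pixels as '0',
    then close the small white 'holes' so that only the large white
    background is removed."""
    foreground = (labels != "white")
    # Closing removes small white holes inside the garment without
    # affecting the large, connected white background.
    closed = ndimage.binary_closing(foreground, structure=np.ones((5, 5)))
    # Fill any remaining holes fully enclosed by garment pixels.
    return ndimage.binary_fill_holes(closed)
```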
Another difficulty arises when the garment in the image is being worn by a model, and the garment is of a colour similar to the model's skin or hair. The present invention includes a skin and hair detection algorithm to remove the skin and hair from an image. In situations where the garment has a colour similar to the skin or hair of a model, simple removal of pixels having the same colour as the skin or hair of the model also results in removal of the garment.
Two situations in which the garment shares the colour of hair or skin arise in general - the garment either has small regions of skin and/or hair colour among other colours, or it has a uniform, dominant colour of skin and/or hair. In the case where the garment includes small regions of skin and/or hair colour, these small regions may be detected using morphological operations. After this morphological step, if the ratio of skin and/or hair pixels to the whole number of foreground pixels is larger than a threshold, preferably 0.85, it may indicate that the garment's colour is similar to skin and/or hair. To distinguish between pixels which form the garment and those which are skin and/or hair, pixels that have this colour associated are clustered together, optimally into two or three clusters, by way of a k-means algorithm. The k-means algorithm partitions a number, n, of observations into k clusters, with each observation belonging to the cluster with the nearest mean, through an iterative refinement process. Taking into account that skin and/or hair are usually present in the outer part of the foreground, the algorithm may then determine which cluster belongs to the garment and which belongs to skin and/or hair.
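The clustering step might be sketched as follows, assuming per-pixel HSI features and the coordinates of the skin/hair-coloured pixels are available; the 'outer part of the foreground' heuristic is implemented here, illustratively, as the cluster whose pixels lie on average furthest from the foreground centroid.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def split_garment_from_skin(coords, hsi, k=2):
    """Cluster skin/hair-coloured foreground pixels and decide which
    cluster is garment. `coords` is an (n, 2) array of pixel positions;
    `hsi` holds the image's per-pixel HSI values."""
    features = hsi[coords[:, 0], coords[:, 1]].astype(float)
    _, assignment = kmeans2(features, k, minit="++")
    # Skin/hair usually sits in the outer part of the foreground, so the
    # cluster with the larger mean distance from the centroid is taken
    # to be skin/hair.
    centre = coords.mean(axis=0)
    spread = [np.linalg.norm(coords[assignment == c] - centre, axis=1).mean()
              for c in range(k)]
    skin_cluster = int(np.argmax(spread))
    return assignment != skin_cluster      # True for garment pixels
```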
Alternatively, if the background cannot be removed and the object or garment isolated automatically, the background may be removed semi-automatically - a user may be prompted to identify the region of the image which contains the garment and the background region of the image, by way of clicking, or drawing a line or number of lines in part of the object or garment, along with a line or number of lines in part of the background. Alternatively, a user may be asked to draw a bounding box around the object or garment in the image to aid in the isolation. The line-drawing or box-drawing process may be repeated in an iterative fashion to improve the accuracy of the isolation of the image or garment. The background may also be removed manually.
Figure 4 shows a schematic view of the isolation process, including a dotted-line indication of the background of the image and a solid-line indication of the garment.
After the background has been removed, the object or garment in the foreground may then be isolated. Once isolated, or 'segmented', the colour, pattern and shape descriptors may be extracted from the image. To extract the colour descriptor, the colour information obtained in the analysis step may be used to generate a colour histogram for the garment. This allows the estimation of the colour of the garment.
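A colour descriptor of this kind can be sketched in a few lines; the eleven-colour vocabulary follows the classification set given earlier, while the function name and the normalisation are illustrative choices.

```python
import numpy as np

COLOURS = ["white", "black", "gray", "red", "orange", "yellow",
           "green", "cyan", "blue", "purple", "pink"]

def colour_descriptor(labels, garment_mask):
    """Normalised colour histogram over the isolated garment pixels."""
    garment = labels[garment_mask]
    hist = np.array([(garment == c).sum() for c in COLOURS], dtype=float)
    return hist / max(hist.sum(), 1.0)
```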
Colour, Shape and Pattern Descriptors
For pattern descriptors, the edge information in areas of the garment is analysed. This analysis is carried out using a k-means algorithm to cluster the similar patterns in the garment (one garment may include different patterns). Then, the most dominant clusters determined using the k-means algorithm are used to create the pattern descriptor for the garment.
For shape descriptors, an effective shape descriptor may be used. If the image being analysed shows the garment worn by a model, the garment silhouette is not the same as the original shape of the garment. The shape of the garment in the image is dependent, in general, on the pose adopted by the model in the image, and the way in which some parts of the garment may be hidden by the model's limbs.
The present invention addresses the problems associated with this shape difference. In some cases, retailers provide both 'flat' images of a garment and images of the garment being worn by a model. In such cases, the two garment images are analysed using the above-discussed methods, and subsequently compared, with the relationship between the corresponding images used to determine the shape descriptor of the garment. In the case where the only images of a garment available are those of a model wearing the garment, the images may be compared with images of other, similar garments in the database which are shown in both 'flat' and 'worn' configurations to find the most similar picture and therefore to identify the shape descriptor of the unworn (or 'flat') garment. The most similar picture may be determined based on the pose of the model and garment silhouette, and the shape descriptor of the garment is assigned based upon this data. When images have been analysed, the data is then stored in a database, made available to be searched. Retailers may provide a stream of images of garments to be included in the database, with the above method being carried out in an automated fashion, or in batches.
Searching with Images
The second step is the upload and processing step for searching. To enable the search to occur, the user may upload an image to a host or into a database accessible by the searching algorithm in a known fashion, or alternatively, link to an image which is already available online. This image may then be processed to be used as the basis for the image search.
The processing step may then be carried out on the image.
Image Complexity
After the image is uploaded (or linked to), the complexity of the image may be determined. This is preferably an automated step, and the complexity is determined using details extracted from the image. In general, the images used will contain a human; therefore, human detection algorithms may be used to detect whether any human parts (face, torso, legs, arms, body and the like) are present in the image. Secondly, the colours clustered in the image borders and/or background are analysed to determine the complexity of the image based upon the number of colours present in the borders and/or background. The human detection coupled with the analysis of the colours may then be used to determine the complexity of the background.
Once the complexity has been determined, the bounds of the image may be determined. To determine the bounds, a user may be asked to identify the bounds of the garment in the image that is to be used as the search parameter, by indication of e.g. a bounding box or indication line. Alternatively, the bounds of the garment may be determined automatically, or using the methods of identification discussed above.
Then, characteristics of the garment such as shape, colour and the like may be extracted from the image using the above algorithms and image processing techniques.
Based on these characteristics the garment class (e.g. tops, trousers, dresses, shirts, etc.) may also be determined, as discussed in more detail below. Search characteristics may then be compared with predetermined characteristics extracted from images of garments in a database, to provide the user with garment images and details which are similar to the garment in the uploaded image, also discussed in more detail below.
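One possible form of this comparison is a simple distance-based ranking over the stored descriptors; the equal weighting of colour and shape below is an illustrative assumption, as the text does not prescribe a weighting, and the catalogue entry layout is likewise assumed.

```python
import numpy as np

def search_catalogue(query, catalogue, top_n=10):
    """Rank catalogued garments against the query descriptors. Each
    entry is assumed to hold 'colour' and 'shape' descriptor vectors."""
    def distance(entry):
        d_colour = np.linalg.norm(query["colour"] - entry["colour"])
        d_shape = np.linalg.norm(query["shape"] - entry["shape"])
        return 0.5 * d_colour + 0.5 * d_shape   # illustrative weighting
    return sorted(catalogue, key=distance)[:top_n]
```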
Garment Classes

The analysis engine supports various garment classes, such that when an image is uploaded, and the image is analysed, no user input is required to identify the garment class. The analysis engine may include both a selection of pre-set 'template' garment classes, which may include exaggerated or 'stylised' garments, and a selection of real garment images which represent each of the garment classes.
The processing of the uploaded image assesses the content of the image, and as a first step, may analyse the intensity of the regions of the image, clustering them together to form areas of the image. The areas may then be merged, based upon assumptions made about the foreground and background of the image. This assumption regarding the foreground and background may be based upon input from a user on image upload, but if no user input is available, the system may assume that the corners of the image are background, with the rest of the image being the foreground. This may be carried out in the same way as for the image cataloguing analysis discussed above.
Then, an iterative process may be undertaken, using regression techniques, to extract the block shape of the garment in the image. If the outline of the garment in the image is particularly 'jagged', a final step may be undertaken, which smoothes out the outline of the garment. The class of the garment may then be detected. This detection method may be applied to images of garments provided by retailers. Alternatively, the isolation process may be carried out in line with the above discussion.
Given that the detectable classes of garments are pre-set, there are only a limited number of garment classes which may be identified, and the analysed image may therefore be uploaded by a user, analysed by the system without input from the user, and then passed to the search engine. The class of garment is deduced from the outline of the garment by comparison with existing data in the database as discussed above, and the garment class which returns the most likely match is the garment class that may be assigned. Alternatively, the garment class may be identified by the user.
The data extracted from the garment is then compared against the images of garments in the database, to find garments which 'match' the image uploaded by the user, and the results of the search are displayed to the user. The search may also include parameters which are associated with the image of the garment, such as (but not limited to) price, size, colour, pattern, texture, textile and/or supply data. These parameters may be input by the user when the image which is to be the basis of the search is identified, and may also be stored in the database with the analysed images as meta-data to improve the search experience and usefulness of the search.

Garment Fitting
Images uploaded by a user and/or images from the database may also be virtually 'fitted' on to an image of a person. Figure 5 shows a schematic view of the virtual garment fitting process.
The virtual fitting aspect of the present invention allows for the alignment of a representation of a garment contained in an image with a representation of a person stored in a second image. The process will now be discussed in detail. The virtual fitting is the alignment of an image of a garment with an image of a person, such that the garment is scaled and deformed to 'fit' onto the image of the person.
To enable virtual fitting, images of garments may be processed to analyse the shape and fit of the garments. This analysis may follow on from the image processing steps set out above. The shape and fit analysis process involves analysis of the shape characteristics of the garments, and places a number of 'nodes' at points of the garment which enable accurate and effective scaling and fitment of the garment.

Nodes and Predetermined Points
The 'nodes' are, in general, placed at predetermined 'key' points of the garment, for instance with a jumper, the nodes may be placed at the portions of the garment which correspond to wrists, elbows, shoulders and chest, and in the case of trousers, the nodes may be placed at ankles, knees, hips and waist. It is to be understood that the nodes may be placed in positions which, dependent on the shape and fit of the garment, will allow accurate virtual fitment.
To enable an image of a person to be used for virtual fitting, it is necessary to analyse the image of a person. In general, as a first step, the system identifies the outline of the shape of the person in the image. If this cannot be achieved automatically, the user may be asked to position an 'outline' shape of a person on the image of the person. The outline shape may be scaled and deformed to fit with the shape of the person in the picture. The outline may, alternatively, be carried out using the background isolation steps above.
Once the outline has been determined, the predetermined 'key' points on the body of the person may then be identified using image analysis techniques. Alternatively, the predetermined points may be indicated by a user. Then, nodes may be identified which correspond to the shape and position of the person. These nodes may be joined together to form an outline of a rough 'skeleton' of the person, within the outline of the person determined previously.
The user may be asked to confirm the location of key 'nodes', such as those placed at elbows, knees and head. Once the nodes have been positioned, the image and associated spatial data is stored, and the image is ready for the subsequent fitment of a garment.
Garment Scaling
The user may then select an image of a garment (which may be an image returned by the search discussed above) and 'fit' it to their image, virtually. The image of the garment will have been analysed as above, and will then be manipulated and scaled, using image transformation algorithms, to 'fit' with the image of the person, using the nodes at the predetermined points in both the garment and person images.
Further, in the case of a garment image which has been provided by a retailer, size data may be associated with the image, to allow a user to see how different garment sizes would fit them. This requires the measurements of the person in the image to be associated with their analysed image. The present invention may further include a measurement module which may be used to measure the sizes of portions of a person in an image, based on the height and weight of the person in the image. This measurement module can be used to provide more accurate scaling and fitment of garments. In order to create a more realistic result for the garment fitting, the lighting conditions in the picture of the user may be analysed, and similar lighting is applied to the virtually fitted garment to improve the realism and quality of the virtual fitting. Additionally, the virtual fitting system allows for the recognition and overlaying of a user's hair in the uploaded picture. In a situation where the user has, for instance, shoulder-length hair or longer, the hair in the uploaded image may be identified using the processing techniques detailed above, with the hair placed over the fitted garment where necessary.
The virtual fitting algorithm requires correspondence between the garment silhouette and the model in the image of the user (the 'target image'). To provide optimal fitting results, the algorithm serves to preserve the characteristic features of the garment, i.e. the general fit of the garment and its specific shape. Also, the algorithm reflects the specific body shape of the model and deforms the garment accordingly, and minimises deviations from the original garment shape in order not to introduce unnecessary visual artefacts.
The virtual fitting algorithm consists of three main phases, which are shown in Figure 6 and are outlined in the following sections. The algorithm requires three inputs, which are:
• a garment binary mask, showing the areas of the garment image that are part of the garment itself, which may be extracted using the image processing techniques discussed above;
• the garment image itself, together with alignment information from a standard silhouette template; and

• the model image, together with alignment information from the same silhouette template used for the garment image.
While the silhouette template may differ in appearance to reflect garment and model body shape characteristics, both templates share the same number of points which are ordered in a similar fashion. Therefore, there is a specific one-to-one mapping between points at key positions along the silhouette on both images. For the rest of the algorithm, it may be assumed that the input silhouettes are generally correctly aligned to the garment and model respectively.
The contour selection phase, shown schematically in Figure 7, guarantees the best compromise between conformance to the model silhouette and garment shape fidelity. Selecting contour points exclusively from the edge of the garment binary mask perfectly preserves the garment shape but does not provide any information about the body position in relation to the garment itself. Conversely, selecting points from the silhouette template would cause the garment shape to be significantly altered during morphing and fitment, which may result, for example, in loose dresses being fitted tightly to the body. The algorithm proceeds by automatically detecting the garment contour from the binary mask. This contour g(i) is then checked against the silhouette template on the garment image sg(j) for all point pairs (i, j) and a decision is made according to:
c(i) = sg(j), if |g(i) − sg(j)| < τ; c(i) = g(i), otherwise (where τ is a distance threshold).
Therefore, the final contour c(i) is obtained by selecting a point from the body silhouette template sg(j) if this is close to the garment contour, thus providing body alignment information, while the other points are taken from the extracted garment contour g(i), thus preserving the original garment shape.
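A sketch of this selection rule is given below, assuming (n, 2) point arrays and an illustrative pixel threshold tau:

```python
import numpy as np

def select_contour(g, sg, tau=10.0):
    """Build c(i): take the nearest template point sg(j) where it lies
    within tau of the garment contour point g(i); otherwise keep g(i)."""
    c = g.astype(float)
    from_template = np.zeros(len(g), dtype=bool)
    for i, point in enumerate(g):
        d = np.linalg.norm(sg - point, axis=1)
        j = int(np.argmin(d))
        if d[j] < tau:
            c[i] = sg[j]
            from_template[i] = True
    return c, from_template
```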
Given the specific one-to-one mapping between garment silhouettes and model images, it is possible to calculate an initial coarse registration to transfer the contour c(i) to the model image. The geometric relationship between the two silhouettes is defined by the general affine transform:

sm(i) = A·sg(i) + b
where sm(i) is the silhouette template in the model image.
The free parameters (A,b) of the general affine transform are computed by minimising:
[A, b] = argmin(A,b) Σi |sm(i) − A·sg(i) − b|

This is computed using all points from the two templates, regardless of which ones have been used in the final contour c(i). This ensures that as much information as possible about the geometric relationship between the two templates in all areas of the image is provided for the estimated transform. However, as there is a specific mapping between the silhouettes, the computed transform is only applied to points in c(i) chosen from the garment contour, whereas those originally belonging to sg(i) are replaced with their matches from sm(i).
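The minimisation has the closed form of a linear least-squares problem; a sketch using numpy follows, where sg and sm are the (n, 2) template point arrays and the function name is illustrative.

```python
import numpy as np

def fit_affine(sg, sm):
    """Least-squares estimate of (A, b) minimising |sm(i) - A*sg(i) - b|
    over all template point pairs."""
    ones = np.ones((len(sg), 1))
    X = np.hstack([sg, ones])                  # rows: [x, y, 1]
    P, *_ = np.linalg.lstsq(X, sm, rcond=None)
    A, b = P[:2].T, P[2]
    return A, b

# Per the text, the estimated (A, b) is then applied only to the contour
# points taken from the garment contour; template-derived points are
# replaced directly with their matches from sm.
```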
The result from this step of the algorithm is shown in Figure 7, where the dotted-line outline of the garment contour on the model image is obtained by transforming the estimated c(i), whereas the chained-line segments are taken directly from the silhouette template sm(i). The final phase of the transfer algorithm is local refinement of the transferred contour. Local inaccuracies in the transform estimate can arise from differences between the two silhouette templates that may have been locally altered to better fit either the garment or the model body characteristics.
In order to account for these local differences, the algorithm iteratively identifies segments originating from the garment contour that are discontinuous with respect to their surrounding segments. The discontinuities can fall under two possible cases. The first is segments obtained from the transformed g(i) that are sandwiched between others obtained from sm(i), which are fixed; the second is segments whose end points are separated by a distance that exceeds a predefined threshold.
The algorithm proceeds by recursively finding the translation aligning the end points of two consecutive misaligned segments. The recursion is iterated until a fixed segment is found. When the algorithm terminates, two translations t1 and t2 are found for the two endpoints of each segment Pk(m), m ∈ [0..M], where k is the index of the segment and m is the index of the point within the k-th segment. The translation is then propagated to all other points in the segment by the weighted average:
P'k(m) = Pk(m) + (1 − m/M)·t1 + (m/M)·t2
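A compact sketch of this propagation, under the linear-blend reading of the weighted average reconstructed above:

```python
import numpy as np

def propagate_translation(segment, t1, t2):
    """Blend the endpoint translations t1 and t2 linearly along a
    segment of M+1 points (an (M+1, 2) array)."""
    w = np.linspace(0.0, 1.0, len(segment))[:, None]   # m/M per point
    return segment + (1.0 - w) * np.asarray(t1) + w * np.asarray(t2)
```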
The propagation of the translation concludes the algorithm, whose output is a set of two contours in the garment and model images with known correspondences. This then constitutes the input of the morphing stage for the final phase of the virtual fitting algorithm.
After transferring the key points to the coordinate system of the model, the relationship between these corresponding key points is used to obtain the way in which the garment should be deformed to fit on the model. The corresponding key points form a relationship between the picture of the model and the picture of the garment, as shown in Figure 8. This relationship is used, along with a regular rectangular grid, to deform the garment based on the relationship between corresponding points to fit it on the model. A rectangular grid is superimposed on to the model, as can be seen in Figure 8. For each vertex of the grid, its relative location with respect to the key points in close proximity to it is calculated. For each vertex of the grid, those key points that lie inside the cells containing that vertex are considered as neighbours of that vertex. The relative location of each vertex is obtained with respect to its neighbouring key points. For each pair of neighbouring key points, the relative location of the vertex is obtained and saved. These relative locations are applied on the corresponding key points on the garment to obtain the corresponding vertex on the garment. Doing this for all vertices of the grid, the corresponding grid for the garment is obtained.
On the picture of the model in Figure 9, the relative location of a vertex with respect to key points is obtained as follows: v is taken to be a vertex point and p1 and p2 two neighbour key points of v. The equation v = p1 + a(p2 − p1) + b·R90(p2 − p1) may then be used, where R90 is the 90-degree rotation matrix and a and b are the local coordinates of v with respect to p1 and p2. (a, b) is the relative location of v with respect to p1 and p2, which can be obtained by solving a linear system of equations. Referring again to Figure 8, after applying the relative coordinates to the corresponding key points on the picture of the garment, the corresponding vertices for those that have neighbour key points are obtained. The corresponding vertices of vertices inside the convex hull obtained by the key points are found for the correspondences determined in the previous step. The key points considered in the first step are the first set and those considered in the second step are the second set. For each vertex in the set, its local coordinates with respect to points that are in the same row as well as the points that are in the same column are obtained and applied on the corresponding points on the garment. Doing so, the corresponding vertices of the set on the garment are obtained.
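The linear system for (a, b) is two equations in two unknowns and can be solved directly; a sketch follows, with p1, p2 denoting key points on the model and q1, q2 their assumed correspondences on the garment.

```python
import numpy as np

R90 = np.array([[0.0, -1.0],
                [1.0,  0.0]])                 # 90-degree rotation matrix

def relative_coords(v, p1, p2):
    """Solve v = p1 + a*(p2 - p1) + b*R90*(p2 - p1) for (a, b)."""
    d = p2 - p1
    M = np.column_stack([d, R90 @ d])         # 2x2 linear system
    return np.linalg.solve(M, v - p1)

def apply_coords(ab, q1, q2):
    """Re-create the corresponding vertex from the garment key points."""
    d = q2 - q1
    return q1 + ab[0] * d + ab[1] * (R90 @ d)
```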
Since the correspondence of points may not cover the boundary regions of the garment, the first set is extended and further vertices of the grid in a margin around the second set are considered. The relative coordinate of a point is obtained with respect to the two nearest key points in the same line, and these coordinates are applied to the corresponding key points on the garment to obtain the corresponding vertices on the garment.
After obtaining the corresponding grid on the garment, the texture inside each cell of the grid may be mapped on the garment to its corresponding cell on the model to synthesize the deformed picture of the garment on the model.
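This final texture mapping can be approximated with an off-the-shelf piecewise-affine warp between the two grids, for example with scikit-image; the grid arrays are assumed to hold matching vertex coordinates in (x, y) order, and the function name is illustrative.

```python
from skimage.transform import PiecewiseAffineTransform, warp

def render_fitted_garment(garment_img, garment_grid, model_grid, out_shape):
    """Warp the garment texture, cell by cell, onto the model image frame."""
    tform = PiecewiseAffineTransform()
    # warp() uses the transform as an inverse map, so it is estimated
    # from model (output) coordinates to garment (input) coordinates.
    tform.estimate(model_grid, garment_grid)
    return warp(garment_img, tform, output_shape=out_shape)
```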
When used in this specification and claims, the terms "comprises" and "comprising" and variations thereof mean that the specified features, steps or integers are included. The terms are not to be interpreted to exclude the presence of other features, steps or components.
The features disclosed in the foregoing description, or the following claims, or the accompanying drawings, expressed in their specific forms or in terms of a means for performing the disclosed function, or a method or process for attaining the disclosed result, as appropriate, may, separately, or in any combination of such features, be utilised for realising the invention in diverse forms thereof.

Claims

1. A method of aligning a representation of a garment contained in a first image with a representation of a person contained in a second image, the method including the steps of:
identifying the garment in the first image and the person in the second image;
analysing the shape of the garment in the first image
allocating nodes to predetermined points on the garment, associated with the garment shape and predetermined shape data;
analysing the shape of the person in the second image to find the outline of the person in the second image;
allocating nodes to predetermined points on the person, associated with the shape of the person and predetermined shape data;
analysing the predetermined points of both the garment and the person and determining alignment information;
manipulating the first and second images to align the predetermined points;
scaling the first image based upon the dimensions of the second image; and
overlaying the first image onto the second image.
2. The method of claim 1 wherein identifying the garment in the first image includes comparing the shape of the garment with a selection of pre-determined objects having known shapes.
3. The method of claim 2, further including the step of isolating the garment in the first image from the background of the image.
4. The method of claim 3, wherein the step of isolating is carried out automatically.
5. The method of claim 3, wherein the step of isolating is carried out manually.
6. The method of any one of claims 1-5 wherein analysing the shape of the garment includes analysing the outline of the garment and creating a map of the shape of the garment.
7. The method of any one of claims 1-6, wherein the step of allocating nodes to predetermined points on the garment includes performing shape analysis of the outline found for the garment and subsequently identifying the points based on predetermined criteria.
8. The method of any one of claims 1-7 wherein analysing the shape of the person includes analysing the outline of the person and creating a map of the shape of the person.
9. The method of claim 8, further including the step of displaying the map of the shape of the person as an estimation of the outline of the person.
10. The method of any one of claims 1-9, wherein the step of allocating nodes to predetermined points on the person includes analysing the shape of the outline found for the person and subsequently identifying the points based on predetermined criteria.
11. The method of any one of claims 1-10, further including the step of placing the predetermined points of the garment in the first image on at least one of neck, shoulders, elbows, wrists, chest, navel, waist, hips, knees or ankles.
12. The method of any one of claims 1-11, further including the step of placing the predetermined points of the person in the second image on at least one of head, neck, shoulders, elbows, wrists, chest, navel, waist, hips, knees, ankles or feet.
13. The method of any one of claims 1-12, wherein the step of analysing the predetermined points on both the garment and person to determine alignment information includes forming correspondences between the predetermined points on the garment in particular locations and predetermined points on the person in particular locations.
14. The method of claim 13, wherein the particular locations on the person include the joints of the person.
15. The method of claim 13 or 14, wherein the particular locations on the garment include the areas of the garment which correspond to joints of a person.
16. The method of any one of claims 1-15, wherein the step of manipulating the first and second images uses one-to-one mapping of the predetermined points.
17. The method of any one of claims 1-16, wherein the step of scaling the first image based on the dimensions of the second image includes analysing the distances between the predetermined points in both the first and second images and subsequently scaling the first image based upon the distances in the second image.
18. The method of claim 17, wherein the step of scaling the first image further takes into account the outline determined for the person in the second image and includes scaling the garment accordingly.
19. The method of claim 17 or 18, wherein the scaling takes into account the height and weight of the person in the second image.
20. The method of any one of claims 1-19, further including the step of analysing the lighting of the person in the second image and applying simulated lighting to the garment in the first image based on the lighting in the second image.
21. A method for analysing an image to locate and isolate an object within the image, the image containing at least an object against a background, the method comprising:
classifying the colour of each pixel of the image;
estimating, based on the colour of each pixel, the colour descriptor of the object and the colour descriptor of the background of the image;
determining, based on the colour of the object and the colour of the background, the locations in the image of the object and the background;
isolating the object in the image; and
identifying the shape descriptor of the object.
22. The method of claim 21, wherein classifying the colour of each pixel comprises first determining whether the pixel is white, grey or black; and
if the pixel is not white, grey or black, determining whether the pixel is red, orange, yellow, green, cyan, blue, purple or pink.
23. The method of claim 21 or 22, wherein after the colour classification step, the method further includes the step of transferring the pixel colour space from RGB (Red, Green, Blue) to HSI (Hue, Saturation, Intensity).
24. The method of any one of claims 21-23, wherein estimating the respective colour descriptors of the object and the background includes creating a colour histogram based on the pixel colours of the object and background.
25. The method of any one of claims 21-24, wherein estimating the respective colour descriptors of the object and background includes determining whether the colour of the object and the colour of substantially all of the background are similar.
26. The method of any one of claims 21-25, further comprising the step of calculating the ratio of the number of pixels in the image having one colour to the total number of pixels in the image.
27. The method of claim 26, wherein if the ratio is calculated as being 0.8 or higher, concluding that the background and object are the same colour.
28. The method of any one of claims 25-27, wherein if the estimated colour descriptors of the object and the background are similar, the step of determining the location of the object and the background comprises using analysis of the edges of the object.
29. The method of any one of claims 25-27, wherein if the colour descriptors of the object and background are not similar, the step of determining the location of the object and the background comprises discarding the pixel data relating to the background.
30. The method of claim 25, wherein estimating the colour descriptors of the object and background further includes determining whether the background includes regions of a colour similar to the colour of the object.
31. The method of claim 30, wherein if it is determined that the background includes regions of a colour similar to the colour of the object, the method further comprises clustering pixels forming the image and analysing the clusters by way of a k-means algorithm, separating the regions of similar colour in the background from the object.
32. The method of any one of claims 25-31 wherein the method includes the further steps of:
analysing the isolated object to identify areas of the object of a similar colour to the background; and, if there are areas of a similar colour present in the object,
applying a morphological operation to the image of the object to remove these areas.
33. The method of any one of claims 22-32, wherein the step of determining the locations of the object and background in the image includes assuming that the object is in a central region of the image.
34. The method of any one of claims 21-33, wherein the step of determining the locations of the object and the background in the image includes:
making an assumption regarding the location of the object with respect to the background; and
comparing the estimated colours of the object and background to determine which is the object and which is the background.
35. The method of any one of claims 21-34, wherein the step of identifying the shape descriptor of the object includes comparing the object in the image with a selection of pre-determined objects having known shapes.
36. The method of any one of claims 21-35, further including the steps of: comparing the shape descriptor of the object in the image with other images containing a similar object; and using the data obtained from the comparison to improve the shape descriptor identification.
37. The method of any one of claims 21-36, further including the step of identifying the pattern descriptor of the object in the image.
38. The method of claim 37 wherein the step of identifying the pattern descriptor comprises using a k-means algorithm.
39. The method of claim 38 wherein the k-means algorithm clusters similar patterns on the object, and the dominant pattern is determined to identify the pattern descriptor.
40. A method of searching for an image containing an object, the method comprising the steps of:
identifying an image to be used as the basis for the search;
determining the complexity of the image;
determining the bounds of the object within the image;
identifying the shape descriptor of the object within the image based on the identified bounds of the object;
determining the colour descriptor of the object within the image;
comparing the object in the image with the content of other, predetermined images, based upon the colour and shape descriptors of the object; and
returning images which, based on the comparison of the object of the image and the predetermined images, include content similar to the basis of the search.
41. The method of claim 40, wherein the step of identifying the image to be used as the basis for the search includes receiving an image from a user.
42. The method of any one of claims 40-41, wherein the step of determining the complexity of the image includes performing pixel colour analysis of the image.
43. The method of any one of claims 40-42, wherein the step of determining the complexity includes using a human detection algorithm.
44. The method of any one of claims 40-43, wherein the step of determining the complexity further includes analysis of the background of the image.
45. The method of any one of claims 40-44, wherein the step of determining the complexity of the image includes analysing shapes present in the image.
46. The method of any one of claims 40-45, wherein the step of determining the bounds of the image includes performing edge analysis.
47. The method of any one of claims 40-46, wherein the step of determining the bounds of the image includes performing colour and texture analysis.
48. The method of any one of claims 40-47, wherein the step of determining the bounds of the image comprises manually determining the bounds.
49. The method of any one of claims 40-48, wherein the step of identifying the shape descriptor of the object within the image includes comparing the determined image bounds with the shapes of a selection of pre-determined objects having known shape descriptors.
50. The method of any one of claims 40-49, wherein the step of determining the colour descriptor of the object includes analysing the colours of the pixels within the image.
51. The method of claim 50, further including the step of creating a colour histogram based upon the pixel data.
52. The method of any one of claims 40-51, wherein the step of comparing the object in the image with predetermined images includes analysis of a database of pre-processed images.
53. The method of any one of claims 40-52, wherein the step of returning the results includes providing images and data relating to the images.
54. The method of claim 53, wherein the data relating to the images includes price, size, colour, pattern, texture, textile and/or supply data.
55. The method of any one of claims 40-54, wherein the basis for the search further includes providing data relating to price, size, colour, pattern, texture, textile and/or supply data.
56. The method of claim 55, further including analysing and comparing the data with predetermined catalogued data associated with the predetermined images.
57. The method of any one of claims 40-56, further including the step of identifying the pattern descriptor of the object in the image.
58. The method of claim 57 wherein the step of identifying the pattern descriptor comprises using a k-means algorithm.
59. A method for analysing an image as hereinbefore described with reference to the accompanying drawings.
60. A method of searching for an image as hereinbefore described with reference to the accompanying drawings.
61. A method of aligning a representation of a garment as hereinbefore described with reference to the accompanying drawings.
62. Any novel feature or combination of features as described herein.
PCT/GB2013/051011 2012-04-23 2013-04-22 A system and method for image analysis Ceased WO2013160663A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1207040.5A GB2501473A (en) 2012-04-23 2012-04-23 Image based clothing search and virtual fitting
GB1207040.5 2012-04-23

Publications (2)

Publication Number Publication Date
WO2013160663A2 true WO2013160663A2 (en) 2013-10-31
WO2013160663A3 WO2013160663A3 (en) 2014-02-06

Family

ID=46261680

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2013/051011 Ceased WO2013160663A2 (en) 2012-04-23 2013-04-22 A system and method for image analysis

Country Status (2)

Country Link
GB (2) GB2501473A (en)
WO (1) WO2013160663A2 (en)

Also Published As

Publication number Publication date
GB201207040D0 (en) 2012-06-06
GB2503331A (en) 2013-12-25
WO2013160663A3 (en) 2014-02-06
GB2501473A (en) 2013-10-30
GB201307246D0 (en) 2013-05-29

