
WO2024157262A1 - System and method of building a three-dimensional model of an object and methods of longitudinal study of the object - Google Patents


Info

Publication number: WO2024157262A1 (application PCT/IL2024/050108)
Authority: WIPO (PCT)
Prior art keywords: images, region, parameter, image, color
Legal status: Ceased (assumed status; not a legal conclusion)
Other languages: French (fr)
Inventors: Vardit Eckhouse, Saar Wollach, Iddo Goren
Current Assignee: Cherry Imaging Ltd
Original Assignee: Cherry Imaging Ltd
Application filed by Cherry Imaging Ltd


Classifications

    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 7/55: Depth or shape recovery from multiple images
    • G06T 2200/08: Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T 2207/30196: Subject of image: human being; person
    • G06T 2219/2012: Colour editing, changing, or manipulating; use of colour codes

Definitions

  • the present invention relates generally to the technological field of three-dimensional (3D) imaging. More specifically, the present invention relates to techniques of building a 3D model of an object that may be used for a longitudinal study thereof.
  • a longitudinal study is a research design that involves repeated observations of the same object (e.g., part of human body) over short or long periods of time (i.e., using longitudinal data).
  • 3D imaging techniques that are used for longitudinal studies typically incorporate the use of complex, specially designed systems providing repeatable image capturing conditions, e.g., fixation means providing a constant distance between an image capturing device (e.g., a camera, depth-sensing means, etc.) and the observed object, controlled light sources, constant image capturing device parameter settings (e.g., lens aperture, focal length, focusing distance, shutter speed), etc.
  • the invention may be directed to a method of building a three-dimensional (3D) model of an object by at least one processor, the method including: receiving a set of images of the object; filtering the set of images by: applying, to one or more images of the set, a visual feature detection algorithm, to detect at least one region on the image that corresponds to a specific visual feature of the object; determining, on the one or more images of the set, at least one parameter of the at least one region; and rejecting the one or more images of the set, based on the at least one parameter of the at least one region; and applying a 3D modeling algorithm on the filtered set of images, to build the 3D model of the object.
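  • By way of illustration only (not part of the claimed method), the filtering step above may be sketched as a generic loop that applies a caller-supplied region detector and parameter function to each image and rejects images whose parameter value exceeds a predefined threshold. The helper names (detect_region, region_parameter) and the threshold semantics are assumptions introduced for this sketch.

```python
import numpy as np
from typing import Callable, List, Tuple

# Hypothetical signatures: detect_region returns a (y0, y1, x0, x1) box covering the
# visual feature; region_parameter maps the cropped region to a scalar value.
RegionDetector = Callable[[np.ndarray], Tuple[int, int, int, int]]
RegionParameter = Callable[[np.ndarray], float]

def filter_images(images: List[np.ndarray],
                  detect_region: RegionDetector,
                  region_parameter: RegionParameter,
                  threshold: float) -> Tuple[List[np.ndarray], List[np.ndarray]]:
    """Split a set of images into an accepted (filtered) set and a rejected set,
    based on a single parameter of a single detected region per image."""
    accepted, rejected = [], []
    for image in images:
        y0, y1, x0, x1 = detect_region(image)          # region of the visual feature
        value = region_parameter(image[y0:y1, x0:x1])  # e.g., mean brightness of the crop
        if value > threshold:                          # "exceeds" the threshold -> reject
            rejected.append(image)
        else:                                          # "subceeds" the threshold -> accept
            accepted.append(image)
    return accepted, rejected
```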
  • the invention may be directed to a method of longitudinal study of an object by at least one processor, the method including building, by at least one processor, a pair of three-dimensional (3D) models of the object over a period of time, wherein at least one of the 3D models of the pair is built by the claimed method building a three-dimensional (3D) model of an object; comparing the 3D models of the pair with each other, to determine distinctions therebetween; and providing an indication of changes in the object, based on the determined distinctions between the 3D models of the pair.
  • the invention may be directed to a method of longitudinal study of an object, by at least one processor, wherein the method may include: building, by at least one processor, a pair of three-dimensional (3D) models of the object over a period of time; comparing the 3D models of the pair with each other, to determine distinctions therebetween; and providing an indication of changes in the object, based on the determined distinctions between the 3D models of the pair, wherein said building the pair of 3D models may include: rendering the pair of 3D models, a first 3D model of the pair being superimposed with a first texture pattern set and a second 3D model of the pair being superimposed with a second texture pattern set; obtaining a pair of two-dimensional (2D) images of the object, each of the 2D images based on a respective rendered 3D model; and normalizing color representations between the first texture pattern set and the second texture pattern set, based on the pair of 2D images.
  • the invention may be directed to a system for building a three-dimensional (3D) model of an object, the system including a non-transitory memory device, wherein modules of instruction code are stored, and at least one processor associated with the memory device, and configured to execute the modules of instruction code, whereupon execution of said modules of instruction code, the at least one processor is configured to receive a set of images of the object; filter the set of images by: applying, to one or more images of the set, a visual feature detection algorithm, to detect at least one region on the image, that corresponds to a specific visual feature of the object; determining, on the one or more images of the set, at least one parameter of the at least one region; and rejecting the one or more images of the set, based on the at least one parameter of the at least one region; and apply a 3D modeling algorithm on the filtered set of images, to build the 3D model of the object.
  • rejecting the one or more images of the set based on the at least one parameter may include rejecting the one or more images of the set provided that a value of the at least one parameter exceeds a predefined threshold or range.
  • the one or more images of the set may represent a plurality of images of the set, and rejecting the one or more images of the set based on the at least one parameter may include rejecting a particular image of the plurality of the images of the set, based on a difference in the at least one parameter of the at least one region between the images of the plurality of the images.
  • rejecting the particular image of the plurality of the images of the set based on the difference in the at least one parameter may include rejecting the particular image of the plurality of the images of the set provided that the difference in the at least one parameter exceeds a predefined threshold or range.
  • the method of building a 3D model may further include correcting the rejected image, to provide the value of the at least one parameter subceeding the predefined threshold or range; and supplementing the filtered set of the images with the corrected image.
  • receiving the set of the images of the object may include capturing the images of the object with an image capturing device; and the method of building a 3D model may further include identifying the specific image capturing conditions in which the rejected image is captured; and providing instructions, to a user via a user interface, for correction of the specific image capturing conditions, to provide the value of the at least one parameter subceeding the predefined threshold or range.
  • the specific image capturing conditions may include at least one of (a) a number of light sources; (b) a brightness of at least one light source; (c) a position of at least one light source with respect to the object; (d) a position of the image capturing device with respect to the object.
  • the at least one parameter may be selected from a list including: (a) a location of the at least one region; (b) a brightness of the at least one region; (c) a color balance of the at least one region.
  • the at least one region may represent a plurality of regions, and the at least one parameter may be selected from a list including: (a) a distance between the regions of the plurality of the regions; (b) a relative brightness between the regions of the plurality of the regions; (c) an average brightness of the regions of the plurality of the regions; (d) a relative color balance between the regions of the plurality of the regions; and (e) an average color balance between the regions of the plurality of the regions.
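  • Purely as an illustration of the listed parameters (the description does not prescribe specific metrics), the single-region and multi-region parameters might be computed from rectangular region crops as follows, approximating “color balance” by per-channel means:

```python
import numpy as np

def brightness(region: np.ndarray) -> float:
    # Mean intensity over all pixels and channels of the region crop.
    return float(region.mean())

def color_balance(region: np.ndarray) -> np.ndarray:
    # Per-channel mean as a simple proxy for the region's color balance.
    return region.reshape(-1, region.shape[-1]).mean(axis=0)

def region_distance(center_a: tuple, center_b: tuple) -> float:
    # Euclidean distance (in pixels) between two region centers.
    return float(np.hypot(center_a[0] - center_b[0], center_a[1] - center_b[1]))

def relative_brightness(region_a: np.ndarray, region_b: np.ndarray) -> float:
    # Brightness ratio between two regions (e.g., left vs. right cheek).
    return brightness(region_a) / max(brightness(region_b), 1e-6)

def average_brightness(regions: list) -> float:
    return float(np.mean([brightness(r) for r in regions]))

def relative_color_balance(region_a: np.ndarray, region_b: np.ndarray) -> np.ndarray:
    # Per-channel difference of mean colors between two regions.
    return color_balance(region_a) - color_balance(region_b)

def average_color_balance(regions: list) -> np.ndarray:
    return np.mean([color_balance(r) for r in regions], axis=0)
```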
  • the method of building a 3D model may further include receiving the at least one parameter of the at least one region of the previously built 3D model of the object; and rejecting the one or more images of the set based on the at least one parameter of the at least one region may include rejecting the one or more images of the set, based on a difference in the at least one parameter of the at least one region between the one or more images of the set and the previously built 3D model.
  • the object may be a human face; the at least one parameter may represent a color balance of the at least one region; and the at least one region may correspond to an eye sclera area of the human face.
  • building the pair of 3D models may further include: rendering the pair of 3D models, a first 3D model of the pair being superimposed with a first texture pattern set and a second 3D model of the pair being superimposed with a second texture pattern set; obtaining a pair of two-dimensional (2D) images of the object, each of the 2D images based on a respective rendered 3D model; and normalizing color representations between the first texture pattern set and the second texture pattern set, based on the pair of 2D images.
  • the methods of longitudinal study may further include: changing an orientation of at least one of the 3D models, to align an orientation of the object between the 3D models; wherein obtaining the pair of the 2D images is performed in the aligned orientation.
  • normalizing the color representations may further include: determining at least one region on each 2D image, said at least one region corresponding to a specific surface area of the object; calculating a transfer function mapping color representation of the at least one region of one of the 2D images to a corresponding at least one region of another of the 2D images; adjusting the color representation of at least one of the first texture pattern set and the second texture pattern set, to normalize the color representations, based on the calculated transfer function.
  • the color representations may be defined via a specific color model, including one or more color channels, each providing a digital representation of a specific color characteristic; and calculating the transfer function may further include: for each 2D image, calculating a deviation of values in said one or more color channels between pixels of the respective at least one region; and calculating the transfer function, so as to fit the calculated deviation of one of the 2D images to another of the 2D images.
  • the deviation may include at least one of a mean deviation and a standard deviation.
  • the transfer function may include a set of linear functions, each for a respective color channel of the one or more color channels.
  • normalizing the color representations may further include: applying said set of linear functions to pixels of the first texture pattern set to normalize the color representation thereof with respect to the color representation of the second texture pattern set.
  • the specific color model may be a LAB color-opponent model, and wherein said one or more channels may include: a lightness channel (L), a redness-greenness channel (A) and a blueness-yellowness channel (B).
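  • As an illustrative note on the LAB representation (the conversion below uses OpenCV, which is one possible choice and is not mandated by the description):

```python
import cv2
import numpy as np

# Synthetic stand-in for a texture/2D image (any BGR image could be used instead).
bgr = np.full((64, 64, 3), (150, 160, 180), np.uint8)

# Convert to the LAB color-opponent representation. With float32 input scaled to [0, 1],
# OpenCV returns L (lightness) in [0, 100] and A (red-green) / B (blue-yellow) roughly
# in [-128, 127], with low correlation between the axes.
lab = cv2.cvtColor(bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2LAB)
L, A, B = cv2.split(lab)
print(L.mean(), A.mean(), B.mean())  # per-channel statistics of the kind used for normalization
```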
  • the object may be a human face; the aligned orientation may be a frontal orientation; and determining the at least one region on each 2D image may include applying, to each 2D image, a face image landmark detection algorithm, to detect a plurality of face image landmarks defining the at least one region on a respective 2D image, the at least one region corresponding to a specific area of the human face.
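  • One possible way to obtain such landmark-defined face regions (purely illustrative; the description does not name a specific landmark detection library) is a third-party detector such as MediaPipe FaceMesh; the padding and the idea of taking a bounding box over caller-selected landmark indices are assumptions of this sketch:

```python
import cv2
import mediapipe as mp
import numpy as np

def landmark_region(bgr: np.ndarray, indices: list, pad: int = 5):
    """Return the image crop bounded by the given face landmarks (or None if no face
    is detected). Which landmark indices define a region is left to the caller."""
    h, w = bgr.shape[:2]
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as fm:
        result = fm.process(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))  # detector expects RGB
    if not result.multi_face_landmarks:
        return None
    pts = result.multi_face_landmarks[0].landmark
    xs = [int(pts[i].x * w) for i in indices]    # landmarks are normalized to [0, 1]
    ys = [int(pts[i].y * h) for i in indices]
    y0, y1 = max(min(ys) - pad, 0), min(max(ys) + pad, h)
    x0, x1 = max(min(xs) - pad, 0), min(max(xs) + pad, w)
    return bgr[y0:y1, x0:x1]
```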
  • building of at least one of the 3D models of the pair may further include: receiving a set of images of the object; filtering the set of images by: applying, to one or more images of the set, a visual feature detection algorithm, to detect at least one region on the image, that corresponds to a specific visual feature of the object; determining, on the one or more images of the set, at least one parameter of the at least one region; and rejecting the one or more images of the set, based on the at least one parameter of the at least one region; and applying a 3D modeling algorithm on the filtered set of images, to build the 3D model of the object.
  • rejecting the one or more images of the set based on the at least one parameter may include rejecting the one or more images of the set provided that a value of the at least one parameter exceeds a predefined threshold or range.
  • the one or more images of the set may represent a plurality of images of the set, and rejecting the one or more images of the set based on the at least one parameter may include rejecting a particular image of the plurality of the images of the set, based on a difference in the at least one parameter of the at least one region between the images of the plurality of the images.
  • rejecting the particular image of the plurality of the images of the set based on the difference in the at least one parameter may include rejecting the particular image of the plurality of the images of the set provided that the difference in the at least one parameter exceeds a predefined threshold or range.
  • the methods of longitudinal study may further include: correcting the rejected image, to provide the value of the at least one parameter subceeding the predefined threshold or range; and supplementing the filtered set of the images with the corrected image.
  • receiving the set of the images of the object may include capturing the images of the object with an image capturing device; and the methods of longitudinal study may further include identifying the specific image capturing conditions in which the rejected image is captured; and providing instructions, to a user via a user interface, for correction of the specific image capturing conditions, to provide the value of the at least one parameter subceeding the predefined threshold or range.
  • the specific image capturing conditions may include at least one of: (a) a number of light sources; (b) a brightness of at least one light source; (c) a position of at least one light source with respect to the object; (d) a position of the image capturing device with respect to the object.
  • the at least one parameter may be selected from a list including: (a) a location of the at least one region; (b) a brightness of the at least one region; (c) a color balance of the at least one region.
  • the at least one region represents a plurality of regions, and the at least one parameter may be selected from a list including: (a) a distance between the regions of the plurality of the regions; (b) a relative brightness between the regions of the plurality of the regions; (c) an average brightness of the regions of the plurality of the regions; (d) a relative color balance between the regions of the plurality of the regions; and (e) an average color balance between the regions of the plurality of the regions.
  • the methods of longitudinal study may further include receiving the at least one parameter of the at least one region of a first 3D model of the pair of 3D models; and, for a second 3D model of the pair of 3D models, rejecting the one or more images of the set may include rejecting the one or more images of the set, based on a difference in the at least one parameter of the at least one region between the one or more images of the set and the first 3D model.
  • the object may be a human face; the at least one parameter may represent a color balance of the at least one region; and the at least one region may correspond to an eye sclera area of the human face.
  • the at least one processor may be further configured to reject the one or more images of the set, provided that a value of the at least one parameter exceeds a predefined threshold or range.
  • the one or more images of the set may represent a plurality of images of the set, and the at least one processor may be further configured to reject the one or more images of the set by rejecting a particular image of the plurality of the images of the set, based on a difference in the at least one parameter of the at least one region between the images of the plurality of the images.
  • the at least one processor may be further configured to reject the particular image of the plurality of the images of the set, provided that the difference in the at least one parameter exceeds a predefined threshold or range.
  • In some embodiments, the at least one processor may be further configured to correct the rejected image, to provide the value of the at least one parameter subceeding the predefined threshold or range; and supplement the filtered set of the images with the corrected image.
  • the system may further include an image capturing device in operative connection with the at least one processor, and the at least one processor may be further configured to: receive the set of the images of the object by capturing the images of the object with the image capturing device; identify the specific image capturing conditions in which the rejected image is captured; and provide instructions, to a user via a user interface, for correction of the specific image capturing conditions, to provide the value of the at least one parameter subceeding the predefined threshold or range.
  • the at least one processor may be further configured to: receive the at least one parameter of the at least one region of the previously built 3D model of the object; reject the one or more images of the set further based on a difference in the at least one parameter of the at least one region between the one or more images of the set and the previously built 3D model.
  • FIG. 1 is a block diagram, depicting a computing device which may be included in a system for building a 3D model of an object, according to some embodiments;
  • FIGs. 2A-2E are schematic representations of the concept of the present invention, demonstrated with respect to building a 3D model of a human face, according to some embodiments;
  • Fig. 2F is a schematic representation of the concept of the present invention, demonstrated with respect to normalizing color representation between two 3D models of a human face, for the purpose of the longitudinal study, according to some embodiments;
  • FIG. 3A is a block diagram, depicting a system for building a 3D model of an object, according to some embodiments;
  • Fig. 3B is a block diagram, depicting a filtering module of a system for building a 3D model of an object, according to some embodiments
  • FIG. 3C is a block diagram, depicting a longitudinal study system for applying the claimed methods of longitudinal study, according to some embodiments.
  • Fig. 3D is a block diagram, depicting aspects of color normalization module of the longitudinal study system, according to some embodiments.
  • Fig. 4 is a flow diagram, depicting a method of building a 3D model of an object, according to some embodiments
  • Fig. 5A is a flow diagram, depicting a method of longitudinal study of an object, according to some embodiments.
  • Fig. 5B is a flow diagram, depicting a method of longitudinal study of an object, according to other embodiments.
  • the terms “plurality” and “a plurality” as used herein may include, for example, “multiple” or “two or more”.
  • the terms “plurality” or “a plurality” may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like.
  • the term “set” when used herein may include one or more items.
  • some steps of the claimed method may be performed by using machine-learning (ML)-based models.
  • ML-based models may be artificial neural networks (ANN).
  • a neural network (NN) or an artificial neural network (ANN), e.g., a neural network implementing a machine learning (ML) or artificial intelligence (AI) function, may refer to an information processing paradigm that may include nodes, referred to as neurons, organized into layers, with links between the neurons. The links may transfer signals between neurons and may be associated with weights.
  • a NN may be configured or trained for a specific task, e.g., pattern recognition or classification. Training a NN for the specific task may involve adjusting these weights based on examples.
  • Each neuron of an intermediate or last layer may receive an input signal, e.g., a weighted sum of output signals from other neurons, and may process the input signal using a linear or nonlinear function (e.g., an activation function).
  • the results of the input and intermediate layers may be transferred to other neurons and the results of the output layer may be provided as the output of the NN.
  • the neurons and links within a NN are represented by mathematical constructs, such as activation functions and matrices of data elements and weights.
  • a processor, e.g., one or more CPUs or graphics processing units (GPUs), or a dedicated hardware device, may perform the relevant calculations.
  • an ML-based model may be a single ML-based model or a set (ensemble) of ML-based models realizing, as a whole, the same function as a single model.
  • the following description of the claimed invention is provided with respect to building a 3D model of a human face. It should be understood that such a specific embodiment is provided in order for the description to be sufficiently illustrative, and it is not intended to limit the scope of protection claimed by the invention. It should be understood by one ordinarily skilled in the art that the implementation of the claimed invention in accordance with such a task is provided as a non-exclusive example, and other practical implementations may be covered by the claimed invention.
  • FIG. 1 is a block diagram depicting a computing device, which may be included within an embodiment of the system for building a 3D model of an object, according to some embodiments.
  • Computing device 1 may include a processor or controller 2 that may be, for example, a central processing unit (CPU) processor, a chip or any suitable computing or computational device, an operating system 3, a memory device 4, instruction code 5, a storage system 6, input devices 7 and output devices 8.
  • processor 2 (or one or more controllers or processors, possibly across multiple units or devices) may be configured to carry out methods described herein, and/or to execute or act as the various modules, units, etc. More than one computing device 1 may be included in, and one or more computing devices 1 may act as the components of, a system according to embodiments of the invention.
  • Operating system 3 may be or may include any code segment (e.g., one similar to instruction code 5 described herein) designed and/or configured to perform tasks involving coordination, scheduling, arbitration, supervising, controlling or otherwise managing operation of computing device 1, for example, scheduling execution of software programs or tasks or enabling software programs or other modules or units to communicate.
  • Operating system 3 may be a commercial operating system. It will be noted that an operating system 3 may be an optional component, e.g., in some embodiments, a system may include a computing device that does not require or include an operating system 3.
  • Memory device 4 may be or may include, for example, a Random-Access Memory (RAM), a read-only memory (ROM), a Dynamic RAM (DRAM), a Synchronous DRAM (SD-RAM), a double data rate (DDR) memory chip, a Flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short-term memory unit, a long-term memory unit, or other suitable memory units or storage units.
  • a non-transitory storage medium such as memory device 4, a hard disk drive, another storage device, etc. may store instructions or code which when executed by a processor may cause the processor to carry out methods as described herein.
  • memory device 4 may include a short-term or a long-term storage for images used for 3D modelling purposes (e.g., the received set of images, the filtered set of images, the corrected images, the rejected images) and previously built 3D models, as further described herein.
  • Instruction code 5 may be any executable code, e.g., an application, a program, a process, task, or script. Instruction code 5 may be executed by processor or controller 2 possibly under control of operating system 3.
  • instruction code 5 may be a standalone application or an API module that may be configured to build a 3D model of an object or perform a longitudinal study thereof, as further described herein.
  • a system according to some embodiments of the invention may include a plurality of executable code segments or modules similar to instruction code 5 that may be loaded into memory device 4 and cause processor 2 to carry out methods described herein.
  • Storage system 6 may be or may include, for example, a flash memory as known in the art, a memory that is internal to, or embedded in, a micro controller or chip as known in the art, a hard disk drive, a CD-Recordable (CD-R) drive, a Blu-ray disk (BD), a universal serial bus (USB) device or other suitable removable and/or fixed storage unit.
  • Various types of input and output data may be stored in storage system 6 and may be loaded from storage system 6 into memory device 4 where it may be processed by processor or controller 2.
  • memory device 4 may be a non-volatile memory having the storage capacity of storage system 6. Accordingly, although shown as a separate component, storage system 6 may be embedded or included in memory device 4.
  • Input devices 7 may be or may include any suitable input devices, components, or systems, e.g., a detachable keyboard or keypad, a mouse, an image capturing device (e.g., a camera or a depth-sensing means) and the like.
  • Output devices 8 may include one or more (possibly detachable) displays or monitors, speakers and/or any other suitable output devices.
  • Any applicable input/output (I/O) devices may be connected to computing device 1 as shown by blocks 7 and 8, e.g., a network interface card (NIC) or a universal serial bus (USB) device. Any suitable number of input devices 7 and output devices 8 may be operatively connected to computing device 1 as shown by blocks 7 and 8.
  • a system may include components such as, but not limited to, a plurality of central processing units (CPU) or any other suitable multi-purpose or specific processors or controllers (e.g., similar to element 2), a plurality of input units, a plurality of output units, a plurality of memory units, and a plurality of storage units.
  • System 10 for building a 3D model of an object may be implemented as a software module, a hardware module, or a combination thereof.
  • system 10 may be or may include a mobile device 100 as a computing device 1 of Fig. 1.
  • system 10 may be adapted to execute one or more modules of instruction code (e.g., instruction code 5 of Fig. 1) to request, receive, analyze, accept, reject, calculate and produce various data.
  • Mobile device 100 may include an image capturing device (e.g., a frontal camera 101) and touch screen 102 for interacting with a user via a user interface (UI) as input devices (e.g., input devices 7 of Fig. 1).
  • Mobile device 100 may use touch screen 102 and speaker 103 as output devices (e.g., output devices 8 of Fig. 1).
  • System 10 may be further adapted to execute one or more modules of instruction code (e.g., element 5 of Fig. 1) in order to perform steps of the claimed method.
  • System 10 is described in detail with reference to Figs. 3A-3B.
  • Image capturing conditions may include both external factors, like a number of light sources, brightness of light sources, position of light sources with respect to the object, position of the image capturing device with respect to the object, and internal factors, like image capturing device parameters, e.g., ISO, lens aperture, focal length, focusing distance, shutter speed, color balance settings (color temperature, white balance etc.).
  • the present invention suggests applying a visual feature detection algorithm, to detect at least one region on the image that corresponds to a specific visual feature of the object, and then checking the parameters with respect to the specific visual features and the corresponding parts of the object.
  • the method may include detection of only one region on each image of the set of images, that corresponds to a specific visual feature of the object; determination on each image of only one parameter of the region; and rejection or approval of the images of the set, based on the parameter (e.g., approving images when a value of the parameter is below a predefined threshold or within a predefined range, and rejecting images when a value of the parameter exceeds the predefined threshold or range).
  • E.g., the object may be a human face, the region of the image may correspond to a visual feature of the face, such as the forehead area of the face, and the parameter may represent the brightness of the forehead region. In such a case, images of the face may be approved or rejected based on the sufficiency of the forehead region illumination. By presetting a range for such a brightness evaluation, it may be checked that the images for two different 3D models of the same object are captured under the same or almost the same conditions; hence, the desired repeatability of building a plurality of 3D models may be achieved.
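  • A minimal sketch of such a forehead-brightness range check, assuming the forehead region crop is already available (e.g., from a landmark-based detector) and using placeholder range values:

```python
import numpy as np

def accept_by_forehead_brightness(forehead_region: np.ndarray,
                                  allowed_range=(90.0, 180.0)) -> bool:
    """Accept an image only if the mean brightness of the detected forehead region
    falls within a predefined range, so that images captured for different 3D models
    of the same face are taken under comparable illumination."""
    gray = forehead_region.mean(axis=-1)      # per-pixel brightness of the crop
    value = float(gray.mean())
    low, high = allowed_range
    return low <= value <= high               # value "subceeds" the range -> accept
```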
  • the plurality of regions of the image may be evaluated in combination.
  • E.g., the object may be a human face 200, the plurality of regions 31A (including regions 31A’ and 31A’’) of images 20A’ and 20A’’ may correspond to visual features of face 200, such as the left and right cheek areas of face 200, and parameter 32A may represent a relative brightness between regions 31A’ and 31A’’. In such a case, images 20A’ and 20A’’ of face 200 may be approved or rejected based on the evenness of the illumination of face 200. E.g., if region 31A’ is substantially brighter than region 31A’’, then, according to the claimed method, image 20A’ will be rejected and image 20A’’ will be approved.
  • Another embodiment, wherein a plurality of regions of the image is evaluated in combination, may be such that the object may be a human face, the plurality of regions of the image may correspond to visual features of the face, such as the left and right cheek areas of the face, and the parameter may represent a distance between the regions of the plurality of the regions.
  • Such an embodiment may be used to evaluate a position of the image capturing device with respect to the object, in particular, a distance between the face and the camera, since the closer the face is located to the camera, the longer the distance between these regions becomes.
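  • A minimal sketch of such a distance check, assuming the two region centers are given in pixel coordinates and using a placeholder range for the allowed inter-region distance:

```python
import numpy as np

def accept_by_region_distance(center_left: tuple, center_right: tuple,
                              allowed_range=(180.0, 260.0)) -> bool:
    """Use the pixel distance between two detected regions (e.g., left and right cheek
    centers) as a proxy for the face-to-camera distance: the closer the face is to the
    camera, the larger this distance becomes."""
    d = float(np.hypot(center_left[0] - center_right[0],
                       center_left[1] - center_right[1]))
    return allowed_range[0] <= d <= allowed_range[1]   # inside the range -> accept
```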
  • a plurality of images may be evaluated in combination.
  • rejecting the one or more images of the set based on the parameter may include rejecting a particular image of the plurality of the images, based on a difference in the parameter of a specific region between the images of the plurality of the images.
  • E.g., the object may be a human face 200, the detected region 31A of images 20A’, 20A’’ and 20A’’’ may correspond to a visual feature of face 200, such as the proximal right eye corner area of face 200, and parameter 32A may represent locations 32A’, 32A’’ and 32A’’’ of region 31A on respective images 20A’, 20A’’ and 20A’’’. In such a case, images 20A’’ and 20A’’’ of face 200 may be approved or rejected based on the difference of locations 32A’’ and 32A’’’ of region 31A on images 20A’’ and 20A’’’ with respect to location 32A’ of region 31A on image 20A’.
  • Such an embodiment provides an indirect evaluation of the degree of similarity in the position of the image capturing device (e.g., a frontal camera 101) with respect to the object (e.g., face 200) between captured images (e.g., images 20A’, 20A’’ and 20A’’’ of Figs. 2C-2E).
  • E.g., image 20A’’ may be rejected, since difference 32B’ in the location of region 31A between images 20A’ and 20A’’ is too small (e.g., exceeds a predefined threshold of proximity). Image 20A’’’, in turn, may be accepted, since difference 32B’’ in the location of region 31A between images 20A’ and 20A’’’ is sufficient (e.g., does not exceed the predefined threshold of proximity).
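  • A minimal sketch of such a selection, keeping only images whose detected region location has moved sufficiently with respect to the last accepted image; the minimum displacement value is a placeholder assumption:

```python
import numpy as np

def select_sufficiently_different(locations: list, min_shift: float = 20.0) -> list:
    """Given the (x, y) location of the same detected region (e.g., an eye corner) in
    consecutive images, keep the indices of images whose region location moved at least
    `min_shift` pixels away from the last accepted image, rejecting near-duplicates
    that add no new viewpoint."""
    if not locations:
        return []
    accepted = [0]                              # always keep the first image
    last = np.asarray(locations[0], float)
    for i, loc in enumerate(locations[1:], start=1):
        loc = np.asarray(loc, float)
        if np.linalg.norm(loc - last) >= min_shift:
            accepted.append(i)
            last = loc
    return accepted

# Example: the second image barely differs from the first and is skipped.
print(select_sufficiently_different([(100, 80), (103, 81), (140, 95)]))  # [0, 2]
```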
  • parameters of the regions of the previously built 3D model of the same object may be evaluated. E.g., prior to filtering the set of images that will be used to build a new 3D model of the object, the previously built 3D model may be processed in order to detect specific regions on the model, that correspond to specific visual features of the object, and then requested parameters of the regions may be determined.
  • parameters of regions of the previously built 3D model may be determined beforehand, e.g., during its creation, and stored additionally.
  • values of parameters of the previously built 3D model may be received and compared with values of respective parameters of the images which are captured to build a new 3D model.
  • the method may include rejecting one or more images of the set, based on a difference in the parameters of regions between the previously built 3D model and currently evaluated images.
  • Using a previously built 3D model and its parameters as a reference for building a new 3D model will substantially increase the repeatability of 3D modeling.
  • the previously built 3D model does not necessarily have to be built using the method and system of the present invention; it may be built using any technique already known in the art.
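  • A minimal sketch of filtering against parameters stored for a previously built 3D model; the dictionary structure of the stored reference and the per-parameter tolerances are assumptions of this illustration:

```python
def accept_against_reference(current_params: dict,
                             reference_params: dict,
                             tolerances: dict) -> bool:
    """Accept a new image only if each of its region parameters (e.g., brightness or
    color balance of a named region) differs from the value stored for the previously
    built 3D model by no more than the per-parameter tolerance."""
    for name, ref_value in reference_params.items():
        if name not in current_params:
            return False                               # required parameter missing
        if abs(current_params[name] - ref_value) > tolerances.get(name, 0.0):
            return False                               # difference "exceeds" -> reject
    return True

# Example: compare forehead brightness of a new image against the stored model value.
ok = accept_against_reference({"forehead_brightness": 131.0},
                              {"forehead_brightness": 125.0},
                              {"forehead_brightness": 10.0})
print(ok)  # True: the difference of 6.0 is within the tolerance of 10.0
```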
  • Another essential aspect of the present invention lies in the actions that follow the rejection of one or more images of the set.
  • In some embodiments, it may be suggested to perform a correction of the rejected image, to provide the value of the specific parameter (the one that was used as the basis for the rejection) subceeding the predefined threshold or range, and then to supplement the filtered set of the images with the corrected image.
  • The specific image capturing conditions may include (a) a number of light sources; (b) a brightness of at least one light source; (c) a position of at least one light source with respect to the object; and (d) a position of the image capturing device with respect to the object.
  • a number of light sources, brightness of at least one light source and position of at least one light source with respect to the object may be evaluated based on the analysis of a brightness of a plurality of regions of the image.
  • the user may then be instructed to add another light source, to turn a specific side of the object to the light source, to move the object closer/further to/from the light source, to find a brighter light source etc.
  • a position of the image capturing device with respect to the object may be evaluated based on a difference in location of specific regions between the images of the set or based on a distance between the plurality of regions on each image separately.
  • the user may be further instructed to move the object closer/further to/from the image capturing device, to rotate the image capturing device or the object with respect to each other etc.
  • E.g., the object may be a human face, the detected region may correspond to a visual feature of the face, such as an eye sclera area of the human face, and the parameter may represent a color balance of the region. It is suggested to use the eye sclera area, which does not vary in color as much as, for example, skin or hair, as a reference area for adjusting (correcting) the color representation of the entire image. E.g., the color representation of the eye sclera area may be processed to get an average color thereof; it can then be calculated how this average color should be adjusted to represent a white color, and, finally, the entire image color balance (white balance) may be calibrated (corrected) accordingly.
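  • A minimal sketch of such a sclera-based white balance correction, scaling the channel gains so that the average sclera color becomes neutral; this simple gain-based scheme is one possible choice, not the prescribed procedure:

```python
import numpy as np

def white_balance_from_sclera(image: np.ndarray, sclera: np.ndarray) -> np.ndarray:
    """Scale each color channel of the whole image so that the average color of the
    detected eye-sclera region becomes neutral (equal channel means, i.e. 'white')."""
    img = image.astype(np.float32)
    sclera_mean = sclera.reshape(-1, 3).astype(np.float32).mean(axis=0)  # per-channel mean
    gains = sclera_mean.mean() / np.maximum(sclera_mean, 1e-6)           # push mean toward gray
    balanced = img * gains                                               # apply per-channel gains
    return np.clip(balanced, 0, 255).astype(np.uint8)
```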
  • the color balance of the region corresponding to the eye sclera area from the previously built 3D model may be used, in order to align the color balance of the new 3D model to the previously built one.
  • FIG. 2F schematically represents the concept of the present invention, demonstrated with respect to normalizing color representation between two 3D models of a human face for the purpose of the longitudinal study, according to some embodiments.
  • Longitudinal study system 500 may be implemented as a software module, a hardware module, or a combination thereof.
  • system 500 may be or may include a computing device 1 of Fig. 1.
  • system 500 may be adapted to execute one or more modules of instruction code (e.g., instruction code 5 of Fig. 1) to request, receive, analyze, calculate and produce various data.
  • System 500 may be further adapted to execute one or more modules of instruction code (e.g., element 5 of Fig. 1) in order to perform steps of claimed methods of longitudinal study.
  • System 500 is described in detail with reference to Figs. 3C-3D.
  • According to the claimed methods of longitudinal study, a pair of three-dimensional (3D) models of the object (e.g., models 301 and 301’) may be built over a period of time, wherein at least one of the 3D models of the pair is built by the method of building a 3D model claimed herein.
  • one of the 3D models of the pair may be built based on methods known in the art.
  • both 3D models may be built based on any methods known in the art.
  • In some embodiments, building the pair of 3D models may include rendering the pair of 3D models (e.g., first rendered 3D model 303 and second rendered 3D model 303’), a first 3D model of the pair (e.g., model 303) being superimposed with a first texture pattern set (e.g., first texture pattern set 302) and a second 3D model of the pair (e.g., model 303’) being superimposed with a second texture pattern set (e.g., second texture pattern set 302’).
  • The terms “texture pattern” and “texture pattern set” refer to a two-dimensional (2D) image or a set of 2D images, e.g., stored in common image formats, that are further superimposed (“wrapped”) around the 3D object during the rendering procedure, e.g., using known texture mapping and 3D rendering techniques, thereby forming a visual representation of the modeled object.
  • In Fig. 2F, the color representations of texture pattern sets 302 and 302’, as well as the color representations of all subsequent elements (e.g., 3D models 303 and 303’) that use these texture pattern sets, are shown with short horizontal dash filling and short vertical dash filling, respectively. A difference in the direction of dash filling shall be understood as a difference in the color representation. As can be seen, texture pattern sets 302 and 302’ differ in color representation, which may potentially affect the detection and indication of changes in the object; hence, this difference should be fixed at the pre-processing stage, before proceeding with the longitudinal study of the object.
  • The methods may further include obtaining a pair of 2D images of the object (e.g., images 304 and 304’), each of the 2D images based on a respective rendered 3D model (e.g., 3D models 303 and 303’). To this end, the 3D models may first be aligned in orientation (e.g., so that 2D images 304 and 304’ depict the human face in a frontal orientation).
  • The methods may further include normalizing color representations between the first texture pattern set (e.g., first texture pattern set 302) and the second texture pattern set (e.g., second texture pattern set 302’), based on the pair of 2D images (e.g., images 304 and 304’). Normalizing the color representations may include determining at least one region (e.g., regions 305 and 305’) on each 2D image, said at least one region corresponding to a specific surface area of the object (e.g., a uniformly exposed skin area of a face, without the eye region, for men and women; and, optionally, without the beard region, for men).
  • Such regions may be considered clearly defining color representations of respective 3D models (e.g., models 303 and 303’) as they correspond to portions of texture that are substantially uniform in color, and these regions may further be used for color alignment between 3D models since they correspond to the same region of the object (e.g., specific region of a face).
  • Normalizing the color representations may further include calculating a transfer function (e.g., transfer function 306) mapping the color representation of the at least one region of one of the 2D images (e.g., region 305’) to a corresponding at least one region of another of the 2D images (e.g., region 305); and adjusting the color representation of at least one of the first texture pattern set and the second texture pattern set (e.g., the color representation of texture pattern set 302’), to normalize the color representations, based on the calculated transfer function (e.g., transfer function 306).
  • the color representations are defined via a specific color model, comprising one or more color channels, each providing a digital representation of a specific color characteristic.
  • In some embodiments, the transfer function (e.g., transfer function 306) includes a set of linear functions, each for a respective color channel of the one or more color channels.
  • the specific color model may be a LAB color-opponent model, wherein said one or more channels may include: lightness channel (L), redness-greenness channel (A) and blueness-yellowness channel (B).
  • The LAB color-opponent model is known to be used for such purposes, since it has low correlation between the axes in the color space, thereby enabling application of different operations in different color channels while mitigating the occurrence of undesirable cross-channel artifacts.
  • LAB color space is logarithmic, which means to a first approximation that uniform changes in channel intensity tend to be equally detectable, thereby further simplifying computation of a transfer function and increasing accuracy and coherence of color normalization.
  • Accordingly, it may be suggested to have texture pattern sets 302 and 302’ defined in the LAB color-opponent model, or to convert them to the LAB color-opponent model, before calculating transfer function 306.
  • In some embodiments, calculating the transfer function may further include: for each 2D image (e.g., images 304 and 304’), calculating a deviation of values in said one or more color channels between pixels of the respective at least one region (e.g., regions 305 and 305’); and calculating the transfer function (e.g., function 306), so as to fit the calculated deviation of one of the 2D images to another of the 2D images (e.g., the calculated deviation of image 304’ to the deviation of image 304).
  • the deviation may include at least one of a mean deviation and a standard deviation.
  • In some embodiments, it may be further suggested to normalize the color representations by applying said set of linear functions (e.g., transfer function 306) to pixels of one texture pattern set (e.g., texture pattern set 302’) to normalize the color representation thereof with respect to the color representation of another texture pattern set (e.g., texture pattern set 302), thereby obtaining an adjusted texture pattern set (e.g., texture pattern set 307).
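  • A minimal sketch of such a per-channel linear transfer in LAB space, in the spirit of classical statistical color transfer (matching the mean and standard deviation of the reference region); the OpenCV conversions and the epsilon guard are assumptions of this illustration:

```python
import cv2
import numpy as np

def lab_stats(region_bgr: np.ndarray):
    """Per-channel mean and standard deviation of a region in LAB space."""
    lab = cv2.cvtColor(region_bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2LAB)
    flat = lab.reshape(-1, 3)
    return flat.mean(axis=0), flat.std(axis=0)

def normalize_texture(texture_bgr: np.ndarray,
                      src_region_bgr: np.ndarray,
                      ref_region_bgr: np.ndarray) -> np.ndarray:
    """Apply one linear function per LAB channel to the whole texture, so that the
    statistics of the source region (from the second model's 2D image) match those of
    the corresponding reference region (from the first model's 2D image)."""
    src_mean, src_std = lab_stats(src_region_bgr)
    ref_mean, ref_std = lab_stats(ref_region_bgr)
    gain = ref_std / np.maximum(src_std, 1e-6)           # slope of each linear function
    lab = cv2.cvtColor(texture_bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2LAB)
    lab = (lab - src_mean) * gain + ref_mean             # x -> a*x + b, per channel
    bgr = cv2.cvtColor(lab.astype(np.float32), cv2.COLOR_LAB2BGR)  # back to BGR in [0, 1]
    return np.clip(bgr * 255.0, 0, 255).astype(np.uint8)
```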
  • Consequently, the 3D models of the pair may be rendered with the respective texture pattern sets (e.g., texture pattern set 302 and adjusted texture pattern set 307, respectively) and may undergo comparison, in order to determine distinctions therebetween, which may now reliably indicate changes in the object, based on the determined distinctions between the 3D models of the pair, reducing the rate of false-positive or false-negative detection of changes in the object.
  • Reference is now made to FIG. 3A, depicting a system 10 for building a 3D model of an object, according to some embodiments.
  • arrows may represent flow of one or more data elements to and from system 10 and/or among modules or elements of system 10. Some arrows have been omitted in Fig. 3A for the purpose of clarity.
  • system 10 may include image capturing module 20.
  • Image capturing module 20 may be configured to receive or capture a set of images 20A of the object (e.g., human face 200 of Figs. 2A-2E).
  • set of images 20A may be pre-stored in the external memory device and transmitted to system 10 via an input device (e.g., input device 7 of Fig. 1).
  • system 10 may include camera (e.g., frontal camera 101) and image capturing module may be configured to capture images of the object via the camera in the automatic or semi-automatic manner.
  • “capturing” does not necessarily mean “saving the final image” as it is interpreted traditionally (when the user presses the respective element of the UI).
  • the mode of the mobile device when the camera application is activated and images are continuously received from the camera and shown on the screen, shall also be considered as “capturing”.
  • system 10 may include filtering module 30.
  • Filtering module 30 may be configured to filter the set of images 20A, in particular, to reject redundant or incorrect images 20A and accept the correct ones.
  • Filtering module 30 may be further configured to output rejected images 30B and a filtered set 30A of images 20A. Filtering module 30 is described in detail herein with reference to Fig. 3B.
  • image capturing module 20 and filtering module 30 may work in various modes.
  • filtering module 30 may filter images of the set of images 20A that was captured by image capturing module 20 beforehand.
  • filtering module 30 may perform filtering “on-the-fly”, while the user is continuously capturing images of his face.
  • user may even randomly rotate his face with respect to the image capturing device or rotate his image capturing device around his face and filtering module 30 may perform filtering of captured images 20A instantaneously.
  • In some embodiments, said “rejection” of images 20A, in order to obtain the filtered set 30A of images 20A, may be implemented as deactivation of the respective UI element (the “shutter button”), thereby not allowing the user to capture an incorrect image, and activation of that element only when parameter 32A subceeds the predefined threshold or range (if capturing of images 20A is performed manually).
  • system 10 may further include correction module 40.
  • Correction module 40 may be configured to perform correction of rejected images 30B, to provide the value of parameter 32A (described with reference to Fig. 3B) subceeding the predefined threshold or range.
  • E.g., in embodiments wherein parameter 32A represents brightness or color balance (whether with respect to separate regions or representing a relation between a plurality of regions), correction module 40 may be configured to correct image 30B so as to make parameter 32A subceed the predefined threshold or range (e.g., increase/decrease the brightness of the respective regions 31A or of the entire image, adjust the color balance accordingly, etc.), thereby making rejected image 30B valid for further building of the 3D model.
  • correction module 40 may be further configured to output corrected images 40A and supplement the filtered set of the images 20A with corrected images 40A.
  • correction module 40 may be further configured to output correction error 40B, thereby signaling an inability to perform the correction automatically.
  • system 10 may further include instruction module 50.
  • Instruction module 50 may be triggered by receiving correction error 40B and may be configured to identify the specific image capturing conditions in which the respective rejected image 30B was captured.
  • E.g., instruction module 50 may be configured to identify, based on parameters 32A of detected regions 31A of rejected images 30B: a number of light sources; a brightness of light sources; a position of light sources with respect to the object (e.g., face 200); and/or a position of the image capturing device (e.g., frontal camera 101) with respect to the object (e.g., face 200).
  • E.g., when regions 31A correspond to the left and right cheek areas of the face (e.g., face 200) and parameter 32A represents the relative brightness of regions 31A, it can be determined which side of the face is illuminated more brightly; consequently, the position of the light source with respect to the face may be identified.
  • system 10 may further include user interface (UI) module 60, enabling system 10 to communicate with a user via input and output devices (e.g., input devices 7 and output devices 8 of Fig. 1).
  • input devices may include touch screen 102 and output devices may include touch screen 102 and/or speaker 103.
  • Instruction module 50 may be further configured to provide instructions 50A, to a user via a UI (i.e., via UI module 60), of correction of the specific image capturing conditions to provide the value of respective parameter 32A subceeding the predefined threshold or range.
  • a user may be instructed to turn his face (e.g., face 200) to the light source, to locate a light source on the other side of the face, to move closer/further to/from the light source etc.
  • UI module 60 may be configured to provide instructions 50A to a user in the form of voice messages (e.g., via speaker 103) or text messages (e.g., by showing them on screen 102).
  • system 10 may further include 3D modeling module 70.
  • 3D modeling module 70 may be configured to apply 3D modeling algorithm 70A on filtered set 30A of images 20A, to build the 3D model 10A of the object (e.g., face 200). It should be understood that the claimed invention neither claims any specific 3D modeling algorithm nor is limited to the use of any specific 3D modeling algorithm known in the art; rather, it suggests a specific smart method of filtering and preprocessing the images that are further supplied to the 3D modeling algorithm as input data.
  • Accordingly, any 3D modeling algorithm known in the art that is configured to build a 3D model of an object based on a set of images thereof (e.g., 3D model 10A based on filtered set 30A of images 20A) may be applied herein.
  • images 20A must be captured at various angles with respect to the object, to provide sufficient data of its view from different sides.
  • 3D model 10A may be further transferred to 3D model storage 80.
  • filtering module 30 may be further configured to request and receive parameters 32A of regions 31A of the previously built 3D model of the same object (e.g., retrospective 3D model 10A’ of the object, e.g., face 200) from 3D model storage 80, to perform filtering based on the received parameters 32A, as further described with reference to Fig. 3B.
  • system 10 may include a longitudinal studying module (not shown in figures) configured to compare 3D model 10A of the object (e.g., face 200) with retrospective 3D model 10A’ of the same object (e.g., face 200) and determine distinctions therebetween.
  • the longitudinal studying module may be further configured to provide an indication, via UI module 60, of changes in the object, based on the determined distinctions between 3D model 10A and retrospective 3D model 10A’ (or, alternatively, to send the information about the distinctions via a network to another mobile device or server).
  • Reference is now made to Fig. 3B, depicting filtering module 30 of system 10 for building a 3D model of an object, according to some embodiments.
  • filtering module 30 may include visual feature detection module 31, configured to apply, to one or more images 20A of the set of images 20A, the visual feature detection algorithm 31B, to detect regions 31A on images 20A that correspond to specific visual features (e.g., cheek areas, forehead area, eye corners, eye sclera areas) of the object (e.g., of face 200).
  • Visual feature detection module 31 may be further configured to output regions 31A of images 20A.
  • filtering module 30 may further include image region analysis module 32.
  • Image region analysis module 32 may be configured to determine, on images 20A of the set, parameters 32A of regions 31A. If each region 31A is considered separately, parameters 32A may represent, e.g., location of regions 31A, brightness of regions 31A, color balance of regions 31A. If regions 31A are considered in combination, parameters 32A may represent, e.g., distance between regions 31A, relative brightness between regions 31A, average brightness of regions 31A, relative color balance between regions 31A, average color balance between regions 31A.
  • filtering module 30 may further include comparison module 33.
  • Comparison module 33 may be configured to receive parameters 32A of regions 31A and to evaluate whether parameters 32A exceed a predefined threshold or range (e.g., predefined by threshold or range setup 33B).
  • In the context of the present description, the term “exceeding” is used to indicate that a value of the parameter (e.g., parameter 32A) is not acceptable for using the respective image (e.g., image 20A) for 3D modeling, hence the respective image (e.g., image 20A) must be rejected, while the term “subceeding” is used to indicate that a value of a parameter (e.g., parameter 32A) is acceptable for using the respective image (e.g., image 20A) for 3D modeling, and accordingly the image (e.g., image 20A) may be accepted.
  • Similarly, when the comparison is made against a range, “exceeding the range” shall be interpreted as “having a value that is outside the range” and “subceeding the range” shall be interpreted as “having a value that is within the range”.
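  • These conventions may be captured in a small helper predicate (illustrative only):

```python
def exceeds(value: float, threshold=None, value_range=None) -> bool:
    """Return True if `value` is unacceptable: above a scalar threshold, or outside a
    (low, high) range. 'Subceeding' is simply the negation of this predicate."""
    if value_range is not None:
        low, high = value_range
        return not (low <= value <= high)      # outside the range -> exceeds
    return value > threshold                   # above the threshold -> exceeds

def subceeds(value: float, threshold=None, value_range=None) -> bool:
    return not exceeds(value, threshold, value_range)
```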
  • Comparison module 33 may be configured to reject images 20A, based on parameters 32A of regions 31 A. In some particular embodiments, comparison module 33 may be configured to reject images 20A provided that a value of the at least one parameter 32A exceeds the predefined threshold or range (e.g., predefined by threshold or range setup 33B). In some particular embodiments, comparison module 33 may be further configured to reject a particular image 20A of the plurality of the images 20A, based on a difference (e.g., differences 32B’ and 32B” shown in Figs. 2E and 2D) in parameters 32A of regions 31A between images 20A of the plurality of images 20A.
  • comparison module 33 may be further configured to reject a particular image 20A of the plurality of images 20A provided that the difference (e.g., differences 32B’ and 32B” shown in Figs. 2E and 2D) in a specific parameter 32A exceeds a predefined threshold or range (e.g., predefined by threshold or range setup 33B).
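  • The acceptance logic can be reduced to a simple predicate; the sketch below is one hedged interpretation, in which a parameter “subceeds” a range when it lies inside it, and an image is rejected either when its own parameter leaves the range or when the parameter differs too much from a reference value (for example, from another image of the set or from a previously built 3D model). The numeric thresholds in the usage example are assumed values, not values prescribed by this disclosure.

```python
def within_range(value, lo, hi):
    """'Subceeding the range': the value lies inside [lo, hi]."""
    return lo <= value <= hi

def rejection_decision(value, lo, hi, reference=None, max_difference=None):
    """Return True if the image should be rejected (illustrative only)."""
    if not within_range(value, lo, hi):
        return True                      # parameter exceeds the range
    if reference is not None and max_difference is not None:
        if abs(value - reference) > max_difference:
            return True                  # parameter drifts from the reference
    return False

# Example: reject when forehead brightness leaves [90, 180] (8-bit scale)
# or differs from a retrospective model's value by more than 20 levels.
reject = rejection_decision(value=140.0, lo=90, hi=180,
                            reference=150.0, max_difference=20.0)
```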
  • comparison module 33 may be configured to receive the at least one parameter (e.g., respective parameter 32A) of the respective region (e.g., region 31A) of the previously built 3D model (e.g., retrospective 3D model 10A’) of the same object (e.g., face 200) from 3D model storage 80 (shown in Fig. 3A). Comparison module 33 may be further configured to reject images 20A based on a difference in the at least one parameter (e.g., respective parameter 32A) of the at least one region (e.g., region 31A) between the respective image 20A and the previously built 3D model (e.g., retrospective 3D model 10A’) of the same object (e.g., face 200).
  • comparison module 33 may be further configured to form rejection decision 33A, indicating whether the particular image (e.g., image 20A) must be rejected or accepted.
  • filtering module 30 may further include rejection module 34, configured to divide set of images 20A into accepted (e.g., filtered set 30A of images) and rejected ones (e.g., rejected images 30B).
  • Reference is now made to FIGs. 3C and 3D, depicting longitudinal study system 500 for applying the claimed methods of longitudinal study, according to some embodiments, and aspects of color normalization module 510 of longitudinal study system 500, according to some embodiments.
  • system 500 may include color normalization module 510.
  • system 500 may be configured to retrieve, from the database of 3D model storage 80, a pair of 3D models of the object over a period of time.
  • system 500 may receive first 3D model 10A’ and second 3D model 10B’ (not superimposed with textures) and first texture pattern set 11A’ (for 3D model 10A’) and second texture pattern set 11B’ (for 3D model 10B’).
  • at least one of the 3D models of the pair (e.g., models 10A’ and 10B’) may be built by the claimed method of building a 3D model of an object; alternatively, both 3D models may be built with other methods known in the art.
  • color normalization module 510 may be configured to receive 3D models 10A’ and 10B’ and texture pattern sets 11A’ and 11B’. In some embodiments, color normalization module 510 may be further configured to normalize color representations between texture pattern sets 11A’ and 11B’. Aspects of normalizing color representations are described in greater detail with reference to Fig. 3D further below.
  • color normalization module 510 may be further configured to output adjusted second texture pattern set 11B”, normalized with respect to first texture pattern set 11A’.
  • system 500 may further include rendering module 520.
  • Rendering module 520 of system 500 may be configured to receive 3D models 10A’ and 10B’ and texture pattern sets 11A’ and 11B”. Rendering module 520 may be further configured to render a pair of 3D models, wherein a first 3D model of the pair (e.g., 3D model 10A’) may be superimposed with first texture pattern set 11A’, and a second 3D model of the pair (e.g., 3D model 10B’) may be superimposed with adjusted second texture pattern set 11B”. Accordingly, rendering module 520 may be configured to output first rendered 3D model 520A and second rendered 3D model 520B’ (having adjusted color representation).
  • system 500 may further include 3D model comparison module 530.
  • 3D model comparison module 530 may be configured to receive 3D models 520A and 520B’.
  • 3D model comparison module 530 may be further configured to perform comparison of the 3D models of the pair (3D models 520A and 520B’) with each other, to determine distinctions therebetween. Accordingly, 3D model comparison module 530 may be further configured to output a data element indicating determined distinctions 530A.
  • system 500 may further include user interface (UI) module 540.
  • UI module 540 may be configured to receive the data element indicating determined distinctions 530A, and 3D models 520A and 520B’.
  • UI module 540 may be further configured to provide indication 540A of changes in the object, based on determined distinctions 530A between the 3D models 520A and 520B’.
  • UI module 540 may visually represent 3D models 520A and 520B’ with marked (e.g., by different color) changes in the object.
  • color normalization module 510 is described in greater detail.
  • rendering module 520 of system 500 may be configured to receive 3D models 10A’ and 10B’ and texture pattern sets 11A’ and 11B’.
  • Rendering module 520 may be further configured to render a pair of 3D models, wherein a first 3D model of the pair (e.g., 3D model 10A’) may be superimposed with first texture pattern set 11A’, and a second 3D model of the pair (e.g., 3D model 10B’) may be superimposed with second texture pattern set 11B’.
  • rendering module 520 may be configured to output first rendered 3D model 520A and second rendered 3D model 520B.
  • color normalization module 510 may include alignment and image extraction module 511.
  • alignment and image extraction module 511 may be configured to change an orientation of at least one of 3D models 520A and 520B, to align an orientation of the object between the 3D models 520A and 520B.
  • alignment and image extraction module 511 may be further configured to obtain (extract) a pair of 2D images of the object (e.g., first frontal image 511A and second frontal image 511B), each of the 2D images 511A and 511B based on a respective rendered 3D model 520A and 520B, wherein 2D images 511A and 511B may be obtained in the aligned orientation.
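  • The disclosure does not prescribe a particular alignment technique; one conventional option, sketched below purely under that assumption, estimates the rotation between corresponding 3D points of the two models with the SVD-based Kabsch method, after which the rotated model can be rendered to obtain the frontal 2D image in the aligned orientation.

```python
import numpy as np

def kabsch_rotation(points_target, points_moving):
    """Estimate the rotation matrix that best maps points_moving onto
    points_target; both inputs are (N, 3) arrays of corresponding points."""
    a = points_target - points_target.mean(axis=0)
    b = points_moving - points_moving.mean(axis=0)
    h = b.T @ a
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflections
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T

# Usage sketch: rotate every vertex of the second model into the first
# model's orientation before both are rendered to frontal 2D images.
# aligned = (kabsch_rotation(verts_a_subset, verts_b_subset) @ verts_b.T).T
```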
  • color normalization module 510 may further include landmark detection and region extraction module 512.
  • Landmark detection and region extraction module 512 may be configured to receive frontal images 511A and 511B.
  • Landmark detection and region extraction module 512 may be further configured to determine at least one region (e.g., regions 512A and 512B) on each 2D image 511A and 511B, said at least one region corresponding to a specific surface area of the object.
  • landmark detection and region extraction module 512 may be further configured to apply, to each 2D image 511A and 511B, a face image landmark detection algorithm, to detect a plurality of face image landmarks 512A’ and 512B’, respectively defining the region on a respective 2D image (e.g., the contour of the respective region 512A and 512B may be formed by detected landmarks 512A’ and 512B’, respectively).
  • regions 512A and 512B may correspond to an area of the human face excluding the eyes, eyebrows, and, optionally, the beard and mouth.
  • said face image landmark detection algorithm may include and/or be based on any methods known in the art.
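  • A minimal sketch of turning detected landmarks into such a region is shown below; it assumes the landmark detector has already produced the contour points (e.g., landmarks 512A’), and uses OpenCV’s fillPoly only as one convenient way to rasterize the contour into a pixel mask.

```python
import numpy as np
import cv2

def region_mask_from_landmarks(image_shape, contour_points):
    """Rasterize a landmark contour (list of (x, y) points) into a binary
    mask selecting the corresponding face region."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    contour = np.array(contour_points, dtype=np.int32).reshape(-1, 1, 2)
    cv2.fillPoly(mask, [contour], 255)
    return mask

def region_pixels(image_bgr, mask):
    """Return only the pixels inside the region, as an (N, 3) array."""
    return image_bgr[mask > 0]
```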
  • color normalization module 510 may further include transfer function calculation module 513.
  • Transfer function calculation module 513 may be configured to receive regions 512A and 512B.
  • transfer function calculation module 513 may be further configured to calculate transfer function 513A mapping color representation of the at least one region (e.g., region 512B) of one of the 2D images (e.g., image 511B) to a corresponding at least one region (e.g., region 512A) of another of the 2D images (e.g., image 511A).
  • the color representations of regions 512A and 512B may be defined via a specific color model, including one or more color channels, each providing a digital representation of a specific color characteristic.
  • the specific color model may be the LAB color-opponent model (as described in greater detail with reference to Fig. 2F), and the channels may include: lightness channel (L), redness-greenness channel (A) and blueness-yellowness channel (B).
  • transfer function calculation module 513 may be further configured to convert color representation of regions 512A and 512B into the LAB color-opponent model.
  • transfer function calculation module 513 may be further configured to calculate transfer function 513A by calculating, for each 2D image 511A and 511B, a deviation of values in said one or more color channels between pixels of the respective region 512A and 512B.
  • the deviation may include at least one of a mean deviation and a standard deviation.
  • transfer function calculation module 513 may be further configured to calculate transfer function 513A, so as to fit the calculated deviation of one of the 2D images (e.g., image 511A) to another of the 2D images (e.g., image 511B).
  • transfer function 513A may include a set of linear functions, each for a respective color channel of the color model.
  • the set of linear functions may be as follows: L_y = a_L * L_x + b_L; A_y = a_A * A_x + b_A; B_y = a_B * B_x + b_B, wherein:
  • L_y may be a deviation of values in the L-channel between pixels of region 512A
  • L_x may be a deviation of values in the L-channel between pixels of region 512B
  • A_y may be a deviation of values in the A-channel between pixels of region 512A
  • A_x may be a deviation of values in the A-channel between pixels of region 512B
  • B_y may be a deviation of values in the B-channel between pixels of region 512A
  • B_x may be a deviation of values in the B-channel between pixels of region 512B
  • a_L, b_L, a_A, b_A, a_B, b_B may represent calculated coefficients fitting the respective deviation values between regions 512A and 512B.
  • transfer function 513A may include or may be based on known mathematical and statistical methods; it is therefore implied herein that it would be clear to a person skilled in the art how and which methods to apply in order to calculate such transfer function 513A.
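  • One well-known statistical realization of such a fit (offered here only as an assumption, not as the sole technique compatible with the description above) matches the mean and standard deviation of each LAB channel between the two regions, which directly yields the per-channel linear coefficients a and b.

```python
import numpy as np
import cv2

def lab_pixels(region_bgr_pixels):
    """Convert an (N, 3) array of 8-bit BGR pixels to LAB, one row per pixel."""
    lab = cv2.cvtColor(region_bgr_pixels.reshape(-1, 1, 3), cv2.COLOR_BGR2LAB)
    return lab.reshape(-1, 3).astype(np.float32)

def fit_channel_transfer(source_lab, target_lab):
    """Fit y = a * x + b per LAB channel by matching mean and standard
    deviation: a = std_y / std_x, b = mean_y - a * mean_x."""
    mean_x, std_x = source_lab.mean(axis=0), source_lab.std(axis=0)
    mean_y, std_y = target_lab.mean(axis=0), target_lab.std(axis=0)
    a = std_y / np.maximum(std_x, 1e-6)
    b = mean_y - a * mean_x
    return a, b        # three coefficients each, for the L, A and B channels
```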
  • color normalization module 510 may further include color representation adjustment module 514.
  • Color representation adjustment module 514 may be configured to receive transfer function 513A and second texture pattern set 11B’.
  • Color representation adjustment module 514 may be further configured to adjust the color representation of second texture pattern set 11B’, to normalize the color representations between texture pattern sets 11A’ and 11B’, based on transfer function 513A.
  • color representation adjustment module 514 may be further configured to normalize the color representations by applying said set of linear functions (transfer function 513A) to pixels of texture pattern set 11B’ to normalize the color representation thereof with respect to the color representation of texture pattern set 11A’. Thereby, color representation adjustment module 514 may be further configured to output adjusted second texture pattern set 11B”.
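  • Applying the fitted coefficients to the second texture pattern set can then be as simple as the sketch below, which converts each texture image to LAB, applies the per-channel linear functions, and converts back; the clipping step and the 8-bit data type are implementation choices assumed here, not requirements of the method.

```python
import numpy as np
import cv2

def apply_color_transfer(texture_bgr, a, b):
    """Apply the per-channel linear functions y = a * x + b in LAB space
    to an 8-bit BGR texture image and return the adjusted texture."""
    lab = cv2.cvtColor(texture_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    adjusted = np.clip(lab * a + b, 0, 255).astype(np.uint8)
    return cv2.cvtColor(adjusted, cv2.COLOR_LAB2BGR)

# Usage sketch: repeat for every texture image of the second texture
# pattern set to obtain the adjusted set (11B'').
```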
  • images 511A and 511B of Figs. 3C-3D may be images 304 and 304’ of Fig. 2F; regions 512A and 512B of Fig. 3D may be regions 305 and 305’ of Fig. 2F; transfer function 513A of Fig. 3D may be transfer function 306 of Fig. 2F, etc.
  • in step S1005, the at least one processor (e.g., processor 2 of Fig. 1) may perform receiving of a set of images (e.g., images 20A) of the object (e.g., face 200).
  • Step S1005 may be carried out by image capturing module 20 (as described with reference to Fig. 3A).
  • the at least one processor may perform applying, to one or more images (e.g., images 20A) of the set, a visual feature detection algorithm (e.g., visual feature detection algorithm 31B), to detect at least one region (e.g., region 31A) on the image (e.g., image 20A), that corresponds to a specific visual feature of the object (e.g., face 200).
  • Step S1010 may be carried out by visual feature detection module 31 (as described with reference to Fig. 3B).
  • the at least one processor may perform determining, on the one or more images (e.g., images 20A) of the set, at least one parameter (e.g., parameter 32A) of the at least one region (e.g., region 31A).
  • Step S1015 may be carried out by image region analysis module 32 (as described with reference to Fig. 3B).
  • the at least one processor may perform rejecting the one or more images (e.g., images 20A) of the set, based on the at least one parameter (e.g., parameter 32A) of the at least one region (e.g., region 31A), thereby obtaining a filtered set of images (e.g., filtered set 30A of images 20A).
  • Step S1020 may be carried out by comparison module 33 and rejection module 34 (as described with reference to Fig. 3B).
  • the at least one processor may perform applying of a 3D modeling algorithm (e.g., 3D modeling algorithm 70A) on the filtered set of images (e.g., filtered set 30A of images 20A), to build the 3D model of the object (e.g., 3D model 10A of the object, e.g., face 200).
  • Step S1025 may be carried out by 3D modeling module 70 (as described with reference to Fig. 3A).
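  • Putting steps S1005-S1025 together, the overall flow can be summarized by the sketch below; every argument (region detector, parameter measure, acceptance range, 3D modeling routine) is caller-supplied and assumed, since the disclosure does not tie the method to any particular implementation of these components.

```python
def build_filtered_3d_model(images, detect_regions, measure, lo, hi,
                            build_3d_model):
    """End-to-end sketch of S1005-S1025: filter a set of images by a
    per-region parameter, then run a 3D modeling algorithm on the rest."""
    accepted = []
    for image in images:                          # S1005: received set of images
        regions = detect_regions(image)           # S1010: regions of interest
        params = [measure(r) for r in regions]    # S1015: per-region parameters
        if all(lo <= p <= hi for p in params):    # S1020: reject out-of-range images
            accepted.append(image)
    return build_3d_model(accepted)               # S1025: build the 3D model
```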
  • the at least one processor may perform building of a pair of three-dimensional (3D) models of the object over a period of time (e.g., 3D model 10A and retrospective 3D model 10A’ of the object, e.g., face 200), wherein at least one of the 3D models of the pair (e.g., 3D model 10A) is built by the claimed method of building a 3D model.
  • Step S2005 may be carried out by longitudinal studying module (as described with reference to Fig. 3A).
  • the at least one processor may perform comparison of the 3D models (e.g., 3D model 10A and retrospective 3D model 10A’ of the object, e.g., face 200, or 3D models 520A and 520B’, as shown in Fig. 3C) of the pair with each other, to determine distinctions therebetween (e.g., determined distinctions 530A, as shown in Fig. 3C).
  • Step S2010 may be carried out by longitudinal studying module (as described with reference to Fig. 3A) or 3D model comparison module 530 (as described with reference to Fig. 3C).
  • the at least one processor may provide an indication of changes in the object (e.g., face 200), based on the determined distinctions between the 3D models of the pair (e.g., 3D model 10A and retrospective 3D model 10A’ of the object, e.g., face 200; or 3D models 520A and 520B’, as shown in Fig. 3C).
  • Step S2015 may be carried out by longitudinal studying module and UI module 60 (as described with reference to Fig. 3A) or by UI module 540 (as described with reference to Fig. 3C).
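  • The disclosure leaves the comparison metric open; one simple possibility, sketched below purely as an assumption, is to measure per-vertex displacement between two already-aligned meshes with corresponding vertex ordering and to flag vertices whose displacement exceeds a chosen tolerance.

```python
import numpy as np

def per_vertex_distinctions(vertices_a, vertices_b, tolerance=1.0):
    """Compare two aligned meshes given as corresponding (N, 3) vertex arrays;
    return per-vertex displacement and a mask of 'changed' vertices."""
    displacement = np.linalg.norm(vertices_a - vertices_b, axis=1)
    changed = displacement > tolerance            # tolerance in model units
    return displacement, changed

# The boolean mask could drive the indication of changes, e.g. by coloring
# the changed vertices differently on the rendered 3D model.
```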
  • the at least one processor may perform building of a pair of three-dimensional (3D) models (e.g., 3D models 520A and 520B or 520B’, as shown in Figs. 3C-3D) of the object over a period of time, wherein said building the pair of 3D models includes: rendering the pair of 3D models (e.g., 3D models 520A and 520B, as shown in Figs. 3C-3D), a first 3D model of the pair (e.g., model 520A) being superimposed with a first texture pattern set (e.g., texture pattern set 11A’, as shown in Fig. 3D) and a second 3D model of the pair (e.g., model 520B) being superimposed with a second texture pattern set (e.g., texture pattern set 11B’, as shown in Fig. 3D); obtaining a pair of two-dimensional (2D) images of the object (e.g., images 511A and 511B, as shown in Fig. 3D), each of the 2D images based on a respective rendered 3D model; and normalizing color representations between the first texture pattern set and the second texture pattern set, based on the pair of 2D images.
  • Step S3005 may be carried out by system 10 (as described with reference to Fig. 3A and 3B), longitudinal studying module (as described with reference to Fig. 3A), rendering module 520 (as described with reference to Fig. 3C), and color normalization module 510 (as described with reference to Fig. 3C).
  • the at least one processor may perform comparison of the 3D models (e.g., 3D models 520A and 520B’, as shown in Fig. 3C) of the pair with each other, to determine distinctions therebetween (e.g., determined distinctions 530A, as shown in Fig. 3C).
  • Step S3010 may be carried out by 3D model comparison module 530 (as described with reference to Fig. 3C).
  • the at least one processor may provide an indication of changes in the object, based on the determined distinctions between the 3D models of the pair (e.g., 3D models 520A and 520B’, as shown in Fig. 3C).
  • Step S3015 may be carried out by longitudinal studying module and UI module 60 (as described with reference to Fig. 3A), or by UI module 540 (as described with reference to Fig. 3C).
  • the claimed invention represents methods of building 3D models, methods of longitudinal study, and systems for performing these methods, which increase accuracy and repeatability of building a plurality of 3D models of the same object, while requiring no specially-designed equipment, thereby providing a technical improvement in the technological field of three-dimensional (3D) imaging, as well as in the field of longitudinal study. Furthermore, the claimed invention represents methods of longitudinal study of an object (based on a plurality of three-dimensional (3D) models of the object built over a period of time), which increase reliability of detection of changes in the object, based on the determined distinctions between the plurality of 3D models.
  • the present invention provides an improvement of the relevant field of technology by increasing automated diagnosis determination reliability, or by facilitating manual diagnosis determination, wherein said diagnosis determination is made based on the comparison of a plurality of 3D models within the longitudinal study approach.
  • 3D modelling may be effectively used for longitudinal study of various objects. Furthermore, since the claimed method and system do not require any complex specially-designed equipment, and may be reliably implemented using the hardware and software basis of a regular mobile device, longitudinal studying may become a common and even prevalent approach in various research fields and become applicable to fields where it was never used before.
  • in some exemplary embodiments, the claimed method of longitudinal study can be applied to dermatology.
  • patients may capture images of their faces and build 3D models thereof on a regular basis. These 3D models may be automatically analyzed and changes in skin state may be detected. Alternatively, 3D models may be communicated to a dermatologist for further examination. Hence, the course of skin diseases may be evaluated.
  • Such an approach provides an additional advantage to the field by reducing the number of redundant visits to the doctor. Since the suggested method of building 3D models provides reliable, repeatable results, and the suggested methods of longitudinal study ensure alignment between 3D models in essential parameters, a reduced number of false positive or false negative detections of changes in the object is expected, which will improve the overall applicability of the suggested approach.
  • [00181] In some other exemplary embodiments of the claimed method of longitudinal study, it can be applied to dentistry, in particular, to orthodontics.
  • alterations in bite patterns also cause changes in facial features.
  • initial diagnosis of mal-positioning of teeth and jaws and misalignment in bite patterns or, furthermore, monitoring of the progress of tooth position adjustment and jaw alignment during treatment may be conducted.
  • a patient may be asked to smile or open his mouth while capturing images, in order to create a 3D model of a face including bite pattern.
  • 3D models may be analyzed (either automatically, e.g., by using ML-based methods, or manually - by the specialist) to assess the effectiveness of treatment (e.g., assess the dynamics of reducing gaps between teeth, by measuring sizes of gaps in each 3D model etc.).
  • the claimed method and system of building a 3D model of an object not only improve the technological field of 3D imaging, but also improve basically any field where the longitudinal study approach based on 3D modeling may be applied.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates generally to the technological field of three-dimensional (3D) imaging. In the general aspect, the invention may be directed to a method of building a three-dimensional (3D) model of an object including receiving a set of images of the object; filtering the set of images by: applying, to one or more images of the set, a visual feature detection algorithm, to detect at least one region on the image, that corresponds to a specific visual feature of the object; determining, on the one or more images of the set, at least one parameter of the at least one region; and rejecting the one or more images of the set, based on the at least one parameter of the at least one region; and applying a 3D modeling algorithm on the filtered set of images, to build the 3D model of the object.

Description

SYSTEM AND METHOD OF BUILDING A THREE-DIMENSIONAL MODEL OF AN OBJECT AND METHODS OF LONGITUDINAL STUDY OF THE OBJECT
CROSS-REFERENCE TO RELATED APPLICATIONS
[001] This application claims the benefit of priority of U.S. Patent Application No. 63/441,229 filed 26 January 2023, and titled: “SYSTEM AND METHOD OF GENERATING A 3D MODEL OF A SCANNED OBJECT”, which is hereby incorporated by reference in its entirety.
FIELD OF THE INVENTION
[002] The present invention relates generally to the technological field of three-dimensional (3D) imaging. More specifically, the present invention relates to techniques of building a 3D model of an object that may be used for longitudinal studying thereof.
BACKGROUND OF THE INVENTION
[003] A longitudinal study is a research design that involves repeated observations of the same object (e.g., part of human body) over short or long periods of time (i.e., using longitudinal data).
[004] Nowadays, supported by rapid technological progress, longitudinal studying uses various innovative imaging techniques, e.g., various 3D imaging and modeling methods and systems, to assess changes in the observed object by comparing 3D models thereof. The major requirement to such methods and systems is mitigation of any factors that may cause occurrence of distinctions in 3D models not related to actual changes in the object of study (i.e., related to any external factors). In other words, in order to achieve reliable longitudinal study results, it is crucial to provide repeatability in the process of scanning (e.g., capturing images) of the observed object and therefore to provide creation of accurate and repeatable 3D model, which would include only those distinctions that actually represent changes in the observed object.
[005] In order to meet such a requirement, conventional 3D imaging techniques that are used for longitudinal studying incorporate a usage of complex specially-designed systems providing repeatable image capturing conditions. E.g., such techniques combine the usage of fixation means providing constant distance between an image capturing device (e.g., camera, depth-sensing means etc.) and the observed object, controlled light sources, constant image capturing device parameters settings (e.g., lens aperture, focal length, focusing distance, shutter speed) etc.
[006] On the other hand, there are software products that provide tools for building 3D models of an object by means of a regular mobile device (e.g., smartphone). Despite being advantageously simple in usage (compared to the specially-designed systems), such products cannot meet the abovementioned major requirement to 3D modeling systems used for longitudinal studying. Therefore, these products are inapplicable for longitudinal study purposes.
SUMMARY OF THE INVENTION
[007] Accordingly, there is a need for a method and system for building 3D model of an object which would mitigate deficiencies of the prior art. In particular, there is a need for techniques (e.g., methods of building 3D models, methods of longitudinal study, and systems for performing these methods) which would increase accuracy and repeatability of building a plurality of 3D models of the same object, while requiring no specially-designed equipment, thereby providing a technical improvement in the technological field of three- dimensional (3D) imaging, as well as the field of longitudinal study. There is also a need for methods of longitudinal study of an object (based on a plurality of three-dimensional (3D) models of the object built over a period of time), which would increase reliability of detection of changes in the object, based on the determined distinctions between the plurality of 3D models. More specifically, when applied to medical or veterinary science, there is a need for techniques which would provide an improvement of the relevant field of technology by increasing automated diagnosis determination reliability, or by facilitating manual diagnosis determination, wherein said diagnosis determination is made based on the comparison of a plurality of 3D models within the longitudinal study approach.
[008] Thus, to overcome the shortcomings of the prior art and to improve the relevant field of technology, the following invention is provided.
[009] In the general aspect, the invention may be directed to a method of building a three- dimensional (3D) model of an object by at least one processor, the method including receiving a set of images of the object; filtering the set of images by: applying, to one or more images of the set, a visual feature detection algorithm, to detect at least one region on the image, that corresponds to a specific visual feature of the object; determining, on the one or more images of the set, at least one parameter of the at least one region; and rejecting the one or more images of the set, based on the at least one parameter of the at least one region; and applying a 3D modeling algorithm on the filtered set of images, to build the 3D model of the object.
[0010] In another general aspect, the invention may be directed to a method of longitudinal study of an object by at least one processor, the method including building, by at least one processor, a pair of three-dimensional (3D) models of the object over a period of time, wherein at least one of the 3D models of the pair is built by the claimed method of building a three-dimensional (3D) model of an object; comparing the 3D models of the pair with each other, to determine distinctions therebetween; and providing an indication of changes in the object, based on the determined distinctions between the 3D models of the pair.
[0011] In yet another general aspect, the invention may be directed to a method of longitudinal study of an object, by at least one processor, wherein the method may include: building, by at least one processor, a pair of three-dimensional (3D) models of the object over a period of time; comparing the 3D models of the pair with each other, to determine distinctions therebetween; and providing an indication of changes in the object, based on the determined distinctions between the 3D models of the pair, wherein said building the pair of 3D models may include: rendering the pair of 3D models, a first 3D model of the pair being superimposed with a first texture pattern set and a second 3D model of the pair being superimposed with a second texture pattern set; obtaining a pair of two-dimensional (2D) images of the object, each of the 2D images based on a respective rendered 3D model; and normalizing color representations between the first texture pattern set and the second texture pattern set, based on the pair of 2D images.
[0012] In yet another general aspect, the invention may be directed to a system for building a three-dimensional (3D) model of an object, the system including a non-transitory memory device, wherein modules of instruction code are stored, and at least one processor associated with the memory device, and configured to execute the modules of instruction code, whereupon execution of said modules of instruction code, the at least one processor is configured to receive a set of images of the object; filter the set of images by: applying, to one or more images of the set, a visual feature detection algorithm, to detect at least one region on the image, that corresponds to a specific visual feature of the object; determining, on the one or more images of the set, at least one parameter of the at least one region; and rejecting the one or more images of the set, based on the at least one parameter of the at least one region; and apply a 3D modeling algorithm on the filtered set of images, to build the 3D model of the object.
[0013] In some embodiments of the method of building a 3D model, rejecting the one or more images of the set based on the at least one parameter may include rejecting the one or more images of the set provided that a value of the at least one parameter exceeds a predefined threshold or range.
[0014] In some embodiments of the method of building a 3D model, the one or more images of the set may represent a plurality of images of the set, and rejecting the one or more images of the set based on the at least one parameter may include rejecting a particular image of the plurality of the images of the set, based on a difference in the at least one parameter of the at least one region between the images of the plurality of the images.
[0015] In some embodiments of the method of building a 3D model, rejecting the particular image of the plurality of the images of the set based on the difference in the at least one parameter may include rejecting the particular image of the plurality of the images of the set provided that the difference in the at least one parameter exceeds a predefined threshold or range.
[0016] In some embodiments, the method of building a 3D model may further include correcting the rejected image, to provide the value of the at least one parameter subceeding the predefined threshold or range; and supplementing the filtered set of the images with the corrected image.
[0017] In some embodiments of the method of building a 3D model, receiving the set of the images of the object may include capturing the images of the object with an image capturing device; and the method of building a 3D model may further include identifying a specific image capturing conditions in which the rejected image is captured; and providing instructions, to a user via a user interface, of correction of the specific image capturing conditions to provide the value of the at least one parameter subceeding the predefined threshold or range.
[0018] In some embodiments, the specific image capturing conditions may include at least one of (a) a number of light sources; (b) a brightness of at least one light source; (c) a position of at least one light source with respect to the object; (d) a position of the image capturing device with respect to the object. [0019] In some embodiments, the at least one parameter may be selected from a list including: (a) a location of the at least one region; (b) a brightness of the at least one region; (c) a color balance of the at least one region.
[0020] In some embodiments, the at least one region may represent a plurality of regions, and the at least one parameter may be selected from a list including: (a) a distance between the regions of the plurality of the regions; (b) a relative brightness between the regions of the plurality of the regions; (c) an average brightness of the regions of the plurality of the regions; (d) a relative color balance between the regions of the plurality of the regions; and (e) an average color balance between the regions of the plurality of the regions.
[0021] In some embodiments, the method of building a 3D model may further include receiving the at least one parameter of the at least one region of the previously built 3D model of the object; and rejecting the one or more images of the set based on the at least one parameter of the at least one region may include rejecting the one or more images of the set, based on a difference in the at least one parameter of the at least one region between the one or more images of the set and the previously built 3D model.
[0022] In some embodiments, the object may be a human face; the at least one parameter may represent a color balance of the at least one region; and the at least one region may correspond to an eye sclera area of the human face.
[0023] In some embodiments of the methods of longitudinal study, building the pair of 3D models may further include: rendering the pair of 3D models, a first 3D model of the pair being superimposed with a first texture pattern set and a second 3D model of the pair being superimposed with a second texture pattern set; obtaining a pair of two-dimensional (2D) images of the object, each of the 2D images based on a respective rendered 3D model; and normalizing color representations between the first texture pattern set and the second texture pattern set, based on the pair of 2D images.
[0024] In some embodiments, the methods of longitudinal study may further include: changing an orientation of at least one of the 3D models, to align an orientation of the object between the 3D models; wherein obtaining the pair of the 2D images is performed in the aligned orientation.
[0025] In some embodiments of the methods of longitudinal study, normalizing the color representations may further include: determining at least one region on each 2D image, said at least one region corresponding to a specific surface area of the object; calculating a transfer function mapping color representation of the at least one region of one of the 2D images to a corresponding at least one region of another of the 2D images; adjusting the color representation of at least one of the first texture pattern set and the second texture pattern set, to normalize the color representations, based on the calculated transfer function. [0026] In some embodiments of the methods of longitudinal study, the color representations may be defined via a specific color model, including one or more color channels, each providing a digital representation of a specific color characteristic; and calculating the transfer function may further include: for each 2D image, calculating a deviation of values in said one or more color channels between pixels of the respective at least one region; and calculating the transfer function, so as to fit the calculated deviation of one of the 2D images to another of the 2D images.
[0027] In some embodiments of the methods of longitudinal study, the deviation may include at least one of a mean deviation and a standard deviation.
[0028] In some embodiments of the methods of longitudinal study, the transfer function may include a set of linear functions, each for a respective color channel of the one or more color channels.
[0029] In some embodiments of the methods of longitudinal study, normalizing the color representations may further include: applying said set of linear functions to pixels of the first texture pattern set to normalize the color representation thereof with respect to the color representation of the second texture pattern set.
[0030] In some embodiments of the methods of longitudinal study, the specific color model may be a LAB color-opponent model, and wherein said one or more channels may include: lightness channel (L), redness-greenness channel (A) and blueness -yellowness channel (B). [0031] In some embodiments of the methods of longitudinal study, the object may be a human face; the aligned orientation may be a frontal orientation; and determining the at least one region on each 2D image may include applying, to each 2D image, a face image landmark detection algorithm, to detect a plurality of face image landmarks defining the at least one region on a respective 2D image, the at least one region corresponding to a specific area of the human face.
[0032] In some embodiments of the methods of longitudinal study, building of at least one of the 3D models of the pair may further include: receiving a set of images of the object; filtering the set of images by: applying, to one or more images of the set, a visual feature detection algorithm, to detect at least one region on the image, that corresponds to a specific visual feature of the object; determining, on the one or more images of the set, at least one parameter of the at least one region; and rejecting the one or more images of the set, based on the at least one parameter of the at least one region; and applying a 3D modeling algorithm on the filtered set of images, to build the 3D model of the object.
[0033] In some embodiments of the methods of longitudinal study, rejecting the one or more images of the set based on the at least one parameter may include rejecting the one or more images of the set provided that a value of the at least one parameter exceeds a predefined threshold or range.
[0034] In some embodiments of the methods of longitudinal study, the one or more images of the set may represent a plurality of images of the set, and rejecting the one or more images of the set based on the at least one parameter may include rejecting a particular image of the plurality of the images of the set, based on a difference in the at least one parameter of the at least one region between the images of the plurality of the images.
[0035] In some embodiments of the methods of longitudinal study, rejecting the particular image of the plurality of the images of the set based on the difference in the at least one parameter may include rejecting the particular image of the plurality of the images of the set provided that the difference in the at least one parameter exceeds a predefined threshold or range.
[0036] In some embodiments, the methods of longitudinal study may further include: correcting the rejected image, to provide the value of the at least one parameter subceeding the predefined threshold or range; and supplementing the filtered set of the images with the corrected image.
[0037] In some embodiments of the methods of longitudinal study, receiving the set of the images of the object may include capturing the images of the object with an image capturing device; and the methods of longitudinal study may further include identifying a specific image capturing conditions in which the rejected image is captured; and providing instructions, to a user via a user interface, of correction of the specific image capturing conditions to provide the value of the at least one parameter subceeding the predefined threshold or range.
[0038] In some embodiments of the methods of longitudinal study, the specific image capturing conditions may include at least one of: (a) a number of light sources; (b) a brightness of at least one light source; (c) a position of at least one light source with respect to the object; (d) a position of the image capturing device with respect to the object.
[0039] In some embodiments of the methods of longitudinal study, the at least one parameter may be selected from a list including: (a) a location of the at least one region; (b) a brightness of the at least one region; (c) a color balance of the at least one region.
[0040] In some embodiments of the methods of longitudinal study, the at least one region represents a plurality of regions, and the at least one parameter may be selected from a list including: (a) a distance between the regions of the plurality of the regions; (b) a relative brightness between the regions of the plurality of the regions; (c) an average brightness of the regions of the plurality of the regions; (d) a relative color balance between the regions of the plurality of the regions; and (e) an average color balance between the regions of the plurality of the regions.
[0041] In some embodiments, the methods of longitudinal study may further include receiving the at least one parameter of the at least one region of a first 3D model of the pair of 3D models; and, for a second 3D model of the pair of 3D models, rejecting the one or more images of the set may include rejecting the one or more images of the set, based on a difference in the at least one parameter of the at least one region between the one or more images of the set and the first 3D model.
[0042] In some embodiments of the methods of longitudinal study, the object may be a human face; the at least one parameter may represent a color balance of the at least one region; and the at least one region may correspond to an eye sclera area of the human face. [0043] In some embodiments, the at least one processor may be further configured to reject the one or more images of the set, provided that a value of the at least one parameter exceeds a predefined threshold or range.
[0044] In some embodiments, the one or more images of the set may represent a plurality of images of the set, and the at least one processor may be further configured to reject the one or more images of the set by rejecting a particular image of the plurality of the images of the set, based on a difference in the at least one parameter of the at least one region between the images of the plurality of the images.
[0045] In some embodiments, the at least one processor may be further configured to reject the particular image of the plurality of the images of the set, provided that the difference in the at least one parameter exceeds a predefined threshold or range. [0046] In some embodiments, the at least one processor may be further configured to correct the rejected image, to provide the value of the at least one parameter subceeding the predefined threshold or range; and supplement the filtered set of the images with the corrected image.
[0047] In some embodiments, the system may further include an image capturing device in operative connection with the at least one processor, and the at least one processor may be further configured to: receive the set of the images of the object by capturing the images of the object with the image capturing device; identify a specific image capturing conditions in which the rejected image is captured; and provide instructions, to a user via a user interface, of correction of the specific image capturing conditions to provide the value of the at least one parameter subceeding the predefined threshold or range.
[0048] In some embodiments, the at least one processor may be further configured to: receive the at least one parameter of the at least one region of the previously built 3D model of the object; reject the one or more images of the set further based on a difference in the at least one parameter of the at least one region between the one or more images of the set and the previously built 3D model.
BRIEF DESCRIPTION OF THE DRAWINGS
[0049] The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:
[0050] Fig. 1 is a block diagram, depicting a computing device which may be included in a system for building a 3D model of an object, according to some embodiments;
[0051] Figs. 2A-2E are schematic representation of the concept of the present invention, demonstrated with respect to building a 3D model of a human face, according to some embodiments;
[0052] Fig. 2F is a schematic representation of the concept of the present invention, demonstrated with respect to normalizing color representation between two 3D models of a human face, for the purpose of the longitudinal study, according to some embodiments;
[0053] Fig. 3 A is a block diagram, depicting a system for building a 3D model of an object, according to some embodiments; [0054] Fig. 3B is a block diagram, depicting a filtering module of a system for building a 3D model of an object, according to some embodiments;
[0055] Fig. 3C is a block diagram, depicting a longitudinal study system for applying the claimed methods of longitudinal study, according to some embodiments;
[0056] Fig. 3D is a block diagram, depicting aspects of color normalization module of the longitudinal study system, according to some embodiments;
[0057] Fig. 4 is a flow diagram, depicting a method of building a 3D model of an object, according to some embodiments;
[0058] Fig. 5A is a flow diagram, depicting a method of longitudinal study of an object, according to some embodiments;
[0059] Fig. 5B is a flow diagram, depicting a method of longitudinal study of an object, according to other embodiments.
[0060] It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
[0061] One skilled in the art will realize the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The foregoing embodiments are therefore to be considered in all respects illustrative rather than limiting of the invention described herein. Scope of the invention is thus indicated by the appended claims, rather than by the foregoing description, and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
[0062] In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention. Some features or elements described with respect to one embodiment may be combined with features or elements described with respect to other embodiments. For the sake of clarity, discussion of same or similar features or elements may not be repeated.
[0063] Although embodiments of the invention are not limited in this regard, discussions utilizing terms such as, for example, “processing,” “computing,” “calculating,” “determining,” “establishing”, “analyzing”, “checking”, “choosing”, “selecting”, “omitting”, “training” or the like, may refer to operation(s) and/or process(es) of a computer, a computing platform, a computing system, or other electronic computing device, that manipulates and/or transforms data represented as physical (e.g., electronic) quantities within the computer’s registers and/or memories into other data similarly represented as physical quantities within the computer’s registers and/or memories or other information non-transitory storage medium that may store instructions to perform operations and/or processes.
[0064] Although embodiments of the invention are not limited in this regard, the terms “plurality” and “a plurality” as used herein may include, for example, “multiple” or “two or more”. The terms “plurality” or “a plurality” may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like. The term “set” when used herein may include one or more items.
[0065] Unless explicitly stated, the method embodiments described herein are not constrained to a particular order or sequence. Additionally, some of the described method embodiments or elements thereof can occur or be performed simultaneously, at the same point in time, concurrently, or iteratively and repeatedly.
[0066] In some embodiments of the present invention, some steps of the claimed method (e.g., applying a visual feature detection algorithm or applying a 3D modeling algorithm) may be performed by using machine-learning (ML)-based models. In some embodiments, ML-based models may be artificial neural networks (ANN).
[0067] A neural network (NN) or an artificial neural network (ANN), e.g., a neural network implementing a machine learning (ML) or artificial intelligence (Al) function, may refer to an information processing paradigm that may include nodes, referred to as neurons, organized into layers, with links between the neurons. The links may transfer signals between neurons and may be associated with weights. A NN may be configured or trained for a specific task, e.g., pattern recognition or classification. Training a NN for the specific task may involve adjusting these weights based on examples. Each neuron of an intermediate or last layer may receive an input signal, e.g., a weighted sum of output signals from other neurons, and may process the input signal using a linear or nonlinear function (e.g., an activation function). The results of the input and intermediate layers may be transferred to other neurons and the results of the output layer may be provided as the output of the NN. Typically, the neurons and links within a NN are represented by mathematical constructs, such as activation functions and matrices of data elements and weights. A processor, e.g., CPUs or graphics processing units (GPUs), or a dedicated hardware device may perform the relevant calculations.
[0068] It should be obvious for the one ordinarily skilled in the art that various ML-based models can be implemented without departing from the essence of the present invention. It should also be understood, that in some embodiments ML-based model may be a single ML- based model or a set (ensemble) of ML-based models realizing as a whole the same function as a single one. Hence, in view of the scope of the present invention, the abovementioned variants should be considered equivalent.
[0069] In some respects, the following description of the claimed invention is provided with respect to building a 3D model of a human face. It should be understood that such a specific embodiment is provided in order for the description to be sufficiently illustrative and it is not intended to limit the scope of protection claimed by the invention. It should be understood for one ordinarily skilled in the art that the implementation of the claimed invention in accordance with such a task is provided as a non-exclusive example and other practical implementations can be covered by the claimed invention.
[0070] Reference is now made to Fig. 1, which is a block diagram depicting a computing device, which may be included within an embodiment of the system for building a 3D model of an object, according to some embodiments.
[0071] Computing device 1 may include a processor or controller 2 that may be, for example, a central processing unit (CPU) processor, a chip or any suitable computing or computational device, an operating system 3, a memory device 4, instruction code 5, a storage system 6, input devices 7 and output devices 8. Processor 2 (or one or more controllers or processors, possibly across multiple units or devices) may be configured to carry out methods described herein, and/or to execute or act as the various modules, units, etc. More than one computing device 1 may be included in, and one or more computing devices 1 may act as the components of, a system according to embodiments of the invention.
[0072] Operating system 3 may be or may include any code segment (e.g., one similar to instruction code 5 described herein) designed and/or configured to perform tasks involving coordination, scheduling, arbitration, supervising, controlling or otherwise managing operation of computing device 1, for example, scheduling execution of software programs or tasks or enabling software programs or other modules or units to communicate. Operating system 3 may be a commercial operating system. It will be noted that an operating system 3 may be an optional component, e.g., in some embodiments, a system may include a computing device that does not require or include an operating system 3.
[0073] Memory device 4 may be or may include, for example, a Random- Access Memory (RAM), a read only memory (ROM), a Dynamic RAM (DRAM), a Synchronous DRAM (SD-RAM), a double data rate (DDR) memory chip, a Flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short-term memory unit, a long-term memory unit, or other suitable memory units or storage units. Memory device 4 may be or may include a plurality of possibly different memory units. Memory device 4 may be a computer or processor non-transitory readable medium, or a computer non-transitory storage medium, e.g., a RAM. In one embodiment, a non-transitory storage medium such as memory device 4, a hard disk drive, another storage device, etc. may store instructions or code which when executed by a processor may cause the processor to carry out methods as described herein. In some embodiments, memory device 4 may include a short-term or a long-term storage for images used for 3D modelling purposes (e.g., the received set of images, the filtered set of images, the corrected images, the rejected images) and previously built 3D models, as further described herein.
[0074] Instruction code 5 may be any executable code, e.g., an application, a program, a process, task, or script. Instruction code 5 may be executed by processor or controller 2 possibly under control of operating system 3. For example, instruction code 5 may be a standalone application or an API module that may be configured to build a 3D model of an object or perform a longitudinal study thereof, as further described herein. Although, for the sake of clarity, a single item of instruction code 5 is shown in Fig. 1, a system according to some embodiments of the invention may include a plurality of executable code segments or modules similar to instruction code 5 that may be loaded into memory device 4 and cause processor 2 to carry out methods described herein.
[0075] Storage system 6 may be or may include, for example, a flash memory as known in the art, a memory that is internal to, or embedded in, a micro controller or chip as known in the art, a hard disk drive, a CD-Recordable (CD-R) drive, a Blu-ray disk (BD), a universal serial bus (USB) device or other suitable removable and/or fixed storage unit. Various types of input and output data may be stored in storage system 6 and may be loaded from storage system 6 into memory device 4 where it may be processed by processor or controller 2. In some embodiments, some of the components shown in Fig. 1 may be omitted. For example, memory device 4 may be a non-volatile memory having the storage capacity of storage system 6. Accordingly, although shown as a separate component, storage system 6 may be embedded or included in memory device 4.
[0076] Input devices 7 may be or may include any suitable input devices, components, or systems, e.g., a detachable keyboard or keypad, a mouse, an image capturing device (e.g., a camera or a depth-sensing means) and the like. Output devices 8 may include one or more (possibly detachable) displays or monitors, speakers and/or any other suitable output devices. Any applicable input/output (I/O) devices may be connected to Computing device 1 as shown by blocks 7 and 8. For example, a wired or wireless network interface card (NIC), a universal serial bus (USB) device or external hard drive may be included in input devices 7 and/or output devices 8. It will be recognized that any suitable number of input devices 7 and output devices 8 may be operatively connected to Computing device 1 as shown by blocks 7 and 8.
[0077] A system according to some embodiments of the invention may include components such as, but not limited to, a plurality of central processing units (CPU) or any other suitable multi-purpose or specific processors or controllers (e.g., similar to element 2), a plurality of input units, a plurality of output units, a plurality of memory units, and a plurality of storage units.
[0078] Reference is now made to Figs. 2A-2E, which schematically represent the concept of the present invention, demonstrated with respect to building a 3D model of a human face. [0079] The concept of the present invention is described with respect to system 10. System 10 for building a 3D model of an object may be implemented as a software module, a hardware module, or a combination thereof. For example, system 10 may be or may include a mobile device 100 as a computing device 1 of Fig. 1. Furthermore, system 10 may be adapted to execute one or more modules of instruction code (e.g., instruction code 5 of Fig. 1) to request, receive, analyze, accept, reject, calculate and produce various data. Mobile device 100 may include an image capturing device (e.g., a frontal camera 101) and touch screen 102 for interacting with a user via a user interface (UI) as input devices (e.g., input devices 7 of Fig. 1). Mobile device 100 may use touch screen 102 and speaker 103 as output devices (e.g., output devices 8 of Fig. 1). System 10 may be further adapted to execute one or more modules of instruction code (e.g., element 5 of Fig. 1) in order to perform steps of the claimed method. System 10 is described in detail with reference to Figs. 3A-3B.
[0080] In order to increase accuracy and repeatability in building a plurality of 3D models of the same object, it is important to ensure that the images for building the 3D models are taken under the same image capturing conditions (or at least similar to a certain extent). Image capturing conditions may include both external factors, like a number of light sources, brightness of light sources, position of light sources with respect to the object, position of the image capturing device with respect to the object, and internal factors, like image capturing device parameters, e.g., ISO, lens aperture, focal length, focusing distance, shutter speed, color balance settings (color temperature, white balance etc.).
[0081] Consequently, when building a 3D model of an object, at least some of such parameters must be directly or indirectly evaluated, to ensure that their values will not affect the repeatability of 3D modelling. Furthermore, in addition to the requirements of traditional photography, where such parameters are usually evaluated with respect to either the entire image (i.e., including the object and the background) or the object as a whole, the present invention suggests applying a visual feature detection algorithm to detect at least one region on the image that corresponds to a specific visual feature of the object, and then checking the parameters with respect to the specific visual features and corresponding parts of the object. Hence, such an approach provides an accurate validation of each image that will be used for 3D modeling, thereby ensuring that a plurality of 3D models is built based on images that do not have a substantial difference in specific image capturing parameters (in other words, that the object is imaged similarly).
[0082] It should be understood that, within the scope of the present invention, various logic may be applied in deciding which images to approve or reject.
[0083] For example, in some simple embodiments, the method may include detection of only one region on each image of the set of images that corresponds to a specific visual feature of the object; determination, on each image, of only one parameter of the region; and rejection or approval of the images of the set based on the parameter (e.g., approving images when a value of the parameter is below a predefined threshold or within a predefined range, and rejecting images when a value of the parameter exceeds the predefined threshold or range). E.g., the object may be a human face, the region of the image may correspond to a visual feature of the face, such as the forehead area of the face, and the parameter may represent the brightness of the forehead region. Hence, images of the face may be approved or rejected based on the sufficiency of forehead-region illumination. Furthermore, by presetting a range for such brightness evaluation, it may be verified that the images for two different 3D models of the same object are captured in the same or almost the same conditions, hence the desired repeatability of building a plurality of 3D models may be achieved.
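By way of a non-limiting illustration only, a minimal sketch of such a brightness check is given below in Python. The rectangular forehead region, the acceptance range (90 to 200 on an 8-bit scale) and all function names are assumptions of the sketch, not features of the claimed method:

```python
import cv2  # OpenCV is assumed to be available


def region_brightness(image_bgr, region):
    """Mean brightness of a rectangular region (x, y, w, h) of a BGR image."""
    x, y, w, h = region
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return float(gray[y:y + h, x:x + w].mean())


def accept_image(image_bgr, forehead_region, lo=90.0, hi=200.0):
    """Accept the image only if the forehead brightness lies within a preset range;
    otherwise the image would be rejected (or corrected, as described further herein)."""
    brightness = region_brightness(image_bgr, forehead_region)
    return lo <= brightness <= hi
```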
[0084] In some other embodiments, the plurality of regions of the image may be evaluated in combination.
[0085] For example, as shown in Figs. 2A and 2B, the object may be a human face 200, the plurality of regions 31A (including regions 31A' and 31A'') of images 20A' and 20A'' may correspond to the visual features of face 200, such as left and right cheek areas of face 200, and parameter 32A may represent a relative brightness between regions 31A' and 31A''. Hence, images 20A' and 20A'' of face 200 may be approved or rejected based on evenness of face 200 illumination. E.g., if, according to preset conditions, region 31A' must be brighter than region 31A'', then, according to the claimed method, image 20A' will be rejected and image 20A'' will be approved.
[0086] In yet another example of embodiments (not shown in the figures), wherein a plurality of regions of the image is evaluated in combination, the object may be a human face, the plurality of regions of the image may correspond to visual features of the face, such as the left and right cheek areas of the face, and the parameter may represent a distance between the regions of the plurality of regions. Such an embodiment may be used to evaluate a position of the image capturing device with respect to the object, in particular, a distance between the face and the camera, since the closer the face is located to the camera, the longer the distance between these regions becomes.
[0087] In yet another embodiment, a plurality of images may be evaluated in combination. Furthermore, rejecting the one or more images of the set based on the parameter may include rejecting a particular image of the plurality of the images, based on a difference in the parameter of a specific region between the images of the plurality of the images.
[0088] For example, as shown in Figs. 2C-2E, the object may be a human face 200, detected region 31A of images 20A', 20A'' and 20A''' may correspond to a visual feature of face 200, such as the proximal right eye corner area of face 200, and parameter 32A may represent locations 32A', 32A'' and 32A''' of region 31A on the respective images 20A', 20A'' or 20A'''. Hence, images 20A'' and 20A''' of face 200 may be approved or rejected based on the difference of locations 32A'' and 32A''' of region 31A on images 20A'' and 20A''' with respect to location 32A' of region 31A on image 20A'. Such an embodiment provides an indirect evaluation of the degree of similarity in the position of the image capturing device (e.g., a frontal camera 101) with respect to the object (e.g., face 200) between captured images (e.g., images 20A', 20A'' and 20A''' of Figs. 2C-2E).
[0089] E.g., when a user captures images of his face starting with the front view and then moves the camera around the face, and the images are captured automatically and continuously, the images that are too close and hence too similar to each other may be rejected. Hence, image 20A'' may be rejected since difference 32B' in the location of region 31A between images 20A' and 20A'' is too small (i.e., the proximity of the two locations exceeds a predefined threshold of proximity). Image 20A''', in turn, may be accepted since difference 32B'' in the location of region 31A between images 20A' and 20A''' is sufficient (i.e., does not exceed the predefined threshold of proximity). It should be understood that differences 32B' and 32B'' and the relative angular position of mobile device 100 and face 200 shown in Figs. 2C-2E are exaggerated in order for the example to be illustrative. In a practical embodiment of the invention, the predefined threshold of proximity may be significantly lower.
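A minimal sketch of such proximity-based filtering is given below. The landmark-locating function (e.g., returning the tracked eye-corner position in pixel coordinates) and the minimum shift value are assumptions supplied by the implementer; they are not part of the claims:

```python
import math


def filter_by_landmark_motion(frames, get_landmark, min_shift_px=25.0):
    """Keep only frames whose tracked landmark has moved at least min_shift_px
    (in image coordinates) since the last accepted frame; drop near-duplicates."""
    accepted = []
    last_xy = None
    for frame in frames:
        x, y = get_landmark(frame)  # e.g., location of the proximal right eye corner
        if last_xy is None or math.hypot(x - last_xy[0], y - last_xy[1]) >= min_shift_px:
            accepted.append(frame)
            last_xy = (x, y)
    return accepted
```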
[0090] It should be understood that this kind of filtering provides the technical improvement of increasing repeatability of 3D modelling (when building a plurality of 3D models of the same object), because each 3D model is built based on the same number of images, each of which represents the same angular view of the object as the respective image of other 3D models. It should be understood that filtering out substantially similar (in terms of positioning) images will not only contribute to the abovementioned technical improvement, but also provide an additional technical improvement by reducing the computational time of 3D modelling, since a smaller number of images will be involved therein.
[0091] In some embodiments, parameters of the regions of a previously built 3D model of the same object may be evaluated. E.g., prior to filtering the set of images that will be used to build a new 3D model of the object, the previously built 3D model may be processed in order to detect specific regions on the model that correspond to specific visual features of the object, and then the requested parameters of the regions may be determined. Alternatively, parameters of regions of the previously built 3D model may be determined beforehand, e.g., during its creation, and stored additionally. Hence, according to some embodiments, values of parameters of the previously built 3D model may be received and compared with values of the respective parameters of the images which are captured to build a new 3D model. Hence, the method may include rejecting one or more images of the set based on a difference in the parameters of regions between the previously built 3D model and the currently evaluated images. Obviously, using a previously built 3D model and its parameters as a reference for building a new 3D model will substantially increase the repeatability of 3D modeling. Moreover, it should be clear that the previously built 3D model does not necessarily have to be built using the method and system of the present invention; it may be built using any technique already known in the art.
[0092] Another essential aspect of the present invention lies in the actions that follow the rejection of one or more images of the set.
[0093] In some embodiments, it is suggested to perform correction of the rejected image, to provide the value of the specific parameter (the one that was used as the basis for the rejection) subceeding the predefined threshold or range, and then to supplement the filtered set of the images with the corrected image.
[0094] In some alternative embodiments, it is suggested to identify the specific image capturing conditions in which the rejected image was captured and to provide instructions, to a user via a user interface, for correction of the specific image capturing conditions so as to provide the value of the specific parameter subceeding the predefined threshold or range. Image capturing conditions may include (a) a number of light sources; (b) a brightness of at least one light source; (c) a position of at least one light source with respect to the object; (d) a position of the image capturing device with respect to the object. The number of light sources, the brightness of at least one light source and the position of at least one light source with respect to the object may be evaluated based on the analysis of a brightness of a plurality of regions of the image. E.g., the user may then be instructed to add another light source, to turn a specific side of the object to the light source, to move the object closer/further to/from the light source, to find a brighter light source etc. A position of the image capturing device with respect to the object may be evaluated based on a difference in location of specific regions between the images of the set or based on a distance between the plurality of regions on each image separately. E.g., the user may be further instructed to move the object closer/further to/from the image capturing device, to rotate the image capturing device or the object with respect to each other etc.
[0095] In yet another embodiment, wherein the plurality of images is evaluated in combination, the object may be a human face, the detected region may correspond to a visual feature of the face, such as an eye sclera area of the human face, and the parameter may represent a color balance of the region. Obviously, in order to provide the repeatability of 3D modeling, it is important that the images which are used for building the 3D models have the same color balance. Hence, in such embodiments, it is suggested to use the eye sclera area, which does not vary in color as much as, for example, skin or hair, as a reference area for adjusting (correcting) the color representation of the entire image. E.g., the color representation of the eye sclera area may be processed to get an average color thereof, it can then be calculated how this average color should be adjusted to represent a white color, and, finally, the entire image color balance (white balance) may be calibrated (corrected) accordingly. Furthermore, as a reference value for color correction, the color balance of the region corresponding to the eye sclera area from the previously built 3D model may be used, in order to align the color balance of the new 3D model to the previously built one.
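A simplified sketch of such a sclera-based white-balance correction follows. It assumes an RGB image scaled to [0, 1] and a boolean mask of the sclera region produced by the visual feature detection step, and it scales each channel so that the average sclera color becomes neutral (a gray-world-style correction; the exact correction used in a given embodiment may differ):

```python
import numpy as np


def white_balance_from_sclera(image_rgb, sclera_mask):
    """Scale each color channel so that the average sclera color becomes neutral.

    image_rgb: H x W x 3 float array in [0, 1]; sclera_mask: H x W boolean array."""
    sclera_pixels = image_rgb[sclera_mask]        # N x 3 pixels of the sclera region
    avg = sclera_pixels.mean(axis=0)              # average sclera color per channel
    target = avg.mean()                           # neutral gray at the same overall level
    gains = target / np.clip(avg, 1e-6, None)     # per-channel correction factors
    return np.clip(image_rgb * gains, 0.0, 1.0)
```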
[0096] Referring now to Fig. 2F, another essential aspect of the present invention is explained. Fig. 2F schematically represents the concept of the present invention, demonstrated with respect to normalizing color representation between two 3D models of a human face for the purpose of the longitudinal study, according to some embodiments.
[0097] The concept of the present invention is described with respect to longitudinal study system 500. Longitudinal study system 500 may be implemented as a software module, a hardware module, or a combination thereof. For example, system 500 may be or may include a computing device 1 of Fig. 1. Furthermore, system 500 may be adapted to execute one or more modules of instruction code (e.g., instruction code 5 of Fig. 1) to request, receive, analyze, calculate and produce various data. System 500 may be further adapted to execute one or more modules of instruction code (e.g., element 5 of Fig. 1) in order to perform steps of claimed methods of longitudinal study. System 500 is described in detail with reference to Figs. 3C-3D.
[0098] According to one aspect of the present invention, it is suggested to build a pair of three-dimensional (3D) models of the object (e.g., models 301 and 301') over a period of time, wherein at least one of the 3D models of the pair is built by the method of building a 3D model claimed herein. According to some embodiments, one of the 3D models of the pair may be built based on methods known in the art. According to another aspect, both 3D models may be built based on any methods known in the art. It is further suggested to compare the 3D models of the pair (e.g., models 301 and 301') with each other, to determine distinctions therebetween; and to provide an indication of changes in the object, based on the determined distinctions between the 3D models of the pair.
[0099] As indicated above, in order to provide reliable detection and indication of changes in the object within the longitudinal study approach, it is critical to achieve similarity in the aspects of object representation between models (e.g., models 301 and 301’), such as lighting, brightness, color representation, color balance etc. The aspects of the suggested approach of normalizing color representations between a pair of 3D models (e.g., models 301 and 301’) are further described in greater detail herein.
[00100] Accordingly, in some embodiments, it is suggested to render the pair of 3D models (e.g., first rendered 3D model 303, second rendered 3D model 303'), a first 3D model of the pair (e.g., model 303) being superimposed with a first texture pattern set (e.g., first texture pattern set 302) and a second 3D model of the pair (e.g., model 303') being superimposed with a second texture pattern set (e.g., second texture pattern set 302').
[00101] In the context of the present description, terms such as “texture pattern” and “texture pattern set” refer to a two-dimensional (2D) image or a set of 2D images, e.g., stored in common image formats, that are further superimposed (“wrapped”) around the 3D object during rendering procedure, e.g., using known texture mapping and 3D rendering techniques, thereby forming a visual representation of the modeled object.
[00102] In Fig. 2F, color representations of texture pattern sets 302 and 302', as well as color representations of all subsequent elements (e.g., 3D models 303 and 303') that use these texture pattern sets, are shown with short horizontal dash filling and short vertical dash filling, respectively. A difference in the direction of dash filling shall be understood as a difference in the color representation. As can be seen, texture pattern sets 302 and 302' differ in color representation, which may potentially affect the detection and indication of changes in the object; hence, this difference should be fixed at the pre-processing stage, before proceeding with the longitudinal study of the object.
[00103] To do so, in some embodiments, it may be further suggested to obtain a pair of 2D images of the object (e.g., images 304 and 304’), each of the 2D images based on a respective rendered 3D model (e.g., 3D models 303 and 303’). In order to obtain a pair of 2D images, in some embodiments, 3D models may be first aligned in orientation (e.g., so as to have 2D images 304 and 304’ depicting human face in frontal orientation).
[00104] Finally, in some embodiments, it may be suggested to normalize color representations between the first texture pattern set (e.g., first texture pattern set 302) and the second texture pattern set (e.g., second texture pattern set 302’), based on the pair of 2D images (e.g., images 304 and 304’).
[00105] In particular, in some embodiments, it may be suggested to determine at least one region (e.g., regions 305 and 305') on each 2D image (e.g., images 304 and 304'), said at least one region corresponding to a specific surface area of the object (e.g., a uniformly exposed skin area of a face, without the eye region - for men and women; and, optionally, without the beard region - for men). Such regions (e.g., regions 305 and 305') may be considered to clearly define the color representations of the respective 3D models (e.g., models 303 and 303'), as they correspond to portions of texture that are substantially uniform in color, and these regions may further be used for color alignment between 3D models since they correspond to the same region of the object (e.g., a specific region of the face).
[00106] In some embodiments, it is further suggested to calculate transfer function 306 mapping the color representation of the at least one region of one of the 2D images (e.g., region 305') to a corresponding at least one region of another of the 2D images (e.g., region 305); and to adjust the color representation of at least one of the first texture pattern set and the second texture pattern set (e.g., the color representation of texture pattern set 302'), to normalize the color representations, based on the calculated transfer function (e.g., transfer function 306).
[00107] In some embodiments, the color representations are defined via a specific color model, comprising one or more color channels, each providing a digital representation of a specific color characteristic. In some embodiments, the transfer function (e.g., transfer function 306) includes a set of linear functions, each for a respective color channel of the one or more color channels.
[00108] As known in the art, when a typical three-channel image is represented in any of the most well-known color models (or color spaces), there will be correlations between the different channels’ values. For example, in RGB space, most pixels will have large values for the red and green channel if the blue channel is large. This implies that if it is needed to change the appearance of a pixel’s color in a coherent way, all color channels must be modified in tandem. This aspect complicates any color modification process. To mitigate this issue, it is known to use an orthogonal color space (color model) without correlations between the axes (channels).
[00109] Accordingly, in some embodiments, the specific color model may be a LAB color-opponent model, wherein said one or more channels may include: a lightness channel (L), a redness-greenness channel (A) and a blueness-yellowness channel (B). The LAB color-opponent model is known to be used for such purposes, since it has low correlation between the axes in the color space, thereby enabling application of different operations in different color channels while mitigating undesirable cross-channel artifact occurrence. Furthermore, the LAB color space is logarithmic, which means, to a first approximation, that uniform changes in channel intensity tend to be equally detectable, thereby further simplifying computation of a transfer function and increasing accuracy and coherence of color normalization.
[00110] Accordingly, in some embodiments, it may be suggested to store texture pattern sets 302 and 302' defined in the LAB color-opponent model, or to convert them to the LAB color-opponent model, before calculating transfer function 306.
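As a purely illustrative example, one common way to obtain such a representation is the CIELAB conversion available in scikit-image; the specific color-opponent space used in a given embodiment may differ, and the library choice is an assumption of the sketch:

```python
import numpy as np
from skimage import color  # scikit-image is assumed to be available


def to_lab(texture_rgb):
    """Convert an RGB texture or region (uint8 or float in [0, 1]) to CIELAB."""
    rgb = texture_rgb.astype(np.float64)
    if rgb.max() > 1.0:        # normalize 8-bit input to [0, 1]
        rgb = rgb / 255.0
    return color.rgb2lab(rgb)  # L roughly in [0, 100], A and B roughly in [-128, 127]
```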
[00111] In some embodiments, calculating transfer function (e.g., function 306) may further include: for each 2D image (e.g., images 304 and 304’), calculating a deviation of values in said one or more color channels between pixels of the respective at least one region (e.g., regions 305 and 305’); and calculating the transfer function (e.g., function 306), so as to fit the calculated deviation of one of the 2D images to another of the 2D images (e.g., calculated deviation of image 304’ to deviation of image 304). In some embodiments, the deviation may include at least one of a mean deviation and a standard deviation.
[00112] In some embodiments, it may be further suggested to normalize the color representations further by: applying said set of linear functions (e.g., transfer function 306) to pixels of one texture pattern set (e.g., texture pattern set 302’) to normalize the color representation thereof with respect to the color representation of another texture pattern set (e.g., texture pattern set 302), thereby obtaining adjusted texture pattern set (e.g., texture pattern set 307).
[00113] Accordingly, the 3D models of the pair (e.g., models 301 and 301') may be rendered with the respective texture pattern sets (e.g., texture pattern set 302 and adjusted texture pattern set 307, respectively) and may undergo comparison in order to determine distinctions therebetween; an indication of changes in the object may now reliably be provided based on the determined distinctions between the 3D models of the pair, reducing the rate of false positive or false negative detection of changes in the object.
[00114] It should be clear that the provided examples of embodiments of the present invention shall not be considered exclusive, and other embodiments may be covered by the scope of the present invention. E.g., in some embodiments, other parameters (e.g., blurriness, noise level etc.), regions and combinations thereof may be evaluated in order to perform filtering, and other parameters may be normalized in the same or a similar manner before performing comparison of a plurality of 3D models.
[00115] Reference is now made to Fig. 3A, depicting a system 10 for building a 3D model of an object, according to some embodiments.
[00116] As shown in Fig. 3A, arrows may represent flow of one or more data elements to and from system 10 and/or among modules or elements of system 10. Some arrows have been omitted in Fig. 3A for the purpose of clarity.
[00117] In some embodiments, system 10 may include image capturing module 20. Image capturing module 20 may be configured to receive or capture a set of images 20A of the object (e.g., human face 200 of Figs. 2A-2E).
[00118] It should be understood that the terms "receiving" or "capturing" should be considered herein in the broadest reasonable meaning. E.g., in some embodiments, the set of images 20A may be pre-stored in an external memory device and transmitted to system 10 via an input device (e.g., input device 7 of Fig. 1). In some other embodiments, system 10 may include a camera (e.g., frontal camera 101) and the image capturing module may be configured to capture images of the object via the camera in an automatic or semi-automatic manner. It should also be clear that "capturing" does not necessarily mean "saving the final image" as it is interpreted traditionally (when the user presses the respective element of the UI). The mode of the mobile device, when the camera application is activated and images are continuously received from the camera and shown on the screen, shall also be considered "capturing".
[00119] In some embodiments, system 10 may include filtering module 30. Filtering module 30 may be configured to filter the set of images 20A, in particular, to reject redundant or incorrect images 20A and accept the correct ones. Filtering module 30 may be further configured to output rejected images 30B and a filtered set 30A of images 20A. Filtering module 30 is described in detail herein with reference to Fig. 3B.
[00120] It should be understood that image capturing module 20 and filtering module 30 may work in various modes. E.g., filtering module 30 may filter images of the set of images 20A that was captured by image capturing module 20 beforehand. Alternatively, filtering module 30 may perform filtering "on-the-fly", while the user is continuously capturing images of his face. In some embodiments, the user may even randomly rotate his face with respect to the image capturing device, or rotate his image capturing device around his face, and filtering module 30 may perform filtering of captured images 20A instantaneously. Furthermore, said "rejection" of images 20A, in order to get the filtered set 30A of images 20A, may in some embodiments be implemented by deactivating the respective element ("shutter button") of the UI, thereby not allowing the user to capture an incorrect image, and activating the element only when parameter 32A subceeds the predefined threshold or range (if capturing of images 20A is performed manually).
[00121] In some embodiments, system 10 may further include correction module 40. Correction module 40 may be configured to perform correction of rejected images 30B, to provide the value of parameter 32A (described with reference to Fig. 3B) subceeding the predefined threshold or range. E.g., if parameter 32A represents brightness or color balance (either with respect to separate regions or representing a relation between a plurality of regions), and the value of parameter 32A of rejected image 30B exceeds the predefined threshold or range, correction module 40 may be configured to correct image 30B so as to make parameter 32A subceed the predefined threshold or range (e.g., increase/decrease the brightness of the respective regions 31A or of the entire image, adjust the color balance accordingly etc.), thereby making rejected image 30B valid for further building of the 3D model.
[00122] Hence, correction module 40 may be further configured to output corrected images 40A and to supplement the filtered set of the images 20A with corrected images 40A.
[00123] In case no automatic correction may be applied in order to fix the deficiencies of rejected images 30B, correction module 40 may be further configured to output correction error 40B, thereby signaling an inability to correct automatically.
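A minimal sketch of one possible automatic correction is given below, assuming that the parameter at issue is the mean brightness of a masked region (e.g., a region 31A), that the image uses an 8-bit intensity range, and that a simple global gain is an acceptable correction; the acceptance range and all names are assumptions of the sketch:

```python
import numpy as np


def correct_region_brightness(image, region_mask, lo=90.0, hi=200.0):
    """If the mean brightness of the masked region falls outside [lo, hi],
    apply a global gain that moves it to the middle of the acceptance range
    (8-bit image assumed)."""
    mean_brightness = float(image[region_mask].mean())
    if lo <= mean_brightness <= hi:
        return image  # already acceptable, no correction needed
    gain = ((lo + hi) / 2.0) / max(mean_brightness, 1e-6)
    corrected = np.clip(image.astype(np.float64) * gain, 0, 255)
    return corrected.astype(image.dtype)
```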
[00124] In some embodiments, system 10 may further include instruction module 50. Instruction module 50 may be triggered by receiving correction error 40B and may be configured to identify the specific image capturing conditions in which the respective rejected image 30B was captured. In particular, instruction module 50 may be configured to identify, based on parameters 32A of detected regions 31A of rejected images 30B: a number of light sources; a brightness of the light sources; a position of the light sources with respect to the object (e.g., face 200); and a position of the image capturing device (e.g., frontal camera 101) with respect to the object (e.g., face 200). E.g., if regions 31A correspond to the left and right cheek areas of the face (e.g., face 200) and parameter 32A represents the relative brightness of regions 31A, it can be determined which side of the face is illuminated more brightly; consequently, the position of the light source with respect to the face may be identified.
[00125] In some embodiments, system 10 may further include user interface (UI) module 60, which communicates system 10 with a user via input and output devices (e.g., input devices 7 and output devices 8 of Fig. 1). In some embodiments, the input devices may include touch screen 102 and the output devices may include touch screen 102 and/or speaker 103. Instruction module 50 may be further configured to provide instructions 50A, to a user via a UI (i.e., via UI module 60), for correction of the specific image capturing conditions so as to provide the value of the respective parameter 32A subceeding the predefined threshold or range. E.g., a user may be instructed to turn his face (e.g., face 200) to the light source, to locate a light source on the other side of the face, to move closer/further to/from the light source etc. E.g., UI module 60 may be configured to provide instructions 50A to a user in the form of voice messages (e.g., via speaker 103) or text messages (e.g., by showing them on screen 102).
[00126] In some embodiments, system 10 may further include 3D modeling module 70. 3D modeling module 70 may be configured to apply 3D modeling algorithm 70A on filtered set 30A of images 20A, to build the 3D model 10A of the object (e.g., face 200). It should be understood that, in the claimed invention, neither is any specific 3D modeling algorithm claimed, nor is the invention limited to the usage of any specific 3D modeling algorithm known in the art. The invention rather suggests a specific smart method of filtering and preprocessing images that are further supplied to the 3D modeling algorithm as input data. Hence, any 3D modeling algorithm known in the art that is configured to build a 3D model of an object based on a set of images thereof (e.g., 3D model 10A based on filtered set 30A of images 20A) may be applied herein. It should also be understood that, in order to build an accurate 3D model (e.g., 3D model 10A), images (e.g., images 20A) must be captured at various angles with respect to the object, to provide sufficient data of its view from different sides.
[00127] In some embodiments, 3D model 10A may be further transferred to 3D model storage 80.
[00128] In some embodiments, filtering module 30 may be further configured to request and receive parameters 32A of regions 31A of the previously built 3D model of the same object (e.g., retrospective 3D model 10A' of the object (e.g., face 200)) from 3D model storage 80 to perform filtering based on the received parameters 32A, as further described with reference to Fig. 3B.
[00129] In some embodiments, system 10 may include a longitudinal studying module (not shown in figures) configured to compare 3D model 10A of the object (e.g., face 200) with retrospective 3D model 10A' of the same object (e.g., face 200) and determine distinctions therebetween. The longitudinal studying module may be further configured to provide an indication, via UI module 60, of changes in the object, based on the determined distinctions between 3D model 10A and retrospective 3D model 10A' (or alternatively send the information about the distinctions via a network to another mobile device or server).
[00130] Reference is now made to Fig. 3B, depicting filtering module 30 of system 10 for building a 3D model of an object, according to some embodiments.
[00131] In some embodiments, filtering module 30 may include visual feature detection module 31, configured to apply, to one or more images 20A of the set of images 20A, the visual feature detection algorithm 31B, to detect regions 31A on images 20A that correspond to specific visual features (e.g., cheek areas, forehead area, eye corners, eye sclera areas) of the object (e.g., of face 200). Visual feature detection module 31 may be further configured to output regions 31A of images 20A.
[00132] It should be understood that, in the claimed invention, neither is any specific visual feature detection algorithm claimed, nor is the invention limited to the usage of any specific visual feature detection algorithm known in the art. Hence, any visual feature detection algorithm known in the art (e.g., based on artificial intelligence (AI) and/or machine learning (ML) techniques) may be applied herein.
[00133] In some embodiments, filtering module 30 may further include image region analysis module 32. Image region analysis module 32 may be configured to determine, on images 20A of the set, parameters 32A of regions 31A. If each region 31A is considered separately, parameters 32A may represent, e.g., a location of regions 31A, a brightness of regions 31A, or a color balance of regions 31A. If regions 31A are considered in combination, parameters 32A may represent, e.g., a distance between regions 31A, a relative brightness between regions 31A, an average brightness of regions 31A, a relative color balance between regions 31A, or an average color balance between regions 31A.
[00134] In some embodiments, filtering module 30 may further include comparison module 33. Comparison module 33 may be configured to receive parameters 32A of regions 31A and to evaluate whether parameters 32A exceed a predefined threshold or range (e.g., predefined by threshold or range setup 33B).
[00135] It should be understood that the terms "exceeding" or "subceeding" the predefined threshold or range shall be interpreted in the broadest reasonable manner. That is, both "exceeding" and "subceeding" may mean crossing a threshold in a direction from a low value of the parameter (e.g., parameter 32A) to a high value of the parameter, or the other way around. However, in the context of the present application, "exceeding" is used to indicate that a value of the parameter (e.g., parameter 32A) is not acceptable for using the respective image (e.g., image 20A) for 3D modeling, hence the respective image (e.g., image 20A) must be rejected, while the term "subceeding" is used to indicate that a value of a parameter (e.g., parameter 32A) is acceptable for using the respective image (e.g., image 20A) for 3D modeling; accordingly, the image (e.g., image 20A) may be accepted. Furthermore, "exceeding the range" shall be interpreted as "having a value that is outside the range" and "subceeding the range" shall be interpreted as "having a value that is within the range".
[00136] Comparison module 33 may be configured to reject images 20A, based on parameters 32A of regions 31A. In some particular embodiments, comparison module 33 may be configured to reject images 20A provided that a value of the at least one parameter 32A exceeds the predefined threshold or range (e.g., predefined by threshold or range setup 33B). In some particular embodiments, comparison module 33 may be further configured to reject a particular image 20A of the plurality of the images 20A, based on a difference (e.g., differences 32B' and 32B'' shown in Figs. 2E and 2D) in parameters 32A of regions 31A between images 20A of the plurality of images 20A. Furthermore, in some particular embodiments, comparison module 33 may be further configured to reject a particular image 20A of the plurality of images 20A provided that the difference (e.g., differences 32B' and 32B'' shown in Figs. 2E and 2D) in a specific parameter 32A exceeds a predefined threshold or range (e.g., predefined by threshold or range setup 33B).
[00137] In some embodiments, comparison module 33 may be configured to receive the at least one parameter (e.g., respective parameter 32A) of the respective region (e.g., region 31A) of the previously built 3D model (e.g., retrospective 3D model 10A') of the same object (e.g., face 200) from 3D model storage 80 (shown in Fig. 3A). Comparison module 33 may be further configured to reject images 20A based on a difference in the at least one parameter (e.g., respective parameter 32A) of the at least one region (e.g., region 31A) between the respective image 20A and the previously built 3D model (e.g., retrospective 3D model 10A') of the same object (e.g., face 200).
[00138] Hence, as a result of determining whether parameter 32A of the particular image (e.g., image 20A), or a difference in parameter 32A between the particular image (e.g., image 20A) and either other images 20A or the previously built 3D model (e.g., retrospective 3D model 10A'), exceeds or subceeds the predefined threshold or range, comparison module 33 may be further configured to form rejection decision 33A, indicating whether the particular image (e.g., image 20A) must be rejected or accepted.
[00139] In some embodiments, filtering module 30 may further include rejection module 34, configured to divide the set of images 20A into accepted images (e.g., filtered set 30A of images) and rejected ones (e.g., rejected images 30B).
[00140] Reference is now made to Figs. 3C and 3D, depicting longitudinal study system 500 for applying the claimed methods of longitudinal study, according to some embodiments, and aspects of color normalization module 510 of longitudinal study system 500, according to some embodiments.
[00141] As shown in Fig. 3C, in some embodiments, system 500 may include color normalization module 510.
[00142] In some embodiments, system 500 may be configured to retrieve, from the database of 3D model storage 80, a pair of 3D models of the object over a period of time. E.g., system 500 may receive first 3D model 10A' and second 3D model 10B' (not superimposed with textures) and first texture pattern set 11A' (for 3D model 10A') and second texture pattern set 11B' (for 3D model 10B'). In some embodiments, at least one of the 3D models of the pair (e.g., models 10A' and 10B') may be built by the method of building a 3D model described above, e.g., using system 10, shown in Figs. 3A and 3B. In other embodiments, both 3D models may be built with other methods known in the art.
[00143] In some embodiments, color normalization module 510 may be configured to receive 3D models 10A' and 10B' and texture pattern sets 11A' and 11B'. In some embodiments, color normalization module 510 may be further configured to normalize color representations between texture pattern sets 11A' and 11B'. Aspects of normalizing color representations are described in greater detail with reference to Fig. 3D further below.
[00144] In some embodiments, as a result of the normalization, color normalization module 510 may be further configured to output adjusted second texture pattern set 11B'', normalized with respect to first texture pattern set 11A'.
[00145] In some embodiments, system 500 may further include rendering module 520.
[00146] Rendering module 520 of system 500 may be configured to receive 3D models 10A' and 10B' and texture pattern sets 11A' and 11B''. Rendering module 520 may be further configured to render a pair of 3D models, wherein a first 3D model of the pair (e.g., 3D model 10A') may be superimposed with first texture pattern set 11A', and a second 3D model of the pair (e.g., 3D model 10B') may be superimposed with adjusted second texture pattern set 11B''. Accordingly, rendering module 520 may be configured to output first rendered 3D model 520A and second rendered 3D model 520B' (having adjusted color representation).
[00147] In some embodiments, system 500 may further include 3D model comparison module 530. In some embodiments, 3D model comparison module 530 may be configured to receive 3D models 520A and 520B’. In some embodiments, 3D model comparison module 530 may be further configured to perform comparison of the 3D models of the pair (3D models 520A and 520B’) with each other, to determine distinctions therebetween. Accordingly, 3D model comparison module 530 may be further configured to output a data element indicating determined distinctions 530A.
[00148] In some embodiments, system 500 may further include user interface (UI) module 540. UI module 540 may be configured to receive the data element indicating determined distinctions 530A, and 3D models 520A and 520B'. In some embodiments, UI module 540 may be further configured to provide indication 540A of changes in the object, based on determined distinctions 530A between the 3D models 520A and 520B'. E.g., UI module 540 may visually represent 3D models 520A and 520B' with marked (e.g., by a different color) changes in the object.
[00149] Referring now to Fig. 3D, color normalization module 510 is described in greater detail.
[00150] For the purpose of color normalization, rendering module 520 of system 500 may be configured to receive 3D models 10A' and 10B' and texture pattern sets 11A' and 11B'. Rendering module 520 may be further configured to render a pair of 3D models, wherein a first 3D model of the pair (e.g., 3D model 10A') may be superimposed with first texture pattern set 11A', and a second 3D model of the pair (e.g., 3D model 10B') may be superimposed with second texture pattern set 11B'. Accordingly, rendering module 520 may be configured to output first rendered 3D model 520A and second rendered 3D model 520B.
[00151] In some embodiments, color normalization module 510 may include alignment and image extraction module 511. In some embodiments, alignment and image extraction module 511 may be configured to change an orientation of at least one of 3D models 520A and 520B, to align an orientation of the object between the 3D models 520A and 520B. In some embodiments, alignment and image extraction module 511 may be further configured to obtain (extract) a pair of 2D images of the object (e.g., first frontal image 511A and second frontal image 511B), each of the 2D images 511A and 511B based on a respective rendered 3D model 520A and 520B, wherein 2D images 511A and 511B may be obtained in the aligned orientation.
[00152] In some embodiments, color normalization module 510 may further include landmark detection and region extraction module 512. Landmark detection and region extraction module 512 may be configured to receive frontal images 511A and 511B. Landmark detection and region extraction module 512 may be further configured to determine at least one region (e.g., regions 512A and 512B) on each 2D image 511A and 511B, said at least one region corresponding to a specific surface area of the object.
[00153] In particular, in some embodiments, wherein the object is a human face and the aligned orientation is a frontal orientation, landmark detection and region extraction module 512 may be further configured to apply, to each 2D image 511A and 511B, a face image landmark detection algorithm, to detect a plurality of face image landmarks 512A' and 512B', respectively defining the region on a respective 2D image (e.g., the contour of the respective region 512A and 512B may be formed by detected landmarks 512A' and 512B', respectively). In some embodiments, regions 512A and 512B may correspond to an area of the human face without eyes, eyebrows, and, optionally, beard and mouth.
[00154] It shall be understood that said face image landmark detection algorithm may include and/or be based on any methods known in the art.
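For illustration only, the sketch below assumes that some off-the-shelf face-landmark detector has already produced the contour points of the region of interest (the detector itself is deliberately left as a placeholder), and builds a binary mask of the enclosed skin area with OpenCV; areas to be excluded (eyes, eyebrows, optionally beard and mouth) could be subtracted with additional polygons:

```python
import numpy as np
import cv2  # OpenCV is assumed to be available


def region_mask_from_landmarks(image_shape, contour_points):
    """Binary mask of the face region enclosed by a set of landmark points.

    contour_points: list of (x, y) pixel coordinates returned by any face-landmark
    detector (placeholder here); image_shape: shape of the corresponding 2D image."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    points = np.array(contour_points, dtype=np.int32)
    hull = cv2.convexHull(points)          # close the contour around the region
    cv2.fillConvexPoly(mask, hull, 255)    # fill the enclosed area
    return mask.astype(bool)
```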
[00155] In some embodiments, color normalization module 510 may further include transfer function calculation module 513. Transfer function calculation module 513 may be configured to receive regions 512A and 512B. In some embodiments, transfer function calculation module 513 may be further configured to calculate transfer function 513A mapping color representation of the at least one region (e.g., region 512B) of one of the 2D images (e.g., image 511B) to a corresponding at least one region (e.g., region 512A) of another of the 2D images (e.g., image 511A).
[00156] In some embodiments, the color representations of regions 512A and 512B may be defined via a specific color model, including one or more color channels, each providing a digital representation of a specific color characteristic. E.g., in some embodiments, the specific color model may be the LAB color-opponent model (as described in greater detail with reference to Fig. 2F), and the channels may include: lightness channel (L), redness-greenness channel (A) and blueness-yellowness channel (B). In some embodiments, transfer function calculation module 513 may be further configured to convert color representation of regions 512A and 512B into the LAB color-opponent model.
[00157] In some embodiments, transfer function calculation module 513 may be further configured to calculate transfer function 513A by calculating, for each 2D image 511A and 511B, a deviation of values in said one or more color channels between pixels of the respective region 512A and 512B. In some embodiments, the deviation may include at least one of a mean deviation and a standard deviation. In some embodiments, transfer function calculation module 513 may be further configured to calculate transfer function 513A, so as to fit the calculated deviation of one of the 2D images (e.g., image 511A) to another of the 2D images (e.g., image 511B).
[00158] In some embodiments, transfer function 513A may include a set of linear functions, each for a respective color channel of the color model.
[00159] For example, the set of linear functions may be as follows:
Ly = aL * Lx + bL;
Ay = aA * Ax + bA;
By = aB * Bx + bB,
[00160] wherein Ly may be a deviation of values in the L-channel between pixels of region 512A, Lx may be a deviation of values in the L-channel between pixels of region 512B, Ay may be a deviation of values in the A-channel between pixels of region 512A, Ax may be a deviation of values in the A-channel between pixels of region 512B, By may be a deviation of values in the B-channel between pixels of region 512A, Bx may be a deviation of values in the B-channel between pixels of region 512B, and aL, bL, aA, bA, aB, bB may represent calculated coefficients fitting the respective deviation values between regions 512A and 512B.
[00161] It shall be understood that the calculation of transfer function 513A may include or may be based on known mathematical and statistical methods; it is therefore implied herein that it would be clear to a person skilled in the art which methods to apply, and how, in order to calculate such a transfer function 513A.
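One common statistical choice, given for illustration only and not as the claimed method, is to match the per-channel mean and standard deviation of the two regions, which directly yields the coefficients of the linear functions above. The snippet assumes N x 3 arrays of LAB pixel values extracted from regions 512A and 512B and an H x W x 3 LAB texture to be adjusted:

```python
import numpy as np


def fit_channel_transfer(src_values, ref_values):
    """Coefficients (a, b) of y = a * x + b matching the source channel's mean and
    standard deviation to those of the reference channel."""
    a = ref_values.std() / max(src_values.std(), 1e-6)
    b = ref_values.mean() - a * src_values.mean()
    return a, b


def calculate_transfer_function(src_region_lab, ref_region_lab):
    """Per-channel linear functions mapping the second region's statistics (src)
    onto the first region's statistics (ref); both inputs are N x 3 LAB arrays."""
    return [fit_channel_transfer(src_region_lab[:, c], ref_region_lab[:, c])
            for c in range(3)]


def apply_transfer_function(texture_lab, coefficients):
    """Apply the per-channel linear functions to a whole H x W x 3 LAB texture."""
    adjusted = texture_lab.astype(np.float64).copy()
    for c, (a, b) in enumerate(coefficients):
        adjusted[..., c] = a * adjusted[..., c] + b
    return adjusted
```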
[00162] In some embodiments, color normalization module 510 may further include color representation adjustment module 514. Color representation adjustment module 514 may be configured to receive transfer function 513A and second texture pattern set 11B'. Color representation adjustment module 514 may be further configured to adjust the color representation of second texture pattern set 11B', to normalize the color representations between texture pattern sets 11A' and 11B', based on transfer function 513A. In particular, color representation adjustment module 514 may be further configured to normalize the color representations by applying said set of linear functions (transfer function 513A) to pixels of texture pattern set 11B' to normalize the color representation thereof with respect to the color representation of texture pattern set 11A'. Thereby, color representation adjustment module 514 may be further configured to output adjusted second texture pattern set 11B''.
[00163] It shall be understood that, although different numerical and alphanumerical references may be used between different figures herein, e.g., Figs. 3C-3D and Fig. 2F, the references may be associated with the same or similar elements. E.g., 3D models 10A' and 10B' of Figs. 3C-3D may be 3D models 301 and 301' of Fig. 2F; texture pattern sets 11A', 11B' and 11B'' of Figs. 3C-3D may be texture pattern sets 302, 302' and 307 of Fig. 2F; 3D models 520A and 520B of Figs. 3C-3D may be 3D models 303 and 303' of Fig. 2F; images 511A and 511B of Figs. 3C-3D may be images 304 and 304' of Fig. 2F; regions 512A and 512B of Fig. 3D may be regions 305 and 305' of Fig. 2F; transfer function 513A of Fig. 3D may be transfer function 306 of Fig. 2F, etc.
[00164] Referring now to Fig. 4, a flow diagram is presented, depicting a method of building a 3D model of an object, by at least one processor, according to some embodiments.
[00165] As shown in step S1005, the at least one processor (e.g., processor 2 of Fig. 1) may perform receiving of a set of images (e.g., images 20A) of the object (e.g., face 200). Step S1005 may be carried out by image capturing module 20 (as described with reference to Fig. 3A).
[00166] As shown in step S1010, the at least one processor (e.g., processor 2 of Fig. 1) may perform applying, to one or more images (e.g., images 20A) of the set, a visual feature detection algorithm (e.g., visual feature detection algorithm 31B), to detect at least one region (e.g., region 31A) on the image (e.g., image 20A), that corresponds to a specific visual feature of the object (e.g., face 200). Step S1010 may be carried out by visual feature detection module 31 (as described with reference to Fig. 3B).
[00167] As shown in step S1015, the at least one processor (e.g., processor 2 of Fig. 1) may perform determining, on the one or more images (e.g., images 20A) of the set, at least one parameter (e.g., parameter 32A) of the at least one region (e.g., region 31A). Step S1015 may be carried out by image region analysis module 32 (as described with reference to Fig. 3B).
[00168] As shown in step S1020, the at least one processor (e.g., processor 2 of Fig. 1) may perform rejecting the one or more images (e.g., images 20A) of the set, based on the at least one parameter (e.g., parameter 32A) of the at least one region (e.g., region 31A), thereby obtaining a filtered set of images (e.g., filtered set 30A of images 20A). Step S1020 may be carried out by comparison module 33 and rejection module 34 (as described with reference to Fig. 3B).
[00169] As shown in step S1025, the at least one processor (e.g., processor 2 of Fig. 1) may perform applying of a 3D modeling algorithm (e.g., 3D modeling algorithm 70A) on the filtered set of images (e.g., filtered set 30A of images 20A), to build the 3D model of the object (e.g., 3D model 10A of the object, e.g., face 200). Step S1025 may be carried out by 3D modeling module 70 (as described with reference to Fig. 3A).
[00170] Referring now to Fig. 5A, a flow diagram is presented, depicting a method of longitudinal study of an object, by at least one processor, according to some embodiments.
[00171] As shown in step S2005, the at least one processor (e.g., processor 2 of Fig. 1) may perform building of a pair of three-dimensional (3D) models of the object over a period of time (e.g., 3D model 10A and retrospective 3D model 10A' of the object, e.g., face 200), wherein at least one of the 3D models of the pair (e.g., 3D model 10A) is built by the claimed method of building a 3D model. Step S2005 may be carried out by longitudinal studying module (as described with reference to Fig. 3A).
[00172] As shown in step S2010, the at least one processor (e.g., processor 2 of Fig. 1) may perform comparison of the 3D models (e.g., 3D model 10A and retrospective 3D model 10A' of the object, e.g., face 200, or 3D models 520A and 520B', as shown in Fig. 3C) of the pair with each other, to determine distinctions therebetween (e.g., determined distinctions 530A, as shown in Fig. 3C). Step S2010 may be carried out by longitudinal studying module (as described with reference to Fig. 3A) or 3D model comparison module 530 (as described with reference to Fig. 3C).
[00173] As shown in step S2015, the at least one processor (e.g., processor 2 of Fig. 1) may provide an indication of changes in the object (e.g., face 200), based on the determined distinctions between the 3D models of the pair (e.g., 3D model 10A and retrospective 3D model 10A' of the object, e.g., face 200; or 3D models 520A and 520B', as shown in Fig. 3C). Step S2015 may be carried out by longitudinal studying module and UI module 60 (as described with reference to Fig. 3A) or by UI module 540 (as described with reference to Fig. 3C).
[00174] Referring now to Fig. 5B, a flow diagram is presented, depicting a method of longitudinal study of an object, by at least one processor, according to some embodiments.
[00175] As shown in step S3005, the at least one processor (e.g., processor 2 of Fig. 1) may perform building of a pair of three-dimensional (3D) models (e.g., 3D models 520A and 520B or 520B', as shown in Figs. 3C-3D) of the object over a period of time, wherein said building the pair of 3D models includes: rendering the pair of 3D models (3D models 520A and 520B, as shown in Figs. 3C-3D), a first 3D model of the pair (e.g., model 520A) being superimposed with a first texture pattern set (e.g., texture pattern set 11A', as shown in Fig. 3D) and a second 3D model of the pair (e.g., model 520B) being superimposed with a second texture pattern set (e.g., texture pattern set 11B', as shown in Fig. 3D); obtaining a pair of two-dimensional (2D) images of the object (e.g., images 511A and 511B, as shown in Fig. 3D), each of the 2D images (e.g., images 511A and 511B, as shown in Fig. 3D) based on a respective rendered 3D model (e.g., 3D model 520A and 520B, respectively); and normalizing color representations between the first texture pattern set (e.g., texture pattern set 11A', as shown in Fig. 3D) and the second texture pattern set (e.g., texture pattern set 11B', as shown in Fig. 3D), based on the pair of 2D images (e.g., images 511A and 511B, as shown in Fig. 3D). Step S3005 may be carried out by system 10 (as described with reference to Figs. 3A and 3B), longitudinal studying module (as described with reference to Fig. 3A), rendering module 520 (as described with reference to Fig. 3C), and color normalization module 510 (as described with reference to Fig. 3C).
[00176] As shown in step S3010, the at least one processor (e.g., processor 2 of Fig. 1) may perform comparison of the 3D models (e.g., 3D models 520A and 520B’, as shown in Fig. 3C) of the pair with each other, to determine distinctions therebetween (e.g., determined distinctions 530A, as shown in Fig. 3C). Step S3010 may be carried out by 3D model comparison module 530 (as described with reference to Fig. 3C).
[00177] As shown in step S3015, the at least one processor (e.g., processor 2 of Fig. 1) may provide an indication of changes in the object, based on the determined distinctions between the 3D models of the pair (e.g., 3D models 520A and 520B’, as shown in Fig. 3C). Step S3015 may be carried out by longitudinal studying module and UI module 60 (as described with reference to Fig. 3A), or by UI module 540 (as described with reference to Fig. 3C).
[00178] As can be seen from the provided description, the claimed invention represents a method of building 3D models, methods of longitudinal study, and systems for performing these methods, which increase the accuracy and repeatability of building a plurality of 3D models of the same object while requiring no specially-designed equipment, thereby providing a technical improvement in the technological field of three-dimensional (3D) imaging, as well as the field of longitudinal study. Furthermore, the claimed invention represents methods of longitudinal study of an object (based on a plurality of three-dimensional (3D) models of the object built over a period of time), which increase the reliability of detection of changes in the object, based on the determined distinctions between the plurality of 3D models. More specifically, when applied to medical or veterinary science, the present invention provides an improvement of the relevant field of technology by increasing automated diagnosis determination reliability, or by facilitating manual diagnosis determination, wherein said diagnosis determination is made based on the comparison of a plurality of 3D models within the longitudinal study approach.
[00179] Once the required accuracy and repeatability are achieved, 3D modelling may be effectively used for longitudinal study of various objects. Furthermore, since the claimed method and system do not require any complex specially-designed equipment, and may be reliably implemented using the hardware and software basis of a regular mobile device, longitudinal studying may become a common and even prevalent approach in various research fields and become applicable to fields where it was never used before.
[00180] E.g., in some embodiments, the claimed method of longitudinal study can be applied to dermatology. E.g., patients may capture images of their faces and build 3D models thereof on a regular basis. These 3D models may be automatically analyzed and changes in skin state may be detected. Alternatively, the 3D models may be communicated to a dermatologist for further examination. Hence, the course of skin diseases may be evaluated. Such an approach provides an additional advantage to the field by reducing the number of redundant visits to the doctor. Since the suggested method of building 3D models provides reliable, repeatable results, and the suggested methods of longitudinal study ensure alignment between 3D models in essential parameters, a reduced number of false positive or false negative detections of changes in the object is expected, which will improve the overall applicability of the suggested approach.
[00181] In some other exemplary embodiments, the claimed method of longitudinal study can be applied to dentistry, in particular, to orthodontics. As known, alterations in bite patterns also cause changes in facial features. Hence, by examining 3D models of a patient's face, initial diagnosis of mal-positioning of teeth and jaws and misalignment in bite patterns or, furthermore, monitoring of the progress of tooth position adjustment and jaw alignment during treatment may be conducted. Furthermore, a patient may be asked to smile or open his mouth while capturing images, in order to create a 3D model of a face including the bite pattern. Then, such 3D models may be analyzed (either automatically, e.g., by using ML-based methods, or manually - by the specialist) to assess the effectiveness of treatment (e.g., assess the dynamics of reducing gaps between teeth, by measuring sizes of gaps in each 3D model etc.).
[00182] Hence, as can be seen, the claimed method and system of building a 3D model of an object not only improve the technological field of 3D imaging, but also improve essentially any field where the longitudinal study approach based on 3D modeling may be applied.
[00183] Unless explicitly stated, the method embodiments described herein are not constrained to a particular order or sequence. Furthermore, all formulas described herein are intended as examples only and other or different formulas may be used. Additionally, some of the described method embodiments or elements thereof may occur or be performed at the same point in time.
[00184] While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents may occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
[00185] Various embodiments have been presented. Each of these embodiments may of course include features from other embodiments presented, and embodiments not specifically described may include various features described herein.

Claims

1. A method of building a three-dimensional (3D) model of an object by at least one processor, the method comprising: receiving a set of images of the object; filtering the set of images by: applying, to one or more images of the set, a visual feature detection algorithm, to detect at least one region on the image, that corresponds to a specific visual feature of the object; determining, on the one or more images of the set, at least one parameter of the at least one region; and rejecting the one or more images of the set, based on the at least one parameter of the at least one region; and applying a 3D modeling algorithm on the filtered set of images, to build the 3D model of the object.
2. The method of claim 1, wherein rejecting the one or more images of the set based on the at least one parameter comprises rejecting the one or more images of the set provided that a value of the at least one parameter exceeds a predefined threshold or range.
3. The method according to any one of claims 1 and 2, wherein the one or more images of the set represents a plurality of images of the set, and wherein rejecting the one or more images of the set based on the at least one parameter comprises rejecting a particular image of the plurality of the images of the set, based on a difference in the at least one parameter of the at least one region between the images of the plurality of the images.
4. The method of claim 3, wherein rejecting the particular image of the plurality of the images of the set based on the difference in the at least one parameter comprises rejecting the particular image of the plurality of the images of the set provided that the difference in the at least one parameter exceeds a predefined threshold or range.
5. The method according to any one of claims 2 and 4, wherein the method further comprises: correcting the rejected image, to provide the value of the at least one parameter subceeding the predefined threshold or range; and supplementing the filtered set of the images with the corrected image.
6. The method according to any one of claims 2, 4 and 5, wherein receiving the set of the images of the object comprises capturing the images of the object with an image capturing device; and wherein the method further comprises identifying specific image capturing conditions in which the rejected image is captured; and providing instructions, to a user via a user interface, for correction of the specific image capturing conditions to provide the value of the at least one parameter subceeding the predefined threshold or range.
7. The method of claim 6, wherein the specific image capturing conditions comprise at least one of (a) a number of light sources; (b) a brightness of at least one light source; (c) a position of at least one light source with respect to the object; (d) a position of the image capturing device with respect to the object.
8. The method according to any one of claims 1-7, wherein the at least one parameter is selected from a list comprising: (a) a location of the at least one region; (b) a brightness of the at least one region; (c) a color balance of the at least one region.
9. The method according to any one of claims 1-8, wherein the at least one region represents a plurality of regions, and wherein the at least one parameter is selected from a list comprising: (a) a distance between the regions of the plurality of the regions; (b) a relative brightness between the regions of the plurality of the regions; (c) an average brightness of the regions of the plurality of the regions; (d) a relative color balance between the regions of the plurality of the regions; and (e) an average color balance between the regions of the plurality of the regions.
10. The method according to any one of claims 1-9, further comprising receiving the at least one parameter of the at least one region of the previously built 3D model of the object; wherein rejecting the one or more images of the set based on the at least one parameter of the at least one region comprises rejecting the one or more images of the set, based on a difference in the at least one parameter of the at least one region between the one or more images of the set and the previously built 3D model.
11. The method according to any one of claims 1-10, wherein the object is a human face; the at least one parameter represents a color balance of the at least one region; and the at least one region corresponds to an eye sclera area of the human face.
12. A method of longitudinal study of an object, by at least one processor, the method comprising: building, by the at least one processor, a pair of three-dimensional (3D) models of the object over a period of time, wherein at least one of the 3D models of the pair is built by the method of building 3D model of any one of claims 1-11; comparing the 3D models of the pair with each other, to determine distinctions therebetween; and providing an indication of changes in the object, based on the determined distinctions between the 3D models of the pair.
13. The method of longitudinal study of claim 12, wherein building the pair of 3D models comprises: rendering the pair of 3D models, a first 3D model of the pair being superimposed with a first texture pattern set and a second 3D model of the pair being superimposed with a second texture pattern set; obtaining a pair of two-dimensional (2D) images of the object, each of the 2D images based on a respective rendered 3D model; and normalizing color representations between the first texture pattern set and the second texture pattern set, based on the pair of 2D images.
14. The method of longitudinal study of claim 13, further comprising: changing an orientation of at least one of the 3D models, to align an orientation of the object between the 3D models; wherein obtaining the pair of the 2D images is performed in the aligned orientation.
15. The method of longitudinal study of claim 14, wherein normalizing the color representations further comprises: determining at least one region on each 2D image, said at least one region corresponding to a specific surface area of the object; calculating a transfer function mapping color representation of the at least one region of one of the 2D images to a corresponding at least one region of another of the 2D images; adjusting the color representation of at least one of the first texture pattern set and the second texture pattern set, to normalize the color representations, based on the calculated transfer function.
16. The method of longitudinal study of claim 15, wherein the color representations are defined via a specific color model, comprising one or more color channels, each providing a digital representation of a specific color characteristic; and wherein calculating the transfer function further comprises: for each 2D image, calculating a deviation of values in said one or more color channels between pixels of the respective at least one region; and calculating the transfer function, so as to fit the calculated deviation of one of the 2D images to another of the 2D images.
17. The method of longitudinal study of claim 16, wherein the deviation comprises at least one of a mean deviation and a standard deviation.
18. The method of longitudinal study according to any one of claims 16 and 17, wherein the transfer function comprises a set of linear functions, each for a respective color channel of the one or more color channels.
19. The method of longitudinal study of claim 18, wherein normalizing the color representations further comprises: applying said set of linear functions to pixels of the first texture pattern set to normalize the color representation thereof with respect to the color representation of the second texture pattern set.
20. The method of longitudinal study according to any one of claims 16-19, wherein the specific color model is a LAB color-opponent model, and wherein said one or more channels comprise: lightness channel (L), redness-greenness channel (A) and blueness-yellowness channel (B).
21. The method of longitudinal study according to any one of claims 15-20, wherein the object is a human face; wherein the aligned orientation is a frontal orientation; wherein determining the at least one region on each 2D image comprises applying, to each 2D image, a face image landmark detection algorithm, to detect a plurality of face image landmarks defining the at least one region on a respective 2D image, the at least one region corresponding to a specific area of the human face.
22. A method of longitudinal study of an object, by at least one processor, the method comprising: building a pair of three-dimensional (3D) models of the object over a period of time; comparing the 3D models of the pair with each other, to determine distinctions therebetween; and providing an indication of changes in the object, based on the determined distinctions between the 3D models of the pair, wherein said building the pair of 3D models comprises: rendering the pair of 3D models, a first 3D model of the pair being superimposed with a first texture pattern set and a second 3D model of the pair being superimposed with a second texture pattern set; obtaining a pair of two-dimensional (2D) images of the object, each of the 2D images based on a respective rendered 3D model; and normalizing color representations between the first texture pattern set and the second texture pattern set, based on the pair of 2D images.
23. The method of claim 22, further comprising: changing an orientation of at least one of the 3D models, to align an orientation of the object between the 3D models; wherein obtaining the pair of the 2D images is performed in the aligned orientation.
24. The method according to any one of claims 22-23, wherein normalizing the color representations further comprises: determining at least one region on each 2D image, said at least one region corresponding to a specific surface area of the object; calculating a transfer function mapping color representation of the at least one region of one of the 2D images to a corresponding at least one region of another of the 2D images; adjusting the color representation of at least one of the first texture pattern set and the second texture pattern set, to normalize the color representations, based on the calculated transfer function.
25. The method of claim 24, wherein the color representations are defined via a specific color model, comprising one or more color channels, each providing a digital representation of a specific color characteristic; and wherein calculating the transfer function further comprises: for each 2D image, calculating a deviation of values in said one or more color channels between pixels of the respective at least one region; and calculating the transfer function, so as to fit the calculated deviation of one of the 2D images to another of the 2D images.
26. The method of claim 25, wherein the deviation comprises at least one of a mean deviation and a standard deviation.
27. The method according to any one of claims 25 and 26, wherein the transfer function comprises a set of linear functions, each for a respective color channel of the one or more color channels.
28. The method of claim 27, wherein normalizing the color representations further comprises: applying said set of linear functions to pixels of the first texture pattern set to normalize the color representation thereof with respect to the color representation of the second texture pattern set.
29. The method according to any one of claims 25-28, wherein the specific color model is a LAB color-opponent model, and wherein said one or more channels comprise: lightness channel (L), redness-greenness channel (A) and blueness-yellowness channel (B).
30. The method according to any one of claims 24-29, wherein the object is a human face; wherein the aligned orientation is a frontal orientation; wherein determining the at least one region on each 2D image comprises applying, to each 2D image, a face image landmark detection algorithm, to detect a plurality of face image landmarks defining the at least one region on a respective 2D image, the at least one region corresponding to a specific area of the human face.
31. The method according to any one of claims 22-30, wherein building of at least one of the 3D models of the pair further comprises: receiving a set of images of the object; filtering the set of images by: applying, to one or more images of the set, a visual feature detection algorithm, to detect at least one region on the image, that corresponds to a specific visual feature of the object; determining, on the one or more images of the set, at least one parameter of the at least one region; and rejecting the one or more images of the set, based on the at least one parameter of the at least one region; and applying a 3D modeling algorithm on the filtered set of images, to build the 3D model of the object.
32. The method of claim 31, wherein rejecting the one or more images of the set based on the at least one parameter comprises rejecting the one or more images of the set provided that a value of the at least one parameter exceeds a predefined threshold or range.
33. The method according to any one of claims 31 and 32, wherein the one or more images of the set represents a plurality of images of the set, and wherein rejecting the one or more images of the set based on the at least one parameter comprises rejecting a particular image of the plurality of the images of the set, based on a difference in the at least one parameter of the at least one region between the images of the plurality of the images.
34. The method of claim 33, wherein rejecting the particular image of the plurality of the images of the set based on the difference in the at least one parameter comprises rejecting the particular image of the plurality of the images of the set provided that the difference in the at least one parameter exceeds a predefined threshold or range.
35. The method according to any one of claims 32 and 34, wherein the method further comprises: correcting the rejected image, to provide the value of the at least one parameter subceeding the predefined threshold or range; and supplementing the filtered set of the images with the corrected image.
36. The method according to any one of claims 32, 34 and 35, wherein receiving the set of the images of the object comprises capturing the images of the object with an image capturing device; and wherein the method further comprises identifying specific image capturing conditions in which the rejected image is captured; and providing instructions, to a user via a user interface, for correction of the specific image capturing conditions to provide the value of the at least one parameter subceeding the predefined threshold or range.
37. The method of claim 36, wherein the specific image capturing conditions comprise at least one of (a) a number of light sources; (b) a brightness of at least one light source; (c) a position of at least one light source with respect to the object; (d) a position of the image capturing device with respect to the object.
38. The method according to any one of claims 31-37, wherein the at least one parameter is selected from a list comprising: (a) a location of the at least one region; (b) a brightness of the at least one region; (c) a color balance of the at least one region.
39. The method according to any one of claims 31-38, wherein the at least one region represents a plurality of regions, and wherein the at least one parameter is selected from a list comprising: (a) a distance between the regions of the plurality of the regions; (b) a relative brightness between the regions of the plurality of the regions; (c) an average brightness of the regions of the plurality of the regions; (d) a relative color balance between the regions of the plurality of the regions; and (e) an average color balance between the regions of the plurality of the regions.
40. The method according to any one of claims 31-39, further comprising receiving the at least one parameter of the at least one region of a first 3D model of the pair of 3D models; wherein, for a second 3D model of the pair of 3D models, rejecting the one or more images of the set comprises rejecting the one or more images of the set, based on a difference in the at least one parameter of the at least one region between the one or more images of the set and the first 3D model.
41. The method according to any one of claims 31-40, wherein the object is a human face; the at least one parameter represents a color balance of the at least one region; and the at least one region corresponds to an eye sclera area of the human face.
42. A system for building a three-dimensional (3D) model of an object, the system comprising: a non-transitory memory device, wherein modules of instruction code are stored, and at least one processor associated with the memory device, and configured to execute the modules of instruction code, whereupon execution of said modules of instruction code, the at least one processor is configured to: receive a set of images of the object; filter the set of images by: applying, to one or more images of the set, a visual feature detection algorithm, to detect at least one region on the image, that corresponds to a specific visual feature of the object; determining, on the one or more images of the set, at least one parameter of the at least one region; and rejecting the one or more images of the set, based on the at least one parameter of the at least one region; and apply a 3D modeling algorithm on the filtered set of images, to build the 3D model of the object.
43. The system of claim 42, wherein the at least one processor is further configured to reject the one or more images of the set, provided that a value of the at least one parameter exceeds a predefined threshold or range.
44. The system according to any one of claims 42 and 43, wherein the one or more images of the set represents a plurality of images of the set, and wherein the at least one processor is further configured to reject the one or more images of the set by rejecting a particular image of the plurality of the images of the set, based on a difference in the at least one parameter of the at least one region between the images of the plurality of the images.
45. The system of claim 44, wherein the at least one processor is further configured to reject the particular image of the plurality of the images of the set, provided that the difference in the at least one parameter exceeds a predefined threshold or range.
46. The system according to any one of claims 43 and 45, wherein the at least one processor is further configured to: correct the rejected image, to provide the value of the at least one parameter subceeding the predefined threshold or range; and supplement the filtered set of the images with the corrected image.
47. The system according to any one of claims 43, 45 and 46, further comprising an image capturing device in operative connection with the at least one processor, and wherein the at least one processor is further configured to: receive the set of the images of the object by capturing the images of the object with the image capturing device; identify specific image capturing conditions in which the rejected image is captured; and provide instructions, to a user via a user interface, for correction of the specific image capturing conditions to provide the value of the at least one parameter subceeding the predefined threshold or range.
48. The system of claim 47, wherein the specific image capturing conditions comprise at least one of (a) a number of light sources; (b) a brightness of at least one light source; (c) a position of at least one light source with respect to the object; (d) a position of the image capturing device with respect to the object.
49. The system according to any one of claims 42-48, wherein the at least one parameter is selected from a list comprising: (a) a location of the at least one region; (b) a brightness of the at least one region; (c) a color balance of the at least one region.
50. The system according to any one of claims 42-49, wherein the at least one region represents a plurality of regions, and wherein the at least one parameter is selected from a list comprising: (a) a distance between the regions of the plurality of the regions; (b) a relative brightness between the regions of the plurality of the regions; (c) an average brightness of the regions of the plurality of the regions; (d) a relative color balance between the regions of the plurality of the regions; and (e) an average color balance between the regions of the plurality of the regions.
51. The system according to any one of claims 42-50, wherein the at least one processor is further configured to: receive the at least one parameter of the at least one region of the previously built 3D model of the object; reject the one or more images of the set further based on a difference in the at least one parameter of the at least one region between the one or more images of the set and the previously built 3D model.
52. The system according to any one of claims 42-51, wherein the object is a human face; the at least one parameter represents a color balance of the at least one region; and the at least one region corresponds to an eye sclera area of the human face.
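By way of illustration only, the colour normalization recited in claims 15-20 and 24-29 (fitting, for each colour channel, a linear transfer function that maps the mean and standard deviation of a region of one rendered 2D image onto the corresponding region of the other, and then applying that function to a texture pattern set) might be sketched as follows. The sketch assumes the data are already expressed in a LAB-like colour space and that the corresponding regions have already been extracted, e.g., via landmark detection; those steps and all names below are assumptions made for the example, not the claimed implementation.

```python
# Minimal sketch of per-channel linear colour transfer: fit gain and offset so
# that the first region's channel statistics (mean, standard deviation) match
# those of the corresponding region in the second image, then apply the same
# per-channel functions to an entire texture pattern set.
import numpy as np

def fit_linear_transfer(src_region: np.ndarray, dst_region: np.ndarray):
    """Return a list of (gain, offset) pairs, one per colour channel."""
    params = []
    for c in range(src_region.shape[-1]):
        s, d = src_region[..., c], dst_region[..., c]
        gain = d.std() / (s.std() + 1e-6)
        offset = d.mean() - gain * s.mean()
        params.append((gain, offset))
    return params

def apply_transfer(texture: np.ndarray, params) -> np.ndarray:
    """Apply the fitted per-channel linear functions to a whole texture."""
    out = np.empty_like(texture, dtype=np.float64)
    for c, (gain, offset) in enumerate(params):
        out[..., c] = gain * texture[..., c] + offset
    return out

# Usage: regions are the corresponding (e.g., landmark-bounded) patches of the
# two rendered 2D images, already converted to LAB-like channels.
region_a = np.random.rand(40, 40, 3) * 100.0
region_b = region_a * 0.9 + 5.0                    # simulated colour drift
params = fit_linear_transfer(region_a, region_b)
normalized = apply_transfer(region_a, params)      # now matches region_b statistics
```

Matching per-channel mean and standard deviation in this way is a common colour-transfer heuristic; it is offered here only as one possible reading of the claimed linear transfer functions.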
PCT/IL2024/050108 2023-01-26 2024-01-25 System and method of building a three-dimensional model of an object and methods of longitudinal study of the object Ceased WO2024157262A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363441229P 2023-01-26 2023-01-26
US63/441,229 2023-01-26

Publications (1)

Publication Number Publication Date
WO2024157262A1 true WO2024157262A1 (en) 2024-08-02

Family

ID=91970145

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2024/050108 Ceased WO2024157262A1 (en) 2023-01-26 2024-01-25 System and method of building a three-dimensional model of an object and methods of longitudinal study of the object

Country Status (1)

Country Link
WO (1) WO2024157262A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060245639A1 (en) * 2005-04-29 2006-11-02 Microsoft Corporation Method and system for constructing a 3D representation of a face from a 2D representation
US20150138201A1 (en) * 2013-11-20 2015-05-21 Fovia, Inc. Volume rendering color mapping on polygonal objects for 3-d printing
US20190122424A1 (en) * 2017-10-23 2019-04-25 Fit3D, Inc. Generation of Body Models and Measurements
US20220254045A1 (en) * 2021-02-09 2022-08-11 Everypoint, Inc. Determining Object Structure Using Physically Mounted Devices With Only Partial View Of Object
WO2022194910A1 (en) * 2021-03-18 2022-09-22 Institut Pasteur Method for visualizing at least a zone of an object in at least one interface

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24747057

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE