WO2018154272A1 - Systems and methods for obtaining information concerning the face and eyes of a subject - Google Patents
- Publication number
- WO2018154272A1 (PCT application PCT/GB2018/050269)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- eye
- model
- subject
- skin
- item
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/0016—Operational features thereof
- A61B3/0025—Operational features thereof characterised by electronic signal processing, e.g. eye models
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/113—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
-
- G—PHYSICS
- G02—OPTICS
- G02C—SPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
- G02C13/00—Assembling; Repairing; Cleaning
-
- G—PHYSICS
- G02—OPTICS
- G02C—SPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
- G02C13/00—Assembling; Repairing; Cleaning
- G02C13/003—Measuring during assembly or fitting of spectacles
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30088—Skin; Dermal
Definitions
- the present invention relates to systems and methods for obtaining a numerical eye model of each of the eyes of a subject.
- the eye models of a subject's eyes may be used for the design and production of an eyewear item, to be used in proximity with the subject's face.
- the eye models may also be used as part of an augmented reality (AR) or virtual reality (VR) system.
- a conventional process for providing a subject with eyewear such as glasses involves the subject trying on a series of dummy frames, and examining his or her reflection in a mirror. Once a frame has been selected, an optician conventionally makes a number of manual measurements.
- the measurement process is subject to various errors.
- the modification of the frames is carried out when the subject is not present, so that the resulting glasses may be unsuitable, for example because the lower edge of the fitted lenses impacts on the subject's cheek.
- the modification of the frame varies the distance of the lens from the eye of the subject, which may be highly disadvantageous for glasses which perform visual correction. It has been estimated that a 2mm variation of the spacing of the eye and the lens can result in a 10% difference in the resulting field of vision. Additional problems are that the modification of the frame changes the position of the optical centre of the lens in the up/down and left/right directions (relative to the face of the subject) which may also have undesirable optical effects.
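The sensitivity of the field of vision to the eye-lens spacing can be illustrated with simple trigonometry, treating the lens as a circular aperture seen from the centre of eye rotation. The numbers below are purely illustrative and are not taken from the patent:

```python
import math

def half_fov_deg(lens_radius_mm, cer_to_lens_mm):
    """Half-angle of the visual field subtended by a circular lens of the
    given radius, as seen from the centre of eye rotation (CER)."""
    return math.degrees(math.atan(lens_radius_mm / cer_to_lens_mm))

# Illustrative dimensions: a 20 mm lens half-aperture, with the
# CER-to-lens distance increased by 2 mm.
near = 2 * half_fov_deg(20.0, 25.0)
far = 2 * half_fov_deg(20.0, 27.0)
print(f"FOV at 25 mm: {near:.1f} deg, at 27 mm: {far:.1f} deg "
      f"({100 * (near - far) / near:.1f}% smaller)")
```

With these assumed dimensions a 2 mm increase shrinks the angular field by roughly 5%, which is consistent in direction and magnitude with the estimate quoted above.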
- the item of headwear includes a position sensor (for example, in the case of US 2009/0040460), or marker elements positioned on spectacles which can be identified in successive photographs of the subject's head (for example, in the case of US 8,220,922 and US 8,360,580).
- US 2009/0040460 aims to obtain the position of the centre of eye rotation (CER) in a reference frame which moves with the subject's head, so that ophthalmic lenses can be formed which are made to measure and provide better performance.
- the present invention aims to provide new and useful methods and systems for obtaining an eye model of the face of a subject.
- the item of eyewear typically includes refractive lenses for vision correction.
- the invention proposes that at each of a series of successive time periods in which a subject is looking in different respective directions, an imaging system captures a plurality of images of the subject's face. Using the images for each time period, the system forms a respective three-dimensional skin model of a skin portion of the subject's face. The system also obtains eye data indicative of the position of at least a portion of at least one of the subject's eyes. For example, the eye data may be data characterizing specular reflections in the images from at least one eye of the subject. The system uses the skin models to convert the eye data for the multiple time periods into a common frame of reference, and then uses the eye data to obtain a numerical eye model indicative at least of the position of the centre of the eye's rotation (CER). Typically, such an eye model is obtained for each of the subject's eyes.
- the system can form an eye model without using any elements of the system placed in a fixed position relative to the face of the subject. Indeed the subject's face may be imaged at a time when the subject is not wearing any objects.
- Forming the eye model using the eye data from time periods in which the subject is looking in different respective directions provides more accuracy in measuring the centre of rotation than only using eye data from a time when the subject is looking in a single direction.
- the subject looks in different respective directions.
- These respective directions may be directions relative to the face of the subject and/or relative to a static reference frame (e.g. a building in which the subject is located).
- the head of the subject may be substantially stationary, but the subject may move his or her eyes, i.e. in the different time periods the subject looks in different respective directions relative to a static frame of reference (e.g. a building in which the subject is located) and relative to the face of the subject.
- the head of the subject may be permitted to move too.
- the subject may turn his or her head and gaze direction substantially in register with each other, so that the direction in which the subject looks does not change relative to the face of the subject, though it does change relative to the static reference frame.
- the subject may continue looking in the same direction relative to the static frame of reference while turning his or her head.
- the subject looks in different respective directions relative to the face of the subject but not relative to the static reference frame.
- the imaging system may employ any imaging technique(s) for forming the skin model(s), such as stereoscopy (discussed below), laser triangulation, time-of-flight measurement, phase structured fringe pattern imaging and/or photometry (the science of measuring the brightness of light; also discussed below).
- the imaging system comprises at least one directional energy source (e.g. a light source such as a visible light source) for illuminating the face (preferably successively) in at least three directions.
- the model of the skin may be formed using photometry, assuming that the skin exhibits Lambertian reflection in which the reflected radiation is isotropic with an intensity according to Lambert's cosine law (an intensity directly proportional to the cosine of the angle between the direction of the incident light and the surface normal).
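Under the Lambertian assumption, the intensity at a pixel is proportional to the cosine of the angle between the incident light direction and the surface normal, so three or more known illumination directions suffice to recover the normal (and albedo) by solving a linear system. The sketch below is a minimal per-pixel photometric-stereo example with synthetic numbers, not the patent's implementation:

```python
import numpy as np

def recover_normal(light_dirs, intensities):
    """Recover a surface normal and albedo at one pixel from Lambertian
    intensities under >= 3 known light directions, by solving
    I = albedo * (L @ n) in the least-squares sense."""
    L = np.asarray(light_dirs, dtype=float)    # (k, 3) unit light directions
    I = np.asarray(intensities, dtype=float)   # (k,) observed intensities
    g, *_ = np.linalg.lstsq(L, I, rcond=None)  # g = albedo * n
    albedo = np.linalg.norm(g)
    return g / albedo, albedo

# Synthetic check: a known normal lit from three non-coplanar directions.
n_true = np.array([0.0, 0.0, 1.0])
L = np.array([[0.5, 0.0, 0.866], [-0.5, 0.0, 0.866], [0.0, 0.5, 0.866]])
I = 0.8 * (L @ n_true)                         # albedo 0.8, Lambert's cosine law
n_est, rho = recover_normal(L, I)
print(n_est, rho)
```

The recovered normal and albedo match the synthetic ground truth; in practice one such solve is performed per pixel to build the surface normal map.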
- WO 2009/122200 proposes a system in which a three-dimensional model of a 3-D object is produced in which large-scale features are imaged by stereoscopic imaging (that is, by comparing the respective positions of landmarks in multiple images of a subject captured from respective directions), and fine detail is produced by photometric imaging.
- a system of that type may also be employed in the present invention.
- the eye data may comprise, or be derived from, data describing specular reflections ("glints") in at least some of the images. Since many portions of the eye, such as the lens, are transparent, Lambertian reflection is not present, so photometry cannot be used for this purpose. However, detection of specular reflections allows the surface of the eye to be detected accurately. When this is used in combination with photometric modelling of the skin accurate numerical information may be obtained about the CER of the eye in a reference frame defined with reference to the skin of the subject.
- the eye data collected over multiple time periods includes at least one of (i) data indicating reflections from portions of the surface of the eye other than the cornea, and/or (ii) data indicating reflections from portions of the cornea of the eye when the cornea is in multiple locations relative to the skin (i.e. the direction in which the subject looks varies other than by movements of the subject's head).
- the eye data may comprise eye data obtained from any other gaze-tracking technique, such as techniques including iris tracking.
- the eye data is obtained from images captured by image-capture devices which are also used to capture images used to obtain the three-dimensional skin models.
- the eye data and the three-dimensional skin models are obtained in the same reference frame. This eliminates a source of errors in a system in which the eye data and the skin models are obtained from different respective cameras which have a (possibly unknown) offset in position and/or orientation.
- the system may comprise a display device for viewing by the subject and arranged to display a time-varying image in different ones of the time periods.
- the time-varying image is typically designed to include an attention-attracting element which is in different locations in the display in different ones of the time periods, so that the direction in which the subject looks varies between the time periods. It is not necessary that the element has the same appearance in different ones of the time periods.
- the time-varying image may, in one example, be produced by activating in successive time periods light-emitting elements ("lights") in different respective locations in the display area of the display device, so that the lights function as the attention-attracting elements.
- the eye model of each eye may include a sclera portion representing the sclera, and a cornea portion representing the cornea.
- the sclera portion may be a portion of the surface of a first sphere centred on the centre of rotation of the eye, and the cornea portion may be a portion of the surface of a second sphere having a smaller radius of curvature than the first sphere.
- the centres of the two spheres are spaced apart, and the line joining them intersects with the centre of the cornea portion of the model, at a position which may be taken as the centre of the pupil.
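The two-sphere geometry just described can be sketched directly: the pupil centre is taken as the point where the line from the sclera-sphere centre (the CER) through the cornea-sphere centre meets the cornea surface. The dimensions below are illustrative, roughly at human-eye scale, and are not taken from the patent:

```python
import numpy as np

def pupil_centre(sclera_centre, cornea_centre, cornea_radius):
    """Apex of the cornea cap: where the line from the sclera-sphere
    centre through the cornea-sphere centre meets the cornea-sphere
    surface, taken here as the pupil centre."""
    axis = np.asarray(cornea_centre, float) - np.asarray(sclera_centre, float)
    axis = axis / np.linalg.norm(axis)          # optical-axis direction
    return np.asarray(cornea_centre, float) + cornea_radius * axis

# Illustrative dimensions (mm): CER at the origin, cornea-sphere centre
# 5.7 mm in front of it, cornea radius 7.8 mm.
cer = np.array([0.0, 0.0, 0.0])
cornea_c = np.array([0.0, 0.0, 5.7])
print(pupil_centre(cer, cornea_c, 7.8))  # → [0. 0. 13.5]
```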
- the generation of the eye data and the eye model are performed in different ways.
- the data describing the specular reflections may be subject to little or no processing to form the eye data which is converted to the common reference frame.
- the data describing the specular reflections in each time period may be significantly pre-processed prior to converting it into the common reference frame.
- the eye data may be in the form of a respective provisional model of the eye for each of the time periods, e.g. defined by at least part of a sphere representing the sclera, and/or at least part of a second sphere representing the cornea.
- the provisional eye models for each of the respective time periods are converted to a common reference frame using the skin model, and combined by any of several possible processes to form the (final) eye model, e.g. choosing the CER of the final eye model as the average of the centre positions of the respective spheres representing the sclera in the provisional eye models in the common reference frame.
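The combination step described above can be sketched as follows. The rigid transforms (R, t) mapping each time period into the common reference frame are assumed to be already known from the skin models; the names and numbers are illustrative only:

```python
import numpy as np

def combine_cer(provisional_centres, transforms):
    """Map each provisional sclera-sphere centre into the common
    reference frame with its rigid transform (R, t), then average
    to obtain the final CER estimate."""
    pts = [R @ c + t for c, (R, t) in zip(provisional_centres, transforms)]
    return np.mean(pts, axis=0)

# Two time periods: an identity transform, and a small head translation
# that the skin-model registration has recovered.
I3 = np.eye(3)
centres = [np.array([1.0, 2.0, 30.0]), np.array([1.2, 2.0, 29.8])]
transforms = [(I3, np.zeros(3)), (I3, np.array([-0.2, 0.0, 0.2]))]
print(combine_cer(centres, transforms))  # → [ 1.  2. 30.]
```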
- the combination of the eye data for the different time periods to form the eye model may include using the eye data to estimate a respective gaze direction for each of the time periods.
- the converted eye data may be data indicating the respective position and orientation of the cornea of the eye in each of the time periods in the common reference frame.
- the CER may be obtained from these positions and orientations, e.g. as the location which would allow (or be most likely to allow) the cornea of a rotating eye to reach these positions and orientations.
- this process may employ an additional mechanism for estimating the gaze direction, such as one based on the position of the respective portion of the display device which is designed to attract the attention of the subject in each of the respective time periods.
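One way to realise the CER estimate described above is a least-squares intersection of the per-period optical axes: the CER is taken as the point minimizing the summed squared distance to the gaze lines. This is a sketch of one possible method, with illustrative numbers, not necessarily the patent's exact procedure:

```python
import numpy as np

def nearest_point_to_lines(points, dirs):
    """Least-squares point closest to a set of 3D lines, each given by a
    point on the line and a direction. Solves sum_i (I - d_i d_i^T) x
    = sum_i (I - d_i d_i^T) p_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, dirs):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projects off the line direction
        A += M
        b += M @ p
    return np.linalg.solve(A, b)

# Three cornea positions/orientations whose optical axes all pass
# through the true CER at (0, 0, -13) (illustrative numbers, mm).
cer_true = np.array([0.0, 0.0, -13.0])
dirs = [np.array([0.0, 0.0, 1.0]), np.array([0.3, 0.0, 1.0]),
        np.array([0.0, 0.3, 1.0])]
apexes = [cer_true + 13.0 * d / np.linalg.norm(d) for d in dirs]
print(nearest_point_to_lines(apexes, dirs))  # ≈ [0, 0, -13]
```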
- the common frame of reference may be defined based on the position of the head of the subject in one of the time periods.
- in that case, the step of converting the eye data into the common frame of reference only involves modifying the eye data for the other time periods.
- the conversion of the eye data into a common frame of reference may be performed by forming positional (geometrical) mappings between corresponding points of the skin models, and then exploiting a pre-known geometrical relationship between the skin models and the respective eye data.
- the pre-known relationship may be based on a pre-known geometrical relationship between the set of camera(s) which captured the set of images used to produce the skin models, and the set of camera(s) which captured the set of images used to produce the eye data. Note that these two sets of cameras, and two sets of images may overlap.
- the positional mappings may be defined based on landmarks of the skin models corresponding to landmarks of the subject's face, such as the tip of the subject's nose and/or a plane of symmetry. These landmarks are recognized in each of the skin models, and used to define positional transforms which map the skin models into the common frame of reference. As noted above, this may be a frame of reference in which one of the skin models was generated, so that for that skin model the positional transform is trivial (i.e. it does not move or re-orient the skin model).
- if the subject has an unusual feature (e.g. a broken nose) which makes a certain landmark unsuitable, other landmarks will tend to be used.
- different landmarks may be used for converting different ones of the skin models into the common reference frame.
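One standard way to compute a positional transform from matched landmarks, as described above, is the Kabsch (orthogonal Procrustes) algorithm. The sketch below is one possible realisation, not necessarily the patent's exact method:

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch algorithm: best rigid transform (R, t) mapping landmark
    set `src` onto `dst` (both (n, 3) arrays of matched points)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

# Landmarks (e.g. nose tip, eye corners) before and after a pure
# head translation between two time periods (illustrative numbers).
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
dst = src + np.array([5.0, -2.0, 1.0])
R, t = rigid_align(src, dst)
print(np.round(R, 6), t)  # R ≈ identity, t ≈ [5, -2, 1]
```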
- the skin model includes two portions corresponding to portions of the skin of the subject's face proximate each of the subject's eyes. Alternatively or additionally, it may include at least one portion corresponding respectively to at least one of the subject's ears.
- the eye model may be employed as part of a composite model including the eye model for each of the subject's eyes and a skin model portion which indicates the contours of the subject's skin.
- the skin model portion of the composite model may be all or part of one of the skin models obtained for the respective time periods, or a skin model obtained by combining multiple ones of those skin models.
- the skin model portion may not include portions of the face of the subject which are distal from the portions of the face which are proximate to, or come into contact with the item of eyewear.
- the eye model, or composite model may be employed in an automatic process for designing a personalized item of eyewear for use in proximity with the subject's face (the term "proximity” is used here to include also the possibility that the object is in contact with the face).
- the term "design" is used here to include a process of selecting from a plurality of pre-defined designs for eyewear items, and/or modifying one or more parameters (typically distance parameters) of a pre-defined design of eyewear items.
- the eyewear typically includes at least one lens for each eye, and a frame for supporting the lens(es) in relation to the subject's face.
- the item of eyewear may be a set of glasses, of a type having any one or more of the following functions: vision correction, eye protection (including goggles or sunglasses) and/or cosmetic purposes.
- At least one lens may be a refractive lens for vision correction.
- the shape of the lens may be selected based on the centre of rotation of the corresponding eye, optionally including selecting the refractive power of one or more portions of the lens. This has a significant effect on the field of vision of the subject.
- the refractive power of different portions of the lens may be adjusted to compensate for the different distances of those portions of the lens from the CER.
- the overall shape of the lens may be varied to reduce the distance of different portions of the lens from the CER.
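The optical significance of the lens-to-eye distance can be illustrated with the classic vertex-distance compensation formula from ophthalmic optics. This is a standard result, not taken from the patent text, and the sign convention assumed here is that `shift_metres` is positive when the lens is moved towards the eye:

```python
def compensated_power(power_dioptres, shift_metres):
    """Vertex-distance compensation: the power a lens needs, if it is
    moved shift_metres closer to the eye, to keep the same effective
    correction. F_new = F / (1 - x * F)."""
    return power_dioptres / (1 - shift_metres * power_dioptres)

# A -8 D lens moved 2 mm closer to the eye needs slightly less power:
print(round(compensated_power(-8.0, 0.002), 3))  # → -7.874
```

Even a 2 mm shift changes the required power of a strong prescription by more than a tenth of a dioptre, which is why the design process benefits from knowing the CER position accurately.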
- This design process may assume that the glasses are positioned on the face of the subject in contact with one or more portions of the face model portion of the composite model.
- the face model may be used as part of the procedure to work out how far the lenses are from the CERs and/or from the eyes.
- the design process may employ a template representing an item of eyewear.
- the design of the eyewear may also include varying dimensions of the frame based on the composite model. If the template representing the item of eyewear is defined by one or more parameters, the design process may include varying those parameters according to the composite model. For example, the modification of the frame may be to select the distance between certain portions of the lenses (e.g. the optical centre of each lens) in accordance with the spacing of the CERs of the subject's eyes according to the two eye models, and/or to place those portions of the lenses at a desired distance from the CERs of the respective eyes.
- the selection of the respective refractive power of different portion(s) of the lens(es) and the selection of the dimensions of the frame may be conducted together, to produce a design of the item of eyewear which may be optimal in terms of both vision correction and comfort.
- At least one component of the item of eyewear (e.g. the arms of the glasses, or the nose pads) may be designed based on the composite model.
- a first organization may produce an eye model or composite model, which is transmitted to a second organization to produce an item of eyewear consistent with the eye model or composite model.
- the subject is preferably illuminated successively in individual ones of at least three directions.
- the energy sources may emit light of the same frequency spectrum (e.g. if the energy is visible light, the directional light sources may each emit white light and the captured images may be color images).
- the subject could alternatively be illuminated in at least three directions by energy sources which emit energy with different respective frequency spectra (e.g. in the case of visible light, the directional light sources may respectively emit red, green and blue light).
- the directional energy sources could be activated simultaneously, if the energy sensors are able to distinguish the energy spectra.
- the energy sensors might be adapted to record received red, green and blue light separately. That is, the red, green and blue light channels of the captured images would be captured simultaneously, and would respectively constitute the images in which the object is illuminated in a single direction.
- this second possibility is not preferred, because coloration of the object may lead to incorrect photometric imaging.
- various forms of directional energy source may be used in embodiments of the invention.
- examples include a standard photographic flash, a high-brightness LED cluster, a xenon flash bulb or a 'ring flash'. It will be appreciated that the energy need not be in the visible light spectrum.
- At least three energy sources are provided. It would be possible for these sources to be provided as at least three energy outlets from an illumination system in which there are fewer than three elements which generate the energy.
- the energy would be output at the other ends of the energy transmission channels, which would be at three respective spatially separate locations.
- the output ends of the energy transmission channels would constitute respective energy sources.
- the light would propagate from the energy sources in different respective directions.
- the energy sensors may be two or more standard digital cameras, or video cameras, or CMOS sensors and lenses appropriately mounted. In the case of other types of directional energy, sensors appropriate for the directional energy used are adopted. A discrete sensor may be placed at each viewpoint, or in another alternative a single sensor may be located behind a split lens or in combination with a mirror arrangement.
- the energy sources and viewpoints preferably have a known positional relationship, which is typically fixed.
- the energy sensor(s) and energy sources may be incorporated in a portable, hand-held instrument.
- the energy sensor(s) and energy sources may be incorporated in an apparatus which is mounted in a building, e.g. at the premises of an optician or retailer of eyewear.
- the apparatus may be adapted to be worn by a subject, e.g. as part of a helmet.
- the energy sources may be operated to produce a substantially constant total intensity over a certain time period (e.g. by firing them in close succession), which has the advantage that the subject is less likely to blink.
- the energy sources may be controlled to be turned on by a processor (a term which is used here in a very general sense to include, for example, a field-programmable gate array (FPGA) or other circuitry) which also controls the timing of the image capture devices.
- the processor could control a different subset of the energy sources to produce light in respective successive time periods, and each of the image capture devices to capture a respective image during these periods. This has the advantage that the processor would be able to determine easily which of the energy sources was the cause of each specular reflection.
- Specular reflections may preserve polarization in the incident light, while Lambertian reflections remove it.
- some or all of the light sources may be provided with a filter to generate light with a predefined linear polarization direction
- some or all of the image capture devices may be provided with a filter to remove incident light which is polarized in the same direction (thus emphasizing Lambertian reflections) or the transverse direction (thus emphasizing specular reflections).
- one option, where the energy sources include one or more energy sources of relatively high intensity and one or more energy sources of relatively lower intensity, is to provide polarization for the one or more energy sources of high intensity, and no polarization for the one or more energy sources of relatively lower intensity.
- the specular reflections may be captured using only the high intensity energy sources, in which case only those energy sources would be provided with a polarizer producing a polarization which is parallel to a polarization of the energy sensors used to observe the specular reflections.
- One or more of the energy sources may be configured to generate light in the infrared (IR) spectrum (wavelengths from 700 nm to 1 mm) or part of the near infrared spectrum (wavelengths from 700 nm to 1100 nm).
- since the subject is substantially not sensitive to IR or near-IR radiation, it can be used in situations in which it is not desirable for the subject to react to the imaging process. For example, IR or near-IR radiation would not cause the subject to blink. Also, IR and near-IR radiation may be used in applications as discussed below in which the subject is presented with other images during the imaging process.
- Fig. 1 is a schematic view of an imaging assembly for use in an embodiment of the present invention
- Fig. 2 is a flow diagram of a method performed by an embodiment of the invention
- Fig. 3 shows an eye model for use in the embodiment
- Fig. 4 illustrates schematically how specular reflections from the eye are used by the embodiment to find the parameters of a provisional eye model of the form shown in Fig. 3;
- Fig. 5 illustrates schematically how specular reflections from the eye are used by a variation of the embodiment to find the parameters of a provisional eye model of the form shown in Fig. 3;
- Fig. 6 illustrates the use of an eye model in designing a lens of an item of eyewear
- Fig. 7 illustrates an embodiment of the invention incorporating the imaging assembly of Fig. 1 and a processor.
- Fig. 1 shows an imaging assembly which is a portion of an embodiment of the invention.
- the embodiment includes energy sources 1, 2, 3. It further includes energy sensors 4, 5 in the form of image capturing devices (cameras).
- the energy sensors 4, 5 and energy sources 1, 2, 3 are fixedly mounted to each other by struts 6.
- the exact form of the mechanical connection between the energy sources 1, 2, 3 and the energy sensors 4, 5 is different in other forms of the invention, but it is preferable if it maintains the energy sources 1, 2, 3 and the energy sensors 4, 5 not only at fixed distances from each other but at fixed relative orientations.
- the positional relationship between the energy sources 1, 2, 3 and the energy sensors 4, 5 is pre-known.
- the energy sources 1, 2, 3 and image capturing devices 4, 5 may be incorporated in a portable, hand-held instrument.
- the embodiment includes a processor which is in electronic communication with the energy sources 1, 2, 3 and image capturing devices 4, 5. This is described below in detail with reference to Fig. 7.
- the energy sources 1, 2, 3 are each adapted to generate electromagnetic radiation, such as visible light or infra-red radiation.
- the energy sources 1, 2, 3 are all controlled by the processor.
- the output of the image capturing devices 4, 5 is transmitted to the processor.
- Each of the image capturing devices 4, 5 is arranged to capture an image of the face of a subject 7 positioned in both the respective fields of view of the image capturing devices 4, 5.
- the image capturing devices 4, 5 are spatially separated, and preferably also arranged with converging fields of view, so the apparatus is capable of providing two separated viewpoints of the subject 7, so that stereoscopic imaging of the subject 7 is possible.
- the case of two viewpoints is often referred to as a "stereo pair" of images, although it will be appreciated that in variations of the embodiment more than two spatially-separated image capturing devices may be provided, so that the subject 7 is imaged from more than two viewpoints. This may increase the precision and/or visible range of the apparatus.
- the words "stereo" and "stereoscopic" as used herein are intended to encompass, in addition to the possibility of the subject being imaged from two viewpoints, the possibility of the subject being imaged from more than two viewpoints.
- the images captured are typically color images, having a separate intensity for each pixel in each of three color channels.
- the three channels may be treated separately in the process described below (e.g. such that each color channel provides a respective stereo pair of images).
- the system comprises a display device 8 having a plurality of lights 9.
- the imaging system is operative to illuminate the lights 9 in successive time periods (which are spaced apart as described below), so that the subject, who looks towards each light 9 as it is illuminated, successively changes his or her viewing direction (i.e. the direction in which he or she is looking).
- the subject might simply be asked to shift his or her viewing direction successively, for example to look in successive time periods at respective ones of a plurality of portions of a static display.
- a natural human reaction when a subject changes his or her viewing direction is for the subject to slightly move his or her head.
- the imaging system forms a respective three-dimensional model of the skin of the subject, and collects a respective set of eye data indicative of specular reflections from the subject's eyes.
- the skin models are used to reference the eye data into a common reference frame based on landmarks defined on the subject's face, so that motion of the subject's head is compensated for.
- Suitable image capture devices for use in the invention include the 1/3-Inch CMOS Digital
- All the images used for the modelling for a given time period are preferably captured within a duration of no more than 0.2 s, and more preferably no more than 0.1 s.
- the time is preferably less than a blink reaction time, so that imaging is unaffected if the subject closes his or her eyes in reaction to the illumination of the energy sources 1, 2, 3.
- the images are captured over a longer duration, such as up to about 1 second or even longer. This may be appropriate for example if the electromagnetic radiation generated by the energy sources 1, 2, 3 is not bright enough to cause blinking, and/or does not include electromagnetic radiation in the visible spectrum.
- the skin of the subject 7 will typically reflect electromagnetic radiation generated by the energy sources 1, 2, 3 by a Lambertian reflection, so the skin portion of the subject's face may be imaged in the manner described in detail in WO 2009/122200, to form a skin model.
- the skin model may optionally also include a portion of the subject's hair, although, since a subject's hair may move relative to the face as the head moves, the landmarks in the skin model discussed below are preferably landmarks of the subject's skin rather than of the hair.
- two acquisition techniques for acquiring 3D information are used to construct the skin model.
- One is photometric reconstruction, in which surface orientation is calculated from the observed variation in reflected energy against the known angle of incidence of the directional source. This provides a relatively high-resolution surface normal map alongside a map of relative surface reflectance (or illumination-free colour), which may be integrated to provide depth, or range, information which specifies the 3D shape of the object surface.
- This method of acquisition inherently yields good high-frequency detail, but it also introduces low-frequency drift, or curvature, rather than absolute metric geometry, because of the nature of the noise present in the imaging process.
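The photometric reconstruction described above reduces, under the Lambertian assumption, to a per-pixel least-squares problem. The following is a minimal illustrative sketch (function and variable names are not from the patent), assuming calibrated distant light sources with known unit directions and a linear camera response:

```python
import numpy as np

def photometric_normals(intensities, light_dirs):
    """Per-pixel surface normals and albedo from images lit one at a time
    by known directional sources, assuming Lambertian reflection
    (intensity = albedo * max(L . n, 0), with all pixels lit here).

    intensities: (k, h, w) stack of k images; light_dirs: (k, 3) unit vectors.
    """
    k, h, w = intensities.shape
    L = np.asarray(light_dirs, dtype=float)        # (k, 3) light directions
    I = intensities.reshape(k, -1)                 # (k, h*w) observations
    # Least-squares solve L @ g = I for g = albedo * normal at every pixel.
    g, *_ = np.linalg.lstsq(L, I, rcond=None)      # (3, h*w)
    albedo = np.linalg.norm(g, axis=0)
    normals = np.where(albedo > 0, g / np.maximum(albedo, 1e-12), 0.0)
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

The recovered normal field is what would then be integrated to obtain the depth (with the low-frequency drift noted above).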
- the other technique of acquisition is passive stereoscopic reconstruction, which calculates surface depth based on optical triangulation. This is based around known principles of optical parallax. This technique generally provides good unbiased low-frequency information (the coarse underlying shape of the surface of the object), but is noisy or lacks high frequency detail.
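A toy version of the optical triangulation underlying the stereoscopic technique, assuming the cameras are calibrated so that each matched pixel yields a known viewing ray (camera centre plus direction); the 3-D point is taken as the midpoint of closest approach of the two rays:

```python
import numpy as np

def triangulate(c0, d0, c1, d1):
    """Midpoint of closest approach between two viewing rays.

    c0, c1: camera centres; d0, d1: viewing directions through the
    matched pixel in each image of the stereo pair.
    """
    c0, d0, c1, d1 = (np.asarray(v, float) for v in (c0, d0, c1, d1))
    # Solve for ray parameters t0, t1 minimising |(c0+t0*d0) - (c1+t1*d1)|.
    A = np.array([[d0 @ d0, -(d0 @ d1)],
                  [d0 @ d1, -(d1 @ d1)]])
    b = np.array([(c1 - c0) @ d0, (c1 - c0) @ d1])
    t0, t1 = np.linalg.solve(A, b)
    return 0.5 * ((c0 + t0 * d0) + (c1 + t1 * d1))
```

Repeating this for every matched feature (nostril, mole, etc.) yields the coarse initial surface described below.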
- the skin model may be formed by forming an initial model of the shape of the skin using stereoscopic reconstruction, and then refining the initial model using the photometric data to form the skin model.
- the photometric reconstruction requires an approximating model of the surface material reflectivity properties. In the general case this may be modelled (at a single point on the surface) by the Bidirectional Reflectance Distribution Function (BRDF).
- a simplified model is typically used in order to render the problem tractable.
- One example is the Lambertian Cosine Law model. In this simple model the intensity of the surface as observed by the camera depends only on the quantity of incoming irradiant energy from the energy source and foreshortening effects due to surface geometry on the object.
- the stereoscopic reconstruction uses optical triangulation, by geometrically correlating the positions in the images captured by the image capture devices 4, 5 of the respective pixels representing the same point on the face (e.g. a feature such as a nostril or facial mole which can be readily identified on both images).
- the pair of images is referred to as a "stereo pair". This is done for multiple points on the face to produce the initial model of the surface of the face.
- the data obtained by the photometric and stereoscopic reconstructions is fused by treating the stereoscopic reconstruction as a low-resolution skeleton providing a gross-scale shape of the face, and using the photometric data to provide high-frequency geometric detail and material reflectance characteristics.
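The fusion just described can be caricatured as frequency blending: keep the low frequencies of the stereoscopic depth (unbiased coarse shape) and the high frequencies of the photometrically derived depth (fine detail, but with low-frequency drift). A minimal sketch using a simple moving-average low-pass filter; the actual fusion is more sophisticated, and the filter choice here is an illustrative assumption:

```python
import numpy as np

def fuse_depth(stereo_depth, photometric_depth, kernel=15):
    """Blend two depth maps (2-D arrays): low frequencies from the stereo
    depth, high frequencies from the photometrically integrated depth.
    Rows are filtered independently for brevity."""
    def lowpass(z):
        k = np.ones(kernel) / kernel
        return np.apply_along_axis(
            lambda r: np.convolve(r, k, mode="same"), 1, np.asarray(z, float))
    # High-pass residue of the photometric depth rides on the stereo skeleton.
    return lowpass(stereo_depth) + (photometric_depth - lowpass(photometric_depth))
```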
- the process 100 performed by the embodiment is illustrated in Fig. 2.
- the system is initiated, and one of the lights 9 is illuminated.
- the energy sources 1, 2, 3 are illuminated (e.g. one by one)
- this procedure is carried out during a time period which is preferably no more than 0.2 s, and more preferably no more than 0.1 s.
- an initial version of a three-dimensional model of the face (typically including the skin and eye regions, and usually the ear regions) is formed stereoscopically.
- the initial 3D model may be formed in other ways, for example using a depth camera.
- Known types of depth camera include those using sheet-of-light triangulation, structured light (that is, light having a specially designed light pattern), time-of- flight or interferometry.
- the initial 3D model is refined using the images and the photometric techniques described above.
- the resulting 3D model is referred to here as a skin model, since it includes an accurate model of the skin of the subject's face. However, it may also include the subject's hair and also a portion corresponding to (though not accurately representing) the eye regions of the subject's face.
- step 105 the specular reflections in the images are identified, and for each eye the specular reflections are used to form a set of eye data.
- This eye data may be in the form of a provisional eye model for each eye.
- a simple form for the provisional eye model which can be used is shown in Fig. 3. It consists of a sclera portion 10 representing the sclera (the outer white part of the eye), and a cornea portion 11 intersecting with the sclera portion 10.
- the sclera portion 10 may be frusto-spherical (i.e. a sphere minus a segment of the sphere which is to one side of a plane which intersects with the sphere).
- the sclera portion 10 of the provisional eye model may omit portions of the spherical surface which are angularly spaced from the cornea portion about the centre of the sphere by more than a predetermined angle.
- the centre of the sphere of which the sclera portion 10 forms a part is the centre of rotation (CER) of the eye.
- the cornea portion 11 of the model is a segment of a sphere with a smaller radius of curvature than the sclera portion 10; the cornea portion 11 too is frusto-spherical, being less than half of the sphere having the smaller radius of curvature.
- the cornea portion 11 is provided upstanding from the outer surface of the sclera portion 10 of the model, and the line of intersection between the sclera portion 10 and the cornea portion 11 is a circle.
- the centre of the cornea portion 11 is taken as the centre of the pupil. It lies on the line which passes through the centre of the sphere used to define the sclera portion 10 and the centre of the sphere used to define the cornea portion 11.
- the provisional eye model of Fig. 3 is defined by 8 parameters (numerical values): the coordinates of the CER in a 3-D space defined in relation to the position of the imaging assembly (3 numerical values); the radius of the sclera portion 10; the direction of the gaze of the subject (2 numerical values defining the orientation of the eye); the radius of curvature of the cornea portion 11; and the degree to which the cornea portion 11 stands up from the sclera portion 10. These values are estimated from the specular reflections to form a provisional eye model.
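For concreteness, the eight parameters listed above can be collected in a small structure. The field names below are illustrative assumptions, not terminology from the patent:

```python
from dataclasses import dataclass

@dataclass
class ProvisionalEyeModel:
    """The eight numerical values defining the provisional eye model of Fig. 3."""
    cer_x: float            # centre of eye rotation (CER), imaging-assembly frame
    cer_y: float
    cer_z: float
    sclera_radius: float    # radius of the frusto-spherical sclera portion 10
    gaze_azimuth: float     # gaze direction: 2 angles defining eye orientation
    gaze_elevation: float
    cornea_radius: float    # smaller radius of curvature of the cornea portion 11
    cornea_offset: float    # degree to which the cornea stands up from the sclera
```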
- additional knowledge may be used in this process. For example, the eyeballs of individuals (especially adult individuals) tend to be of about the same size, and this knowledge may be used to pre-set certain dimensions of the provisional eye model.
- each of the energy sources 1, 2, 3 is fired in turn, and when each of the energy sources 1, 2, 3 is fired, each of the image capturing devices 4, 5 captures an image.
- the electromagnetic radiation produced by each energy source is reflected by each of the eyes of the subject in a specular reflection.
- each image captured by one of the devices 4, 5 will include at least one very bright region for each eye, and the position in that image of the very bright region is a function of the translational position and orientation of the eye.
- In total six images of the face are captured, and if each of them contains (in the eye) a very bright region ("glint") with a two-dimensional position in the image, then in total 12 data values can be obtained.
- the 8 parameters of the provisional eye model can be estimated ("fitted" to the data values). This can include computationally searching for values of the desired parameters of the provisional eye model which are most closely consistent with the observed positions of the specular reflections within the images.
- Fig. 4 shows by crosses 12a, 12b, 12c specular reflections captured by the image capturing device 4, and by crosses 13a, 13b, 13c the specular reflections captured by the image capturing device 5.
- the crosses are shown in relation with the provisional eye model following the process of fitting the parameters of the provisional eye model to the observed positions of the specular reflections in the image.
- the number of energy sources may be increased.
- each of the imaging devices 4, 5 could capture up to six images, each showing the specular reflection when a corresponding one of the energy sources is generating electromagnetic radiation.
- the specular reflection would cause a bright spot in the corresponding two-dimensional image, so in total, having identified in each two-dimensional image the two-dimensional position of the bright spot, the processor would then have twenty-four data values. These twenty-four values could then be used to estimate the eight numerical parameters defining the provisional eye model. This is illustrated in Fig. 5, where the six specular reflections captured by the imaging device 4 are labelled 22a, 22b, 22c, 22d, 22e and 22f.
- the six specular reflections captured by the imaging device 5 are shown in Fig. 5 but not labelled.
- the processor expresses the provisional eye model in a coordinate system defined relative to the pre-known fixed relative positions of the image capturing devices 4, 5 and the energy sources 1, 2, 3.
- the skin model and the provisional eye model are in the same coordinate system.
- step 106 it is determined whether all the lights 9 in the display have been illuminated. If not, in step 107 the light 9 illuminated in step 101 is turned off and another of the lights 9 is illuminated. In step 108 there is a delay (typically of 3 to 5 seconds), to allow the eye(s) of the subject to recover from any blink caused by the flashes in step 102, and for the subject's eye(s) to stabilize at the newly illuminated light 9. Note that the delay in step 108 is typically at least a factor of 10 greater than the time taken to perform step 102. Then the process returns to step 102. Thus, another skin model and another provisional eye model are formed. In total, in the respective time period in which each of the lights 9 is illuminated, a respective skin model and respective provisional eye model are formed. The time periods are spaced apart by an amount of time substantially equal to the delay of step 108.
- step 109 the skin models for each of the time periods are brought into a common reference frame.
- this step employs recognizable landmarks on the skin of the skin model.
- the process may be carried out by identifying a number of landmarks on each skin model (typically corresponding to pre-determined landmark features, such as the tip of the nose or the plane of mirror symmetry of the face). Then a respective positional mapping (including a translational component and a rotational component) is derived between a first of the skin models and the other skin models.
- Each positional mapping brings the landmarks of the first skin model into register with corresponding ones of the landmarks of the respective other skin model, and represents the movement of the subject's head between the capture time of the images used to form the first skin model and the capture time of the images used to form the respective other skin model.
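Deriving such a positional mapping (rotation plus translation) from corresponding landmarks is a standard rigid-registration problem, solvable in closed form by the Kabsch algorithm. A sketch, assuming the landmark sets are already in one-to-one correspondence (an assumption; the patent does not specify the registration method):

```python
import numpy as np

def rigid_mapping(src, dst):
    """Least-squares rotation R and translation t bringing landmark set
    `src` into register with corresponding landmarks `dst` (Kabsch).
    Returns (R, t) such that dst ~= src @ R.T + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

The same (R, t) can then be applied to the provisional eye model of the corresponding time period to carry it into the common reference frame.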
- the provisional eye model corresponding to the first skin model is already in the reference frame of the first skin model.
- the positional mapping is used to bring the other provisional eye models into the reference frame of the first skin model.
- all the provisional eye models are converted into the common reference frame.
- the provisional eye models in the common reference frame are combined to form a (final) eye model of each of the eyes in the common reference frame.
- the final eye model may have the form shown in Fig. 3, including a (e.g. frusto-spherical) cornea portion 11, and a (e.g. frusto-spherical) sclera portion 10 with a centre which is the centre of eye rotation (CER).
- the positions of the respective centres of the sclera portion 10 of each provisional eye model may be combined (e.g. their mean calculated), to derive the centre of rotation of the final eye model.
- a composite model is derived of the face of the subject. This includes the eye models for each eye derived in step 109, and a skin portion derived from one or more of the skin models (in locations away from the respective eye models).
- step 112 the processor measures one or more dimensions of the composite model (the skin portion of the composite model and/or the eye model(s) of the composite model), such as the inter-pupil distance, and the distances between locations on the nose where the eyewear will be supported and the ears.
- the processor stores in a data-storage device a 3D model of at least part of an item of eyewear intended to be placed in proximity of the face.
- the item of eyewear may be a pair of glasses (which may be glasses for vision correction, sunglasses or glasses for eye protection, or a headset for AR or VR).
- the processor uses the measured dimensions of the composite model to modify at least one dimension of the 3D model of the eyewear.
- the configuration of a nose-rest component of the object model (which determines the position of a lens relative to the nose) may be modified according to the inter-pupil distance, and/or to ensure that the lenses are positioned at a desired spatial location relative to the subject's eyes when the eyes face in a certain direction.
- the length of the arms may be modified in the eyewear model based on the skin portion of the composite model to make this a comfortable fit. If the face model is accurate to within 250 microns, this will meet or exceed the requirements for well-fitting glasses.
- at least one dimension of at least one lens of the eyewear may be modified based on the composite model. For example, as illustrated in Fig. 6, the lens 30 of the item of eyewear includes a portion 32 which is relatively close to the CER 31 of the eye model portion of the composite model, and a portion 33 which is relatively far from the CER 31.
- the refractive power of the lens 30 may be controlled to be different in the region 33 from that in the region 32 according to this difference in distances, so that the subject's vision is corrected irrespective of whether the subject is looking towards the portion 32 or the portion 33. Note that this control of the refractive power may be performed in combination with any control of the refractive power which is used to make the lens 30 into a bifocal, multi-focal or varifocal lens.
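The patent does not give the formula for this distance-dependent power adjustment. One standard optometric relation that captures the idea, offered purely as an illustration, is vertex-distance compensation: when a lens region sits x metres closer to the eye than the distance at which a power was prescribed, the power needed there to keep the same far point changes as follows (sign conventions vary between texts):

```python
def compensated_power(power_d, shift_m):
    """Vertex-distance compensation (illustrative only): power in dioptres
    a lens region must have after being moved shift_m metres closer to
    the eye to provide the same far-point correction as power_d did at
    the original vertex distance."""
    return power_d / (1.0 - shift_m * power_d)
```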
- step 114 the system uses the modified eyewear model to produce at least one component of an item of eyewear according to the model (e.g. the arms and/or the nose-rest component, and/or at least one of the two lenses). This can be done for example by three-dimensional printing. Note that if the eyewear is an item such as varifocal glasses, great precision in producing them is essential, and a precision level of the order of 250 microns, which is possible in preferred embodiments of the invention, may be essential for high technical performance. A number of variations are possible to the process of Fig. 2 within the scope of the invention. Firstly, the order of steps may be different (e.g. computational steps 103-105 may be performed after steps 102, 106 and 107 have been completed). Also, computational steps 103, 104 may be performed after, or in parallel with, step 105.
- step 105 may not include forming a respective provisional eye model for each of the time periods.
- the eye data used in step 109 may be eye data describing the specular reflections, and this "raw" reflection data may be processed in step 110.
- the generation of the eye model may include identifying as many as possible of the specular reflections which lie substantially on a common sphere. These reflections are identified as being made by the sclera, and are used to construct the sclera portion 10 of an eye model as shown in Fig. 3, which is centred on the CER. The other reflections may be assumed to be reflections from the cornea, and are used to construct the cornea portion 11 of the eye model.
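Testing whether reflection points lie "substantially on a common sphere" amounts to a sphere fit. A linear least-squares version (the fitting method is not specified in the patent; this is one common choice):

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere through 3-D points; returns (centre, radius).
    Uses the linearisation |p|^2 = 2 p.c + (r^2 - |c|^2), so the fit
    reduces to one lstsq call."""
    P = np.asarray(points, float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = x[:3]
    radius = np.sqrt(x[3] + centre @ centre)
    return centre, radius
```

Points whose residual against the fitted sphere is small can be attributed to the sclera (the centre then estimating the CER); the remainder can be attributed to the cornea.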
- the energy sources 1, 2, 3 may be designed and controlled in several ways. First, as mentioned above, it may be advantageous for the processor to control the timing of the operation of the energy sources, for example to ensure that only a selected subset of the energy sources 1, 2, 3 is operating when a certain image is captured, e.g. such that only one of the energy sources is operating when any corresponding image is captured; this is usual for photometry. If the energy sources (at least, those which produce the same level of light intensity) are activated successively with no significant gaps between them, then during this period the total level of light would be substantially constant; this would minimize the risk of the subject blinking. Optionally, an additional image may be captured with all the light sources firing.
- the illumination system may employ polarization of the electromagnetic radiation.
- the processor forms the skin model using Lambertian reflections, and fits the parameters of each eye model using the specular reflections.
- the skin is not a perfect Lambertian reflector, and an eye is not a perfect specular reflector.
- the imaging process may use polarization to help the processor distinguish Lambertian reflection from specular reflection, since Lambertian reflection tends to destroy any polarization in the incident light, whereas specular reflection preserves polarization.
- the energy sources 1, 2, 3 would comprise polarization filters (e.g. linear polarization filters), and the image capturing devices 4, 5 would be provided with a respective constant input polarization filter, to preferentially remove electromagnetic radiation polarized in a certain direction.
- the choice of that direction, relative to the polarization direction of the electromagnetic radiation emitted by the energy sources 1, 2, 3, would determine whether the filter causes the image capturing devices 4, 5 to preferentially capture electromagnetic radiation due to Lambertian reflection, or conversely preferentially capture electromagnetic radiation due to specular reflection.
- a suitable linear polarizer would be the XP42 polarizer sheet provided by ITOS Deutschen Technische Optik mbH of Mainz, Germany. Note that this polarizer sheet does not work for IR light (for example, with wavelength 850nm), so should not be used if that choice is made for the energy sources.
- the imaging apparatus would include a first set of image capturing devices for capturing the Lambertian reflections, and a second set of image capturing devices for capturing the specular reflections.
- the first image capturing devices would be provided with a filter for preferentially removing light polarized in the direction parallel to the polarization direction of the electromagnetic radiation before the reflection and/or the second image capturing devices would be provided with a filter for preferentially removing light polarized in the direction transverse to the polarization direction of the electromagnetic radiation before the reflection.
- the processor would use the images generated by the first set of image capturing devices to form the skin model, and the images generated by the second set of image capturing devices to fit the parameters of the eye model.
- each of the image capturing devices 4, 5 may be provided with a respective electronically-controllable filter, which filters light propagating towards the image capturing device to preferentially remove electromagnetic radiation polarized in a certain direction.
- the image capturing device may capture two images at times when a given one of the energy sources 1, 2, 3 is illuminated: one image at a time when the filter is active to remove the electromagnetic radiation with the certain polarization, and one when the filter is not active.
- the relative proportions of Lambertian reflection and specular reflection in the two images will differ, so that by comparing the two images, the processor is able to distinguish the Lambertian reflection from the specular reflection, so that only light intensity due to the appropriate form of reflection is used to form the skin model and/or the eye model.
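Under idealized assumptions (the specular component fully preserves the source polarization, the Lambertian component is fully depolarized so half of it passes either filter orientation, and the filters are perfect), the comparison of two such captures can be written per pixel as a textbook cross-polarization separation. This is an illustration of the principle, not the patent's actual processing:

```python
import numpy as np

def separate_reflections(img_parallel, img_cross):
    """Split each pixel into Lambertian and specular components from two
    captures: one through a polarizer parallel to the source polarization
    (passes specular + half the depolarized Lambertian light) and one
    through a crossed polarizer (passes only half the Lambertian light)."""
    img_parallel = np.asarray(img_parallel, float)
    img_cross = np.asarray(img_cross, float)
    lambertian = 2.0 * img_cross
    specular = np.clip(img_parallel - img_cross, 0.0, None)
    return lambertian, specular
```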
- some or all of the energy sources 1, 2, 3 may generate IR or near-IR light. This is particularly useful if it is not desirable for the subject to see the directional energy (e.g.
- the subject may be requested to keep his or her direction of vision constant, but to turn only his or her head.
- the display 8 would require only a single light 9, and step 107 is replaced with a step in which the subject moves his or her head.
- An alternative way to use the composite model obtained in step 110 is within an augmented reality (AR) or virtual reality (VR) system.
- steps 112-114 of the method 100 are omitted.
- a respective 3-D skin model of the skin of the subject may be formed at each of a series of successive times. For example, this may be done using a process employing one or more energy sources and one or more image capturing devices at known positions in a certain reference frame. Steps corresponding to steps 101 to 104 may be performed at each of the successive times. At each one of the successive times (referred to as the "current time"), the respective skin model (the "current skin model") is compared to the skin portion of the composite model.
- the composite model is brought into the reference frame of the energy source(s) and image capturing device(s).
- the processor may calculate at least one image, which is displayed to the subject using at least one respective display device at a known respective position in the common reference frame.
- the images may be such as to give the subject the experience of AR or VR in a realistic way, since the eye model of the composite model gives valuable information for generating the images.
- the eye data collected in each time period which is indicative of the position of at least a portion of the subject's eye may not be derived from specular reflections.
- it may be data obtained by tracking the movement of the subject's iris in the images.
- Azuma & Bishop ("Improving static and dynamic registration in an optical see-through HMD", Proceedings of SIGGRAPH 1994) use a two-vector method to find the centre of an eye by aligning the subject's eyepoint with two intersecting vectors. This is based on the principle that when a user aligns a central axis of his or her eye with a "vector" (i.e. a line having a certain direction and passing through a certain point), that vector passes through the centre of the eye.
- the subject might in a given time period align his or her eye with a vector (possibly one generated by a display device), so that the eye data might be data characterizing the position of the vector in a reference frame of the imaging devices used to obtain the 3-D skin model for the corresponding time period, and thereby indicating that the centre of the eye lies on this vector.
- E. Whitmire et al ("Eyecontact: Scleral coil eye tracking for virtual reality", ISWC 2016) describe a system in which the position of an eye is obtained from eye data characterizing magnetic interactions with magnetic elements mounted into an element (e.g. a silicone annulus) worn by the user on the sclera of the eye.
- eye data obtained in this way could be used in the present invention also.
- Fig. 7 is a block diagram showing a technical architecture of the overall system 200 for performing the method.
- the technical architecture includes a processor 322 (which may be referred to as a central processor unit or CPU) that is in communication with the cameras 4, 5, for controlling when they capture images and receiving the images.
- the processor 322 is further in communication with, and able to control the energy sources 1 , 2, 3, and the display 8.
- the processor 322 is also in communication with memory devices including secondary storage 324 (such as disk drives or memory cards), read-only memory (ROM) 326, and random access memory (RAM) 328.
- the processor 322 may be implemented as one or more CPU chips.
- the system 200 includes a user interface (UI) 330 for controlling the processor 322.
- the UI 330 may comprise a touch screen, keyboard, keypad or other known input device. If the UI 330 comprises a touch screen, the processor 322 is operative to generate an image on the touch screen. Alternatively, the system may include a separate screen (not shown) for displaying images under the control of the processor 322. Note that the UI 330 is separate from the display 8, since the UI 330 is typically used by an operator to control the system, whereas the display is used for the subject to look at.
- the system 200 optionally further includes a unit 332 for forming 3D objects designed by the processor 322; for example the unit 332 may take the form of a 3D printer. Alternatively, the system 200 may include a network interface for transmitting instructions for production of the objects to an external production device.
- the secondary storage 324 is typically comprised of a memory card or other storage device and is used for non-volatile storage of data and as an over-flow data storage device if RAM 328 is not large enough to hold all working data. Secondary storage 324 may be used to store programs which are loaded into RAM 328 when such programs are selected for execution.
- the secondary storage 324 has an order generation component 324a, comprising non-transitory instructions operative by the processor 322 to perform various operations of the method of the present disclosure.
- the ROM 326 is used to store instructions and perhaps data which are read during program execution.
- the secondary storage 324, the RAM 328, and/or the ROM 326 may be referred to in some contexts as computer readable storage media and/or non-transitory computer readable media.
- the processor 322 executes instructions, codes, computer programs, and scripts which it accesses from hard disk, floppy disk, optical disk (these various disk-based systems may all be considered secondary storage 324), flash drive, ROM 326, RAM 328, or the network connectivity devices 332.
- processor 322 may be provided as an FPGA (field-programmable gate array), configured after its manufacturing process, for use in the system of Fig. 7. While only one processor 322 is shown, multiple processors may be present. Thus, while instructions may be discussed as executed by a processor, the instructions may be executed simultaneously, serially, or otherwise executed by one or multiple processors.
Abstract
An imaging system captures a plurality of images of the subject's face at each of a series of successive time periods in which a subject looks in respective directions. Using the images for each time period, the system forms a skin model corresponding to a skin portion of the subject's face and obtains eye data characterizing an eye of the subject. The system uses the skin models to convert the eye data from the multiple time periods into a common reference frame, then uses the eye data to obtain a digital eye model in the reference frame indicating at least the position of the centre of rotation of the eye (CER). Typically, such an eye model is obtained for each of the subject's eyes. The eye model is used in a method of designing an item of eyewear, or in an augmented reality or virtual reality system.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB1702871.3A GB2559977A (en) | 2017-02-22 | 2017-02-22 | Systems and methods for obtaining information about the face and eyes of a subject |
| GB1702871.3 | 2017-02-22 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018154272A1 true WO2018154272A1 (fr) | 2018-08-30 |
Family
ID=58486762
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/GB2018/050269 Ceased WO2018154272A1 (fr) | 2017-02-22 | 2018-01-30 | Systèmes et procédés pour obtenir des informations concernant le visage et les yeux d'un sujet |
Country Status (2)
| Country | Link |
|---|---|
| GB (1) | GB2559977A (fr) |
| WO (1) | WO2018154272A1 (fr) |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11116407B2 (en) | 2016-11-17 | 2021-09-14 | Aranz Healthcare Limited | Anatomical surface assessment methods, devices and systems |
| US11250945B2 (en) | 2016-05-02 | 2022-02-15 | Aranz Healthcare Limited | Automatically assessing an anatomical surface feature and securely managing information related to the same |
| US11850025B2 (en) | 2011-11-28 | 2023-12-26 | Aranz Healthcare Limited | Handheld skin measuring or monitoring device |
| US11903723B2 (en) | 2017-04-04 | 2024-02-20 | Aranz Healthcare Limited | Anatomical surface assessment methods, devices and systems |
| US12039726B2 (en) | 2019-05-20 | 2024-07-16 | Aranz Healthcare Limited | Automated or partially automated anatomical surface assessment methods, devices and systems |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10628931B1 (en) | 2019-09-05 | 2020-04-21 | International Business Machines Corporation | Enhancing digital facial image using artificial intelligence enabled digital facial image generation |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20030123026A1 (en) * | 2000-05-18 | 2003-07-03 | Marc Abitbol | Spectacles fitting system and fitting methods useful therein |
| WO2015177459A1 (fr) * | 2014-05-20 | 2015-11-26 | Essilor International (Compagnie Generale D'optique) | Procede de determination d'au moins un parametre de comportement visuel d'un individu |
| US20160011437A1 (en) * | 2013-02-28 | 2016-01-14 | Hoya Corporation | Spectacle lens design system, supply system, design method and manufacturing method |
| US20160252751A1 (en) * | 2015-01-16 | 2016-09-01 | James Chang Ho Kim | Methods of Designing and Fabricating Custom-Fit Eyeglasses Using a 3D Printer |
| WO2017077279A1 (fr) * | 2015-11-03 | 2017-05-11 | Fuel 3D Technologies Limited | Systems and methods for generating and using three-dimensional images |
| WO2018000020A1 (fr) * | 2016-06-29 | 2018-01-04 | Seeing Machines Limited | Systems and methods for performing eye gaze tracking |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| FR2884130B1 (fr) * | 2005-04-08 | 2008-02-15 | Essilor Int | Method and device for determining the centre of rotation of an eye |
- 2017
  - 2017-02-22: Application GB1702871.3A filed in GB, published as GB2559977A (not active, withdrawn)
- 2018
  - 2018-01-30: International application PCT/GB2018/050269 filed, published as WO2018154272A1 (not active, ceased)
Cited By (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11850025B2 (en) | 2011-11-28 | 2023-12-26 | Aranz Healthcare Limited | Handheld skin measuring or monitoring device |
| US11250945B2 (en) | 2016-05-02 | 2022-02-15 | Aranz Healthcare Limited | Automatically assessing an anatomical surface feature and securely managing information related to the same |
| US11923073B2 (en) | 2016-05-02 | 2024-03-05 | Aranz Healthcare Limited | Automatically assessing an anatomical surface feature and securely managing information related to the same |
| US11116407B2 (en) | 2016-11-17 | 2021-09-14 | Aranz Healthcare Limited | Anatomical surface assessment methods, devices and systems |
| US12268472B2 (en) | 2016-11-17 | 2025-04-08 | ARANZ Medical Limited | Anatomical surface assessment methods, devices and systems |
| US11903723B2 (en) | 2017-04-04 | 2024-02-20 | Aranz Healthcare Limited | Anatomical surface assessment methods, devices and systems |
| US12279883B2 (en) | 2017-04-04 | 2025-04-22 | ARANZ Medical Limited | Anatomical surface assessment methods, devices and systems |
| US12039726B2 (en) | 2019-05-20 | 2024-07-16 | Aranz Healthcare Limited | Automated or partially automated anatomical surface assessment methods, devices and systems |
Also Published As
| Publication number | Publication date |
|---|---|
| GB201702871D0 (en) | 2017-04-05 |
| GB2559977A (en) | 2018-08-29 |
Similar Documents
| Publication | Title |
|---|---|
| EP3371781B1 (fr) | Systems and methods for generating and using three-dimensional images |
| US10775647B2 (en) | Systems and methods for obtaining eyewear information |
| KR102762435B1 (ko) | Fixed-distance virtual and augmented reality systems and methods |
| CN103442629B (zh) | Method and optical measuring device for determining at least one parameter of two eyes by setting data rates |
| WO2018154272A1 (fr) | Systems and methods for obtaining information about the face and eyes of a subject |
| KR101260287B1 (ko) | Spectacle lens comparison simulation method using augmented reality |
| US10307053B2 (en) | Method for calibrating a head-mounted eye tracking device |
| JP6498606B2 (ja) | Wearable gaze measurement device and method of use |
| EP0596868B1 (fr) | Eye tracking method using an image acquisition device |
| US9779512B2 (en) | Automatic generation of virtual materials from real-world materials |
| US9292765B2 (en) | Mapping glints to light sources |
| WO2015051751A1 (fr) | Interactive projection display device |
| JP7165994B2 (ja) | Method and device for collecting eye measurements |
| JPH0782539B2 (ja) | Pupil image capturing device |
| CA3085733A1 (fr) | System and method for obtaining fitting and fabrication measurements for eyeglasses using simultaneous localization and mapping |
| CN111587397B (zh) | Image generation device, spectacle lens selection system, image generation method, and program |
| CN117957479A (zh) | Compact imaging optics with spatially positioned freeform optical components for distortion compensation and image sharpness enhancement |
| EP4542282A1 (fr) | Head-mounted device with double-vision compensation and improved vergence comfort |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | EP: the EPO has been informed by WIPO that EP was designated in this application |
Ref document number: 18703828 Country of ref document: EP Kind code of ref document: A1 |
|
| DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (PCT application filed from 20040101) | ||
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 32PN | EP: public notification in the EP bulletin as the address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20.11.2019) |
|
| 122 | EP: PCT application non-entry into the European phase |
Ref document number: 18703828 Country of ref document: EP Kind code of ref document: A1 |