
WO1999025239A1 - Automated photorefractive screening - Google Patents

Automated photorefractive screening Download PDF

Info

Publication number
WO1999025239A1
WO1999025239A1 (PCT application PCT/US1998/024275)
Authority
WO
WIPO (PCT)
Prior art keywords
eye
reflexes
eyes
model
patient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US1998/024275
Other languages
French (fr)
Inventor
Stuart Brown
Adam Hoover
Barbara Brody
Dirk-Uwe Bartsch
David Granet
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/173,571 external-priority patent/US6089715A/en
Application filed by Individual filed Critical Individual
Priority to AU15240/99A priority Critical patent/AU1524099A/en
Priority to EP98959444A priority patent/EP1052928A1/en
Priority to KR1020007005253A priority patent/KR20010032112A/en
Priority to JP2000520683A priority patent/JP2001522679A/en
Publication of WO1999025239A1 publication Critical patent/WO1999025239A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Classifications

    • A61B 3/0025: Apparatus for testing the eyes; operational features characterised by electronic signal processing, e.g. eye models
    • A61B 3/02: Apparatus for testing the eyes; subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B 3/103: Apparatus for testing the eyes; objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for determining refraction, e.g. refractometers, skiascopes
    • G06V 40/168: Recognition of biometric, human-related or animal-related patterns in image or video data; human faces; feature extraction; face representation
    • G06V 40/18: Recognition of biometric, human-related or animal-related patterns in image or video data; eye characteristics, e.g. of the iris

Definitions

  • This invention relates to instruments for measuring characteristics of eyes, and more particularly to a system and method for locating and modeling eyes in imagery for automated photorefractive screening, and for enabling determination of the presence of anomalies in the patient's visual anatomy.
  • a well-known eye examination consists of shining a light into the interior of the eye and visually inspecting the uniformity of the reflected light, normally visible as a red color filling the pupil. Any deviation from uniformity, either in one eye or between the pair of eyes, indicates a potential problem in the patient's visual anatomy. Such light refractive screening is thus a useful tool in patient eye care.
  • Ocular disorders such as strabismus, various forms of refractive errors (myopia, hyperopia, and astigmatism) and opacities of the ocular media are the leading causes of amblyopia, or vision loss. These combine to cause amblyopia in approximately 5% of the population. However, some form of visual problem, not necessarily leading to amblyopia, may be present in over 20% of children.
  • the present invention is directed to a system and method for locating and modeling eyes in imagery for automated photorefractive screening, and for enabling determination of the presence of anomalies in the patient's visual anatomy.
  • Pattern Recognition 19(1), 1986, pp. 77-84
  • Eye tracking may be accomplished via a constrained updating of an eye model (for instance, see X. Xie, R. Sudhakar and H. Zhuang, "On improving eye feature extraction using deformable templates", in Pattern Recognition Letters, 27(6), 1994, pp. 791-799; A. Yuille and P. Hallinan, "Deformable Templates", Chapter 2 in Active Vision, MIT Press, 1992, pp. 21-38).
  • Face recognition may be accomplished using a variety of image transformations or feature extractions (for instance, see International Conference on Automatic Face and Gesture Recognition, proceedings of, edited by M. Bichsel, 1995).
  • a system and method for locating and modeling eyes in imagery for automated photorefractive screening includes a digital camera having a lens-mounted flash for obtaining a digital image of the face of an individual, and a suitably programmed processor, such as a general purpose digital computer, for locating an eye of the individual in the digital image, modeling structures in the eye, analyzing the digitized eyes of the individual for eye disease, and providing a recommendation for treatment.
  • the invention includes a system and method for locating a patient's eyes in a digital image that includes each eye as illuminated by a near-axis flash, including automatically finding light reflexes in the digital images as indicative of the location of each eye.
  • Automatically finding light reflexes includes analyzing such light reflexes to determine possible pupil and sclera borders.
  • the invention further includes automatically fitting a corresponding model to such possible pupil and sclera borders, analyzing the model of each eye to determine possible abnormalities in each eye; and outputting a possible diagnosis for each eye based on such analyzing.
  • the invention includes a system and method for locating and modeling a patient's eyes in a digital image that includes each eye as illuminated by a near-axis flash, including finding and indicating bright spots in the digital images as possible corneal reflexes of each eye; finding and indicating red-black and black-white gradients, each comprising a set of gradient points, around such bright spots as possible pupil and sclera borders, respectively, of each eye; fitting a plurality of circles to subsets of such gradient points as possible models for each eye, each eye model having an associated strength; and sorting the eye models for each eye by strengths, and indicating the strongest corresponding eye model as best representing each eye.
  • Another aspect of the invention includes measuring red reflexes and corneal reflexes from the indicated eye models as an indicator of anomalies in the patient's eyes.
  • Yet another aspect of the invention includes generating a digital image of each of a patient's eyes with a camera having a flash positioned near to a center axis of a lens of the camera so as to generate images with bright, sharp light reflexes.
  • the target audience for the invention is previously unreached large populations, such as school children. In order to keep the imaging process simple and cheap, no special apparatus is used.
  • the eyes may appear anywhere in the image of the patient's face (simplifying use of the invention by field eyecare practitioners), the eyes of the patient need not fill the camera image frame, the camera may be at a distance from the patient that is not intrusive to the patient, and the patient's head need not be tightly constrained.
  • FIG. la is a block diagram showing the preferred physical components of the invention as used for imaging an eye of a patient.
  • FIG. lb is a perspective view of a lens 14 and flash 12 in accordance with the invention.
  • FIG. 2a is a diagram showing the light path from a flash to a pair of normal eyes and back to a camera, as well as the appearance of the corneal reflexes from the camera view.
  • FIG. 2b is a diagram showing the light path from a flash to a pair of eyes, one of which is abnormal, and back to a camera, as well as the appearance of the corneal reflexes from the camera view.
  • FIG. 2c is a diagram of the retinal reflex 26 for a pair of normal eyes, showing uniform reflectance equal in both eyes and no refractive error.
  • FIG. 2d is a diagram of the retinal reflex for a pair of eyes, one of which has a retinal pathology that causes lesser reflectance.
  • FIG. 2e is a diagram of the retinal reflex for a pair of eyes, each showing a crescent-shaped reflectance, which indicates the same type of refractive error in both eyes.
  • FIG. 2f is a diagram of the retinal reflex for two eyes, showing different crescent reflectances, which indicates differing amounts of refractive error in the eyes.
  • FIG. 3a is a close-up photograph of a patient's eye.
  • FIG. 3b is a close-up photograph of a patient's eye with a superimposed graphical representation of the eye model used in the invention.
  • FIG. 4a is a close-up photograph of a patient's eye containing an abnormal pupil area
  • FIG. 4b is a close-up photograph of a patient's eye containing an abnormal pupil area with a superimposed graphical representation of the eye model used in the invention.
  • FIG. 5 is a flowchart showing an overview of the steps of the image processing of the preferred embodiment of the invention.
  • FIG. 6a is a close-up photograph of a patient's eye without a superimposed model.
  • FIG. 6b is a close-up photograph of a patient's eye, showing black-to-white gradient points located for the image in FIG. 6a.
  • FIG. 6c is a close-up photograph of a patient's eye, illustrating the result of applying constraints to the black-to-white gradient points shown in Fig. 6b.
  • FIG. 6d is a close-up photograph of a patient's eye, showing the highest scoring subset of black-to-white gradient points from the set shown in FIG. 6c.
  • FIG. 7 is a flowchart of a preferred test sequence using the invention, once an image is acquired and the eyes are located and modeled.
  • FIG. la is a block diagram showing the preferred physical components of the invention as used for imaging the eyes of a patient.
  • a digital camera 10 is modified to include a "near- axis" flash 12 positioned slightly off the optical axis of the lens 14 of the camera 10.
  • the flash 12 may be, for example, a ring flash or one or more small flash units attached near the optical axis of the lens 14 of the camera 10.
  • a suitable camera is the Model DC120 Digital Camera from Kodak Corporation, with an attached 6X telephoto lens 14 and a small flash unit 12 mounted to the front of the telephoto lens, slightly off-center.
  • the telephoto lens magnifies the patient's eye within the image, thus increasing the number of pixels available for analysis.
  • FIG. lb is a perspective view of a lens 14 and flash 12 in accordance with the invention.
  • the flash 12 in the illustrated embodiment uses visible light.
  • a flash that principally emits non-visible light, such as an infrared flash, may be used instead with a suitable camera 10.
  • the camera 10 is used to capture and directly download a digital image of the face of a patient 16 to a conventional computer 18 for processing.
  • the camera 10 is used to obtain a high quality digital image of the patient's face and eyes, preferably a full-color (e.g., RGB), high resolution (e.g., 8 bits per pixel for each color) image at least about 640x480 pixels in size, and preferably at least about 1024x768 pixels in size.
  • the techniques of the invention can be applied to gray-scale images. However, for purposes of this explanation, a color digital image will be assumed.
  • the camera 10 may be a conventional film camera, such as an instant photography camera, the images of which are optically scanned into the computer 18.
  • processing of the image in accordance with the invention is done within a specially programmed digital camera 10, allowing for an integrated photoscreening system.
  • Photoscreening works by photographing two distinct light reflexes from the eye, a corneal reflex and a retinal reflex.
  • Light from a flash 12 is bounced off of the air-tear film interface on the cornea of the eye. Since the corneal surface is essentially spherical, the closest point of the cornea to the camera 10 will reflect the light flash back to the camera as a corneal reflex. If the eye faces the camera, the light reflection point is normally centered in the pupil of the eye. If the eye has strabismus or a related defect, the light reflection point is not centered on the pupil.
  • Several corneal reflex conditions are shown in FIGS. 2a-2b.
  • FIG. 2a is a diagram showing the light path from a flash to a pair of normal eyes and back to a camera, as well as the appearance of the corneal reflexes from the camera view.
  • the corneal light reflex 20 is essentially centered within the outline of the iris 22.
  • FIG. 2b is a diagram showing the light path from a flash to two eyes, one of which is abnormal, and back to a camera, as well as the appearance of the corneal reflexes from the camera view.
  • the corneal light reflex 20 is essentially centered within the outline of the iris 22 for the left eye, but the corneal light reflex 24 for the right eye is off-center. Note that the tolerance of "centered" depends in part on the displacement of the flash 12 from the optical axis of the camera lens 14.
  • the retinal light reflex gives information on ocular pathologies and the refractive state of the eye. If one eye has any pathology (e.g., a tumor, blood, cataract, etc.), the retina does not reflect as much light as in a normal eye. The difference in reflectance between an abnormal retina and a normal retina is noticeable. Also, in an eye without refractive error, the retinal light reflex appears as a uniformly bright circle. If the eye has any refractive error, a crescent will appear in the retinal light reflex. The relative size of the crescent to the pupil diameter is generally related to the refractive error.
  • Several retinal reflex conditions are shown in FIGS. 2c-2f.
  • FIG. 2c is a diagram of the retinal reflex 26 for a pair of normal eyes, showing uniform reflectance equal in both eyes and no refractive error.
  • FIG. 2d is a diagram of the retinal reflex for a pair of eyes, one of which has a retinal pathology that causes lesser reflectance 28.
  • FIG. 2e is a diagram of the retinal reflex for a pair of eyes, each showing a crescent-shaped reflectance 30, which indicates the same type of refractive error in both eyes.
  • FIG. 2f is a diagram of the retinal reflex for a pair of eyes, showing different crescent reflectances 32, 34, which indicates differing amounts of refractive error in the eyes.
  • a model of the eyes in an image of a patient's face must be generated.
  • the frontal projection of an eye is modeled by a pair of concentric circles:
  • FIG. 3a is a close-up photograph of a patient's eye.
  • FIG. 3b is a close-up photograph of a patient's eye with a superimposed graphical representation of the eye model used in the invention.
  • FIG. 4a is a close-up photograph of a patient's eye containing an abnormal pupil area.
  • FIG. 4b is a close-up photograph of a patient's eye containing an abnormal pupil area with a superimposed graphical representation of the eye model used in the invention.
  • a set of reflexes is measured for each eye
  • the corneal reflex is the reflection of light from the front surface of the eye (the cornea). It typically appears as a bright spot.
  • the corneal reflex CR is modeled as a four-connected region (defined below). In FIG. 3b, the corneal reflex is indicated by an overlaid region CR.
  • a four-connected region is defined as follows. Formally, let I denote an image, consisting of a two-dimensional raster of pixels, organized on an integer grid of R rows and C columns. Let (r, c) denote a pixel location in image I. A four-connected path P between two pixels (r_1, c_1) and (r_N, c_N) is defined as a sequence of pixels P = {(r_1, c_1), (r_2, c_2), ..., (r_N, c_N)} in which consecutive pixels differ by exactly one row or one column (Eq. 3).
  • a four-connected region S is defined as:
  • Eq.4 describes any set of pixels in I contiguously connected, such that any pixel in the set may be reached from any other pixel in the set by at least one path of pixels also in the set.
  • the possible as well as actual corneal reflexes, crescent reflexes, and other abnormal pupil areas, as described in this work, are all modeled as four-connected regions.
  • One aspect of the invention is to derive necessary parameters from a digitized image of a patient's face and eyes sufficient to locate each eye and generate a model for each eye as described above.
  • the invention advantageously utilizes the characteristics of a particular illuminating flash, which produces light reflexes not seen in normal intensity imagery.
  • the preferred embodiment of the invention specifically takes advantage of such light reflexes using special image processing to locate and model the eye so as to enable automated photorefractive screening.
  • each eye is assumed to contain a corneal reflex CR, which appears as a saturated (bright) spot somewhere inside the pupil.
  • Each pupil is assumed to contain some amount of red shading, which creates a prominent gradient in the red band at the pupil-iris boundary. (This is referred to below as the "red-black" gradient for convenience. However, a similar boundary can be ascertained in gray-scale images as distinct differences in grayscale gradients.)
  • Each sclera is assumed to appear as a shade of white, which creates a prominent gradient in full color at the iris-sclera boundary.
  • FIG. 5 is a flowchart showing an overview of the steps of the image processing of the preferred embodiment of the invention.
  • the steps of the preferred process for locating, modeling, analyzing, and diagnosing a patient's eyes are as follows, each of which is discussed more fully below:
  • the camera 10 and near-axis flash 12 are used to capture an image of a patient's face and eyes. If the camera 10 is digital, the image data may be directly downloaded to the computer 18 for processing as a digital image. If the camera 10 is a conventional film camera, such as an instant photography camera, the images are optically scanned into the computer 18 and stored as a digital image. The unprocessed images may be displayed on a computer monitor or printed (monochrome or color), and may be annotated using suitable conventional graphics software.
  • Each three-band (i.e., RGB) image input into the computer 18 is preferably thresholded in each band to locate areas that are possible corneal reflexes.
  • the images are logically AND'd together to produce a binary image of bright areas.
  • Bright pixels are then spatially grouped into four-connected regions using a queue-based paint-fill algorithm.
  • Bright regions with areas within a selected size range are labeled as possible or hypothesized corneal reflexes.
  • the input image I is thresholded to produce a binary image I_bright indicating which pixels are bright:
  • I_red, I_green, and I_blue are the individual band values of the input image
  • a value of 1 in I_bright indicates a pixel is bright
  • T_B is a selected threshold value.
  • a queue-based paint-fill algorithm is then applied to spatially group bright pixels into four-connected regions.
  • T_S and T_L are selectable threshold values.
  • For each possible corneal reflex (region) LABEL, a set of pixels is found circularly around the corneal reflex, at distances within the expected range of pupil and iris radii, that exhibit strong gradients. This is accomplished by considering the centroid of the region, together with pixels along an evenly distributed set of compass directions from the centroid, as forming a set of rays. A linear gradient filter is applied to the pixels along a segment of each ray. For each segment, the pixel with the highest black-to-white gradient and the pixel with the highest red-to-black gradient are noted.
  • The centroid (x_c, y_c) of the four-connected region with the value LABEL is found as:
  • The centroid, together with an equiangular set of compass directions, forms a set of rays:
  • T_θ (degrees) is a selectable algorithm parameter that controls the angular resolution of the set.
  • The pixels (integer coordinates) along each ray θ are enumerated between:
  • T_1 and T_4 are algorithm parameters describing the minimum and maximum expected radius (in pixels) of the pupil and iris, respectively.
  • N_θ is the number of points in the list returned by the above pseudocode.
  • a 1x7 linear gradient filter, shown in Table 1, is convolved with the points along each ray to compute a black-to-white gradient BW_θ:i (Eq. 12) and a red-to-black gradient RB_θ:i (Eq. 13) estimate at each pixel on each ray:
  • T_2 and T_3 are algorithm parameters describing the maximum and minimum expected radius (in pixels) of the pupil and iris, respectively.
  • circle equations are fit to the gradient points discovered in the previous step. For each possible corneal reflex, one circle is fit to the black-to-white gradient pixels and one circle is fit to the red-to-black gradient points.
  • the preferred embodiment of the invention uses a novel method for eliminating outliers in the circle fit, based upon subarcs. The removal of outliers as arc subsets follows naturally from the problem context. If an eye is imaged in a non-forward orientation, or if an eyelid covers some portion of the iris, or if the retinal reflex fills only part of the pupil, then fitting a circle to the strongest large subarc generally yields the most reliable model.
  • a circle equation is fit to eight subsets of each type of gradient point. Each subset covers 270° of arc, starting at 45° increments.
  • Three strength measures are computed for each circle fit: residual, average gradient, and arc coverage. These values are normalized and summed as a strength score.
  • the circle (out of 8 possible, in the illustrated embodiment) with the overall best strength score is taken as the model for the given hypothesized corneal reflex. (An illustrative sketch of this subarc fit-and-score step appears at the end of this list.)
  • a, b and r are the model parameters and N is the number of points to be modeled.
  • the following procedure is used to derive the pupil-iris boundary (model (A, B, C_1), Eq. 1) from the set of red-to-black gradient points (RAY_θ:r, Eq. 17) and the iris-sclera boundary (model (A, B, C_2), Eq. 2) from the set of black-to-white gradient points (RAY_θ:w, Eq. 16).
  • FIGS. 6a-6d are close-up photographs of a patient's eye, showing various stages in the removal of outliers.
  • FIG. 6a is a close-up photograph of a patient's eye without a superimposed model.
  • FIG. 6b shows the black-to-white gradient points 60 located for the image in FIG. 6a. These pixels illustrate a spurious response to an eyelid as well as some unstructured outliers.
  • a popular method for fitting in the presence of outliers is the least-median-of-squares method (see, e.g., P.
  • the preferred embodiment uses several constraints combined with a novel method to eliminate outliers in O(n) time:
  • the residual, average gradient strength, and arc coverage are used to calculate a score.
  • the score values all depend upon the number of inliers used for fitting the circle. This number, denoted N_in, is computed as 360/T_θ (the number of pixels in the original set RAY_θ:w or RAY_θ:r) minus the count of pixels discarded or omitted by any of the appropriate steps given in Eq. 27-33.
  • the residual is computed as:
  • residual = (1 / N_in) * Σ | sqrt((x_i - a)^2 + (y_i - b)^2) - r |, summed over the inlier points (x_i, y_i)
  • the average gradient strength for each subarc SUBARC_θ:w is computed as:
  • the maximum score in each case is 3.
  • the weighting factors were derived experimentally.
  • a possible iris-sclera boundary radius C_2 (Eq. 2) is calculated as the mode radius of BW_θ:w, where the mode is determined as the highest count at an integer radius, where the count at each radius is computed as:
  • the diminished score reflects the diminished confidence in the model.
  • Each hypothesized corneal reflex now has an overall model strength (i.e., likelihood of being an eye).
  • the set of strengths is sorted.
  • the two strongest models are selected as the correct eye models.
  • the correct eye models may be indicated by output from a computer, including by superimposing a graphical representation of each model on an image of the corresponding eye.
  • each possible corneal reflex LABEL that resulted in a possible eye model has an associated score (Eq. 43).
  • the two possible eye models with the highest total score are taken to be the actual eye models, denoted as (A_left, B_left, C_1:left, C_2:left) and (A_right, B_right, C_1:right, C_2:right)
  • the centers of the eye models must be at least two eye diameters separated from each other:
  • the orientation of the patient's face in the image is assumed to be either vertical or horizontal, but in either case level with one of the parallel sets of image boundaries. This may be verified by:
  • Test A: In normal eyes, the corneal reflexes will be only slightly off-center (the slight offset is caused by the distance between the camera flash and the center of the camera lens). In this test, for each eye, the offset of the corneal reflex is measured as the ratio of the distance between the corneal reflex centroid and the center of the eye, to the radius of the iris:
  • If both corneal reflexes are off-center, then the patient was not looking directly into the camera, and the computer system so indicates. If only one corneal reflex is off-center, the patient has a tropia and the computer system indicates that the patient should be referred to a medical eyecare specialist. Formally, these conditions are tested as:
  • T Pain is a selectable threshold.
  • Test B: In normal eyes, the retinal reflexes will appear equal and uniform in both eyes. To test for such uniformity, note that for each eye model (left and right) there is a corresponding pair of concentric circles modeled by (A, B, C_1, C_2) (Eq. 1-2), and a corneal reflex modeled by CR (Eq. 4) and its associated centroid modeled by (x_c, y_c) (Eq. 8).
  • the retinal reflex RR of each eye model preferably is measured as the average red value of pixels within the pupil-iris boundary, excluding pixels labeled as belonging to the corneal reflex and its two-deep surrounding border (the border preferably is excluded to minimize error).
  • PR is the set of pixels in the pupil to be used for computing the retinal reflex
  • T_Range is a selectable threshold
  • Test C: Pixels inside the pupillary boundary (excluding the corneal reflex) that are brighter than the average intensity inside the pupil are segmented into regions. Any region of sufficient size, with sufficient perimeter adjacent to the pupillary boundary, is labeled as a crescent reflex. In normal eyes, no crescent reflexes will be seen. If either eye exhibits a crescent reflex, then the patient has a refractive error and the computer system indicates that the patient should be referred to a medical eyecare specialist. Any region of sufficient size, but not adjacent to the pupillary boundary, is labeled as an abnormal pupil area (e.g., the region may represent a cataract), and the computer system indicates that the patient should be referred to a medical eyecare specialist. Note that if the pupil-iris boundary is not located for an eye, then the crescent reflex and abnormal pupil area tests are undefined in the preferred embodiment of the invention.
  • the average blue intensity inside the pupil, denoted BR, is calculated as:
  • the perimeter of each bright region is calculated as the number of bright pixels adjacent to non-bright pixels:
  • the adjacency of each bright region with the pupillary boundary is calculated as the number of bright pixels within a small distance of the pupil-iris circle:
  • Bright regions are classified according to the following criteria:
  • FIG. 7 is a flowchart of a preferred examination sequence using the invention, once an image is acquired and the eyes are located, modeled, and analyzed.
  • a determination is made as to whether the corneal and retinal light reflexes are visible in the image taken by the camera 10 (STEP 100). If not, then the examination should be repeated (i.e., another image obtained) (STEP 102). If so, then a determination is made as to whether the corneal light reflexes are centered (STEP 104). If not, then an indication is given that the patient may have strabismus, and should be referred to further examination (STEP 106). Otherwise, a determination is made as to whether the retinal (red) light reflex in both eyes is equally bright (STEP 108).
  • If not, an indication is given that the patient may have a retinal problem, and should be referred to further examination (STEP 110). Otherwise, a determination is made as to whether a retinal crescent light reflex exists (STEP 112). If so, then an indication is given that the patient may have a refractive error, and should be referred to further examination (STEP 114). Otherwise, a determination is made as to whether other abnormal areas exist in the retinal light reflexes (STEP 116). If so, then an indication is given that the patient may have a possible media opacity, and should be referred to further examination (STEP 118). If not, the test sequence ends (STEP 120). Note that other test sequences may also be devised, and the tests described below may be done in other orders and/or terminated after any tentative diagnosis step.
  • the patient 16 is expected to be directly facing the camera, photographed at a roughly known distance from the camera, using a known lens 14. This consistent configuration yields an expected range of image-apparent sizes of visual anatomical features.
  • the constraint that the patient 16 not be too far from the camera 10 does not necessarily reflect on the ability of the inventive system to locate and model the eyes. Rather, this constraint is imposed to ensure the eye regions are imaged by a sufficient number of pixels for making statistically sound measurements on which to base photoscreening decisions.
  • the patient's eyes should be dilated. Typically, the normal dilation that occurs after three to five minutes in a darkened room is acceptable.
  • the ambient lighting level in the room in which the picture is taken should be as dark as possible, to preserve dilation.
  • Although the embodiment of the invention described herein is designed to be robust in the presence of a confusing background, the image is expected to be reasonably free of clutter.
  • the background should not contain life-size pictures of people (particularly faces). Normal jewelry is acceptable, but anything which has an eye-like appearance should be removed.
  • the invention may be implemented in hardware or software, or a combination of both (e.g., programmable logic arrays). Unless otherwise specified, the algorithms included as part of the invention are not inherently related to any particular computer or other apparatus. In particular, various general purpose machines may be used with programs written in accordance with the teachings herein, or it may be more convenient to construct more specialized apparatus to perform the required method steps. However, preferably, the invention is implemented in one or more computer programs executing on programmable systems each comprising at least one processor, at least one data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code is applied to input data to perform the functions described herein and generate output information. The output information is applied to one or more output devices, in known fashion.
  • Each such program may be implemented in any desired computer language (including machine, assembly, high level procedural, or object oriented programming languages) to communicate with a computer system.
  • the language may be a compiled or interpreted language.
  • Each such computer program is preferably stored on a storage media or device (e.g., ROM, CD-ROM, or magnetic or optical media) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage media or device is read by the computer to perform the procedures described herein.
  • the inventive system may also be considered to be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
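
The following is an illustrative sketch, not the patent's own source code, of the subarc circle-fitting and scoring step referenced in the list above: an algebraic least-squares circle fit is applied to each of eight 270-degree subarcs of gradient points (offset in 45-degree increments), each fit is scored, and the best fit is kept. The function names, the Kasa-style fit, and the simplified two-term score (the patent combines residual, average gradient strength, and arc coverage) are assumptions made for this sketch.

```python
# Illustrative sketch only, not the patent's implementation.
import math
import numpy as np

def fit_circle(points: np.ndarray):
    """Algebraic least-squares (Kasa) fit: returns centre (a, b) and radius r."""
    x, y = points[:, 0], points[:, 1]
    M = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    d = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(M, d, rcond=None)
    r = math.sqrt(c + a ** 2 + b ** 2)
    return a, b, r

def best_subarc_circle(points: np.ndarray, centroid):
    """points: N x 2 gradient pixels around a hypothesized corneal reflex."""
    cx, cy = centroid
    angles = np.degrees(np.arctan2(points[:, 1] - cy, points[:, 0] - cx)) % 360
    best = None
    for start in range(0, 360, 45):                       # eight 270-degree subarcs
        subset = points[(angles - start) % 360 <= 270]
        if len(subset) < 3:
            continue
        a, b, r = fit_circle(subset)
        dist = np.hypot(subset[:, 0] - a, subset[:, 1] - b)
        residual = float(np.mean(np.abs(dist - r)))       # mean radial error
        coverage = len(subset) / len(points)              # fraction of rays kept
        score = coverage - residual / max(r, 1.0)         # crude combined score
        if best is None or score > best[0]:
            best = (score, (a, b, r))
    return best
```

In the procedure described above, such a fit would be run separately on the red-to-black and black-to-white gradient point sets to model the pupil-iris and iris-sclera circles, and the resulting strengths would then be sorted to choose the two best eye models.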

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Surgery (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

A system and method for locating and modeling eyes in imagery for automated photorefractive screening. The invention includes a system and method for locating a patient's (16) eyes in a digital image that includes each eye as illuminated by a near-axis flash (12), including automatically finding light reflexes in the digital images as indicative of the location of each eye. Automatically finding light reflexes includes analyzing such light reflexes to determine possible pupil and sclera borders. The invention further includes automatically fitting a corresponding model to such possible pupil and sclera borders, analyzing the model of each eye to determine possible abnormalities in each eye; and outputting a possible diagnosis for each eye based on such analyzing. Other aspects of the invention include measuring retinal reflexes and corneal reflexes from the indicated eye models as an indicator of anomalies in the patient's (16) eyes, and generating a digital image of each of a patient's (16) eyes with a camera having a flash (12) positioned near to a center line of a lens of the camera (10) so as to generate images with bright, sharp light reflexes.

Description

AUTOMATED PHOTOREFRACTIVE SCREENING
TECHNICAL FIELD
This invention relates to instruments for measuring characteristics of eyes, and more particularly to a system and method for locating and modeling eyes in imagery for automated photorefractive screening, and for enabling determination of the presence of anomalies in the patient's visual anatomy.
BACKGROUND
A well-known eye examination consists of shining a light into the interior of the eye and visually inspecting the uniformity of the reflected light, normally visible as a red color filling the pupil. Any deviation from uniformity, either in one eye or between the pair of eyes, indicates a potential problem in the patient's visual anatomy. Such light refractive screening is thus a useful tool in patient eye care.
Ocular disorders such as strabismus, various forms of refractive errors (myopia, hyperopia, and astigmatism) and opacities of the ocular media are the leading causes of amblyopia, or vision loss. These combine to cause amblyopia in approximately 5% of the population. However, some form of visual problem, not necessarily leading to amblyopia, may be present in over 20% of children.
Amblyopia does not simply decrease visual acuity. In addition to loss of recognition acuity, there is loss of grating acuity, vernier acuity, sensitivity to contrast, distortions of shape, locations in space, motion and worsening of crowding stimulus. Moreover, amblyopia is the leading cause of monocular vision loss in people under the age of 30.
Although monocular vision loss does not impact intellectual capacity, strabismus and other disfiguring disorders can have a tremendous emotional impact. Undetected need for glasses obviously can make school performance more difficult during crucial early grades. Lack of early school success can compound itself and lead to unfulfilled scholastic potential. Further, there are professions in which outstanding visual performance is required. Delaying recognition of various visual disorders may limit children from these future career choices. Human visual development can be considered as having three stages: (1) the period of development of visual acuity to approximately 3-5 years of age; (2) the period from which deprivation is effective in causing amblyopia, from a few months to 7-8 years; and (3) the period from which recovery from amblyopia can be attained partially or fully (time of amblyopia to at least teenage years). Because of the foregoing, clinical intervention in amblyopia is most efficacious if it takes place as soon as possible. Over the last 20 years, investigators have shown that many strabismic and amblyopic states result in abnormal visual experience in early life and these can be prevented or reversed with early detection and intervention. Therefore, identification of the defect at the earliest possible moment is crucial. Some studies have suggested that, for the best possible outcome, this must occur within two years of birth.
Despite this, previous reports have shown that primary care physicians are not always utilizing currently available screening techniques. In fact, one large study estimated that pediatricians are screening less than 40% of children age three or younger. This may be because of impracticality, either from a clinical or practical standpoint. In fact, the National Institute of Health has made it one of its priorities to improve detection of refractive errors, strabismus, and amblyopia in infants and young children. This priority calls specifically for the study of better, and more cost-effective public health methods for testing visual function in preverbal children. Thus, the inventors have perceived a need for automated photorefractive screening. In order to perform automated photorefractive screening, accurate models of the eye are necessary. The inventors have determined that it would be useful to have an automated way of generating such models and presenting such models for diagnosis, either automatically or by a physician or other care-giver. Accordingly, the present invention is directed to a system and method for locating and modeling eyes in imagery for automated photorefractive screening, and for enabling determination of the presence of anomalies in the patient's visual anatomy.
The problem solved by the invention differs from other eye and face detection problems. Pupil-tracking may be accomplished via a head-mounted apparatus (for instance, see H. Kawai and S. Tamura, "Eye movement analysis system using fundus images", in
Pattern Recognition, 19(1), 1986, pp. 77-84), which fixes the location of the eyes relative to an image. Eye tracking may be accomplished via a constrained updating of an eye model (for instance, see X. Xie, R. Sudhakar and H. Zhuang, "On improving eye feature extraction using deformable templates", in Pattern Recognition Letters, 27(6), 1994, pp. 791-799; A. Yuille and P. Hallinan, "Deformable Templates", Chapter 2 in Active Vision, MIT Press, 1992, pp. 21-38). Face recognition (for instance, see International Conference on Automatic Face and Gesture Recognition, proceedings of, edited by M. Bichsel; 1995; Second International Conference on Automatic Face and Gesture Recognition, proceedings of, 1996, in Killington, Vermont) may be accomplished using a variety of image transformations or feature extractions. Additional information may be found in several general image processing references, including K. Castleman, Digital Image Processing, Prentice-Hall, 1996; R. Haralick, L. Shapiro, Computer and Robot Vision, 1, 2, Addison, 1992; and R. Jain, R. Kasturi and B. Schunck, Machine Vision, McGraw-Hill, 1995.
SUMMARY
A system and method for locating and modeling eyes in imagery for automated photorefractive screening. The preferred embodiment of the invention includes a digital camera having a lens-mounted flash for obtaining a digital image of the face of an individual, and a suitably programmed processor, such as a general purpose digital computer, for locating an eye of the individual in the digital image, modeling structures in the eye, analyzing the digitized eyes of the individual for eye disease, and providing a recommendation for treatment.
More particularly, in one aspect, the invention includes a system and method for locating a patient's eyes in a digital image that includes each eye as illuminated by a near-axis flash, including automatically finding light reflexes in the digital images as indicative of the location of each eye. Automatically finding light reflexes includes analyzing such light reflexes to determine possible pupil and sclera borders. The invention further includes automatically fitting a corresponding model to such possible pupil and sclera borders, analyzing the model of each eye to determine possible abnormalities in each eye; and outputting a possible diagnosis for each eye based on such analyzing.
In another aspect, the invention includes a system and method for locating and modeling a patient's eyes in a digital image that includes each eye as illuminated by a near-axis flash, including finding and indicating bright spots in the digital images as possible corneal reflexes of each eye; finding and indicating red-black and black-white gradients, each comprising a set of gradient points, around such bright spots as possible pupil and sclera borders, respectively, of each eye; fitting a plurality of circles to subsets of such gradient points as possible models for each eye, each eye model having an associated strength; and sorting the eye models for each eye by strengths, and indicating the strongest corresponding eye model as best representing each eye. Another aspect of the invention includes measuring red reflexes and corneal reflexes from the indicated eye models as an indicator of anomalies in the patient's eyes. Yet another aspect of the invention includes generating a digital image of each of a patient's eyes with a camera having a flash positioned near to a center axis of a lens of the camera so as to generate images with bright, sharp light reflexes. The target audience for the invention is previously unreached large populations, such as school children. In order to keep the imaging process simple and cheap, no special apparatus is used. By using the characteristics of a near-axis flash and novel processing techniques, the eyes may appear anywhere in the image of the patient's face (simplifying use of the invention by field eyecare practitioners), the eyes of the patient need not fill the camera image frame, the camera may be at a distance from the patient that is not intrusive to the patient, and the patient's head need not be tightly constrained.
The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims.
DESCRIPTION OF DRAWINGS
FIG. la is a block diagram showing the preferred physical components of the invention as used for imaging an eye of a patient.
FIG. lb is a perspective view of a lens 14 and flash 12 in accordance with the invention.
FIG. 2a is a diagram showing the light path from a flash to a pair of normal eyes and back to a camera, as well as the appearance of the comeal reflexes from the camera view.
FIG. 2b is a diagram showing the light path from a flash to a pair of eyes, one of which is abnormal, and back to a camera, as well as the appearance of the comeal reflexes from the camera view.
FIG. 2c is a diagram of the retinal reflex 26 for a pair of normal eyes, showing uniform reflectance equal in both eyes and no refractive error.
FIG. 2d is a diagram of the retinal reflex for a pair of eyes, one of which has a retinal pathology that causes lesser reflectance. FIG. 2e is a diagram of the retinal reflex for a pair of eyes, each showing a crescent-shaped reflectance, which indicates the same type of refractive error in both eyes.
FIG. 2f is a diagram of the retinal reflex for two eyes, showing different crescent reflectances, which indicates differing amounts of refractive error in the eyes.
FIG. 3a is a close-up photograph of a patient's eye. FIG. 3b is a close-up photograph of a patient's eye with a superimposed graphical representation of the eye model used in the invention.
FIG. 4a is a close-up photograph of a patient's eye containing an abnormal pupil area,
FIG. 4b is a close-up photograph of a patient's eye containing an abnormal pupil area with a superimposed graphical representation of the eye model used in the invention. FIG. 5 is a flowchart showing an overview of the steps of the image processing of the preferred embodiment of the invention.
FIG. 6a is a close-up photograph of a patient's eye without a superimposed model.
FIG. 6b is a close-up photograph of a patient's eye, showing black-to-white gradient points located for the image in FIG. 6a. FIG. 6c is a close-up photograph of a patient's eye, illustrating the result of applying constraints to the black-to-white gradient points shown in Fig. 6b.
FIG. 6d is a close-up photograph of a patient's eye, showing the highest scoring subset of black-to-white gradient points from the set shown in FIG. 6c.
FIG. 7 is a flowchart of a preferred test sequence using the invention, once an image is acquired and the eyes are located and modeled.
Like reference numbers and designations in the various drawings indicate like elements.
DETAILED DESCRIPTION
Imaging System
FIG. la is a block diagram showing the preferred physical components of the invention as used for imaging the eyes of a patient. A digital camera 10 is modified to include a "near-axis" flash 12 positioned slightly off the optical axis of the lens 14 of the camera 10. The flash 12 may be, for example, a ring flash or one or more small flash units attached near the optical axis of the lens 14 of the camera 10. A suitable camera is the Model DC120 Digital Camera from Kodak Corporation, with an attached 6X telephoto lens 14 and a small flash unit 12 mounted to the front of the telephoto lens, slightly off-center. The telephoto lens magnifies the patient's eye within the image, thus increasing the number of pixels available for analysis.
The proximity of the flash 12 to the center of the lens 14 provides bright, sharp light reflections (reflexes) that enable the image processing aspects of the invention to model the patient's eyes accurately and reliably. Use of a flash rather than continuous lighting means that the patient's pupils can be fully open without causing significant discomfort to the patient. FIG. lb is a perspective view of a lens 14 and flash 12 in accordance with the invention. The flash 12 in the illustrated embodiment uses visible light. However, a flash that principally emits non-visible light, such as an infrared flash, may be used instead with a suitable camera 10. The camera 10 is used to capture and directly download a digital image of the face of a patient 16 to a conventional computer 18 for processing. In the preferred embodiment of the invention, the camera 10 is used to obtain a high quality digital image of the patient's face and eyes, preferably a full-color (e.g., RGB), high resolution (e.g., 8 bits per pixel for each color) image at least about 640x480 pixels in size, and preferably at least about 1024x768 pixels in size. Although a color image is preferred, the techniques of the invention can be applied to gray-scale images. However, for purposes of this explanation, a color digital image will be assumed.
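As a minimal illustration of the image requirements just described, the sketch below loads a captured photograph into an 8-bit-per-band RGB array and checks it against the minimum and preferred sizes. The use of Pillow and NumPy, and the function name, are assumptions for this sketch; the patent does not prescribe any particular software for this step.

```python
# Assumed helper, not part of the patent: load a face image as an 8-bit RGB array.
import numpy as np
from PIL import Image

MIN_W, MIN_H = 640, 480        # minimum acceptable size per the text
PREF_W, PREF_H = 1024, 768     # preferred size per the text

def load_face_image(path: str) -> np.ndarray:
    """Return an H x W x 3 array of 8-bit red, green, and blue values."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    if w < MIN_W or h < MIN_H:
        raise ValueError(f"image {w}x{h} is below the {MIN_W}x{MIN_H} minimum")
    if w < PREF_W or h < PREF_H:
        print("warning: below the preferred 1024x768 resolution")
    return np.asarray(img, dtype=np.uint8)
```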
In an alternative embodiment, the camera 10 may be a conventional film camera, such as an instant photography camera, the images of which are optically scanned into the computer 18. In another alternative embodiment, processing of the image in accordance with the invention is done within a specially programmed digital camera 10, allowing for an integrated photoscreening system.
Photoscreening
Photoscreening works by photographing two distinct light reflexes from the eye, a corneal reflex and a retinal reflex. Light from a flash 12 is bounced off of the air-tear film interface on the cornea of the eye. Since the corneal surface is essentially spherical, the closest point of the cornea to the camera 10 will reflect the light flash back to the camera as a corneal reflex. If the eye faces the camera, the light reflection point is normally centered in the pupil of the eye. If the eye has strabismus or a related defect, the light reflection point is not centered on the pupil.
Several corneal reflex conditions are shown in FIGS. 2a-2b. FIG. 2a is a diagram showing the light path from a flash to a pair of normal eyes and back to a camera, as well as the appearance of the corneal reflexes from the camera view. As shown, the corneal light reflex 20 is essentially centered within the outline of the iris 22. FIG. 2b is a diagram showing the light path from a flash to two eyes, one of which is abnormal, and back to a camera, as well as the appearance of the corneal reflexes from the camera view. As shown, the corneal light reflex 20 is essentially centered within the outline of the iris 22 for the left eye, but the corneal light reflex 24 for the right eye is off-center. Note that the tolerance of "centered" depends in part on the displacement of the flash 12 from the optical axis of the camera lens 14.
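This centering check can be made quantitative, as in Test A of the screening tests listed earlier in this document: the offset of the corneal reflex is the distance from its centroid to the center of the eye, divided by the iris radius. The sketch below is illustrative only; the function names and the default threshold are assumptions, not values from the patent.

```python
# Hedged sketch of a corneal-reflex centering check; names and threshold are placeholders.
import math

def corneal_offset(reflex_centroid, eye_center, iris_radius):
    """Return the offset ratio; values near 0 mean a well-centred reflex."""
    (xc, yc), (a, b) = reflex_centroid, eye_center
    return math.hypot(xc - a, yc - b) / iris_radius

def is_centred(offset_ratio, threshold=0.25):   # threshold value is a placeholder
    return offset_ratio <= threshold
```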
Light from a flash 12 also bounces off of the back of the eye through the retina. The retinal light reflex gives information on ocular pathologies and the refractive state of the eye. If one eye has any pathology (e.g., a tumor, blood, cataract, etc.), the retina does not reflect as much light as in a normal eye. The difference in reflectance between an abnormal retina and a normal retina is noticeable. Also, in an eye without refractive error, the retinal light reflex appears as a uniformly bright circle. If the eye has any refractive error, a crescent will appear in the retinal light reflex. The relative size of the crescent to the pupil diameter is generally related to the refractive error. Thus, the differences in reflectance that may be perceived can be global (differences between eyes) or local (differences in areas of an eye). Several retinal reflex conditions are shown in FIGS. 2c-2f. FIG. 2c is a diagram of the retinal reflex 26 for a pair of normal eyes, showing uniform reflectance equal in both eyes and no refractive error. FIG. 2d is a diagram of the retinal reflex for a pair of eyes, one of which has a retinal pathology that causes lesser reflectance 28. FIG. 2e is a diagram of the retinal reflex for a pair of eyes, each showing a crescent-shaped reflectance 30, which indicates the same type of refractive error in both eyes. FIG. 2f is a diagram of the retinal reflex for a pair of eyes, showing different crescent reflectances 32, 34, which indicates differing amounts of refractive error in the eyes.
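A hedged sketch of how crescent reflexes and other abnormal pupil areas might be segmented, in the spirit of Test C described earlier in this document: pupil pixels brighter than the pupil's average intensity are grouped into regions, and a sufficiently large region is labeled a crescent reflex if it touches the pupil-iris boundary, or an abnormal pupil area otherwise. The helper four_connected_regions is the queue-based grouping sketched later alongside the four-connected-region definition; the minimum-area value and all names are placeholders.

```python
# Illustrative sketch only; relies on four_connected_regions sketched later in this description.
import numpy as np

def classify_bright_pupil_regions(intensity, pupil_mask, boundary_mask, min_area=30):
    """intensity: 2-D array; pupil_mask and boundary_mask: boolean arrays of the same shape."""
    mean_pupil = intensity[pupil_mask].mean()
    bright = pupil_mask & (intensity > mean_pupil)       # brighter than the pupil average
    labels = four_connected_regions(bright)              # group into four-connected regions
    findings = []
    for lab in range(1, labels.max() + 1):
        region = labels == lab
        if region.sum() < min_area:                      # "sufficient size" placeholder
            continue
        touches_boundary = bool((region & boundary_mask).any())
        findings.append("crescent reflex" if touches_boundary else "abnormal pupil area")
    return findings
```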
Eye Model
In order to automate photoscreening, a model of the eyes in an image of a patient's face must be generated. In the preferred embodiment of the invention, the frontal projection of an eye is modeled by a pair of concentric circles:
(x - A)^2 + (y - B)^2 = C_1^2    (1)
(x - A)^2 + (y - B)^2 = C_2^2    (2)
where (A, B) are the coordinates for the center of the eye, C_1 denotes the boundary between the pupil and iris, and C_2 denotes the boundary between the iris and sclera. This model locates the eye in a two-dimensional coordinate system and establishes a means to measure the reflexes from the eye. After model parameters are generated, the model may be used to automatically generate a diagnosis or screening decision for the eye anatomy. As an example of how such a model "fits" an eye, FIG. 3a is a close-up photograph of a patient's eye, while FIG. 3b is a close-up photograph of a patient's eye with a superimposed graphical representation of the eye model used in the invention. C_1 denotes the iris-pupil boundary, and C_2 denotes the iris-sclera border. As another example, FIG. 4a is a close-up photograph of a patient's eye containing an abnormal pupil area. FIG. 4b is a close-up photograph of a patient's eye containing an abnormal pupil area with a superimposed graphical representation of the eye model used in the invention. In the preferred embodiment of the invention, a set of reflexes is measured for each eye. The corneal reflex is the reflection of light from the front surface of the eye (the cornea). It typically appears as a bright spot. The corneal reflex CR is modeled as a four-connected region (defined below). In FIG. 3b, the corneal reflex is indicated by an overlaid region CR. (Note that the corneal reflex CR in FIG. 3b is not necessarily located at the center of the model.) The corneal reflex is always present in an image of the eyes taken through the camera 10 and flash 12 described above. This fact is used to automatically locate the eyes in an image of a patient's face, even if the eyes do not fill the camera image frame.
The retinal (or red) reflex RR is part of the pupillary reflex, the reflection of light from the inner surface (retina) of the eye, returning back through the pupil. In a color image, the retinal reflex typically fills the pupil with a reddish tint. In the preferred embodiment, the retinal reflex RR is modeled as a single intensity value corresponding to the average amount of red color visible in the pupil. (In a grayscale image, the retinal reflex should appear as a distinctly identifiable set of gray values, and could be similarly modeled).
The crescent reflex MR is also part of the pupillary reflex. Depending on the curvature of the eye, and the location of the flash, a stronger concentration of reflected light, usually crescent-shaped, may appear around a portion of the boundary of the pupil. When present, the crescent reflex MR is preferably modeled as a four-connected region (defined below). In FIG. 3b, the crescent reflex has been displayed by an overlaid region MR. Other anatomical problems can also cause a non-uniform pupillary reflex. For instance, a cataract can cause a portion of the pupillary reflex to appear brighter than normal. These "other" abnormal pupil areas may occur in any portion of the pupillary reflex. When present, each abnormal pupil area AR is modeled as a four-connected region (defined below). In FIG. 4b, an abnormal pupil area has been displayed by an overlaid region AR.
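As an illustration of this model, the sketch below holds the concentric-circle parameters of Eq. 1-2 together with the corneal reflex region, and estimates the retinal reflex RR as the average red value of pupil pixels outside the corneal reflex. The class and function names are assumptions for this sketch; the patent's preferred embodiment also excludes a two-pixel border around the corneal reflex, which is omitted here for brevity.

```python
# Minimal sketch (names assumed, not from the patent).
from dataclasses import dataclass, field
import numpy as np

@dataclass
class EyeModel:
    A: float                                 # eye centre, x coordinate
    B: float                                 # eye centre, y coordinate
    C1: float                                # pupil-iris boundary radius
    C2: float                                # iris-sclera boundary radius
    CR: set = field(default_factory=set)     # corneal reflex pixels {(row, col), ...}

def retinal_reflex(image: np.ndarray, m: EyeModel) -> float:
    """image: H x W x 3 RGB array. Returns the mean red value inside the pupil circle."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    in_pupil = (xs - m.A) ** 2 + (ys - m.B) ** 2 <= m.C1 ** 2
    for r, c in m.CR:                        # exclude the corneal reflex region
        in_pupil[r, c] = False
    return float(image[:, :, 0][in_pupil].mean())
```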
A four-connected region is defined as follows. Formally, let I denote an image, consisting of a two-dimensional raster of pixels, organized on an integer grid of R rows and C columns. Let (r, c) denote a pixel location in image I. A four-connected path P between two pixels (r1, c1) and (rN, cN) is defined as:

P = {(r1, c1), (r2, c2), ..., (rN, cN)}, where

(r(i+1), c(i+1)) ∈ { (ri + 1, ci), (ri - 1, ci), (ri, ci + 1), (ri, ci - 1) }   ∀ i = 1 ... N-1, and

0 ≤ ri < R,  0 ≤ ci < C     (3)
A four-connected region S is defined as:

S = {(r1, c1), (r2, c2), (r3, c3), ..., (rM, cM)}, where a four-connected path P((ri, ci), (rj, cj)) contained in S exists ∀ i, j ∈ {1 ... M}     (4)
Informally, Eq. 4 describes any set of pixels in I contiguously connected, such that any pixel in the set may be reached from any other pixel in the set by at least one path of pixels also in the set. In the preferred embodiment, the possible as well as actual corneal reflexes, crescent reflexes, and other abnormal pupil areas, as described in this work, are all modeled as four-connected regions.
Locating, Modeling, Analyzing, and Diagnosing Eyes
One aspect of the invention is to derive necessary parameters from a digitized image of a patient's face and eyes sufficient to locate each eye and generate a model for each eye as described above. To accomplish this goal, the invention advantageously utilizes the characteristics of a particular illuminating flash, which produces light reflexes not seen in normal intensity imagery. The preferred embodiment of the invention specifically takes advantage of such light reflexes using special image processing to locate and model the eye so as to enable automated photorefractive screening.
More particularly, the presumed content of the camera image generated as described above is used advantageously to reduce the complexity of the search for possible eye models. Each eye is assumed to contain a corneal reflex CR, which appears as a saturated (bright) spot somewhere inside the pupil. Each pupil is assumed to contain some amount of red shading, which creates a prominent gradient in the red band at the pupil-iris boundary. (This is referred to below as the "red-black" gradient for convenience. However, a similar boundary can be ascertained in grayscale images as distinct differences in grayscale gradients.) Each sclera is assumed to appear as a shade of white, which creates a prominent gradient in full color at the iris-sclera boundary. (This is referred to below as the "black-white" gradient for convenience. A similar boundary can be ascertained in grayscale images as distinct differences in grayscale gradients.) These three features drive the search process. FIG. 5 is a flowchart showing an overview of the steps of the image processing of the preferred embodiment of the invention. In broad terms, the steps of the preferred process for locating, modeling, analyzing, and diagnosing a patient's eyes are as follows, each of which is discussed more fully below:
(1) Generate a digitized image of a patient's face and eyes, as illuminated by a near-axis flash (STEP 50).
(2) Find bright spots in the image as possible corneal reflexes (STEP 51).
(3) Find red-black and black-white gradients around bright spots as possible pupil and sclera borders (STEP 52).
(4) Fit circles to subarcs of gradient points as possible eye models (STEP 53).

(5) Sort the eye models by strengths (i.e., the likelihood of being correct models of an eye), and take the two best as hypothesized eyes (STEP 54) (this assumes that the patient has two eyes).

(6) Measure retinal and corneal reflexes from the selected eye models, and look for abnormalities or anomalies in such reflexes (STEP 55).

(7) Make a recommendation as to diagnosis (STEP 56).
(1) Generate a digitized image of a patient's face and eyes, as illuminated by a near-axis flash
As noted above, the camera 10 and near-axis flash 12 are used to capture an image of a patient's face and eyes. If the camera 10 is digital, the image data may be directly downloaded to the computer 18 for processing as a digital image. If the camera 10 is a conventional film camera, such as an instant photography camera, the images are optically scanned into the computer 18 and stored as a digital image. The unprocessed images may be displayed on a computer monitor or printed (monochrome or color), and may be annotated using suitable conventional graphics software.
(2) Find bright spots in the image as possible corneal reflexes
Each three-band (i.e., RGB) image input into the computer 18 is preferably thresholded in each band to locate areas that are possible corneal reflexes. The thresholded bands are logically AND'd together to produce a binary image of bright areas. Bright pixels are then spatially grouped into four-connected regions using a queue-based paint-fill algorithm. Bright regions with areas within a selected size range are labeled as possible or hypothesized corneal reflexes.
Formally, the input image I is thresholded to produce a binary image Ibright indicating which pixels are bright:

Ibright = 1  if Ired ≥ TB and Igreen ≥ TB and Iblue ≥ TB;  0 otherwise     (5)
where Ired, Igreen, and Iblue are the individual band values of the input image, Ibright = 1 indicates a pixel is bright, and TB is a selected threshold value. A queue-based paint-fill algorithm is then applied to spatially group bright pixels into four-connected regions. The pseudocode for this algorithm is given below (the image I referred to in the pseudocode is in this case the bright image Ibright):

initialize LABEL = 2
for each pixel P in the image I
    if I(P) = 1
        let Q be an empty queue of pixel indices
        add P to Q
        increase LABEL by 1
        set I(P) = LABEL
        until Q is empty do:
            select the top point, X, from Q
            for all 4-connected neighbors N of X do:
                if I(N) = 1
                    add N to Q
                    set I(N) = LABEL
                end if
            end for
            remove X from Q
        end until
    end if
end for
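The pseudocode above translates directly into the following runnable sketch (Python with numpy; the function and variable names are illustrative and not part of the original disclosure). As in the pseudocode, background pixels keep the value 0 and each bright region receives a unique label of 3 or greater:

from collections import deque
import numpy as np

def label_four_connected(binary):
    """Group pixels equal to 1 into four-connected regions.
    Returns an integer image in which each region carries a unique label >= 3."""
    img = binary.astype(np.int32)
    rows, cols = img.shape
    label = 2
    for r in range(rows):
        for c in range(cols):
            if img[r, c] == 1:
                label += 1
                img[r, c] = label
                queue = deque([(r, c)])
                while queue:                       # "until Q is empty"
                    xr, xc = queue.popleft()
                    for nr, nc in ((xr + 1, xc), (xr - 1, xc), (xr, xc + 1), (xr, xc - 1)):
                        if 0 <= nr < rows and 0 <= nc < cols and img[nr, nc] == 1:
                            img[nr, nc] = label    # label the neighbor and enqueue it
                            queue.append((nr, nc))
    return img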
Thereafter, the size of each bright region is counted as the number of pixels with the same label:
sizeLABEL = Σ(r,c) { 1  if Ibright(r, c) = LABEL;  0 otherwise }     (6)
where (r, c) denotes all pixel locations in the image. Regions satisfying
TS ≤ sizeLABEL ≤ TL     (7)

are considered to be possible corneal reflexes, where TS and TL are selectable threshold values.
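Combining Eq. 5-7, a minimal sketch of the bright-spot search might read as follows (Python; it reuses the label_four_connected sketch given earlier, the default threshold values are the calibrated values reported later in this document, and all names are illustrative):

import numpy as np

def candidate_corneal_reflexes(rgb, T_B=200, T_S=4, T_L=80):
    """rgb: H x W x 3 image array. Returns (label_image, list of candidate labels)."""
    bright = ((rgb[:, :, 0] >= T_B) &
              (rgb[:, :, 1] >= T_B) &
              (rgb[:, :, 2] >= T_B)).astype(np.uint8)        # Eq. 5: AND of the three bands
    labels = label_four_connected(bright)                    # queue-based paint fill
    candidates = []
    for lab in np.unique(labels):
        if lab < 3:                                          # 0 marks non-bright background
            continue
        size = int(np.sum(labels == lab))                    # Eq. 6
        if T_S <= size <= T_L:                               # Eq. 7
            candidates.append(int(lab))
    return labels, candidates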
(3) Find red-black and black-white gradients around bright spots as possible pupil and sclera borders
For each possible corneal reflex (region) LABEL, a set of pixels is found circularly around the corneal reflex, at distances within the expected range of pupil and iris radii, that exhibit strong gradients. This is accomplished by considering the centroid of the region, together with pixels along an evenly distributed set of compass directions from the centroid, as forming a set of rays. A linear gradient filter is applied to the pixels along a segment of each ray. For each segment, the pixel with the highest black-to-white gradient and the pixel with the highest red-to-black gradient are noted. Any direction for which the highest gradient points are not within the expected ranges of distance from the centroid, or for which the black-to-white gradient is closer to the centroid than the red-to-black gradient, is discarded as containing outlying stray pixels ("outliers").
Formally, the centroid (xc, yc) of the four-connected region with the value LABEL is found as:
xc = (1/sizeLABEL) Σ(r,c) { c  if Ibright(r, c) = LABEL;  0 otherwise },
yc = (1/sizeLABEL) Σ(r,c) { r  if Ibright(r, c) = LABEL;  0 otherwise }     (8)
The centroid, together with an equiangular set of compass directions, forms a set of rays:
(xc, yc) ... (xc + cos θ, yc + sin θ),   θ = 0, Tθ, 2Tθ, 3Tθ, ..., 360     (9)
where Tθ (degrees) is a selectable algorithm parameter that controls the angular resolution of the set. The pixels (integer coordinates) along each ray θ are enumerated between:
(xc + T1 cos θ, yc + T1 sin θ) ... (xc + T4 cos θ, yc + T4 sin θ)     (10)
according to the pseudocode set forth below, where T1 and T4 are algorithm parameters describing the minimum and maximum expected radius (in pixels) of the pupil and iris, respectively.
given x1,y1,x2,y2 (integers)
if x1 > x2 then swap x1,y1 with x2,y2
let x,y (real-valued variables) = x1,y1
loop
    add floor(x,y) to list
    if x2-x1 > |y2-y1| then
        x = x + 1
        y = y + (y2-y1)/(x2-x1)
    else
        if y2 > y1 then y = y + 1 else y = y - 1
        x = x + (x2-x1)/|y2-y1|
    end if
until |x2-x| < 0.5 and |y2-y| < 0.5
if swapped at beginning, reverse list
After such enumeration, let the list of points along each ray θ be denoted as:

RAYθ:i,   i = 1 ... Nθ     (11)
where Nθ is the number of points in the list returned by the above pseudocode.
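As an illustration only, the following sketch samples integer pixel coordinates along one ray between the radii T1 and T4 of Eq. 10. It is a simpler parametric alternative to the rasterization pseudocode above, not a transcription of it; duplicate pixels are dropped, consistent with the oversampling constraint described later. All names and the sub-pixel step size are assumptions:

import math

def ray_pixels(xc, yc, theta_deg, T1=6, T4=83):
    """Integer pixels along the ray from (xc, yc) in direction theta, radii T1..T4 (Eq. 10)."""
    theta = math.radians(theta_deg)
    points, seen = [], set()
    radius = T1
    while radius <= T4:
        p = (int(round(xc + radius * math.cos(theta))),
             int(round(yc + radius * math.sin(theta))))
        if p not in seen:                  # drop duplicates caused by rounding
            seen.add(p)
            points.append(p)
        radius += 0.5                      # sub-pixel step so no integer pixel is skipped
    return points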
Table 1
[Table 1, the coefficients of the 1×7 linear gradient filter, was presented as an image in the original document and is not reproduced here.]
A 1×7 linear gradient filter, shown in Table 1 above, is convolved with the points along the ray to compute a black-to-white gradient BWθ:i (Eq. 12) and a red-to-black gradient RBθ:i (Eq. 13) estimate at each pixel on each ray:

BWθ:i = Σ j=1...3 | Ired(RAYθ:i+j) - Ired(RAYθ:i-j) + Igreen(RAYθ:i+j) - Igreen(RAYθ:i-j) + Iblue(RAYθ:i+j) - Iblue(RAYθ:i-j) |,   i = T3 ... Nθ - 3     (12)

RBθ:i = Σ j=1...3 | Ired(RAYθ:i+j) - Ired(RAYθ:i-j) |,   i = 3 ... T2     (13)

where T2 and T3 are algorithm parameters describing the maximum and minimum expected radius (in pixels) of the pupil and iris, respectively.
The pixel with the strongest black-to-white gradient, RAYθ:w, is found such that:

BWθ:w ≥ BWθ:i   ∀ i = 3 ... Nθ - 3     (14)

The pixel with the strongest red-to-black gradient, RAYθ:r, is found such that:

RBθ:r ≥ RBθ:i   ∀ i = 3 ... Nθ - 3     (15)
These pixels form two lists of points, radially ordered about the possible corneal reflex, one point per ray for each list. The list of black-to-white gradient points is denoted as:

RAYθ:w,   θ = 0, Tθ, 2Tθ, ..., 360     (16)

The list of red-to-black gradient points is denoted as:

RAYθ:r,   θ = 0, Tθ, 2Tθ, ..., 360     (17)

Since these lists form closed loops, RAYθ = RAY(θ+360) by cyclic wrap-around.
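A condensed sketch of Eq. 12-17 follows (Python with numpy). The 1×7 filter coefficients shown are an assumption, since Table 1 is not reproduced in this text, and the index bookkeeping is simplified to the common range i = 3 ... Nθ - 3; all names are illustrative:

import numpy as np

FILTER = np.array([-1, -1, -1, 0, 1, 1, 1], dtype=float)    # assumed 1x7 gradient filter

def strongest_gradients(rgb, ray):
    """ray: list of (x, y) pixels ordered outward from the centroid.
    Returns (index of max black-to-white response, index of max red-to-black response)."""
    if len(ray) < 7:
        return None
    red   = np.array([rgb[y, x, 0] for x, y in ray], dtype=float)
    green = np.array([rgb[y, x, 1] for x, y in ray], dtype=float)
    blue  = np.array([rgb[y, x, 2] for x, y in ray], dtype=float)
    bw = np.abs(np.convolve(red + green + blue, FILTER, mode='same'))   # Eq. 12 (full color)
    rb = np.abs(np.convolve(red, FILTER, mode='same'))                  # Eq. 13 (red band)
    valid = slice(3, len(ray) - 3)                                      # i = 3 ... N_theta - 3
    w = 3 + int(np.argmax(bw[valid]))
    r = 3 + int(np.argmax(rb[valid]))
    return w, r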
(4) Fit circles to subarcs of gradient points as possible eye models
Using a least-squares solution, circle equations are fit to the gradient points discovered in the previous step. For each possible corneal reflex, one circle is fit to the black-to-white gradient pixels and one circle is fit to the red-to-black gradient points. The preferred embodiment of the invention uses a novel method for eliminating outliers in the circle fit, based upon subarcs. The removal of outliers as arc subsets follows naturally from the problem context. If an eye is imaged in a non-forward orientation, or if an eyelid covers some portion of the iris, or if the retinal reflex fills only part of the pupil, then fitting a circle to the strongest large subarc generally yields the most reliable model.
More particularly, using a least-squares solution, a circle equation is fit to eight subsets of each type of gradient point. Each subset covers 270° of arc, starting at 45° increments. Three strength measures are computed for each circle fit: residual, average gradient, and arc coverage. These values are normalized and summed as a strength score. The circle (out of 8 possible, in the illustrated embodiment) with the overall best strength score is taken as the model for the given hypothesized corneal reflex. Formally, let a set of pixels to be modeled by a circle:

(x - a)^2 + (y - b)^2 = r^2     (18)

be denoted as:

(xi, yi),  i = 1 ... N,  i.e., {(x1, y1), (x2, y2), ..., (xN, yN)}     (19)
where a, b and r are the model parameters and N is the number of points to be modeled. The following procedure is used to derive the pupil-iris boundary (model (A, B, C1), Eq. 1) from the set of red-to-black gradient points (RAYθ:r, Eq. 17) and the iris-sclera boundary (model (A, B, C2), Eq. 2) from the set of black-to-white gradient points (RAYθ:w, Eq. 16). In the first case, the substitution of notation is (a, b, r) = (A, B, C1) and (xi, yi) = (xθ:r, yθ:r), where i increments by one as θ increments by Tθ, and N = 360/Tθ. In the second case, the substitution of notation is (a, b, r) = (A, B, C2) and (xi, yi) = (xθ:w, yθ:w), where i increments by one as θ increments by Tθ, and N = 360/Tθ.
Let χ² denote an error term which sums the distances between the model (a, b, r) and each point (xi, yi) to give the least-squares fitting problem:

χ²(a, b, r) = Σ i=1...N ( r^2 - (xi - a)^2 - (yi - b)^2 )^2     (20)

where the solution is the model that minimizes χ². Expanding Eq. 20 yields:

χ²(a, b, r) = Σ i=1...N ( r^2 - xi^2 + 2 xi a - a^2 - yi^2 + 2 yi b - b^2 )^2     (21)
Using the substitution:
α = a^2 + b^2 - r^2     (22)

in Eq. 21 yields:

χ²(a, b, α) = Σ i=1...N ( -xi^2 + 2 xi a - yi^2 + 2 yi b - α )^2     (23)
which may be recognized as linear in the unknowns a, b and α. Given these conditions, a solution to Eq. 23 may be found by use of the well-known normal equations (for instance, see W. Press et al., Numerical Recipes in C: The Art of Scientific Computing, Cambridge Press, (2nd edition) 1992), written in matrix form as:
Ax = b     (24)

where A is N×3, x is 3×1, b is N×1, and the matrices are constructed as:

A(i, :) = [ 2xi   2yi   -1 ],   x = [ a   b   α ]^T,   b(i) = xi^2 + yi^2,   i = 1 ... N     (25)
The solution to Eq. 24 is:
x = (A^T A)^(-1) A^T b     (26)

from which a and b are found directly, and r is found by substitution back into Eq. 22. In the preferred embodiment, the Gauss-Jordan elimination method (see, for example, W. Press et al., Numerical Recipes in C: The Art of Scientific Computing, Cambridge Press, (2nd edition) 1992) is used to solve the matrix inversion in Eq. 26.
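The linear least-squares fit of Eq. 20-26 can be sketched as follows (Python with numpy). This sketch uses numpy's least-squares routine rather than the explicit Gauss-Jordan elimination named in the text, which is a design substitution; the names are illustrative:

import numpy as np

def fit_circle(points):
    """points: iterable of (x, y). Returns (a, b, r) of the least-squares circle of Eq. 18."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, -np.ones_like(x)])     # rows of Eq. 25: [2x, 2y, -1]
    bvec = x ** 2 + y ** 2                                    # right-hand side of Eq. 25
    (a, b, alpha), *_ = np.linalg.lstsq(A, bvec, rcond=None)  # solves Eq. 24/26
    r = float(np.sqrt(a ** 2 + b ** 2 - alpha))               # invert the substitution of Eq. 22
    return float(a), float(b), r

Letting a library solver handle Eq. 26 avoids forming A^T A explicitly; in exact arithmetic the recovered (a, b, r) match the normal-equation solution.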
Eq. 26 yields the optimal solution (best model) if all points (xi, yi) are "inliers" and the measurement noise is uniformly distributed. The points located via strong gradients rarely meet these constraints. For instance, FIGS. 6a-6d are close-up photographs of a patient's eye, showing various stages in the removal of outliers. FIG. 6a is a close-up photograph of a patient's eye without a superimposed model. FIG. 6b shows the black-to-white gradient points 60 located for the image in FIG. 6a. These pixels illustrate a spurious response to an eyelid as well as some unstructured outliers. A popular method for fitting in the presence of outliers is the least-median-of-squares method (see, e.g., P. J. Rousseeuw, "Least median of squares regression", in Journal of American Statistics Association, vol. 79, 1984, pp. 871-880, and P. Meer, D. Mintz and A. Rosenfeld, "Robust Regression Methods for Computer Vision: A Review", in International Journal of Computer Vision, 6:1, 1991, pp. 59-70). This algorithm works in the presence of up to 50% outliers, but is O(n²) in computational complexity.
To avoid slow computation, the preferred embodiment uses several constraints combined with a novel method to eliminate outliers in O(n) time:
(a) Any pixel RAYθ:w is discarded for which:
RBθ:w ≤ 100 (27)
(b) Any pixel RAYθ:r is discarded for which:
BWθ:r ≤ 100 (28)
In both cases (a) and (b), a gradient below 100 implies an almost imperceptible boundary that is probably a response to noise.
(c) Any pixel that is a duplicate of another pixel is discarded:
(xi, yi) = (xj, yj),   i ≠ j     (29)
Such duplicate pixels may appear if Tθ is too small, resulting in oversampling.

(d) For any pixel-pair RAYθ:w and RAYθ:r along the same ray θ, the pixels are discarded if:

w - r ≤ 3     (30)

where w and r denote the positions of the two points along the ray. This applies the natural constraint that the iris boundary must be outside the pupil boundary.
(e) If a pixel is found to be isolated, it is discarded. A pixel (xi, yi) is considered not to be isolated if:

√( (xi - xj)^2 + (yi - yj)^2 ) ≤ 2.5   for at least one j = i - 4 ... i + 4, j ≠ i     (31)
All discarded pixels are treated as outliers in that they are not used during the least-squares fitting procedure. FIG. 6c illustrates the result of applying Eq. 27-31 to the black-to-white gradient points shown in FIG. 6b. Note that only the unstructured outliers have been removed from the image. A set of structured outliers (due to a response from the eyelid) is still present.
For the remaining inliers, Eq. 26 is applied to eight subsets of the points RAYθ:w and to eight subsets of the points RAYθ:r. Each subset is formed by a set of points with a prescribed range of θ. The effect is that each subset contains points on a subarc of the original sampling circle. Formally, eight subsets of RAYθ:w and of RAYθ:r are respectively defined as:

SUBARCδ:θ:w = { RAY(θ+δ):w },   θ = 0, Tθ, 2Tθ, ..., 270;   δ = 0, 45, ..., 315     (32)

SUBARCδ:θ:r = { RAY(θ+δ):r },   θ = 0, Tθ, 2Tθ, ..., 270;   δ = 0, 45, ..., 315     (33)

Each subset describes the gradient points located in 270° of contiguous arc around the possible corneal reflex, starting at 45° increments.
For each subset SUBARCδ:θ:w or SUBARCδ:θ:r and its resulting circle (a, b, r), the residual, average gradient strength, and arc coverage are used to calculate a score. The score values all depend upon the number of inliers used for fitting the circle. This number, denoted Nin, is computed as 360/Tθ (the number of pixels in the original set RAYθ:w or RAYθ:r) minus the count of pixels discarded or omitted by any of the appropriate steps given in Eq. 27-33. The residual is computed as:
residual = (1/Nin) Σ i=1...N { | √( (xi - a)^2 + (yi - b)^2 ) - r |  if (xi, yi) ∈ inliers;  0 otherwise }     (34)

The average gradient strength for each SUBARCδ:θ:w is computed as:

gradientBW = (1/Nin) Σ { BWθ:w  if (xθ:w, yθ:w) ∈ inliers;  0 otherwise }     (35)

The average gradient strength for each SUBARCδ:θ:r is computed as:

gradientRB = (1/Nin) Σ { RBθ:r  if (xθ:r, yθ:r) ∈ inliers;  0 otherwise }     (36)

The arc coverage is computed as:

coverage = Nin / (2πr)     (37)
where r is the radius of the fitted circle. If coverage < 0.3, then the subset is deemed unreliable (the fitted points span only 30% or less of the circle boundary). If all eight subsets SUBARCδ:θ:r and all eight subsets SUBARCδ:θ:w are deemed unreliable, then the possible corneal reflex LABEL is disproved as a candidate eye model. The scores for gradientBW and gradientRB are capped at 500; any larger value is set to 500. The scores for each SUBARCδ:θ:r and SUBARCδ:θ:w are respectively computed as:
scoreδ:r = 1.0 - residual/1.5 + gradientRB/500 + coverage     (38)

scoreδ:w = 1.0 - residual/1.5 + gradientBW/500 + coverage     (39)
The maximum score in each case is 3. The weighting factors were derived experimentally.
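Putting the subarc machinery together, a compressed sketch of the fit-and-score loop of Eq. 32-39 might read as follows (Python; it reuses fit_circle from the earlier sketch, the inlier bookkeeping is simplified relative to Eq. 34-36, and all names are illustrative):

import math

def best_subarc_model(inliers, T_theta=2.0):
    """inliers: list of (x, y, gradient) tuples ordered by theta = 0, T_theta, ..., 360 - T_theta,
    with None where a point was discarded. Returns (score, (a, b, r)) or None."""
    n = len(inliers)
    per_arc = int(round(270.0 / T_theta))            # points spanning 270 degrees of arc
    best = None
    for delta in range(0, 360, 45):                  # delta = 0, 45, ..., 315 (Eq. 32-33)
        start = int(round(delta / T_theta))
        subset = [inliers[(start + k) % n] for k in range(per_arc)]
        subset = [p for p in subset if p is not None]
        if len(subset) < 3:
            continue
        a, b, r = fit_circle([(x, y) for x, y, _ in subset])
        if not math.isfinite(r) or r <= 0:
            continue
        n_in = len(subset)
        residual = sum(abs(math.hypot(x - a, y - b) - r) for x, y, _ in subset) / n_in   # Eq. 34
        gradient = min(sum(g for _, _, g in subset) / n_in, 500.0)     # Eq. 35-36, capped at 500
        coverage = n_in / (2 * math.pi * r)                            # Eq. 37
        if coverage < 0.3:
            continue                                                   # unreliable subarc
        score = 1.0 - residual / 1.5 + gradient / 500.0 + coverage     # Eq. 38-39
        if best is None or score > best[0]:
            best = (score, (a, b, r))
    return best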
If at least one subset of red-to-black gradient points SUBARCδ:θ:r is deemed reliable, then the highest scoring subset SUBARCΔ:θ:r is found such that:

scoreΔ:r ≥ scoreδ:r   ∀ δ = 0, 45, ..., 315     (40)
The parameters (a, b, r) of the circle fitted to the points SUBARCΔ:θ:r are used to derive the parameters for a possible pupil-iris boundary (Eq. 1) as follows: A = a, B = b, C1 = r. A possible iris-sclera boundary radius C2 (Eq. 2) is calculated as the mode radius of the black-to-white gradient points RAYθ:w, where the mode is determined as the highest count at an integer radius, where the count at each radius is computed as:
count(radius) = Σθ { 1  if round( √( (xθ:w - A)^2 + (yθ:w - B)^2 ) ) = radius;  0 otherwise }     (41)
If at least one subset of black-to-white gradient points SUBARCδ:θ:w is deemed reliable, then the highest scoring subset SUBARCΔ:θ:w is found such that:

scoreΔ:w ≥ scoreδ:w   ∀ δ = 0, 45, ..., 315     (42)
FIG. 6d shows the highest scoring subarc of black-to-white gradient points from the set shown in FIG. 6c. If no subset of red-to-black gradient points SUBARCδ:θ:r was deemed reliable, then the parameters (a, b, r) of the circle fitted to the points SUBARCΔ:θ:w are used to derive the parameters for an iris-sclera boundary (Eq. 2) as follows: A = a, B = b, C2 = r. In this case, no pupil-iris boundary is known, so the value of C1 (Eq. 1) is undefined.
The overall score for the possible eye model associated with the possible corneal reflex LABEL is calculated as:

scoreLABEL = scoreΔ:r + scoreΔ:w     (43)
Note that if no plausible subarc model is found for a pupil-iris circle, the first half of the overall score is 0. Although it is possible to model an eye without locating the pupil boundary, the diminished score reflects the diminished confidence in the model.
(5) Sort the eye models by strengths, and take the two best as hypothesized eyes
Each hypothesized corneal reflex now has an overall model strength (i.e., likelihood of being an eye). In the next step, the set of strengths are sorted. The two strongest models are selected as the correct eye models. The correct eye models may be indicated by output from a computer, including by superimposing a graphical representation of each model on images of a corresponding eye.
More particularly, each possible corneal reflex LABEL that resulted in a possible eye model (Eq. 1-2) has an associated score (Eq. 43). The two possible eye models with the highest total score, within the following constraints, are taken to be the actual eye models, denoted as:

(Aleft, Bleft, C1:left, C2:left) and (Aright, Bright, C1:right, C2:right)
First, the centers of the eye models must be at least two eye diameters separated from each other:
√( (Aleft - Aright)^2 + (Bleft - Bright)^2 ) > 2 (C2:left + C2:right)     (44)
Second, the orientation of the patient's face in the image is assumed to be either vertical or horizontal, but in either case level with one of the parallel sets of image boundaries. This may be verified by:

arctan( |Aleft - Aright| / |Bleft - Bright| ) < 30°   or   arctan( |Bleft - Bright| / |Aleft - Aright| ) < 30°     (45)

The user-given orientation of the patient's face in the image (either upright, sideways-left, or sideways-right) is used to determine which model in the strongest pair corresponds to the left eye and which model corresponds to the right eye. If no pair of possible eye models passes these constraints, or fewer than two possible eye models were found, then the image is reported as being atypical and processing halts in the preferred embodiment of the invention.
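A sketch of this selection step, including the separation and orientation constraints of Eq. 44-45, is given below (Python; candidate models are assumed to be dictionaries with the illustrative keys 'score', 'A', 'B' and 'C2'):

import math

def select_eye_pair(candidates):
    """Return the strongest pair of models satisfying Eq. 44-45, or None (atypical image)."""
    ranked = sorted(candidates, key=lambda m: m['score'], reverse=True)
    for i in range(len(ranked)):
        for j in range(i + 1, len(ranked)):
            m1, m2 = ranked[i], ranked[j]
            separation = math.hypot(m1['A'] - m2['A'], m1['B'] - m2['B'])
            if separation <= 2 * (m1['C2'] + m2['C2']):       # Eq. 44: two eye diameters apart
                continue
            dx = abs(m1['A'] - m2['A'])
            dy = abs(m1['B'] - m2['B'])
            level_horizontally = math.degrees(math.atan2(dy, dx)) < 30.0
            level_vertically   = math.degrees(math.atan2(dx, dy)) < 30.0
            if level_horizontally or level_vertically:        # Eq. 45
                return m1, m2
    return None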
(6) Measure retinal and corneal reflexes from the selected eye models, and look for abnormalities or anomalies in such reflexes
In the preferred embodiment, using the best pair of possible eye models as determined above, several tests are performed to discover if any disorders of the eye are visible in the image.
Test A: In normal eyes, the corneal reflexes will be only slightly off-center (the slight offset is caused by the distance between the camera flash and the center of the camera lens). In this test, for each eye, the offset of the corneal reflex is measured as the ratio of the distance between the corneal reflex centroid and the center of the eye, to the radius of the iris:

offset = √( (xc - A)^2 + (yc - B)^2 ) / C2     (46)
If both corneal reflexes are off-center, then the patient was not looking directly into the camera, and the computer system so indicates. If only one corneal reflex is off-center, the patient has a tropia and the computer system indicates that the patient should be referred to a medical eyecare specialist. Formally, these conditions are tested as:

offsetleft > Tcr or offsetright > Tcr → refer (possible tropia)     (47)

offsetleft > Tcr and offsetright > Tcr → subject not looking     (48)
where Tcr is a selectable threshold.

Test B: In normal eyes, the retinal reflexes will appear equal and uniform in both eyes. To test for such uniformity, note that for each eye model (left and right) there is a corresponding pair of concentric circles modeled by (A, B, C1, C2) (Eq. 1-2), and a corneal reflex modeled by CR (Eq. 4) and its associated centroid modeled by (xc, yc) (Eq. 8). The retinal reflex RR of each eye model preferably is measured as the average red value of pixels within the iris-pupil boundary, excluding pixels labeled as belonging to the corneal reflex and its two-deep surrounding border (the border preferably is excluded to minimize error). Formally, let the set of pixels in the pupil to be used for computing the retinal reflex be denoted as PR, defined as:
PR = all (r, c) ∈ I such that:

√( (c - A)^2 + (r - B)^2 ) ≤ C1, and

(r ± {0 ... 2}, c ± {0 ... 2}) ∉ CR     (49)
The retinal reflex RR is calculated as:
RR = [ Σ(r,c) { Ired(r, c)  if (r, c) ∈ PR;  0 otherwise } ] / [ Σ(r,c) { 1  if (r, c) ∈ PR;  0 otherwise } ]     (50)
where (r,c) denotes all pixel locations in the image. If the retinal reflexes have unequal luminance, the computer system indicates that the patient should be referred to a medical eyecare specialist. Formally, this is tested as:
| RRleft - RRright | > Trr → unequal luminance     (51)

where Trr is a selectable threshold.
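Tests A and B reduce to a few comparisons, sketched below (Python). The eye objects are assumed to carry the model parameters A, B, C2, the retinal reflex RR, and the corneal-reflex centroid (xc, yc); treating the calibrated value 20 as Trr is likewise an assumption:

import math

def corneal_offset(eye):
    """Eq. 46: corneal-reflex centroid offset from the eye center, over the iris radius."""
    return math.hypot(eye.xc - eye.A, eye.yc - eye.B) / eye.C2

def screen_reflexes(left, right, T_cr=0.2, T_rr=20):
    off_l, off_r = corneal_offset(left), corneal_offset(right)
    if off_l > T_cr and off_r > T_cr:
        return "subject not looking into the camera; repeat the examination"            # Eq. 48
    if off_l > T_cr or off_r > T_cr:
        return "possible tropia; refer to a medical eyecare specialist"                 # Eq. 47
    if abs(left.RR - right.RR) > T_rr:
        return "unequal retinal-reflex luminance; refer to a medical eyecare specialist"  # Eq. 51
    return "corneal and retinal reflexes within normal limits"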
Test C: Pixels inside the pupillary boundary (excluding the corneal reflex) that are brighter than the average intensity inside the pupil are segmented into regions. Any region of sufficient size, with sufficient perimeter adjacent to the pupillary boundary, is labeled as a crescent reflex. In normal eyes, no crescent reflexes will be seen. If either eye exhibits a crescent reflex, then the patient has a refractive error and the computer system indicates that the patient should be referred to a medical eyecare specialist. Any region of sufficient size, but not adjacent to the pupillary boundary, is labeled as an abnormal pupil area (e.g., the region may represent a cataract), and the computer system indicates that the patient should be referred to a medical eyecare specialist. Note that if the pupil-iris boundary is not located for an eye, then the crescent reflex and abnormal pupil area tests are undefined in the preferred embodiment of the invention.
Formally, the average green intensity inside the pupil, denoted GR, is calculated as:
GR = [ Σ(r,c) { Igreen(r, c)  if (r, c) ∈ PR;  0 otherwise } ] / [ Σ(r,c) { 1  if (r, c) ∈ PR;  0 otherwise } ]     (52)
The average blue intensity inside the pupil, denoted BR, is calculated as:
BR = [ Σ(r,c) { Iblue(r, c)  if (r, c) ∈ PR;  0 otherwise } ] / [ Σ(r,c) { 1  if (r, c) ∈ PR;  0 otherwise } ]     (53)
An image of bright pixels inside the pupil, denoted Ipupil, is created as:

Ipupil(r, c) = 1  if (r, c) ∈ PR and (Ired - RR) + (Igreen - GR) + (Iblue - BR) ≥ 10 and (Ired > 200 or Ired - RR > 30);  0 otherwise     (54)

where Ipupil = 1 indicates the pixel is bright. Bright pixels are spatially grouped into four-connected regions using the queue-based paint-fill pseudocode algorithm listed above (the image I referred to in the pseudocode is in this case Ipupil). The size of each bright region is counted as the number of pixels with the same label:
sizeLABEL = Σ(r,c) { 1  if Ipupil(r, c) = LABEL;  0 otherwise }     (55)
The perimeter of each bright region is calculated as the number of bright pixels adjacent to non-bright pixels:
perimLABEL = Σ(r,c) { 1  if Ipupil(r, c) = LABEL and at least one four-connected neighbor of (r, c) is not bright;  0 otherwise }     (56)
The contact of each bright region with the pupillary boundary is calculated as the number of bright pixels within a small distance of the pupil-iris circle:
contactLABEL = Σ(r,c) { 1  if Ipupil(r, c) = LABEL and | √( (c - A)^2 + (r - B)^2 ) - C1 | ≤ 2.5;  0 otherwise }     (57)
Bright regions are classified according to the following criteria:
sizeLABEL / (2πC2) > 0.05 and contactLABEL / perimLABEL > 0.25 → crescent reflex     (58)

sizeLABEL / (2πC2) > 0.05 and contactLABEL / perimLABEL ≤ 0.25 → abnormal pupil area     (59)

sizeLABEL / (2πC2) ≤ 0.05 → no label     (60)
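The classification rules of Eq. 58-60 can be sketched as a single function (Python; the names are illustrative):

import math

def classify_bright_region(size, perim, contact, C2):
    """Apply Eq. 58-60 to one bright region inside the pupil (size, perim, contact per Eq. 55-57)."""
    if size / (2 * math.pi * C2) <= 0.05:
        return "no label"                         # Eq. 60: region too small to classify
    if perim > 0 and contact / perim > 0.25:
        return "crescent reflex"                  # Eq. 58: suggests refractive error
    return "abnormal pupil area"                  # Eq. 59: suggests a media opacity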
(7) Make a recommendation as to diagnosis
FIG. 7 is a flowchart of a preferred examination sequence using the invention, once an image is acquired and the eyes are located, modeled, and analyzed. First, a determination is made as to whether the corneal and retinal light reflexes are visible in the image taken by the camera 10 (STEP 100). If not, then the examination should be repeated (i.e., another image obtained) (STEP 102). If so, then a determination is made as to whether the corneal light reflexes are centered (STEP 104). If not, then an indication is given that the patient may have strabismus, and should be referred for further examination (STEP 106). Otherwise, a determination is made as to whether the retinal (red) light reflex in both eyes is equally bright (STEP 108). If not, then an indication is given that the patient may have a retinal problem, and should be referred for further examination (STEP 110). Otherwise, a determination is made as to whether a retinal crescent light reflex exists (STEP 112). If so, then an indication is given that the patient may have a refractive error, and should be referred for further examination (STEP 114). Otherwise, a determination is made as to whether other abnormal areas exist in the retinal light reflexes (STEP 116). If so, then an indication is given that the patient may have a possible media opacity, and should be referred for further examination (STEP 118). If not, the test sequence ends (STEP 120). Note that other test sequences may also be devised, and the tests described above may be done in other orders and/or terminated after any tentative diagnosis step.
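Expressed as code, the decision chain of FIG. 7 might be sketched as follows (Python; the predicate names are illustrative placeholders for the outcomes of Tests A-C above):

def examination_sequence(result):
    """result: object with boolean attributes summarizing the reflex measurements."""
    if not result.reflexes_visible:
        return "repeat the examination (STEP 102)"
    if not result.corneal_reflexes_centered:
        return "possible strabismus; refer for further examination (STEP 106)"
    if not result.retinal_reflexes_equal:
        return "possible retinal problem; refer for further examination (STEP 110)"
    if result.crescent_reflex_present:
        return "possible refractive error; refer for further examination (STEP 114)"
    if result.abnormal_pupil_area_present:
        return "possible media opacity; refer for further examination (STEP 118)"
    return "screening sequence complete; no referral indicated (STEP 120)"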
Preferred Imaging Environment
A preferred examination protocol for human patients includes (1) measuring the distance from the camera 10 to the patient; (2) centering the camera on a horizontally central landmark of the patient, preferably the patient's nose; and (3) taking an image. In addition, the inventors have determined that a preferred imaging environment for using the camera 10 of FIG. 1a includes the following practical constraints for a photographer:

• A single human patient 16 is expected; thus exactly one pair of eyes per image is expected. A parent or guardian may be holding a young child on his or her lap, so long as the image does not include the eyes of the parent or guardian.
• The patient 16 is expected to be looking directly into the camera lens 14, with both eyes open, so that the appropriate reflexes from the eyes may be imaged. Generally, for the tests to be valid, the entire pupil of each eye needs to be visible.
• The path from the camera 10 to the patient 16 should be clear of all obstacles; for instance, the patient 16 should not be wearing glasses.
• The patient 16 is expected to be directly facing the camera, photographed at a roughly known distance from the camera, using a known lens 14. This consistent configuration yields an expected range of image-apparent sizes of visual anatomical features. The constraint that the patient 16 not be too far from the camera 10 does not reflect necessarily on the ability of the inventive system to locate and model the eyes. Rather, this constraint is imposed to insure the eye regions are imaged by a sufficient number of pixels for making statistically sound measurements for making photoscreening decisions.
• The patient's eyes should be dilated. Typically, the normal dilation that occurs after three to five minutes in a darkened room is acceptable. The ambient lighting level in the room in which the picture is taken should be as dark as possible, to preserve dilation.
• Although the embodiment of the invention described herein is designed to be robust in the presence of a confusing background, the image is expected to be reasonably free of clutter. For instance, the background should not contain life-size pictures of people (particularly faces). Normal jewelry is acceptable, but anything which has an eye-like appearance should be removed.
• The patient's eyes are expected to be roughly parallel with either the horizontal or vertical image boundary. Two orientations are allowed for testing for the defined visual problems.

Calibrating Algorithm Parameters
Several thresholds and parameters have been identified in the description of our algorithms which locate, model, and screen the eyes. The values for these parameters depend upon several calibration factors: the size (rows by columns, in pixels) of the digital image, the image quantization (in bits per pixel), the distance of the subject from the camera, the optical magnification of the lens (or lenses) mounted on the camera, the displacement of the flash from the center of the optical lens, and the power of the flash (in lumens). Calibration of the location of the near-axis flash was accomplished on an empirical basis. Development included compensation for a non-threatening distance from a patient. This required adjusting the flash location with respect to the optical axis of the camera lens in such a way as to maximize the image screening reflexes. Placement of the flash too close to the optical center overwhelms the response, whereas placement too far diminishes the response. Adjustment of the camera light sensor also was necessary to insure a standard number of lumens per flash. We have successfully constructed and calibrated two working systems based upon different cameras, thereby resulting in different values for the algorithm parameters; however, conceptually, changing these values does not change how the inventive system operates. Of the two configurations tested, we chose the one with the larger image resolution, because larger numbers of samples (in this case, pixels) generally result in statistically more sound decisions. The currently preferred configuration uses a Kodak DC120 digital camera, which captures a 1280x960x24 bit (RGB) image. Subjects are photographed at a distance of about 3.5 feet (i.e., slightly over one meter). This distance helps avoid changes in the refractive characteristics of a patient's eyes due to accommodation (i.e., muscle induced changes in the curvature of the eye due to focusing on close objects), and is less intrusive to the patient. The camera has a built-in 3x lens, to which has been attached an additional 2x lens, giving an effective 6x magnification. The flash is a standard model from Kodak Corporation, consisting of a flash tube encased in a reflector. The flash tube measures about 27.25 mm (length) by about 2.5 mm (diameter). The reflector measures about 7.8 mm (width) by about 25.1 mm (length) by about 5.5 mm (depth). The flash is placed about 17.25 mm from the optical axis (measured from the center of the reflector - which is also the center of the flash tube - to the center of the optical axis).
For this camera and flash configuration, values for the algorithm parameters described above have been calibrated as follows:
T1 = 6      TL = 80
T2 = 39     TB = 200
T3 = 17     Tθ = 2.0
T4 = 83     Trr = 20
TS = 4      Tcr = 0.2
Computer Implementation
The invention may be implemented in hardware or software, or a combination of both (e.g., programmable logic arrays). Unless otherwise specified, the algorithms included as part of the invention are not inherently related to any particular computer or other apparatus. In particular, various general purpose machines may be used with programs written in accordance with the teachings herein, or it may be more convenient to construct more specialized apparatus to perform the required method steps. However, preferably, the invention is implemented in one or more computer programs executing on programmable systems each comprising at least one processor, at least one data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code is applied to input data to perform the functions described herein and generate output information. The output information is applied to one or more output devices, in known fashion.
Each such program may be implemented in any desired computer language (including machine, assembly, high level procedural, or object oriented programming languages) to communicate with a computer system. In any case, the language may be a compiled or interpreted language. Each such computer program is preferably stored on a storage media or device (e.g., ROM, CD-ROM, or magnetic or optical media) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage media or device is read by the computer to perform the procedures described herein. The inventive system may also be considered to be implemented as a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
A number of embodiments of the present invention have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention. For example, certain of the steps described above for some of the processes are absolutely or relatively order-independent, and thus may be performed in an order other than as described. As another example, other diagnostic tests may be applied or devised that use the eye models generated by the invention. As yet another example, the inventive system and method may be used with non-human patients. Accordingly, other embodiments are within the scope of the following claims.

Claims

WHAT IS CLAIMED IS:
1. A computer implemented method for locating a patient's eyes in a digital image that includes each eye as illuminated by a near-axis flash, including the step of automatically finding light reflexes in the digital images as indicative of the location of each eye.
2. The method of claim 1, wherein the step of automatically finding light reflexes includes the step of analyzing such light reflexes to determine possible pupil and sclera borders.
3. The method of claim 2, further including the step of automatically fitting a corresponding model to such possible pupil and sclera borders.
4. The method of claim 3, further including the step of analyzing the model of each eye to determine possible abnormalities in each eye.
5. The method of claim 4, further including the step of outputting a possible diagnosis for each eye based on such analyzing.
6. A computer implemented method for modeling a patient's eyes in a digital image that includes each eye as illuminated by a near-axis flash, including the step of automatically fitting a model to each eye in the digital images based on locating light reflections in the digital images as possible light reflexes from each eye.
7. The method of claim 6, wherein the step of automatically fitting a model to the digital images includes the step of analyzing such possible light reflexes to determine possible pupil and sclera borders.
8. The method of claim 6, further including the step of analyzing the model of each eye to determine possible abnormalities in each eye.
9. The method of claim 8, further including the step of outputting a possible diagnosis for each eye based on such analyzing.
10. A computer implemented method for locating and modeling a patient's eyes in a digital image that includes each eye as illuminated by a near-axis flash, including the steps of:
(a) finding and indicating bright spots in the digital images as possible corneal reflexes of each eye;

(b) finding and indicating red-black and black-white gradients, each comprising a set of gradient points, around such bright spots as possible pupil and sclera borders, respectively, of each eye;
(c) fitting a plurality of circles to subsets of such gradient points as possible models for each eye, each eye model having an associated strength; and
(d) sorting the eye models for each eye by strengths, and indicating the strongest corresponding eye model as best representing each eye.
11. The method of claim 10, further including the step of measuring retinal reflexes and corneal reflexes from each indicated eye model as an indicator of possible abnormalities in each eye.
12. The method of claim 11, further including the step of outputting a possible diagnosis for each eye based on such measuring.
13. A computer implemented method for locating and modeling a patient's eyes, including the steps of: (a) illuminating the patient's eyes with a near-axis flash; (b) generating a digitized image that includes each eye; (c) finding and indicating bright spots in the digital image as possible corneal reflexes of each eye; (d) finding and indicating red-black and black-white gradients, each comprising a set of gradient points, around such bright spots as possible pupil and sclera borders, respectively, of each eye; (e) fitting a plurality of circles to subsets of such gradient points as possible models for each eye, each eye model having an associated strength; and (f) sorting the eye models for each eye by strengths, and indicating the strongest corresponding eye model as best representing each eye.
14. The method of claim 13, further including the step of measuring retinal reflexes and corneal reflexes from each indicated eye model as an indicator of possible abnormalities in each eye.
15. The method of claim 14, further including the step of outputting a possible diagnosis for each eye based on such measuring.
16. A computer implemented method for locating and modeling a patient's eyes, including the steps of: (a) generating a digital image of each of a patient's eyes with a camera having a flash positioned near to a center line of a lens of the camera so as to generate an image with bright, sharp light reflexes; (b) finding and indicating bright spots in the digital image as possible corneal reflexes of each eye; (c) finding and indicating red-black and black-white gradients, each comprising a set of gradient points, around such bright spots as possible pupil and sclera borders, respectively, of each eye; (d) fitting a plurality of circles to subsets of such gradient points as possible models for each eye, each eye model having an associated strength; and (e) sorting the eye models for each eye by strengths, and indicating the strongest corresponding eye model as best representing each eye.
17. The method of claim 16, further including the step of measuring retinal reflexes and corneal reflexes from each indicated eye model as an indicator of possible abnormalities in each eye.
18. The method of claim 17, further including the step of outputting a possible diagnosis for each eye based on such measuring.
19. The method of claims 13 or 16, wherein the step of finding and indicating bright spots in the digital image as possible corneal reflexes includes the steps of: (a) spatially grouping bright pixels of each digital image into four-connected regions; and (b) labeling such four-connected regions with areas within a selected size range as possible corneal reflexes.
20. The method of claim 19, wherein the step of finding and indicating red-black and black-white gradients includes the steps of: (a) determining a centroid for each possible corneal reflex; (b) defining a set of rays as each centroid together with pixels of the digital image along a set of compass directions from the centroid; (c) applying a gradient filter to a segment of each ray; (d) for each such segment, determining the pixel with the highest red-to-black gradient and the pixel with the highest black-to-white gradient; wherein the set of pixels with the highest red-to-black gradient from all segments comprise a possible pupil border, and wherein the set of pixels with the highest black-to-white gradient from all segments comprise a possible sclera border.
21. The method of claim 20, wherein the step of fitting a plurality of circles to subsets of such gradient points as possible eye models includes the steps of: (a) applying a least-squares solution to fit at least one circle equation in the form of subarcs to the gradient points comprising the possible pupil border and the gradient points comprising the possible sclera border, for each possible corneal reflex; and (b) computing a circle strength score for each fit of at least one circle equation.
22. The method of claim 21, wherein the step of sorting the eye models by strengths includes the steps of: (a) normalizing and sorting the circle strength scores for each eye; (b) selecting the circle equation with the best circle strength score as a model for the corresponding eye.
23. The method of claims 11 or 14, wherein the step of measuring retinal reflexes and corneal reflexes from each indicated eye model as an indicator of possible abnormalities in each eye includes the steps of determining and indicating if the retinal reflexes appear equal and uniform in both eyes.
24. The method of claims 11 or 14, wherein the step of measuring retinal reflexes and corneal reflexes from each indicated eye model as an indicator of possible abnormalities in each eye includes the steps of determining and indicating if only one corneal reflex is off-center.
25. The method of claims 11 or 14, wherein the step of measuring retinal reflexes and corneal reflexes from each indicated eye model as an indicator of possible abnormalities in each eye includes the steps of determining and indicating if a crescent reflex exists in any eye.
26. The method of claims 11 or 14, wherein the step of measuring retinal reflexes and corneal reflexes from each indicated eye model as an indicator of possible abnormalities in each eye includes the steps of determining and indicating for each eye if an abnormal pupil area exists spaced from the pupillary boundary indicated by the corresponding eye model.
27. A system for locating a patient's eyes and for enabling modeling of the patient's eyes and determination of the presence of anomalies in the patient's visual anatomy, including: (a) a camera having a flash positioned near to the optical axis of a lens of the camera for generating a digital image of each of a patient's eyes with bright, sharp light reflexes; and (b) a computer programmed to automatically find the light reflexes in the digital images as indicative of the location of each eye.
28. The system of claim 27, wherein the computer is further programmed to analyze such light reflexes to determine possible pupil and sclera borders.
29. The system of claim 28, wherein the computer is further programmed to fit a corresponding model to such possible pupil and sclera borders.
30. The system of claim 29, wherein the computer is further programmed to analyze the model of each eye to determine possible abnormalities in each eye.
31. The system of claim 30, wherein the computer is further programmed to output a possible diagnosis for each eye based on such analysis.
32. A camera for generating a digital image of each of a patient's eyes, comprising: (a) an electronic digital camera having an optical lens; (b) a flash positioned near to the optical axis of the lens during use of the camera such that digital images produced by the camera contain bright, sharp light reflexes from the eyes, wherein such bright, sharp light reflexes are capable of being analyzed by a computer system to locate and model the eyes.
PCT/US1998/024275 1997-11-14 1998-11-13 Automated photorefractive screening Ceased WO1999025239A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
AU15240/99A AU1524099A (en) 1997-11-14 1998-11-13 Automated photorefractive screening
EP98959444A EP1052928A1 (en) 1997-11-14 1998-11-13 Automated photorefractive screening
KR1020007005253A KR20010032112A (en) 1997-11-14 1998-11-13 Automated photorefractive screening
JP2000520683A JP2001522679A (en) 1997-11-14 1998-11-13 Automatic light reflection screening

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US6553797P 1997-11-14 1997-11-14
US60/065,537 1997-11-14
US09/173,571 1998-10-15
US09/173,571 US6089715A (en) 1998-10-15 1998-10-15 Automated photorefractive screening

Publications (1)

Publication Number Publication Date
WO1999025239A1 true WO1999025239A1 (en) 1999-05-27

Family

ID=26745703

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/024275 Ceased WO1999025239A1 (en) 1997-11-14 1998-11-13 Automated photorefractive screening

Country Status (5)

Country Link
EP (1) EP1052928A1 (en)
JP (1) JP2001522679A (en)
KR (1) KR20010032112A (en)
AU (1) AU1524099A (en)
WO (1) WO1999025239A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7255443B2 (en) 2002-03-05 2007-08-14 Chul Myung Choe Quantitative analysis apparatus for phenomena of glare and the method for the same
WO2010011785A1 (en) * 2008-07-23 2010-01-28 Indiana University Research & Technology Corporation System and method for a non-cooperative iris image acquisition system
CN111832344A (en) * 2019-04-17 2020-10-27 深圳熙卓科技有限公司 Dynamic pupil detection method and device

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004005167A (en) * 2002-05-31 2004-01-08 Matsushita Electric Ind Co Ltd Eye position specifying method and apparatus
JP4179606B2 (en) * 2003-06-09 2008-11-12 株式会社コーナン・メディカル Photorefractor
IL215883A0 (en) * 2011-10-24 2012-03-01 Iriss Medical Technologies Ltd System and method for indentifying eye conditions
JP6774136B2 (en) * 2015-01-20 2020-10-21 グリーン シー.テック リミテッド Methods and systems for automatic vision diagnosis
US10042181B2 (en) * 2016-01-27 2018-08-07 Johnson & Johnson Vision Care, Inc. Ametropia treatment tracking methods and system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5218387A (en) * 1990-05-21 1993-06-08 Nissan Motor Co., Ltd. Eye position detecting apparatus

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5218387A (en) * 1990-05-21 1993-06-08 Nissan Motor Co., Ltd. Eye position detecting apparatus

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7255443B2 (en) 2002-03-05 2007-08-14 Chul Myung Choe Quantitative analysis apparatus for phenomena of glare and the method for the same
WO2010011785A1 (en) * 2008-07-23 2010-01-28 Indiana University Research & Technology Corporation System and method for a non-cooperative iris image acquisition system
US8644565B2 (en) 2008-07-23 2014-02-04 Indiana University Research And Technology Corp. System and method for non-cooperative iris image acquisition
CN111832344A (en) * 2019-04-17 2020-10-27 深圳熙卓科技有限公司 Dynamic pupil detection method and device
CN111832344B (en) * 2019-04-17 2023-10-24 深圳熙卓科技有限公司 Dynamic pupil detection method and device

Also Published As

Publication number Publication date
EP1052928A1 (en) 2000-11-22
JP2001522679A (en) 2001-11-20
KR20010032112A (en) 2001-04-16
AU1524099A (en) 1999-06-07

Similar Documents

Publication Publication Date Title
US6089715A (en) Automated photorefractive screening
US12475561B2 (en) Methods and systems for ocular imaging, diagnosis and prognosis
Tobin et al. Detection of anatomic structures in human retinal imagery
Patton et al. Retinal image analysis: concepts, applications and potential
Niemeijer et al. Segmentation of the optic disc, macula and vascular arch in fundus photographs
US10413180B1 (en) System and methods for automatic processing of digital retinal images in conjunction with an imaging device
AU2019204611B2 (en) Assessment of fundus images
US11967075B2 (en) Application to determine reading/working distance
US20220198831A1 (en) System for determining one or more characteristics of a user based on an image of their eye using an ar/vr headset
Fritzsche et al. Automated model based segmentation, tracing and analysis of retinal vasculature from digital fundus images
CN110623629A (en) Visual attention detection method and system based on eyeball motion
KR20190022216A (en) Eye image analysis method
US6616277B1 (en) Sequential eye screening method and apparatus
Narasimha-Iyer et al. Integrated analysis of vascular and nonvascular changes from color retinal fundus image sequences
EP4006833B1 (en) Image processing system and image processing method
EP3769283B1 (en) Pupil edge detection in digital imaging
Gairola et al. Smartkc: Smartphone-based corneal topographer for keratoconus detection
Consejo et al. Detection of subclinical keratoconus with a validated alternative method to corneal densitometry
CN109215039B (en) Method for processing fundus picture based on neural network
WO1999025239A1 (en) Automated photorefractive screening
Giancardo Automated fundus images analysis techniques to screen retinal diseases in diabetic patients
Valencia Automatic detection of diabetic related retina disease in fundus color images
US20240277224A1 (en) Optical coherence tomography (oct) self-testing system, optical coherence tomography method, and eye disease monitoring system
Kwok et al. Democratizing Optometric Care: A Vision-Based, Data-Driven Approach to Automatic Refractive Error Measurement for Vision Screening
Wang Investigation of image processing and computer-assisted diagnosis system for automatic video vision development assessment

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 1020007005253

Country of ref document: KR

ENP Entry into the national phase

Ref country code: JP

Ref document number: 2000 520683

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 1998959444

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 1998959444

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1020007005253

Country of ref document: KR

NENP Non-entry into the national phase

Ref country code: CA

WWW Wipo information: withdrawn in national office

Ref document number: 1998959444

Country of ref document: EP

WWR Wipo information: refused in national office

Ref document number: 1020007005253

Country of ref document: KR